Unnamed: 0 (int64, 0 to 1.91M) | id (int64, 337 to 73.8M) | title (string, 10 to 150 chars) | question (string, 21 to 64.2k chars) | answer (string, 19 to 59.4k chars) | tags (string, 5 to 112 chars) | score (int64, -10 to 17.3k)
---|---|---|---|---|---|---
1,000 | 54,228,373 |
Why does my code take so long to write CSV file in Dask Python
|
<p>Below is my Python code:</p>
<pre><code>import dask.dataframe as dd
VALUE2015 = dd.read_csv('A/SKD - M2M by Salesman (value by uom) (NEWSALES)2015-2016.csv', usecols = VALUEFY, dtype = traintypes1)
REPORT = VALUE2015.groupby(index).agg({'JAN':'sum', 'FEB':'sum', 'MAR':'sum', 'APR':'sum', 'MAY':'sum','JUN':'sum', 'JUL':'sum', 'AUG':'sum', 'SEP':'sum', 'OCT':'sum', 'NOV':'sum', 'DEC':'sum'}).compute()
REPORT.to_csv('VALUE*.csv', header=True)
</code></pre>
<p>It takes 6 minutes to create a 100MB CSV file.</p>
|
<p>Looking through Dask documentation, it says there that, "generally speaking, Dask.dataframe groupby-aggregations are roughly same performance as Pandas groupby-aggregations." So unless you're using a Dask distributed client to manage workers, threads, etc., the benefit from using it over vanilla Pandas isn't always there.</p>
<p>Also, try to time each step in your code because if the bulk of the 6 minutes is taken up by writing the .CSV to file on disk, then again Dask will be of no help (for a single file).</p>
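<p>A quick sketch of that timing (reusing the variable names from the question; the split assumes the read/groupby are lazy in Dask, so nearly all of the work happens inside <code>compute()</code>):</p>
<pre><code>import time

t0 = time.time()
REPORT = VALUE2015.groupby(index).agg({'JAN': 'sum', 'FEB': 'sum'}).compute()
t1 = time.time()
REPORT.to_csv('VALUE.csv', header=True)
t2 = time.time()
print('compute: %.1fs, to_csv: %.1fs' % (t1 - t0, t2 - t1))
</code></pre>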
<p><a href="https://github.com/dask/dask-tutorial/blob/master/05_distributed.ipynb" rel="nofollow noreferrer">Here</a>'s a nice tutorial from Dask on adding distributed schedulers for your tasks.</p>
|
python|pandas|dask|dask-distributed|dask-ml
| 1 |
1,001 | 22,700,455 |
Program formatting changes in Wing
|
<p>See the picture at this link: <a href="https://drive.google.com/file/d/0B_CP5fn_tuEDTDZoclM5M0V0cmc/edit?usp=sharing" rel="nofollow">https://drive.google.com/file/d/0B_CP5fn_tuEDTDZoclM5M0V0cmc/edit?usp=sharing</a></p>
<p>This is what my program looks like when I write it in Sublime. But when I copy and paste the program into Wing, it looks like the picture at the following link: <a href="https://drive.google.com/file/d/0B_CP5fn_tuEDZEd0SVktVHRMcEE/edit?usp=sharing" rel="nofollow">https://drive.google.com/file/d/0B_CP5fn_tuEDZEd0SVktVHRMcEE/edit?usp=sharing</a></p>
<p>When I write and save the file in Sublime and then try to run it in Python, it gives me an error.</p>
<p>But when I paste it into Wing, fix the indentation, and save, it runs fine in Python.</p>
<p>I don't know how to indent the program properly in Sublime.</p>
|
<p>Judging by the images, it seems likely that the editor settings related to indenting are different in Sublime and Wing.</p>
<p>Check whether either editor is using tabs instead of spaces when indenting the code; if so, change that editor to use 4 spaces instead of a tab.</p>
|
python|formatting|indentation
| 0 |
1,002 | 23,562,784 |
What is more efficient: .objects.filter().exists() or get() wrapped in a try?
|
<p>I'm writing tests for a django application and I want to check if an object has been saved to the database. Which is the most efficient/correct way to do it?</p>
<pre><code>User.objects.filter(username=testusername).exists()
</code></pre>
<p>or</p>
<pre><code>try:
User.objects.get(username=testusername)
except User.DoesNotExist:
</code></pre>
|
<h2>Speed test: <code>exists()</code> vs. <code>get() + try/except</code></h2>
<p>Test functions in <strong>test.py</strong>:</p>
<pre><code>from testapp.models import User
def exists(x):
return User.objects.filter(pk=x).exists()
def get(x):
try:
User.objects.get(pk=x)
return True
except User.DoesNotExist:
return False
</code></pre>
<p>Using <strong>timeit</strong> in shell:</p>
<pre><code>In [1]: from testapp import test
In [2]: %timeit for x in range(100): test.exists(x)
10 loops, best of 3: 88.4 ms per loop
In [3]: %timeit for x in range(100): test.get(x)
10 loops, best of 3: 105 ms per loop
In [4]: %timeit for x in range(1000): test.exists(x)
1 loops, best of 3: 880 ms per loop
In [5]: %timeit for x in range(1000): test.get(x)
1 loops, best of 3: 1.02 s per loop
</code></pre>
<p><strong>Conclusion</strong>: <code>exists()</code> is <strong>over 10% faster</strong> for checking if an object has been saved in the database.</p>
|
python|django|testing|django-models
| 29 |
1,003 | 31,888,866 |
I want to deploy using the entries that I have in my database
|
<p>I used Postgres in development for my Django project and have important entries in the database that I want to keep when I deploy my app to Heroku.</p>
<p>Is there a simple way to do this?</p>
|
<p>Sure. You just need to export your local database and import it on the Heroku Postgres database. Heroku has a <a href="https://devcenter.heroku.com/articles/heroku-postgres-import-export#import" rel="nofollow">guide</a> to do just that.</p>
<ol>
<li>Create a dump from your local database. <code>PGPASSWORD=mypassword pg_dump -Fc --no-acl --no-owner -h localhost -U myuser mydb > mydb.dump</code></li>
<li>Upload <code>mydb.dump</code> somewhere Heroku can access it.</li>
<li>Import to heroku. <code>heroku pg:backups restore 'https://s3.amazonaws.com/me/items/3H0q/mydb.dump' DATABASE_URL</code></li>
</ol>
<p><a href="https://devcenter.heroku.com/articles/heroku-postgres-import-export#import" rel="nofollow">Source</a></p>
|
python|django|postgresql|heroku
| 0 |
1,004 | 32,939,447 |
name " " is not defined
|
<pre><code>import math
EMPTY = '-'
def is_between(value, min_value, max_value):
""" (number, number, number) -> bool
Precondition: min_value <= max_value
Return True if and only if value is between min_value and max_value,
or equal to one or both of them.
>>> is_between(1.0, 0.0, 2)
True
>>> is_between(0, 1, 2)
False
"""
return value >= min_value and value <= max_value
# Students are to complete the body of this function, and then put their
# solutions for the other required functions below this function.
def game_board_full(cells):
""" (str) -> bool
Return True if no EMPTY in cells and else False
>>> game_board_full ("xxox")
True
>>> game_board_full ("xx-o")
False
"""
return "-" not in cells
def get_board_size (cells):
""" (str) -> int
Return the square root of the length of the cells
>>>get_board_size ("xxox")
2
>>>get_board_size ("xoxoxoxox")
3
"""
sqrt_cell= len(cells) ** 0.5
return int(sqrt_cell)
def make_empty_board (size):
""" (int) -> str
Precondition: size>=1 and size<=9
Return a string for storing information with the size
>>>make_empty_board (2)
"----"
>>>make_empty_board (3)
"---------"
"""
return "-" *size ** 2
def get_position (row_index,col_index,size):
""" (int,int,int) -> int
Precondition:size >=col_index and size >= row_index
Return the str_index of the cell with row_index,col_index and size
>>>get_position (2,2,4)
5
>>>get_position (3,4,5)
13
"""
str_index = (row_index - 1) * size + col_index - 1
return str_index
def make_move( symbol,row_index,col_index,game_board):
"""(str,int,int,str) -> str
Return the resultant game board with symbol,row_index,col_index and game_board
>>>make_move("o",1,1,"----")
"o---"
>>>make_move("x"2,3,"---------")
"-----x---"
"""
length=len(game_board)
size=len(cells) ** 0.5
str_index = (row_index - 1) * size + col_index - 1
return "-"*(str_index-1)+symbol+"-"*(length-str_index)
def extract_line (cells,direction,cells_num):
""" (str,str,int) -> str
Return the characters of a specified row with cells, direction and cells_num
>>>extract_line ("xoxoxoxox","across",2)
"oxo"
>>>extract_line ("xoxo","up_diagonal","-")
"xo"
"""
num=cells_num
s=cells
size= get_board_size (cells)
if direction=="across":
return s[(num-1)* size : num*size]
elif direction=="down":
return s[num-1:size **2:size]
elif direction=="up_diagonal":
return s[(size-1)*size:size-2:1-size]
elif direction=="down_diagonal":
return s[0:size*2:size+1]
</code></pre>
<blockquote>
<p>NameError: name 'cells' is not defined </p>
</blockquote>
<p>I don't know how to define <code>cells</code> because it is a parameter.</p>
|
<p>You have NO <code>cells</code> parameter in </p>
<pre><code>def make_move( symbol,row_index,col_index,game_board):
</code></pre>
<p>Next time, read the error message carefully; it tells you exactly which line of code the problem is in.</p>
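<p>A minimal sketch of a fix (my own rewrite, not part of the original answer; it derives the size from <code>game_board</code>, which the function does receive, and also keeps any moves already on the board):</p>
<pre><code>def make_move(symbol, row_index, col_index, game_board):
    size = int(len(game_board) ** 0.5)
    str_index = (row_index - 1) * size + col_index - 1
    return game_board[:str_index] + symbol + game_board[str_index + 1:]
</code></pre>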
|
python|nameerror
| 1 |
1,005 | 37,786,536 |
How to define policies for Python application in Bluemix Autoscaling service?
|
<p>I noticed that the policy types depend on the target runtime. For example, for Java it is possible to define policies based on memory, throughput, response time, etc. The only possibility for Python is a memory-based policy. Is there any workaround for that?</p>
|
<p>The Bluemix Auto-Scaling service for Liberty for Java™ applications supports scaling rules for JVM heap, memory, and throughput. The Auto-Scaling service on Bluemix works with the IBM JVM.</p>
<p>For other runtimes, including the Python runtime, there are only <a href="https://console.ng.bluemix.net/docs/services/Auto-Scaling/index.html" rel="nofollow">scaling rules</a> for memory.</p>
|
python|ibm-cloud|autoscaling
| 0 |
1,006 | 37,996,299 |
Save Game Progress for Multiple Sprites
|
<p>I'm working on a game in Pygame that includes a player class and an enemy class. Each class has multiple variables within it. I'm trying to figure out how I can save the data of these sprites by using Python's built-in <code>pickle</code> module. I thought of doing something similar to this:</p>
<pre><code>data_file = open_file("save.dat","wb")
for i in enemyList:
pickle.dump(i.health)
pickle.dump(i.rect.x)
pickle.dump(i.rect.y)
pickle.dump(i.image)
</code></pre>
<p>and so on for each variable. How can I save the data and retrieve it in the same state it was in previously?</p>
|
<p><strong>Answer</strong></p>
<p>Since pickle does object serialization, you should just be able to dump your whole object. The <code>b</code> in <code>wb</code> means the file is opened in binary mode, which is what pickle needs. You don't have to know how an object is represented in binary; you can just dump it like so:</p>
<pre><code>with open("save.dat", "wb") as data_file:
    for i in enemyList:
        pickle.dump(i, data_file)
</code></pre>
<p>Then when you load it back in you will have the whole object.</p>
<p>To open it:</p>
<pre><code>with open('save.dat', 'rb') as fp:
i = pickle.load(fp)
</code></pre>
<p>I haven't used pickle much before, but since it is all binary you should just be able to dump your whole <code>enemyList</code> if it is an object:</p>
<pre><code>with open('save.dat', 'wb') as data_file:
    pickle.dump(enemyList, data_file)

with open('save.dat', 'rb') as fp:
    enemyList = pickle.load(fp)
</code></pre>
<p><strong>Excluding/Including Additional State</strong></p>
<p>Pickle uses the <code>__getstate__</code> and <code>__setstate__</code> methods to alter an object's state before writing and after reading serialized data. If you wish to omit data that cannot be serialized, you must override these methods. Here is the documentation to help you in doing so:</p>
<p><a href="https://docs.python.org/3/library/pickle.html#example" rel="nofollow">Pickle State</a></p>
<p><strong>Consideration</strong></p>
<p>Serialization (and therefore Python's pickle) is seen as an alternative to creating your own file format, which I often find easier depending on the data types. If you are not in control of your object hierarchy, you may not want to create your own inherited objects just to gain control of all the data; sometimes it is simply easier to write your own file format.</p>
|
python|save|pygame|pickle
| 1 |
1,007 | 51,420,774 |
how to omit tns from response and change tag name in spyne?
|
<p>How do I omit tns from my response and also change the tag name?
My response is like this:</p>
<pre><code><soap11env:Envelope xmlns:soap11env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:tns="spyne.example">
<soap11env:Body>
<tns:FnSchedule_CityResponse>
<tns:FnSchedule_CityResult>
<tns:ErrorString></tns:ErrorString>
<tns:CityName>HYDERABAD</tns:CityName>
<tns:CityId>1</tns:CityId>
<tns:ErrId>0</tns:ErrId>
</tns:FnSchedule_CityResult>
</tns:FnSchedule_CityResponse>
</soap11env:Body>
</soap11env:Envelope>
</code></pre>
<p>I want to remove tns and change "soap11env" to "soap".
Having these values is causing validation issues.</p>
<p>I referred to this question on Stack Overflow and implemented it, but it was not helpful:
<a href="https://stackoverflow.com/questions/28832969/remove-the-namespace-from-spyne-response-variables?lq=1">Remove the namespace from Spyne response variables</a></p>
|
<p>In order to change soap11env to soap, simply override the namespace mapping using</p>
<pre><code>application.interface.nsmap['soap'] = application.interface.nsmap['soap11env']
</code></pre>
<p>The 'tns' (target namespace) prefix should normally not be changed, but a few cases may arise where one needs to change a name in order to test something completely.</p>
<p>To change the namespaces,</p>
<pre><code>def on_method_return_string(ctx):
ctx.out_string[0] = ctx.out_string[0].replace(b'ns4', b'diffgr')
ctx.out_string[0] = ctx.out_string[0].replace(b'ns5', b'msdata')
</code></pre>
<p>Then register the listener:</p>
<pre><code>YourModelClassName.event_manager.add_listener('method_return_string',
                                              on_method_return_string)
</code></pre>
<p>What I did here was replace the namespace ns4 with diffgr and ns5 with msdata. ns4 and ns5 are sub-namespaces that I had in the responses of some third-party application. I found this solution in the mailing list maintained for spyne.</p>
|
python|python-2.7|spyne
| 0 |
1,008 | 62,783,372 |
ERROR: Command errored out with exit status 1 while installing requirements.txt
|
<p>I have been trying to install packages from a requirements.txt file but I'm getting an error. I made a virtual environment to install the packages but got this huge error output. My machine is running Python 3.8.</p>
<p>Below is the error I got in my terminal while trying to install from requirements.txt in my virtual environment:</p>
<pre><code> Getting requirements to build wheel ... done
Preparing wheel metadata ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\user\Documents\demoProjectwebapp-v3\venv\Scripts\python.exe' 'C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\user\AppData\Local\Temp\tmpzo50rwzk'
cwd: C:\Users\user\AppData\Local\Temp\pip-install-dpmca2i7\scipy
Complete output (170 lines):
lapack_opt_info:
lapack_mkl_info:
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries mkl_rt not found in ['C:\\Users\\user\\Documents\\demoProjectwebapp-v3\\venv\\lib', 'C:\\']
NOT AVAILABLE
openblas_lapack_info:
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries openblas not found in ['C:\\Users\\user\\Documents\\demoProjectwebapp-v3\\venv\\lib', 'C:\\']
get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']'
customize GnuFCompiler
Could not locate executable g77
Could not locate executable f77
customize IntelVisualFCompiler
Could not locate executable ifort
Could not locate executable ifl
customize AbsoftFCompiler
Could not locate executable f90
customize CompaqVisualFCompiler
Could not locate executable DF
customize IntelItaniumVisualFCompiler
Could not locate executable efl
customize Gnu95FCompiler
Could not locate executable gfortran
Could not locate executable f95
customize G95FCompiler
Could not locate executable g95
customize IntelEM64VisualFCompiler
customize IntelEM64TFCompiler
Could not locate executable efort
Could not locate executable efc
customize PGroupFlangCompiler
Could not locate executable flang
don't know how to compile Fortran code on platform 'nt'
NOT AVAILABLE
openblas_clapack_info:
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries openblas,lapack not found in ['C:\\Users\\user\\Documents\\demoProjectwebapp-v3\\venv\\lib', 'C:\\']
NOT AVAILABLE
atlas_3_10_threads_info:
Setting PTATLAS=ATLAS
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries tatlas,tatlas not found in C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries lapack_atlas not found in C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries tatlas,tatlas not found in C:\
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
NOT AVAILABLE
atlas_3_10_info:
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries satlas,satlas not found in C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries lapack_atlas not found in C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries satlas,satlas not found in C:\
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_3_10_info'>
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries ptf77blas,ptcblas,atlas not found in C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries lapack_atlas not found in C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries ptf77blas,ptcblas,atlas not found in C:\
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_threads_info'>
NOT AVAILABLE
atlas_info:
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries f77blas,cblas,atlas not found in C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries lapack_atlas not found in C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries f77blas,cblas,atlas not found in C:\
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_info'>
NOT AVAILABLE
lapack_info:
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries lapack not found in ['C:\\Users\\user\\Documents\\demoProjectwebapp-v3\\venv\\lib', 'C:\\']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
setup.py:111: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
setup.py:386: UserWarning: Unrecognized setuptools command ('dist_info --egg-base C:\Users\user\AppData\Local\Temp\pip-modern-metadata-fv_m7gz4'), proceeding with generating Cython sources and expanding templates
warnings.warn("Unrecognized setuptools command ('{}'), proceeding with "
Running from scipy source directory.
C:\Users\user\AppData\Local\Temp\pip-build-env-glp1ljn7\overlay\Lib\site-packages\numpy\distutils\system_info.py:624: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
self.calc_info()
C:\Users\user\AppData\Local\Temp\pip-build-env-glp1ljn7\overlay\Lib\site-packages\numpy\distutils\system_info.py:624: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
self.calc_info()
C:\Users\user\AppData\Local\Temp\pip-build-env-glp1ljn7\overlay\Lib\site-packages\numpy\distutils\system_info.py:624: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
self.calc_info()
Traceback (most recent call last):
File "C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib\site-packages\pip\_vendor\pep517\_in_process.py", line 280, in <module>
main()
File "C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib\site-packages\pip\_vendor\pep517\_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib\site-packages\pip\_vendor\pep517\_in_process.py", line 133, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "C:\Users\user\AppData\Local\Temp\pip-build-env-glp1ljn7\overlay\Lib\site-packages\setuptools\build_meta.py", line 157, in prepare_metadata_for_build_wheel
self.run_setup()
File "C:\Users\user\AppData\Local\Temp\pip-build-env-glp1ljn7\overlay\Lib\site-packages\setuptools\build_meta.py", line 248, in run_setup
super(_BuildMetaLegacyBackend,
File "C:\Users\user\AppData\Local\Temp\pip-build-env-glp1ljn7\overlay\Lib\site-packages\setuptools\build_meta.py", line 142, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 505, in <module>
File "setup.py", line 501, in setup_package
setup(**metadata)
File "C:\Users\user\AppData\Local\Temp\pip-build-env-glp1ljn7\overlay\Lib\site-packages\numpy\distutils\core.py", line 135, in setup
config = configuration()
raise NotFoundError(msg)
numpy.distutils.system_info.NotFoundError: No lapack/blas resources found.
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\user\Documents\demoProjectwebapp-v3\venv\Scripts\python.exe' 'C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\user\AppData\Local\Temp\tmpzo50rwzk' Check the logs for full command output.
</code></pre>
|
<p>Try to isolate which line of requirements.txt gives you the error. Judging by your log, the package failing to build is scipy (it cannot find LAPACK/BLAS), so try commenting that line out and see how the installation goes without it.</p>
<p>To replicate your error message, try <code>pip install</code> on that one package by itself; I think it would give you the error message you experienced.</p>
<p>Two things after that:</p>
<ul>
<li>go to the package's documentation, find out about manual installation</li>
<li>try installing with conda, another package manager</li>
</ul>
<p>Also, you may want to install the Anaconda package bundle to see if most libraries are already there.</p>
<p>As a side note - a bad experience installing some heavy dependency happens often, so treat it as an exercise; it is not something you did wrong, it is just the nature of the dependencies you are trying to use. Good luck!</p>
|
python|python-3.x|machine-learning|pip
| 0 |
1,009 | 62,700,468 |
How to check whether a graph is an undirected graph?
|
<p>Currently, I am creating a function to check whether a graph is undirected.
My graphs are stored in the following way. This is an undirected graph of 3 nodes: 1, 2, 3.</p>
<pre><code>graph = {1: {2:{...}, 3:{...}}, 2: {1:{...}, 3:{...}}, 3: {1:{...}, 2:{...}}}
</code></pre>
<p>The {...} represents alternating layers of dictionaries for the connections of each node. It recurs infinitely, since the dictionaries are nested inside each other.</p>
<p>More details about graph:</p>
<ol>
<li>The keys refer to the nodes, and each key's value is a dict of the nodes that are connected to that key.</li>
<li>Example: two nodes (1, 2) with an undirected edge: <code>graph = {1: {2: {1: {...}}}, 2: {1: {2: {...}}}}</code></li>
<li>Example2: two nodes (1, 2) with a directed edge from 1 to 2: <code>graph = {1: {2: {}}, 2: {}}</code></li>
</ol>
<p>My current way of figuring out whether a graph is undirected is to check whether the number of edges in the graph equals n*(n-1)/2 (where n is the number of nodes), but this cannot differentiate between 15 directed edges and 15 undirected edges. What other way can I use to confirm that my graph is undirected?</p>
|
<p>First off, I think you're abusing terminology by calling a graph with edges in both directions "undirected". In a real undirected graph, there is no notion of direction to an edge, which often means you don't need redundant direction information in the graph's representation in a computer program. What you have is a directed graph, and you want to see if it <em>could</em> be represented by an undirected graph, even though you're not doing so yet.</p>
<p>I'm not sure there's any easier way to do this than by checking every edge in the graph to see if the reversed edge also exists. This is pretty easy with your graph structure, just loop over the verticies and check if there is a returning edge for every outgoing edge:</p>
<pre><code>def undirected_compatible(graph):
for src, edges in graph.items(): # edges is dict of outgoing edges from src
for dst, dst_edges in edges.items(): # dst_edges is dict of outgoing edges from dst
if src not in dst_edges:
return False
return True
</code></pre>
<p>I'd note that a more typical way of describing a graph like yours would be to omit the nested dictionaries and just give a list of destinations for the edges. A fully connected 3-node graph would be:</p>
<pre><code>{1: [2, 3], 2: [1, 3], 3: [1, 2]}
</code></pre>
<p>You can get the same information from this graph as your current one, you'd just need an extra indirection to look up the destination node in the top level graph dict, rather than having it be the value of the corresponding key in the edge container already. A version of my function above for this more conventional structure would be:</p>
<pre><code>def undirected_compatible(graph):
for src, edges in graph.items():
for dst in edges:
if src not in graph[dst]:
return False
return True
</code></pre>
<p>The <code>not in</code> test may make this slower for large graphs, since searching a list for an item is less asymptotically efficient than checking if a key is in a dictionary. If you needed the higher performance, you could use sets instead of lists, to speed up the membership tests.</p>
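<p>That conversion is a one-liner; a sketch building sets from the list-based structure above:</p>
<pre><code>graph = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
graph_sets = {node: set(dsts) for node, dsts in graph.items()}  # O(1) membership tests
</code></pre>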
|
python|python-3.x|dictionary|graph
| 1 |
1,010 | 53,542,497 |
How to Return a List of Values From Within a Dictionary?
|
<p>I need to return a list of values for a given id number using two previously created dictionaries, where the values I need are stored within the dictionaries.</p>
<p>The two dictionaries I've created are as follows:</p>
<pre><code>{100: ('Mulan', [300, 500], [200, 400]),
200: ('Ariel', [100, 500], [500]),
300: ('Jasmine', [500], [500, 100]),
400: ('Elsa', [100, 500], []),
500: ('Belle', [200, 300], [100, 200, 300, 400])}
{100000: (400, 'Does not want to build a %SnowMan %StopAsking', ['SnowMan', 'StopAsking'], [100, 200, 300], [400, 500]),
100001: (200, 'Make the ocean great again.', [''], [], [400]),
100002: (500, "Help I'm being held captive by a beast! %OhNoes", ['OhNoes'], [400], [100, 200, 300]),
100003: (500, "Actually nm. This isn't so bad lolz :P %StockholmeSyndrome", ['StockholmeSyndrome'], [400, 100], []),
100004: (300, 'If some random dude offers to %ShowYouTheWorld do yourself a favour and %JustSayNo.', ['ShowYouTheWorld', 'JustSayNo'], [500, 200], [400]),
100005: (400, 'LOLZ BELLE. %StockholmeSyndrome %SnowMan', ['StockholmeSyndrome', 'SnowMan'], [], [200, 300, 100, 500])}
</code></pre>
<p>The first dictionary is of the form {id: (name, followers, following}.</p>
<p>The second dictionary is of the form {key: (id, chirp, tags, likes, dislikes}.</p>
<p>For the given id numbers <code>100, 200, 300, 400, 500</code>, I need to return the chirp with the most likes for each user they follow. </p>
<p>An example of the output, for say id number 500, would be:</p>
<pre><code>['Make the ocean great again.',
'If some random dude offers to %ShowYouTheWorld do yourself a favour and %JustSayNo.',
'Does not want to build a %SnowMan %StopAsking']
</code></pre>
<p>I understand the process that needs to happen here, but I need some help with how to get the function to find the necessary value in one dictionary, and then search for the required values in the second dictionary.</p>
<p>Thanks so much for any guidance you can offer! </p>
|
<p>You'll need to use nested loops to go through both dictionaries starting with the first:</p>
<pre><code>user_input = 500
for key, value in dictionary1.items():
    if user_input == key:
        for key2, value2 in dictionary2.items():
            for ids in value[2]:        # value[2] is the "following" list
                if ids == value2[0]:
                    print(value2[1])
</code></pre>
<p>key = id</p>
<p>value[0] = name</p>
<p>value[1] = followers</p>
<p>value[2] = following</p>
<p>key2 = key</p>
<p>value2[0] = id</p>
<p>value2[1] = chirp</p>
<p>value2[2] = tags</p>
<p>value2[3] = likes</p>
<p>value2[4] = dislikes</p>
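<p>A sketch of the full task (the function name and structure are my own; it also picks the most-liked chirp per followed user, using the dict layouts described above):</p>
<pre><code>def best_chirps(user_id, users, chirps):
    # users:  {id: (name, followers, following)}
    # chirps: {key: (id, chirp, tags, likes, dislikes)}
    result = []
    for followed in users[user_id][2]:                   # ids this user follows
        theirs = [c for c in chirps.values() if c[0] == followed]
        if theirs:
            best = max(theirs, key=lambda c: len(c[3]))  # most likes
            result.append(best[1])
    return result
</code></pre>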
|
python|python-3.x|dictionary
| 0 |
1,011 | 54,937,021 |
Output of python code is one character per line
|
<p>I'm new to Python and having some trouble with an API scraping I'm attempting. What I want to do is pull a list of book titles using this code:</p>
<pre><code>r = requests.get('https://api.dp.la/v2/items?q=magic+AND+wizard&api_key=09a0efa145eaa3c80f6acf7c3b14b588')
data = json.loads(r.text)
for doc in data["docs"]:
for title in doc["sourceResource"]["title"]:
print (title)
</code></pre>
<p>This works to pull the titles, but most (not all) titles are output one character per line. I've tried adding .splitlines() but this doesn't fix the problem. Any advice would be appreciated!</p>
|
<p>The problem is that you have two types of title in the response, some are plain strings <code>"Germain the wizard"</code> and some others are arrays of string <code>['Joe Strong, the boy wizard : or, The mysteries of magic exposed /']</code>. It seems like in this particular case, all lists have length one, but I guess that will not always be the case. To illustrate what you might need to do I added a <code>join</code> here instead of just taking <code>title[0]</code>.</p>
<pre class="lang-py prettyprint-override"><code>import requests
import json
r = requests.get('https://api.dp.la/v2/items?q=magic+AND+wizard&api_key=09a0efa145eaa3c80f6acf7c3b14b588')
data = json.loads(r.text)
for doc in data["docs"]:
title = doc["sourceResource"]["title"]
if isinstance(title, list):
print(" ".join(title))
else:
print(title)
</code></pre>
<p>In my opinion that should never happen; an API should return predictable types, otherwise it looks messy on the users' side.</p>
|
python|python-3.x|for-loop
| 0 |
1,012 | 33,065,510 |
Convert AngularJS website to Flask
|
<p>I created an Angular website with ui-router:</p>
<pre><code>angular app structure
|--index.html
|--js
|--app.js
|--angular.js
|-- ...
|--stylesheets
|--main.css
|-- ...
|--template
|--navbar.html
|--about.html
|-- ...
</code></pre>
<p>Each js and css is linked like this:</p>
<pre><code><script src="js/main.js"></script>
</code></pre>
<p>I want to serve this with Flask. I threw everything in the "templates" folder and wrote a simple Flask app:</p>
<p><code>server.py</code>:</p>
<pre><code>from flask import Flask, make_response
app = Flask(__name__)
@app.route('/')
def view():
return make_response(open('templates/index.html').read())
app.debug = True
if __name__ == '__main__':
app.run()
</code></pre>
<pre><code>flask app
|--server.py
|--templates
|--index.html
|--js
|--app.js
|--angular.js
|-- ...
|--stylesheets
|--main.css
|-- ...
|--template
|--navbar.html
|--about.html
|-- ...
</code></pre>
<p>None of my files are loading when I go to the root URL. How do I serve the Angular files from the Flask app?</p>
|
<p>You need to render your template. The best way to do that is </p>
<pre><code>from flask import render_template

@app.route('/')
def view():
return render_template('index.html')
</code></pre>
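<p>Rendering alone won't make the js/css requests resolve, though. Flask serves files from a <code>static</code> folder by default, so one sketch (assuming you move the <code>js</code>, <code>stylesheets</code> and <code>template</code> folders out of <code>templates</code> and into a <code>static</code> folder) is:</p>
<pre><code>from flask import Flask

# assets under "static" are served at /static/... by default
app = Flask(__name__, static_folder='static', template_folder='templates')
</code></pre>
<p>The asset links then need to point at the static route, e.g. <code><script src="/static/js/main.js"></script></code>, or <code>{{ url_for('static', filename='js/main.js') }}</code> inside a rendered template.</p>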
|
python|angularjs|flask
| 2 |
1,013 | 33,407,050 |
MySQL query throwing 1064 error
|
<p>I have a huge amount of data stored in a MySQL db. One of the columns in the database is a long string. One of the strings is "iEdge detected the 'warning' condition 'iedge it", which is stored in string_type. I have to query the database and find how many such strings there are. I am querying from my Python program. When I do it using something like</p>
<pre><code> cur.execute("select count(*) from table1 as tmp where tmp.err_string='"+row[r]+"'")
</code></pre>
<p>row[r] contains "iEdge detected the 'warning' condition 'iedge it"</p>
<p>I am getting error 1064 (You have an error in your SQL syntax...). I think it is happening because of some quotes in the string. May I know how to fix this?</p>
|
<p>Can you try this:</p>
<pre><code>sql = "select count(*) from table1 as tmp where tmp.err_string=%s"
cursor.execute(sql, [row[r]])
</code></pre>
<p>Let the MySQL Python library worry about escaping special characters and how to quote your string.</p>
<p>See <a href="https://stackoverflow.com/questions/15798969/python-mysql-escape-special-characters">this SO post</a> for more information.</p>
|
python|mysql|sql|flask
| 0 |
1,014 | 13,121,212 |
Python - regular expressions - find every word except in tags
|
<p>How to find all words except the ones in tags using RE module?</p>
<p>I know how to find something, but how do I do it the opposite way? I can write a pattern to search for, but actually I want to match every word except everything inside tags, and the tags themselves.</p>
<p>So far I managed this:</p>
<pre><code>f = open (filename,'r')
data = re.findall(r"<.+?>", f.read())
</code></pre>
<p>Well, it prints everything inside <code><></code> tags, but how do I make it find every word except what's inside those tags?
I tried <code>^</code> at the start of a pattern inside <code>[]</code>, but then symbols such as <code>.</code> are treated literally, without their special meaning.
I also managed to solve this by splitting the string on <code>'''\= <>"'''</code>, then checking the whole string for words that are inside <code><></code> tags (like align, right, td etc.), and appending the words that are not inside <> tags to another list. But that is a bit of an ugly solution.</p>
<p>Is there some simple way to search for every word except anything that's inside <code><></code> and the tags themselves?
So, say, for the string <code>'hello 123 <b>Bold</b> <p>end</p>'</code>,
<code>re.findall</code> would return:</p>
<pre><code>['hello', '123', 'Bold', 'end']
</code></pre>
|
<p>If you want to <a href="https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454">avoid</a> using a regular expression, <a href="http://www.crummy.com/software/BeautifulSoup/" rel="nofollow noreferrer">BeautifulSoup</a> makes it very easy to get just the text from an HTML document:</p>
<pre><code>from BeautifulSoup import BeautifulSoup
soup = BeautifulSoup(html_string)
text = "".join(soup.findAll(text=True))
</code></pre>
<p>From there, you can get the list of words with <code>split</code>:</p>
<pre><code>words = text.split()
</code></pre>
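<p>If you do want to stay with <code>re</code> alone, one sketch is to strip the tags first and then split, rather than negating them inside a single pattern:</p>
<pre><code>import re

s = 'hello 123 <b>Bold</b> <p>end</p>'
words = re.sub(r'<[^>]*>', ' ', s).split()  # drop tags, then split on whitespace
print(words)  # ['hello', '123', 'Bold', 'end']
</code></pre>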
|
python|regex
| 2 |
1,015 | 21,701,338 |
Add build information in Jenkins using REST
|
<p>Does anyone know how to add build information to an existing Jenkins build? </p>
<p>What I'm trying to do is replace the #1 build number with the actual full version number that the build represents. I can do this manually by going to http://MyJenkinsServer/job/[jobname]/[buildnumber]/configure</p>
<p>I have tried to reverse engineer the headers using chrome by seeing what it sends to the server and I found the following:</p>
<pre><code>Request URL:http://<server>/job/test_job/1/configSubmit
Request Method:POST
Status Code:200 OK
Request Headers view source
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
Content-Length:192
Content-Type:application/x-www-form-urlencoded
Cookie:hudson_auto_refresh=false; JSESSIONID=qbn3q22phkbc12f1ikk0ssijb; screenResolution=1920x1200
Referer:http://<server>/job/test_job/1/configure
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4
Form Data view URL encoded
displayName:#1
description:test4
core:apply:true
json:{"displayName": "#1", "description": "test4", "": "test4", "core:apply": "true"}**
Response Headers view source
Content-Length:155
Content-Type:text/html;charset=UTF-8
Server:Jetty(8.y.z-SNAPSHOT)
</code></pre>
<p>This at least gives me the form parameters that I need to POST. So from this I came up with the following python3 code:</p>
<pre><code>import requests
params={"displayName":"Hello World",
"description":"This is my description",
"":"This is my description",
"core:apply":"true"}
a = requests.post("http://myjenkinsserver/job/test_jira_job_update/1/configSubmit", data=params, auth=( username, pwd), headers={"content-type":"text/html;charset=UTF-8"} )
if a.raw.status != 200:
print("***ERROR***")
print(a.raw.status)
print(a.raw.reason)
</code></pre>
<p>but sadly this failed with the following error:</p>
<pre><code>***ERROR***
400
Nothing is submitted
</code></pre>
<p>Any ideas what I am doing wrong? Is my approach to this problem completely wrong?</p>
|
<p>It's a bit confusing to reverse engineer this. You just need to submit the <em>json</em> parameter in your POST:</p>
<pre><code>p = {'json': '{"displayName":"New Name", "description":"New Description"}'}
requests.post('http://jenkins:8080/job/jobname/5/configSubmit', data=p, auth=(user, token))
</code></pre>
<p>In my tests, the above works to set the build name and description with Jenkins 1.517. </p>
<p>(Also, I don't think you should set the content-type header, since you should be submitting form-encoded data.)</p>
|
python|post|jenkins
| 8 |
1,016 | 41,125,598 |
Suppress warnings for python-xarray
|
<p>I'm running the following code </p>
<pre><code>positive_values = values.where(values > 0)
</code></pre>
<p>In this example <code>values</code> may contain <code>nan</code> elements. I believe that for this reason, I'm getting the following runtime warning: </p>
<pre><code>RuntimeWarning: invalid value encountered in greater_equal if not reflexive
</code></pre>
<p>Does <code>xarray</code> have a way of suppressing these warnings? </p>
|
<p>The <a href="https://docs.python.org/3.5/library/warnings.html" rel="nofollow noreferrer"><code>warnings</code></a> module provides the functionality you are looking for.</p>
<p>To suppress all warnings do (see <a href="https://stackoverflow.com/a/41126444/1322401">John Coleman's answer</a> for why this is not good practice):</p>
<pre><code>import warnings
warnings.simplefilter("ignore")
# warnings.simplefilter("ignore", category=RuntimeWarning) # for RuntimeWarning only
</code></pre>
<p>To make the suppression temporary do it inside the <code>warnings.catch_warnings()</code> context manager:</p>
<pre><code>import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
positive_values = values.where(values > 0)
</code></pre>
<p>The context manager saves the original warning settings prior to entering the context and then sets them back when exiting the context.</p>
|
python|python-3.x|suppress-warnings|python-xarray
| 6 |
1,017 | 38,083,670 |
How to customize the PyBusyInfo window (on Windows) so it appears at the top corner of the screen, and other formatting options?
|
<p>I am writing a Python script to get the climate conditions in a particular area every 30 minutes and give a popup notification.</p>
<p>This code shows the popup at the center of the screen, which is annoying. I wish to have the popup behave like notify-send in Linux [which appears at the right corner], with the message aligned in the center of the PyBusyInfo window. And how do I align the popup to the right?</p>
<p>Any change to the PyBusyInfo code would be helpful.</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import datetime,time
import wx
import wx.lib.agw.pybusyinfo as PBI
now = datetime.datetime.now()
hour=now.hour
# gets current time
def main():
chrome_path = 'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe %s'
g_link = 'http://www.accuweather.com/en/in/tambaram/190794/hourly-weather-forecast/190794?hour='+str(hour)
g_res= requests.get(g_link)
g_links= BeautifulSoup(g_res.text,"lxml")
if hour > 18 :
temp = g_links.find('td', {'class' :'first-col bg-s'}).text
climate = g_links.find('td', {'class' :'night bg-s icon first-col'}).text
else :
temp = g_links.find('td', {'class' :'first-col bg-c'}).text
climate = g_links.find('td', {'class' :'day bg-c icon first-col'}).text
for loc in g_links.find_all('h1'):
location=loc.text
info = location +' ' + str(now.hour)+':'+str(now.minute)
#print 'Temp : '+temp
#print climate
def showmsg():
app = wx.App(redirect=False)
title = 'Weather'
msg= info+'\n'+temp + '\n'+ climate
d = PBI.PyBusyInfo(msg,title=title)
return d
if __name__ == '__main__':
d = showmsg()
time.sleep(6)
while True:
main()
time.sleep(1800)
</code></pre>
|
<pre><code>screen_size = wx.DisplaySize()
d_size = d._infoFrame.GetSize()
pos_x = screen_size[0] - d_size[0]  # right edge - popup width (aligned to right side)
pos_y = screen_size[1] - d_size[1]  # bottom edge - popup height (aligned to bottom)
d.SetPosition((pos_x, pos_y))
d.Update()  # force a redraw (otherwise your "work" will block the redraw)
</code></pre>
<p>To align the text you will need to subclass PyBusyFrame:</p>
<pre><code>class MyPyBusyFrame(PBI.PyBusyFrame):
def OnPaint(self, event):
"""
Handles the ``wx.EVT_PAINT`` event for L{PyInfoFrame}.
:param `event`: a `wx.PaintEvent` to be processed.
"""
panel = event.GetEventObject()
dc = wx.BufferedPaintDC(panel)
dc.Clear()
# Fill the background with a gradient shading
startColour = wx.SystemSettings_GetColour(wx.SYS_COLOUR_ACTIVECAPTION)
endColour = wx.WHITE
rect = panel.GetRect()
dc.GradientFillLinear(rect, startColour, endColour, wx.SOUTH)
# Draw the label
font = wx.SystemSettings_GetFont(wx.SYS_DEFAULT_GUI_FONT)
dc.SetFont(font)
# Draw the message
rect2 = wx.Rect(*rect)
rect2.height += 20
#############################################
# CHANGE ALIGNMENT HERE
#############################################
dc.DrawLabel(self._message, rect2, alignment=wx.ALIGN_CENTER|wx.ALIGN_CENTER)
# Draw the top title
font.SetWeight(wx.BOLD)
dc.SetFont(font)
dc.SetPen(wx.Pen(wx.SystemSettings_GetColour(wx.SYS_COLOUR_CAPTIONTEXT)))
dc.SetTextForeground(wx.SystemSettings_GetColour(wx.SYS_COLOUR_CAPTIONTEXT))
if self._icon.IsOk():
iconWidth, iconHeight = self._icon.GetWidth(), self._icon.GetHeight()
dummy, textHeight = dc.GetTextExtent(self._title)
textXPos, textYPos = iconWidth + 10, (iconHeight-textHeight)/2
dc.DrawBitmap(self._icon, 5, 5, True)
else:
textXPos, textYPos = 5, 0
dc.DrawText(self._title, textXPos, textYPos+5)
dc.DrawLine(5, 25, rect.width-5, 25)
size = self.GetSize()
dc.SetPen(wx.Pen(startColour, 1))
dc.SetBrush(wx.TRANSPARENT_BRUSH)
dc.DrawRoundedRectangle(0, 0, size.x, size.y-1, 12)
</code></pre>
<p>Then you would have to create your own BusyInfo function that instantiates your frame and returns it (see <a href="https://github.com/wxWidgets/wxPython/blob/master/wx/lib/agw/pybusyinfo.py#L251" rel="nofollow">https://github.com/wxWidgets/wxPython/blob/master/wx/lib/agw/pybusyinfo.py#L251</a>).</p>
|
python|python-2.7|wxpython|notify
| 0 |
1,018 | 38,328,588 |
Scrapy Logging Level Change
|
<p>I'm trying to start a Scrapy spider from my script as shown <a href="http://doc.scrapy.org/en/latest/topics/practices.html#run-scrapy-from-a-script" rel="noreferrer">here</a></p>
<pre><code>logging.basicConfig(
filename='log.txt',
format='%(levelname)s: %(message)s',
level=logging.CRITICAL
)
configure_logging(install_root_handler=False)
process = CrawlerProcess(get_project_settings())
process.crawl('1740')
process.start() # the script will block here until the crawling is finished
</code></pre>
<p>I want to configure the <strong>logging level of my spider</strong>, but even though I do not install the root logger handler and I configure logging with the <em>logging.basicConfig</em> method, it does not obey the determined level. </p>
<pre><code>INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
INFO: Enabled item pipelines:
['collector.pipelines.CollectorPipeline']
INFO: Spider opened
INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
</code></pre>
<p>It follows the format and file name determined in basicConfig, but it does not use the logging level. I do not set the logging level anywhere else.</p>
<p><strong>NOTE:</strong> There is no other place where I import logging or change the logging level.</p>
|
<p>For scrapy itself you should define logging settings in <code>settings.py</code> <a href="http://doc.scrapy.org/en/latest/topics/logging.html?highlight=logging#logging-settings" rel="noreferrer">as described in the docs</a></p>
<p>so in <code>settings.py</code> you can set:</p>
<pre><code>LOG_LEVEL = 'ERROR' # to only display errors
LOG_FORMAT = '%(levelname)s: %(message)s'
LOG_FILE = 'log.txt'
</code></pre>
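<p>If you only need this for one spider rather than the whole project, Scrapy also honours a per-spider <code>custom_settings</code> dict (a sketch; the spider name is taken from the question):</p>
<pre><code>import scrapy

class MySpider(scrapy.Spider):
    name = '1740'
    custom_settings = {
        'LOG_LEVEL': 'ERROR',
        'LOG_FILE': 'log.txt',
    }
</code></pre>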
|
python-3.x|logging|scrapy
| 20 |
1,019 | 30,926,043 |
Trouble outputting file size to a label from a listbox in Python 3
|
<p>I'm using <code>os.path.getsize()</code> to output the size of a file to a label. The file path is stored in a listbox. The function works, but it outputs the file size in bits, so I wrote the following to convert to more appropriate units, but it now displays only in TB. It's executing all of the <code>if</code> statements, regardless of whether the condition is true.</p>
<pre><code>activeFile = FilesList.get(ACTIVE)
fileSize = os.path.getsize(activeFile)
fileSizeStr = str(fileSize) + ' Bits'
if fileSize > 8:
fileSize = fileSize / 8
fileSizeStr = str(fileSize) + ' Bytes'
if fileSize < 1024:
fileSize = fileSize / 1024
fileSizeStr = str(fileSize) + ' KB'
if fileSize < 1024:
fileSize = fileSize / 1024
fileSizeStr = str(fileSize) + ' MB'
if fileSize < 1024:
fileSize = fileSize / 1024
fileSizeStr = str (fileSize) + ' GB'
if fileSize < 1024:
fileSize = fileSize / 1024
fileSizeStr = str(fileSize) + ' TB'
</code></pre>
|
<p>There are a couple of problems in your code:</p>
<ul>
<li>You always re-assign <code>fileSizeStr</code>. You need to concatenate new values. </li>
<li>You need to check if <code>fileSize</code> greater than or equal to 1024, not smaller. </li>
<li>The new <code>fileSize</code> should be the remainder of the division, not its result.</li>
</ul>
<p>Also, checking from the largest unit down would be better IMHO. </p>
<pre><code># constants
TB = 2**43
GB = 2**33
MB = 2**23
KB = 2**13
BYTES = 2**3

# some test value here
fileSize = 8

# empty string to be filled and shown later
fileSizeStr = ""

# calculations
if fileSize >= TB:
    fileTB = fileSize // TB
    fileSize = fileSize % TB
    fileSizeStr += str(fileTB) + 'TB '
if fileSize >= GB:
    fileGB = fileSize // GB
    fileSize = fileSize % GB
    fileSizeStr += str(fileGB) + 'GB '
if fileSize >= MB:
    fileMB = fileSize // MB
    fileSize = fileSize % MB
    fileSizeStr += str(fileMB) + 'MB '
if fileSize >= KB:
    fileKB = fileSize // KB
    fileSize = fileSize % KB
    fileSizeStr += str(fileKB) + 'KB '
if fileSize >= BYTES:
    fileB = fileSize // BYTES
    fileSize = fileSize % BYTES
    fileSizeStr += str(fileB) + 'Byte(s) '
fileSizeStr += str(fileSize) + 'Bit(s)'
print(fileSizeStr)
</code></pre>
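<p>To avoid repeating the same block five times, the same constants can drive a single loop (a sketch built on the values above):</p>
<pre><code>UNITS = [('TB', 2**43), ('GB', 2**33), ('MB', 2**23), ('KB', 2**13), ('Byte(s)', 2**3)]

def format_bits(file_size):
    parts = []
    for name, size in UNITS:
        if file_size >= size:
            parts.append(str(file_size // size) + name)
            file_size %= size
    parts.append(str(file_size) + 'Bit(s)')
    return ' '.join(parts)

print(format_bits(8))  # '1Byte(s) 0Bit(s)'
</code></pre>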
|
python-3.x|operating-system
| 1 |
1,020 | 30,949,405 |
Is there danger in installing 2 versions of Anaconda for Python on one machine?
|
<p>Some background: I have an intel Mac osx (running Yosemite) and use PyCharm community edition as my main IDE. I usually code in Python 3.4 however, I'm taking some MIT OCW courses which all use Python 2. To make it easier on myself when using MIT's skeleton files I have downloaded Python 2.7 and switch the PyCharm interpreter depending on my project.</p>
<p>Here's my question:</p>
<p>I'm wondering if I would run into any trouble downloading the 2.7 and 3.4 versions of Anaconda. </p>
<p>If this is ok, would I need to do anything special with my import commands depending on which version of Python I'm coding in?</p>
<p>Thanks! Happy to add clarity / more info if this isn't enough to answer my questions.</p>
|
<p>There's no danger, but it's also not the recommended way of achieving this. Rather, you should use <code>conda</code>, the package manager that comes with Anaconda, to create an environment for the other version of Python. For instance, if you started with Anaconda3,</p>
<pre><code>conda create -n python27 python=2.7 anaconda
</code></pre>
<p>would create an environment called <code>python27</code> in ~/anaconda/envs/python27 with Python 2.7 and all the packages from Anaconda. You would then point to ~/anaconda/bin/python or ~/anaconda/envs/python27/bin/python depending on what version of Python you want. In the terminal, use <code>source activate python27</code> and <code>source deactivate</code> to switch between the two. </p>
<p>See <a href="http://conda.pydata.org/docs/" rel="nofollow">http://conda.pydata.org/docs/</a> for more information on conda. </p>
|
python|macos|python-2.7|python-3.x|anaconda
| 0 |
1,021 | 51,891,791 |
Regex python : find different forms of currency with amount
|
<p>I am trying to find the amounts in euros on receipts.
I extract the values, but the currency can appear in different ways: "EUR", "E" or "€". I have not managed to specify these different forms within the regex. In addition, the "E" must not match words that merely begin with "E", such as "Eggs".</p>
<p>Currently my regex is <code>\d+[\.+\,+]\d*\s*[(e|eur|euros|€)]+\W</code>, but the brackets don't work correctly because it retrieves all the words that contain an E...</p>
<p>My goal: find the amounts if we find the form amount + EUR or amount + € or amount + E</p>
<p>See here an example : <a href="https://regex101.com/r/F3Zm9M/2" rel="nofollow noreferrer">https://regex101.com/r/F3Zm9M/2</a></p>
<p>Thank you</p>
|
<p>There are a couple of things going on here.</p>
<p>First, you're not capturing what I think you want to capture (you said the values). You should have something like <code>(\d+[.,]\d\d)</code> (the character class <code>[.,]</code> matches a literal dot or comma, whereas a bare <code>.</code> would match any character).</p>
<p>Second, your <code>[(e|eur|euros|€)]</code> is not doing at all what you want it to - look at the explanation on the side panel of the regex101 page you linked. What you want instead is just <code>e|eur|euros|€</code>. In order to group these and have the <code>|</code> work like you want, group them, and since you presumably don't want to capture these symbols, use <code>(?:e|eur|euros|€)</code>. You might want to think about adding spaces or word boundaries to make sure the 'e' or 'eur' isn't inside a word, though then you might not match something like 'EUR3000'.</p>
<p>Overall, I'm not entirely sure what you're trying to match, but I hope this helps you get started.</p>
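<p>Putting those pieces together, a sketch (the sample text and the exact boundary rules are my own assumptions about the receipts):</p>
<pre><code>import re

text = "Eggs 2,50 E  Total 12.30 EUR  Tip 1,00€"
pattern = r"(\d+[.,]\d{2})\s*(?:euros?\b|eur\b|e\b|€)"
print(re.findall(pattern, text, re.IGNORECASE))  # ['2,50', '12.30', '1,00']
</code></pre>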
|
python|regex|currency
| 0 |
1,022 | 51,580,689 |
Python program to convert words to numbers in a text file containing English words also
|
<p>I would like to use word2number from <a href="https://pypi.org/project/word2number/" rel="nofollow noreferrer">https://pypi.org/project/word2number/</a> to convert words to numbers in a text file, writing the result to another file.</p>
<p>A similar program is available to convert numbers to words (below). How do I rework this program to suit my case?</p>
<pre><code>import re
import num2words
with open('input.txt') as f_input:
text = f_input.read()
text = re.sub(r"(\d+)", lambda x: num2words.num2words(int(x.group(0))), text)
with open('output.txt', 'w') as f_output:
f_output.write(text)
</code></pre>
|
<p>There's definitely a more Pythonic way to do this, but here you go. You will need to replace <code>word2number</code> with the function call from the library you want to use, where the parameter is a string. Note that this drops newline characters and produces one big line.</p>
<pre><code>with open('input.txt') as f_input:
    lines = f_input.readlines()

nums = list()
for line in lines:
    words = line.split(' ')
    for word in words:
        nums.append(str(word2number(word)))  # str() so that join() works below

with open('output.txt', 'w') as f_output:
    f_output.write(" ".join(nums))
</code></pre>
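<p>For the word2number package itself, the call is <code>w2n.word_to_num</code>, which raises <code>ValueError</code> for ordinary English words; a small wrapper (my own helper name) leaves those untouched:</p>
<pre><code>from word2number import w2n

def convert_word(word):
    # return the digit string for number words, the word itself otherwise
    try:
        return str(w2n.word_to_num(word))
    except ValueError:
        return word
</code></pre>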
|
python|numbers|words
| 0 |
1,023 | 62,220,371 |
Find the number of clusters in a list of integers
|
<p>Let's consider the distance <code>d(a, b) = number of digits which are pairwise different in a and b</code>, e.g.:</p>
<pre><code>d(1003000000, 1000090000) = 2 # the 4th and 6th digits don't match
</code></pre>
<p>(we only work with 10-digit numbers) and this list:</p>
<pre><code>L = [2678888873,
2678878873, # distance 1 from L[0]
1000000000,
1000040000, # distance 1 from L[2]
1000300000, # distance 1 from L[2], distance 2 from L[3]
1000300009, # distance 1 from L[4], distance 2 from L[2]
]
</code></pre>
<p>I would like to find the minimal number of points P such that each integer in the list is at a distance <= 1 of a point in P.</p>
<p>Here I think this number is 3: every number in the list is at distance <= 1 of 2678888873, 1000000000, or 1000300009.</p>
<p>I imagine an O(n^2) algorithm is possible by first computing a distance matrix i.e. <code>M[i, j] = d(L[i], L[j])</code>.</p>
<p><strong>Is there a better way to do this, especially using Numpy?</strong> (maybe there's a built-in algorithm in Numpy/Scipy?)</p>
<hr>
<p>PS: If we see these 10-digit integers as strings, we're close to finding a minimal number of clusters in a list of many words with a Levenshtein distance.</p>
<p>PS2: I now realize this distance has a name for strings: <a href="https://en.wikipedia.org/wiki/Hamming_distance" rel="nofollow noreferrer">Hamming distance</a>.</p>
|
<p>Let's see what we know from the distance metric. Given a number <code>P</code> (not necessarily in <code>L</code>), if two members of <code>L</code> are within distance 1 of <code>P</code>, they each share 9 digits with <code>P</code>, but not necessarily the same ones, so they are only guaranteed to share 8 digits with each other. So any two numbers that have distance 2 are guaranteed to have two unique <code>P</code>s that are distance 1 from each of them (and distance 2 from each other as well). You can use this information to reduce the amount of brute-force effort required to optimize the selection of <code>P</code>.</p>
<p>Let's say you have a distance matrix. You can immediately discard rows (or columns) that don't have entries less than 3: they are their own cluster automatically. For the remaining entries that are equal to 2, construct a list of possible <code>P</code> values. Find the number of elements of <code>L</code> that are within 1 of each element of <code>P</code> (another distance matrix). Sort <code>P</code> by the number of neighbors, and select. You will need to update the matrix at each iteration as you remove members with maximal neighbors to avoid inefficient grouping due to overlap (members of <code>L</code> that are near multiple members of <code>P</code>).</p>
<p>You can compute a distance matrix for <code>L</code> in numpy by first converting it to a 2D array of digits:</p>
<pre><code>L = np.array([2678888873, 2678878873, 1000000000, 1000040000, 1000300000, 1000300009])
z = 10 # Number of digits
n = len(L) # Number of numbers
dec = 10**np.arange(z).reshape(-1, 1).astype(np.int64)
digits = (L // dec) % 10
</code></pre>
<p><code>digits</code> is now a 10xN array:</p>
<pre><code>array([[3, 3, 0, 0, 0, 9],
[7, 7, 0, 0, 0, 0],
[8, 8, 0, 0, 0, 0],
[8, 8, 0, 0, 0, 0],
[8, 7, 0, 4, 0, 0],
[8, 8, 0, 0, 3, 3],
[8, 8, 0, 0, 0, 0],
[7, 7, 0, 0, 0, 0],
[6, 6, 0, 0, 0, 0],
[2, 2, 1, 1, 1, 1]], dtype=int64)
</code></pre>
<p>You can compute the distance between <code>digits</code> and itself, or <code>digits</code> and any other 10xM array using <code>!=</code> and <code>sum</code> along the right axis:</p>
<pre><code>distance = (digits[:, None, :] != digits[..., None]).sum(axis=0)
</code></pre>
<p>The result:</p>
<pre><code>array([[ 0, 1, 10, 10, 10, 10],
[ 1, 0, 10, 10, 10, 10],
[10, 10, 0, 1, 1, 2],
[10, 10, 1, 0, 2, 3],
[10, 10, 1, 2, 0, 1],
[10, 10, 2, 3, 1, 0]])
</code></pre>
<p>We are only concerned with the upper (or lower) triangle of that matrix, so we can immediately mask out the other triangle:</p>
<pre><code>distance[np.tril_indices(n)] = z + 1
</code></pre>
<p>Find all candidate values of <code>P</code>: all elements of <code>L</code>, but also all pairs between elements that have distance 2:</p>
<pre><code># Find indices of pairs that differ by 2
indices = np.nonzero(distance == 2)
# Extract those numbers as 10xKx2 array
d = digits[:, np.stack(indices, axis=1)]
# Compute where the difference is nonzero (Kx2)
locs = np.diff(d, axis=2).astype(bool).squeeze()
# Find the index of the first digit to replace (K)
s = np.argmax(locs, axis=0)
</code></pre>
<p>The extra values of <code>P</code> are constructed from each half of <code>d</code>, with the differing digit (at position <code>s</code>) swapped in from the other half (<code>k</code> just enumerates the pairs):</p>
<pre><code>P0 = digits[:, indices[0]]
P1 = digits[:, indices[1]]
k = np.arange(s.size)
tmp = P0[s, k]
P0[s, k] = P1[s, k]
P1[s, k] = tmp
Pextra = np.unique(np.concatenate((P0, P1), axis=1), axis=1)
</code></pre>
<p>So now you can compute the total set of possibilities for <code>P</code>:</p>
<pre><code>P = np.concatenate((digits, Pextra), axis=1)
distance2 = (P[:, None, :] != digits[..., None]).sum(axis=0)
</code></pre>
<p>You can discard any elements of <code>Pextra</code> that match with elements of <code>digits</code> based on the distance:</p>
<pre><code>mask = np.concatenate((np.ones(n, bool), distance2[:, n:].all(axis=0)))
P = P[:, mask]
distance2 = distance2[:, mask]
</code></pre>
<p>Now you can iteratively compute distances between <code>P</code> and <code>L</code>, select the best values of <code>P</code>, and remove any values that have been covered from the distance matrix. A greedy selection from <code>P</code> will not necessarily be optimal, since an alternative combination may require fewer elements due to overlaps, but that is a matter for a simple (but somewhat expensive) graph traversal algorithm. The following snippet just shows a simple greedy selection, which will work fine for your toy example:</p>
<pre><code>distMask = distance2 <= 1
quality = distMask.sum(axis=0)
clusters = []
accounted = 0
while accounted < n:
# Get the cluster location
best = np.argmax(quality)
# Get the cluster number
clusters.append(P[:, best].dot(dec).item())
    # Remove numbers in cluster from consideration
accounted += quality[best]
quality -= distMask[distMask[:, best], :].sum(axis=0)
</code></pre>
<p>The last couple of steps can be optimized using sets and graphs, but this shows a starting point for a valid approach. This is going to be slow for large data, but probably not prohibitively so. Do some benchmarks to decide how much time you want to spend optimizing vs just running the algorithm.</p>
|
python|numpy|cluster-analysis|nearest-neighbor|levenshtein-distance
| 1 |
1,024 | 36,479,773 |
Multivariate Optimization - scipy.optimize input parsing error
|
<p><a href="https://i.stack.imgur.com/pLpi5.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pLpi5.jpg" alt="rgb image"></a></p>
<p>I have the above rgb image saved as <code>tux.jpg</code>. Now I want to get the closest approximation to this image that is an outer product of two vectors, i.e. of the form <i>A</i>·<i>B</i><sup><i>T</i></sup>.</p>
<p>Here is my code - </p>
<pre><code>#load image to memory
import Image
im = Image.open('tux.jpg','r')
#save image to numpy array
import numpy as np
mat = np.asfarray(im.convert(mode='L')) # mat is a numpy array of dimension 354*300
msizex,msizey = mat.shape
x0 = np.sum(mat,axis=1)/msizex
y0 = np.sum(mat,axis=0)/msizey
X0 = np.concatenate((x0,y0)) # X0.shape is (654,)
# define error of outer product with respect to original image
def sumsquares(X):
""" sum of squares -
calculates the difference between original and outer product
input X is a 1D numpy array with the first 354 elements
representing vector A and the rest 300 representing vector B.
The error is obtained by subtracting the trial $A\cdot B^T$
from the original and then adding the square of all entries in
the matrix.
"""
assert X.shape[0] == msizex+msizey
x = X0[:msizex]
y = X0[msizex:]
return np.sum(
(
np.outer(x,y) - mat
)**2
)
#import minimize
from scipy.optimize import minimize
res = minimize(sumsquares, X0,
method='nelder-mead',
options={'disp':True}
)
xout = res.x[:msizex]
yout = res.x[msizex:]
mout = np.outer(xout,yout)
imout= Image.fromarray(mout,mode='L')
imout.show()
</code></pre>
<p>The result is <a href="https://i.stack.imgur.com/UQ8Pd.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UQ8Pd.jpg" alt="the following image"></a>.</p>
<pre><code> Optimization terminated successfully.
Current function value: 158667093349733.531250
Iterations: 19
Function evaluations: 12463
</code></pre>
<p>This doesn't look good enough to me. Is there any way to improve this? The noise in the output is not even of the same length as the structures in the original picture. My guess is that the algorithm isn't going through. How can I debug or improve this?</p>
<p>EDIT1: I created the image below with the code </p>
<pre><code>size = 256
mat0 = np.zeros((size,size))
mat0[size/4:3*size/4,size/4:3*size/4] = 1000
#mat0[size/4:3*size/4,] = 1000
#mat0[:3*size/4,size/4:] = 1000
im0 = Image.fromarray(mat0)
im0.show()
</code></pre>
<p>The two commented-out lines result in two other images. Here are the results of my experiments - </p>
<ol>
<li>Square in the middle.
Input - <a href="https://i.stack.imgur.com/mnZlS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mnZlS.jpg" alt="square middle"></a><br>
Output - Same </li>
<li>Band in the middle.
Input - <a href="https://i.stack.imgur.com/zsJqw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zsJqw.jpg" alt="band middle"></a><br>
Output - <a href="https://i.stack.imgur.com/mnZlS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mnZlS.jpg" alt="square middle"></a> </li>
<li>White chunk to the North East
Input - <a href="https://i.stack.imgur.com/bXuZm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bXuZm.jpg" alt="north east"></a><br>
Output- <a href="https://i.stack.imgur.com/ALqOP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ALqOP.jpg" alt="north west"></a></li>
</ol>
<p>While this is much better than what I expected, cases 2 and 3 still end up being wrong. I hope that the arguments to the <code>minimize</code> function mean what I think they mean. </p>
|
<p>1) <strong>The rendering problem of the first image seems to be an issue in the conversion from numpy array to image. I get the right rendering by running:</strong> </p>
<pre><code>imout = Image.fromarray(mout/np.max(mout)*255)
</code></pre>
<p>(i.e. normalize the image to a maximum value of 255 and let it determine the mode automatically). </p>
<p>In general, to check that Image.fromarray is working, it is useful to compare the output of imout.show() with </p>
<pre><code>import matplotlib.pyplot as plt
plt.matshow(mout/np.max(mout)*255, cmap=plt.cm.gray)
</code></pre>
<p>and you should get the same results. BTW, by doing that, I get all the 3 other cases correct. </p>
<p>2) <strong>Secondly, the main problem with tux.png is that it is not possible to reconstruct an image with such a detailed structure with only an outer product of two 1-D vectors</strong>. </p>
<p>(This tends to work for simple images such as the blocky ones shown above, but not for an image with few symmetries and many details). </p>
<p>To prove the point: </p>
<ul>
<li><p>There exist matrix factorization techniques that allow reconstructing a matrix as the product of two low-rank matrices M=AB, such as <a href="http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html" rel="nofollow noreferrer">sklearn.decomposition.NMF</a>. </p></li>
<li><p>In this case, setting the rank of A and B to 1 would be equivalent to your problem (with a different optimization technique). </p></li>
<li><p>Playing with the code below you can easily see that with n_components=1 (which is equivalent to an outer product of two 1-D vectors), the resulting reconstructed matrix looks very similar to the one output by your method, and that the bigger n_components, the better the reconstruction. </p></li>
</ul>
<p>For reproducibility: </p>
<pre><code>import matplotlib.pyplot as plt
from sklearn.decomposition import NMF
nmf = NMF(n_components=20)
prj = nmf.fit_transform(mat)
out = prj.dot(nmf.components_)
out = np.asarray(out, dtype=float)
imout = Image.fromarray(out)
imout.show()
</code></pre>
<p>For illustration, this is the NMF reconstruction with 1 component (this is exactly an outer product between two 1-D vectors):</p>
<p><a href="https://i.stack.imgur.com/Ek438.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ek438.png" alt="NMF reconstruction with 1 component, equivalent to outer product of 2 vectors"></a></p>
<p>With 2 components: </p>
<p><a href="https://i.stack.imgur.com/wcf6A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wcf6A.png" alt="2 components"></a></p>
<p>And this is the NMF reconstruction with 20 components. </p>
<p><a href="https://i.stack.imgur.com/bxYG3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bxYG3.png" alt="NMF reconstruction with 20 components, equivalent to the sum of 20 1-D vector outer products)"></a></p>
<p>Which clearly indicates that a single 1-D outer product is not enough for this images. However it works for the blocky images. </p>
<p>If you are not restricted to an outer product of vectors, then matrix factorization can be an alternative. BTW, there exist a vast number of matrix factorization techniques. Another alternative in sklearn is <a href="http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html" rel="nofollow noreferrer">SVD</a>. </p>
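<p>As a side note, for the strict rank-1 case the least-squares-optimal outer product has a closed form via the SVD (Eckart-Young theorem), so no iterative optimizer is needed. A sketch, assuming <code>mat</code> is the grayscale array from the question:</p>
<pre><code>import numpy as np

U, s, Vt = np.linalg.svd(mat, full_matrices=False)
x = U[:, 0] * np.sqrt(s[0])    # vector A
y = Vt[0, :] * np.sqrt(s[0])   # vector B
best_rank1 = np.outer(x, y)    # optimal A.B^T in the least-squares sense
</code></pre>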
<p>3) Finally, there may be a scaling issue in x0 and y0. Note that the elements of np.outer(x0, y0) are orders of magnitude higher than the elements of mat. </p>
<p>Although I still get good results for the 3 blocky examples with the code provided, in general it is a good practice to have comparable scales when taking square differences. For instance, you might want to scale x0 and y0 so that the norm of np.outer is comparable to the one of mat. </p>
|
python|image|numpy|math|least-squares
| 0 |
1,025 | 36,663,287 |
Erratic (seemingly random) behavior of seek() and split() in python
|
<p>Consider the following code:</p>
<pre><code>import sys
with open(sys.argv[1]) as data_file:
data_file.readline() #skipping lines of texts
data_file.readline()
data_file.readline() #skipping lines of texts
data_file.readline()
data_file.readline() #skipping lines of texts
data_file.readline() #skipping lines of texts
data_file.readline() #skipping lines of texts
data_file.readline() #skipping lines of texts
data_file.readline() #skipping lines of texts
while True:
print "#"
pos=data_file.tell()
next_mol=data_file.readline().split()
print next_mol
data_file.seek(pos)
print data_file.readline().split()
</code></pre>
<p>here sys.argv[1] is the name of text file, which contains the following data:</p>
<pre><code>ITEM: TIMESTEP
31500000
ITEM: NUMBER OF ATOMS
28244
ITEM: BOX BOUNDS pp pp pp
0.706774 63.6072
1.77317 62.6918
-4.27518 67.4572
ITEM: ATOMS id type x y z
1 1 8.07271 20.6394 38.953
2 1 7.45444 20.2706 37.5682
3 1 7.94593 21.3438 36.5822
4 2 8.88701 22.2414 37.422
5 6 8.97587 21.7898 38.6976
6 7 9.51512 23.1098 36.8675
7 1 9.83459 22.2787 39.7728
8 3 8.54346 19.7726 39.3733
9 3 7.3188 20.9572 39.6053
10 3 6.33686 20.2798 37.6457
11 3 7.62824 19.2464 37.1935
12 3 7.14438 21.9616 36.2781
13 3 8.4454 20.9589 35.6742
14 3 9.51704 23.2023 40.2712
15 3 10.839 22.4705 39.342
16 3 9.84061 21.5031 40.5668
</code></pre>
<p>gives me following output:</p>
<pre><code>#
['1', '1', '8.07271', '20.6394', '38.953']
['71', '20.6394', '38.953']
#
['2', '1', '7.45444', '20.2706', '37.5682']
['1', '7.45444', '20.2706', '37.5682']
#
['3', '1', '7.94593', '21.3438', '36.5822']
['1', '7.94593', '21.3438', '36.5822']
#
['4', '2', '8.88701', '22.2414', '37.422']
['2', '8.88701', '22.2414', '37.422']
#
['5', '6', '8.97587', '21.7898', '38.6976']
['6', '8.97587', '21.7898', '38.6976']
#
['6', '7', '9.51512', '23.1098', '36.8675']
['7', '9.51512', '23.1098', '36.8675']
#
['7', '1', '9.83459', '22.2787', '39.7728']
['1', '9.83459', '22.2787', '39.7728']
#
['8', '3', '8.54346', '19.7726', '39.3733']
['3', '8.54346', '19.7726', '39.3733']
#
['9', '3', '7.3188', '20.9572', '39.6053']
['3', '7.3188', '20.9572', '39.6053']
#
['10', '3', '6.33686', '20.2798', '37.6457']
['0', '3', '6.33686', '20.2798', '37.6457']
</code></pre>
<p>I was expecting both strings between '#' to be the same. Am I missing something here?</p>
|
<p><code>file.readline()</code> uses a <em>read-ahead buffer</em> to find newlines, so it can return you a neat line that ends in <code>\n</code>. The alternative is to read byte by byte until a newline is found, which would be extremely inefficient.</p>
<p>As such, your first <code>file.readline()</code> reads in a chunk of information from the file, parses out the first line and returns that. Then a next call to <code>file.readline()</code> may well be able to give you the next line from the buffer alone, etc.</p>
<p>By the time you get to your <code>while</code> loop, the read-ahead buffer has been filled with every thing up to <code>1 1 8.072</code> (the first bytes after the <code>ITEM: ATOMS id type x y z</code> line). The next <code>file.readline()</code> call then reads in more buffer to find another newline, moving the file position to after the initial <code>2</code> on the next line, etc.</p>
<p>You can't reliably get the right file position from a file <em>and</em> use <code>file.readline()</code> calls; you'd have to take into account the number of lines read, the actual buffer size, and the style of line separators used in the file. Your problem can almost certainly be solved in different ways, like storing the already read lines in a queue or stack of some sort, for use in later iterations of your loop.</p>
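<p>For example, instead of mixing <code>tell()</code>/<code>seek()</code> with <code>readline()</code>, a sketch that simply keeps each line around for reuse:</p>
<pre><code>import sys

with open(sys.argv[1]) as data_file:
    for _ in range(9):      # skip the header lines
        data_file.readline()
    while True:
        line = data_file.readline()
        if not line:        # end of file
            break
        next_mol = line.split()
        print next_mol      # reuse line/next_mol as often as needed
</code></pre>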
|
python
| 2 |
1,026 | 36,390,596 |
How to access and edit variables inside functions in python
|
<p>I'm new(-ish) to Python and I made a game today, and after I finished I realised I'd made a big mistake:</p>
<p>inside the functions I have to access and edit variables which are also accessed and changed in <em>other</em> functions, and maybe in the future outside the functions. And I don't know how to do that.</p>
<p>I've researched for a long time and found very few things that might solve the problem; I've tried a few, but they haven't worked, and I don't understand how to use the others.</p>
<p>Could you please try to help me with the problem, and if you spot other problems please tell me, as I'm not too good at debugging :(</p>
<p>Here is the code below, it's quite big (I've put the variables I need to access and change in bold):</p>
<pre><code>from random import randint

print ("Ghost Game v2.0")
print ("select difficulty")

score = 0
alive = True
difficulty = 0
doors = 0
ghost_door = 0
action = 0
ghost_power = 0
#define the function 'ask_difficulty'
def ask_difficulty() :
difficulty = input ("Hard, Normal, Easy")
set_difficulty()
# define the function 'set_difficulty' which sets the difficulty.
def set_difficulty() :
if difficulty == 'Hard' or 'Normal' or 'Easy' :
if difficulty == 'Hard' :
doors = 2
elif difficulty == 'Normal' :
doors = 3
elif difficulty == 'Easy' :
doors = 5
else:
print ("Invalid input, please type Hard, Normal, or Easy")
ask_difficulty()
# define the function 'ghost_door_choose' which sets the ghost door and the chosen door
def ghost_door_choose(x):
ghost_door = randint (1, x)
print (doors + " doors ahead...")
print ("A ghost behind one.")
print ("Which do you open?")
if doors == 2 :
door = int("Door number 1, or door number 2...")
if 1 or 2 in door :
ghost_or_no()
else :
print ("Invalid input")
ghost_door_choose(difficulty)
elif doors == 3 :
door = int("Door number 1, door number 2, or door number 3")
if 1 or 2 or 3 in door :
ghost_or_no()
else:
print ("Invalid input")
ghost_door_choose(difficulty)
elif doors == 5 :
print("Door number 1, door number 2, door number 3, door number 4, or door number 5.")
if 1 or 2 or 3 or 4 or 5 in door :
ghost_or_no()
else:
print ("Invalid input")
ghost_door_choose(difficulty)
# define the function 'ghost_or_no'
def ghost_or_no() :
if door == ghost_door:
print ("GHOST!!")
print ("Initiating battle...")
battle()
else:
print ("No ghost, you\'ve been lucky, but will luck remain with you...")
score = score + 1
ghost_door_choose(difficulty)
# define the function 'battle' which is the battle program
def battle() :
ghost_power = randint (1, 4) # 1 = Speed, 2 = Strength, 3 = The ghost is not friendly, 4 = The ghost is friendly
print ("You have 3 options")
print ("You can flee, but beware, the ghost may be fast (flee),")
print ("You can battle it, but beware, the ghost might be strong (fight),")
print ("Or you can aproach the ghost and be friendly, but beware, the ghost may not be friendly (aproach)...")
action = input ("What do you choose?")
if flee in action :
action = 1
elif fight in action :
action = 2
elif aproach in action :
action = 3
else :
print ("Invalid input")
battle()
if ghost_power == action :
if action == 1:
print ("Oh no, the ghost\'s power was speed!")
print ("DEFEAT")
print ("You\'r score is " + score)
alive = False
elif action == 2:
print ("Oh no, the ghost\'s power was strength!")
print ("DEFEAT")
print ("You\'r score is " + score)
alive = False
elif action == 3:
print ("Oh no, the ghost wasn\'t friendly ")
alive = False
elif ghost_power == 4 and action == 3 :
print ("Congratulations, The ghost was friendly!")
score = score + 1
ghost_door_choose(difficulty)
elif ghost_power != action and ghost_power != 4 :
if action == 1:
print ("Congratulations, the ghost wasn\'t fast!")
score = score + 1
ghost_door_choose(difficulty)
elif action == 2:
print ("Congratulations, you defeated the ghost!")
score = score +1
ghost_door_choose(difficulty)
elif ghost_power != action and ghost_power == 4 :
if action == 1:
print ("You ran away from a friendly ghost!")
print ("Because you ran away for no reason, your score is now 0")
score = 0
ghost_door_choose(difficulty)
elif action == 1:
print ("You killed a friendly ghost!")
print ("Your score is now 0 because you killed the friendly ghost")
score = 0
ghost_door_choose(difficulty)
#actual game loop
ask_difficulty()
while alive :
ghost_door_choose(doors)
</code></pre>
|
<p>Consider:</p>
<pre><code>x=0
z=22
def func(x,y):
y=22
z+=1
print x,y,z
func('x','y')
</code></pre>
<p>When you call <code>func</code> you will get <code>UnboundLocalError: local variable 'z' referenced before assignment</code></p>
<p>To fix the error in our function, do:</p>
<pre><code>x=0
z=22
def func(x,y):
global z
y=22
z+=1
print x,y,z
</code></pre>
<p>The <code>global</code> keyword allows a local reference to a global defined variable to be changed.</p>
<p>Notice too that the local version of <code>x</code> is printed, not the global version. This is what you would expect. The ambiguity is if there is no local version of a value. Python treats globally defined values as read only unless you use the <code>global</code> keyword.</p>
<p>As stated in comments, a class to hold these variables would be better. </p>
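<p>A minimal sketch of that class-based approach (the names are illustrative, not taken from your game):</p>
<pre><code>class GameState(object):
    def __init__(self):
        self.score = 0
        self.alive = True
        self.doors = 0

    def add_point(self):
        self.score += 1

state = GameState()
state.add_point()
print(state.score)  # 1 - no global statements needed
</code></pre>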
|
python|function|variables
| 0 |
1,027 | 13,382,139 |
python scipy unit test
|
<p>I have installed a number of python modules into a common Linux directory that a number of people will be using via an NFS mount (yes I understand that there is a performance hit with this esp with python) I have been able to run the scipy.test('full') as the user that owns the NFS mount as well as root.</p>
<p>Is there a way that I can pass in an argument to the scipy.test() function that will tell it what dir to build the sc_* and linux227compiled_catalog.d* files in? I.e. scipy.test('full', '/tmp'), so that any user who mounts this can run these tests without having write access to the NFS mount?</p>
<p>Thanks in advance.</p>
|
<p>nvm ... I put the following into the test script:</p>
<pre><code>import scipy
import os
import shutil
directory = os.getcwd()
userHomeDirectory = ( "/home/" + os.getlogin())
userHomeScipyTests = ( userHomeDirectory + "/scipytests" )
# print ("your current directory location is: " + directory)
print ("Making the following temp folder: " + userHomeScipyTests )
if (os.path.isdir(userHomeScipyTests)):
shutil.rmtree(userHomeScipyTests)
os.makedirs ( userHomeScipyTests )
os.chdir( userHomeScipyTests )
print os.getcwd()
output = scipy.test('full')
# print ("this is the output of the scipy full test: " + str(output.wasSuccessful()))
self.assertEqual(str(output.wasSuccessful()), 'True', 'FullSciPyTest failed')
if output.wasSuccessful():
print ("Removing the following temp folder: " + userHomeScipyTests )
shutil.rmtree(userHomeScipyTests)
</code></pre>
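<p>A slightly tidier variant of the same idea, using <code>tempfile</code> so the scratch directory is unique per run and always cleaned up (sketch):</p>
<pre><code>import os
import shutil
import tempfile

import scipy

old_cwd = os.getcwd()
tmp_dir = tempfile.mkdtemp(prefix='scipytests-')
os.chdir(tmp_dir)
try:
    result = scipy.test('full')
finally:
    os.chdir(old_cwd)
    shutil.rmtree(tmp_dir)
</code></pre>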
|
python|scipy
| 0 |
1,028 | 17,125,978 |
Memory leak by ctypes pointers used within python class
|
<p>I am trying to wrap some C code via ctypes. Although my code (attached below) is functional, <a href="https://pypi.python.org/pypi/memory_profiler" rel="noreferrer">memory_profiler</a> suggests it is suffering a memory leak somewhere. The basic C struct I'm trying to wrap is defined in 'image.h'. It defines an image object containing a pointer to the data, a pointer array (needed for various other functions not included here), along with some shape information. </p>
<p><strong>image.h</strong>:</p>
<pre><code>#include <stdio.h>
#include <stdlib.h>
typedef struct image {
double * data; /*< The main pointer to the image data*/
    double **row;            /*< An array of pointers to each row of the image*/
unsigned long n; /*< The total number of pixels in the image*/
unsigned long nx; /*< The number of pixels per row (horizontal image dimensions)*/
unsigned long ny; /*< The number of pixels per column (vertical image dimensions)*/
} image;
</code></pre>
<p>The python code that wraps this C struct via ctypes is contained in 'image_wrapper.py' below. The python class <em>Image</em> implements many more methods which I didn't include here. The idea is to have a python object, that is as convenient to use as a numpy array. In fact, the class contains a numpy array as an attribute (self.array) which points to the exact same memory location than the data pointer within the C struct. </p>
<p><strong>image_wrapper.py</strong>:</p>
<pre><code>import numpy
import ctypes as c
class Image(object):
def __init__(self, nx, ny):
self.nx = nx
self.ny = ny
self.n = nx * ny
self.shape = tuple((nx, ny))
self.array = numpy.zeros((nx, ny), order='C', dtype=c.c_double)
self._argtype = self._argtype_generator()
self._update_cstruct_from_array()
def _update_cstruct_from_array(self):
data_pointer = self.array.ctypes.data_as(c.POINTER(c.c_double))
ctypes_pointer = c.POINTER(c.c_double) * self.ny
row_pointers = ctypes_pointer(
*[self.array[i,:].ctypes.data_as(c.POINTER(c.c_double)) for i in range(self.ny)])
ctypes_pointer = c.POINTER(ctypes_pointer)
row_pointer = ctypes_pointer(row_pointers)
self._cstruct = c.pointer(self._argtype(data=data_pointer,
row=row_pointer,
n=self.n,
nx=self.nx,
ny=self.ny))
def _argtype_generator(self):
class _Argtype(c.Structure):
_fields_ = [("data", c.POINTER(c.c_double)),
("row", c.POINTER(c.POINTER(c.c_double) * self.ny)),
("n", c.c_ulong),
("nx", c.c_ulong),
("ny", c.c_ulong)]
return _Argtype
</code></pre>
<p>Now, testing the memory consumption of the above code with memory_profiler suggests that Python's garbage collector is unable to clean up all references. Here is my test code, that creates a variable number of class instances within loops of different sizes.</p>
<p><strong>test_image_wrapper.py</strong></p>
<pre><code>import sys
import image_wrapper as img
import numpy as np
@profile
def main(argv):
image_size = 500
print 'Create 10 images\n'
for i in range(10):
x = img.Image(image_size, image_size)
del x
print 'Create 100 images\n'
for i in range(100):
x = img.Image(image_size, image_size)
del x
print 'Create 1000 images\n'
for i in range(1000):
x = img.Image(image_size, image_size)
del x
print 'Create 10000 images\n'
for i in range(10000):
x = img.Image(image_size, image_size)
del x
if __name__ == "__main__":
main(sys.argv)
</code></pre>
<p>The @profile is telling memory_profiler to analyse the subsequent function, here main. Running python with memory_profiler on test_image_wrapper.py via</p>
<pre><code>python -m memory_profiler test_image_wrapper.py
</code></pre>
<p>yields the following output:</p>
<pre><code>Filename: test_image_wrapper.py
Line # Mem usage Increment Line Contents
================================================
49 @profile
50 def main(argv):
51 """
52 Script to test memory usage of image.py
53 16.898 MB 0.000 MB """
54 16.898 MB 0.000 MB image_size = 500
55
56 16.906 MB 0.008 MB print 'Create 10 images\n'
57 19.152 MB 2.246 MB for i in range(10):
58 19.152 MB 0.000 MB x = img.Image(image_size, image_size)
59 19.152 MB 0.000 MB del x
60
61 19.152 MB 0.000 MB print 'Create 100 images\n'
62 19.512 MB 0.359 MB for i in range(100):
63 19.516 MB 0.004 MB x = img.Image(image_size, image_size)
64 19.516 MB 0.000 MB del x
65
66 19.516 MB 0.000 MB print 'Create 1000 images\n'
67 25.324 MB 5.809 MB for i in range(1000):
68 25.328 MB 0.004 MB x = img.Image(image_size, image_size)
69 25.328 MB 0.000 MB del x
70
71 25.328 MB 0.000 MB print 'Create 10000 images\n'
72 83.543 MB 58.215 MB for i in range(10000):
73 83.543 MB 0.000 MB x = img.Image(image_size, image_size)
74 del x
</code></pre>
<p>Each instance of the class Image within Python seems to leave about 5-6 kB behind, summing up to ~58 MB when processing 10k images. For an individual object this seems like not much, but as I have to run on ten million, I do care. The line that seems to cause the leak is the following, contained in image_wrapper.py. </p>
<pre><code> self._cstruct = c.pointer(self._argtype(data=data_pointer,
row=row_pointer,
n=self.n,
nx=self.nx,
ny=self.ny))
</code></pre>
<p>As mentioned above, it seems Python's garbage collector is unable to clean up all references. I did try to implement my own <code>__del__</code> method, something like</p>
<pre><code>def __del__(self):
del self._cstruct
del self
</code></pre>
<p>Unfortunately, this doesn't seem to fix the issue. After spending a day researching and trying several memory debuggers, my last resort seems to be Stack Overflow. Many thanks for your valuable thoughts and suggestions.</p>
|
<p>It may not be the only issue, but for sure the caching of each <code>_Argtype</code>: <code>LP__Argtype</code> pair in the dict <code>_ctypes._pointer_type_cache</code> is not insignificant. Memory usage should go down if you <code>clear</code> the cache. </p>
<p>The pointer and function type caches can be cleared with <code>ctypes._reset_cache()</code>. Bear in mind that clearing the cache can cause problems. For example:</p>
<pre><code>from ctypes import *
import ctypes
c_double_p = POINTER(c_double)
c_double_pp = POINTER(c_double_p)
class Image(Structure):
_fields_ = [('row', c_double_pp)]
ctypes._reset_cache()
nc_double_p = POINTER(c_double)
nc_double_pp = POINTER(nc_double_p)
</code></pre>
<p>The old pointers still work with <code>Image</code>:</p>
<pre><code>>>> img = Image((c_double_p * 10)())
>>> img = Image(c_double_pp(c_double_p(c_double())))
</code></pre>
<p>New pointers created after resetting the cache won't work:</p>
<pre><code>>>> img = Image((nc_double_p * 10)())
TypeError: incompatible types, LP_c_double_Array_10 instance
instead of LP_LP_c_double instance
>>> img = Image(nc_double_pp(nc_double_p(c_double())))
TypeError: incompatible types, LP_LP_c_double instance
instead of LP_LP_c_double instance
</code></pre>
<hr>
<p>If resetting the cache solves your problem, maybe that's good enough. But generally the pointer cache is both necessary and beneficial, so personally I'd look for another way. For example, as far as I can see there's no reason to customize <code>_Argtype</code> for each image. You could just define <code>row</code> as a <code>double **</code> initialized to the array of pointers. </p>
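<p>A sketch of that last suggestion - a single shared structure type whose <code>row</code> field is a generic <code>double **</code>, so no new pointer type has to be created (and cached) per <code>Image</code> instance:</p>
<pre><code>import ctypes as c

DoublePtr = c.POINTER(c.c_double)

class ImageStruct(c.Structure):
    _fields_ = [("data", DoublePtr),
                ("row", c.POINTER(DoublePtr)),
                ("n", c.c_ulong),
                ("nx", c.c_ulong),
                ("ny", c.c_ulong)]

def make_struct(array):
    """Build the C struct for a 2-D float64 numpy array (sketch)."""
    ny, nx = array.shape
    rows = (DoublePtr * ny)(*[array[i, :].ctypes.data_as(DoublePtr)
                              for i in range(ny)])
    struct = ImageStruct(data=array.ctypes.data_as(DoublePtr),
                         row=c.cast(rows, c.POINTER(DoublePtr)),
                         n=array.size, nx=nx, ny=ny)
    struct._rows = rows  # keep the row-pointer array alive
    return struct
</code></pre>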
|
python|pointers|memory-leaks|ctypes
| 3 |
1,029 | 43,916,453 |
Select related of selected related
|
<p>Say I have a relationship (by foreign key) like this: <em>Model 1 → Model 2 → Model 3</em>. Can I follow foreign key relationship with <code>select_related()</code> more than one level deep? I.e. not only from <em>Model 1</em> to <em>Model 2</em> but also from <em>Model 2 to Model 3</em>?</p>
|
<p>Yes, you can, by using the normal double-underscore syntax - as <a href="https://docs.djangoproject.com/en/1.11/ref/models/querysets/#select-related" rel="nofollow noreferrer">explicitly described</a> in the documentation:</p>
<pre><code>Model1.objects.select_related('model2__model3')
</code></pre>
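<p>For a concrete (hypothetical) model chain, this assumes something like:</p>
<pre><code>from django.db import models

class Model3(models.Model):
    pass

class Model2(models.Model):
    model3 = models.ForeignKey(Model3, on_delete=models.CASCADE)

class Model1(models.Model):
    model2 = models.ForeignKey(Model2, on_delete=models.CASCADE)

obj = Model1.objects.select_related('model2__model3').first()
obj.model2.model3  # both hops already fetched, no extra queries
</code></pre>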
|
python|django|orm
| 2 |
1,030 | 43,742,931 |
how repeat one plot in multiples subplots matplotlib
|
<p>Please, I need to repeat a climatological plot (fill_between(x, y1, y2)) in multiple subplots. Are there any tips for doing that? Here is part of my code.</p>
<pre><code>from matplotlib import pyplot as plt
plt.figure()
fig, axs = plt.subplots(nrows=2, ncols=2, sharex=True)
ax = axs[0,0]
ax.fill_between(month, y2,y3 , alpha=0.25, color='grey')
ax.errorbar(a1[0], a1[1], a1[2], marker='o')
ax = axs[0,1]
ax.fill_between(month, y2,y3 , alpha=0.25, color='grey')
ax.errorbar(a2[0], a2[1], a2[2], marker='o')
ax = axs[1,0]
ax.fill_between(month, y2,y3 , alpha=0.25, color='grey')
ax.errorbar(a3[0], a3[1], a3[2], marker='o')
ax = axs[1,1]
ax.fill_between(month, y2,y3 , alpha=0.25, color='grey')
ax.errorbar(a4[0], a4[1], a4[2], marker='o')
</code></pre>
<p>The data a1,a2,a3,....a24 because are 24 years come from</p>
<pre><code>a1 = graf1.graf1('mon1992.dat')
a2 = graf1.graf1('mon1993.dat')
a3 = graf1.graf1('mon1994.dat')
a4 = graf1.graf1('mon1995.dat')
</code></pre>
<p>And graf1 is a module</p>
<pre><code>import pandas as pd
def graf1(archivo):
archivo = '/Users/ccasiccia/Desktop/research/dataO3_180/' + archivo
data2 = pd.read_csv(archivo, delim_whitespace = True, header=None)
xd2 = data2.ix[:,0]
yd2 = data2.ix[:,1]
sd2 = data2.ix[:,2]
return xd2, yd2, sd2
</code></pre>
<p>Considering the data obtained with the graf1 function, I need to build figures with 4 subplots (2x2), and the question here is: how can I add the climatology plot (ax.fill_between(month, y2, y3, alpha=0.25, color='grey')) to each subplot, but not the way I did it above, repeating the instruction for each subplot?
Below is the data for one year (e.g. mon1996.dat):</p>
<pre><code>1 290.1931 23.21468
2 280.32778 17.70719
3 274.70455 19.08037
4 292.43913 27.8067
5 292.49667 24.57176
6 301.64667 26.96397
7 323.13889 20.30883
8 319.76 22.01486
9 306.432 20.07016
10 310.54444 45.90831
11 341.484 27.99424
12 300.71935 12.98657
</code></pre>
<p>This is the climatological data for fill_between</p>
<pre><code>1 322.418 20.25 20.287 342.668 302.168
2 315.1 21.534 21.578 336.634 293.566
3 293.268 23.694 23.738 316.962 269.574
4 292.928 26.499 26.55 319.427 266.429
5 301.565 31.153 31.21 332.718 270.412
6 304.135 35.883 35.953 340.018 268.252
7 317.792 36.85 36.916 354.642 280.942
8 321.36 35.798 35.863 357.158 285.562
9 324.558 33.472 33.535 358.03 291.086
10 336.043 45.679 45.762 381.722 290.364
11 338.736 33.518 33.58 372.254 305.218
12 327.578 27.093 27.144 354.671 300.485
</code></pre>
|
<p>Depending on how the data is organized it may be quite easy to loop over the plots to fill them. </p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
month=np.linspace(1,8)
y2 = -0.15*(month-4)**2+2.3
y3 = 0.1*(month-3.7)**2
x = np.logspace(1,5,base=1.5, num=16).reshape(4,4).T
y = np.sinc(x-3)**2+1
yerr = np.sqrt(y)
fig, axs = plt.subplots(nrows=2, ncols=2, sharex=True)
for i, ax in enumerate(axs.flatten()):
ax.fill_between(month, y2,y3 , alpha=0.25, color='gold')
ax.errorbar(x[:,i], y[:,i],yerr[:,i], marker='o')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/wWeYN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wWeYN.png" alt="enter image description here"></a></p>
|
python|matplotlib
| 0 |
1,031 | 43,850,001 |
Python Django project - move div class footer into body
|
<p>I'm creating a blog with Python and Django. Most of it has been fine up until now, when I tried to create the footer. The footer displays fine on the home page, but when you click into a blog post the footer gets constrained by the content container and row div classes.
When you look at this in the Firefox dev inspector, the DOM shows that my footer div sits within the content container and row, and not in the body. I've usually managed to find the answer for most things, but this is driving me nuts. I think I'm either missing something or I'm not asking the right question.</p>
<ol>
<li>I don't understand why it's just the footer div that has been put within the content container / row div and nothing else that is affected in the same way.</li>
<li>Is there any way to amend this without using jQuery / JavaScript and changing the parentElement node?</li>
<li>If I have to amend this with JavaScript, where exactly do I have to amend it, and with what?</li>
</ol>
<p>Thanks</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-css lang-css prettyprint-override"><code>.footer {
height: 50px;
background-color: #000000;
color: #ffffff;
padding: 10px;
text-align: center;
clear: both;
}</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code> <div class="content container">
<div class="row">
<div class="col-md-8">
{% block content %}
{% endblock %}
</div>
</div>
</div>
<footer>
<div class="footer">
Copyright &copy 2017
</div>
</footer></code></pre>
</div>
</div>
</p>
<p><a href="https://i.stack.imgur.com/2coSv.jpg" rel="nofollow noreferrer">firefox dev inspector image - sat in row div</a></p>
<p><a href="https://i.stack.imgur.com/ZtETw.jpg" rel="nofollow noreferrer">firefox dev inspector image - moved to body</a></p>
|
<p>This is the blog post detail page</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>{% extends 'blog/base.html' %}
{% block content %}
{% if post.published_date %}
{{ post.published_date }}
by Matt Cheetham
{% else %}
<a class="btn btn-default" href="{% url 'blog.views.post_publish' pk=post.pk %}">Publish</a>
{% endif %}
{% if user.is_authenticated %}
<a class="btn btn-default" href="{% url 'post_edit' pk=post.pk %}"><span class="glyphicon glyphicon-pencil"></span></a>
<a class="btn btn-default" href="{% url 'post_remove' pk=post.pk %}"><span class="glyphicon glyphicon-remove"></span></a>
{% endif %}
<div class="postview">
{{ post.title }}
</div>
<p>{{ post.text|safe }}</p>
<div class="comment">
<div class="date">
<h3>Comments</h3>
<br>
<a class="btn btn-default" href="{% url 'add_comment_to_post' pk=post.pk %}">Add comment</a>
<br>
<br>
{% for comment in post.comments.all %}
{% if user.is_authenticated or comment.approved_comment %}
{{ comment.created_date }}
{% if not comment.approved_comment %}
<a class="btn btn-default" href="{% url 'comment_remove' pk=comment.pk %}"><span class="glyphicon glyphicon-remove"></span></a>
<a class="btn btn-default" href="{% url 'comment_approve' pk=comment.pk %}"><span class="glyphicon glyphicon-ok"></span></a>
{% endif %}
</div>
<strong>{{ comment.author }}</strong>
<p>{{ comment.text|safe }}</p>
</div>
{% endif %}
{% empty %}
<p>No comments here yet...</p>
{% endfor %}
{% endblock %}</code></pre>
</div>
</div>
</p>
|
javascript|jquery|python|html|css
| 0 |
1,032 | 43,549,269 |
Seaborn ImportError: DLL load failed: The specified module could not be found
|
<p>I am getting the "ImportError: DLL load failed: The specified module could not be found." when importing the module <strong>seaborn</strong>.</p>
<p>I tried uninstalling both seaborn and matplotlib, then reinstalling by using </p>
<pre><code>pip install seaborn
</code></pre>
<p>but no luck. I still get the same error. </p>
<pre><code>ImportError Traceback (most recent call last)
<ipython-input-5-085c0287ecb5> in <module>()
----> 1 import seaborn
C:\Users\johnsam\venv\lib\site-packages\seaborn\__init__.py in <module>()
4
5 # Import seaborn objects
----> 6 from .rcmod import *
7 from .utils import *
8 from .palettes import *
C:\Users\johnsam\venv\lib\site-packages\seaborn\rcmod.py in <module>()
6 import matplotlib as mpl
7
----> 8 from . import palettes, _orig_rc_params
9
10
C:\Users\johnsam\venv\lib\site-packages\seaborn\palettes.py in <module>()
10 from .external.six.moves import range
11
---> 12 from .utils import desaturate, set_hls_values, get_color_cycle
13 from .xkcd_rgb import xkcd_rgb
14 from .crayons import crayons
C:\Users\johnsam\venv\lib\site-packages\seaborn\utils.py in <module>()
6
7 import numpy as np
----> 8 from scipy import stats
9 import pandas as pd
10 import matplotlib as mpl
C:\Program Files\Continuum\Anaconda3\lib\site-packages\scipy\stats\__init__.py in <module>()
332 from __future__ import division, print_function, absolute_import
333
--> 334 from .stats import *
335 from .distributions import *
336 from .rv import *
C:\Program Files\Continuum\Anaconda3\lib\site-packages\scipy\stats\stats.py in <module>()
179 from scipy.lib.six import callable, string_types
180 from numpy import array, asarray, ma, zeros, sum
--> 181 import scipy.special as special
182 import scipy.linalg as linalg
183 import numpy as np
C:\Program Files\Continuum\Anaconda3\lib\site-packages\scipy\special\__init__.py in <module>()
544 from __future__ import division, print_function, absolute_import
545
--> 546 from ._ufuncs import *
547
548 from .basic import *
ImportError: DLL load failed: The specified module could not be found.
</code></pre>
<p>Is there a way to get around this error?</p>
|
<p>I was having this issue until I uninstalled and reinstalled scipy with the pip command. Just got to your command line and type <code>pip uninstall scipy</code> and <code>pip install scipy</code>.</p>
<p>Hopefully that works for you as well. I also uninstalled/installed seaborn before this although I'm not sure if that was necessary.</p>
<p>Using conda rather than pip may also work. </p>
|
python|matplotlib|error-handling|seaborn
| 4 |
1,033 | 54,593,163 |
Find and remove duplicate files using Python
|
<p>I have several folders which contain duplicate files that have slightly different names (e.g. file_abc.jpg, file_abc(1).jpg), i.e. a suffix of "(1)" on the end. I am trying to develop a relatively simple method to search through a folder, identify duplicates, and then delete them. The criterion for a duplicate is "(1)" at the end of the file name, so long as the original also exists.</p>
<p>I can identify duplicates okay, however I am having trouble creating the text string in the right format to delete them. It needs to be <code>"C:\Data\temp\file_abc(1).jpg"</code>, however using the code below I end up with <code>r"C:\Data\temp''file_abc(1).jpg"</code>.</p>
<p>I have looked at answers like <a href="https://stackoverflow.com/questions/748675/finding-duplicate-files-and-removing-them">Finding duplicate files and removing them</a>, however this seems to be far more sophisticated than what I need.</p>
<p>If there are better (+simple) ways to do this then I let me know, however I only have around 10,000 files in total in 50 odd folders, so not a great deal of data to crunch through.</p>
<p>My code so far is:</p>
<pre><code>import os
file_path = r"C:\Data\temp"
file_list = os.listdir(file_path)
print (file_list)
for file in file_list:
if ("(1)" in file):
index_no = file_list.index(file)
print("!! Duplicate file, number in list: "+str(file_list.index(file)))
file_remove = ('r"%s' %file_path+"'\'"+file+'"')
print ("The text string is: " + file_remove)
os.remove(file_remove)
</code></pre>
|
<p>Your code is just a little more complex than necessary, and you didn't apply a proper way to create a file path out of a path and a file name. And I think you should not remove files which have no original (i. e. which aren't duplicates though their name looks like it).</p>
<p>Try this:</p>
<pre><code>for file_name in file_list:
if "(1)" not in file_name:
continue
original_file_name = file_name.replace('(1)', '')
    if not os.path.exists(os.path.join(file_path, original_file_name)):
continue # do not remove files which have no original
os.remove(os.path.join(file_path, file_name))
</code></pre>
<p>Mind though, that this doesn't work properly for files which have multiple occurrences of <code>(1)</code> in them, and files with <code>(2)</code> or higher numbers also aren't handled at all. So my real proposition would be this:</p>
<ul>
<li>Make a list of all files in the whole directory tree below a given start (use <code>os.walk()</code> to get this), then</li>
<li>sort all files by size, then</li>
<li>walk linearly through this list, identify the doubles (which are neighbours in this list) and</li>
<li>yield each such double-group (i. e. a small list of files (typically just two) which are identical).</li>
</ul>
<p>Of course you should check the contents of these few files then to be sure that not just two of them are accidentally the same size without being identical. If you are sure you have a group of identical ones, remove all but the one with the simplest names (e. g. without suffixes <code>(1)</code> etc.).</p>
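<p>A minimal sketch of that proposal, grouping by size and then confirming with a content hash (the choice of MD5 here is just an assumption):</p>
<pre><code>import os
import hashlib
from collections import defaultdict

def duplicate_groups(root_dir):
    by_size = defaultdict(list)
    for dir_path, _, file_names in os.walk(root_dir):
        for name in file_names:
            path = os.path.join(dir_path, name)
            by_size[os.path.getsize(path)].append(path)
    for paths in by_size.values():
        if len(paths) < 2:
            continue  # a unique size cannot have a duplicate
        by_hash = defaultdict(list)
        for path in paths:
            with open(path, 'rb') as f:
                by_hash[hashlib.md5(f.read()).hexdigest()].append(path)
        for group in by_hash.values():
            if len(group) > 1:
                yield group  # identical files; keep one, remove the rest
</code></pre>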
<hr>
<p>By the way, I would call the <code>file_path</code> something like <code>dir_path</code> or <code>root_dir_path</code> (because it is a directory and a complete path to it).</p>
|
python|python-2.7|file-management|data-management
| 3 |
1,034 | 54,658,862 |
Is there a way to convert named function arguments to dict
|
<p>I am trying to find out if there is a way to convert named arguments to <code>dict</code>. </p>
<p>I understand using <code>**kwargs</code> in place of individual named arguments would be pretty straight forward.</p>
<pre><code>def func(arg1=None, arg2=None, arg3=None):
# How can I convert these arguments to {'arg1': None, 'arg2': None, 'arg3': None}`
</code></pre>
|
<p>You can use <code>locals()</code> to get the local arguments:</p>
<pre><code>def func(arg1=None, arg2=None, arg3=None):
print(locals())
func() # {'arg3': None, 'arg2': None, 'arg1': None}
</code></pre>
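<p>One caveat: <code>locals()</code> reflects <em>all</em> local names at the moment it is called, so if the function defines other variables first, snapshot it at the top:</p>
<pre><code>def func(arg1=None, arg2=None, arg3=None):
    args = dict(locals())  # copy before creating any other locals
    result = 'something'   # would otherwise show up in locals()
    return args
</code></pre>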
|
python|function|dictionary|parameter-passing|named
| 5 |
1,035 | 52,574,943 |
How to add values into an empty list from a for loop in python?
|
<p>The given python code is supposed to accept a number and make a list containing
all odd numbers between 0 and that number</p>
<pre><code>n = int(input('Enter number : '))
i = 0
series = []
while (i <= n):
if (i % 2 != 0):
series += [i]
print('The list of odd numbers :\n')
for num in series:
print(num)
</code></pre>
|
<p>So, when dealing with lists or arrays, it's very important to understand the difference between referring to an element of the array and the array itself.</p>
<p>In your current code, <code>series</code> refers to the list, and <code>series += [i]</code> does in fact work: it extends the list with the one-element list <code>[i]</code>. The real problem is that the loop never increments <code>i</code>, so <code>while (i <= n)</code> runs forever on the same value; add <code>i += 1</code> at the end of the loop body.</p>
<p>One of the most critical parts of learning to code is learning exactly what to google. In this case, the terminology you want is "append", which is actually a built in method for lists which can be used as follows:</p>
<pre><code>series.append(i)
</code></pre>
<p>Good luck with your learning!</p>
|
python|list|loops
| 2 |
1,036 | 47,729,323 |
Elastic Beanstalk with Django: is there a way to run manage.py shell and have access to environment variables?
|
<p>Similar question was asked <a href="https://stackoverflow.com/questions/19997343/run-manage-py-from-aws-eb-linux-instance">here</a>, however the solution does not give the shell access to the same environment as the deployment. If I inspect <code>os.environ</code> from within the shell, none of the environment variables appear. </p>
<p>Is there a way to run the <code>manage.py shell</code> with the environment?</p>
<p>PS: As a little side question, I know the mantra for EBS is to stop using <code>eb ssh</code>, but then how would you run one-off management scripts (that you don't want to run on every deploy)?</p>
|
<p>One of the cases where you have to run something once is db schema migrations. Usually you store information about that in the db... so you can use the db to sync / ensure that something is triggered only once.</p>
<p>Personally I have nothing against using <code>eb ssh</code>; I do see problems with it, however. If you want to have CI/CD, that manual operation is against the rules.</p>
<p>It looks like you are referring to the WWW/API part of Beanstalk. If you need something that runs quite frequently... maybe a worker environment is more suitable? The problem here is that if the API gets deployed first, you would have the wrong schema.</p>
<p>In general you are using EC2, so it is the EC2 <code>user data</code> that stores the information that spins up your service. That is where you can put your "stuff". You still need to sync / ensure. <a href="http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-container.html" rel="nofollow noreferrer">Here</a> are the docs for Beanstalk - for more information on how to do that.</p>
<h1>Edit</h1>
<p>Beanstalk is a kind of instrumentation on top of EC2, so there must be a way to work with it, since you have access to the <code>user data</code> of those EC2 instances. No worries, you don't need to dig that deep. There is a good way of instrumenting your server: it is called ebextensions. It can be used to put files on the server, trigger commands, instrument cron - whatever you want.</p>
<p>You can create an <a href="http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions.html" rel="nofollow noreferrer">ebextension</a> with <a href="http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-container.html" rel="nofollow noreferrer">container_commands</a> (this time see the <code>Python Configuration Namespaces</code> section). Those commands are executed on each deployment. Still, the problem is that you need to sync, since more than one deployment can run at the same time. The good part is that you can set the env the way you want.</p>
<p>I have no problem accessing the environment variables. How are you reproducing the problem? Try preparing a debug page that dumps the environment mapping.</p>
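<p>A sketch of such a debug view for Django, dumping what the running process actually sees (the view name is illustrative; remove it after debugging):</p>
<pre><code>import os

from django.http import JsonResponse

def env_debug(request):
    # Dump the environment the worker process was started with.
    return JsonResponse(dict(os.environ))
</code></pre>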
|
python|django|amazon-web-services|environment-variables|amazon-elastic-beanstalk
| 0 |
1,037 | 47,945,097 |
Apply multiple if/else statement to groupby object in pandas
|
<p>I have a very large DataFrame according to below:</p>
<pre>
id amt date
1 0 2010-02-01
1 0 2012-05-12
1 0 2016-08-09
1 20 1970-01-01
2 0 2016-03-21
2 0 2017-11-10
2 0 2012-09-01
2 0 2016-04-15
</pre>
<p>What I want is to reduce it to one row per id according to the following logic:</p>
<ol>
<li>For a given ID-group: if amt > 0 and date == 1970-01-01 then output row. </li>
<li>For a given ID-group: if amt == 0 for all id rows, output max date for id</li>
</ol>
<p>I want the output to appear as below.</p>
<pre>
id amt date
1 20 1970-01-01
2 0 2017-11-10
</pre>
<p>I have actually solved it through sorting and grouping by ID and then taking last(). However, my issue came when I tried to write a function which operates on each separate groupby object and applies the logic I have in points 1 and 2 above (if/else-style). Can someone help me with this?</p>
<p>Code for DataFrame is below - and please note, the data is large so quick execution is helpful.</p>
<p>Many thanks,</p>
<p>/Swepab</p>
<pre><code>df = pd.DataFrame({'id' : [1, 1, 1, 1, 2, 2, 2, 2]
,'amt' : [0, 0, 0, 20, 0 ,0, 0, 0]
,'date' : ['2010-02-01', '2012-05-12','2016-08-09'
,'1970-01-01','2016-03-21','2017-11-10'
,'2012-09-01','2016-04-15']})
df['date'] = pd.to_datetime(df.date,format = "%Y-%m-%d")
df = df[['id', 'amt', 'date']]
</code></pre>
|
<p>I wrote a custom function which you can apply on individual groups</p>
<pre><code>def custom_fx(df):
if df.amt.sum() == 0:
max_date = df.date.max()
return df.loc[df.date==max_date,:]
    else:
return df[df.date.isin(["1970-01-01"])]
for groups,data in df.groupby("id"):
print(custom_fx(data))
</code></pre>
<p>OUTPUT:</p>
<pre><code> amt date id
3 20 1970-01-01 1
amt date id
5 0 2017-11-10 2
</code></pre>
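<p>To get the reduced frame back as a single DataFrame rather than printing group by group, one variant:</p>
<pre><code>result = pd.concat(custom_fx(data) for _, data in df.groupby("id"))
</code></pre>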
|
python|pandas|group-by
| 1 |
1,038 | 34,403,152 |
Python csv.reader to separate items by comma but ignore those within pairs of double-quotes
|
<p>I'm trying to use csv.reader to create a list of items from a string, but I'm having trouble. For instance, I have the following string:</p>
<pre><code>bibinfo = "wooldridge1999asymptotic, author = \"Wooldridge, Jeffrey M.\", title = \"Asymptotic Properties of Weighted M-Estimators for Variable Probability Samples\", journal = \"Econometrica\", volume = \"\", year = 1999"
</code></pre>
<p>And I run the following code:</p>
<pre><code>import csv
from io import StringIO
bibitems = [bibitem for bibitem in csv.reader(StringIO(bibinfo), skipinitialspace = True)][0]
</code></pre>
<p>But instead of having a list in which commas within a pair of double-quotes are not considered as separators, I obtain the following (unwanted) result:</p>
<pre><code>['wooldridge1999asymptotic', 'author = "Wooldridge', 'Jeffrey M."', 'title = "Asymptotic Properties of Weighted M-Estimators for Variable Probability Samples"', 'journal = "Econometrica"', 'volume = ""', 'year = 1999']
</code></pre>
<p>In other words, it separates some items (like author's surname from first name) when it should not. I followed the tips in this other <a href="https://stackoverflow.com/questions/8311900/python-read-csv-file-with-comma-within-fields">link</a>, but it seems that I'm missing something else too.</p>
|
<p>It works if the <code>"</code> is at beginning of the item:</p>
<pre><code>"author = Wooldridge, Jeffrey M."
</code></pre>
<p>With the changed text:</p>
<pre><code>>>> s = """wooldridge1999asymptotic, "author = Wooldridge, Jeffrey M.", title = "Asymptotic Properties of Weighted M-Estimators for Variable Probability Samples", journal = "Econometrica", volume = "", year = 1999"""
>>> list(csv.reader(s.splitlines(), skipinitialspace=True))
[['wooldridge1999asymptotic',
'author = Wooldridge, Jeffrey M.',
'title = "Asymptotic Properties of Weighted M-Estimators for Variable Probability Samples"',
'journal = "Econometrica"',
'volume = ""',
'year = 1999']]
</code></pre>
|
csv|python-3.4
| 0 |
1,039 | 66,278,328 |
How to set some space between the colorbar and the image
|
<p>I would like to set some space between the image and the colorbar. I have tried the pad argument, but it does nothing, so...
This is the image I have:
<a href="https://i.stack.imgur.com/liEOY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/liEOY.png" alt="enter image description here" /></a></p>
<p>and this is the code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib.colors import LogNorm
from matplotlib.ticker import LogLocator
from matplotlib import rcParams
rcParams['font.size']=35
x = np.arange(0,16,1)
yx= np.linspace(-50,0,38)
mx = np.random.rand(15,38)
m2 = np.linspace(0,6,38)
fig, ax = plt.subplots(figsize=(40,30))
divider = make_axes_locatable(ax)
cax = divider.append_axes('right', size='5%', pad=2)
im = ax.pcolor(x,yx,mx.T,norm=LogNorm(0.1, 100),cmap= 'jet')
cbar = fig.colorbar(im,pad = 2,cax=cax, orientation='vertical')
cbar.ax.yaxis.set_major_locator(LogLocator()) # <- Why? See above.
cbar.ax.set_ylabel('Resistividade \u03C1 [ohm.m]', rotation=270)
ax2=ax.twiny()
ax2.plot(m2,yx,'k--',linewidth=10)
#ax2.set_xlim([0,60])
ax2.set_xlabel('Resistividade \u03C1 [ohm.m]')
ax.set_xlabel('Aquisição')
ax.set_ylabel('Profundidade [m]')
#fig.tight_layout()
plt.savefig('mrec_1'+'.png',bbox_inches = "tight", format='png', dpi=300)
plt.show()
</code></pre>
|
<p>The secondary axes occupies all of the space in the figure that is meant for axes. Therefore, no matter what padding you give to the colorbar of <code>ax</code>, it wont affect <code>ax2</code>.</p>
<p>A hacky-ish solution would be to also split your secondary axes exactly the same as the primary axes, and then delete the axes where the second colorbar goes:</p>
<pre><code>fig, ax = plt.subplots(figsize=(10, 8))
pad = 0.2 # change the padding. Will affect both axes
im = ax.pcolor(x, yx, mx.T, norm=LogNorm(0.1, 100), cmap='jet')
divider = make_axes_locatable(ax)
cax = divider.append_axes('right', size='5%', pad=pad)
ax2 = ax.twiny()
ax2.plot(m2, yx, 'k--', linewidth=10)
ax2.set_xlim([0, 60])
ax2.set_xlabel('Resistividade \u03C1 [ohm.m]')
ax.set_xlabel('Aquisição')
ax.set_ylabel('Profundidade [m]')
cbar = fig.colorbar(im,pad = 2,cax=cax, orientation='vertical')
cbar.ax.yaxis.set_major_locator(LogLocator())
cbar.ax.set_ylabel('Resistividade \u03C1 [ohm.m]', rotation=270)
secondary_divider = make_axes_locatable(ax2) # divide second axes
redundant_cax = secondary_divider.append_axes('right', size='5%', pad=pad)
redundant_cax.remove() # delete the second (empty) colorbar
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/cKlYR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cKlYR.png" alt="enter image description here" /></a></p>
|
python|matplotlib|colorbar
| 1 |
1,040 | 72,547,507 |
How to calculate progressively using python
|
<p>I am creating a fitness wearable device Python program that tracks the distance its users walk or run daily. To motivate the users to meet and exceed the target distance, it rewards users with fitness points on a leaderboard for the users who meet and exceed the target distance in a week.</p>
<p>The fitness points are calculated with a minimum distance of 32 km per week, and the points for each km in excess of the minimum distance are shown as follows:</p>
<ul>
<li>Distance: 0 to 32 km Fitness Point: 0</li>
<li>Distance: 33 to 40 km Fitness Point:325 points per km</li>
<li>Distance: 41 to 48 km Fitness Point: 550 points per km</li>
<li>Distance: Greater than 48 km Point: 600 points per km</li>
</ul>
<p>How do I make the points calculate progressively.</p>
<pre><code>def fitness_app(distance):
while True:
distance = int(input("Please Enter Distance in Km: "))
if 0 > distance < 32:
fitness_pt = 0
print(fitness_pt)
elif 33 > distance < 40:
fitness_pt = 325 * distance
print(fitness_pt)
elif 41 > distance < 48:
fitness_pt = 550 * distance
print(fitness_pt)
elif distance > 48:
fitness_pt = 600 * distance
print(fitness_pt)
print(fitness_app(distance=True))
</code></pre>
|
<p>I think you're almost there, just that the comparisons don't need to be so complicated:</p>
<pre class="lang-py prettyprint-override"><code>def fitness_app():
while True:
distance = int(input("Please Enter Distance in Km: "))
if distance < 32:
fitness_pt = 0
elif distance < 40:
fitness_pt = 325 * distance
elif distance < 48:
fitness_pt = 550 * distance
else:
fitness_pt = 600 * distance
print(fitness_pt)
fitness_app()
</code></pre>
<p>Note: other superfluous complications also removed.</p>
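<p>If "progressively" is meant in the marginal, bracket-by-bracket sense (each km is scored at the rate of the bracket it falls in - this is an interpretation, since the code above multiplies the whole distance), a sketch:</p>
<pre><code>def progressive_points(distance):
    brackets = [(32, 0), (40, 325), (48, 550), (float("inf"), 600)]
    points, lower = 0, 0
    for upper, rate in brackets:
        km_in_bracket = max(0, min(distance, upper) - lower)
        points += km_in_bracket * rate
        lower = upper
        if distance <= upper:
            break
    return points

print(progressive_points(45))  # 8 km at 325 + 5 km at 550 = 5350
</code></pre>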
|
python
| 1 |
1,041 | 39,841,451 |
How to fix python program that appears to be doing an extra loop?
|
<p>A portion of a python program I am writing seems to be looping an extra time. The part of the program that isn't working is below. It is supposed to ask for a string from the user and create a two-dimensional list where each distinct character of the string is put in its own sub-list. (Hopefully that makes sense... if not I can try to explain better. Perhaps the code will help)</p>
<pre><code>def getInput(emptyList):
inputString = input("Please enter a sentence:\n").strip().upper()
functionList = [x for x in inputString]
emptyList.extend(functionList)
return 0
def sortList(listA,listB):
listA.sort()
currentElement = listA[0]
compareTo = listA[0]
elementsCounted = 0
i = 0
listB.append([])
while elementsCounted < len(listA):
while currentElement == compareTo:
listB[i].append(currentElement)
elementsCounted += 1
print(listB)
if elementsCounted < len(listA):
currentElement = listA[elementsCounted]
else:
break
if currentElement != compareTo:
i += 1
listB.append([])
compareTo = listA[i]
return 0
def main():
myList = list()
sortedList = list()
getInput(myList)
sortList(myList,sortedList)
print(sortedList)
main()
</code></pre>
<p>If the user enters <code>qwerty</code>, the program returns <code>[['E'], ['Q'], ['R'], ['T'], ['W'], ['Y']]</code> which is correct but if the user enters <code>qwwerrty</code> the program returns <code>[['E'], ['Q'], ['R', 'R'], [], ['T'], ['W', 'W'], [], ['Y']]</code>. Note the extra empty list after each "double" character. It appears that the loop is making one extra iteration or that the <code>if</code> statement before <code>listB.append([])</code> isn't written properly.</p>
<p>I can't seem to figure it out more than this. Thank you in advance for your help.</p>
<p>NOTE: <code>elementsCounted</code> should be a cumulative tally of each element that has been processed from listA. <code>i</code> is the index of the current element in listB. For example, if <code>['A','A','B']</code> was listA and the program is processing the second A, then it is the second element being counted but <code>i</code> is still 0 because it belongs in listB[0]. <code>currentElement</code> is the one currently being processed and it is being compared to the first element that was processed as that <code>i</code>. For the <code>['A','A','B']</code> example, when processing the second A, it is being compared to the first A to see if <code>i</code> should be incremented. In the next loop, it is comparing 'B' to the first 'A' and thus will increase <code>i</code> by one since 'B' belongs in the next sub-list.</p>
|
<p>Your mistake lies in this part:</p>
<pre><code>if currentElement != compareTo:
...
compareTo = listA[i]
</code></pre>
<p>It should be:</p>
<pre><code>if currentElement != compareTo:
...
compareTo = listA[elementsCounted]
</code></pre>
<p>It's an overly complex function for such a simple task.</p>
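<p>For reference, the same grouping can be written much more simply with <code>itertools.groupby</code> (a sketch, assuming the goal is just to bucket equal characters of the sorted input):</p>
<pre><code>from itertools import groupby

def sort_list(chars):
    # sorting puts duplicates next to each other, and groupby then
    # collects each run of equal characters into its own sub-list
    return [list(group) for _, group in groupby(sorted(chars))]

print(sort_list('QWWERRTY'))
# [['E'], ['Q'], ['R', 'R'], ['T'], ['W', 'W'], ['Y']]
</code></pre>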
|
python|list|loops|if-statement|while-loop
| 1 |
1,042 | 16,156,505 |
Retrieve Test Parameter Values from Quality Center
|
<p>I have been trying to get the actual value of my parameters from Quality Center that have been set in my test's test configuration. I am using the OTA API through python. I cannot seem to get anything but the default value. </p>
<p>Where should I be retrieving the parameter's value from? The test, design steps, test configuration, test set? If someone could point me in the right direction that would help.</p>
<p>Thanks!
Jason</p>
|
<p>Can you post your code? I may be able to help you out. Have a look at the following code, assuming you already know how to set up the connection (you need a Test Lab test set; the folder path usually starts with Root). Hope this helps:</p>
<pre><code>GetTest = test_lab_folder.TestSetFactory
TestSetFilter = GetTest.Filter
GetTSList = GetTest.NewList(TestSetFilter.Text)
for j in range(1, GetTSList.Count + 1):
    TestSet = GetTSList.Item(j)
    print TestSet.Name
    LabTests = TestSet.TSTestFactory
    LabTestSet = LabTests.NewList("")
    for k in range(1, LabTestSet.Count + 1):
        LabTest = LabTestSet.Item(k)
        # the parameters of a test instance live in its ParameterValueFactory
        TestsetParam = LabTest.ParameterValueFactory
        ParamFilter = TestsetParam.Filter
        NewParamList = TestsetParam.NewList(ParamFilter.Text)
        for n in range(1, NewParamList.Count + 1):
            param = NewParamList.Item(n)
            # ActualValue is the value set on the instance, not the design default
            print param.ActualValue
</code></pre>
|
python|hp-quality-center
| 0 |
1,043 | 16,241,944 |
Playing a sound in a ipython notebook
|
<p>I would like to be able to play a sound file in an IPython notebook.
My aim is to be able to listen to the results of different treatments applied to a sound directly from within the notebook.
Is this possible? If yes, what is the best solution to do so?</p>
|
<p>The previous answer is pretty old. You can use <a href="https://ipython.org/ipython-doc/dev/api/generated/IPython.display.html#IPython.display.Audio" rel="noreferrer">IPython.display.Audio</a> now. Like this:</p>
<pre><code>import IPython
IPython.display.Audio("my_audio_file.mp3")
</code></pre>
<p>Note that you can also process any type of audio content, and pass it to this function as a <code>numpy</code> array.</p>
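<p>For instance, a minimal sketch that synthesizes a tone from a numpy array (the <code>rate</code> argument is required for raw sample data):</p>
<pre><code>import numpy as np
import IPython.display

rate = 44100                          # samples per second
t = np.linspace(0, 2, 2 * rate)       # 2 seconds of time points
tone = np.sin(2 * np.pi * 440 * t)    # 440 Hz sine wave
IPython.display.Audio(tone, rate=rate)
</code></pre>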
<p>If you want to display multiple audio files, use the following:</p>
<pre><code>IPython.display.display(IPython.display.Audio("my_audio_file.mp3"))
IPython.display.display(IPython.display.Audio("my_audio_file.mp3"))
</code></pre>
|
audio|ipython|ipython-notebook
| 87 |
1,044 | 31,805,606 |
Saving XML using ETree in Python. It's not retaining namespaces, and adding ns0, ns1 and removing xmlns tags
|
<p>I see there are similar questions here, but nothing that has totally helped me.
I've also looked at the official documentation on namespaces but can't find anything that is really helping me, perhaps I'm just too new at XML formatting.
I understand that perhaps I need to create my own namespace dictionary? Either way, here is my situation:</p>
<p>I am getting a result from an API call, it gives me an XML that is stored as a string in my Python application. </p>
<p>What I'm trying to accomplish is just to grab this XML, swap out a tiny value (the b:string value under ConditionValue/Default, but that's irrelevant to this question), and then save it as a string to send later on in a REST POST call.</p>
<p>The source XML looks like this:</p>
<pre><code><Context xmlns="http://Test.the.Sdk/2010/07" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<xmlns i:nil="true" xmlns="http://schema.test.org/2004/07/Test.Soa.Vocab" xmlns:a="http://schema.test.org/2004/07/System.Xml.Serialize"/>
<Conditions xmlns:a="http://schema.test.org/2004/07/Test.Soa.Vocab">
<a:Condition>
<a:xmlns i:nil="true" xmlns:b="http://schema.test.org/2004/07/System.Xml.Serialize"/>
<Identifier>a23aacaf-9b6b-424f-92bb-5ab71505e3bc</Identifier>
<Name>Code</Name>
<ParameterSelections/>
<ParameterSetCollections/>
<Parameters/>
<Summary i:nil="true"/>
<Instance>25486d6c-36ba-4ab2-9fa6-0dbafbcf0389</Instance>
<ConditionValue>
<ComplexValue i:nil="true"/>
<Text i:nil="true" xmlns:b="http://schemas.microsoft.com/2003/10/Serialization/Arrays"/>
<Default>
<ComplexValue i:nil="true"/>
<Text xmlns:b="http://schemas.microsoft.com/2003/10/Serialization/Arrays">
<b:string>NULLCODE</b:string>
</Text>
</Default>
</ConditionValue>
<TypeCode>String</TypeCode>
</a:Condition>
<a:Condition>
<a:xmlns i:nil="true" xmlns:b="http://schema.test.org/2004/07/System.Xml.Serialize"/>
<Identifier>0af860f6-5611-4a23-96dc-eb3863975529</Identifier>
<Name>Content Type</Name>
<ParameterSelections/>
<ParameterSetCollections/>
<Parameters/>
<Summary i:nil="true"/>
<Instance>6364ec20-306a-4cab-aabc-8ec65c0903c9</Instance>
<ConditionValue>
<ComplexValue i:nil="true"/>
<Text i:nil="true" xmlns:b="http://schemas.microsoft.com/2003/10/Serialization/Arrays"/>
<Default>
<ComplexValue i:nil="true"/>
<Text xmlns:b="http://schemas.microsoft.com/2003/10/Serialization/Arrays">
<b:string>Standard</b:string>
</Text>
</Default>
</ConditionValue>
<TypeCode>String</TypeCode>
</a:Condition>
</Conditions>
</code></pre>
<p>My job is to swap out one of the values, retaining the entire structure of the source, and use this to submit a POST later on in the application. </p>
<p>The problem that I am having is that when it saves to a string or to a file, it totally messes up the namespaces:</p>
<pre><code><ns0:Context xmlns:ns0="http://Test.the.Sdk/2010/07" xmlns:ns1="http://schema.test.org/2004/07/Test.Soa.Vocab" xmlns:ns3="http://schemas.microsoft.com/2003/10/Serialization/Arrays" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<ns1:xmlns xsi:nil="true" />
<ns0:Conditions>
<ns1:Condition>
<ns1:xmlns xsi:nil="true" />
<ns0:Identifier>a23aacaf-9b6b-424f-92bb-5ab71505e3bc</ns0:Identifier>
<ns0:Name>Code</ns0:Name>
<ns0:ParameterSelections />
<ns0:ParameterSetCollections />
<ns0:Parameters />
<ns0:Summary xsi:nil="true" />
<ns0:Instance>25486d6c-36ba-4ab2-9fa6-0dbafbcf0389</ns0:Instance>
<ns0:ConditionValue>
<ns0:ComplexValue xsi:nil="true" />
<ns0:Text xsi:nil="true" />
<ns0:Default>
<ns0:ComplexValue xsi:nil="true" />
<ns0:Text>
<ns3:string>NULLCODE</ns3:string>
</ns0:Text>
</ns0:Default>
</ns0:ConditionValue>
<ns0:TypeCode>String</ns0:TypeCode>
</ns1:Condition>
<ns1:Condition>
<ns1:xmlns xsi:nil="true" />
<ns0:Identifier>0af860f6-5611-4a23-96dc-eb3863975529</ns0:Identifier>
<ns0:Name>Content Type</ns0:Name>
<ns0:ParameterSelections />
<ns0:ParameterSetCollections />
<ns0:Parameters />
<ns0:Summary xsi:nil="true" />
<ns0:Instance>6364ec20-306a-4cab-aabc-8ec65c0903c9</ns0:Instance>
<ns0:ConditionValue>
<ns0:ComplexValue xsi:nil="true" />
<ns0:Text xsi:nil="true" />
<ns0:Default>
<ns0:ComplexValue xsi:nil="true" />
<ns0:Text>
<ns3:string>Standard</ns3:string>
</ns0:Text>
</ns0:Default>
</ns0:ConditionValue>
<ns0:TypeCode>String</ns0:TypeCode>
</ns1:Condition>
</ns0:Conditions>
</code></pre>
<p>I've narrowed the code down to the most basic form and I'm still getting the same results so it's not anything to do with how I'm manipulating the file normally:</p>
<pre><code>import xml.etree.ElementTree as ET
import requests
get_context_xml = 'http://localhost/testapi/returnxml' #returns first XML example above.
source_context_xml = requests.get(get_context_xml)
Tree = ET.fromstring(source_context_xml.text)  # parse the body of the response
#Ensure the original namespaces are intact.
for Conditions in Tree.iter('{http://schema.test.org/2004/07/Test.Soa.Vocab}Condition'):
print "success"
with open('/home/memyself/output.xml','w') as f:
f.write(ET.tostring(Tree))
</code></pre>
|
<p>You need to <a href="https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.register_namespace" rel="noreferrer">register</a> the prefix and the namespace before you do <code>fromstring()</code> (Reading the xml) to avoid the default namespace prefixes (like <code>ns0</code> and <code>ns1</code> , etc.) .</p>
<p>You can use the <a href="https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.register_namespace" rel="noreferrer"><code>ET.register_namespace()</code></a> function for that, Example -</p>
<pre><code>ET.register_namespace('<prefix>','http://Test.the.Sdk/2010/07')
ET.register_namespace('a','http://schema.test.org/2004/07/Test.Soa.Vocab')
</code></pre>
<p>You can leave the <code><prefix></code> empty if you do not want a prefix.</p>
<hr>
<p>Example/Demo -</p>
<pre><code>>>> r = ET.fromstring('<a xmlns="blah">a</a>')
>>> ET.tostring(r)
b'<ns0:a xmlns:ns0="blah">a</ns0:a>'
>>> ET.register_namespace('','blah')
>>> r = ET.fromstring('<a xmlns="blah">a</a>')
>>> ET.tostring(r)
b'<a xmlns="blah">a</a>'
</code></pre>
|
python|xml|lxml|elementtree
| 21 |
1,045 | 38,751,084 |
How to write a customized LSTM in tensorflow?
|
<p>I am trying to reimplement this paper <a href="http://mi.eng.cam.ac.uk/~thw28/papers/EMNLP15.pdf" rel="nofollow">Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems</a>, in which they add a gate to the LSTM cell and change how the state is computed.</p>
<p>How can I do this in tensorflow? Do I need to add a new OP ?</p>
|
<p>The <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/nn.html#rnn" rel="nofollow"><code>tf.nn.rnn()</code></a> and <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/nn.html#dynamic_rnn" rel="nofollow"><code>tf.nn.dynamic_rnn()</code></a> functions accept an argument <code>cell</code> of type <a href="https://github.com/tensorflow/tensorflow/blob/c5f94b10bbb30e525fa3ca313e7ccb173040c90a/tensorflow/python/ops/rnn_cell.py#L87" rel="nofollow"><code>tf.nn.rnn_cell.RNNCell</code></a>. For example you can take a look at the <a href="https://github.com/tensorflow/tensorflow/blob/c5f94b10bbb30e525fa3ca313e7ccb173040c90a/tensorflow/python/ops/rnn_cell.py#L256" rel="nofollow">implementation of <code>tf.nn.rnn_cell.BasicLSTMCell</code></a> (in particular the <a href="https://github.com/tensorflow/tensorflow/blob/c5f94b10bbb30e525fa3ca313e7ccb173040c90a/tensorflow/python/ops/rnn_cell.py#L302" rel="nofollow"><code>BasicLSTMCell.__call__()</code> method</a>), which might be a good starting point for your customized LSTM.</p>
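<p>For illustration, here is a minimal sketch of a custom cell against that (r0.10-era) API: subclass <code>RNNCell</code>, declare <code>state_size</code>/<code>output_size</code>, and implement <code>__call__</code>. Note the gating scheme below is a toy stand-in, not the SC-LSTM from the paper:</p>
<pre><code>import tensorflow as tf

class MyGatedCell(tf.nn.rnn_cell.RNNCell):
    """Toy cell: mixes a vanilla RNN candidate with the old state via a gate."""

    def __init__(self, num_units):
        self._num_units = num_units

    @property
    def state_size(self):
        return self._num_units

    @property
    def output_size(self):
        return self._num_units

    def __call__(self, inputs, state, scope=None):
        with tf.variable_scope(scope or "my_gated_cell"):
            concat = tf.concat(1, [inputs, state])  # old concat(dim, values) signature
            in_size = concat.get_shape()[1].value
            w = tf.get_variable("w", [in_size, self._num_units])
            w_g = tf.get_variable("w_g", [in_size, self._num_units])
            candidate = tf.tanh(tf.matmul(concat, w))
            gate = tf.sigmoid(tf.matmul(concat, w_g))
            new_state = gate * candidate + (1.0 - gate) * state
        return new_state, new_state
</code></pre>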
|
machine-learning|neural-network|tensorflow|lstm
| 4 |
1,046 | 9,882,323 |
Advice extracting //td text and numbers
|
<p>I have been working through the tutorial adapting it to a project I want to achieve. I seem to have something going wrong that i just can't find the error to.</p>
<p>When using 'scrapy shell' I can get the response I expect. So for this site <a href="http://live.nrlstats.com/nrl/ladder.html" rel="nofollow">Nrl Ladder</a></p>
<pre><code>In [1]: hxs.select('//td').extract()
Out[1]:
[u'<td>\r\n<div id="ls-nav">\r\n<ul><li><a href="http://www.nrlstats.com/"><span>Home</span></a></li>\r\n<li class="ls-nav-on"><a href="/nrl"><span>NRL</span></a></li>\r\n<li><a href="/nyc"><span>NYC</span></a></li>\r\n<li><a href="/rep"><span>Rep Matches</span></a></li>\r\n\r\n</ul></div>\r\n</td>',
u'<td style="text-align:left" colspan="5">Round 4</td>',
u'<td colspan="5">Updated: 26/3/2012</td>',
u'<td style="text-align:left">1. Melbourne</td>',
u'<td>4</td>',
u'<td>4</td>',
u'<td>0</td>',
u'<td>0</td>',
u'<td>0</td>',
u'<td>122</td>',
u'<td>39</td>',
u'<td>83</td>',
u'<td>8</td>',
u'<td style="text-align:left">2. Canterbury-Bankstown</td>',
</code></pre>
<p>And on it goes.</p>
<p>I am really struggling to understand how to alter the tutorial project to change it to a different data type. </p>
<p>Is there any way to bring up help or documentation to see what types I should use in items when using 'td' or any other element? Like I say, it works easily in the shell, but I cannot transfer it to the files. Specifically, both the team names and the points are 'td', but the team name is text.</p>
<p>here is what I have done.</p>
<pre><code>from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from nrl.items import NrlItem
class nrl(BaseSpider):
name = "nrl"
allowed_domains = ["http://live.nrlstats.com/"]
start_urls = [
"http://live.nrlstats.com/nrl/ladder.html",
]
def parse(self, response):
hxs = HtmlXPathSelector(response)
sites = hxs.select('//td')
items = []
for site in sites:
item = nrlItem()
item['team'] = site.select('/text()').extract()
item['points'] = site.select('/').extract()
items.append(item)
return items
</code></pre>
|
<p>I didn't quite understand your question, but here is a starting point, imo (haven't tested; see some comments in the code):</p>
<pre><code>from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from nrl.items import NrlItem
class nrl(BaseSpider):
name = "nrl"
allowed_domains = ["live.nrlstats.com"] # domains should be like this
start_urls = [
"http://live.nrlstats.com/nrl/ladder.html",
]
def parse(self, response):
hxs = HtmlXPathSelector(response)
rows = hxs.select('//table[@class="tabler"]//tr[starts-with(@class, "r")]') # select team rows
items = []
for row in rows:
            item = NrlItem()
columns = row.select('./td/text()').extract() # select columns for the selected row
item['team'] = columns[0]
item['P'] = int(columns[1])
item['W'] = int(columns[2])
            # ... assign the remaining columns the same way
items.append(item)
return items
</code></pre>
<p>UPDATE:</p>
<p><code>//table[@class="tabler"]//tr[starts-with(@class, "r")]</code> is an xpath query. See some xpath <a href="http://msdn.microsoft.com/en-us/library/ms256086.aspx" rel="nofollow">examples here</a>. </p>
<p><code>hxs.select(xpath_query)</code> always returns a list of nodes (also of type <code>HtmlXPathSelector</code>) which fall under the given query.</p>
<p><code>hxs.extract()</code> returns string representation of the node(s).</p>
<p>P.S. Beware that scrapy supports XPath 1.0, but not 2.0 (at least on Linux, not sure about Windows), so some of the newest xpath features might not work.</p>
<p>See also: </p>
<ul>
<li><a href="http://doc.scrapy.org/en/latest/topics/selectors.html" rel="nofollow">http://doc.scrapy.org/en/latest/topics/selectors.html</a></li>
<li><a href="http://doc.scrapy.org/en/latest/topics/firefox.html" rel="nofollow">http://doc.scrapy.org/en/latest/topics/firefox.html</a></li>
</ul>
|
python|xpath|scrapy
| 2 |
1,047 | 68,138,901 |
How To Dynamically Use User Input for Jira Python
|
<p>So I am trying to make an interactive method of pulling out Jira information, based on a Jira Key.</p>
<p>Full Code:</p>
<pre class="lang-py prettyprint-override"><code>import os
from atlassian import Jira
import json
with open('secrets.json','r') as f:
config = json.load(f)
jira_instance = Jira(
url = "https://mirantis.jira.com",
username = (config['user']['username']),
password = (config['user']['password'])
)
projects = jira_instance.get_all_projects(included_archived=None)
value = input("Please enter your Jira Key and the Issue ID:\n")
jira_key = (value)
issue = jira_instance.issue('(jira_key)', fields='summary,history,created,updated')
#issue = jira_instance.issue('DESDI-212', fields='summary,history,created,updated')
print(issue)
</code></pre>
<p>The main thing that is breaking is this:</p>
<pre class="lang-py prettyprint-override"><code>issue = jira_instance.issue('(jira_key)', fields='summary,history,created,updated')
</code></pre>
<p>For some odd reason, it doesn't like the way I am using user input for <code>jira_key</code> even though it will print out what I want if I use <code>print(jira_key)</code></p>
<p>Am I invoking it wrong?</p>
<p>I basically need this:</p>
<pre><code>issue = jira_instance.issue('DESDI-212', fields='summary,history,created,updated')
</code></pre>
<p>where DESDI-212 comes from user input.
When I try it using <code>'(jira_key)'</code>, it responds with this error:</p>
<pre class="lang-sh prettyprint-override"><code> rbarrett@MacBook-Pro-2 ~/Projects/Mirantis/Dataeng/Python python test_single.py ✔ 10422 22:03:34
Please enter your Jira Key and the Issue ID:
DESDI-212
Traceback (most recent call last):
File "/Users/rbarrett/Projects/Mirantis/Dataeng/Python/test_single.py", line 19, in <module>
issue = jira_instance.issue('(jira_key)', fields='summary,history,created,updated')
File "/usr/local/lib/python3.9/site-packages/atlassian/jira.py", line 676, in issue
return self.get("rest/api/2/issue/{0}?fields={1}".format(key, fields), params=params)
File "/usr/local/lib/python3.9/site-packages/atlassian/rest_client.py", line 264, in get
response = self.request(
File "/usr/local/lib/python3.9/site-packages/atlassian/rest_client.py", line 236, in request
self.raise_for_status(response)
File "/usr/local/lib/python3.9/site-packages/atlassian/jira.py", line 3715, in raise_for_status
raise HTTPError(error_msg, response=response)
requests.exceptions.HTTPError: Issue does not exist or you do not have permission to see it.
</code></pre>
<p>I expect to see this, which if I use <code>'DESDI-212'</code> instead of <code>'(jira_key)'</code> it actually works:</p>
<pre class="lang-sh prettyprint-override"><code>{'expand': 'renderedFields,names,schema,operations,editmeta,changelog,versionedRepresentations', 'id': '372744', 'self': 'https://mirantis.jira.com/rest/api/2/issue/372744', 'key': 'DESDI-212', 'fields': {'summary': 'Add the MSR version to be parsed into Loadstone', 'updated': '2021-06-23T17:33:21.206-0700', 'created': '2021-06-01T12:54:06.136-0700'}}
</code></pre>
|
<p>So it turns out I was invoking it wrong.
I was passing the literal string <code>'(jira_key)'</code> instead of the variable; I needed to drop the quotes and pass <code>jira_key</code> itself (the extra parentheses are harmless but redundant), as follows:</p>
<pre class="lang-py prettyprint-override"><code>import os
from atlassian import Jira
import json
with open('secrets.json','r') as f:
config = json.load(f)
jira_instance = Jira(
url = "https://mirantis.jira.com",
username = (config['user']['username']),
password = (config['user']['password'])
)
projects = jira_instance.get_all_projects(included_archived=None)
value = input("Please enter your Jira Key and the Issue ID:\n")
jira_key = value
issue = jira_instance.issue(jira_key, fields='summary,history,created,updated')
#issue = jira_instance.issue('DESDI-212', fields='summary,history,created,updated')
print(issue)
</code></pre>
<p>As such I got the expected output I needed; now it's working as expected:</p>
<pre class="lang-sh prettyprint-override"><code> rbarrett@MacBook-Pro-2 ~/Projects/Mirantis/Dataeng/Python python test_single.py ✔ 10428 22:22:17
Please enter your Jira Key and the Issue ID:
DESDI-212
{'expand': 'renderedFields,names,schema,operations,editmeta,changelog,versionedRepresentations', 'id': '372744', 'self': 'https://mirantis.jira.com/rest/api/2/issue/372744', 'key': 'DESDI-212', 'fields': {'summary': 'Add the MSR version to be parsed into Loadstone', 'updated': '2021-06-23T17:33:21.206-0700', 'created': '2021-06-01T12:54:06.136-0700'}}
</code></pre>
|
python|jira|python-jira
| 0 |
1,048 | 2,265,234 |
Design pattern to organize non-trivial ORM queries?
|
<p>I am developing a web API with 10 tables or so in the backend, with several one-to-many and many-to-many associations. The API essentially is a database wrapper that performs validated updates and conditional queries. It's written in Python, and I use SQLAlchemy for ORM and CherryPy for HTTP handling.</p>
<p>So far I have separated the 30-some queries the API performs into functions of their own, which look like this:</p>
<pre><code># in module "services.inventory"
def find_inventories(session, user_id, *inventory_ids, **kwargs):
query = session.query(Inventory, Product)
query = query.filter_by(user_id=user_id, deleted=False)
...
return query.all()
def find_inventories_by(session, app_id, user_id, by_app_id, by_type, limit, page):
....
# in another service module
def remove_old_goodie(session, app_id, user_id):
try:
old = _current_goodie(session, app_id, user_id)
services.inventory._remove(session, app_id, user_id, [old.id])
except ServiceException, e:
# log it and do stuff
....
</code></pre>
<p>The CherryPy request handler calls the query methods, which are scattered across several service modules, as needed. The rationale behind this solution is, since they need to access multiple model classes, they don't belong to individual models, and also these database queries should be separated out from direct handling of API accesses.</p>
<p>I realize that the above code might be called <a href="http://www.refactoring.com/catalog/introduceForeignMethod.html" rel="nofollow noreferrer">Foreign Methods</a> in the realm of refactoring. I could well live with this way of organizing for a while, but as things are starting to look a little messy, I'm looking for a way to refactor this code.</p>
<ul>
<li>Since the queries are tied directly to the API and its business logic, they are hard to generalize like getters and setters.</li>
<li>It smells to repeat the <code>session</code> argument like that, but as the current implementation of the API creates a new CherryPy handler instance for each API call and therefore the <code>session</code> object, there is no global way of getting at the current <code>session</code>.</li>
</ul>
<p>Is there a well-established pattern to organize such queries? Should I stick with the Foreign Methods and just try to unify the function signature (argument ordering, naming conventions etc.)? What would you suggest?</p>
|
<p>The standard way to have global access to the current session in a threaded environment is <a href="http://www.sqlalchemy.org/docs/session.html#contextual-thread-local-sessions" rel="nofollow noreferrer">ScopedSession</a>. There are some important aspects to get right when integrating with your framework, mainly transaction control and clearing out sessions between requests. A common pattern is to have an autocommit=False (the default) ScopedSession in a module and wrap any business logic execution in a try-catch clause that rolls back in case of exception and commits if the method succeeded, then finally calls Session.remove(). The business logic would then import the Session object into global scope and use it like a regular session.</p>
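<p>A minimal sketch of that pattern (module layout and names here are placeholders, not a prescribed API):</p>
<pre><code># db.py: module-level scoped session
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite:///app.db")
Session = scoped_session(sessionmaker(bind=engine))

# request-handling layer: wrap business logic execution in a transaction
def run_in_transaction(business_fn, *args, **kwargs):
    try:
        result = business_fn(*args, **kwargs)
        Session.commit()
        return result
    except Exception:
        Session.rollback()
        raise
    finally:
        Session.remove()  # give the next request a fresh session
</code></pre>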
<p>There seems to be an existing <a href="http://tools.cherrypy.org/wiki/SqlAlchemy" rel="nofollow noreferrer">CherryPy-SQLAlchemy integration module</a>, but as I'm not too familiar with CherryPy, I can't comment on its quality.</p>
<p>Having queries encapsulated as functions is just fine. Not everything needs to be in a class. If they get too numerous just split into separate modules by topic.</p>
<p>What I have found useful is to factor out common criteria fragments. They usually fit rather well as classmethods on model classes. Aside from increasing readability and reducing duplication, they work as implementation-hiding abstractions to some extent, making refactoring the database less painful. (Example: instead of <code>(Foo.valid_from <= func.current_timestamp()) & (Foo.valid_until > func.current_timestamp())</code> you'd have <code>Foo.is_valid()</code>)</p>
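<p>A sketch of what such a classmethod could look like on a declarative model:</p>
<pre><code>from sqlalchemy import Column, DateTime, Integer, func
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    valid_from = Column(DateTime)
    valid_until = Column(DateTime)

    @classmethod
    def is_valid(cls):
        # returns a reusable filter criterion, not a boolean
        now = func.current_timestamp()
        return (cls.valid_from <= now) & (cls.valid_until > now)

# usage: session.query(Foo).filter(Foo.is_valid())
</code></pre>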
|
python|design-patterns|orm|refactoring|sqlalchemy
| 1 |
1,049 | 32,417,242 |
Scheduling a task in python
|
<p>I'm trying to schedule a task every 5 seconds; here is what I did:</p>
<pre><code>import schedule
import time
import tweepy
from threading import Timer
def job():
iGen = (i for i in range(1, 6))
for i in iGen:
i += 1
mymessage = "My message here " + str(i)
print(mymessage)
schedule.every(5).seconds.do(job)
while 1:
schedule.run_pending()
time.sleep(1)
</code></pre>
<p>but the result is:</p>
<pre><code>My message here 2 ..after 5 secs
My message here 3
My message here 4
My message here 5
My message here 6
My message here 2 ..after 5 secs
My message here 3
My message here 4
My message here 5
My message here 6
My message here 2 ..after 5 secs
My message here 3
My message here 4
My message here 5
My message here 6
</code></pre>
<p>what i need is:</p>
<pre><code>My message here 2 ..after 5 secs
My message here 3 ..after 5 secs
My message here 4 ..after 5 secs
My message here 5 ..after 5 secs
My message here 6 ..after 5 secs
</code></pre>
<p>sorry for the newbie question, Thank you</p>
|
<p>Your job is to loop over 2-6, printing for each. It sounds like you want the job to just print once each time it runs. This would do that, but would not number the messages.</p>
<pre><code>import schedule
import time
def job():
print("Message")
schedule.every(5).seconds.do(job)
while 1:
schedule.run_pending()
time.sleep(1)
</code></pre>
<p>To get numbering is a bit more complicated, but you can do it with a static variable:</p>
<pre><code>import schedule
import time
def job():
job.i += 1
print("Message: " + str(job.i))
job.i = 1
schedule.every(5).seconds.do(job)
while 1:
schedule.run_pending()
time.sleep(1)
</code></pre>
|
python
| 2 |
1,050 | 28,241,941 |
Disable OpenGL for Python / Matplotlib
|
<p>I'm doing a Python course for which I have installed Arch Linux in a VM. When I use Matplotlib.pyplot to plot things (x vs y) I get a bunch of errors.</p>
<pre><code>libGL error: pci id for fd 12: 80ee:beef, driver (null)
OpenGL Warning: glFlushVertexArrayRangeNV not found in mesa table
OpenGL Warning: glVertexArrayRangeNV not found in mesa table
OpenGL Warning: glCombinerInputNV not found in mesa table
OpenGL Warning: glCombinerOutputNV not found in mesa table
OpenGL Warning: glCombinerParameterfNV not found in mesa table
OpenGL Warning: glCombinerParameterfvNV not found in mesa table
OpenGL Warning: glCombinerParameteriNV not found in mesa table
OpenGL Warning: glCombinerParameterivNV not found in mesa table
OpenGL Warning: glFinalCombinerInputNV not found in mesa table
OpenGL Warning: glGetCombinerInputParameterfvNV not found in mesa table
OpenGL Warning: glGetCombinerInputParameterivNV not found in mesa table
OpenGL Warning: glGetCombinerOutputParameterfvNV not found in mesa table
OpenGL Warning: glGetCombinerOutputParameterivNV not found in mesa table
OpenGL Warning: glGetFinalCombinerInputParameterfvNV not found in mesa table
OpenGL Warning: glGetFinalCombinerInputParameterivNV not found in mesa table
OpenGL Warning: glDeleteFencesNV not found in mesa table
OpenGL Warning: glFinishFenceNV not found in mesa table
OpenGL Warning: glGenFencesNV not found in mesa table
OpenGL Warning: glGetFenceivNV not found in mesa table
OpenGL Warning: glIsFenceNV not found in mesa table
OpenGL Warning: glSetFenceNV not found in mesa table
OpenGL Warning: glTestFenceNV not found in mesa table
libGL error: core dri or dri2 extension not found
libGL error: failed to load driver: vboxvideo
OpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240
OpenGL Warning: Retry with 0x8002 returned 0 visuals
OpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240
OpenGL Warning: Retry with 0x8002 returned 0 visuals
OpenGL Warning: glXGetFBConfigAttrib for 00007f6ff33d0240, failed to get XVisualInfo
OpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240
OpenGL Warning: Retry with 0x8002 returned 0 visuals
OpenGL Warning: glXGetFBConfigAttrib for 00007f6ff33d0240, failed to get XVisualInfo
OpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240
OpenGL Warning: Retry with 0x8002 returned 0 visuals
OpenGL Warning: glXGetFBConfigAttrib for 00007f6ff33d0240, failed to get XVisualInfo
OpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240
OpenGL Warning: Retry with 0x8002 returned 0 visuals
OpenGL Warning: glXGetFBConfigAttrib for 00007f6ff33d0240, failed to get XVisualInfo
OpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240
OpenGL Warning: Retry with 0x8002 returned 0 visuals
OpenGL Warning: glXGetFBConfigAttrib for 00007f6ff33d0240, failed to get XVisualInfo
OpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240
OpenGL Warning: Retry with 0x8002 returned 0 visuals
OpenGL Warning: glXGetFBConfigAttrib for 00007f6ff33d0240, failed to get XVisualInfo
OpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240
OpenGL Warning: Retry with 0x8002 returned 0 visuals
OpenGL Warning: glXGetFBConfigAttrib for 00007f6ff33d0240, failed to get XVisualInfo
OpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240
OpenGL Warning: Retry with 0x8002 returned 0 visuals
OpenGL Warning: glXGetFBConfigAttrib for 00007f6ff33d0240, failed to get XVisualInfo
</code></pre>
<p>When I turn of 3D support for the VM it simply asks for openGL. My script does create a plot (empty canvas) but without a line.</p>
<p>I think it should be possible to draw some lines without openGL, right? How to go about this...</p>
<p>Edit: I think it was a VirtualBox bug combined with an error in my Python code. I could actually get good graphs with the error messages present in the end. In the latest versions of VirtualBox I'm not getting the error anymore. Thanks for the suggestions.</p>
|
<p>So, despite all the errors, nothing was actually broken; the missing graphs were not caused by the errors in the original post. It was something else, I guess unrelated to mpl and more related to the lack of 3D acceleration in VirtualBox.</p>
|
linux|opengl|python-3.x|matplotlib|virtualbox
| 0 |
1,051 | 44,082,545 |
How to use findNumbers in Google PhoneNumberLib?
|
<p>I am using <a href="https://github.com/googlei18n/libphonenumber" rel="nofollow noreferrer">Google's Phone Number Library</a> to find phone numbers in a text file. The phone numbers can be in any format and from any country, so a regex alone is not solving the problem. I was coding against the <a href="https://github.com/daviddrysdale/python-phonenumbers" rel="nofollow noreferrer">3rd-party Python port</a> of it,
but I can't find a way to use the FindNumbers function. How do I use it in Java, or even better in Python?</p>
<p>Here is an Example: 440-991-6659(F)</p>
|
<p>In the Python port that you link to, there is a <code>PhoneNumberMatcher</code> class that provides the <code>FindNumbers</code> functionality. The code is <a href="https://github.com/daviddrysdale/python-phonenumbers/blob/dev/python/phonenumbers/phonenumbermatcher.py#L456" rel="nofollow noreferrer">here</a>.</p>
<p>From the project's README:</p>
<blockquote>
<p>Sometimes, you've got a larger block of text that may or may not have
some phone numbers inside it. For this, the PhoneNumberMatcher object
provides the relevant functionality; you can iterate over it to
retrieve a sequence of PhoneNumberMatch objects. Each of these match
objects holds a PhoneNumber object together with information about
where the match occurred in the original string.</p>
<pre><code>>>> text = "Call me at 510-748-8230 if it's before 9:30, or on 703-4800500 after 10am."
>>> for match in phonenumbers.PhoneNumberMatcher(text, "US"):
... print match
...
PhoneNumberMatch [11,23) 510-748-8230
PhoneNumberMatch [51,62) 703-4800500
>>> for match in phonenumbers.PhoneNumberMatcher(text, "US"):
... print phonenumbers.format_number(match.number, phonenumbers.PhoneNumberFormat.E164)
...
+15107488230
+17034800500
</code></pre>
</blockquote>
|
java|python|libphonenumber|phonenumberutils
| 0 |
1,052 | 44,344,222 |
I cant understand this code in Python, can you help me?
|
<p>I had a code assignment but I couldn't find the answer, so I checked it on the net. The code is written in Python. The code is absolutely right, but I cannot understand it. I am pretty new to Python, so please help me.</p>
<p>Here is the question</p>
<p>Assume s is a string of lower case characters.</p>
<p>Write a program that prints the longest substring of s in which the letters occur in alphabetical order. For example, if s = 'azcbobobegghakl', then your program should print</p>
<p>Longest substring in alphabetical order is: beggh
In the case of ties, print the first substring. For example, if s = 'abcbcd', then your program should print</p>
<p>Longest substring in alphabetical order is: abc</p>
<p>The code is:</p>
<pre><code> # initialise tracker variables
maxLen=0
current=s[0]
longest=s[0]
# step through s indices
for i in range(len(s) - 1):
if s[i + 1] >= s[i]:
current += s[i + 1]
# if current length is bigger update
if len(current) > maxLen:
maxLen = len(current)
longest = current
else:
current=s[i + 1]
i += 1
print ('Longest substring in alphabetical order is: ' + longest)
</code></pre>
|
<pre><code>s="abdhbdwba"
maxLen=0 # sets the current highest length to 0
current=s[0] # sets the current letter to the first letter (this is the output string)
longest=s[0] # sets the longest letter to the first letter(just for programming sake)
# step through s indices
for i in range(len(s) - 1): # goes over every letter in the string s except the last letter
if s[i + 1] >= s[i]: # checks if the next letter in the string is greater than (in ascii code) the current letter
current += s[i + 1] # if it is, adds the next letter to the current value
if len(current) > maxLen: # if we've got to a sequence that is larger, just set the max length to the length of the sequance
maxLen = len(current) # just lets the max length to the current length
longest = current # just sets the longest to the current value
else:
current=s[i + 1] # just sets the current as is
    i += 1 # has no effect here: the for loop rebinds i on each iteration anyway
print ('Longest substring in alphabetical order is: ' + longest) # just prints it out
</code></pre>
<p>Lets just go over some basics:</p>
<pre><code>for i in range(x):
print(i)
</code></pre>
<p>Will print 0, 1, 2, ..., x - 1 (it starts at 0 and stops just before x)</p>
<pre><code>x = y[i + 1]
</code></pre>
<p>x will now equal the element at index i + 1 in the array</p>
<pre><code>len(x)
</code></pre>
<p>Will output the length of the string stored in x</p>
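<p>For reference, here is a more compact version of the same algorithm (same logic, fewer moving parts):</p>
<pre><code>def longest_alpha_substring(s):
    longest = current = s[0]
    for prev, ch in zip(s, s[1:]):
        # extend the run if still in order, otherwise start a new one
        current = current + ch if ch >= prev else ch
        if len(current) > len(longest):
            longest = current
    return longest

print(longest_alpha_substring('azcbobobegghakl'))  # beggh
</code></pre>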
|
python|iteration
| 2 |
1,053 | 32,896,019 |
Cursor when returning dictionary and print where are the keys
|
<p>I am trying to understand the data structures returned by cursor</p>
<p>I have the following code:</p>
<pre><code>con = psycopg2.connect("dbname='testdb2' user='kevin'")
cursor = con.cursor(cursor_factory=psycopg2.extras.DictCursor)
cursor.execute("SELECT * FROM Cars")
rows = cursor.fetchall()
for row in rows:
print row["id"], row["name"], row["price"]
</code></pre>
<p>Which outputs:</p>
<p>1 Audi 52642</p>
<p>2 Mercedes 57127</p>
<p>3 Skoda 9000</p>
<p>etc....</p>
<p>if I say</p>
<pre><code>for row in rows:
print rows
</code></pre>
<p>it outputs</p>
<pre><code>[[1, 'Audi', 52642], [2, 'Mercedes', 57127], [3, 'Skoda', 9000], [4, 'Volvo', 29000], [5, 'Bentley', 350000], [6, 'Citroen', 21000], [7, 'Hummer', 41400], [8, 'Volkswagen', 21600]]
</code></pre>
<p>Where are the keys ? I was expecting an out put like this </p>
<pre><code>[['Id': '1' , 'name':'Audi', 'price:'52642'], ['Id': '2' , 'name':'Mercedes', 'price:'57127'] ....etc
</code></pre>
<p>I am not sure if it from my lack of understanding python that I did expect that output.</p>
|
<p>Each row is a <a href="http://initd.org/psycopg/docs/extras.html#psycopg2.extras.DictRow" rel="nofollow"><code>DictRow</code></a> which inherits from <code>list</code>:</p>
<p><a href="https://github.com/psycopg/psycopg2/blob/master/lib/extras.py" rel="nofollow">https://github.com/psycopg/psycopg2/blob/master/lib/extras.py</a></p>
<pre><code>class DictRow(list):
"""A row object that allow by-column-name access to data."""
__slots__ = ('_index',)
def __init__(self, cursor):
self._index = cursor.index
self[:] = [None] * len(cursor.description)
def __getitem__(self, x):
if not isinstance(x, (int, slice)):
x = self._index[x]
return list.__getitem__(self, x)
def __setitem__(self, x, v):
if not isinstance(x, (int, slice)):
x = self._index[x]
list.__setitem__(self, x, v)
def items(self):
return list(self.iteritems())
def keys(self):
return self._index.keys()
def values(self):
return tuple(self[:])
def has_key(self, x):
return x in self._index
def get(self, x, default=None):
try:
return self[x]
except:
return default
def iteritems(self):
for n, v in self._index.iteritems():
yield n, list.__getitem__(self, v)
def iterkeys(self):
return self._index.iterkeys()
def itervalues(self):
return list.__iter__(self)
def copy(self):
return dict(self.iteritems())
def __contains__(self, x):
return x in self._index
def __getstate__(self):
return self[:], self._index.copy()
def __setstate__(self, data):
self[:] = data[0]
self._index = data[1]
if _sys.version_info[0] > 2:
items = iteritems; del iteritems
keys = iterkeys; del iterkeys
values = itervalues; del itervalues
del has_key
</code></pre>
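<p>So the keys are there; they just are not shown because the repr is the plain <code>list</code> repr inherited from the base class. You can get at them explicitly (a sketch, using the column names from the question):</p>
<pre><code>rows = cursor.fetchall()
first = rows[0]

print(first.keys())   # ['id', 'name', 'price']
print(first.copy())   # {'id': 1, 'name': 'Audi', 'price': 52642}
print(first['name'])  # 'Audi'
</code></pre>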
|
python|postgresql|cursor|psycopg2
| 1 |
1,054 | 54,366,507 |
Check if the string contains the substring returns true when its actually false
|
<p>Is it a problem with my editor or what stupid mistake am I making ? Here is the screen-shot</p>
<p><a href="https://i.stack.imgur.com/P9nkk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P9nkk.png" alt="enter image description here"></a></p>
<p>This code returns true and it actually should</p>
<pre><code>a = "https://www.reddit.com/comments/ado0ym/use_reddit_coins_to_award_gold_to_your_favorite/"
b = "use_reddit_coins_to_award_gold_to_your_favorite"
if b in a:
print("true")
# Results return true
</code></pre>
<p>But this must return False but returns True </p>
<pre><code>a = "https: // www.reddit.com/comments/ado0ym/"
b = "use_reddit_coins_to_award_gold_to_your_favorite"
if b in a:
print("true")
# Results return true
</code></pre>
|
<p>This works fine: the first check returns True and the second returns False.</p>
<p>If you run your code, it should correctly print <code>true</code> for the first case and then print nothing after that:</p>
<pre><code>true
</code></pre>
<p>if both were True, you would see</p>
<pre><code>true
true
</code></pre>
<p>See below:</p>
<pre><code>a = "https://www.reddit.com/comments/ado0ym/use_reddit_coins_to_award_gold_to_your_favorite/"
b = "use_reddit_coins_to_award_gold_to_your_favorite"
print (b in a)
a = "https: // www.reddit.com/comments/ado0ym/"
b = "use_reddit_coins_to_award_gold_to_your_favorite"
print (b in a)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>True
False
</code></pre>
|
python|python-3.x
| 3 |
1,055 | 34,787,590 |
Error running python manage.py
|
<p>I'm using <code>flask</code> with Ubuntu, and when I run <code>python manage.py</code> I get this Traceback:</p>
<pre><code>Traceback (most recent call last):
File "manage.py", line 8, in <module>
app.run(debug=True,processes=True)
File "/proj/local/lib/python2.7/site-packages/flask/app.py", line 772, in run
run_simple(host, port, self, **options)
File "/proj/local/lib/python2.7/site-packages/werkzeug/serving.py", line 671, in run_simple
s.bind((hostname, port))
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 98] Address already in use
</code></pre>
|
<p>This means this port on the address you're trying to use (presumably <code>localhost</code>) is <em>already being used by another process</em>. </p>
<p>What to do to fix this:</p>
<ul>
<li>kill Python and restart your script</li>
<li>or find a process that's using your port and kill it</li>
<li>use another port for your app (see the sketch after this list)</li>
<li>wait for a few minutes, perhaps this port hasn't been 'freed' yet</li>
</ul>
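<p>The third option is a one-liner; for example (5001 here is an arbitrary free port):</p>
<pre><code>app.run(debug=True, port=5001)
</code></pre>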
|
python|ubuntu|flask
| 6 |
1,056 | 27,160,796 |
Using page text to select `html` element using`Beautiful Soup`
|
<p>I have a page which contains several repetitions of: <code><div...><h4>...<p>...</code> For example:</p>
<pre><code>html = '''
<div class="proletariat">
<h4>sickle</h4>
<p>Ignore this text</p>
</div>
<div class="proletariat">
<h4>hammer</h4>
<p>This is the text we want</p>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html)
</code></pre>
<p>If I write <code>print soup.select('div[class^="proletariat"] > h4 ~ p')</code>, I get:</p>
<pre><code>[<p>Ignore this text</p>, <p>This is the text we want</p>]
</code></pre>
<p>How do I specify that I only want the text of p when it is preceded by <code><h4>hammer</h4></code>? </p>
<p>Thanks</p>
|
<p><code>:contains()</code> could help here, but it is not supported by BeautifulSoup's CSS selector implementation.</p>
<p>Taking this into account, you can use <code>select()</code> in conjunction with the <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-next-siblings-and-find-next-sibling" rel="nofollow"><code>find_next_sibling()</code></a>:</p>
<pre><code>print next(h4.find_next_sibling('p').text
for h4 in soup.select('div[class^="proletariat"] > h4')
if h4.text == "hammer")
</code></pre>
|
python|html|css-selectors|beautifulsoup
| 1 |
1,057 | 27,155,129 |
Split pandas Series rows containing multiline strings into separate rows
|
<p>I have a pandas Series that is filled with strings like this:</p>
<pre><code>In:
s = pd.Series(['This is a single line.', 'This is another one.', 'This is a string\nwith more than one line.'])
Out:
0 This is a single line.
1 This is another one.
2 This is a string\nwith more than one line.
dtype: object
</code></pre>
<p>How can I split all rows in this Series that contain the linebreak character <code>\n</code> into rows of their own? What I would expect is:</p>
<pre><code>0 This is a single line.
1 This is another one.
2 This is a string
3 with more than one line.
dtype: object
</code></pre>
<p>I know that I can split each row by the linebreak character with</p>
<pre><code>s = s.str.split('\n')
</code></pre>
<p>which gives </p>
<pre><code>0 [This is a single line.]
1 [This is another one.]
2 [This is a string, with more than one line.]
</code></pre>
<p>but this only breaks the string within the row, not into rows of their own for each token.</p>
|
<p>You could loop over each string in each row to create a new series:</p>
<pre><code>pd.Series([j for i in s.str.split('\n') for j in i])
</code></pre>
<p>It might make more sense to do this on the input rather than creating a temporary series, e.g.:</p>
<pre><code>strings = ['This is a single line.', 'This is another one.', 'This is a string\nwith more than one line.']
pd.Series([j for i in strings for j in i.split('\n')])
</code></pre>
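<p>In newer pandas versions (0.25+), <code>Series.explode</code> does the flattening step for you:</p>
<pre><code># pandas >= 0.25
s.str.split('\n').explode().reset_index(drop=True)
</code></pre>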
|
python|pandas|split|series
| 4 |
1,058 | 23,397,583 |
Writing camera matrix into xml/yaml file
|
<p>I am using OpenCV and Python.
I have calibrated my camera, obtaining the following parameters:</p>
<pre><code>camera_matrix=[[ 532.80990646 ,0.0,342.49522219],[0.0,532.93344713,233.88792491],[0.0,0.0,1.0]]
dist_coeff = [-2.81325798e-01,2.91150014e-02,1.21234399e-03,-1.40823665e-04,1.54861424e-01]
</code></pre>
<p>I am working in Python. I wrote the following code to save the above into a file, but the result was just a plain text file.</p>
<pre><code>f = open("../calibration_camera.xml","w")
f.write('Camera Matrix:\n'+str(camera_matrix))
f.write('\n')
f.write('Distortion Coefficients:\n'+str(dist_coefs))
f.write('\n')
f.close()
</code></pre>
<p>How can I save this data into an XML/YAML file using Python, thus getting the desired output? Please help. Thanks in advance.</p>
|
<h1>Using JSON</h1>
<p>JSON seems to be the easiest format for serialization in your case</p>
<pre><code>camera_matrix=[[ 532.80990646 ,0.0,342.49522219],[0.0,532.93344713,233.88792491],[0.0,0.0,1.0]]
dist_coeff = [-2.81325798e-01,2.91150014e-02,1.21234399e-03,-1.40823665e-04,1.54861424e-01]
data = {"camera_matrix": camera_matrix, "dist_coeff": dist_coeff}
fname = "data.json"
import json
with open(fname, "w") as f:
json.dump(data, f)
</code></pre>
<p>data.json:</p>
<pre><code>{"dist_coeff": [-0.281325798, 0.0291150014, 0.00121234399, -0.000140823665, 0.154861424], "camera_matrix": [[532.80990646, 0.0, 342.49522219], [0.0, 532.93344713, 233.88792491], [0.0, 0.0, 1.0]]}
</code></pre>
<h1>Using YAML</h1>
<p>YAML is best option, if you expect human editing of the content</p>
<p>In contrast to <code>json</code> module, <code>yaml</code> is not part of Python and must be installed first:</p>
<pre><code>$ pip install pyyaml
</code></pre>
<p>Here goes the code to save the data:</p>
<pre><code>fname = "data.yaml"
import yaml
with open(fname, "w") as f:
yaml.dump(data, f)
</code></pre>
<p>data.yaml:</p>
<pre><code>camera_matrix:
- [532.80990646, 0.0, 342.49522219]
- [0.0, 532.93344713, 233.88792491]
- [0.0, 0.0, 1.0]
dist_coeff: [-0.281325798, 0.0291150014, 0.00121234399, -0.000140823665, 0.154861424]
</code></pre>
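<p>Reading the data back is symmetric (a sketch; <code>safe_load</code> is preferred over <code>load</code> for untrusted input):</p>
<pre><code>import yaml

with open("data.yaml") as f:
    data = yaml.safe_load(f)

print(data["camera_matrix"])
</code></pre>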
<h1>Using XML</h1>
<p>My example is using my favourite <code>lxml</code> package, other XML packages are also available.</p>
<pre><code>from lxml import etree
from lxml.builder import E
camera_matrix=[[ 532.80990646 ,0.0,342.49522219],[0.0,532.93344713,233.88792491],[0.0,0.0,1.0]]
dist_coeff = [-2.81325798e-01,2.91150014e-02,1.21234399e-03,-1.40823665e-04,1.54861424e-01]
def triada(itm):
a, b, c = itm
return E.Triada(a = str(a), b = str(b), c = str(c))
camera_matrix_xml = E.CameraMatrix(*map(triada, camera_matrix))
dist_coeff_xml = E.DistCoef(*map(E.Coef, map(str, dist_coeff)))
xmldoc = E.CameraData(camera_matrix_xml, dist_coeff_xml)
fname = "data.xml"
with open(fname, "w") as f:
f.write(etree.tostring(xmldoc, pretty_print=True))
</code></pre>
<p>data.xml:</p>
<pre><code><CameraData>
<CameraMatrix>
<Triada a="532.80990646" c="342.49522219" b="0.0"/>
<Triada a="0.0" c="233.88792491" b="532.93344713"/>
<Triada a="0.0" c="1.0" b="0.0"/>
</CameraMatrix>
<DistCoef>
<Coef>-0.281325798</Coef>
<Coef>0.0291150014</Coef>
<Coef>0.00121234399</Coef>
<Coef>-0.000140823665</Coef>
<Coef>0.154861424</Coef>
</DistCoef>
</CameraData>
</code></pre>
<p>You may want to play with the code a bit to format the number strings with the proper precision; I leave that to you.</p>
|
python|opencv
| 13 |
1,059 | 71,033,943 |
pandas: comparing non-identical list of panda dataframes based on values from a certain column
|
<p>I have two lists of pandas dataframes, as follows:</p>
<pre><code>import pandas as pd
import numpy as np
list_one = [pd.DataFrame({'sent_a.1': [0, 3, 2, 1], 'sent_a.2': [0, 1, 4, 0], 'sent_b.3': [0, 6, 0, 8],'sent_b.4': [1, 1, 8, 6],'ID':['id_1','id_1','id_1','id_1']}),
pd.DataFrame({'sent_a.1': [0, 3], 'sent_a.2': [0, 2], 'sent_b.3': [0, 6],'sent_b.4': [1, 1],'ID':['id_2','id_2']})]
list_two = [pd.DataFrame({'sent_a.1': [0, 5], 'sent_a.2': [0, 1], 'sent_b.3': [0, 6],'sent_b.4': [1, 1],'ID':['id_2','id_2']}),
pd.DataFrame({'sent_a.1': [0, 5, 3, 1], 'sent_a.2': [0, 2, 3, 1], 'sent_b.3': [0, 6, 6, 8],'sent_b.4': [1, 5, 8, 5],'ID':['id_1','id_1','id_1','id_1']})]
</code></pre>
<p>I would like to compare the dataframes in these two lists and if the values are the same, I would like to replace the value with 'True' and if the values are different, I would like to set them to 'False' and save the result in a different list of panda dataframes. I have done the following,</p>
<pre><code>for dfs in list_one:
for dfs2 in list_two:
g = np.where(dfs == dfs2, 'True', 'False')
print (g)
</code></pre>
<p>but I get the error,</p>
<pre><code>ValueError: Can only compare identically-labeled DataFrame objects
</code></pre>
<p>how can I sort values in these two lists, based on the values from column 'ID'?</p>
<p><strong>Edit</strong>
I would like the dataframes that have the same value for column 'ID' to be compared. meaning that dataframes that have 'ID' == 'id_1' are to be compared with one another and dataframes that have 'ID' == 'id_2' to be compared with each other (not a cross comparison)</p>
<p>so the desired output is:</p>
<pre><code>output = [ sent_a.1 sent_a.2 sent_b.3 sent_b.4 ID
0 True True True True id_1
1 False False True False id_1
2 False False False True id_1
3 False False True True id_1,
sent_a.1 sent_a.2 sent_b.3 sent_b.4 ID
0 True True True True id_2
1 True True False False id_2]
</code></pre>
|
<p>Based on your current example</p>
<p>For your first question:</p>
<blockquote>
<p>how can I sort values in these two lists, based on the values from column 'ID'?</p>
</blockquote>
<pre><code>list_one = sorted(list_one,key=lambda x: x['ID'].unique()[0][3:], reverse=False)
list_two =sorted(list_two,key=lambda x: x['ID'].unique()[0][3:], reverse=False)
</code></pre>
<p>ValueError: Can only compare identically-labeled DataFrame objects</p>
<ul>
<li>This error occurs when the dataframes have differently ordered indexes/columns or different shapes</li>
</ul>
<p>First way of comparison:</p>
<pre><code>for dfs in list_one:
for dfs2 in list_two:
if dfs.shape == dfs2.shape:
g = np.where(dfs == dfs2, 'True', 'False')
print (g)
</code></pre>
<p>Second way:</p>
<blockquote>
<p>I would like the dataframes that have the same value for column 'ID' to be compared</p>
</blockquote>
<pre><code>for dfs in list_one:
for dfs2 in list_two:
if (dfs['ID'].unique() == dfs2['ID'].unique()) and (dfs.shape == dfs2.shape):
g = np.where(dfs == dfs2, 'True', 'False')
print (g)
</code></pre>
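<p>One caveat: <code>np.where</code> also turns the (equal) ID strings into <code>'True'</code>. To reproduce the desired output exactly, with booleans for the value columns and the ID column kept as-is, a sketch (assuming a pandas version with <code>drop(columns=...)</code>, i.e. 0.21+):</p>
<pre><code>for dfs in list_one:
    for dfs2 in list_two:
        if (dfs['ID'].unique() == dfs2['ID'].unique()) and (dfs.shape == dfs2.shape):
            out = dfs.drop(columns='ID').eq(dfs2.drop(columns='ID').values)
            out['ID'] = dfs['ID'].values
            print(out)
</code></pre>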
|
python|pandas|list|dataframe|compare
| 2 |
1,060 | 11,624,050 |
Using Flask, trying to get AJAX to update a span after updating mongo record, but it's opening a new page
|
<p>Feel like I am stumbling over something fairly simple here.</p>
<p>I am not understanding something about AJAX and Flask.</p>
<p>I have a project wherein I display mongodb records in the browser, which has been working fine.</p>
<p>I added functionality for users to increment votes on a record; to Vote it up if they like it. But originally I was then refreshing the entire page with the new vote, using a redirect, which is clumsy. So I am trying to get AJAX to send the data over to the mongodb record and then update the span where I want the votes to appear without having to reload the entire page.</p>
<p>Problem is, the setup I have going, while still updating the record, is now loading a new page with the HTML i want returned only to the span where the vote tally should be (that is, it's loading a new page with only the word "test" in it (the test value I am currently returning)).</p>
<p>The jQuery (the library I am using) is loading fine and there are no other problems (as far as I can tell).</p>
<p>I have the relevant HTML and JS here:</p>
<pre><code><!-- All Standard HTML up here, removed for simplicity -->
<script>
$('#vote_link').bind('click', function(e){
e.preventDefault();
var url = $(this).attr('href');
$('#vote_tally').load(url);
});
</script>
<a href='/vote_up/{{ item._id }}' id='vote_link'>Vote for Me!</a><br>
Likes: <span id='vote_tally'>{{ item.votes }}</span>
<!-- All Standard HTML down here, removed for simplicity -->
</code></pre>
<p>and the python is here: </p>
<pre><code>from flask import Flask, render_template, request, redirect, flash, jsonify
#from mongokit import Connection, Document
#from flask.ext.pymongo import PyMongo
from pymongo import Connection#, json_util
#from pymongo.objectid import ObjectId #this is deprecated
import bson.objectid
'''my pymongo connection - removed for simplicity'''
'''bunch of other routes - also removed for same reason'''
#increment a vote
@app.route('/vote_up/<this_record>')
def vote_up(this_record):
vandalisms.update({'_id':bson.objectid.ObjectId(this_record)},
{"$inc" : { "votes": 1 }}, upsert=True)
'''
also trying to return value for votes field from mongo record, but one step at a
time here
'''
#result = vandalisms.find({'_id':bson.objectid.ObjectId(this_record)}, {'votes':1})
result = 'test'
return result
</code></pre>
<p>I am also having trouble figuring out how to return the individual vote value for the specified mongodb record back to the browser, even with jsonify (which returns {"votes":'_id'}, but that's another issue. Hopefully someone can help me understand how to make AJAX work for me with Flask in this regard.</p>
<p>Thanks in advance,</p>
<p><strong>Edit-24Jul2012-2:27PM CST:</strong></p>
<p>I suspect that the jQuery isn't even activating. It seems to be loading the new page based on the link's href attribute, so it's no use having <code>e.preventDefault();</code> when that's not being run. Furthermore, an <code>alert('I have been clicked');</code> never runs when the click event takes place. Again, the jQuery is loaded, but the click event is not activating the jQuery, and I don't know why not.</p>
|
<p>My guess (based on your edit) is that you have more than one element on the page with the ID of <code>vote_link</code> - this is not allowed in HTML (the ID property must be unique across the document). If you want to have multiple links sharing the same behavior use a class instead (<code>$(".vote_link")</code> for example).</p>
|
python|mongodb|jquery|flask
| 3 |
1,061 | 46,985,763 |
Not using all python sys.argv
|
<p>New to python, but my question is about sys.argv.</p>
<p>I have program that I want to execute different sets of code depending on how many arguments are passed to it. </p>
<p>python test.py hello awesome world</p>
<p>would run a different set of code from</p>
<p>python test.py hello world</p>
<p>If I reference 3 sys.argv entries, then it expects 3 arguments every time; otherwise I get: IndexError: list index out of range</p>
|
<p>Wrap it in if statements:</p>
<pre><code>if len(sys.argv) == 1:
    pass  # no extra arguments (sys.argv[0] is always the script name)
elif len(sys.argv) == 2:
    pass  # one argument was passed
elif len(sys.argv) == 3:
    pass  # two arguments were passed
else:
    pass  # three or more arguments were passed
</code></pre>
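<p>If the argument handling grows, <code>argparse</code> can collect a variable number of arguments for you (a sketch; <code>nargs='*'</code> accepts any count):</p>
<pre><code>import argparse

parser = argparse.ArgumentParser()
parser.add_argument('words', nargs='*')
args = parser.parse_args()

if len(args.words) == 2:
    print('two-word mode:', args.words)
elif len(args.words) == 3:
    print('three-word mode:', args.words)
else:
    print('got', len(args.words), 'arguments')
</code></pre>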
|
python-3.x
| 1 |
1,062 | 37,912,611 |
Django -- Process Multiple Form Fields
|
<p>I am very new to Python / Django and would appreciate any and all help I can get here! </p>
<p>I am trying to take in multiple form fields and haven't been able to find a great clean way to do so. My code is trying to take in a foreign Key radio selection (the team), and a number (the bet size), for each instance. </p>
<p>I ended up creating the code to iterate over the request.POST.items to determine which game each team belongs to, which is working fine, however I am having trouble taking in the input for the Bet size as the "value" field is already being assigned to each game. </p>
<p>I have debated using a model form instead of the methodology I have chosen, but cannot find a great way to take in the foreign key data. </p>
<p>How would you suggest altering the code to take in the bet size field? Is there an alternative way to process using Model Forms that you would suggest?</p>
<p>Please find my code below!</p>
<p>Thanks in advance</p>
<p>Views.py:</p>
<pre><code>def pick_game(request):
# Check what kind of request this is? GET/POST?
if request.method == 'GET':
game_list = Game.objects.order_by('-picks')
# form = PickGameForm()
page_variables = {
"game_list": game_list, 'form':form
}
return render(request, 'social/pickGame.html', page_variables)
else:
for key, value in request.POST.items():
print(key, value)
if "choice" in key:
game_id = int(key.split("_")[1])
team_id = int(value)
game = Game.objects.get(pk=game_id)
team = Team.objects.get(pk=team_id)
PlayerPick.objects.create(
player_profile=request.user.playerprofile,
game=game,
team=team,
bet_size=bet_size
)
else:
bet_size = request.POST.get('Bet')
</code></pre>
<p>pickGame.html:</p>
<pre><code><h1>Games</h1>
<form method="post">
{% csrf_token %}
{% for game in game_list %}
<h2>Game {{ game.number }}</h2>
<p><input type="radio" name="{{ game.pk }}" value="{{ game.team1.pk }}"> {{ game.team1.name }}</p>
<p><input type="radio" name="{{ game.pk }}" value="{{ game.team2.pk }}"> {{ game.team2.name }}</p>
<p><input type="number" name="Bet"> How much Money? </p>
{% endfor %}
<hr>
<button type="submit">Submit</button>
</form>
</code></pre>
<p>PlayerPick model:</p>
<pre><code>class PlayerPick(models.Model):
player_profile = models.ForeignKey('PlayerProfile')
team = models.ForeignKey('Team')
game = models.ForeignKey('Game')
bet_size = models.IntegerField(default=0, blank=True)
correct = models.BooleanField(default=False, blank=True)
pick_time = models.DateTimeField(auto_now_add=True)
</code></pre>
|
<p>I think your manual approach is quite OK, and all you have to do is find a way to uniquely identify the Bet field for each game. You could to this in your html:</p>
<pre><code><input type="number" name="{{game.pk}}-Bet">
</code></pre>
<p>And then get the value in your view just before creating your PlayerPick object:</p>
<pre><code>bet_size = request.POST.get('%s-Bet' % game.pk)
</code></pre>
<p>If you want to use Django Forms, you can also imitate this behaviour by using the <code>prefix</code> parameter when creating your form. You have to define this both for the unbound and bound forms, so that the prefixed field names can be recognized:</p>
<pre><code># When creating the forms and passing them to the template
PlayerPickForm(prefix=str(game.pk)+'-')
# When verifying posted data
PlayerPickForm(request.POST, prefix=str(game.pk)+'-')
</code></pre>
|
python|django|forms
| 1 |
1,063 | 67,915,559 |
Last occurence of comma in python dataframe
|
<p>Please help me replace the last occurrence of a comma with & in each row.</p>
<p>DF['MSG'] =</p>
<pre><code>0    20.00, 20.00
1    4.00, 3.00, 2.00
2    100.00
3    10.00, 70.00, 10.00
4    10.00, 10.00, 10.00, 10.00, 10.00
5    99.00
6    50.00, 50.00
7    70.00
8    10.00, 20.00, 65.00
</code></pre>
<p>output is:</p>
<pre><code>0    20.00, 20.00
1    4.00, 3.00& 2.00
2    100.00
3    10.00, 70.00& 10.00
4    10.00, 10.00, 10.00, 10.00& 10.00
5    99.00
6    50.00, 50.00
7    70.00
8    10.00, 20.00& 65.00
</code></pre>
<p>Rows should only be changed when they contain more than two values (i.e. at least two commas), as in the expected output above.
Please help me.</p>
|
<p>Assuming it's a clean list of numbers, you can change it to a string like this:</p>
<pre><code>list_of_numbers = [1, 2, 3, 4]
print(', '.join([str(i) for i in list_of_numbers[:-1]]) + f" & {list_of_numbers[-1]}")
</code></pre>
<p>gives</p>
<p>1, 2, 3 & 4</p>
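<p>For the DataFrame in the question, a sketch that replaces only the last comma, and only in rows holding more than two values (assuming <code>DF['MSG']</code> contains plain strings):</p>
<pre><code>mask = DF['MSG'].str.count(',') >= 2                 # rows with 3+ values
DF.loc[mask, 'MSG'] = DF.loc[mask, 'MSG'].str.replace(
    r',(?=[^,]*$)', '&', regex=True)                 # last comma -> &
# '4.00, 3.00, 2.00' becomes '4.00, 3.00& 2.00'
</code></pre>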
|
python
| 0 |
1,064 | 72,422,859 |
Remove lines containing numbers attached to letters with Python
|
<p>I have a <em>txt</em> file containing one sentence per line, and there are lines containing numbers attached to letters. For instance:</p>
<pre><code>The boy3 was strolling on the beach while four seagulls appeared flying.
There were 3 women sunbathing as well.
All children were playing happily.
</code></pre>
<p>I would like to remove lines like the first one (<em>i.e.</em> those having numbers stuck to words) but not lines like the second, which are properly written.</p>
<p>Has anybody got a slight idea?</p>
|
<p>You can use a simple regex pattern. We start with <code>[0-9]+</code>. This pattern matches one or more digits 0-9, so 6, 56, or 56790 all work. If you want to detect lines that have digits attached to a word, you could use something like this: <code>([a-zA-Z][0-9]+)|([0-9]+[a-zA-Z])</code>. This matches a letter immediately before or after a number. You can search strings like this:</p>
<pre class="lang-py prettyprint-override"><code>import re
lines = [
'The boy3 was strolling on the beach while 4 seagulls appeared flying.',
'There were 3 women sunbathing as well.',
]
for line in lines:
res = re.search("([a-zA-Z][0-9]+)|([0-9]+[a-zA-Z])", line)
if res is None:
# remove line
</code></pre>
<p>However you can add more characters to the allowed letters if your sentences can include special characters and such.</p>
|
python|data-preprocessing
| 1 |
1,065 | 48,614,891 |
Pandas - select top N < L most frequent categories for multiple columns and join resulting vectors
|
<p>In Pandas I have separated my data by type and I need to summarize the frequency of the categorical data. I need to get all levels up to 50 levels. </p>
<p>Right now I have something like this (example data follows):</p>
<pre><code># Libraries
import numpy as np
import pandas as pd
# Categorical variables
df = pd.DataFrame(np.random.randint(low = 0,
high = 1000000,
size = (1000, 2)),
columns=['CASE_NUMBER', 'CLIENT_ID'])
df['CASE_NUMBER'] = df['CASE_NUMBER'].apply(str)
df['CLIENT_ID'] = df['CLIENT_ID'].apply(str)
df['PRODUCTCATEGORY'] = np.random.randint(low=0, high=2, size=(1000, 1))
df['PRODUCTTYPE'] = np.random.randint(low=0, high=2, size=(1000, 1))
df['PRODUCTTYPE'] = np.random.randint(low=0, high=2, size=(1000, 1))
df['PRODUCT_CATEGORY_DESC'] = np.random.randint(low=0, high=2, size=(1000, 1))
df['PRODUCT_DESC'] = np.random.randint(low=0, high=2, size=(1000, 1))
df.loc[df['PRODUCTCATEGORY'] == 0 , 'PRODUCTCATEGORY'] = "AC2"
df.loc[df['PRODUCTCATEGORY'] == 1 , 'PRODUCTCATEGORY'] = "AC1"
df.loc[df['PRODUCTTYPE'] == 0 , 'PRODUCTTYPE'] = "AT2"
df.loc[df['PRODUCTTYPE'] == 1 , 'PRODUCTTYPE'] = "AT1"
df.loc[df['PRODUCT_CATEGORY_DESC'] == 0 , 'PRODUCT_CATEGORY_DESC'] = "Revocable"
df.loc[df['PRODUCT_CATEGORY_DESC'] == 1 , 'PRODUCT_CATEGORY_DESC'] = "Irrevocable"
df.loc[df['PRODUCT_DESC'] == 0 , 'PRODUCT_DESC'] = "Immediate"
df.loc[df['PRODUCT_DESC'] == 1 , 'PRODUCT_DESC'] = ""
</code></pre>
<p>I made some very ugly attempts that started something like what's below, but aside from being verbose it is slow and also adds unnecessary rows if the max number of levels in all columns is < 50:</p>
<pre><code>e = df.describe()
table2 = pd.DataFrame({
'Variable Name': e.columns,
})
for n in e.columns:
for i in range(50):
grouped = df.groupby([n]).size().reset_index()
grouped = grouped.sort_values(0, ascending=False)
table2 = pd.concat([table2, grouped], ignore_index=True, axis=1)
</code></pre>
<p>Here is an example of what I'm ultimately going for (note: the counts are made up numbers that do not really correspond to the above data). You do not have to handle <code>Variable Name</code> and <code>Percent</code> (but bonus points for you if you do!):</p>
<p><a href="https://i.stack.imgur.com/puLdV.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/puLdV.jpg" alt="enter image description here"></a></p>
|
<p>The key to the solution was in a comment from @JonClements:</p>
<pre><code>table2 = df.melt().groupby(['variable', 'value']).size()
</code></pre>
<p>From there I just added some logic to truncate and transform the results:</p>
<pre><code>table2 = table2.to_frame(name='Count')
table2 = table2.reset_index(inplace=False)
table2['Percent'] = table2['Count'] / len(df.index)
for v in table2['variable'].unique():
tmp = table2[table2.variable.str.contains(v) == True]
table2 = table2[table2.variable.str.contains(v) == False]
if tmp.shape[0] > 50:
tmp0 = tmp.iloc[:50,]
tmp1 = pd.DataFrame([{'variable':v,
'value': 'Other',
'Count':tmp.shape[0]-50,
'Percent':sum(tmp0['Percent'])
}])
tmp = tmp0.append(tmp1)
table2 = table2.append(tmp)
print(table2)
</code></pre>
|
python|pandas
| 0 |
1,066 | 20,252,039 |
django output empty csv
|
<p>I'm using django and I'm trying to export the CSV_data list into a CSV file. Below is my csv.py:</p>
<pre><code>#coding=utf-8
from django.http import HttpResponse
from django.template import loader, Context
from demo.views import CSV_data
def output(request, filename):
response = HttpResponse(mimetype='text/csv')
response['Content-Disposition'] = 'attachment; filename=%s.csv' % filename
t = loader.get_template('csv.txt')
c = Context({
'data': CSV_data,
})
response.write(t.render(c))
return response
</code></pre>
<p>CSV_data is a variable in views.py. I tried to print it in a template, and the value is OK. </p>
<pre><code> [(u'2012-06-01', [0, 0, 0]), ('2012-06-08', [0, 0, 0]), ('2012-06-15', [0, 0, 0]), ('2012-06-22', [0, 0, 0]), ('2012-06-29', [0, 0, 0]), ('2012-07-06', [0, 0, 0]), ('2012-07-13', [0, 0, 0]), ('2012-07-20', [0, 0, 0]), ('2012-07-27', [0, 0, 0]), ('2012-08-03', [131, 164, 79.88]), ('2012-08-10', [110, 198, 55.56]), ('2012-08-17', [112, 197, 56.85]), ('2012-08-24', [147, 283, 51.94]), ('2012-08-31', [0, 306, 0.0]), ('2012-09-07', [418, 418, 100.0]), ('2012-09-14', [342, 342, 100.0]), ('2012-09-21', [732, 732, 100.0]), ('2012-09-28', [689, 689, 100.0]), ('2012-10-05', [775, 775, 100.0]), ('2012-10-12', [469, 469, 100.0]), ('2012-10-19', [477, 477, 100.0]), ('2012-10-26', [897, 897, 100.0]), ('2012-11-02', [216, 216, 100.0]), ('2012-11-09', [1046, 1046, 100.0]), ('2012-11-16', [840, 840, 100.0]), ('2012-11-23', [948, 948, 100.0])]
</code></pre>
<p>However, the generated csv is always empty.</p>
<p>I tried to add the CSV_data definition to the csv.py file, like this:</p>
<pre><code>#coding=utf-8
from django.http import HttpResponse
from django.template import loader, Context
CSV_data = [(u'2012-06-01', [0, 0, 0]), ('2012-06-08', [0, 0, 0]), ('2012-06-15', [0, 0, 0]), ('2012-06-22', [0, 0, 0]), ('2012-06-29', [0, 0, 0]), ('2012-07-06', [0, 0, 0]), ('2012-07-13', [0, 0, 0]), ('2012-07-20', [0, 0, 0]), ('2012-07-27', [0, 0, 0]), ('2012-08-03', [131, 164, 79.88]), ('2012-08-10', [110, 198, 55.56]), ('2012-08-17', [112, 197, 56.85]), ('2012-08-24', [147, 283, 51.94]), ('2012-08-31', [0, 306, 0.0]), ('2012-09-07', [418, 418, 100.0]), ('2012-09-14', [342, 342, 100.0]), ('2012-09-21', [732, 732, 100.0]), ('2012-09-28', [689, 689, 100.0]), ('2012-10-05', [775, 775, 100.0]), ('2012-10-12', [469, 469, 100.0]), ('2012-10-19', [477, 477, 100.0]), ('2012-10-26', [897, 897, 100.0]), ('2012-11-02', [216, 216, 100.0]), ('2012-11-09', [1046, 1046, 100.0]), ('2012-11-16', [840, 840, 100.0]), ('2012-11-23', [948, 948, 100.0])]
def output(request, filename):
response = HttpResponse(mimetype='text/csv')
response['Content-Disposition'] = 'attachment; filename=%s.csv' % filename
t = loader.get_template('csv.txt')
c = Context({
'data': CSV_data,
})
response.write(t.render(c))
return response
</code></pre>
<p>Then the output csv is not empty. So I guess there's something wrong when importing CSV_data from views.py.</p>
<p>The problem is I've tested that CSV_data value in views is correct. So what could go wrong?</p>
<p><strong>UPDATE:</strong></p>
<p>The original code in views.py was like this:</p>
<pre><code> CSV_data = []
def part_usage_result(request):
...(details omit)
usageDictWeek = helper.getResultByWeek(modelName, spareCode, start, end) #returns a list
CSV_data=usageDictWeek
</code></pre>
<p>I changed it to:</p>
<pre><code> CSV_data = []
def part_usage_result(request):
...(details omit)
usageDictWeek = helper.getResultByWeek(modelName, spareCode, start, end) #returns a list
for each in usageDictWeek:
CSV_data.append(each)
</code></pre>
<p>Now the content of the csv is correct.
I still don't know why this happens.</p>
|
<p>As you didn't provide <code>helper.getResultByWeek</code> details and how it is called, I guess it returns a global variable with a list value, and this variable is modified somewhere in between.</p>
<pre><code> CSV_data = usageDictWeek
</code></pre>
<p>do not copy a list, but creates another reference to existing one. When later original <code>usageDictWeek</code> is modified, CSV_data is modified as well.</p>
<p>When you do instead </p>
<pre><code> CSV_data[:] = usageDictWeek
</code></pre>
<p>the elements of <code>usageDictWeek</code> are copied into the existing <code>CSV_data</code> list, so the module-level list actually receives the data. (Also note that <code>CSV_data = usageDictWeek</code> inside a function rebinds a local name unless you declare <code>global CSV_data</code>, whereas <code>CSV_data.append(...)</code> mutates the module-level list, which is why your loop version works.)</p>
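<p>A minimal sketch of the difference:</p>
<pre><code>usage = [1, 2, 3]
alias = usage        # plain assignment: both names point at the same list
copy = []
copy[:] = usage      # slice assignment: the elements are copied into 'copy'

usage.clear()
print(alias)         # [] -- the alias sees the mutation
print(copy)          # [1, 2, 3] -- the copy is unaffected
</code></pre>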
|
python|django|csv
| 1 |
1,067 | 19,967,926 |
Flask request empty after redirect
|
<p>Using Flask, I'm able to access request.form data in the function poll(), but after a redirect, request.form is empty. </p>
<p>I'm sure this is intentional and I have to explicitly pass this, but how?</p>
<pre><code>from flask import render_template, redirect, request
from app import app
from forms import PollForm
@app.route('/poll', methods = ['GET', 'POST'])
def poll():
form = PollForm()
if form.validate_on_submit():
print request.form # returns ImmutableMultiDict with data
return redirect('/details')
return render_template('poll.html', form=form)
@app.route('/details')
def details():
print request.form # returns empty ImmutableMultiDict
return render_template('details.html')
</code></pre>
|
<p>It's common to redirect from a POST, but you shouldn't need your form data anymore in the details function.</p>
<p>You should process the form submission in the poll function and then redirect to details, which I assume would display some updated data - e.g. from a database.</p>
<pre><code>@app.route('/poll', methods = ['GET', 'POST'])
def poll():
form = PollForm()
if form.validate_on_submit():
# use request.form to update your database
return redirect('/details')
return render_template('poll.html', form=form)
@app.route('/details')
def details():
# query the database to show the updated poll
return render_template('details.html')
</code></pre>
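<p>If <code>details</code> does need something from the submission, a common pattern is to save it and pass an identifier in the redirect rather than the raw form data. A sketch with hypothetical <code>save_poll</code>/<code>load_poll</code> helpers and a <code>poll_id</code> URL parameter:</p>
<pre><code>from flask import url_for

@app.route('/poll', methods = ['GET', 'POST'])
def poll():
    form = PollForm()
    if form.validate_on_submit():
        poll = save_poll(request.form)                    # hypothetical helper
        return redirect(url_for('details', poll_id=poll.id))
    return render_template('poll.html', form=form)

@app.route('/details/<int:poll_id>')
def details(poll_id):
    poll = load_poll(poll_id)                             # hypothetical helper
    return render_template('details.html', poll=poll)
</code></pre>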
|
python|request|flask
| 3 |
1,068 | 20,289,450 |
Python Scrapy not always downloading data from website
|
<p>Scrapy is used to parse an html page. My question is why sometimes scrapy returns the response I want, but sometimes does not return a response. Is it my fault? Here's my parsing function:</p>
<pre><code>class AmazonSpider(BaseSpider):
name = "amazon"
allowed_domains = ["amazon.org"]
start_urls = [
"http://www.amazon.com/s?rh=n%3A283155%2Cp_n_feature_browse-bin%3A2656020011"
]
def parse(self, response):
sel = Selector(response)
sites = sel.xpath('//div[contains(@class, "result")]')
items = []
titles = {'titles': sites[0].xpath('//a[@class="title"]/text()').extract()}
for title in titles['titles']:
item = AmazonScrapyItem()
item['title'] = title
items.append(item)
return items
</code></pre>
|
<p>I believe you are just not using the most adequate XPath expression. </p>
<p>Amazon's HTML is kinda messy, not very uniform and therefore not very easy to parse. But after some experimenting I could extract all the 12 titles of a couple of search results with the following <code>parse</code> function:</p>
<pre><code>def parse(self, response):
sel = Selector(response)
p = sel.xpath('//div[@class="data"]/h3/a')
titles = p.xpath('span/text()').extract() + p.xpath('text()').extract()
items = []
for title in titles:
item = AmazonScrapyItem()
item['title'] = title
items.append(item)
return items
</code></pre>
<p>If you care about the actual order of the results the above code might not be appropriate but I believe that is not the case.</p>
|
python|request|response|scrapy|sites
| 0 |
1,069 | 51,134,734 |
Swaping values of two lists based on given index
|
<p>I have a list which consists of two numpy arrays, the first one giving the index of a value and the second containing the corresponding value itself. It looks a little like this:</p>
<pre><code>x_glob = [[0, 2], [85, 30]]
</code></pre>
<p>A function is now receiving the following input:</p>
<pre><code>x = [-10, 0, 77, 54]
</code></pre>
<p>My goal is to swap the values of x with the values from x_glob based on the given index array from x_glob. This example should result in something like this:</p>
<pre><code>x_new = [85, 0, 30, 54]
</code></pre>
<p>I do have a solution using a loop. But I am pretty sure there is a way in python to solve this issue more efficiently and elegantly. </p>
<p>Thank you!</p>
|
<p><a href="https://docs.scipy.org/doc/numpy-1.13.0/user/basics.indexing.html#index-arrays" rel="nofollow noreferrer"><strong><code>NumPy</code></strong> arrays may be indexed with other arrays</a>, which makes this replacement trivial.</p>
<p>All you need to do is index your second array with <code>x_glob[0]</code>, and then assign <code>x_glob[1]</code></p>
<pre><code>x[x_glob[0]] = x_glob[1]
</code></pre>
<p>To see <em>how</em> this works, just look at the result of the indexing:</p>
<pre><code>>>> x[x_glob[0]]
array([-10, 77])
</code></pre>
<p>The result is an array containing the two values that we need to replace, which we then replace with another <code>numpy</code> array, <code>x_glob[1]</code>, to achieve the desired result.</p>
<hr>
<pre><code>>>> x_glob = np.array([[0, 2], [85, 30]])
>>> x = np.array([-10, 0, 77, 54])
>>> x[x_glob[0]] = x_glob[1]
>>> x
array([85, 0, 30, 54])
</code></pre>
|
python|arrays|list|numpy|indexing
| 3 |
1,070 | 73,598,430 |
How to make a customized grouped dataframe with multiple aggregations
|
<p>I have a standard dataframe like the one below :</p>
<pre><code> Id Type Speed Efficiency Durability
0 Id001 A OK OK nonOK
1 Id002 A nonOK OK nonOK
2 Id003 B nonOK nonOK nonOK
3 Id004 B nonOK nonOK OK
4 Id005 A nonOK nonOK OK
5 Id006 A OK OK OK
6 Id007 A OK nonOK OK
7 Id008 B nonOK nonOK OK
8 Id009 C OK OK OK
9 Id010 B OK OK nonOK
10 Id011 C OK nonOK OK
11 Id012 C OK nonOK OK
12 Id013 C nonOK OK OK
13 Id014 C nonOK nonOK OK
14 Id015 C nonOK nonOK OK
</code></pre>
<p>And I'm trying to get this kind of output :</p>
<pre><code> Type Test Speed Efficiency Durability
0 A OK 3 3 3
1 A nonOK 2 2 2
2 B OK 1 1 2
3 B nonOK 3 3 2
4 C OK 3 2 6
5 C nonOK 3 4 0
</code></pre>
<p>I tried with <code>df.groupby('Type').agg('count')</code> but it doesn't give the expected output.</p>
<p>Is it possible to make this kind of transformation with pandas, please ?</p>
|
<p>You can also use the following solution using <code>pandas</code> method chaining:</p>
<pre><code>import pandas as pd
(pd.melt(df, id_vars='Type', value_vars=['Speed', 'Efficiency', 'Durability'], value_name='Test')
.groupby(['Type', 'Test', 'variable'])
.size()
.reset_index()
.pivot(index=['Type', 'Test'], columns='variable', values=0)
.reset_index())
variable Type Test Durability Efficiency Speed
0 A OK 3.0 3.0 3.0
1 A nonOK 2.0 2.0 2.0
2 B OK 2.0 1.0 1.0
3 B nonOK 2.0 3.0 3.0
4 C OK 6.0 2.0 3.0
5 C nonOK NaN 4.0 3.0
</code></pre>
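<p>If you want the missing combination (C / nonOK / Durability) to show as 0 rather than NaN, as in the expected output, one option is to chain <code>.fillna(0)</code> (and optionally <code>.astype(int)</code> on the count columns) after the <code>.pivot(...)</code> step.</p>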
|
python|pandas
| 6 |
1,071 | 70,713,678 |
Machine learning with vectors in both features and target
|
<p>How can I train a model with vectors/arrays as features? I seem to be consistently getting errors when doing this...</p>
<p>My feature matrix would look something like this:</p>
<pre><code> A B C Profile
0 1 4 4 [1,2,3,4]
1 2 4 5 [2,2,4,1]
</code></pre>
<p>while my target vector would look something like this:</p>
<pre><code>0 [0,4,5,0]
1 [1,5,6,0]
</code></pre>
<p>etc. etc., but I'm having trouble with fit(x, y) when using LinearRegression from sklearn. Here is the output of print(x) and print(y):</p>
<p>x:</p>
<pre><code>Beams/Beam[0]/Parameters/Energy Beams/Beam[0]/Parameters/BunchPopulation Beams/Beam[0]/BunchShape/Parameters/LongitudinalSigmaLabFrame Simulation/NumberOfParticles initialXHist
0 25.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
1 25.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
2 25.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
3 25.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
4 25.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
... ... ... ... ... ...
995 26.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
996 26.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
997 26.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
998 26.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
999 26.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
1000 rows × 5 columns
</code></pre>
<p>y:</p>
<pre><code>0 [8, 4, 6, 13, 5, 5, 10, 11, 15, 9, 19, 18, 16,...
1 [6, 5, 8, 8, 9, 12, 6, 20, 9, 20, 18, 12, 24, ...
2 [6, 6, 7, 8, 13, 10, 12, 7, 14, 14, 18, 24, 16...
3 [2, 5, 10, 3, 6, 8, 13, 12, 7, 18, 12, 20, 22,...
4 [5, 3, 5, 9, 8, 8, 8, 9, 14, 13, 10, 15, 21, 1...
...
995 [2, 9, 4, 5, 10, 5, 10, 15, 16, 13, 12, 13, 21...
996 [2, 3, 5, 5, 11, 15, 18, 15, 14, 13, 16, 17, 1...
997 [4, 5, 6, 8, 5, 7, 7, 26, 13, 16, 17, 16, 17, ...
998 [1, 3, 5, 7, 5, 6, 16, 10, 17, 12, 12, 18, 24,...
999 [3, 4, 8, 9, 8, 4, 14, 17, 11, 16, 7, 20, 14, ...
Name: finalXHist, Length: 1000, dtype: object
</code></pre>
<p>Can anyone advise? The error I get is:</p>
<pre><code> ---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
TypeError: only size-1 arrays can be converted to Python scalars
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
/tmp/ipykernel_826/1502489859.py in <module>
3
4 # Train the model using the training sets
----> 5 regr.fit(X_train, y_train)
6
7 # Make predictions using the testing set
/cvmfs/sft.cern.ch/lcg/views/LCG_101swan/x86_64-centos7-gcc8-opt/lib/python3.9/site-packages/sklearn/linear_model/_base.py in fit(self, X, y, sample_weight)
516 accept_sparse = False if self.positive else ['csr', 'csc', 'coo']
517
--> 518 X, y = self._validate_data(X, y, accept_sparse=accept_sparse,
519 y_numeric=True, multi_output=True)
520
/cvmfs/sft.cern.ch/lcg/views/LCG_101swan/x86_64-centos7-gcc8-opt/lib/python3.9/site-packages/sklearn/base.py in _validate_data(self, X, y, reset, validate_separately, **check_params)
431 y = check_array(y, **check_y_params)
432 else:
--> 433 X, y = check_X_y(X, y, **check_params)
434 out = X, y
435
/cvmfs/sft.cern.ch/lcg/views/LCG_101swan/x86_64-centos7-gcc8-opt/lib/python3.9/site-packages/sklearn/utils/validation.py in inner_f(*args, **kwargs)
61 extra_args = len(args) - len(all_args)
62 if extra_args <= 0:
---> 63 return f(*args, **kwargs)
64
65 # extra_args > 0
/cvmfs/sft.cern.ch/lcg/views/LCG_101swan/x86_64-centos7-gcc8-opt/lib/python3.9/site-packages/sklearn/utils/validation.py in check_X_y(X, y, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, multi_output, ensure_min_samples, ensure_min_features, y_numeric, estimator)
869 raise ValueError("y cannot be None")
870
--> 871 X = check_array(X, accept_sparse=accept_sparse,
872 accept_large_sparse=accept_large_sparse,
873 dtype=dtype, order=order, copy=copy,
/cvmfs/sft.cern.ch/lcg/views/LCG_101swan/x86_64-centos7-gcc8-opt/lib/python3.9/site-packages/sklearn/utils/validation.py in inner_f(*args, **kwargs)
61 extra_args = len(args) - len(all_args)
62 if extra_args <= 0:
---> 63 return f(*args, **kwargs)
64
65 # extra_args > 0
/cvmfs/sft.cern.ch/lcg/views/LCG_101swan/x86_64-centos7-gcc8-opt/lib/python3.9/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, estimator)
671 array = array.astype(dtype, casting="unsafe", copy=False)
672 else:
--> 673 array = np.asarray(array, order=order, dtype=dtype)
674 except ComplexWarning as complex_warning:
675 raise ValueError("Complex data not supported\n"
ValueError: setting an array element with a sequence.
</code></pre>
<p>I've tried googling it but no luck so far, I guess there is something wrong with the way these two objects are set up.</p>
|
<p>The error is being raised for <code>X</code> (third-to-last part of the traceback): you cannot have an array-valued feature. You need to do some feature engineering to generate a flat table of data to train on; whether that's flattening the arrays into individual features, or extracting some statistic based on those arrays, or something else depends on what those arrays mean (and would be a better question for datascience.SE or stats.SE).</p>
<p>Having arrays for <code>y</code> may have a similar issue, but if treating them as individual outputs is what you're after, it becomes either a "multioutput" regression or a "multilabel" classification, which are handled by subsets of sklearn estimators.</p>
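<p>A minimal sketch of the flattening route for the feature side (names taken from the question's printout; assumes every <code>initialXHist</code> entry is a list of equal length):</p>
<pre><code>import numpy as np
import pandas as pd

# expand the array-valued column into one numeric feature per histogram bin
hist = pd.DataFrame(np.vstack(x['initialXHist'].to_numpy()), index=x.index)
hist.columns = [f'hist_{i}' for i in hist.columns]
x_flat = pd.concat([x.drop(columns='initialXHist'), hist], axis=1)

# stack the array-valued target into a 2-D array for multioutput regression
y_stacked = np.vstack(y.to_numpy())

regr.fit(x_flat, y_stacked)  # LinearRegression supports multioutput targets
</code></pre>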
|
python|dataframe|machine-learning|scikit-learn|linear-regression
| 1 |
1,072 | 69,676,952 |
How to create the list inside the dictionary using python
|
<p>I am trying to automate dataset creation in QuickSight using Boto3,
but I am stuck at one point. Please can anyone help me solve this?
Here is my code:</p>
<pre><code>qs = boto3.client('quicksight')
response = qs.describe_data_set(
AwsAccountId='xxxxxxxx',
DataSetId='testdatasetv4'
)
columns =response['DataSet']['PhysicalTableMap']['string']['RelationalTable']['InputColumns']
for dic in columns:
for key in dic:
print({dic[key]})
</code></pre>
<p>I need a output like this:</p>
<pre><code>response1 = Client.create_data_set(
AwsAccountId=data['AwsAccountId1'],
DataSetId=data['DatasetId'],
Name='testdataset',
PhysicalTableMap={
'string': {
'RelationalTable': {
'DataSourceArn':response['Arn'],
'Schema': 'public',
'Name': 'sales',
'InputColumns': [
{
'Name': 'salesid',
'Type': 'INTEGER'
},
{
'Name': 'listid',
'Type': 'INTEGER'
},
{
'Name': 'sellerid',
'Type': 'INTEGER'
},
{
'Name': 'buyerid',
'Type': 'INTEGER'
},
{
'Name': 'eventid',
'Type': 'INTEGER'
},
{
'Name': 'dateid',
'Type': 'INTEGER'
},
{
'Name': 'qtysold',
'Type': 'INTEGER'
},
{
'Name': 'pricepaid',
'Type': 'DECIMAL'
},
{
'Name': 'commission',
'Type': 'DECIMAL'
},
{
'Name': 'saletime',
'Type': 'DATETIME'
},
]
}
}
},
</code></pre>
<p>How can I add the above input columns through code? I am able to extract the input columns, but I have no idea how to add them. Please help me do this.</p>
|
<p>Here's an example of creating a dictionary and adding different nested elements. You'll need to adapt it to your solution.</p>
<pre><code>columns = ['key1', 'key2', 'key3']
vals = ['1', '2', '3']
mydict = {}
mydict['firstkey'] = 1
mydict['anotherkey'] = {}
mydict['anotherkey']['secondkey'] = 2
mydict['needalist'] = {}
mydict['needalist']['mylist'] = [{k:vals[i]} for i, k in enumerate(columns)]
mydict
{'firstkey': 1,
'anotherkey': {'secondkey': 2},
'needalist': {'mylist': [{'key1': '1'}, {'key2': '2'}, {'key3': '3'}]}}
</code></pre>
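<p>Applied to the QuickSight case, a sketch that rebuilds <code>InputColumns</code> from the <code>columns</code> list you already extracted (the other fields, <code>ImportMode</code> and the data-source ARN in particular, are assumptions you will need to adjust):</p>
<pre><code>input_columns = [{'Name': c['Name'], 'Type': c['Type']} for c in columns]

response1 = qs.create_data_set(
    AwsAccountId='xxxxxxxx',
    DataSetId='newdatasetid',          # hypothetical id
    Name='testdataset',
    PhysicalTableMap={
        'string': {
            'RelationalTable': {
                'DataSourceArn': 'arn:aws:quicksight:...',  # your data source ARN
                'Schema': 'public',
                'Name': 'sales',
                'InputColumns': input_columns,
            }
        }
    },
    ImportMode='SPICE',                # or 'DIRECT_QUERY'
)
</code></pre>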
|
python|python-3.x|boto3|amazon-quicksight
| 0 |
1,073 | 73,299,066 |
Regex for finding trigonometry function with variable
|
<p>I have the string:</p>
<pre><code>-15*sin(h)**2+121*sin(h)-216
</code></pre>
<p>I'm currently using</p>
<pre class="lang-py prettyprint-override"><code>input_text = re.findall(r"sin|cos|tan|\d|\w|\(|\)|\+|-|\*+", input_text.strip().lower())
</code></pre>
<p>to try to tokenize this string, but it returns the following:</p>
<pre><code>['-', '1', '5', '*', 'sin', '(', 'h', ')', '**', '2', '+', '1', '2', '1', '*', 'sin', '(', 'h', ')', '-', '2', '1', '6']
</code></pre>
<p>Could someone help me modify my regex statement so I get</p>
<pre><code>['sin(h)']
</code></pre>
<p>as a token instead of it being broken into</p>
<pre><code>['sin', '(', 'h', ')']
</code></pre>
<p>On top of that could I use [a-zA-Z] so I can tokenize the trig functions for any letter? As in sin([a-zA-Z])</p>
|
<p>Don't make <code>(</code>, <code>\w</code>, and <code>)</code> alternatives to the trig functions, make them part of that same match.</p>
<pre><code>(?:sin|cos|tan)\(\w\)|\+|-|\*+
</code></pre>
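<p>A quick check with the full tokenizer pattern, using <code>[a-zA-Z]</code> as the question suggests (the trig alternative has to come first so it wins over the single-character alternatives):</p>
<pre><code>import re

expr = "-15*sin(h)**2+121*sin(h)-216"
tokens = re.findall(r"(?:sin|cos|tan)\([a-zA-Z]\)|\d|\w|\(|\)|\+|-|\*+", expr)
print(tokens)
# ['-', '1', '5', '*', 'sin(h)', '**', '2', '+', '1', '2', '1', '*', 'sin(h)', '-', '2', '1', '6']
</code></pre>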
|
python|regex|tokenize
| 0 |
1,074 | 49,899,298 |
How does GridSearchCV compute training scores?
|
<p>I'm having a hard time figuring out parameter <code>return_train_score</code> in <code>GridSearchCV</code>. From the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p><code>return_train_score</code> : boolean, optional</p>
<p> If <code>False</code>, the <code>cv_results_</code> attribute will not include training scores.</p>
</blockquote>
<p>My question is: <strong>what are the training scores?</strong></p>
<p>In the following code I'm splitting data into ten stratified folds. As a consequence <code>grid.cv_results_</code> contains ten test scores, namely <code>'split0_test_score'</code>, <code>'split1_test_score'</code> , ..., <code>'split9_test_score'</code>. I'm aware that each of those is the success rate obtained by a 5-nearest neighbors classifier that uses the corresponding fold for testing and the remaining nine folds for training.</p>
<p><code>grid.cv_results_</code> also contains ten train scores: <code>'split0_train_score'</code>, <code>'split1_train_score'</code> , ..., <code>'split9_train_score'</code>. How are these values calculated?</p>
<pre><code>from sklearn import datasets
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import StratifiedKFold
X, y = datasets.load_iris(True)
skf = StratifiedKFold(n_splits=10, random_state=0)
knn = KNeighborsClassifier()
grid = GridSearchCV(estimator=knn,
cv=skf,
param_grid={'n_neighbors': [5]},
return_train_score=True)
grid.fit(X, y)
print('Mean test score: {}'.format(grid.cv_results_['mean_test_score']))
print('Mean train score: {}'.format(grid.cv_results_['mean_train_score']))
#Mean test score: [ 0.96666667]
#Mean train score: [ 0.96888889]
</code></pre>
|
<p>It is the train score of the prediction model on all folds <strong>excluding</strong> the one you are testing on. In your case, it is the score over the 9 folds you trained the model on.</p>
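<p>You can verify this by refitting on one split by hand (a sketch reusing the question's objects; the split is deterministic because shuffling is off):</p>
<pre><code>train_idx, test_idx = next(skf.split(X, y))
knn5 = KNeighborsClassifier(n_neighbors=5).fit(X[train_idx], y[train_idx])
print(knn5.score(X[train_idx], y[train_idx]))  # ~ grid.cv_results_['split0_train_score']
print(knn5.score(X[test_idx], y[test_idx]))    # ~ grid.cv_results_['split0_test_score']
</code></pre>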
|
python|scikit-learn|cross-validation|grid-search
| 3 |
1,075 | 49,919,919 |
html to pdf convertion css not working
|
<p>I am trying to convert the following page to PDF:
<a href="https://bootsnipp.com/snippets/P234b" rel="nofollow noreferrer">link</a> </p>
<p>I am using the xhtml2pdf library for Python,
but the problem is that the CSS styles are not applied properly.
How can I solve the problem?</p>
|
<p>You need to write all the CSS in the header; importing it via a link will not work in the PDF.</p>
<pre><code><link href="//maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" rel="stylesheet" id="bootstrap-css">
</code></pre>
<p>This needs to be changed to the following:</p>
<pre><code><style>
/*!
 * Bootstrap v4.0.0 (https://getbootstrap.com)
 * Copyright 2011-2018 The Bootstrap Authors
 * Copyright 2011-2018 Twitter, Inc.
 * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
 */
:root{--blue:#007bff;--indigo:#6610f2;--purple:#6f42c1;--pink:#e83e8c;--red:#dc3545;--orange:#fd7e14;--yellow:#ffc107;--green:#28a745;--teal:#20c997;--cyan:#17a2b8;--white:#fff;--gray:#6c757d;--gray-dark:#343a40;--primary:#007bff;--secondary:#6c757d;--success:#28a745;
......
</style>
</code></pre>
|
django|python-3.x
| 1 |
1,076 | 64,981,825 |
How to override attribute in Base class Python3 , so that subsequent operations remains same verywhere?
|
<p>I have a use case where I have to override one attribute in the base class <code>__init__</code>, but the operations after that (which make use of that attribute) remain the same.</p>
<pre><code>class Person:
def __init__(self, name, phone, record_file = None):
self.name = name
self.phone = phone
if self.record_file:
self.contents = json.load(open(self.record_file))
else:
self.contents = {'person_specific_details': details}
#### Do some operations with self.contents
class Teenager(Person):
def __init__(self, **kwargs):
super().__init__(**kwargs)
# If self.record_file is None:
# self.contents = new for Teenager
self.contents = {'teenager_specific_details': teenager_details}
# But further operations remains the same (#### Do some operations with self.contents)
t = Teenager(phone='xxxxxx', name='XXXXXXX')
</code></pre>
<p>I am not able to achieve it properly. Can anyone help?</p>
|
<p>Your main problem is that you want to change an intermediate value in the <code>Person.__init__</code>, which won't work. But you could create an optional argument for the <code>contents</code> and just use that instead of the default one.
Like this:</p>
<pre class="lang-py prettyprint-override"><code>class Person:
def __init__(self, name, phone, record_file=None, contents=None):
self.name = name
self.phone = phone
if record_file:
with open(record_file) as fp:
self.contents = json.load(fp)
else:
if contents: # can be utilized by other subclasses
self.contents = contents
else:
self.contents = {"person_specific_details": details}
#### Do some operations with self.contents
class Teenager(Person):
def __init__(self, **kwargs):
contents = {"teenager_specific_details": teenager_details}
super().__init__(contents=contents, **kwargs)
t = Teenager(phone="xxxxxx", name="XXXXXXX")
</code></pre>
<p>This way you can pass the <code>Teenager</code> specific contents to the base initializaion, and it can proceed further with that one.</p>
|
python|python-3.x|class|inheritance
| 1 |
1,077 | 65,259,317 |
Tensorflow use : codec can't decode byte XX in position XX : invalid continuation byte
|
<p>I'm trying to train a model using the code that can be found here: <a href="https://medium.com/@martin.lees/image-recognition-with-machine-learning-in-python-and-tensorflow-b893cd9014d2" rel="nofollow noreferrer">https://medium.com/@martin.lees/image-recognition-with-machine-learning-in-python-and-tensorflow-b893cd9014d2</a></p>
<p>The thing is, even when I just copy/paste the code, I get a problem that I really don't understand. I searched a lot on the TensorFlow GitHub but found nothing to solve my problem.</p>
<p>Here is the traceback:</p>
<pre><code>Traceback (most recent call last):
File "D:\pokemon\PogoBot\PoGo-Adb\ml_test_data_test.py", line 108, in <module>
tf.app.run(main=main)
File "C:\Users\pierr\anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\pierr\anaconda3\lib\site-packages\absl\app.py", line 303, in run
_run_main(main, args)
File "C:\Users\pierr\anaconda3\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "D:\pokemon\PogoBot\PoGo-Adb\ml_test_data_test.py", line 104, in main
saver.save(sess, "./model")
File "C:\Users\pierr\anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 1183, in save
model_checkpoint_path = sess.run(
File "C:\Users\pierr\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 957, in run
result = self._run(None, fetches, feed_dict, options_ptr,
File "C:\Users\pierr\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1180, in _run
results = self._do_run(handle, final_targets, final_fetches,
File "C:\Users\pierr\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1358, in _do_run
return self._do_call(_run_fn, feeds, fetches, targets, options,
File "C:\Users\pierr\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1365, in _do_call
return fn(*args)
File "C:\Users\pierr\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1349, in _run_fn
return self._call_tf_sessionrun(options, feed_dict, fetch_list,
File "C:\Users\pierr\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1441, in _call_tf_sessionrun
return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 109: invalid continuation byte
</code></pre>
<p>And here is the code :</p>
<pre><code>from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import cv2
from os import listdir
from os.path import isfile, join
import numpy as np
import tensorflow as tf2
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import math
class Capchat:
data_dir = "data_test//"
nb_categories = 9
X_train = None # X is the data array
Y_train = None # Y is the labels array, you'll see this notation pretty often
train_nb = 0 # number of train images
X_test = None
Y_test = None
test_nb = 0 # number of tests images
index = 0 # the index of the array we will fill
def readimg(self, file, label, train = True):
im = cv2.imread(file); # read the image to PIL image
im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY).flatten() # put it in black and white and as a vector
# the train var definies if we fill the training dataset or the test dataset
if train :
self.X_train[self.index] = im
self.Y_train[self.index][label - 1] = 1
else :
self.X_test[self.index] = im
self.Y_test[self.index][label - 1] = 1
self.index += 1
def __init__(self):
total_size = [f for f in listdir(self.data_dir + "1/") if isfile(join(self.data_dir + "1/", f))].__len__() # ge the total size of the dataset
self.train_nb = math.floor(total_size * 0.8) # we get 80% of the data to train
self.test_nb = math.ceil(total_size *0.2) # 20% to test
# We fill the arrays with zeroes 840 is the number of pixels in an image
self.X_train = np.zeros((self.train_nb*self.nb_categories, 735), np.int32)
self.Y_train = np.zeros((self.train_nb*self.nb_categories, 3), np.int32)
self.X_test = np.zeros((self.test_nb*self.nb_categories, 735), np.int32)
self.Y_test = np.zeros((self.test_nb*self.nb_categories, 3), np.int32)
# grab all the files
files_1 = [f for f in listdir(self.data_dir+"1/") if isfile(join(self.data_dir+"1/", f))]
files_2 = [f for f in listdir(self.data_dir+"2/") if isfile(join(self.data_dir+"2/", f))]
files_3 = [f for f in listdir(self.data_dir+"3/") if isfile(join(self.data_dir+"3/", f))]
for i in range(self.train_nb):
# add all the files to training dataset
self.readimg(self.data_dir+"1/"+files_1[i], 1)
self.readimg(self.data_dir+"2/"+files_2[i], 2)
self.readimg(self.data_dir+"3/"+files_3[i], 3)
self.index = 0
for i in range (self.train_nb, self.train_nb + self.test_nb):
self.readimg(self.data_dir+"1/" + files_1[i], 1, False)
self.readimg(self.data_dir+"2/" + files_2[i], 2, False)
self.readimg(self.data_dir+"3/" + files_3[i], 3, False)
print("donnée triée")
def main(_):
# Import the data
cap = Capchat()
# Create the model
x = tf.placeholder(tf.float32, [None, 735])
W = tf.Variable(tf.zeros([735, 3]), name="weights")
b = tf.Variable(tf.zeros([3]), name="biases")
mult = tf.matmul(x, W) # W * X...
y = tf.add(mult, b, name="calc") # + b
# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, 3])
# cost function
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
# optimizer
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
# allows to save the model later
saver = tf.train.Saver()
# start a session to run the network on
sess = tf.InteractiveSession()
# initialize global variables
tf.global_variables_initializer().run()
# Train for 1000 steps, notice the cap.X_train and cap.Y_train
for _ in range(1000):
sess.run(train_step, feed_dict={x: cap.X_train, y_: cap.Y_train})
# Extract one hot encoded output via argmax
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
# Test for accuraccy on the testset, notice the cap.X_test and cap.Y_test
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("\nATTENTION RESULTAT ",sess.run(accuracy, feed_dict={x: cap.X_test,
y_: cap.Y_test}))
# save the model learned weights and biases
saver.save(sess, "./model")
if __name__ == '__main__':
tf.app.run(main=main)
</code></pre>
|
<p>The error was really stupid: because I'm on Windows, this line</p>
<pre><code>saver.save(sess, "./model")
</code></pre>
<p>was the cause of the error, so I changed it to this:</p>
<pre><code>saver.save(sess, "model\\model")
</code></pre>
<p>And now this is working.</p>
|
python|tensorflow
| 0 |
1,078 | 71,674,381 |
creating multiple columns with a loop based on other column in pandas
|
<p>Hello everyone I have a working code in python but it is written in a crude way because I am still learning the fundamentals and require some insight.</p>
<p>I am creating 40 columns based on one column; I have shared a small part of it below:</p>
<pre><code>df["Bonus Payout 80%"]=0
df["Bonus Payout 81%"]=df["Monthly gross salary 100% (LC)"]*0.01
df["Bonus Payout 82%"]=df["Monthly gross salary 100% (LC)"]*0.02
df["Bonus Payout 83%"]=df["Monthly gross salary 100% (LC)"]*0.03
df["Bonus Payout 84%"]=df["Monthly gross salary 100% (LC)"]*0.04
df["Bonus Payout 85%"]=df["Monthly gross salary 100% (LC)"]*0.05
df["Bonus Payout 80%"]=df['Bonus Payout 80%'].apply('{:,.2f}'.format)
df["Bonus Payout 81%"]=df['Bonus Payout 81%'].apply('{:,.2f}'.format)
df["Bonus Payout 82%"]=df["Bonus Payout 82%"].apply('{:,.2f}'.format)
df["Bonus Payout 83%"]=df["Bonus Payout 83%"].apply('{:,.2f}'.format)
df["Bonus Payout 84%"]=df["Bonus Payout 84%"].apply('{:,.2f}'.format)
df["Bonus Payout 85%"]=df["Bonus Payout 85%"].apply('{:,.2f}'.format)
</code></pre>
<p>The lines of code go on until Bonus Payout 120%.
How can I tidy this up and write it in a more idiomatic way?</p>
<p>any help is appreciated</p>
<p>Edit:
my first lines of code are:</p>
<pre><code>df["Bonus Payout 80%"]=df["Monthly gross salary 100% (LC)"]*0.00
df["Bonus Payout 80%"]=df['Bonus Payout 80%'].apply('{:,.2f}'.format)
</code></pre>
<p>and the last one</p>
<pre><code>df["Bonus Payout 120%"]=df["Monthly gross salary 100% (LC)"]*0.40
df["Bonus Payout 120%"]=df['Bonus Payout 120%'].apply('{:,.2f}'.format)
</code></pre>
|
<p>You can use <code>f-strings</code> and <code>for loops</code>:</p>
<pre><code>j = 0
for i in range(80,121):
df[f"Bonus Payout {i}%"]=df["Monthly gross salary 100% (LC)"]*j
df[f"Bonus Payout {i}%"]=df[f'Bonus Payout {i}%'].apply('{:,.2f}'.format)
j += 0.01
</code></pre>
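<p>To avoid accumulating floating-point error in <code>j</code>, you could also derive the factor from <code>i</code> directly:</p>
<pre><code>for i in range(80, 121):
    factor = (i - 80) / 100   # 0.00 at 80%, ..., 0.40 at 120%
    df[f"Bonus Payout {i}%"] = (df["Monthly gross salary 100% (LC)"] * factor).apply('{:,.2f}'.format)
</code></pre>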
<p>P.S.: I have edited my answer after question edit.</p>
|
python|pandas|dataframe|loops|multiple-columns
| 1 |
1,079 | 61,618,439 |
Realtime JSON string transfer from android/ios app to a Windows software
|
<p>I want to create an android/ios app that would send a normal string (or a json) to a software in Windows which i will also make. Example, in my mobile app when i press a button, the text on my Windows software will change to whatever text that was sent by my the mobile app in realtime, not after 1 minute or so.</p>
<ul>
<li>I will use python for that Windows software.</li>
<li>I will also have an azure/aws cloud virtual machine instance that will serve as my "bridge" server for this string communication transfer between mobile app and the windows software.</li>
</ul>
<p>My questions are:</p>
<ul>
<li><p>What is the best way to code this?</p></li>
<li><p>What's the best practice of doing this kind of real time transfers? </p></li>
</ul>
<p>NOTE: I have minimal experience with socket programming and I'm curious if that's the "industry standard" of doing this kind of tasks or if there's an easier way of doing it. Thank you very much!</p>
|
<p>Your question covers a lot of subjects, but I'll try to give you some basic information and best practices.</p>
<ol>
<li>Don't use raw sockets; the industry standard is mostly an HTTP server (Django or Flask) with a RESTful API using JSON as your serialization protocol. I also recommend that you make your server <a href="https://medium.com/@rachna3singhal/stateless-over-stateful-applications-73cbe025f07" rel="nofollow noreferrer">stateless</a>.</li>
<li>Investigate similar use cases and <strong>especially webhooks</strong>. The simplest way to execute your plan is to give each end of your connection a unique id, and then, for example: <code>change something in the phone app</code> -> <code>send an API request to your server to change something in the windows application with the id 123</code> -> <code>the server looks in its database for the address of the device with id 123</code> -> <code>the server sends an HTTP request to the windows software</code> (see the sketch after this list). This model has some disadvantages, such as having to register on the server which phone apps relate to which windows software, and handling the case where the address of the software changes (routers have dynamic IPs).</li>
<li>Now, for the windows application, you may want to compile python to an executable file and create a standard installer for your program (guides are attached). I also recommend that you check Kivy, which is a GUI framework that can be compiled to windows easily.</li>
</ol>
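<p>A minimal sketch of the "bridge" flow from point 2, assuming Flask and hypothetical routes/field names:</p>
<pre><code>from flask import Flask, request
import requests

app = Flask(__name__)
registry = {}  # device_id -> callback URL registered by the Windows software

@app.route('/register/<device_id>', methods=['POST'])
def register(device_id):
    registry[device_id] = request.json['callback_url']
    return {'ok': True}

@app.route('/send/<device_id>', methods=['POST'])
def send(device_id):
    # forward the phone's JSON payload to the registered Windows app
    requests.post(registry[device_id], json=request.json, timeout=5)
    return {'ok': True}
</code></pre>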
<p>As mentioned, this topic is huge and if you want to create a real industry-standard application, you'll have to consider many more things like using HTTPS and other security issues, the ability to scale out your application and handle huge amounts of requests, have a CICD pipeline, testing and testing infrastructure, and many more.</p>
<p>some related links and guides, good luck!</p>
<ul>
<li><a href="https://docs.djangoproject.com/en/3.0/" rel="nofollow noreferrer">Django web server framework</a> </li>
<li><a href="https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04" rel="nofollow noreferrer">Flask best practise stack</a> </li>
<li><a href="https://www.guru99.com/pytest-tutorial.html" rel="nofollow noreferrer">unit testing with pytest</a></li>
<li><a href="https://www.youtube.com/watch?v=-BHverY7IwU" rel="nofollow noreferrer">testing automation</a></li>
<li><a href="https://www.pyinstaller.org/" rel="nofollow noreferrer">compile python with pyinstaller</a></li>
<li><a href="https://help.bittitan.com/hc/en-us/articles/115008269228-Create-a-Windows-Installer-Package-MSI-to-deploy-the-Device-Management-Agent" rel="nofollow noreferrer">create installer file for your executable</a> </li>
<li><a href="https://likegeeks.com/kivy-tutorial/" rel="nofollow noreferrer">Kivy</a></li>
</ul>
|
python|json|sockets|transfer|instant
| 0 |
1,080 | 56,866,244 |
numpy timedelta64 not showing fraction
|
<p>I want to convert 847 hours into days. The actual result is 847/24 = 35.29...</p>
<p>But numpy shows only "35 days".</p>
<hr>
<pre><code>import numpy as np
x= np.timedelta64(847, 'h')
x= np.timedelta64(x, 'D')
print(x) #Returns 35 days, Expected 35,29
</code></pre>
<hr>
|
<p>The magnitude of a <code>timedelta64</code> is always stored as <em>a 64-bit integer</em> (cf. <a href="https://docs.scipy.org/doc/numpy/reference/arrays.datetime.html#datetime-units" rel="nofollow noreferrer">Datetime Units</a>). To obtain fractional days, we can do:</p>
<pre><code>import numpy as np
x = np.timedelta64(847, 'h')
x = x / np.timedelta64(1, 'D')
print(x)
</code></pre>
<p>The result <code>35.291666666666664</code> is inevitably no longer a <code>timedelta64</code>.</p>
|
python-3.x|numpy|timedelta
| 1 |
1,081 | 56,509,345 |
How can I stop networkx to change the source and the target node?
|
<p>I make a Graph (not DiGraph) from a data frame (a huge network) with networkx.
I used this code to create my graph:</p>
<pre><code>nx.from_pandas_edgelist(R, source='A', target='B', create_using=nx.Graph())
</code></pre>
<p>However, in the output, when I check the edge list, my source node and target node have been changed based on sorting, and I don't know how to keep them the way they were in the dataframe (I need the source and target nodes to stay as they were in the dataframe).</p>
|
<p>If you mean the order has changed, check out <code>nx.OrderedGraph</code></p>
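<p>For example (a sketch; <code>OrderedGraph</code> exists on networkx 2.x, while on 3.x the plain <code>Graph</code> already preserves insertion order):</p>
<pre><code>import networkx as nx

G = nx.from_pandas_edgelist(R, source='A', target='B',
                            create_using=nx.OrderedGraph())
print(list(G.edges()))  # edges listed in insertion order
</code></pre>
<p>Note that an undirected graph still does not record which endpoint was the source and which was the target of each edge; if you need that orientation, keep the original dataframe columns or build a <code>DiGraph</code>.</p>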
|
python|pandas|networkx
| 0 |
1,082 | 60,879,602 |
Get values from between two other values for each row in the dataframe
|
<p>I want to extract the integer values for each Hole_ID between the From and To values (inclusive). And save them to a new data frame with the Hole IDs as the column headers.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
df=pd.DataFrame(np.array([['Hole_1',110,117],['Hole_2',220,225],['Hole_3',112,114],['Hole_4',248,252],['Hole_5',116,120],['Hole_6',39,45],['Hole_7',65,72],['Hole_8',79,83]]),columns=['HOLE_ID','FROM', 'TO'])
</code></pre>
<p>Example starting data</p>
<pre><code> HOLE_ID FROM TO
0 Hole_1 110 117
1 Hole_2 220 225
2 Hole_3 112 114
3 Hole_4 248 252
4 Hole_5 116 120
5 Hole_6 39 45
6 Hole_7 65 72
7 Hole_8 79 83
</code></pre>
<p>This is what I would like:</p>
<pre><code>Out[5]:
Hole_1 Hole_2 Hole_3 Hole_4 Hole_5 Hole_6 Hole_7 Hole_8
0 110 220 112 248 116 39 65 79
1 111 221 113 249 117 40 66 80
2 112 222 114 250 118 41 67 81
3 113 223 Nan 251 119 42 68 82
4 114 224 Nan 252 120 43 69 83
5 115 225 Nan Nan Nan 44 70 Nan
6 116 Nan Nan Nan Nan 45 71 Nan
7 117 Nan Nan Nan Nan Nan 72 Nan
</code></pre>
<p>I have tried to use the range function, which works if I manually define the range:</p>
<pre><code>for i in df['HOLE_ID']:
df2[i]=range(int(1),int(10))
</code></pre>
<p>gives</p>
<pre><code> Hole_1 Hole_2 Hole_3 Hole_4 Hole_5 Hole_6 Hole_7 Hole_8
0 1 1 1 1 1 1 1 1
1 2 2 2 2 2 2 2 2
2 3 3 3 3 3 3 3 3
3 4 4 4 4 4 4 4 4
4 5 5 5 5 5 5 5 5
5 6 6 6 6 6 6 6 6
6 7 7 7 7 7 7 7 7
7 8 8 8 8 8 8 8 8
8 9 9 9 9 9 9 9 9
</code></pre>
<p>but this won't take the df To and From values as inputs to the range.</p>
<pre><code>df2=pd.DataFrame()
for i in df['HOLE_ID']:
df2[i]=range(df['To'],df['From'])
</code></pre>
<p>gives an error.</p>
|
<p>Apply a method that returns a series of a range between from and to and then transpose the result, eg:</p>
<pre><code>import numpy as np
df.set_index('HOLE_ID').apply(lambda v: pd.Series(np.arange(v['FROM'], v['TO'] + 1)), axis=1).T
</code></pre>
<p>Gives you:</p>
<pre><code>HOLE_ID Hole_1 Hole_2 Hole_3 Hole_4 Hole_5 Hole_6 Hole_7 Hole_8
0 110.0 220.0 112.0 248.0 116.0 39.0 65.0 79.0
1 111.0 221.0 113.0 249.0 117.0 40.0 66.0 80.0
2 112.0 222.0 114.0 250.0 118.0 41.0 67.0 81.0
3 113.0 223.0 NaN 251.0 119.0 42.0 68.0 82.0
4 114.0 224.0 NaN 252.0 120.0 43.0 69.0 83.0
5 115.0 225.0 NaN NaN NaN 44.0 70.0 NaN
6 116.0 NaN NaN NaN NaN 45.0 71.0 NaN
7 117.0 NaN NaN NaN NaN NaN 72.0 NaN
</code></pre>
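<p>One caveat: because the sample frame was built from a single <code>np.array</code>, <code>FROM</code> and <code>TO</code> come out as strings, so you may need to cast them first:</p>
<pre><code>df[['FROM', 'TO']] = df[['FROM', 'TO']].astype(int)
</code></pre>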
|
python|pandas|range
| 4 |
1,083 | 63,305,130 |
Removing unwanted characters, and writing from a JSON response
|
<p>So, I am trying to extract specific data and write it to a file. This JSON response has odd brackets around the information I want, which need to be stripped off, and I'm not really sure how to get to the 'desired output'.</p>
<p>Maybe it's better to do it in an xls document? The end goal is to compare this list against another to find which hosts are missing.</p>
<p>It's a very lengthy response, so I just grabbed a snippet.</p>
<p>The JSON response</p>
<pre><code> [
{
"adapter_list_length": 3,
"adapters": [
"adapter1",
"adapter2",
"adapter3"
],
"id": "",
"labels": [
"",
""
],
"specific_data.data.hostname": [
"HOSTNAME1"
],
"specific_data.data.last_seen": "",
"specific_data.data.network_interfaces.ips": [
"123.45.67.89"
],
"specific_data.data.os.type": [
""
]
},
{
"adapter_list_length": 3,
"adapters": [
"adapter1",
"adapter2",
"adapter3"
],
"id": "",
"labels": [
"",
""
],
"specific_data.data.hostname": [
"HOSTNAME2"
</code></pre>
<p>My test writer:</p>
<pre><code>names = [item['specific_data.data.hostname'] for item in data]
with open ('namelist.csv', mode='w') as csv_file:
csv_writer = csv.writer(csv_file, delimiter='\n', quotechar='"', quoting=csv.QUOTE_MINIMAL)
csv_writer.writerow(names)
</code></pre>
<p>Current output:</p>
<pre><code>['HOSTNAME1']
['HOSTNAME2']
</code></pre>
<p>Desired Output:</p>
<pre><code>Hostnames: IPaddress:
HOSTNAME1 123.45.67.89
HOSTNAME2 123.456.78.9
.... ....
... ....
</code></pre>
|
<p>You can have it done this way:</p>
<pre><code>import csv
data = [{'adapter_list_length': 3, 'adapters': ['adapter1', 'adapter2', 'adapter3'],
'id': '', 'labels': ['', ''], 'specific_data.data.hostname': ['HOSTNAME1'],
'specific_data.data.last_seen': '', 'specific_data.data.network_interfaces.ips':
['123.45.67.89'], 'specific_data.data.os.type': ['']}, {'adapter_list_length': 3,
'adapters': ['adapter1', 'adapter2', 'adapter3'], 'id': '', 'labels': ['', ''],
'specific_data.data.hostname': ['HOSTNAME2'],'specific_data.data.last_seen': '',
'specific_data.data.network_interfaces.ips': ['123.45.67.80'],
'specific_data.data.os.type': ['']}]
names = [item['specific_data.data.hostname'][0] for item in data]
ips = [item['specific_data.data.network_interfaces.ips'][0] for item in data]
dets = list(zip(names,ips))
print('Hostnames:','\t','IPaddress:')
for i,j in dets:
print(i,'\t',j)
fields = ['Hostnames:', 'IPaddress:']
rows = [list(x) for x in dets]
filename = "dumb.csv"
with open(filename, 'w') as csvfile:
csvwriter = csv.writer(csvfile)
csvwriter.writerow(fields)
csvwriter.writerows(rows)
</code></pre>
|
python|json|api|csv|python-requests
| 0 |
1,084 | 59,625,229 |
Append min value of two columns in pandas data frame
|
<p><strong>df</strong></p>
<pre><code>Purchase
1
3
2
5
4
7
</code></pre>
<p><strong>df2</strong></p>
<pre><code>df2 = pd.DataFrame(columns=['Mean','Median','Max','Col4'])
df2 = df2.append({'Mean': (df['Purchase'].mean()),'Median':df['Purchase'].median(),'Max':(df['Purchase'].max()),'Col4':(df2[['Mean','Median']].min(axis=1))}, ignore_index=True)
</code></pre>
<p><strong>Output obtained</strong></p>
<pre><code> Mean Median Max Col4
3.66 3.5 7 Series([], dtype: float64)
</code></pre>
<p><strong>Output expected</strong></p>
<pre><code> Mean Median Max Col4
3.66 3.5 7 3.5 #Value in Col4 is Min(Mean, Median of df2)
</code></pre>
<p>Can anyone help?</p>
|
<p>Use <code>np.minimum</code> and pass <code>mean</code> and <code>median</code> to it:</p>
<pre><code>df2 = pd.DataFrame(columns=['Mean','Median','Max','Col4'])
df2 = (df2.append({'Mean': df['Purchase'].mean(),
'Median':df['Purchase'].median(),
'Max': df['Purchase'].max(),
'Col4': np.minimum(df['Purchase'].mean(), df['Purchase'].median())},
ignore_index=True))
print (df2)
Mean Median Max Col4
0 3.666667 3.5 7.0 3.5
</code></pre>
<p>Or better, use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.agg.html" rel="nofollow noreferrer"><code>Series.agg</code></a>, add the min as a new value in the next step, and finally create a one-row DataFrame:</p>
<pre><code>s = df['Purchase'].agg(['mean','median','max'])
s.loc['col4'] = s[['mean','median']].min()
df = s.to_frame(0).T
print (df)
mean median max col4
0 3.666667 3.5 7.0 3.5
</code></pre>
|
python|python-3.x|pandas
| 6 |
1,085 | 60,027,706 |
Expected a list of dataframe got just one dataframe
|
<p>I am trying to convert a list of sheets from an Excel file into a CSV. Beginning with the following code, I want to read the files first, but I only get the first sheet and the rest are lost.</p>
<pre><code>import pandas as pd
def accept_xcl_file(file):
xcl_file = pd.ExcelFile(file)
sheets= xcl_file.sheet_names
file = xcl_file.parse(sheet_names = sheets)
return file,sheets
file, sheet = accept_xcl_file('Companies.xlsx')
sheet >>
</code></pre>
<p><strong>this is the output from sheet
['companies',
'fruits',
'vehicles',
'sales',
'P&L',
'price',
'clubs',
'countries',
'housing',
'life-expectancy']</strong></p>
<pre><code>file['fruits'] >>
</code></pre>
<p><strong><em>I get a KeyError when I try to index the file, but when I use the 'companies' key I get the correct data. Going by the documentation, I should expect a DataFrame or a dict of DataFrames.
Any help is appreciated.</em></strong></p>
|
<p>The <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html" rel="nofollow noreferrer">read_excel</a> method is already available in pandas to import Excel data.</p>
<p>Try this instead of your code:</p>
<pre><code>import pandas as pd

file = pd.read_excel('Companies.xlsx', sheet_name=None)

# with sheet_name=None, file is a dict object
# keys are the sheet names as strings
# values are the pd.DataFrame objects containing sheet data
</code></pre>
|
python|pandas
| 1 |
1,086 | 66,897,615 |
Combining list of numbers and strings in python
|
<p>As an <code>R</code> user, I know very little about python. I am using <code>python moviepy</code> to pick up a long list of photos to generate a video in <code>RStudio notebook</code>. What I did previously was to use <code>R</code> to generate the list of photos.</p>
<p>R code:</p>
<pre><code>v_list <- c(paste0("v_", c(1:10, rep(10, 5), rep(11, 10)), ".jpg"))
</code></pre>
<p>and then, in a <code>python</code> chunk, convert this list to python with <code>v_list = r.v_list</code>.</p>
<p>I wonder if there is an easy way to generate the list directly in python. It appears there are many questions on this topic. Through those answers, I managed to produce the following code:</p>
<p>Python:</p>
<pre><code>v_list = ["v_" + str(x) + ".jpg" for x in range(1, 10)]+["v_" + str(x) + ".jpg" for x in [10]*5] + ["v_" + str(x) + ".jpg" for x in [11]*10]
</code></pre>
<p>My question: is it possible to make this code simpler?</p>
|
<p>How about</p>
<pre class="lang-py prettyprint-override"><code>v_list = [f"v_{x}.jpg" for x in list(range(1, 10)) + [10] * 5 + [11] * 10]
</code></pre>
|
python|r|list
| 2 |
1,087 | 66,770,996 |
Recursively copy the secrets from one VAULT path to another
|
<p>I am trying to copy all the secrets along with the subfolders from one <strong>VAULT</strong> path to another.
Example:</p>
<pre><code>source = "/path/namespace/TEAM1/jenkins"
</code></pre>
<p>(note: the above source path consists of subfolders like job1, job2, job3... and all these subfolders contain the respective secrets in the form of key-value pairs)</p>
<pre><code>destination="/path/namespace/team1/jenkins"
</code></pre>
<p>I am able to manually copy each secret to the destination folder, but I'm wondering if a code snippet could help me achieve this, i.e. recursively copy all the secrets along with the respective sub-folders to the destination PATH.</p>
|
<p>This takes a Vault secret backup from one path to another, e.g.
input_path: secret/tmp1,
output_path: secret/tmp2.
With the Python script below you can sync all secrets from secret/tmp1 to secret/tmp2.</p>
<p>You need to set input_path and output_path in the script, then just run it.
Link for the Python script:
<a href="https://github.com/vinamra1502/vault-backup-restore" rel="nofollow noreferrer">https://github.com/vinamra1502/vault-backup-restore</a></p>
<p>With this script you can copy all secrets, along with their subfolders, from one Vault path to another,
e.g. copying the secrets under secret/tmp1 to secret/tmp2.</p>
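<p>If you prefer a self-contained snippet, here is a minimal sketch with the <code>hvac</code> client (assumes KV v2 on a <code>secret</code> mount and <code>VAULT_ADDR</code>/<code>VAULT_TOKEN</code> in the environment; adjust the mount point and paths):</p>
<pre><code>import hvac

client = hvac.Client()  # reads VAULT_ADDR / VAULT_TOKEN from the environment

def copy_tree(src, dst, mount='secret'):
    listing = client.secrets.kv.v2.list_secrets(path=src, mount_point=mount)
    for key in listing['data']['keys']:
        if key.endswith('/'):  # subfolder: recurse into it
            copy_tree(f"{src}/{key.rstrip('/')}", f"{dst}/{key.rstrip('/')}", mount)
        else:  # leaf secret: read and rewrite the key-value pairs
            secret = client.secrets.kv.v2.read_secret_version(
                path=f"{src}/{key}", mount_point=mount)
            client.secrets.kv.v2.create_or_update_secret(
                path=f"{dst}/{key}", secret=secret['data']['data'],
                mount_point=mount)

copy_tree('path/namespace/TEAM1/jenkins', 'path/namespace/team1/jenkins')
</code></pre>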
|
python-3.x|hashicorp-vault|vault
| 0 |
1,088 | 35,036,077 |
How do I fix this speed varible writing back to file?
|
<p>I've been writing a program and I've run into an error. My current code is: </p>
<pre><code>import tkinter as tk

speed = 80

def onKeyPress(event, value):
    global speed
    text.delete("%s-1c" % 'insert', 'insert')
    text.insert('end', 'Current Speed: %s\n\n' % (speed, ))
    with open("speed.txt", "r+") as p:
        speed = p.read()
    speed = int(speed)
    speed = min(max(speed+value, 0), 100)
    with open("speed.txt", "r+") as p:
        p.writelines(str(speed))
    print(speed)
    if speed == 100:
        text.insert('end', 'You have reached the speed limit')
    if speed == 0:
        text.insert('end', 'You can not go any slower')

speed = 80
root = tk.Tk()
root.geometry('300x200')
text = tk.Text(root, background='black', foreground='white', font=('Comic Sans MS', 12))
text.pack()

# Individual key bindings
root.bind('<KeyPress-w>', lambda e: onKeyPress(e, 1))
root.bind('<KeyPress-s>', lambda e: onKeyPress(e, -1))
root.mainloop()
</code></pre>
<p>I believe <code>speed = min(...)</code> is causing the error. Do you have any idea what's going wrong?</p>
|
<p>One problem (I guess it's the problem you're having) is that you are trying to overwrite the content of the file <code>speed.txt</code> with a value that contains fewer characters than the file already holds.</p>
<p>This can lead to unexpected values winding up in your file, e.g. if the file contains</p>
<pre>
10
</pre>
<p>Consider what happens if you try to decrement the value by 1 (user hit the <code>s</code> key):</p>
<pre><code>with open('speed.txt', 'r+') as p:
    speed = int(p.read())

speed -= 1  # speed is now 9

with open("speed.txt", "r+") as p:
    p.writelines(str(speed))
</code></pre>
<p><code>speed.txt</code> now contains:</p>
<pre>
90
</pre>
<p>Instead of decreasing the speed to 9, it has actually been increased to 90! If the speed was already 100 and you tried to decrement it, you would end up with 990 in the file.</p>
<p>This is because opening the file with mode <code>r+</code> opens the file for reading and writing and positions the file pointer at the beginning of the file. A write will only overwrite the first <em>n</em> characters where <em>n</em> is the length of the data written. Hence you can get the sort of corruption shown above.</p>
<p>You can fix this by opening the file with mode <code>'w'</code> for the <em>second</em> <code>open()</code>. This will completely overwrite the file. And you don't need to use <code>writelines()</code>, just use <code>write()</code>.</p>
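<p>A minimal sketch of the corrected read/write cycle, assuming the rest of your handler stays the same:</p>
<pre><code>with open("speed.txt", "r+") as p:
    speed = int(p.read())

speed = min(max(speed + value, 0), 100)

# mode 'w' truncates the file, so the old value is fully replaced
with open("speed.txt", "w") as p:
    p.write(str(speed))
</code></pre>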
|
python
| 2 |
1,089 | 56,357,209 |
Pix2pix program terminates after giving Thread warning of Tensorflow
|
<p>I am trying to run <a href="https://github.com/eriklindernoren/Keras-GAN/blob/master/pix2pix/pix2pix.py" rel="nofollow noreferrer">https://github.com/eriklindernoren/Keras-GAN/blob/master/pix2pix/pix2pix.py</a></p>
<pre><code>python pix2pix.py
</code></pre>
<p>Execution terminates with the following message:</p>
<pre><code>Using TensorFlow backend.
WARNING:tensorflow:From C:\Users\kulkarni\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2019-05-29 14:43:23.767965: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2019-05-29 14:43:23.770965: I tensorflow/core/common_runtime/process_util.cc:71] Creating new thread pool with default inter op setting: 4. Tune using inter_op_parallelism_threads for best performance.
</code></pre>
<p>Tried following solution given at <a href="https://stackoverflow.com/questions/52883145/why-keras-model-on-bare-cpu-is-faster">Why Keras model on "bare" CPU is faster?</a> but no luck.</p>
<p>I am running this on a Windows 7 Intel i3 CPU 64-bit machine.</p>
<p>What settings do I need to get the code running properly?</p>
|
<p>It's not throwing any error, so I'm guessing the script isn't finding the training dataset. Try downloading the dataset and running it again:</p>
<pre><code>bash download_dataset.sh facades
python pix2pix.py
</code></pre>
|
tensorflow|keras|deep-learning|generative-adversarial-network
| 1 |
1,090 | 42,538,930 |
SSL error with Python requests despite up-to-date dependencies
|
<p>I am getting an SSL "bad handshake" error. Most similar reports of this problem seem to stem from old libraries, 1024-bit cert incompatibility, etc. I <em>think</em> I'm up to date, and can't figure out why I'm getting this error.</p>
<p>SETUP:</p>
<ul>
<li>requests 2.13.0 </li>
<li>certifi 2017.01.23</li>
<li>'OpenSSL 1.0.2g 1 Mar 2016'</li>
</ul>
<p>I'm hitting this API (2048bit certificate key): <a href="https://api.sidecar.io/rest/v1/provision/application/device/count/" rel="noreferrer">https://api.sidecar.io/rest/v1/provision/application/device/count/</a></p>
<p>And getting this error:
<code>requests.exceptions.SSLError: ("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",)</code></p>
<p>See l.44 of <a href="https://github.com/sidecar-io/sidecar-python-sdk/blob/master/sidecar.py" rel="noreferrer">https://github.com/sidecar-io/sidecar-python-sdk/blob/master/sidecar.py</a></p>
<p>If I turn <code>verify=False</code> in requests, I can bypass the check, but I'd rather figure out why the certificate verification is failing.</p>
<p>Any help is greatly appreciated; thanks!</p>
|
<p>The validation fails because the server you access is setup improperly, i.e. it is not a fault of your setup or code. Looking at the <a href="https://www.ssllabs.com/ssltest/analyze.html?d=api.sidecar.io&s=52.25.112.146&latest" rel="noreferrer">report from SSLLabs</a> you see </p>
<blockquote>
<p>This server's certificate chain is incomplete. Grade capped to B.</p>
</blockquote>
<p>This means that the server sends a certificate chain which is missing an intermediate certificate to the trusted root and thus your client can not build the trust chain. Most desktop browsers work around this problem by trying to get the missing certificate from somewhere else but normal TLS libraries will fail in this case. You would need to explicitly add the missing chain certificate as trusted to work around this problem:</p>
<pre><code>import requests
requests.get('https://api.sidecar.io', verify = 'mycerts.pem')
</code></pre>
<p><code>mycerts.pem</code> should contain the missing intermediate certificate and the trusted root certificate. A tested version for <code>mycerts.pem</code> can be found in <a href="http://pastebin.com/aZSKfyb7" rel="noreferrer">http://pastebin.com/aZSKfyb7</a>.</p>
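<p>If you want to build such a bundle yourself, a minimal sketch — the PEM filenames are placeholders for the intermediate and root certificates you obtain:</p>
<pre><code># concatenate the missing intermediate and the trusted root into one bundle
with open('mycerts.pem', 'w') as out:
    for part in ('intermediate.pem', 'root.pem'):  # hypothetical filenames
        with open(part) as f:
            out.write(f.read())
</code></pre>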
|
python|ssl|ssl-certificate|python-requests
| 16 |
1,091 | 53,926,506 |
How to get default browser name using python?
|
<p>The following solutions (actually it is only one) don't work for me:</p>
<blockquote>
<p><a href="https://stackoverflow.com/questions/19037216/how-to-get-a-name-of-default-browser-using-python">How to get a name of default browser using python</a></p>
</blockquote>
<hr>
<blockquote>
<p><a href="https://stackoverflow.com/questions/32681951/how-to-get-name-of-the-default-browser-in-windows-using-python">How to get name of the default browser in windows using python?</a></p>
</blockquote>
<p>Solution was:</p>
<pre><code>from _winreg import HKEY_CURRENT_USER, OpenKey, QueryValue
# In Py3, this module is called winreg without the underscore
with OpenKey(HKEY_CURRENT_USER,
r"Software\Classes\http\shell\open\command") as key:
cmd = QueryValue(key, None)
</code></pre>
<p>But unfortunately, in Windows 10 Pro I don't have the targeted registry value. I've tried to find alternative keys in Regedit, but no luck.</p>
<p>Please take a look at what my registry actually contains:
<a href="https://i.stack.imgur.com/Gk4dm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gk4dm.png" alt="enter image description here"></a></p>
|
<p>The following works for me on Windows 10 pro:</p>
<pre><code>from winreg import HKEY_CURRENT_USER, OpenKey, QueryValueEx
reg_path = r'Software\Microsoft\Windows\Shell\Associations\UrlAssociations\https\UserChoice'
with OpenKey(HKEY_CURRENT_USER, reg_path) as key:
print(QueryValueEx(key, 'ProgId'))
</code></pre>
<p>Result (first with Chrome set as default, then with IE):</p>
<pre>
$ python test.py
('ChromeHTML', 1)
$ python test.py
('IE.HTTPS', 1)
</pre>
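<p>If you want a human-readable browser name rather than the raw ProgId, one option is a small lookup table — the Firefox and Edge entries below are assumed ProgIds that may vary by install:</p>
<pre><code>from winreg import HKEY_CURRENT_USER, OpenKey, QueryValueEx

PROG_ID_NAMES = {
    'ChromeHTML': 'Google Chrome',
    'IE.HTTPS': 'Internet Explorer',
    'FirefoxURL': 'Mozilla Firefox',  # assumed ProgId
    'MSEdgeHTM': 'Microsoft Edge',    # assumed ProgId
}

reg_path = r'Software\Microsoft\Windows\Shell\Associations\UrlAssociations\https\UserChoice'
with OpenKey(HKEY_CURRENT_USER, reg_path) as key:
    prog_id = QueryValueEx(key, 'ProgId')[0]

# fall back to the raw ProgId for browsers not in the table
print(PROG_ID_NAMES.get(prog_id, prog_id))
</code></pre>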
|
python|python-3.x|browser|windows-10
| 2 |
1,092 | 53,833,151 |
Dump pandas DataFrame to SQL statements
|
<p>I need to convert pandas DataFrame object to a series of SQL statements that reproduce the object.</p>
<p>For example, suppose I have a DataFrame object:</p>
<pre><code>>>> df = pd.DataFrame({'manufacturer': ['Audi', 'Volkswagen', 'BMW'],
'model': ['A3', 'Touareg', 'X5']})
>>> df
manufacturer model
0 Audi A3
1 Volkswagen Touareg
2 BMW X5
</code></pre>
<p>I need to convert it to the following SQL representation (not exactly the same):</p>
<pre><code>CREATE TABLE "Auto" (
"index" INTEGER,
"manufacturer" TEXT,
"model" TEXT
);
INSERT INTO Auto (manufacturer, model) VALUES ('Audi', 'A3'), ('Volkswagen', 'Touareg'), ('BMW', 'X5');
</code></pre>
<p>Luckily, pandas DataFrame object has to_sql() method which allows dumping the whole DataFrame to a database through SQLAlchemy engine. I decided to use SQLite in-memory database for this:</p>
<pre><code>>>> from sqlalchemy import create_engine
>>> engine = create_engine('sqlite://', echo=False) # Turning echo to True just logs SQL statements; I'd avoid parsing these logs
>>> df.to_sql(name='Auto', con=engine)
</code></pre>
<p>This is where I'm stuck: I can't dump the SQLite in-memory database to SQL statements, and I can't find a sqlalchemy driver that would dump SQL statements into a file instead of executing them.</p>
<p>Is there a way to dump all queries sent to SQLAlchemy engine as SQL statements to a file? </p>
<p>My not-so-elegant solution so far:</p>
<pre><code>>>> from sqlalchemy import MetaData
>>> meta = MetaData()
>>> meta.reflect(bind=engine)
>>> print(pd.io.sql.get_schema(df, name='Auto') + ';')
CREATE TABLE "Auto" (
"manufacturer" TEXT,
"model" TEXT
);
>>> print('INSERT INTO Auto ({}) VALUES\n{};'.format(', '.join([repr(c) for c in df.columns]), ',\n'.join([str(row[1:]) for row in engine.execute(meta.tables['Auto'].select())])))
INSERT INTO Auto ('manufacturer', 'model') VALUES
('Audi', 'A3'),
('Volkswagen', 'Touareg'),
('BMW', 'X5');
</code></pre>
<p>I would actually prefer a solution that does not require building the SQL statements manually.</p>
|
<p>SQLite actually allows one to dump the whole database to a series of SQL statements with <a href="https://www.sqlite.org/cli.html#converting_an_entire_database_to_an_ascii_text_file" rel="nofollow noreferrer">dump command</a>. This functionality is also available in python DB-API interface for SQLite: sqlite3, specifically, through <a href="https://docs.python.org/2/library/sqlite3.html#connection-objects" rel="nofollow noreferrer">connection object's iterdump() method</a>. As far as I know, SQLAlchemy does not provide this functionality.</p>
<p>Thus, to dump a pandas DataFrame to a series of SQL statements, one needs to first dump it to an in-memory SQLite database, and then dump this database using the iterdump() method:</p>
<pre><code>import sys
from sqlalchemy import create_engine

engine = create_engine('sqlite://', echo=False)
table_name = 'Auto'
stream = sys.stdout  # or any writable file object

# reset_index() is needed to preserve the index column in the dumped data
df.reset_index().to_sql(name=table_name, con=engine)

with engine.connect() as conn:
    for line in conn.connection.iterdump():
        stream.write(line)
        stream.write('\n')
</code></pre>
<p><code>engine.connect().connection</code> allows one to get the <a href="https://docs.sqlalchemy.org/en/latest/core/connections.html#working-with-raw-dbapi-connections" rel="nofollow noreferrer">raw DBAPI connection</a>.</p>
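<p>To dump to a file instead of stdout, point <code>stream</code> at an open file handle (the filename is a placeholder):</p>
<pre><code>with open('dump.sql', 'w') as stream:
    with engine.connect() as conn:
        for line in conn.connection.iterdump():
            stream.write(line + '\n')
</code></pre>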
|
python|pandas|sqlite|sqlalchemy
| 3 |
1,093 | 58,351,948 |
Python requests, how to send json request without " "
|
<p>My code looks like this:</p>
<pre><code> data = {
"undelete_user":'false'
}
data_json = json.dumps(data)
print(data_json)
</code></pre>
<p>Output is: </p>
<pre><code>{"undelete_user": "false"}
</code></pre>
<p>I need the output to be without quotes, so it looks like:</p>
<pre><code>{"undelete_user": false}
</code></pre>
<p>otherwise, when I send the request, I get a "failed to decode JSON" error</p>
|
<pre><code>import json
data = {
"undelete_user": False
}
data_json = json.dumps(data)
print(data_json)
</code></pre>
<p>All you had to do was replace <code>'false'</code> with <code>False</code>: you were passing it as a string, when it should be a boolean. <code>json.dumps</code> then serializes it as an unquoted JSON <code>false</code>.
I hope it helped!</p>
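<p>The printed output is then exactly what you wanted:</p>
<pre><code>{"undelete_user": false}
</code></pre>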
|
python|json|python-3.x
| 3 |
1,094 | 22,599,692 |
For loop not iterating?
|
<p>I am a python newbie and I seem to be having an issue and I can't see what I am doing wrong. I am trying to make it so that when I enter a string it turns the string into pig latin. The issue is that when I do this it only prints out the first word in the string converted. Would anyone be able to point me in the right direction?</p>
<p>Cheers</p>
<pre><code>def pig_latin(data):
    words = data.split()
    piglatin = []
    vowels = ["a", "i", "e", "u", "o", "1", "2", "3", "4", "5", "6",
              "7", "8", "9", "0"]
    for word in words:
        if word[0] in vowels:
            word = word + "way"
        else:
            word = word.replace(word[0],"") + word[0] + "ay"
        word = word.lower()
        piglatin.append(word)
        piglatin = "".join(piglatin)
        return piglatin
</code></pre>
|
<p>Your <code>return</code> statement is inside the <code>for</code> loop due to bad indentation, so obviously it will return after one iteration.<br>
Here is the code that will fix this, along with some other changes:</p>
<pre><code>def pigetize(text, vowels):
    return ((text + "way") if text[0] in vowels else (text[1:] + text[0] + "ay")).lower()

def pig_latin(data):
    words = data.split()
    piglatin = []
    vowels = ["a", "i", "e", "u", "o"] + [str(x) for x in range(10)]
    for word in words:
        piglatin.append(pigetize(word, vowels))
    return "".join(piglatin)
</code></pre>
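<p>A quick check of the fixed version:</p>
<pre><code>>>> pig_latin("hello world")
'ellohayorldway'
</code></pre>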
|
python-3.x
| 0 |
1,095 | 22,726,878 |
Python, append within a loop
|
<p>So I need to save the results of a loop and I'm having some difficulty. I want to record my results to a new list, but I get "string index out of range" and other errors. The end goal is to record the products of digits 1-5, 2-6, 3-7 etc, eventually keeping the highest product. </p>
<pre><code>def product_of_digits(number):
    d = str(number)
    for integer in d:
        s = 0
        k = []
        while s < (len(d)):
            j = (int(d[s])*int(d[s+1])*int(d[s+2])*int(d[s+3])*int(d[s+4]))
            s += 1
            k.append(j)
        print(k)

product_of_digits(n)
</code></pre>
|
<p>Hi Chauxvive — a similar question came up some time ago.</p>
<p>This is because you let <code>s</code> run all the way to the last index of <code>d</code> and then access <code>d[s+1]</code> through <code>d[s+4]</code>, which run past the end of the string. Instead, you should change your <code>while</code> loop to:</p>
<p><code>while s < (len(d)-4):</code></p>
|
python|list|loops|append|product
| 0 |
1,096 | 45,480,459 |
How to serialize and deserialize objects with cbor2?
|
<p>I'm trying to serialize and deserialize objects using cbor2, but even after following the documentation I cannot properly do it. Let's suppose I have the following two classes:</p>
<pre><code>class A(object):
    def __init__(self):
        self.a = 5
        self.b = set()

    def a(self):
        return self.a

class B(object):
    def __init__(self, a):
        self._a = a

    def a(self):
        return a

a = A()
b = B(a)
</code></pre>
<p>Can anyone show me how to do it for the object <code>a</code> please?</p>
<p>Thanks</p>
|
<p>Sorry for a late answer.</p>
<p>CBOR2 is currently missing support for serializing sets which could be stored as tagged arrays.</p>
<p>There is a ticket for adding support here: </p>
<p><a href="https://github.com/agronholm/cbor2/issues/14" rel="nofollow noreferrer">https://github.com/agronholm/cbor2/issues/14</a></p>
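<p>In the meantime you can work around it with the <code>default</code> hook, which cbor2 calls for any type it cannot encode natively. A minimal sketch for your classes — representing sets as arrays and objects as attribute dicts is my own choice here, not something cbor2 prescribes:</p>
<pre><code>import cbor2

def default_encoder(encoder, value):
    if isinstance(value, set):
        encoder.encode(list(value))   # store sets as CBOR arrays
    elif isinstance(value, (A, B)):
        encoder.encode(vars(value))   # store objects as attribute dicts
    else:
        raise TypeError('cannot serialize %r' % type(value))

payload = cbor2.dumps(a, default=default_encoder)
restored = cbor2.loads(payload)       # comes back as a plain dict
</code></pre>
<p>Note that on the way back you only get dicts and lists, so you'd have to rebuild the <code>A</code>/<code>B</code> instances yourself.</p>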
|
python|serialization|deserialization|cbor
| 0 |
1,097 | 28,598,140 |
Pandas: Incrementally count occurrences in a column
|
<p>I have a DataFrame (df) which contains a 'Name' column. In a column labeled 'Occ_Number' I would like to keep a running tally on the number of appearances of each value in 'Name'. </p>
<p>For example:</p>
<pre><code>Name Occ_Number
abc 1
def 1
ghi 1
abc 2
abc 3
def 2
jkl 1
jkl 2
</code></pre>
<p>I've been trying to come up with a method using</p>
<pre><code>>df['Name'].value_counts()
</code></pre>
<p>but can't quite figure out how to tie it all together. I can only get the grand total from value_counts(). My process thus far involves creating a list of the 'Name' column string values which contain counts greater than 1 with the following code:</p>
<pre><code>>things = df['Name'].value_counts()
>things = things[things > 1]
>queries = things.index.values
</code></pre>
<p>I was hoping to then somehow cycle through 'Name' and conditionally add to Occ_Number by checking against queries, but this is where I'm getting stuck. Does anybody know of a way to do this? I would appreciate any help. Thank you!</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="noreferrer"><code>cumcount</code></a>
to avoid a dummy column:</p>
<pre><code>>>> df["Occ_Number"] = df.groupby("Name").cumcount()+1
>>> df
Name Occ_Number
0 abc 1
1 def 1
2 ghi 1
3 abc 2
4 abc 3
5 def 2
6 jkl 1
7 jkl 2
</code></pre>
|
python|pandas|dataframe
| 27 |
1,098 | 56,973,197 |
Connection of Event hubs to Azure Databricks
|
<p>I want to add libraries in Azure Databricks for connecting to Event Hubs. I will be writing notebooks in python. So which library should I add for connecting to Event Hubs?</p>
<p>As per my search so far, I found a Spark connector library under Maven coordinates. But I don't think I will be able to import it in python.</p>
|
<p>Structured streaming integration for Azure Event Hubs is ultimately run on the JVM, so you'll need to import the libraries from the Maven coordinate below:</p>
<pre><code> groupId = com.microsoft.azure
artifactId = azure-eventhubs-spark_2.11
version = 2.3.10
</code></pre>
<p><strong>Note:</strong> For Python applications, you need to add this above library and its dependencies when deploying your application.</p>
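<p>Once the library is attached to the cluster, a minimal PySpark read sketch looks like the following — the connection string is a placeholder, and note that newer connector versions may require it to be encrypted via <code>EventHubsUtils.encrypt</code>:</p>
<pre><code># placeholder connection string for your Event Hubs namespace
connection_string = "Endpoint=sb://<NAMESPACE>.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...;EntityPath=..."

ehConf = {'eventhubs.connectionString': connection_string}

df = (spark.readStream
      .format("eventhubs")
      .options(**ehConf)
      .load())
</code></pre>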
<p>For more details, refer "<a href="https://github.com/Azure/azure-event-hubs-spark/blob/master/docs/PySpark/structured-streaming-pyspark.md#deploying" rel="nofollow noreferrer">Structured streaming + Event Hubs Integration Guide for PySpark</a>" and "<a href="https://docs.microsoft.com/en-us/azure/azure-databricks/databricks-stream-from-eventhubs#attach-libraries-to-spark-cluster" rel="nofollow noreferrer">Attach libraries to Spark Cluster</a>".</p>
<p>And also, you may refer <a href="https://stackoverflow.com/questions/49365852/how-to-process-eventhub-stream-with-pyspark-and-custom-python-function/52770596#52770596">SO</a> thread, which addresses a similar issue.</p>
<p>Hope this helps.</p>
|
python|azure|azure-databricks
| 0 |
1,099 | 23,995,473 |
Having trouble comparing a variable to an input in a while loop
|
<p>I'm having some trouble with a basic program I'm making while learning Python. The problem is that I am trying to compare a user's input to a variable I have set, and the comparison is not working.</p>
<p>This is the loop in question:</p>
<pre><code>if del_question == "1":
    symbol = input("What symbol would you like to change?: ")
    while len(symbol) != 1 or symbol not in words:
        print("Sorry, that is not a valid symbol")
        symbol = input("What symbol would you like to change?: ")
    letter = input("What would you like to change it to?: ")
    while letter in words and len(letter) != 1:
        print("Sorry, that is not a valid letter")
        letter = input("What letter would you like to change?: ")
    dictionary[symbol] = letter
    words = words.replace(symbol, letter)
    print("Here is your new code: \n", words)
</code></pre>
<p>The game is about breaking a code by pairing letters and symbols, and this is where the pairing happens. On the letter input, when I try to make it so that you cannot pair up the same letter twice, the check is simply bypassed. The equivalent check works on the symbol input, but I'm not sure why it doesn't work here.</p>
<p>Here is the text file importing:</p>
<pre><code>code_file = open("words.txt", "r")
word_file = open("solved.txt", "r")
letter_file = open("letter.txt", "r")
</code></pre>
<p>and:</p>
<pre><code>solved = word_file.read()
words = code_file.read()
clue = clues_file.read()
</code></pre>
<p>This is the contents of the words file:</p>
<pre><code>#+/084&"
#3*#%#+
8%203:
,1$&
!-*%
.#7&33&
#*#71%
&-&641'2
#))85
9&330*
</code></pre>
|
<p>Your bug is a simple logic error. You have an <code>and</code> conditional when you really want an <code>or</code> conditional. Change your second while statement to:</p>
<pre><code>while letter in words or len(letter) != 1:
</code></pre>
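<p>In context, with only the conditional changed (a sketch — the rest of your loop stays as-is):</p>
<pre><code>letter = input("What would you like to change it to?: ")
while letter in words or len(letter) != 1:
    print("Sorry, that is not a valid letter")
    letter = input("What letter would you like to change?: ")
</code></pre>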
|
python|variables
| 1 |