Why does map return a map object instead of a list in Python 3?
40,015,439
<p>I am interested in understanding the <a href="http://stackoverflow.com/questions/1303347/getting-a-map-to-return-a-list-in-python-3-x">new language design of Python 3.x</a>.</p> <p>I do enjoy, in Python 2.7, the function <code>map</code>:</p> <pre><code>Python 2.7.12 In[2]: map(lambda x: x+1, [1,2,3]) Out[2]: [2, 3, 4] </code></pre> <p>However, in Python 3.x things have changed:</p> <pre><code>Python 3.5.1 In[2]: map(lambda x: x+1, [1,2,3]) Out[2]: &lt;map at 0x4218390&gt; </code></pre> <p>I understand the how, but I could not find a reference to the why. Why did the language designers make this choice, which, in my opinion, introduces a great deal of pain? Was this to arm-wrestle developers into sticking to list comprehensions?</p> <p>IMO, lists can naturally be thought of as <a href="http://learnyouahaskell.com/functors-applicative-functors-and-monoids" rel="nofollow">Functors</a>, and I have somehow been taught to think this way:</p> <pre><code>fmap :: (a -&gt; b) -&gt; f a -&gt; f b </code></pre>
20
2016-10-13T08:01:41Z
40,015,905
<p>Guido answers this question <a href="https://docs.python.org/3/whatsnew/3.0.html">here</a>: "<em>since creating a list would just be wasteful</em>".</p> <p>He also says that the correct transformation is to use a regular <code>for</code> loop.</p> <p>Converting <code>map()</code> from Python 2 to 3 might not just be a simple case of wrapping <code>list()</code> around it. Guido also says:</p> <p>"If the input sequences are not of equal length, <code>map()</code> will stop at the termination of the shortest of the sequences. For full compatibility with <code>map()</code> from Python 2.x, also wrap the sequences in <code>itertools.zip_longest()</code>, e.g.</p> <pre><code>map(func, *sequences) </code></pre> <p>becomes</p> <pre><code>list(map(func, itertools.zip_longest(*sequences))) </code></pre> <p>"</p>
6
2016-10-13T08:27:25Z
[ "python", "python-3.x" ]
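The laziness the answer describes can be sketched in Python 3 (values are illustrative):

```python
import itertools

# In Python 3, map() returns a lazy iterator: no work happens until
# the values are consumed.
lazy = map(lambda x: x + 1, [1, 2, 3])
result = list(lazy)          # materialize explicitly when a list is needed
print(result)                # [2, 3, 4]

# The iterator is single-use: consuming it again yields nothing.
print(list(lazy))            # []

# For unequal-length inputs, Python 3's map() stops at the shortest
# sequence; zip_longest() pads the shorter one with None instead.
a, b = [1, 2, 3], [10, 20]
print(list(map(lambda x, y: (x, y), a, b)))   # [(1, 10), (2, 20)]
print(list(itertools.zip_longest(a, b)))      # [(1, 10), (2, 20), (3, None)]
```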
Python how to decode unicode with hex characters
40,015,477
<p>I have extracted a string from a web crawl script, as follows:</p> <pre><code>u'\xe3\x80\x90\xe4\xb8\xad\xe5\xad\x97\xe3\x80\x91' </code></pre> <p>I want to decode <code>u'\xe3\x80\x90\xe4\xb8\xad\xe5\xad\x97\xe3\x80\x91'</code> with utf-8. With <a href="http://ddecode.com/hexdecoder/" rel="nofollow">http://ddecode.com/hexdecoder/</a>, I can see the result is <code>'【中字】'</code>.</p> <p>I tried the following syntax, but it failed.</p> <pre><code>msg = u'\xe3\x80\x90\xe4\xb8\xad\xe5\xad\x97\xe3\x80\x91' result = msg.decode('utf8') </code></pre> <p>Error:</p> <pre><code>Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "C:\Python27\lib\encodings\utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-11: ordinal not in range(128) </code></pre> <p>May I ask how to decode the string correctly?</p> <p>Thanks for the help.</p>
2
2016-10-13T08:04:01Z
40,015,550
<p>Just keep <code>msg</code> as a byte string, not a unicode string.</p> <pre><code>msg = '\xe3\x80\x90\xe4\xb8\xad\xe5\xad\x97\xe3\x80\x91' result = msg.decode('utf8') </code></pre>
0
2016-10-13T08:08:18Z
[ "python", "utf-8", "python-2.x" ]
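For comparison, here is the same idea in Python 3 terms, where the byte string / unicode distinction is explicit (a sketch, independent of the original answer):

```python
# The bytes from the crawler, kept as a byte string (b'...') rather than
# a unicode string (u'...'), decode cleanly as UTF-8.
msg = b'\xe3\x80\x90\xe4\xb8\xad\xe5\xad\x97\xe3\x80\x91'
result = msg.decode('utf8')
print(result)  # 【中字】
```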
Python how to decode unicode with hex characters
40,015,477
<p>I have extracted a string from a web crawl script, as follows:</p> <pre><code>u'\xe3\x80\x90\xe4\xb8\xad\xe5\xad\x97\xe3\x80\x91' </code></pre> <p>I want to decode <code>u'\xe3\x80\x90\xe4\xb8\xad\xe5\xad\x97\xe3\x80\x91'</code> with utf-8. With <a href="http://ddecode.com/hexdecoder/" rel="nofollow">http://ddecode.com/hexdecoder/</a>, I can see the result is <code>'【中字】'</code>.</p> <p>I tried the following syntax, but it failed.</p> <pre><code>msg = u'\xe3\x80\x90\xe4\xb8\xad\xe5\xad\x97\xe3\x80\x91' result = msg.decode('utf8') </code></pre> <p>Error:</p> <pre><code>Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "C:\Python27\lib\encodings\utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-11: ordinal not in range(128) </code></pre> <p>May I ask how to decode the string correctly?</p> <p>Thanks for the help.</p>
2
2016-10-13T08:04:01Z
40,015,562
<ol> <li><p>Perhaps you should fix the crawl script instead, a Unicode string should contain <code>u'【中字】'</code> (<code>u'\u3010\u4e2d\u5b57\u3011'</code>) already, instead of the raw UTF-8 bytes.</p></li> <li><p>To convert <code>msg</code> to the correct encoding, first you need to turn the wrong Unicode string back to byte string (encode it as Latin-1), <em>then</em> decode it as UTF-8:</p> <pre><code>&gt;&gt;&gt; print msg.encode('latin1').decode('utf-8') 【中字】 </code></pre></li> </ol>
2
2016-10-13T08:09:26Z
[ "python", "utf-8", "python-2.x" ]
Python how to decode unicode with hex characters
40,015,477
<p>I have extracted a string from a web crawl script, as follows:</p> <pre><code>u'\xe3\x80\x90\xe4\xb8\xad\xe5\xad\x97\xe3\x80\x91' </code></pre> <p>I want to decode <code>u'\xe3\x80\x90\xe4\xb8\xad\xe5\xad\x97\xe3\x80\x91'</code> with utf-8. With <a href="http://ddecode.com/hexdecoder/" rel="nofollow">http://ddecode.com/hexdecoder/</a>, I can see the result is <code>'【中字】'</code>.</p> <p>I tried the following syntax, but it failed.</p> <pre><code>msg = u'\xe3\x80\x90\xe4\xb8\xad\xe5\xad\x97\xe3\x80\x91' result = msg.decode('utf8') </code></pre> <p>Error:</p> <pre><code>Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "C:\Python27\lib\encodings\utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-11: ordinal not in range(128) </code></pre> <p>May I ask how to decode the string correctly?</p> <p>Thanks for the help.</p>
2
2016-10-13T08:04:01Z
40,017,764
<p>The problem with</p> <pre><code>msg = u'\xe3\x80\x90\xe4\xb8\xad\xe5\xad\x97\xe3\x80\x91' result = msg.decode('utf8') </code></pre> <p>is that you are trying to decode Unicode. That doesn't really make sense. You can encode <em>from</em> Unicode to some type of encoding, or you can decode a byte string <em>to</em> Unicode.</p> <p>When you do </p> <pre><code>msg.decode('utf8') </code></pre> <p>Python 2 sees that <code>msg</code> is Unicode. It knows that it can't decode Unicode so it "helpfully" assumes that you want to encode <code>msg</code> with the default ASCII codec so the result of that transformation can be decoded to Unicode using the UTF-8 codec. Python 3 behaves much more sensibly: that code would simply fail with </p> <pre><code>AttributeError: 'str' object has no attribute 'decode' </code></pre> <p>The technique given in kennytm's answer: </p> <pre><code>msg.encode('latin1').decode('utf-8') </code></pre> <p>works because the Unicode codepoints less than 256 correspond directly to the characters in the <a href="https://en.wikipedia.org/wiki/Latin-1_Supplement_%28Unicode_block%29" rel="nofollow">Latin1</a> encoding (aka ISO 8859-1). </p> <p>Here's some Python 2 code that illustrates this:</p> <pre><code>for i in xrange(256): lat = chr(i) uni = unichr(i) assert lat == uni.encode('latin1') assert lat.decode('latin1') == uni </code></pre> <p>And here is the equivalent Python 3 code:</p> <pre><code>for i in range(256): lat = bytes([i]) uni = chr(i) assert lat == uni.encode('latin1') assert lat.decode('latin1') == uni </code></pre> <p>You may find this article helpful: <a href="http://nedbatchelder.com/text/unipain.html" rel="nofollow">Pragmatic Unicode</a>, which was written by SO veteran Ned Batchelder.</p> <p>Unless you are forced to use Python 2 I strongly advise you to switch to Python 3. It will make handling Unicode far less painful.</p>
1
2016-10-13T09:53:11Z
[ "python", "utf-8", "python-2.x" ]
Django filter based on specific item from reverse relation
40,015,616
<p>I have two models that could look something like this:</p> <pre><code>class Course(models.Model): pass class WeeklyPrice(models.Model): num_weeks = models.IntegerField() price = models.DecimalField() course = models.ForeignKey(Course) is_erased = models.BooleanField() </code></pre> <p>The thing is, I have a search form which needs to be able to filter by a minimum number of weeks.</p> <p>On Course creation, a bunch of <code>WeeklyPrices</code> are also created, which can then be deleted (logically, by setting is_erased to True), so a good example would be that a Course has 13 instances of WeeklyPrice, starting at week 4, because weeks 1, 2 and 3 have been deleted.</p> <p>So how can I retrieve a queryset of <code>Courses</code> with an instance of <code>WeeklyPrice</code> whose <code>num_weeks</code> is less than or equal to the <code>duration</code> variable retrieved from the search form?</p>
-1
2016-10-13T08:11:46Z
40,016,393
<p>OK, so I ended up finding out it's a lot simpler than it sounds...</p> <p>The filter needed is:</p> <pre><code>courses.filter(weekly_prices__is_erased=False, weekly_prices__num_weeks=duration) </code></pre>
0
2016-10-13T08:52:03Z
[ "python", "django" ]
Grouping a pandas dataframe in a suitable format for creating a chart
40,015,666
<p>Suppose I have the following pandas dataframe:</p> <pre><code>In [1]: df Out[1]: sentiment date 0 pos 2016-10-08 1 neu 2016-10-08 2 pos 2016-10-09 3 neg 2016-10-09 4 neg 2016-10-09 </code></pre> <p>I can indeed create a dataframe that makes summary statistics about the sentiment column per day as follows:</p> <pre><code>gf=df.groupby(["date", "sentiment"]).size().reset_index(name='count') </code></pre> <p>which gives</p> <pre><code>In [2]: gf Out[2]: date sentiment count 0 2016-10-08 neu 1 1 2016-10-08 pos 1 2 2016-10-09 neg 2 3 2016-10-09 pos 1 </code></pre> <p>However I need to transform this result in the following tabular format (or new dataframe) so as to be able to make a bar chart (like for example as in <a href="https://drive.google.com/previewtemplate?id=0AoFkkLP2MB8kdDRrWDdLdU5mU1EzZGtTUmhwZXBJclE&amp;mode=public&amp;ddrp=1#" rel="nofollow">this</a> Google bar chart).</p> <pre><code> date pos neg neu 0 2016-10-08 1 0 1 1 2016-10-09 1 2 0 </code></pre> <p>I tried to go about it by creating a new dataframe</p> <pre><code>columns = ['date','pos', 'neg', 'neu'] clean_sheet = pd.DataFrame(columns=columns) </code></pre> <p>and then iterating over <code>gf</code> looking for unique dates and in turn iterating on those searching for either pos, neg or neu with <code>.loc</code> but it got really messy</p> <p>Any ideas for a simpler solution?</p> <p>Thanks</p>
2
2016-10-13T08:14:55Z
40,015,705
<p>You need to add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html" rel="nofollow"><code>unstack</code></a>:</p> <pre><code>gf = df.groupby(["date", "sentiment"]).size().unstack(fill_value=0).reset_index() #remove column name 'sentiment' gf.columns.name = None print (gf) date neg neu pos 0 2016-10-08 0 1 1 1 2016-10-09 2 0 1 </code></pre> <p>Another, slower solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow"><code>pivot_table</code></a>:</p> <pre><code>gf = df.pivot_table(index="date", columns="sentiment", aggfunc=len, fill_value=0).reset_index() gf.columns.name = None print (gf) date neg neu pos 0 2016-10-08 0 1 1 1 2016-10-09 2 0 1 </code></pre> <p>And the last solution is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="nofollow"><code>crosstab</code></a>, but on a larger <code>DataFrame</code> it is slower:</p> <pre><code>gf = pd.crosstab(df.date, df.sentiment).reset_index() gf.columns.name = None print (gf) date neg neu pos 0 2016-10-08 0 1 1 1 2016-10-09 2 0 1 </code></pre> <p><strong>Timings</strong> (pandas 0.19.0):</p> <pre><code>#[50000 rows x 2 columns] df = pd.concat([df]*10000).reset_index(drop=True) In [197]: %timeit (df.groupby(["date", "sentiment"]).size().unstack(fill_value=0).reset_index()) 100 loops, best of 3: 6.3 ms per loop In [198]: %timeit (df.pivot_table(index="date", columns="sentiment", aggfunc=len, fill_value=0).reset_index()) 100 loops, best of 3: 12.2 ms per loop In [199]: %timeit (pd.crosstab( df.date, df.sentiment).reset_index()) 100 loops, best of 3: 11.3 ms per loop </code></pre>
2
2016-10-13T08:16:34Z
[ "python", "pandas", "dataframe", "pivot-table", "reshape" ]
Accepting an Excel file with io stream
40,015,789
<p>Currently I am working on an Excel file and taking its path as input:</p> <pre><code>myObj = ProcessExcelFile("C:\SampleExcel.xlsx") </code></pre> <p>The constructor is:</p> <pre><code>def __init__(self, filepath): ''' Constructor ''' if not os.path.isfile(filepath): raise FileNotFoundError(filepath) self.datawb = xlrd.open_workbook(filepath) </code></pre> <p>But now, from another interface, I want to use the same class, and it's not sending me a file path; it's sending me a file through an io stream:</p> <pre><code>data = req.stream.read(req.content_length) file = io.BytesIO(data) </code></pre> <p>Now when I pass this <code>file</code> variable as</p> <pre><code>myObj = ProcessExcelFile(file) </code></pre> <p>it gives me the error</p> <pre><code>TypeError: argument should be string, bytes or integer, not _io.BytesIO </code></pre> <p>I want to make my <code>__init__</code> so that it can take a path as well as an io stream; if that's not possible, handling the file from the io stream has priority.</p>
0
2016-10-13T08:21:16Z
40,015,866
<p>You would need to modify your class to allow streams as well. Pass the stream to <code>open_workbook</code> via <code>file_contents</code>.</p> <pre><code>def __init__(self, filepath, stream=False): ''' Constructor ''' if stream: self.datawb = xlrd.open_workbook(file_contents=filepath.read()) else: if not os.path.isfile(filepath): raise FileNotFoundError(filepath) self.datawb = xlrd.open_workbook(filepath) </code></pre>
0
2016-10-13T08:25:20Z
[ "python", "excel", "io", "xlrd" ]
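A possible alternative to the boolean flag is duck typing: dispatch on whether the argument has a `read()` method. The helper below is a hypothetical sketch (`load_workbook_args` is not part of xlrd); the keyword arguments it returns would be passed on via `xlrd.open_workbook(**args)`.

```python
import io
import os

def load_workbook_args(source):
    """Hypothetical helper (not part of xlrd): build keyword arguments
    for xlrd.open_workbook() from either a path or a stream."""
    if hasattr(source, 'read'):
        # anything with a read() method is treated as a stream;
        # open_workbook() accepts raw bytes via file_contents=...
        return {'file_contents': source.read()}
    if not os.path.isfile(source):
        raise FileNotFoundError(source)
    return {'filename': source}

# a stream caller would then do:
args = load_workbook_args(io.BytesIO(b'fake-bytes'))
print(args)  # {'file_contents': b'fake-bytes'}
# self.datawb = xlrd.open_workbook(**args)
```

With this, the constructor body collapses to one line and no extra `stream` flag is needed.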
Use all coordinates in grid [python]
40,015,824
<p>For a minesweeper I have created a board using python and pygame. If you click a bomb, you should see the entire board. I have separate functions that contain the (randomised) bomb positions, and create the numbers around the bombs (on the proper coordinates). How do I make sure it checks the coordinates 0 to GRID_TILES (the maximum range)?</p> <p>This is how I show the 'clicked' coordinates</p> <pre><code>def handle_mouse(mousepos): x, y = mousepos x, y = math.ceil(x / 40), math.ceil(y / 40) check = x, y if check in FLAGS: print("You have to unflag this tile before clicking!") else: CLICKED.append(check) draw_item(CELLS[x -1][y - 1], x - 1, y - 1, check) bomb_check(check) def draw_item(item, x, y, check): global BLOCK_SIZE, screen background = pygame.image.load("img/white.png") if check in BOMBS: image = pygame.image.load("img/9.png") else: image = pygame.image.load("img/"+str(item)+".png") x, y = x * BLOCK_SIZE, y * BLOCK_SIZE screen.blit(background, (x, y)) screen.blit(image, (x + 10, y + 10)) pygame.display.flip() </code></pre> <p>Using:</p> <pre><code>def game_mainloop(): while True: for event in pygame.event.get(): if event.type == pygame.MOUSEBUTTONDOWN and event.button == 1: handle_mouse(pygame.mouse.get_pos()) if event.type == pygame.MOUSEBUTTONDOWN and event.button == 3: handle_flag(pygame.mouse.get_pos()) </code></pre> <p>The following definitions are:</p> <p>CELLS = list of number of bombs adjacent to a tile</p> <p>FLAGS = a list of the flagged positions</p> <p>CLICKED = a list of clicked positions</p> <p>bomb_check = handles if the clicked coordinate is a bomb</p> <p>I have imported both pygame and math.</p> <p>As of now, the code just opens up the tile you click, but I want to know how to get another line of code to open up every tile in the grid</p>
0
2016-10-13T08:22:59Z
40,026,216
<p>I got the solution:</p> <pre><code>def show_board(): for x in range(0, GRID_TILES): for y in range(0, GRID_TILES): draw_item(CELLS[x][y], x, y, (x+1, y+1)) </code></pre> <p>If I call this after hitting a bomb, it will show the entire board. If I just want to use every single coordinate on the grid, the loop skeleton</p> <pre><code>def show_board(): for x in range(0, GRID_TILES): for y in range(0, GRID_TILES): ...  # do something with (x, y) </code></pre> <p>is sufficient.</p>
1
2016-10-13T16:20:42Z
[ "python", "python-3.x", "pygame" ]
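The same traversal can also be written with `itertools.product`; a sketch with an illustrative grid size:

```python
import itertools

GRID_TILES = 3  # illustrative size; the real value comes from the game config

# itertools.product(range(n), repeat=2) yields every (x, y) pair of the
# grid, equivalent to the nested for-loops in the answer above.
coords = list(itertools.product(range(GRID_TILES), repeat=2))
print(len(coords))  # 9
print(coords[:3])   # [(0, 0), (0, 1), (0, 2)]

# e.g. show_board() could then be written as:
# for x, y in itertools.product(range(GRID_TILES), repeat=2):
#     draw_item(CELLS[x][y], x, y, (x + 1, y + 1))
```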
How we stash Data to Cloudant DB in IBM Bluemix?
40,015,836
<p>Currently I am trying to stash data into Cloudant DB from a notebook using Python's Pixiedust package. After establishing a connection, it gives me this error when trying to insert data into the database:</p> <pre><code>Py4JJavaError: An error occurred while calling o172.save. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 38.0 failed 10 times, most recent failure: ' Lost task 3.9 in stage 38.0 (TID 1811, yp-spark-dal09-env5-0046): spray.http.IllegalUriException: The path of an URI without authority must not begin with "//" </code></pre> <p>Please tell me how I can stash data into Cloudant DB from a notebook.</p>
0
2016-10-13T08:24:01Z
40,031,314
<p>This issue has been addressed and a new version of PixieDust has been uploaded to PyPI. You can download the latest version in your notebook by issuing the following command:</p> <p><code>!pip install --user --upgrade --no-deps pixiedust</code></p> <p>Restart your kernel after the upgrade is complete.</p> <p>You can verify your PixieDust version (this was fixed in version 0.40) by running the following command:</p> <p><code>!pip show pixiedust</code></p> <p>The output should look similar to the following:</p> <pre><code>--- Metadata-Version: 1.0 Name: pixiedust Version: 0.40 Summary: Misc helpers for Spark Python Notebook Home-page: https://github.com/ibm-cds-labs/pixiedust Author: David Taieb Author-email: [email protected] License: Apache 2.0 Location: /gpfs/global_fs01/sym_shared/YPProdSpark/user/sc7c-6877c2673165c0-9db077469d48/.local/lib/python2.7/site-packages Requires: maven-artifact, mpld3 Classifiers: </code></pre>
0
2016-10-13T21:29:06Z
[ "python", "ibm-bluemix", "cloudant", "pixiedust" ]
How to stream POST data into Python requests?
40,015,869
<p>I'm using the Python <code>requests</code> library to send a POST request. The part of the program that produces the POST data can <strong>write</strong> into an arbitrary file-like object (output stream).</p> <p>How can I make these two parts fit?</p> <p>I would have expected that <code>requests</code> provides a streaming interface for this use case, but it seems it doesn't. It only accepts, as its <code>data</code> argument, a file-like object from which it <strong>reads</strong>. It doesn't provide a file-like object into which I can <strong>write</strong>.</p> <p>Is this a fundamental issue with the Python HTTP libraries?</p> <h1>Ideas so far:</h1> <p>It seems that the simplest solution is to <code>fork()</code> and to let the requests library communicate with the POST data producer through a <strong>pipe</strong>.</p> <p>Is there a better way?</p> <p>Alternatively, I could try to complicate the POST data producer. However, that one is parsing one XML stream (from stdin) and producing a new XML stream to be used as POST data. Then I have the same problem in reverse: the XML serializer libraries want to <strong>write</strong> into a file-like object; I'm not aware of any possibility that an XML serializer provides a file-like object from which others can <strong>read</strong>.</p> <p>I'm also aware that the cleanest, classic solution to this is coroutines, which are somewhat available in Python through generators (<code>yield</code>). The POST data could be streamed through (<code>yield</code>) instead of a file-like object and use a pull-parser.</p> <p>However, is it possible to make <code>requests</code> accept an iterator for POST data? And is there an XML serializer that can readily be used in combination with <code>yield</code>?</p> <p>Or, are there any wrapper objects that turn writing into a file-like object into a generator, and/or provide a file-like object that wraps an iterator?</p>
2
2016-10-13T08:25:30Z
40,018,118
<p>The only way of connecting a data producer that requires a push interface for its data sink with a data consumer that requires a pull interface for its data source is through an intermediate buffer. Such a system can be operated only by running the producer and the consumer in "parallel" - the producer fills the buffer and the consumer reads from it, each of them being suspended as necessary. Such a parallelism can be simulated with <em>cooperative multitasking</em>, where the producer yields the control to the consumer when the buffer is full, and the consumer returns the control to the producer when the buffer gets empty. By taking the generator approach you will be building a custom-tailored cooperative multitasking solution for your case, which will hardly end up being simpler compared to the easy pipe-based approach, where the responsibility of scheduling the producer and the consumer is entirely with the OS.</p>
0
2016-10-13T10:10:34Z
[ "python", "xml", "http", "python-requests", "generator" ]
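A minimal sketch of the pipe-based approach described above (names and payload are illustrative):

```python
import os
import threading

# The producer gets the push interface (a writable file object); the
# consumer gets the pull interface (a readable one); the OS-level pipe
# is the intermediate buffer and handles the scheduling.
read_fd, write_fd = os.pipe()

def produce():
    with os.fdopen(write_fd, 'wb') as sink:
        for chunk in (b'<a>', b'payload', b'</a>'):
            sink.write(chunk)   # e.g. an XML serializer writing here
    # closing the write end signals EOF to the reader

threading.Thread(target=produce).start()

with os.fdopen(read_fd, 'rb') as source:
    body = source.read()        # e.g. what requests would read as the POST body
print(body)  # b'<a>payload</a>'
```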
How to stream POST data into Python requests?
40,015,869
<p>I'm using the Python <code>requests</code> library to send a POST request. The part of the program that produces the POST data can <strong>write</strong> into an arbitrary file-like object (output stream).</p> <p>How can I make these two parts fit?</p> <p>I would have expected that <code>requests</code> provides a streaming interface for this use case, but it seems it doesn't. It only accepts, as its <code>data</code> argument, a file-like object from which it <strong>reads</strong>. It doesn't provide a file-like object into which I can <strong>write</strong>.</p> <p>Is this a fundamental issue with the Python HTTP libraries?</p> <h1>Ideas so far:</h1> <p>It seems that the simplest solution is to <code>fork()</code> and to let the requests library communicate with the POST data producer through a <strong>pipe</strong>.</p> <p>Is there a better way?</p> <p>Alternatively, I could try to complicate the POST data producer. However, that one is parsing one XML stream (from stdin) and producing a new XML stream to be used as POST data. Then I have the same problem in reverse: the XML serializer libraries want to <strong>write</strong> into a file-like object; I'm not aware of any possibility that an XML serializer provides a file-like object from which others can <strong>read</strong>.</p> <p>I'm also aware that the cleanest, classic solution to this is coroutines, which are somewhat available in Python through generators (<code>yield</code>). The POST data could be streamed through (<code>yield</code>) instead of a file-like object and use a pull-parser.</p> <p>However, is it possible to make <code>requests</code> accept an iterator for POST data? And is there an XML serializer that can readily be used in combination with <code>yield</code>?</p> <p>Or, are there any wrapper objects that turn writing into a file-like object into a generator, and/or provide a file-like object that wraps an iterator?</p>
2
2016-10-13T08:25:30Z
40,018,547
<p><code>requests</code> does take an iterator or generator as its <code>data</code> argument; the details are described in <a href="http://docs.python-requests.org/en/master/user/advanced/#chunk-encoded-requests" rel="nofollow">Chunk-Encoded Requests</a>. The transfer encoding needs to be chunked in this case because the data size is not known beforehand.</p> <p>Here is a very simple example that uses a <a href="https://docs.python.org/3/library/queue.html#queue.Queue" rel="nofollow"><code>queue.Queue</code></a> and can be used as a file-like object for writing:</p> <pre><code>import requests import queue import threading class WriteableQueue(queue.Queue): def write(self, data): # An empty string would be interpreted as EOF by the receiving server if data: self.put(data) def __iter__(self): return iter(self.get, None) def close(self): self.put(None) # queue size can be limited in case producing is faster than streaming q = WriteableQueue(100) def post_request(iterable): r = requests.post("http://httpbin.org/post", data=iterable) print(r.text) threading.Thread(target=post_request, args=(q,)).start() # pass the queue to the serializer that writes to it ... q.write(b'1...') q.write(b'2...') # closing ends the request q.close() </code></pre>
3
2016-10-13T10:32:29Z
[ "python", "xml", "http", "python-requests", "generator" ]
How To Share Mutable Variables Across Python Modules?
40,015,876
<p>I need to share variables across multiple modules. These variables will be changed asynchronously by threads as the program runs.</p> <p>I need to be able to access the most recent state of the variables from multiple modules at the same time.</p> <p>Multiple modules will also be writing to the same variable.</p> <p>Basically what I need is a shared memory space, like a global var within a module, but one that is accessible &amp; changeable by all other modules asynchronously.</p> <p>I'm familiar with locking a global variable within a module. I have no idea where to start doing this across multiple modules.</p> <p>How is it done?</p>
0
2016-10-13T08:26:02Z
40,016,435
<p>Place all your global variables in a module, for example config.py and import it throughout your modules:</p> <p>config.py:</p> <pre><code>a=None varlock=None </code></pre> <p>main.py:</p> <pre><code>import config import threading config.a = 42 config.varlock = threading.RLock() ... </code></pre> <p>Then you can use the global lock instance, instantiated once in your main, to protect your variables. Every time you modify one of these in any of your threads, do it as</p> <pre><code>with config.varlock: config.a = config.a + 42 </code></pre> <p>and you should be fine. </p> <p>Hannu</p>
1
2016-10-13T08:53:36Z
[ "python", "python-multithreading" ]
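The pattern from the answer can be sketched in a single runnable file, with a class standing in for the shared `config.py` module (names and counts are illustrative):

```python
import threading

# 'config' stands in for the shared config.py module from the answer;
# every thread mutates config.a only while holding config.varlock.
class config:
    a = 0
    varlock = threading.RLock()

def worker():
    for _ in range(10000):
        with config.varlock:
            config.a += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(config.a)  # 40000
```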
Test if byte is empty in python
40,015,912
<p>I'd like to test the content of a variable containing a byte, like this:</p> <pre><code>line = [] while True: for c in self.ser.read(): # read() from pySerial line.append(c) if c == binascii.unhexlify('0A').decode('utf8'): print("Line: " + line) line = [] break </code></pre> <p>But this does not work... I'd also like to test whether a byte is empty. In this case</p> <pre><code>print(self.ser.read()) </code></pre> <p>prints: b'' (with two single quotes)</p> <p>So far I have not succeeded in testing this:</p> <pre><code>if self.ser.read() == b'' </code></pre> <p>Whatever I try shows a syntax error...</p> <p>I know, very basic, but I don't get it...</p>
2
2016-10-13T08:27:36Z
40,017,463
<p>If you want to verify the contents of a variable or string which you want to read from pySerial, use the <code>repr()</code> function, something like:</p> <pre><code>import serial from binascii import unhexlify self.ser = serial.Serial(self.port_name, self.baudrate, self.bytesize, self.parity, self.stopbits, self.timeout, self.xonxoff, self.rtscts) line = [] while 1: for c in self.ser.read(): # read() from pySerial line.append(c) if c == b'\x0A': print("Line: " + str(line)) print(repr(unhexlify('0A').decode('utf8'))) line = [] break </code></pre>
1
2016-10-13T09:40:12Z
[ "python", "byte" ]
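A plain-Python sketch of the comparisons involved, with no serial port (`data` stands in for `ser.read()`; the syntax error in the question's last snippet is likely just the missing colon and body on the `if`):

```python
# 'data' stands in for the return value of ser.read() on a timeout.
data = b''

# The explicit comparison is valid Python, but as a statement it needs
# to be a complete if, with a colon and a body:
if data == b'':
    status = 'empty'

# Idiomatic alternative: empty bytes are falsy.
if not data:
    status = 'empty'

# A newline byte can also be compared directly, without unhexlify:
c = b'\x0A'
print(c == b'\n')  # True
```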
Get a file name with tkinter.filedialog.asksaveasfilename to append in it
40,015,914
<p>From a GUI application designed with <code>tkinter</code>, I wish to save some data to a file in append mode. To get the file's name I use <code>asksaveasfilename</code> from the <code>filedialog</code> module. Here is the code:</p> <pre><code>from tkinter.filedialog import asksaveasfilename def save_file(): file_name = asksaveasfilename() if file_name: f = open(file_name, 'a') contents = tab_chrono.text_area.get(1.0, 'end') f.write(contents) f.close() </code></pre> <p>The problem happens when I select an existing file in the dialog: I get a warning that the file will be overwritten. This is not true, since I append to the file. Is there a way to get rid of this warning? Or do I have to write an <code>askappendfilename</code> myself? This is missing from the <code>filedialog</code> module. <a href="https://i.stack.imgur.com/mPL5s.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/mPL5s.jpg" alt="enter image description here"></a></p>
1
2016-10-13T08:27:55Z
40,017,696
<p>Use the option <code>confirmoverwrite</code> to prevent the message when selecting an existing file.</p> <pre><code>import tkFileDialog import time class Example(): dlg = tkFileDialog.asksaveasfilename(confirmoverwrite=False) fname = dlg if fname != '': try: f = open(fname, "r+") text = f.read() print text except: f = open(fname, "w") new_text = time.time() f.write(str(new_text)+'\n') f.close() </code></pre> <p>Edit: Note that I am using <code>f.read()</code> to be able to print the existing text.<br> You may want to remove the <code>f.read()</code> and subsequent <code>print</code> statement and replace them with <code>f.seek(0, 2)</code>, which positions the pointer at the end of the existing file.<br> The other option is as follows, using the <code>append</code> mode in the file open, which will create the file if it doesn't already exist:</p> <pre><code>import tkFileDialog import time class Example(): dlg = tkFileDialog.asksaveasfilename(confirmoverwrite=False) fname = dlg if fname != '': f = open(fname, "a") new_text = time.time() f.write(str(new_text)+'\n') f.close() </code></pre>
1
2016-10-13T09:49:59Z
[ "python", "tkinter", "filedialog" ]
Get a file name with tkinter.filedialog.asksaveasfilename to append in it
40,015,914
<p>From a GUI application designed with <code>tkinter</code>, I wish to save some data to a file in append mode. To get the file's name I use <code>asksaveasfilename</code> from the <code>filedialog</code> module. Here is the code:</p> <pre><code>from tkinter.filedialog import asksaveasfilename def save_file(): file_name = asksaveasfilename() if file_name: f = open(file_name, 'a') contents = tab_chrono.text_area.get(1.0, 'end') f.write(contents) f.close() </code></pre> <p>The problem happens when I select an existing file in the dialog: I get a warning that the file will be overwritten. This is not true, since I append to the file. Is there a way to get rid of this warning? Or do I have to write an <code>askappendfilename</code> myself? This is missing from the <code>filedialog</code> module. <a href="https://i.stack.imgur.com/mPL5s.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/mPL5s.jpg" alt="enter image description here"></a></p>
1
2016-10-13T08:27:55Z
40,019,904
<p>The <code>asksaveasfilename</code> dialog accepts a <code>confirmoverwrite</code> argument to enable or disable the file existence check.</p> <pre><code>file_name = asksaveasfilename(confirmoverwrite=False) </code></pre> <p>This can be found in the Tk manual for <a href="http://www.tcl.tk/man/tcl8.5/TkCmd/getOpenFile.htm" rel="nofollow">tk_getSaveFile</a> but doesn't appear to be documented for tkinter. It was introduced in Tk 8.5.11 so is relatively new in Tk terms (released Nov 2011).</p>
2
2016-10-13T11:37:28Z
[ "python", "tkinter", "filedialog" ]
Python .append() to a vector in a single line
40,015,949
<p>I'm having problems appending to an array. I'm expecting a result like:</p> <pre><code>['44229#0:', '2016/10/11', '11:15:57', '11:15:57', '11:15:57', '11:15:58', '0'] </code></pre> <p>but I'm getting this result:</p> <pre><code>['44229#0:', '2016/10/11', '11:15:57'] ['11:15:57'] ['11:15:57'] ['11:15:58'] ['44231#0:', '2016/10/11', '11:19:23'] ['11:19:23'] ['1'] ['11:19:24'] ['11:19:24'] ['44231#0:', '2016/10/11', '12:20:39'] ['12:20:58'] ['12:20:59'] ['12:20:59'] </code></pre> <p>If I indent the output a bit, I get this result:</p> <pre><code>['44229#0:', '2016/10/11', '11:15:57', '11:15:57', '11:15:57', '11:15:58', '44231#0:', '2016/10/11', '11:19:23', '11:19:23', '1', '11:19:24', '11:19:24', '44231#0:', '2016/10/11', '12:20:39', '12:20:58', '12:20:59', '12:20:59'] </code></pre> <p>This is still not what I need, because everything is in one vector, and I need to separate the entries into different lines...</p> <p>I'm parsing this log from Nginx:</p> <pre><code>2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL_do_handshake: -1 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL_get_error: 2 2016/10/11 11:15:57 [debug] 44229#0: *45677 post event 0000000001449060 2016/10/11 11:15:57 [debug] 44229#0: *45677 delete posted event 0000000001449060 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL handshake handler: 0 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL_do_handshake: 1 2016/10/11 11:15:57 [debug] 44229#0: *45677 http check ssl handshake 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL: TLSv1.2, cipher: 2016/10/11 11:15:57 [debug] 44229#0: *45677 reusable connection: 1 2016/10/11 11:15:57 [debug] 44229#0: *45677 http wait request handler 2016/10/11 11:15:57 [debug] 44229#0: *45677 malloc: 00000000014456D0:1024 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL_read: -1 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL_get_error: 2 2016/10/11 11:15:57 [debug] 44229#0: *45677 free: 00000000014456D0 2016/10/11 11:15:57 [debug] 44229#0: *45677 post event 0000000001449060
2016/10/11 11:15:57 [debug] 44229#0: *45677 delete posted event 0000000001449060 2016/10/11 11:15:57 [debug] 44229#0: *45677 http wait request handler 2016/10/11 11:15:57 [debug] 44229#0: *45677 malloc: 00000000014456D0:1024 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL_read: 144 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL_read: -1 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL_get_error: 2 2016/10/11 11:15:57 [debug] 44229#0: *45677 reusable connection: 0 2016/10/11 11:15:57 [debug] 44229#0: *45677 posix_memalign: 00000000014974A0:4096 @16 2016/10/11 11:15:57 [debug] 44229#0: *45677 http process request line 2016/10/11 11:15:57 [debug] 44229#0: *45677 http request line: "POST / HTTP/1.1" 2016/10/11 11:15:57 [debug] 44229#0: *45677 http uri: "/" 2016/10/11 11:15:57 [debug] 44229#0: *45677 http args: "" 2016/10/11 11:15:57 [debug] 44229#0: *45677 http exten: "" 2016/10/11 11:15:57 [debug] 44229#0: *45677 http process request header line 2016/10/11 11:15:57 [debug] 44229#0: *45677 http header: "Host:" 2016/10/11 11:15:57 [debug] 44229#0: *45677 http header: "Transfer-Encoding: chunked" 2016/10/11 11:15:57 [debug] 44229#0: *45677 http header: "Accept: */*" 2016/10/11 11:15:57 [debug] 44229#0: *45677 http header: "content-type: application/json" 2016/10/11 11:15:57 [debug] 44229#0: *45677 posix_memalign: 00000000016689D0:4096 @16 2016/10/11 11:15:57 [debug] 44229#0: *45677 http header: "Content-Length: 149" 2016/10/11 11:15:58 [debug] 44229#0: *45677 post event 0000000001449060 2016/10/11 11:15:58 [debug] 44229#0: *45677 delete posted event 0000000001449060 2016/10/11 11:15:58 [debug] 44229#0: *45677 http process request header line 2016/10/11 11:15:58 [debug] 44229#0: *45677 SSL_read: 6 2016/10/11 11:15:58 [debug] 44229#0: *45677 SSL_read: 149 2016/10/11 11:15:58 [debug] 44229#0: *45677 SSL_read: 7 2016/10/11 11:15:58 [debug] 44229#0: *45677 SSL_read: -1 2016/10/11 11:15:58 [debug] 44229#0: *45677 SSL_get_error: 2 2016/10/11 11:15:58 [debug] 44229#0: *45677 
http header done 2016/10/11 11:15:58 [debug] 44229#0: *45677 event timer del: 43: 1476177405011 2016/10/11 11:15:58 [debug] 44229#0: *45677 generic phase: 0 2016/10/11 11:15:58 [debug] 44229#0: *45677 rewrite phase: 1 2016/10/11 11:15:58 [debug] 44229#0: *45677 test location: "/" 2016/10/11 11:15:58 [debug] 44229#0: *45677 using configuration "/" 2016/10/11 11:15:58 [debug] 44229#0: *45677 http cl:-1 max:1048576 2016/10/11 11:15:58 [debug] 44229#0: *45677 rewrite phase: 3 2016/10/11 11:15:58 [debug] 44229#0: *45677 http set discard body 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 39 s:0 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 35 s:1 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 0D s:1 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 0A s:3 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 7B s:4 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 0D s:5 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 0A s:6 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 30 s:0 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 0D s:1 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 0A s:8 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 0D s:9 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 0A s:10 2016/10/11 11:15:58 [debug] 44229#0: *45677 xslt filter header 2016/10/11 11:15:58 [debug] 44229#0: *45677 HTTP/1.1 200 OK 2016/10/11 11:15:58 [debug] 44229#0: *45677 write new buf t:1 f:0 0000000001668BF0, pos 0000000001668BF0, size: 160 file: 0, size: 0 2016/10/11 11:15:58 [debug] 44229#0: *45677 http write filter: l:0 f:0 s:160 2016/10/11 11:15:58 [debug] 44229#0: *45677 http output filter "/?" 2016/10/11 11:15:58 [debug] 44229#0: *45677 http copy filter: "/?" 
2016/10/11 11:15:58 [debug] 44229#0: *45677 image filter 2016/10/11 11:15:58 [debug] 44229#0: *45677 xslt filter body 2016/10/11 11:15:58 [debug] 44229#0: *45677 http postpone filter "/?" 00007FFFADE3C4A0 2016/10/11 11:15:58 [debug] 44229#0: *45677 write old buf t:1 f:0 0000000001668BF0, pos 0000000001668BF0, size: 160 file: 0, size: 0 2016/10/11 11:15:58 [debug] 44229#0: *45677 write new buf t:0 f:0 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 0 2016/10/11 11:15:58 [debug] 44229#0: *45677 http write filter: l:1 f:0 s:160 2016/10/11 11:15:58 [debug] 44229#0: *45677 http write filter limit 0 2016/10/11 11:15:58 [debug] 44229#0: *45677 posix_memalign: 0000000001499DB0:256 @16 2016/10/11 11:15:58 [debug] 44229#0: *45677 malloc: 000000000175B750:16384 2016/10/11 11:15:58 [debug] 44229#0: *45677 SSL buf copy: 160 2016/10/11 11:15:58 [debug] 44229#0: *45677 SSL to write: 160 2016/10/11 11:15:58 [debug] 44229#0: *45677 SSL_write: 160 2016/10/11 11:15:58 [debug] 44229#0: *45677 http write filter 0000000000000000 2016/10/11 11:15:58 [debug] 44229#0: *45677 http copy filter: 0 "/?" 2016/10/11 11:15:58 [debug] 44229#0: *45677 http finalize request: 0, "/?" 
a:1, c:1 2016/10/11 11:15:58 [debug] 44229#0: *45677 set http keepalive handler 2016/10/11 11:15:58 [debug] 44229#0: *45677 http close request 2016/10/11 11:15:58 [debug] 44229#0: *45677 http log handler 2016/10/11 11:15:58 [debug] 44229#0: *45677 free: 00000000014974A0, unused: 1 2016/10/11 11:15:58 [debug] 44229#0: *45677 free: 00000000016689D0, unused: 3109 2016/10/11 11:15:58 [debug] 44229#0: *45677 free: 00000000014456D0 2016/10/11 11:15:58 [debug] 44229#0: *45677 hc free: 0000000000000000 0 2016/10/11 11:15:58 [debug] 44229#0: *45677 hc busy: 0000000000000000 0 2016/10/11 11:15:58 [debug] 44229#0: *45677 free: 000000000175B750 2016/10/11 11:15:58 [debug] 44229#0: *45677 tcp_nodelay 2016/10/11 11:15:58 [debug] 44229#0: *45677 reusable connection: 1 2016/10/11 11:15:58 [debug] 44229#0: *45677 event timer add: 43: 65000:1476177423255 2016/10/11 11:17:03 [debug] 44229#0: *45677 event timer del: 43: 1476177423255 2016/10/11 11:17:03 [debug] 44229#0: *45677 http keepalive handler 2016/10/11 11:17:03 [debug] 44229#0: *45677 close http connection: 43 2016/10/11 11:17:03 [debug] 44229#0: *45677 SSL_shutdown: 1 2016/10/11 11:17:03 [debug] 44229#0: *45677 reusable connection: 0 2016/10/11 11:17:03 [debug] 44229#0: *45677 free: 0000000000000000 2016/10/11 11:17:03 [debug] 44229#0: *45677 free: 0000000000000000 2016/10/11 11:17:03 [debug] 44229#0: *45677 free: 00000000014462C0, unused: 8 2016/10/11 11:17:03 [debug] 44229#0: *45677 free: 000000000149ACF0, unused: 8 2016/10/11 11:17:03 [debug] 44229#0: *45677 free: 0000000001499DB0, unused: 144 2016/10/11 11:19:22 [debug] 44231#0: *45709 event timer add: 8: 60000:1476177622411 2016/10/11 11:19:22 [debug] 44231#0: *45709 reusable connection: 1 2016/10/11 11:19:22 [debug] 44231#0: *45709 epoll add event: fd:8 op:1 ev:80002001 2016/10/11 11:19:23 [debug] 44231#0: *45709 post event 0000000001448EE0 2016/10/11 11:19:23 [debug] 44231#0: *45709 delete posted event 0000000001448EE0 2016/10/11 11:19:23 [debug] 44231#0: *45709 http 
check ssl handshake 2016/10/11 11:19:23 [debug] 44231#0: *45709 http recv(): 1 2016/10/11 11:19:23 [debug] 44231#0: *45709 https ssl handshake: 0x16 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL_do_handshake: -1 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL_get_error: 2 2016/10/11 11:19:23 [debug] 44231#0: *45709 reusable connection: 0 2016/10/11 11:19:23 [debug] 44231#0: *45709 post event 0000000001448EE0 2016/10/11 11:19:23 [debug] 44231#0: *45709 delete posted event 0000000001448EE0 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL handshake handler: 0 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL_do_handshake: -1 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL_get_error: 2 2016/10/11 11:19:23 [debug] 44231#0: *45709 post event 0000000001448EE0 2016/10/11 11:19:23 [debug] 44231#0: *45709 delete posted event 0000000001448EE0 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL handshake handler: 0 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL_do_handshake: 1 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL: TLSv1.2, cipher: 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL reused session 2016/10/11 11:19:23 [debug] 44231#0: *45709 reusable connection: 1 2016/10/11 11:19:23 [debug] 44231#0: *45709 http wait request handler 2016/10/11 11:19:23 [debug] 44231#0: *45709 malloc: 00000000014E16E0:1024 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL_read: -1 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL_get_error: 2 2016/10/11 11:19:23 [debug] 44231#0: *45709 free: 00000000014E16E0 2016/10/11 11:19:24 [debug] 44231#0: *45709 post event 0000000001448EE0 2016/10/11 11:19:24 [debug] 44231#0: *45709 delete posted event 0000000001448EE0 2016/10/11 11:19:24 [debug] 44231#0: *45709 http wait request handler 2016/10/11 11:19:24 [debug] 44231#0: *45709 malloc: 00000000014E16E0:1024 2016/10/11 11:19:24 [debug] 44231#0: *45709 SSL_read: 144 2016/10/11 11:19:24 [debug] 44231#0: *45709 SSL_read: 6 2016/10/11 11:19:24 [debug] 44231#0: *45709 SSL_read: 149 2016/10/11 11:19:24 [debug] 
44231#0: *45709 SSL_read: 7 2016/10/11 11:19:24 [debug] 44231#0: *45709 SSL_read: -1 2016/10/11 11:19:24 [debug] 44231#0: *45709 SSL_get_error: 2 2016/10/11 11:19:24 [debug] 44231#0: *45709 reusable connection: 0 2016/10/11 11:19:24 [debug] 44231#0: *45709 posix_memalign: 00000000015541A0:4096 @16 2016/10/11 11:19:24 [debug] 44231#0: *45709 http process request line 2016/10/11 11:19:24 [debug] 44231#0: *45709 http request line: "POST / HTTP/1.1" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http uri: "/" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http args: "" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http exten: "" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http process request header line 2016/10/11 11:19:24 [debug] 44231#0: *45709 http header: "Host:" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http header: "Transfer-Encoding: chunked" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http header: "Accept: */*" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http header: "content-type: application/json" 2016/10/11 11:19:24 [debug] 44231#0: *45709 posix_memalign: 0000000001466290:4096 @16 2016/10/11 11:19:24 [debug] 44231#0: *45709 http header: "Content-Length: 149" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http header done 2016/10/11 11:19:24 [debug] 44231#0: *45709 event timer del: 8: 1476177622411 2016/10/11 11:19:24 [debug] 44231#0: *45709 generic phase: 0 2016/10/11 11:19:24 [debug] 44231#0: *45709 rewrite phase: 1 2016/10/11 11:19:24 [debug] 44231#0: *45709 test location: "/" 2016/10/11 11:19:24 [debug] 44231#0: *45709 using configuration "/" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http cl:-1 max:1048576 2016/10/11 11:19:24 [debug] 44231#0: *45709 rewrite phase: 3 2016/10/11 11:19:24 [debug] 44231#0: *45709 http set discard body 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 39 s:0 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 35 s:1 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 0D s:1 2016/10/11 11:19:24 
[debug] 44231#0: *45709 http chunked byte: 0A s:3 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 7B s:4 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 0D s:5 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 0A s:6 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 30 s:0 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 0D s:1 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 0A s:8 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 0D s:9 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 0A s:10 2016/10/11 11:19:24 [debug] 44231#0: *45709 xslt filter header 2016/10/11 11:19:24 [debug] 44231#0: *45709 HTTP/1.1 200 OK 2016/10/11 11:19:24 [debug] 44231#0: *45709 write new buf t:1 f:0 00000000014664B0, pos 00000000014664B0, size: 160 file: 0, size: 0 2016/10/11 11:19:24 [debug] 44231#0: *45709 http write filter: l:0 f:0 s:160 2016/10/11 11:19:24 [debug] 44231#0: *45709 http output filter "/?" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http copy filter: "/?" 2016/10/11 11:19:24 [debug] 44231#0: *45709 image filter 2016/10/11 11:19:24 [debug] 44231#0: *45709 xslt filter body 2016/10/11 11:19:24 [debug] 44231#0: *45709 http postpone filter "/?" 
00007FFFADE3C420 2016/10/11 11:19:24 [debug] 44231#0: *45709 write old buf t:1 f:0 00000000014664B0, pos 00000000014664B0, size: 160 file: 0, size: 0 2016/10/11 11:19:24 [debug] 44231#0: *45709 write new buf t:0 f:0 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 0 2016/10/11 11:19:24 [debug] 44231#0: *45709 http write filter: l:1 f:0 s:160 2016/10/11 11:19:24 [debug] 44231#0: *45709 http write filter limit 0 2016/10/11 11:19:24 [debug] 44231#0: *45709 posix_memalign: 00000000014672A0:256 @16 2016/10/11 11:19:24 [debug] 44231#0: *45709 malloc: 000000000151CF30:16384 2016/10/11 11:19:24 [debug] 44231#0: *45709 SSL buf copy: 160 2016/10/11 11:19:24 [debug] 44231#0: *45709 SSL to write: 160 2016/10/11 11:19:24 [debug] 44231#0: *45709 SSL_write: 160 2016/10/11 11:19:24 [debug] 44231#0: *45709 http write filter 0000000000000000 2016/10/11 11:19:24 [debug] 44231#0: *45709 http copy filter: 0 "/?" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http finalize request: 0, "/?" a:1, c:1 2016/10/11 11:19:24 [debug] 44231#0: *45709 set http keepalive handler 2016/10/11 11:19:24 [debug] 44231#0: *45709 http close request 2016/10/11 11:19:24 [debug] 44231#0: *45709 http log handler 2016/10/11 11:19:24 [debug] 44231#0: *45709 free: 00000000015541A0, unused: 1 2016/10/11 11:19:24 [debug] 44231#0: *45709 free: 0000000001466290, unused: 3110 2016/10/11 11:19:24 [debug] 44231#0: *45709 free: 00000000014E16E0 2016/10/11 11:19:24 [debug] 44231#0: *45709 hc free: 0000000000000000 0 2016/10/11 11:19:24 [debug] 44231#0: *45709 hc busy: 0000000000000000 0 2016/10/11 11:19:24 [debug] 44231#0: *45709 free: 000000000151CF30 2016/10/11 11:19:24 [debug] 44231#0: *45709 tcp_nodelay 2016/10/11 11:19:24 [debug] 44231#0: *45709 reusable connection: 1 2016/10/11 11:19:24 [debug] 44231#0: *45709 event timer add: 8: 65000:1476177629112 2016/10/11 11:20:29 [debug] 44231#0: *45709 event timer del: 8: 1476177629112 2016/10/11 11:20:29 [debug] 44231#0: *45709 http keepalive handler 2016/10/11 
11:20:29 [debug] 44231#0: *45709 close http connection: 8 2016/10/11 11:20:29 [debug] 44231#0: *45709 SSL_shutdown: 1 2016/10/11 11:20:29 [debug] 44231#0: *45709 reusable connection: 0 2016/10/11 11:20:29 [debug] 44231#0: *45709 free: 0000000000000000 2016/10/11 11:20:29 [debug] 44231#0: *45709 free: 0000000000000000 2016/10/11 11:20:29 [debug] 44231#0: *45709 free: 00000000014EA310, unused: 8 2016/10/11 11:20:29 [debug] 44231#0: *45709 free: 00000000014E9EA0, unused: 8 2016/10/11 11:20:29 [debug] 44231#0: *45709 free: 00000000014672A0, unused: 144 2016/10/11 12:20:38 [debug] 44231#0: *46332 event timer add: 4: 60000:1476181298580 2016/10/11 12:20:38 [debug] 44231#0: *46332 reusable connection: 1 2016/10/11 12:20:38 [debug] 44231#0: *46332 epoll add event: fd:4 op:1 ev:80002001 2016/10/11 12:20:39 [debug] 44231#0: *46332 post event 0000000001449120 2016/10/11 12:20:39 [debug] 44231#0: *46332 delete posted event 0000000001449120 2016/10/11 12:20:39 [debug] 44231#0: *46332 http check ssl handshake 2016/10/11 12:20:39 [debug] 44231#0: *46332 http recv(): 1 2016/10/11 12:20:39 [debug] 44231#0: *46332 https ssl handshake: 0x16 2016/10/11 12:20:39 [debug] 44231#0: *46332 SSL_do_handshake: -1 2016/10/11 12:20:39 [debug] 44231#0: *46332 SSL_get_error: 2 2016/10/11 12:20:39 [debug] 44231#0: *46332 reusable connection: 0 2016/10/11 12:20:39 [debug] 44231#0: *46332 post event 0000000001449120 2016/10/11 12:20:39 [debug] 44231#0: *46332 delete posted event 0000000001449120 2016/10/11 12:20:39 [debug] 44231#0: *46332 SSL handshake handler: 0 2016/10/11 12:20:39 [debug] 44231#0: *46332 SSL_do_handshake: -1 2016/10/11 12:20:39 [debug] 44231#0: *46332 SSL_get_error: 2 2016/10/11 12:20:58 [debug] 44231#0: *46332 post event 0000000001449120 2016/10/11 12:20:58 [debug] 44231#0: *46332 delete posted event 0000000001449120 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL handshake handler: 0 2016/10/11 12:20:58 [debug] 44231#0: *46332 verify:1, error:0, depth:1, subject¡ 2016/10/11 
12:20:58 [debug] 44231#0: *46332 SSL_do_handshake: -1 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL_get_error: 2 2016/10/11 12:20:58 [debug] 44231#0: *46332 post event 0000000001449120 2016/10/11 12:20:58 [debug] 44231#0: *46332 delete posted event 0000000001449120 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL handshake handler: 0 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL_do_handshake: -1 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL_get_error: 2 2016/10/11 12:20:58 [debug] 44231#0: *46332 post event 0000000001449120 2016/10/11 12:20:58 [debug] 44231#0: *46332 delete posted event 0000000001449120 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL handshake handler: 0 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL_do_handshake: 1 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL: TLSv1.2, cipher: 2016/10/11 12:20:58 [debug] 44231#0: *46332 reusable connection: 1 2016/10/11 12:20:58 [debug] 44231#0: *46332 http wait request handler 2016/10/11 12:20:58 [debug] 44231#0: *46332 malloc: 00000000014456D0:1024 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL_read: -1 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL_get_error: 2 2016/10/11 12:20:58 [debug] 44231#0: *46332 free: 00000000014456D0 2016/10/11 12:20:59 [debug] 44231#0: *46332 post event 0000000001449120 2016/10/11 12:20:59 [debug] 44231#0: *46332 delete posted event 0000000001449120 2016/10/11 12:20:59 [debug] 44231#0: *46332 http wait request handler 2016/10/11 12:20:59 [debug] 44231#0: *46332 malloc: 00000000014456D0:1024 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_read: 144 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_read: -1 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_get_error: 2 2016/10/11 12:20:59 [debug] 44231#0: *46332 reusable connection: 0 2016/10/11 12:20:59 [debug] 44231#0: *46332 posix_memalign: 00000000016F1CC0:4096 @16 2016/10/11 12:20:59 [debug] 44231#0: *46332 http process request line 2016/10/11 12:20:59 [debug] 44231#0: *46332 http request line: "POST / HTTP/1.1" 2016/10/11 
12:20:59 [debug] 44231#0: *46332 http uri: "/" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http args: "" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http exten: "" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http process request header line 2016/10/11 12:20:59 [debug] 44231#0: *46332 http header: "Host:" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http header: "Transfer-Encoding: chunked" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http header: "Accept: */*" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http header: "content-type: application/json" 2016/10/11 12:20:59 [debug] 44231#0: *46332 posix_memalign: 00000000014974A0:4096 @16 2016/10/11 12:20:59 [debug] 44231#0: *46332 http header: "Content-Length: 149" 2016/10/11 12:20:59 [debug] 44231#0: *46332 post event 0000000001449120 2016/10/11 12:20:59 [debug] 44231#0: *46332 delete posted event 0000000001449120 2016/10/11 12:20:59 [debug] 44231#0: *46332 http process request header line 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_read: 6 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_read: 149 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_read: 7 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_read: -1 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_get_error: 2 2016/10/11 12:20:59 [debug] 44231#0: *46332 http header done 2016/10/11 12:20:59 [debug] 44231#0: *46332 event timer del: 4: 1476181298580 2016/10/11 12:20:59 [debug] 44231#0: *46332 generic phase: 0 2016/10/11 12:20:59 [debug] 44231#0: *46332 rewrite phase: 1 2016/10/11 12:20:59 [debug] 44231#0: *46332 test location: "/" 2016/10/11 12:20:59 [debug] 44231#0: *46332 using configuration "/" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http cl:-1 max:1048576 2016/10/11 12:20:59 [debug] 44231#0: *46332 rewrite phase: 3 2016/10/11 12:20:59 [debug] 44231#0: *46332 http set discard body 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 39 s:0 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 35 s:1 2016/10/11 12:20:59 [debug] 44231#0: 
*46332 http chunked byte: 0D s:1 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 0A s:3 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 7B s:4 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 0D s:5 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 0A s:6 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 30 s:0 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 0D s:1 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 0A s:8 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 0D s:9 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 0A s:10 2016/10/11 12:20:59 [debug] 44231#0: *46332 xslt filter header 2016/10/11 12:20:59 [debug] 44231#0: *46332 HTTP/1.1 200 OK 2016/10/11 12:20:59 [debug] 44231#0: *46332 write new buf t:1 f:0 00000000014976C0, pos 00000000014976C0, size: 160 file: 0, size: 0 2016/10/11 12:20:59 [debug] 44231#0: *46332 http write filter: l:0 f:0 s:160 2016/10/11 12:20:59 [debug] 44231#0: *46332 http output filter "/?" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http copy filter: "/?" 2016/10/11 12:20:59 [debug] 44231#0: *46332 image filter 2016/10/11 12:20:59 [debug] 44231#0: *46332 xslt filter body 2016/10/11 12:20:59 [debug] 44231#0: *46332 http postpone filter "/?" 
00007FFFADE3C4A0 2016/10/11 12:20:59 [debug] 44231#0: *46332 write old buf t:1 f:0 00000000014976C0, pos 00000000014976C0, size: 160 file: 0, size: 0 2016/10/11 12:20:59 [debug] 44231#0: *46332 write new buf t:0 f:0 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 0 2016/10/11 12:20:59 [debug] 44231#0: *46332 http write filter: l:1 f:0 s:160 2016/10/11 12:20:59 [debug] 44231#0: *46332 http write filter limit 0 2016/10/11 12:20:59 [debug] 44231#0: *46332 posix_memalign: 00000000014E78E0:256 @16 2016/10/11 12:20:59 [debug] 44231#0: *46332 malloc: 000000000147DF50:16384 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL buf copy: 160 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL to write: 160 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_write: 160 2016/10/11 12:20:59 [debug] 44231#0: *46332 http write filter 0000000000000000 2016/10/11 12:20:59 [debug] 44231#0: *46332 http copy filter: 0 "/?" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http finalize request: 0, "/?" a:1, c:1 2016/10/11 12:20:59 [debug] 44231#0: *46332 set http keepalive handler 2016/10/11 12:20:59 [debug] 44231#0: *46332 http close request 2016/10/11 12:20:59 [debug] 44231#0: *46332 http log handler 2016/10/11 12:20:59 [debug] 44231#0: *46332 free: 00000000016F1CC0, unused: 1 2016/10/11 12:20:59 [debug] 44231#0: *46332 free: 00000000014974A0, unused: 3108 2016/10/11 12:20:59 [debug] 44231#0: *46332 free: 00000000014456D0 2016/10/11 12:20:59 [debug] 44231#0: *46332 hc free: 0000000000000000 0 2016/10/11 12:20:59 [debug] 44231#0: *46332 hc busy: 0000000000000000 0 2016/10/11 12:20:59 [debug] 44231#0: *46332 free: 000000000147DF50 2016/10/11 12:20:59 [debug] 44231#0: *46332 tcp_nodelay 2016/10/11 12:20:59 [debug] 44231#0: *46332 reusable connection: 1 2016/10/11 12:20:59 [debug] 44231#0: *46332 event timer add: 4: 65000:1476181324667 2016/10/11 12:22:04 [debug] 44231#0: *46332 event timer del: 4: 1476181324667 2016/10/11 12:22:04 [debug] 44231#0: *46332 http keepalive handler 2016/10/11 
12:22:04 [debug] 44231#0: *46332 close http connection: 4 2016/10/11 12:22:04 [debug] 44231#0: *46332 SSL_shutdown: 1 2016/10/11 12:22:04 [debug] 44231#0: *46332 reusable connection: 0 2016/10/11 12:22:04 [debug] 44231#0: *46332 free: 0000000000000000 2016/10/11 12:22:04 [debug] 44231#0: *46332 free: 0000000000000000 2016/10/11 12:22:04 [debug] 44231#0: *46332 free: 00000000015D79A0, unused: 8 2016/10/11 12:22:04 [debug] 44231#0: *46332 free: 000000000156C5F0, unused: 8 2016/10/11 12:22:04 [debug] 44231#0: *46332 free: 00000000014E78E0, unused: 144 </code></pre> <p>and this is my code:</p> <pre><code>#!/usr/bin/env python
import re

file = open('log_to_parse.txt', 'r')
openFile = file.readlines()
file.close()
resultsFile = open('resultsFile.txt', 'a')

printList = []
identifierNew = "45"
identifierOld = "467"
insideReused = False
#reuseSession = []
sentencesToFind = ["http check ssl handshake","SSL: TLSv1.2, cipher:","http process request line","http close request","SSL reused session"]

for line in openFile:
    lineSplitted = line.split(' ')
    #print lineSplitted[0], lineSplitted[3], lineSplitted[1]
    identifierOld = lineSplitted[3]
    print lineSplitted[3]
    for phrase in sentencesToFind:
        if phrase in line:
            #if identifierNew != identifierOld:
            #    print &gt;&gt; resultsFile, "\n"
            #    printList = []
            if sentencesToFind.index(phrase) == 0:
                #print sentencesToFind.index(phrase)
                printList.append(lineSplitted[3]) #+ " " + lineSplitted[0] + " " + lineSplitted[1] + " ")
                printList.append(lineSplitted[0])
                printList.append(lineSplitted[1])
            elif sentencesToFind.index(phrase) == 4:
                printList.append("1")
                insideReused = True
            else:
                printList.append(lineSplitted[1])
            if sentencesToFind.index(phrase) == 4 and not insideReused:
                printList.append("0")
            identifierNew = identifierOld
            insideReused = False
    if printList: #and identifierOld != identifierNew:
        #print &gt;&gt; resultsFile, "\n"
        print &gt;&gt;resultsFile, printList
        printList = []

resultsFile.close()
</code></pre> <p>Some ideas? Thanks :)</p>
2
2016-10-13T08:29:12Z
40,015,993
<p>Shouldn't the <code>if printList</code> block be unindented out of the <code>for line in openFile:</code> loop?</p> <p>Or at least don't reset the list to <code>[]</code> on every single line. As written, every line that matches a phrase prints <code>printList</code> and then empties it, which is exactly why you get many short lists instead of one list per connection.</p>
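<p>A minimal sketch of that idea (assumptions: it drops the reused-session "1"/"0" flag for brevity, and it groups on the same <code>lineSplitted[3]</code> identifier your code uses): accumulate matches and only flush the list when the identifier changes, plus once more at the end.</p>

```python
# Sketch: flush printList once per identifier, not once per log line.
# Assumes each log line looks like:
#   "2016/10/11 11:15:57 [debug] 44229#0: *45677 <message>"
# (the reused-session flag from the question is left out for brevity)
sentencesToFind = ["http check ssl handshake", "SSL: TLSv1.2, cipher:",
                   "http process request line", "http close request",
                   "SSL reused session"]

def group_by_identifier(lines):
    results = []            # one finished printList per identifier
    printList = []
    currentId = None
    for line in lines:
        fields = line.split(' ')
        if len(fields) < 5:
            continue                        # skip malformed lines
        identifier = fields[3]              # e.g. "44229#0:"
        if currentId is not None and identifier != currentId and printList:
            results.append(printList)       # identifier changed: flush group
            printList = []
        currentId = identifier
        for idx, phrase in enumerate(sentencesToFind):
            if phrase in line:
                if idx == 0:
                    # start of a request: identifier, date, time
                    printList += [fields[3], fields[0], fields[1]]
                else:
                    printList.append(fields[1])   # just the timestamp
    if printList:
        results.append(printList)           # don't lose the last group
    return results
```

<p>Writing to <code>resultsFile</code> then becomes one write per flushed group. If the two <code>44231#0:</code> connections in your expected output have to end up in separate groups, key on <code>fields[4]</code> (the <code>*45709</code>-style connection id) instead of <code>fields[3]</code>.</p>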
0
2016-10-13T08:31:25Z
[ "python", "parsing", "nginx", "python-textprocessing" ]
Python .append() to a vector in a single line
40,015,949
<p>I'm having problems with <code>append</code> to an array. I'm expecting a result like:</p> <pre><code>['44229#0:', '2016/10/11', '11:15:57','11:15:57','11:15:57','11:15:58', '0'] </code></pre> <p>but I'm getting this result:</p> <pre><code>['44229#0:', '2016/10/11', '11:15:57'] ['11:15:57'] ['11:15:57'] ['11:15:58'] ['44231#0:', '2016/10/11', '11:19:23'] ['11:19:23'] ['1'] ['11:19:24'] ['11:19:24'] ['44231#0:', '2016/10/11', '12:20:39'] ['12:20:58'] ['12:20:59'] ['12:20:59'] </code></pre> <p>If I indent the output a bit, I get these results:</p> <pre><code>['44229#0:', '2016/10/11', '11:15:57', '11:15:57', '11:15:57', '11:15:58', '44231#0:', '2016/10/11', '11:19:23', '11:19:23', '1', '11:19:24', '11:19:24', '44231#0:', '2016/10/11', '12:20:39', '12:20:58', '12:20:59', '12:20:59'] </code></pre> <p>This is still not what I need, because everything ends up in one vector, and I need to separate the groups onto different lines...</p> <p>I'm parsing this log from Nginx:</p> <pre><code>2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL_do_handshake: -1 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL_get_error: 2 2016/10/11 11:15:57 [debug] 44229#0: *45677 post event 0000000001449060 2016/10/11 11:15:57 [debug] 44229#0: *45677 delete posted event 0000000001449060 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL handshake handler: 0 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL_do_handshake: 1 2016/10/11 11:15:57 [debug] 44229#0: *45677 http check ssl handshake 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL: TLSv1.2, cipher: 2016/10/11 11:15:57 [debug] 44229#0: *45677 reusable connection: 1 2016/10/11 11:15:57 [debug] 44229#0: *45677 http wait request handler 2016/10/11 11:15:57 [debug] 44229#0: *45677 malloc: 00000000014456D0:1024 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL_read: -1 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL_get_error: 2 2016/10/11 11:15:57 [debug] 44229#0: *45677 free: 00000000014456D0 2016/10/11 11:15:57 [debug] 44229#0: *45677 post event 0000000001449060 
2016/10/11 11:15:57 [debug] 44229#0: *45677 delete posted event 0000000001449060 2016/10/11 11:15:57 [debug] 44229#0: *45677 http wait request handler 2016/10/11 11:15:57 [debug] 44229#0: *45677 malloc: 00000000014456D0:1024 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL_read: 144 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL_read: -1 2016/10/11 11:15:57 [debug] 44229#0: *45677 SSL_get_error: 2 2016/10/11 11:15:57 [debug] 44229#0: *45677 reusable connection: 0 2016/10/11 11:15:57 [debug] 44229#0: *45677 posix_memalign: 00000000014974A0:4096 @16 2016/10/11 11:15:57 [debug] 44229#0: *45677 http process request line 2016/10/11 11:15:57 [debug] 44229#0: *45677 http request line: "POST / HTTP/1.1" 2016/10/11 11:15:57 [debug] 44229#0: *45677 http uri: "/" 2016/10/11 11:15:57 [debug] 44229#0: *45677 http args: "" 2016/10/11 11:15:57 [debug] 44229#0: *45677 http exten: "" 2016/10/11 11:15:57 [debug] 44229#0: *45677 http process request header line 2016/10/11 11:15:57 [debug] 44229#0: *45677 http header: "Host:" 2016/10/11 11:15:57 [debug] 44229#0: *45677 http header: "Transfer-Encoding: chunked" 2016/10/11 11:15:57 [debug] 44229#0: *45677 http header: "Accept: */*" 2016/10/11 11:15:57 [debug] 44229#0: *45677 http header: "content-type: application/json" 2016/10/11 11:15:57 [debug] 44229#0: *45677 posix_memalign: 00000000016689D0:4096 @16 2016/10/11 11:15:57 [debug] 44229#0: *45677 http header: "Content-Length: 149" 2016/10/11 11:15:58 [debug] 44229#0: *45677 post event 0000000001449060 2016/10/11 11:15:58 [debug] 44229#0: *45677 delete posted event 0000000001449060 2016/10/11 11:15:58 [debug] 44229#0: *45677 http process request header line 2016/10/11 11:15:58 [debug] 44229#0: *45677 SSL_read: 6 2016/10/11 11:15:58 [debug] 44229#0: *45677 SSL_read: 149 2016/10/11 11:15:58 [debug] 44229#0: *45677 SSL_read: 7 2016/10/11 11:15:58 [debug] 44229#0: *45677 SSL_read: -1 2016/10/11 11:15:58 [debug] 44229#0: *45677 SSL_get_error: 2 2016/10/11 11:15:58 [debug] 44229#0: *45677 
http header done 2016/10/11 11:15:58 [debug] 44229#0: *45677 event timer del: 43: 1476177405011 2016/10/11 11:15:58 [debug] 44229#0: *45677 generic phase: 0 2016/10/11 11:15:58 [debug] 44229#0: *45677 rewrite phase: 1 2016/10/11 11:15:58 [debug] 44229#0: *45677 test location: "/" 2016/10/11 11:15:58 [debug] 44229#0: *45677 using configuration "/" 2016/10/11 11:15:58 [debug] 44229#0: *45677 http cl:-1 max:1048576 2016/10/11 11:15:58 [debug] 44229#0: *45677 rewrite phase: 3 2016/10/11 11:15:58 [debug] 44229#0: *45677 http set discard body 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 39 s:0 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 35 s:1 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 0D s:1 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 0A s:3 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 7B s:4 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 0D s:5 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 0A s:6 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 30 s:0 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 0D s:1 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 0A s:8 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 0D s:9 2016/10/11 11:15:58 [debug] 44229#0: *45677 http chunked byte: 0A s:10 2016/10/11 11:15:58 [debug] 44229#0: *45677 xslt filter header 2016/10/11 11:15:58 [debug] 44229#0: *45677 HTTP/1.1 200 OK 2016/10/11 11:15:58 [debug] 44229#0: *45677 write new buf t:1 f:0 0000000001668BF0, pos 0000000001668BF0, size: 160 file: 0, size: 0 2016/10/11 11:15:58 [debug] 44229#0: *45677 http write filter: l:0 f:0 s:160 2016/10/11 11:15:58 [debug] 44229#0: *45677 http output filter "/?" 2016/10/11 11:15:58 [debug] 44229#0: *45677 http copy filter: "/?" 
2016/10/11 11:15:58 [debug] 44229#0: *45677 image filter 2016/10/11 11:15:58 [debug] 44229#0: *45677 xslt filter body 2016/10/11 11:15:58 [debug] 44229#0: *45677 http postpone filter "/?" 00007FFFADE3C4A0 2016/10/11 11:15:58 [debug] 44229#0: *45677 write old buf t:1 f:0 0000000001668BF0, pos 0000000001668BF0, size: 160 file: 0, size: 0 2016/10/11 11:15:58 [debug] 44229#0: *45677 write new buf t:0 f:0 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 0 2016/10/11 11:15:58 [debug] 44229#0: *45677 http write filter: l:1 f:0 s:160 2016/10/11 11:15:58 [debug] 44229#0: *45677 http write filter limit 0 2016/10/11 11:15:58 [debug] 44229#0: *45677 posix_memalign: 0000000001499DB0:256 @16 2016/10/11 11:15:58 [debug] 44229#0: *45677 malloc: 000000000175B750:16384 2016/10/11 11:15:58 [debug] 44229#0: *45677 SSL buf copy: 160 2016/10/11 11:15:58 [debug] 44229#0: *45677 SSL to write: 160 2016/10/11 11:15:58 [debug] 44229#0: *45677 SSL_write: 160 2016/10/11 11:15:58 [debug] 44229#0: *45677 http write filter 0000000000000000 2016/10/11 11:15:58 [debug] 44229#0: *45677 http copy filter: 0 "/?" 2016/10/11 11:15:58 [debug] 44229#0: *45677 http finalize request: 0, "/?" 
a:1, c:1 2016/10/11 11:15:58 [debug] 44229#0: *45677 set http keepalive handler 2016/10/11 11:15:58 [debug] 44229#0: *45677 http close request 2016/10/11 11:15:58 [debug] 44229#0: *45677 http log handler 2016/10/11 11:15:58 [debug] 44229#0: *45677 free: 00000000014974A0, unused: 1 2016/10/11 11:15:58 [debug] 44229#0: *45677 free: 00000000016689D0, unused: 3109 2016/10/11 11:15:58 [debug] 44229#0: *45677 free: 00000000014456D0 2016/10/11 11:15:58 [debug] 44229#0: *45677 hc free: 0000000000000000 0 2016/10/11 11:15:58 [debug] 44229#0: *45677 hc busy: 0000000000000000 0 2016/10/11 11:15:58 [debug] 44229#0: *45677 free: 000000000175B750 2016/10/11 11:15:58 [debug] 44229#0: *45677 tcp_nodelay 2016/10/11 11:15:58 [debug] 44229#0: *45677 reusable connection: 1 2016/10/11 11:15:58 [debug] 44229#0: *45677 event timer add: 43: 65000:1476177423255 2016/10/11 11:17:03 [debug] 44229#0: *45677 event timer del: 43: 1476177423255 2016/10/11 11:17:03 [debug] 44229#0: *45677 http keepalive handler 2016/10/11 11:17:03 [debug] 44229#0: *45677 close http connection: 43 2016/10/11 11:17:03 [debug] 44229#0: *45677 SSL_shutdown: 1 2016/10/11 11:17:03 [debug] 44229#0: *45677 reusable connection: 0 2016/10/11 11:17:03 [debug] 44229#0: *45677 free: 0000000000000000 2016/10/11 11:17:03 [debug] 44229#0: *45677 free: 0000000000000000 2016/10/11 11:17:03 [debug] 44229#0: *45677 free: 00000000014462C0, unused: 8 2016/10/11 11:17:03 [debug] 44229#0: *45677 free: 000000000149ACF0, unused: 8 2016/10/11 11:17:03 [debug] 44229#0: *45677 free: 0000000001499DB0, unused: 144 2016/10/11 11:19:22 [debug] 44231#0: *45709 event timer add: 8: 60000:1476177622411 2016/10/11 11:19:22 [debug] 44231#0: *45709 reusable connection: 1 2016/10/11 11:19:22 [debug] 44231#0: *45709 epoll add event: fd:8 op:1 ev:80002001 2016/10/11 11:19:23 [debug] 44231#0: *45709 post event 0000000001448EE0 2016/10/11 11:19:23 [debug] 44231#0: *45709 delete posted event 0000000001448EE0 2016/10/11 11:19:23 [debug] 44231#0: *45709 http 
check ssl handshake 2016/10/11 11:19:23 [debug] 44231#0: *45709 http recv(): 1 2016/10/11 11:19:23 [debug] 44231#0: *45709 https ssl handshake: 0x16 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL_do_handshake: -1 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL_get_error: 2 2016/10/11 11:19:23 [debug] 44231#0: *45709 reusable connection: 0 2016/10/11 11:19:23 [debug] 44231#0: *45709 post event 0000000001448EE0 2016/10/11 11:19:23 [debug] 44231#0: *45709 delete posted event 0000000001448EE0 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL handshake handler: 0 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL_do_handshake: -1 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL_get_error: 2 2016/10/11 11:19:23 [debug] 44231#0: *45709 post event 0000000001448EE0 2016/10/11 11:19:23 [debug] 44231#0: *45709 delete posted event 0000000001448EE0 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL handshake handler: 0 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL_do_handshake: 1 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL: TLSv1.2, cipher: 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL reused session 2016/10/11 11:19:23 [debug] 44231#0: *45709 reusable connection: 1 2016/10/11 11:19:23 [debug] 44231#0: *45709 http wait request handler 2016/10/11 11:19:23 [debug] 44231#0: *45709 malloc: 00000000014E16E0:1024 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL_read: -1 2016/10/11 11:19:23 [debug] 44231#0: *45709 SSL_get_error: 2 2016/10/11 11:19:23 [debug] 44231#0: *45709 free: 00000000014E16E0 2016/10/11 11:19:24 [debug] 44231#0: *45709 post event 0000000001448EE0 2016/10/11 11:19:24 [debug] 44231#0: *45709 delete posted event 0000000001448EE0 2016/10/11 11:19:24 [debug] 44231#0: *45709 http wait request handler 2016/10/11 11:19:24 [debug] 44231#0: *45709 malloc: 00000000014E16E0:1024 2016/10/11 11:19:24 [debug] 44231#0: *45709 SSL_read: 144 2016/10/11 11:19:24 [debug] 44231#0: *45709 SSL_read: 6 2016/10/11 11:19:24 [debug] 44231#0: *45709 SSL_read: 149 2016/10/11 11:19:24 [debug] 
44231#0: *45709 SSL_read: 7 2016/10/11 11:19:24 [debug] 44231#0: *45709 SSL_read: -1 2016/10/11 11:19:24 [debug] 44231#0: *45709 SSL_get_error: 2 2016/10/11 11:19:24 [debug] 44231#0: *45709 reusable connection: 0 2016/10/11 11:19:24 [debug] 44231#0: *45709 posix_memalign: 00000000015541A0:4096 @16 2016/10/11 11:19:24 [debug] 44231#0: *45709 http process request line 2016/10/11 11:19:24 [debug] 44231#0: *45709 http request line: "POST / HTTP/1.1" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http uri: "/" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http args: "" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http exten: "" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http process request header line 2016/10/11 11:19:24 [debug] 44231#0: *45709 http header: "Host:" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http header: "Transfer-Encoding: chunked" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http header: "Accept: */*" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http header: "content-type: application/json" 2016/10/11 11:19:24 [debug] 44231#0: *45709 posix_memalign: 0000000001466290:4096 @16 2016/10/11 11:19:24 [debug] 44231#0: *45709 http header: "Content-Length: 149" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http header done 2016/10/11 11:19:24 [debug] 44231#0: *45709 event timer del: 8: 1476177622411 2016/10/11 11:19:24 [debug] 44231#0: *45709 generic phase: 0 2016/10/11 11:19:24 [debug] 44231#0: *45709 rewrite phase: 1 2016/10/11 11:19:24 [debug] 44231#0: *45709 test location: "/" 2016/10/11 11:19:24 [debug] 44231#0: *45709 using configuration "/" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http cl:-1 max:1048576 2016/10/11 11:19:24 [debug] 44231#0: *45709 rewrite phase: 3 2016/10/11 11:19:24 [debug] 44231#0: *45709 http set discard body 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 39 s:0 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 35 s:1 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 0D s:1 2016/10/11 11:19:24 
[debug] 44231#0: *45709 http chunked byte: 0A s:3 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 7B s:4 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 0D s:5 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 0A s:6 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 30 s:0 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 0D s:1 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 0A s:8 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 0D s:9 2016/10/11 11:19:24 [debug] 44231#0: *45709 http chunked byte: 0A s:10 2016/10/11 11:19:24 [debug] 44231#0: *45709 xslt filter header 2016/10/11 11:19:24 [debug] 44231#0: *45709 HTTP/1.1 200 OK 2016/10/11 11:19:24 [debug] 44231#0: *45709 write new buf t:1 f:0 00000000014664B0, pos 00000000014664B0, size: 160 file: 0, size: 0 2016/10/11 11:19:24 [debug] 44231#0: *45709 http write filter: l:0 f:0 s:160 2016/10/11 11:19:24 [debug] 44231#0: *45709 http output filter "/?" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http copy filter: "/?" 2016/10/11 11:19:24 [debug] 44231#0: *45709 image filter 2016/10/11 11:19:24 [debug] 44231#0: *45709 xslt filter body 2016/10/11 11:19:24 [debug] 44231#0: *45709 http postpone filter "/?" 
00007FFFADE3C420 2016/10/11 11:19:24 [debug] 44231#0: *45709 write old buf t:1 f:0 00000000014664B0, pos 00000000014664B0, size: 160 file: 0, size: 0 2016/10/11 11:19:24 [debug] 44231#0: *45709 write new buf t:0 f:0 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 0 2016/10/11 11:19:24 [debug] 44231#0: *45709 http write filter: l:1 f:0 s:160 2016/10/11 11:19:24 [debug] 44231#0: *45709 http write filter limit 0 2016/10/11 11:19:24 [debug] 44231#0: *45709 posix_memalign: 00000000014672A0:256 @16 2016/10/11 11:19:24 [debug] 44231#0: *45709 malloc: 000000000151CF30:16384 2016/10/11 11:19:24 [debug] 44231#0: *45709 SSL buf copy: 160 2016/10/11 11:19:24 [debug] 44231#0: *45709 SSL to write: 160 2016/10/11 11:19:24 [debug] 44231#0: *45709 SSL_write: 160 2016/10/11 11:19:24 [debug] 44231#0: *45709 http write filter 0000000000000000 2016/10/11 11:19:24 [debug] 44231#0: *45709 http copy filter: 0 "/?" 2016/10/11 11:19:24 [debug] 44231#0: *45709 http finalize request: 0, "/?" a:1, c:1 2016/10/11 11:19:24 [debug] 44231#0: *45709 set http keepalive handler 2016/10/11 11:19:24 [debug] 44231#0: *45709 http close request 2016/10/11 11:19:24 [debug] 44231#0: *45709 http log handler 2016/10/11 11:19:24 [debug] 44231#0: *45709 free: 00000000015541A0, unused: 1 2016/10/11 11:19:24 [debug] 44231#0: *45709 free: 0000000001466290, unused: 3110 2016/10/11 11:19:24 [debug] 44231#0: *45709 free: 00000000014E16E0 2016/10/11 11:19:24 [debug] 44231#0: *45709 hc free: 0000000000000000 0 2016/10/11 11:19:24 [debug] 44231#0: *45709 hc busy: 0000000000000000 0 2016/10/11 11:19:24 [debug] 44231#0: *45709 free: 000000000151CF30 2016/10/11 11:19:24 [debug] 44231#0: *45709 tcp_nodelay 2016/10/11 11:19:24 [debug] 44231#0: *45709 reusable connection: 1 2016/10/11 11:19:24 [debug] 44231#0: *45709 event timer add: 8: 65000:1476177629112 2016/10/11 11:20:29 [debug] 44231#0: *45709 event timer del: 8: 1476177629112 2016/10/11 11:20:29 [debug] 44231#0: *45709 http keepalive handler 2016/10/11 
11:20:29 [debug] 44231#0: *45709 close http connection: 8 2016/10/11 11:20:29 [debug] 44231#0: *45709 SSL_shutdown: 1 2016/10/11 11:20:29 [debug] 44231#0: *45709 reusable connection: 0 2016/10/11 11:20:29 [debug] 44231#0: *45709 free: 0000000000000000 2016/10/11 11:20:29 [debug] 44231#0: *45709 free: 0000000000000000 2016/10/11 11:20:29 [debug] 44231#0: *45709 free: 00000000014EA310, unused: 8 2016/10/11 11:20:29 [debug] 44231#0: *45709 free: 00000000014E9EA0, unused: 8 2016/10/11 11:20:29 [debug] 44231#0: *45709 free: 00000000014672A0, unused: 144 2016/10/11 12:20:38 [debug] 44231#0: *46332 event timer add: 4: 60000:1476181298580 2016/10/11 12:20:38 [debug] 44231#0: *46332 reusable connection: 1 2016/10/11 12:20:38 [debug] 44231#0: *46332 epoll add event: fd:4 op:1 ev:80002001 2016/10/11 12:20:39 [debug] 44231#0: *46332 post event 0000000001449120 2016/10/11 12:20:39 [debug] 44231#0: *46332 delete posted event 0000000001449120 2016/10/11 12:20:39 [debug] 44231#0: *46332 http check ssl handshake 2016/10/11 12:20:39 [debug] 44231#0: *46332 http recv(): 1 2016/10/11 12:20:39 [debug] 44231#0: *46332 https ssl handshake: 0x16 2016/10/11 12:20:39 [debug] 44231#0: *46332 SSL_do_handshake: -1 2016/10/11 12:20:39 [debug] 44231#0: *46332 SSL_get_error: 2 2016/10/11 12:20:39 [debug] 44231#0: *46332 reusable connection: 0 2016/10/11 12:20:39 [debug] 44231#0: *46332 post event 0000000001449120 2016/10/11 12:20:39 [debug] 44231#0: *46332 delete posted event 0000000001449120 2016/10/11 12:20:39 [debug] 44231#0: *46332 SSL handshake handler: 0 2016/10/11 12:20:39 [debug] 44231#0: *46332 SSL_do_handshake: -1 2016/10/11 12:20:39 [debug] 44231#0: *46332 SSL_get_error: 2 2016/10/11 12:20:58 [debug] 44231#0: *46332 post event 0000000001449120 2016/10/11 12:20:58 [debug] 44231#0: *46332 delete posted event 0000000001449120 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL handshake handler: 0 2016/10/11 12:20:58 [debug] 44231#0: *46332 verify:1, error:0, depth:1, subject¡ 2016/10/11 
12:20:58 [debug] 44231#0: *46332 SSL_do_handshake: -1 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL_get_error: 2 2016/10/11 12:20:58 [debug] 44231#0: *46332 post event 0000000001449120 2016/10/11 12:20:58 [debug] 44231#0: *46332 delete posted event 0000000001449120 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL handshake handler: 0 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL_do_handshake: -1 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL_get_error: 2 2016/10/11 12:20:58 [debug] 44231#0: *46332 post event 0000000001449120 2016/10/11 12:20:58 [debug] 44231#0: *46332 delete posted event 0000000001449120 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL handshake handler: 0 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL_do_handshake: 1 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL: TLSv1.2, cipher: 2016/10/11 12:20:58 [debug] 44231#0: *46332 reusable connection: 1 2016/10/11 12:20:58 [debug] 44231#0: *46332 http wait request handler 2016/10/11 12:20:58 [debug] 44231#0: *46332 malloc: 00000000014456D0:1024 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL_read: -1 2016/10/11 12:20:58 [debug] 44231#0: *46332 SSL_get_error: 2 2016/10/11 12:20:58 [debug] 44231#0: *46332 free: 00000000014456D0 2016/10/11 12:20:59 [debug] 44231#0: *46332 post event 0000000001449120 2016/10/11 12:20:59 [debug] 44231#0: *46332 delete posted event 0000000001449120 2016/10/11 12:20:59 [debug] 44231#0: *46332 http wait request handler 2016/10/11 12:20:59 [debug] 44231#0: *46332 malloc: 00000000014456D0:1024 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_read: 144 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_read: -1 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_get_error: 2 2016/10/11 12:20:59 [debug] 44231#0: *46332 reusable connection: 0 2016/10/11 12:20:59 [debug] 44231#0: *46332 posix_memalign: 00000000016F1CC0:4096 @16 2016/10/11 12:20:59 [debug] 44231#0: *46332 http process request line 2016/10/11 12:20:59 [debug] 44231#0: *46332 http request line: "POST / HTTP/1.1" 2016/10/11 
12:20:59 [debug] 44231#0: *46332 http uri: "/" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http args: "" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http exten: "" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http process request header line 2016/10/11 12:20:59 [debug] 44231#0: *46332 http header: "Host:" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http header: "Transfer-Encoding: chunked" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http header: "Accept: */*" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http header: "content-type: application/json" 2016/10/11 12:20:59 [debug] 44231#0: *46332 posix_memalign: 00000000014974A0:4096 @16 2016/10/11 12:20:59 [debug] 44231#0: *46332 http header: "Content-Length: 149" 2016/10/11 12:20:59 [debug] 44231#0: *46332 post event 0000000001449120 2016/10/11 12:20:59 [debug] 44231#0: *46332 delete posted event 0000000001449120 2016/10/11 12:20:59 [debug] 44231#0: *46332 http process request header line 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_read: 6 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_read: 149 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_read: 7 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_read: -1 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_get_error: 2 2016/10/11 12:20:59 [debug] 44231#0: *46332 http header done 2016/10/11 12:20:59 [debug] 44231#0: *46332 event timer del: 4: 1476181298580 2016/10/11 12:20:59 [debug] 44231#0: *46332 generic phase: 0 2016/10/11 12:20:59 [debug] 44231#0: *46332 rewrite phase: 1 2016/10/11 12:20:59 [debug] 44231#0: *46332 test location: "/" 2016/10/11 12:20:59 [debug] 44231#0: *46332 using configuration "/" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http cl:-1 max:1048576 2016/10/11 12:20:59 [debug] 44231#0: *46332 rewrite phase: 3 2016/10/11 12:20:59 [debug] 44231#0: *46332 http set discard body 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 39 s:0 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 35 s:1 2016/10/11 12:20:59 [debug] 44231#0: 
*46332 http chunked byte: 0D s:1 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 0A s:3 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 7B s:4 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 0D s:5 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 0A s:6 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 30 s:0 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 0D s:1 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 0A s:8 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 0D s:9 2016/10/11 12:20:59 [debug] 44231#0: *46332 http chunked byte: 0A s:10 2016/10/11 12:20:59 [debug] 44231#0: *46332 xslt filter header 2016/10/11 12:20:59 [debug] 44231#0: *46332 HTTP/1.1 200 OK 2016/10/11 12:20:59 [debug] 44231#0: *46332 write new buf t:1 f:0 00000000014976C0, pos 00000000014976C0, size: 160 file: 0, size: 0 2016/10/11 12:20:59 [debug] 44231#0: *46332 http write filter: l:0 f:0 s:160 2016/10/11 12:20:59 [debug] 44231#0: *46332 http output filter "/?" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http copy filter: "/?" 2016/10/11 12:20:59 [debug] 44231#0: *46332 image filter 2016/10/11 12:20:59 [debug] 44231#0: *46332 xslt filter body 2016/10/11 12:20:59 [debug] 44231#0: *46332 http postpone filter "/?" 
00007FFFADE3C4A0 2016/10/11 12:20:59 [debug] 44231#0: *46332 write old buf t:1 f:0 00000000014976C0, pos 00000000014976C0, size: 160 file: 0, size: 0 2016/10/11 12:20:59 [debug] 44231#0: *46332 write new buf t:0 f:0 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 0 2016/10/11 12:20:59 [debug] 44231#0: *46332 http write filter: l:1 f:0 s:160 2016/10/11 12:20:59 [debug] 44231#0: *46332 http write filter limit 0 2016/10/11 12:20:59 [debug] 44231#0: *46332 posix_memalign: 00000000014E78E0:256 @16 2016/10/11 12:20:59 [debug] 44231#0: *46332 malloc: 000000000147DF50:16384 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL buf copy: 160 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL to write: 160 2016/10/11 12:20:59 [debug] 44231#0: *46332 SSL_write: 160 2016/10/11 12:20:59 [debug] 44231#0: *46332 http write filter 0000000000000000 2016/10/11 12:20:59 [debug] 44231#0: *46332 http copy filter: 0 "/?" 2016/10/11 12:20:59 [debug] 44231#0: *46332 http finalize request: 0, "/?" a:1, c:1 2016/10/11 12:20:59 [debug] 44231#0: *46332 set http keepalive handler 2016/10/11 12:20:59 [debug] 44231#0: *46332 http close request 2016/10/11 12:20:59 [debug] 44231#0: *46332 http log handler 2016/10/11 12:20:59 [debug] 44231#0: *46332 free: 00000000016F1CC0, unused: 1 2016/10/11 12:20:59 [debug] 44231#0: *46332 free: 00000000014974A0, unused: 3108 2016/10/11 12:20:59 [debug] 44231#0: *46332 free: 00000000014456D0 2016/10/11 12:20:59 [debug] 44231#0: *46332 hc free: 0000000000000000 0 2016/10/11 12:20:59 [debug] 44231#0: *46332 hc busy: 0000000000000000 0 2016/10/11 12:20:59 [debug] 44231#0: *46332 free: 000000000147DF50 2016/10/11 12:20:59 [debug] 44231#0: *46332 tcp_nodelay 2016/10/11 12:20:59 [debug] 44231#0: *46332 reusable connection: 1 2016/10/11 12:20:59 [debug] 44231#0: *46332 event timer add: 4: 65000:1476181324667 2016/10/11 12:22:04 [debug] 44231#0: *46332 event timer del: 4: 1476181324667 2016/10/11 12:22:04 [debug] 44231#0: *46332 http keepalive handler 2016/10/11 
12:22:04 [debug] 44231#0: *46332 close http connection: 4 2016/10/11 12:22:04 [debug] 44231#0: *46332 SSL_shutdown: 1 2016/10/11 12:22:04 [debug] 44231#0: *46332 reusable connection: 0 2016/10/11 12:22:04 [debug] 44231#0: *46332 free: 0000000000000000 2016/10/11 12:22:04 [debug] 44231#0: *46332 free: 0000000000000000 2016/10/11 12:22:04 [debug] 44231#0: *46332 free: 00000000015D79A0, unused: 8 2016/10/11 12:22:04 [debug] 44231#0: *46332 free: 000000000156C5F0, unused: 8 2016/10/11 12:22:04 [debug] 44231#0: *46332 free: 00000000014E78E0, unused: 144 </code></pre> <p>and this is my code:</p> <pre><code>#!/usr/bin/env python
import re

file = open('log_to_parse.txt', 'r')
openFile = file.readlines()
file.close()
resultsFile = open('resultsFile.txt', 'a')

printList = []
identifierNew = "45"
identifierOld = "467"
insideReused = False
#reuseSession = []
sentencesToFind = ["http check ssl handshake",
                   "SSL: TLSv1.2, cipher:",
                   "http process request line",
                   "http close request",
                   "SSL reused session"]

for line in openFile:
    lineSplitted = line.split(' ')
    #print lineSplitted[0], lineSplitted[3], lineSplitted[1]
    identifierOld = lineSplitted[3]
    print lineSplitted[3]
    for phrase in sentencesToFind:
        if phrase in line:
            #if identifierNew != identifierOld:
            #    print &gt;&gt; resultsFile, "\n"
            #    printList = []
            if sentencesToFind.index(phrase) == 0:
                #print sentencesToFind.index(phrase)
                printList.append(lineSplitted[3])  #+ " " + lineSplitted[0] + " " + lineSplitted[1] + " ")
                printList.append(lineSplitted[0])
                printList.append(lineSplitted[1])
            elif sentencesToFind.index(phrase) == 4:
                printList.append("1")
                insideReused = True
            else:
                printList.append(lineSplitted[1])
            if sentencesToFind.index(phrase) == 4 and not insideReused:
                printList.append("0")
            identifierNew = identifierOld
            insideReused = False
    if printList:  #and identifierOld != identifierNew:
        #print &gt;&gt; resultsFile, "\n"
        print &gt;&gt;resultsFile, printList
        printList = []

resultsFile.close()
</code></pre> <p>Some ideas? Thanks :)</p>
2
2016-10-13T08:29:12Z
40,017,834
<p>To save the file, it's easier to do it using numpy. The original array is split at every element containing a '#'</p> <pre><code>import numpy as np
import re

file = open('log_to_parse.txt', 'r')
openFile = file.readlines()
file.close()
#resultsFile = open('resultsFile.txt', 'a')

printList = []
identifierNew = "45"
identifierOld = "467"
insideReused = False
#reuseSession = []
sentencesToFind = ["http check ssl handshake",
                   "SSL: TLSv1.2, cipher:",
                   "http process request line",
                   "http close request",
                   "SSL reused session"]

for line in openFile:
    lineSplitted = line.split(' ')
    #print lineSplitted[0], lineSplitted[3], lineSplitted[1]
    identifierOld = lineSplitted[3]
    # print(lineSplitted[3])
    for phrase in sentencesToFind:
        if phrase in line:
            #if identifierNew != identifierOld:
            #    print &gt;&gt; resultsFile, "\n"
            #    printList = []
            if sentencesToFind.index(phrase) == 0:
                #print sentencesToFind.index(phrase)
                printList.append(lineSplitted[3])  #+ " " + lineSplitted[0] + " " + lineSplitted[1] + " ")
                printList.append(lineSplitted[0])
                printList.append(lineSplitted[1])
            elif sentencesToFind.index(phrase) == 4:
                printList.append("1")
                insideReused = True
            else:
                printList.append(lineSplitted[1])
            if sentencesToFind.index(phrase) == 4 and not insideReused:
                printList.append("0")
            identifierNew = identifierOld
            insideReused = False

# I did not see the need to put the for loop where you write the content inside the previous for loop
split_list = []
start_sub_list = 0
for i in range(1, len(printList)):
    if '#' in printList[i]:
        temp_list = printList[start_sub_list:i]
        split_list.append(temp_list)
        start_sub_list = i

#The last element will always be left out. So,
temp_list = printList[start_sub_list:i]
split_list.append(temp_list)

split_list = np.array(split_list)
print(split_list)
np.savetxt('resultsFile.txt', split_list, fmt="%s")
</code></pre> <p><code>print(split_list)</code> yields the following as the output, and this is what will be written to the file</p> <pre><code>[['44229#0:', '2016/10/11', '11:15:57', '11:15:57', '11:15:57', '11:15:58']
 ['44231#0:', '2016/10/11', '11:19:23', '11:19:23', '1', '11:19:24', '11:19:24']
 ['44231#0:', '2016/10/11', '12:20:39', '12:20:58', '12:20:59']]
</code></pre>
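The splitting step in this answer — cutting a flat list into sub-lists at every element containing `'#'` — can be sketched on its own with plain Python. The sample data below is made up for illustration, and this variant handles the final group explicitly so no trailing element is left out:

```python
def split_at_markers(items, marker='#'):
    """Split a flat list into sub-lists, starting a new
    sub-list at every element that contains the marker."""
    groups = []
    current = []
    for item in items:
        if marker in item and current:
            groups.append(current)
            current = []
        current.append(item)
    if current:
        groups.append(current)
    return groups

flat = ['44229#0:', '11:15:57', '11:15:58', '44231#0:', '11:19:23', '1']
print(split_at_markers(flat))
# [['44229#0:', '11:15:57', '11:15:58'], ['44231#0:', '11:19:23', '1']]
```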
1
2016-10-13T09:56:26Z
[ "python", "parsing", "nginx", "python-textprocessing" ]
How to make a related field mandatory in Django?
40,016,025
<p>Suppose I have the following many-to-one relationship in Django:</p> <pre><code>from django.db import models

class Person(models.Model):
    # ...

class Address(models.Model):
    # ...
    person = models.ForeignKey(Person, on_delete=models.CASCADE)
</code></pre> <p>This allows a person to have multiple addresses.</p> <p>I wish to make it mandatory for a person to have at least one address, so it will be impossible to save a person with no addresses in the DB.</p> <p>How can I achieve this goal? Is it possible to make a related field mandatory (as can be done for "normal" fields using <code>blank=False</code>)?</p>
3
2016-10-13T08:33:03Z
40,024,797
<p>Why not make Address a many to many relationship on Person? That seems to be a more natural expression of your data. </p> <p>But regardless, you can't really enforce the many to many relation on the db. You could perhaps override the save of Person to check for an address. But I would prefer to handle it in the form logic. </p>
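The save-override idea can be sketched outside Django in plain Python. This is not real Django model code — `ValidationError` here is a stand-in exception and `addresses` is a plain list rather than a related manager — but it shows the check that would go into an overridden `Person.save()`:

```python
class ValidationError(Exception):
    pass

class Person:
    def __init__(self, name):
        self.name = name
        self.addresses = []  # stand-in for the related Address objects

    def save(self):
        # Refuse to persist a person with no addresses.
        if not self.addresses:
            raise ValidationError('A person needs at least one address.')
        # ... real persistence would happen here ...
        return True

p = Person('Alice')
try:
    p.save()
except ValidationError as e:
    print(e)  # A person needs at least one address.

p.addresses.append('221B Baker Street')
print(p.save())  # True
```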
0
2016-10-13T15:12:14Z
[ "python", "django", "django-models", "relationship" ]
How to make a related field mandatory in Django?
40,016,025
<p>Suppose I have the following many-to-one relationship in Django:</p> <pre><code>from django.db import models

class Person(models.Model):
    # ...

class Address(models.Model):
    # ...
    person = models.ForeignKey(Person, on_delete=models.CASCADE)
</code></pre> <p>This allows a person to have multiple addresses.</p> <p>I wish to make it mandatory for a person to have at least one address, so it will be impossible to save a person with no addresses in the DB.</p> <p>How can I achieve this goal? Is it possible to make a related field mandatory (as can be done for "normal" fields using <code>blank=False</code>)?</p>
3
2016-10-13T08:33:03Z
40,026,172
<p>As previously said, there's no way to enforce a relationship directly on the database. </p> <p>However, you can take care of it by validating the model before saving using the <code>clean()</code> method. It is triggered automatically during model validation, for example by <code>full_clean()</code>, which <code>ModelForm</code> validation calls.</p> <pre><code>class Person(models.Model):
    .
    .
    .

    def clean(self):
        if len(self.addresses) == 0:
            raise ValidationError('At least one address is required.')
</code></pre>
0
2016-10-13T16:18:17Z
[ "python", "django", "django-models", "relationship" ]
python pandas : group by several columns and count value for one column
40,016,097
<p>I have <strong>df</strong>:</p> <pre><code>   orgs feature1 feature2 feature3
0  org1     True     True      NaN
1  org1      NaN     True      NaN
2  org2      NaN     True     True
3  org3     True     True      NaN
4  org4     True     True     True
5  org4     True     True     True
</code></pre> <p>Now I would like to count the number of distinct orgs for each feature, basically to have a <strong>df_Result</strong> like this:</p> <pre><code>   features  count_distinct_orgs
0  feature1                    3
1  feature2                    4
2  feature3                    2
</code></pre> <p>Does anybody have an idea how to do that?</p>
2
2016-10-13T08:36:40Z
40,016,302
<p>You can add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sum.html" rel="nofollow"><code>sum</code></a> to previous <a href="http://stackoverflow.com/a/40004637/2901002">solution</a>:</p> <pre><code>df1 = df.groupby('orgs') \
        .apply(lambda x: x.iloc[:,1:].apply(lambda y: y.nunique())).sum().reset_index()
df1.columns = ['features','count_distinct_orgs']
print (df1)
   features  count_distinct_orgs
0  feature1                    3
1  feature2                    4
2  feature3                    2
</code></pre> <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.aggregate.html" rel="nofollow"><code>aggregate</code></a> <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.nunique.html" rel="nofollow"><code>Series.nunique</code></a>:</p> <pre><code>df1 = df.groupby('orgs') \
        .agg(lambda x: pd.Series.nunique(x)) \
        .sum() \
        .astype(int) \
        .reset_index()
df1.columns = ['features','count_distinct_orgs']
print (df1)
   features  count_distinct_orgs
0  feature1                    3
1  feature2                    4
2  feature3                    2
</code></pre> <hr> <p>Solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow"><code>stack</code></a> works, but returns a warning:</p> <blockquote> <p>C:\Anaconda3\lib\site-packages\pandas\core\groupby.py:2937: FutureWarning: numpy not_equal will not check object identity in the future. The comparison did not return the same result as suggested by the identity (<code>is</code>)) and will change. inc = np.r_[1, val[1:] != val[:-1]]</p> </blockquote> <pre><code>df1 = df.set_index('orgs').stack(dropna=False)
df1 = df1.groupby(level=[0,1]).nunique().unstack().sum().reset_index()
df1.columns = ['features','count_distinct_orgs']
print (df1)
   features  count_distinct_orgs
0  feature1                    3
1  feature2                    4
2  feature3                    2
</code></pre>
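For readers without pandas at hand, the same aggregation — count the distinct orgs that have a non-missing value for each feature — can be sketched with plain dictionaries. This illustrates the logic only, not the pandas internals; `None` stands in for NaN:

```python
# Plain-Python sketch of "distinct orgs per feature",
# using the question's sample data with None for NaN.
rows = [
    ('org1', {'feature1': True, 'feature2': True, 'feature3': None}),
    ('org1', {'feature1': None, 'feature2': True, 'feature3': None}),
    ('org2', {'feature1': None, 'feature2': True, 'feature3': True}),
    ('org3', {'feature1': True, 'feature2': True, 'feature3': None}),
    ('org4', {'feature1': True, 'feature2': True, 'feature3': True}),
    ('org4', {'feature1': True, 'feature2': True, 'feature3': True}),
]

counts = {}
for org, features in rows:
    for feature, value in features.items():
        if value is not None:
            counts.setdefault(feature, set()).add(org)

result = {feature: len(orgs) for feature, orgs in sorted(counts.items())}
print(result)  # {'feature1': 3, 'feature2': 4, 'feature3': 2}
```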
1
2016-10-13T08:47:59Z
[ "python", "pandas", "count", "group-by", "stack" ]
Why does the python filter do not overflow when it processes an infinite sequence?
40,016,145
<pre><code>def _odd_iter():
    n = 1
    while True:
        n = n + 2
        yield n

def filt(n):
    return lambda x: x % n &gt; 0

def primes():
    yield 2
    it = _odd_iter()
    while True:
        n = next(it)
        yield n
        it = filter(filt(n),it)
</code></pre> <p>For example, take the sequence [3, 5, 7, 9, 11, 13, 15, ...]. To decide whether 7 is prime, it has to be tested for divisibility by 3 and 5, so the values 3 and 5 must be stored somewhere. Even with lazy evaluation, more and more of this information should accumulate over time, so I expected the computation to become slower and slower. But in my actual experiment, generating primes does not get slower and memory does not explode. I want to know the internal principles behind this.</p>
0
2016-10-13T08:39:47Z
40,016,274
<p>In Python 3, as your post is tagged, <code>filter</code> is a lazily-evaluated generator-type object. If you tried to evaluate that entire <code>filter</code> object with e.g. <code>it = list(filter(filt(n),it))</code>, you would have a bad time. You would have an equally bad time if you ran your code in Python 2, in which <code>filter()</code> automatically returns a list.</p> <p>A filter on an infinite iterable is not inherently problematic, though, because you can use it in a perfectly acceptable way, like a <code>for</code> loop:</p> <pre><code>it = filter(filt(n),it)
for iteration in it:
    if input():
        print(iteration)
    else:
        break
</code></pre>
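The laziness is easy to observe directly: stacking `filter` objects on an infinite generator costs nothing until values are actually pulled. A small sketch using the question's own prime sieve and `itertools.islice` to pull a finite number of values:

```python
from itertools import islice

def _odd_iter():
    n = 1
    while True:
        n = n + 2
        yield n

def filt(n):
    return lambda x: x % n > 0

def primes():
    yield 2
    it = _odd_iter()
    while True:
        n = next(it)
        yield n
        it = filter(filt(n), it)

# Pull only the first eight primes; the infinite chain of
# filters is never evaluated beyond what these eight need.
print(list(islice(primes(), 8)))  # [2, 3, 5, 7, 11, 13, 17, 19]
```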
1
2016-10-13T08:46:21Z
[ "python", "python-3.x", "functional-programming" ]
Request object's unique identifier - django or django rest framework
40,016,151
<p>Whenever a request goes to a server, I think there would be a unique identifier for each request object. But I couldn't find how to access that unique identifier. </p> <p>I have already read the docs for <a href="https://docs.djangoproject.com/en/1.10/ref/request-response/#django.http.HttpRequest" rel="nofollow">django HTTPRequest</a> and <a href="http://www.django-rest-framework.org/api-guide/requests/" rel="nofollow">django-rest-framework Request Object</a>.</p> <p>Please explain how to get the request object's unique identifier, if there is one.</p>
2
2016-10-13T08:40:04Z
40,019,330
<p>I think the answer is <code>No</code>. There is no built-in unique identifier, such as a <code>request-id</code>, for each and every request. You have to generate it on the server, or generate it on the client side and pass it to the server via a <code>Header</code>. There is also no <code>time</code> field in an <code>HTTP request</code>, according to the <a href="https://www.w3.org/Protocols/HTTP/HTRQ_Headers.html" rel="nofollow">w3c</a>.</p>
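One sketch of the generate-it-yourself approach: create a UUID per request and send it in a header. `X-Request-ID` is a widely used convention, not part of any HTTP standard, and the header dict here is illustrative rather than tied to any particular client library:

```python
import uuid

def new_request_id():
    """Generate a unique id to attach to an outgoing request."""
    return uuid.uuid4().hex

headers = {
    'Content-Type': 'application/json',
    'X-Request-ID': new_request_id(),
}
print(headers['X-Request-ID'])  # e.g. '9f4c...' (32 hex characters)
```

On the server side, a middleware could read this header (or generate one when it is missing) and attach it to the request object for logging.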
0
2016-10-13T11:07:44Z
[ "python", "django", "request", "django-rest-framework" ]
Inverting large JSON dictionary
40,016,192
<p>I have a JSON dictionary containing multiple entries (roughly 8 million) of the following form:</p> <pre><code>{"Some_String": {"Name0": 1, "Name1": 1, "Name42": 2, "Name5": 2, ... }, ...} </code></pre> <p>It contains strings that have been used to reference discrete named entities along with counts of how many times that name has been referenced by that string.</p> <p>I want to invert the mapping so that the Name0 is followed by the strings that have referenced it (maintaining the counts). A name is likely to appear within multiple string entries. </p> <pre><code>{"Name0": {"Some_String": 1, "Some_other_string": 1,... }, ...} </code></pre> <p>My question is: is there some JSON functionality that will allow me to efficiently do this?</p> <p>My naive approach involves adding each name into a 2D array (adding the strings and counts to that array as they are found). </p> <p>Initially this ran quite quickly, but as the size of the array increases the running time gets worse and worse (linear search). </p> <pre><code>for string in list(surface.keys()):
    for count, name in zip(surface[string].values(), surface[string].keys()):
        if name in pages:
            surface_count_list[pages.index(name)].append([string, count])
        else:
            pages.append(name)
            surface_count_list.append([string, count])
</code></pre> <p>I realise I could add this data directly into a new dictionary, but I didn't know if this would really increase the efficiency of adding new items as the size of the dictionary increases. </p> <p>Thanks. </p>
3
2016-10-13T08:42:23Z
40,016,604
<p>Something like</p> <pre><code>from collections import defaultdict

result = defaultdict(dict)
for somestring, namesdict in initialdata.items():
    for name, amount in namesdict.items():
        result[name][somestring] = amount
</code></pre> <p>would do it, but with 8 million items it may be time to look at databases.</p>
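A self-contained sketch of this approach, using made-up sample data shaped like the question's input:

```python
from collections import defaultdict

# Made-up sample data mirroring the question's structure
surface = {
    "Some_String": {"Name0": 1, "Name1": 1, "Name42": 2},
    "Some_other_string": {"Name0": 3},
}

inverted = defaultdict(dict)
for string, names in surface.items():
    for name, count in names.items():
        inverted[name][string] = count  # dict insertion is O(1) on average

print(dict(inverted["Name0"]))  # {'Some_String': 1, 'Some_other_string': 3}
```

Because dictionary lookups and insertions are constant time on average, this stays fast as the result grows, unlike the linear `pages.index(name)` search in the question.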
0
2016-10-13T09:01:59Z
[ "python", "arrays", "json", "dictionary" ]
Reading and closing large amount of files in new class result in OSError: Too many open files
40,016,280
<p>I have a large amount of files and I need to traverse them and search for some strings; when a string is found, the file is copied into a new folder, otherwise it is closed.</p> <p>Here is example code:</p> <pre><code>import os
import stringsfilter

def apply_filter(path, filter_dict):
    dirlist = os.listdir(path)
    for directory in dirlist:
        pwd = path + '/' + directory
        filelist = os.listdir(pwd)
        for filename in filelist:
            if filename.split('.')[-1] == "stats":
                sfilter = stringsfilter.StringsFilter(pwd, filename, filter_dict["strings"])
                sfilter.find_strings_and_move()
</code></pre> <p>and here is stringsfilter.py:</p> <pre><code>import main
import codecs
import os
import shutil

class StringsFilter:
    strings = None

    def __init__(self, filepath, filename, strings):
        self.filepath = filepath
        self.filename = filename
        self.strings = strings
        self.logger = main.get_module_logger("StringsFilter")
        self.file_desc = codecs.open(self.filepath + '/' + self.filename, 'r', encoding="utf-8-sig")
        self.logger.debug("[-] Strings: " + str(self.strings))
        self.logger.debug("[-] Instantiating class Strings Filter, filename: %s " % self.filename)

    def find_strings_and_move(self):
        for line in self.file_desc.readlines():
            for string in self.strings:
                if string in line:
                    self.move_to_folder()
                    return
        self.close()

    def move_to_folder(self):
        name = self.filename.split('.')[0]
        os.mkdir(self.filepath + '/' + name)
        shutil.copyfile(self.filepath + '/' + self.filename,
                        self.filepath + '/' + name + '/' + self.filename)
        self.close()

    def close(self):
        if self.file_desc:
            self.logger.debug("[-] Closing file %s" % self.filename)
            self.file_desc.close()
</code></pre> <p>main.py:</p> <pre><code>import logging

def get_module_logger(name):
    # create logger
    logger = logging.getLogger(name)
    # set logging level to log everything
    logger.setLevel(logging.DEBUG)
    # create file handler which logs everything
    fh = logging.FileHandler('files.log')
    fh.setLevel(logging.DEBUG)
    # create console handler
    ch = logging.StreamHandler()
    ch.setLevel(logging.INFO)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('[%(asctime)s] [%(name)-17s] [%(levelname)-5s] - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(fh)
    logger.addHandler(ch)
    return logger
</code></pre> <p>in the log I can see the following:</p> <pre><code>[2016-10-13 10:07:07,002] [StringsFilter    ] [DEBUG] - [-] Strings: ['DEVICE_PROBLEM']
[2016-10-13 10:07:07,002] [StringsFilter    ] [DEBUG] - [-] Instantiating class Strings Filter, filename: file1.stats
[2016-10-13 10:07:07,003] [StringsFilter    ] [DEBUG] - [-] Closing file file1.stats
[2016-10-13 10:07:07,003] [StringsFilter    ] [DEBUG] - [-] Strings: ['DEVICE_PROBLEM']
[2016-10-13 10:07:07,003] [StringsFilter    ] [DEBUG] - [-] Strings: ['DEVICE_PROBLEM']
[2016-10-13 10:07:07,004] [StringsFilter    ] [DEBUG] - [-] Instantiating class Strings Filter, filename: file2.stats
[2016-10-13 10:07:07,004] [StringsFilter    ] [DEBUG] - [-] Instantiating class Strings Filter, filename: file2.stats
[2016-10-13 10:07:07,004] [StringsFilter    ] [DEBUG] - [-] Closing file file2.stats
[2016-10-13 10:07:07,004] [StringsFilter    ] [DEBUG] - [-] Closing file file2.stats
[2016-10-13 10:07:07,005] [StringsFilter    ] [DEBUG] - [-] Strings: ['DEVICE_PROBLEM']
[2016-10-13 10:07:07,005] [StringsFilter    ] [DEBUG] - [-] Strings: ['DEVICE_PROBLEM']
[2016-10-13 10:07:07,005] [StringsFilter    ] [DEBUG] - [-] Strings: ['DEVICE_PROBLEM']
[2016-10-13 10:07:07,005] [StringsFilter    ] [DEBUG] - [-] Instantiating class Strings Filter, filename: file3.stats
[2016-10-13 10:07:07,005] [StringsFilter    ] [DEBUG] - [-] Instantiating class Strings Filter, filename: file3.stats
[2016-10-13 10:07:07,005] [StringsFilter    ] [DEBUG] - [-] Instantiating class Strings Filter, filename: file3.stats
[2016-10-13 10:07:07,006] [StringsFilter    ] [DEBUG] - [-] Closing file file3.stats
[2016-10-13 10:07:07,006] [StringsFilter    ] [DEBUG] - [-] Closing file file3.stats
[2016-10-13 10:07:07,006] [StringsFilter    ] [DEBUG] - [-] Closing file file3.stats
</code></pre> <p>And it goes on; it seems like with every iteration, each statement from <code>__init__</code> is done once more, until there are too many files open and the program ends with </p> <pre><code>OSError: [Errno 24] Too many files open </code></pre> <p>I can't understand why the statements from <code>__init__</code> are called multiple times each time the instance is created.</p>
1
2016-10-13T08:46:34Z
40,086,288
<p>The reason you have the same thing logged multiple times: every time <code>main.get_module_logger("StringsFilter")</code> is called, you call <code>logger.addHandler(...)</code> on <strong>the same logger</strong> returned from <code>logging.getLogger(name)</code>, so you accumulate multiple handlers on one logger. Better to make a module-level logger:</p> <pre><code>import ...

LOG = main.get_module_logger("StringsFilter")

class StringsFilter:
    ...
</code></pre> <p>Regarding the open files, I don't see the cause directly, but consider using the <code>with open(filename) as f:</code> syntax in <code>find_strings_and_move()</code>: </p> <pre><code>LOG = main.get_module_logger("StringsFilter")

class StringsFilter:
    strings = None

    def __init__(self, filepath, filename, strings):
        self.filepath = filepath
        self.filename = filename
        self.strings = strings
        LOG.debug("[-] Strings: " + str(self.strings))
        LOG.debug("[-] Instantiating class Strings Filter, filename: %s " % self.filename)

    def find_strings_and_move(self):
        with open(self.filepath + '/' + self.filename, 'r') as file_desc:
            lines = file_desc.readlines()
        for line in lines:
            for string in self.strings:
                if string in line:
                    self.move_to_folder()
                    return

    def move_to_folder(self):
        name = self.filename.split('.')[0]
        os.mkdir(self.filepath + '/' + name)
        shutil.copyfile(self.filepath + '/' + self.filename,
                        self.filepath + '/' + name + '/' + self.filename)
</code></pre> <p>This way you make sure the file is 1) closed before the move and 2) always closed.</p>
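The handler duplication is easy to reproduce in isolation. This sketch mirrors the shape of the question's `get_module_logger` (a `NullHandler` is used here so nothing is actually printed):

```python
import logging

def get_module_logger(name):
    # Same shape as the question's factory: always adds a new handler
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(logging.NullHandler())
    return logger

a = get_module_logger("StringsFilter")
b = get_module_logger("StringsFilter")
print(a is b)           # True: logging.getLogger caches loggers by name
print(len(a.handlers))  # 2: one extra handler per call, hence duplicated lines
```

So after instantiating `StringsFilter` N times, every log call is emitted by N handlers, which is exactly the repeated-line pattern in the log above.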
0
2016-10-17T12:22:26Z
[ "python", "python-3.4" ]
How to store the resulted image from OpenCV using Python in directory?
40,016,287
<p>I have experimenting with a python script which scales the images by 2 times and it is working fine, but the problem is how to store this resulted image in my disk so I can compare the results before and after.</p> <pre><code>import cv2 import numpy as np img = cv2.imread('input.jpg') res = cv2.resize(img,None,fx=2, fy=2, interpolation = cv2.INTER_CUBIC) </code></pre> <p>Resultant is stored in res variable but it should be created as new image. How?</p> <p>My desired output should be result.jpg</p> <p>What i got when printed res</p> <pre><code>&gt;&gt;&gt; res array([[[ 39, 43, 44], [ 40, 44, 44], [ 41, 45, 46], ..., [ 54, 52, 52], [ 52, 50, 50], [ 51, 49, 49]], [[ 38, 42, 44], [ 39, 43, 44], [ 41, 45, 46], ..., [ 55, 53, 53], [ 54, 52, 52], [ 53, 51, 51]], [[ 37, 40, 43], [ 38, 41, 44], [ 40, 43, 46], ..., [ 58, 56, 55], [ 56, 54, 54], [ 56, 53, 53]], ..., [[ 52, 135, 94], [ 54, 137, 95], [ 59, 141, 99], ..., [ 66, 139, 101], [ 62, 135, 96], [ 60, 133, 94]], [[ 47, 131, 89], [ 49, 133, 91], [ 55, 138, 96], ..., [ 56, 129, 91], [ 54, 127, 89], [ 54, 127, 88]], [[ 44, 128, 86], [ 47, 130, 88], [ 53, 136, 94], ..., [ 50, 123, 85], [ 50, 123, 85], [ 50, 123, 85]]], dtype=uint8) </code></pre>
-1
2016-10-13T08:47:01Z
40,016,514
<p>You can use the <code>cv2.imwrite</code> function, e.g. <code>cv2.imwrite('result.jpg', res)</code>, which writes the array stored in <code>res</code> to disk as an image file.</p> <p>You can find the description of this function <a href="http://docs.opencv.org/2.4.13/modules/highgui/doc/reading_and_writing_images_and_video.html" rel="nofollow">here</a>.</p>
1
2016-10-13T08:57:20Z
[ "python", "image", "python-2.7", "opencv", "numpy" ]
Index the first and the last n elements of a list
40,016,359
<p>The first n and the last n elements of the Python list</p> <pre><code>l=[1,2,3,4,5,6,7,8,9,10] </code></pre> <p>can be indexed by the expressions</p> <pre><code>print l[:3]
[1, 2, 3]
</code></pre> <p>and</p> <pre><code>print l[-3:]
[8, 9, 10]
</code></pre> <p>Is there a way to combine both in a single expression, i.e. index the first n and the last n elements using one indexing expression?</p>
5
2016-10-13T08:50:17Z
40,016,402
<p>Just concatenate the results:</p> <pre><code>l[:3] + l[-3:] </code></pre> <p>There is no dedicated syntax to combine disjoint slices.</p>
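If you need this in several places, one way to wrap it is a tiny helper (the function name is my own; note that once n exceeds half the sequence length the two slices overlap and elements repeat):

```python
def ends(seq, n):
    # First n and last n elements, concatenated
    return seq[:n] + seq[-n:]

l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(ends(l, 3))  # [1, 2, 3, 8, 9, 10]
print(ends(l, 6))  # overlap: [1, 2, 3, 4, 5, 6, 5, 6, 7, 8, 9, 10]
```

Because it only uses slicing and `+`, the same helper also works on strings and tuples.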
5
2016-10-13T08:52:13Z
[ "python", "list" ]
Index the first and the last n elements of a list
40,016,359
<p>The first n and the last n elements of the Python list</p> <pre><code>l=[1,2,3,4,5,6,7,8,9,10] </code></pre> <p>can be indexed by the expressions</p> <pre><code>print l[:3]
[1, 2, 3]
</code></pre> <p>and</p> <pre><code>print l[-3:]
[8, 9, 10]
</code></pre> <p>Is there a way to combine both in a single expression, i.e. index the first n and the last n elements using one indexing expression?</p>
5
2016-10-13T08:50:17Z
40,016,409
<p>No, but you can use:</p> <pre><code>l[:3] + l[-3:] </code></pre>
2
2016-10-13T08:52:29Z
[ "python", "list" ]
Index the first and the last n elements of a list
40,016,359
<p>The first n and the last n elements of the Python list</p> <pre><code>l=[1,2,3,4,5,6,7,8,9,10] </code></pre> <p>can be indexed by the expressions</p> <pre><code>print l[:3]
[1, 2, 3]
</code></pre> <p>and</p> <pre><code>print l[-3:]
[8, 9, 10]
</code></pre> <p>Is there a way to combine both in a single expression, i.e. index the first n and the last n elements using one indexing expression?</p>
5
2016-10-13T08:50:17Z
40,016,450
<p>Do you mean like:</p> <pre><code>l[:3]+l[-3:] </code></pre> <p>or, with variables:</p> <pre><code>l[:x]+l[-y:] </code></pre>
3
2016-10-13T08:54:06Z
[ "python", "list" ]
Index the first and the last n elements of a list
40,016,359
<p>The first n and the last n elements of the Python list</p> <pre><code>l=[1,2,3,4,5,6,7,8,9,10] </code></pre> <p>can be indexed by the expressions</p> <pre><code>print l[:3]
[1, 2, 3]
</code></pre> <p>and</p> <pre><code>print l[-3:]
[8, 9, 10]
</code></pre> <p>Is there a way to combine both in a single expression, i.e. index the first n and the last n elements using one indexing expression?</p>
5
2016-10-13T08:50:17Z
40,016,602
<p>If you are allowed to modify the list, you can use this:</p> <pre><code>del(a[n:-n]) </code></pre> <p>If not, create a new list first and then do the same:</p> <pre><code>b = [x for x in a]
del(b[n:-n])
</code></pre>
2
2016-10-13T09:01:55Z
[ "python", "list" ]
Python-like multiprocessing in C++
40,016,373
<p>I am new to C++, and I am coming from a long background of Python. </p> <p>I am searching for a way to run a function in parallel in C++. I have read a lot about <code>std::async</code>, but it is still not very clear to me. </p> <ol> <li><p>The following code does some really interesting things:</p> <pre><code>#include &lt;future&gt;
#include &lt;iostream&gt;

void called_from_async() {
  std::cout &lt;&lt; "Async call" &lt;&lt; std::endl;
}

int main() {
  //called_from_async launched in a separate thread if possible
  std::future&lt;void&gt; result( std::async(called_from_async));

  std::cout &lt;&lt; "Message from main." &lt;&lt; std::endl;

  //ensure that called_from_async is launched synchronously
  //if it wasn't already launched
  result.get();

  return 0;
}
</code></pre> <p>If I run it several times, sometimes the output is what I expected:</p> <pre><code>Message from main.
Async call
</code></pre> <p>But sometimes I get something like this:</p> <pre><code>MAessysnacg ec aflrlom main.
</code></pre> <p>Why doesn't the <code>cout</code> happen first? I clearly call the <code>.get()</code> method AFTER the <code>cout</code>. </p></li> <li><p>About the parallel runs. In case I have code like this:</p> <pre><code>#include &lt;future&gt;
#include &lt;iostream&gt;
#include &lt;vector&gt;

int twice(int m) {
  return 2 * m;
}

int main() {
  std::vector&lt;std::future&lt;int&gt;&gt; futures;

  for(int i = 0; i &lt; 10; ++i) {
    futures.push_back (std::async(twice, i));
  }

  //retrive and print the value stored in the future
  for(auto &amp;e : futures) {
    std::cout &lt;&lt; e.get() &lt;&lt; std::endl;
  }
  return 0;
}
</code></pre> <p>Will all 10 calls to the <code>twice</code> function run on separate cores simultaneously? </p> <p>If not, is there a similar thing in C++ to the Python <em>multiprocessing</em> lib? </p> <p>Mainly what I am searching for: I write a function and call it with n inputs, and it will run the function once on each of n nodes at the same time.</p></li> </ol>
4
2016-10-13T08:50:56Z
40,016,591
<p>1) <code>result.get();</code> does not start the thread. It only <strong>waits</strong> for the result. The parallel thread is launched by the <code>std::async(called_from_async)</code> call (or whenever the compiler decides).</p> <p>However, <code>std::cout</code> is guaranteed to be internally thread safe. So the result you are showing us <strong>should not</strong> ever happen. There's a race condition, but you can't mix both outputs like that. If it really happens (which I doubt), then you might be dealing with a compiler bug.</p> <p>2) Your calls will run in parallel. How many cores are used depends on the OS and the other processes running on your machine. But there's a good chance that all will be used (assuming that you have control over the whole ecosystem and no other cpu-intensive processes are running in the background).</p> <p>There is no multiprocessing-like lib for C++ (at least not in the std). If you wish to run subprocesses, then there are several options, e.g. forking or the <code>popen</code> syscall.</p>
5
2016-10-13T09:01:29Z
[ "python", "c++", "asynchronous", "multiprocessing" ]
Extract weight of an item from its description using regex in python
40,016,431
<p>I have a list of product descriptions. For example:</p> <pre><code>items = ['avuhovi Grillikaapeli 320g','Savuhovi Kisamakkara 320g',
         'Savuhovi Raivo 250g', 'AitoMaku str.garl.sal.dres.330ml',
         'Rydbergs 225ml Hollandaise sauce']
</code></pre> <p>I want to extract the weights, that is, 320g, 320g, 250g, 330ml and 225ml. I know we can use regex for this but do not know how to build a regex to extract that. You can see that the weights are sometimes in the middle of the description and sometimes have a dot(.) as separator rather than a space. So, I am confused about how to extract them.</p> <p>Thanks for help in advance :)</p>
3
2016-10-13T08:53:32Z
40,016,560
<p>Here is one solution that may work (using <code>search</code> and <code>group</code> as suggested by Wiktor):</p> <pre><code>&gt;&gt;&gt; for t in items:
...     re.search(r'([0-9]+(g|ml))', t).group(1)
...
'320g'
'320g'
'250g'
'330ml'
'225ml'
</code></pre> <p>Indeed a better solution (thanks Wiktor) would be to test if there is a match:</p> <pre><code>&gt;&gt;&gt; res = []
&gt;&gt;&gt; for t in items:
...     m = re.search(r'(\d+(g|ml))', t)
...     if m:
...         res.append(m.group(1))
...
&gt;&gt;&gt; print res
</code></pre>
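The same match-and-filter logic fits in one list comprehension. Here it is as a self-contained snippet in Python 3 syntax, using the question's sample data:

```python
import re

items = ['avuhovi Grillikaapeli 320g', 'Savuhovi Kisamakkara 320g',
         'Savuhovi Raivo 250g', 'AitoMaku str.garl.sal.dres.330ml',
         'Rydbergs 225ml Hollandaise sauce']

# search() returns None on no match, so filter with "if m"
matches = (re.search(r'(\d+(?:g|ml))', t) for t in items)
weights = [m.group(1) for m in matches if m]
print(weights)  # ['320g', '320g', '250g', '330ml', '225ml']
```

The `(?:...)` non-capturing group keeps `group(1)` pointing at the full weight, digits plus unit.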
5
2016-10-13T08:59:54Z
[ "python", "regex" ]
Extract weight of an item from its description using regex in python
40,016,431
<p>I have a list of product descriptions. For example:</p> <pre><code>items = ['avuhovi Grillikaapeli 320g','Savuhovi Kisamakkara 320g',
         'Savuhovi Raivo 250g', 'AitoMaku str.garl.sal.dres.330ml',
         'Rydbergs 225ml Hollandaise sauce']
</code></pre> <p>I want to extract the weights, that is, 320g, 320g, 250g, 330ml and 225ml. I know we can use regex for this but do not know how to build a regex to extract that. You can see that the weights are sometimes in the middle of the description and sometimes have a dot(.) as separator rather than a space. So, I am confused about how to extract them.</p> <p>Thanks for help in advance :)</p>
3
2016-10-13T08:53:32Z
40,016,633
<p><a href="https://regex101.com/r/gy5YTp/4" rel="nofollow">https://regex101.com/r/gy5YTp/4</a></p> <p>Match any run of digits with <code>\d+</code>, then create a matching but non-capturing group with <code>(?:ml|g)</code>; this will match ml or g. </p> <pre><code>import re

items = ['avuhovi Grillikaapeli 320g', 'Savuhovi 333ml Kisamakkara 320g',
         'Savuhovi Raivo 250g', 'AitoMaku str.garl.sal.dres.330ml',
         'Rydbergs 225ml Hollandaise sauce']

groupedWeights = [re.findall('(\d+(?:ml|g))', i) for i in items]
flattenedWeights = [y for x in groupedWeights for y in x]
print(flattenedWeights)
</code></pre> <p>The match returns a list of lists of the weights found, so we need to flatten it with <code>[y for x in groupedWeights for y in x]</code></p> <p>That is, if you ever have more than one weight in an element. Otherwise we can take the first element of each list like this:</p> <pre><code>weights = [re.findall('(\d+(?:ml|g))', i)[0] for i in items]
</code></pre>
-1
2016-10-13T09:03:24Z
[ "python", "regex" ]
How to schedule and cancel tasks with asyncio
40,016,501
<p>I am writing a client-server application. While connected, the client sends the server a "heartbeat" signal, for example, every second. On the server side I need a mechanism where I can add tasks (or coroutines or something else) to be executed asynchronously. Moreover, I want to cancel tasks from a client when it stops sending that "heartbeat" signal. </p> <p>In other words, when the server starts a task it has a kind of timeout or <em>ttl</em>, for example 3 seconds. When the server receives the "heartbeat" signal it resets the timer for another 3 seconds, until the task is done or the client disconnects (stops sending the signal).</p> <p>Here is an <a href="https://pymotw.com/3/asyncio/index.html" rel="nofollow">example</a> of cancelling a task from the <em>asyncio</em> tutorial on pymotw.com. But here the task is cancelled before the event loop starts, which is not suitable for me.</p> <pre><code>import asyncio

async def task_func():
    print('in task_func')
    return 'the result'

event_loop = asyncio.get_event_loop()
try:
    print('creating task')
    task = event_loop.create_task(task_func())

    print('canceling task')
    task.cancel()

    print('entering event loop')
    event_loop.run_until_complete(task)
    print('task: {!r}'.format(task))
except asyncio.CancelledError:
    print('caught error from cancelled task')
else:
    print('task result: {!r}'.format(task.result()))
finally:
    event_loop.close()
</code></pre>
0
2016-10-13T08:56:56Z
40,022,171
<p>You can use <code>asyncio</code> <code>Task</code> wrappers to execute a task via the <a href="https://docs.python.org/3/library/asyncio-task.html#asyncio.ensure_future" rel="nofollow"><code>ensure_future()</code></a> method.</p> <p><code>ensure_future</code> will automatically wrap your coroutine in a <code>Task</code> wrapper and attach it to your event loop. The <code>Task</code> wrapper will then also ensure that the coroutine 'cranks-over' from <code>await</code> to <code>await</code> statement (or until the coroutine finishes).</p> <p>In other words, just pass a regular coroutine to <code>ensure_future</code> and assign the resultant <code>Task</code> object to a variable. You can then call <a href="https://docs.python.org/3/library/asyncio-task.html#asyncio.Task" rel="nofollow"><code>Task.cancel()</code></a> when you need to stop it.</p> <pre><code>import asyncio

async def task_func():
    print('in task_func')

    # if the task needs to run for a while you'll need an await statement
    # to provide a pause point so that other coroutines can run in the mean time
    await some_db_or_long_running_background_coroutine()

    # or if this is a once-off thing, then return the result,
    # but then you don't really need a Task wrapper...
    # return 'the result'

async def my_app():
    my_task = None
    while True:
        await asyncio.sleep(0)

        # listen for trigger / heartbeat
        if heartbeat and not my_task:
            my_task = asyncio.ensure_future(task_func())

        # also listen for termination of heartbeat / connection
        elif not heartbeat and my_task:
            if not my_task.cancelled():
                my_task.cancel()
            else:
                my_task = None

run_app = asyncio.ensure_future(my_app())
event_loop = asyncio.get_event_loop()
event_loop.run_forever()
</code></pre> <p>Note that tasks are meant for long-running tasks that need to keep working in the background without interrupting the main flow. If all you need is a quick once-off method, then just call the function directly instead.</p>
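A minimal, runnable illustration of cancelling a task after the loop has started (the heartbeat bookkeeping is omitted; `asyncio.run` requires Python 3.7+, on older versions drive `main()` with `loop.run_until_complete` instead):

```python
import asyncio

async def worker():
    # Stands in for a long-running background task
    await asyncio.sleep(10)
    return 'finished'

async def main():
    task = asyncio.ensure_future(worker())
    await asyncio.sleep(0)  # give the task a chance to start
    task.cancel()           # request cancellation at its next await point
    try:
        await task
    except asyncio.CancelledError:
        return 'cancelled'
    return task.result()

result = asyncio.run(main())
print(result)  # cancelled
```

Unlike the pymotw example in the question, the cancellation here happens while the loop is running, which is the server-side situation described: a heartbeat watcher can call `task.cancel()` whenever its timer expires.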
2
2016-10-13T13:19:37Z
[ "python", "python-asyncio" ]
From 3D to 2D-array Basemap plot (Python)
40,016,519
<p>I'm trying to plot data on a map (basemap) according lon/lat arrays with the following shapes:</p> <pre><code>lon (Ntracks, Npoints) lat (Ntracks, Npoints) data(Ntracks, Npoints, Ncycles) </code></pre> <p>The problem is that, with the code I did below, nothing is displayed on the map (and it's terribly slow! even close to out of memory error...). I'm probably using the wrong method. Can I have your help to be able to convert this 3D-data array to a 2D one to use the pcolormesh function ?</p> <pre><code>map = Basemap(projection='mill',lat_ts=10,\ llcrnrlon= -11.01472, \ urcrnrlon= 2.769825, \ llcrnrlat= 43.292, \ urcrnrlat= 52.10833, \ resolution='l') # Zone lonmin, lonmax = map.lonmin, map.lonmax latmin, latmax = map.latmin, map.latmax # Missing value misval = 99.999 # Masks on Lon/Lat arrays lon = np.ma.masked_where(np.isnan(lon) == True, lon) lat = np.ma.masked_where(np.isnan(lat) == True, lat) lon = np.ma.masked_where(lon &lt;= lonmin, lon) lon = np.ma.masked_where(lon &gt;= lonmax, lon) lat = np.ma.masked_where(lat &lt;= latmin, lat) lat = np.ma.masked_where(lat &gt;= latmax, lat) # Dimensions Ntracks, Npoints, Ncycles = data.shape ''' #----------------- # TRACKS LOCATIONS # IT WORKS =&gt; SEE FIGURE POSTED BELOW #----------------- for track in range(Ntracks): x, y = map( *np.meshgrid( lon[track,:], lat[track,:] )) y = y.T map.plot(x, y, marker= 'o', \ markersize=1.5, \ color='#0B610B', markeredgecolor='#0B610B') ''' #----------------- # TRACKS DATA # =&gt; NOT WORKING !! 
#----------------- Data = np.full( (Npoints*Npoints), misval ) for track in range(Ntracks): x, y = map( *np.meshgrid( lon[track,:], lat[track,:] )) y = y.T for c in range(Ncycles): Data[0:Npoints] = data[track,:,c] Data = np.ma.masked_where(np.isnan(Data) == True, Data) Data = np.ma.masked_where(Data &gt;= misval, Data) Data = Data.reshape( (Npoints, Npoints) ) # plot Data on the map map.pcolormesh(x, y, Data) parallels = np.arange( latmin, latmax, 2., dtype= 'int') meridians = np.arange( lonmin, lonmax, 2., dtype= 'int') map.drawcoastlines(linewidth=1.5, color='white') map.drawcountries(linewidth=1.5, color='grey') map.drawparallels(parallels, labels= [1,0,0,0], color= 'black', fontsize=16, linewidth=0) map.drawmeridians(meridians, labels= [0,0,0,1], color= 'black', fontsize= 16, linewidth=0) map.drawmapboundary(color='k',linewidth=2.0) map.fillcontinents(color='#cdc5bf') </code></pre> <p>This code generates no error but no data is dispayed on the map...<br> Here is an example of what the code is returning in output for Ntracks=1 and Ncycles=4 :</p> <pre><code>FILE 0 TRACK 011 CYCLE 1 [[-- 60.779693603515625 60.658145904541016 ..., 0.0 0.0 0.0] [-- 60.779693603515625 60.658145904541016 ..., 0.0 0.0 0.0] [-- 60.779693603515625 60.658145904541016 ..., 0.0 0.0 0.0] ..., [-- 60.779693603515625 60.658145904541016 ..., 0.0 0.0 0.0] [-- 60.779693603515625 60.658145904541016 ..., 0.0 0.0 0.0] [-- 60.779693603515625 60.658145904541016 ..., 0.0 0.0 0.0]] CYCLE 2 [[-- 60.7838249206543 60.666202545166016 ..., 0.0 0.0 0.0] [-- 60.7838249206543 60.666202545166016 ..., 0.0 0.0 0.0] [-- 60.7838249206543 60.666202545166016 ..., 0.0 0.0 0.0] ..., [-- 60.7838249206543 60.666202545166016 ..., 0.0 0.0 0.0] [-- 60.7838249206543 60.666202545166016 ..., 0.0 0.0 0.0] [-- 60.7838249206543 60.666202545166016 ..., 0.0 0.0 0.0]] CYCLE 3 [[-- -- -- ..., 0.0 0.0 0.0] [-- -- -- ..., 0.0 0.0 0.0] [-- -- -- ..., 0.0 0.0 0.0] ..., [-- -- -- ..., 0.0 0.0 0.0] [-- -- -- ..., 0.0 0.0 0.0] [-- -- -- 
..., 0.0 0.0 0.0]] CYCLE 4 [[-- -- -- ..., 0.0 0.0 0.0] [-- -- -- ..., 0.0 0.0 0.0] [-- -- -- ..., 0.0 0.0 0.0] ..., [-- -- -- ..., 0.0 0.0 0.0] [-- -- -- ..., 0.0 0.0 0.0] [-- -- -- ..., 0.0 0.0 0.0]] </code></pre> <p>I should have something like this (and do a treatment close to 0° lon) <a href="https://i.stack.imgur.com/RyaUw.png" rel="nofollow">Tracks locations</a></p>
1
2016-10-13T08:57:43Z
40,110,849
<p>This map seems to be a very nice tool, thank you for the hint :)</p> <p>If I get it right, Data should be a 2D-Array containing a value for each x-y-pair? But you fill only the first Npoints entries of Data, and then you just reshape it -- so you did not write anything in any rows besides the first of your Npoints x Npoints matrix. This is probably not intended, you want to fill the i-th point into the [i, i]-element of the matrix.</p> <p>So create the matrix with <code>Data = np.full((Npoints, Npoints), np.nan)</code> instead of the 1D-matrix with length N*N you were creating. And then <code>np.fill_diagonal(Data, data[track, :, c])</code></p> <p>And I'm not sure if you are right there:</p> <blockquote> <p>If <code>latlon</code> keyword is set to True, x,y are intrepreted as longitude and latitude in degrees. [...]</p> </blockquote> <p>is this not what you want?</p>
0
2016-10-18T14:23:43Z
[ "python", "arrays", "numpy" ]
Compare items in list if tuples Python
40,016,521
<p>I have this list of tuples:</p> <pre><code>l1 = [('aa', 1),('de', 1),('ac', 3),('ab', 2),('de', 2),('bc', 4)] </code></pre> <p>I want to loop over it, and check if the second item in each tuple is the same as the second item in another tuple in this list. If it is, I want to put this tuple in a new list of tuples.</p> <p>So from this list of tuples I would expect the following new list of tuples:</p> <pre><code>l2 = [('aa', 1), ('de', 1), ('ab', 2),('de', 2)] </code></pre> <p>Because the ones and the twos match. Right now I have this:</p> <pre><code>l2 = []
for i in range(len(l1)):
    if l1[i][1] in l1[:][0]:
        l2.append(l1[i])
</code></pre> <p>However, this only gives me:</p> <pre><code>l2 = [('aa', 1), ('de', 1)] </code></pre> <p>I'm pretty sure I'm not indexing the list of tuples right. The solution is probably pretty simple, but I'm just not getting there. </p>
0
2016-10-13T08:57:54Z
40,016,588
<p>You'll need two passes: one to count how many times each second element occurs, and another to then build the new list based on those counts:</p> <pre><code>from collections import Counter

counts = Counter(id_ for s, id_ in l1)
l2 = [(s, id_) for s, id_ in l1 if counts[id_] &gt; 1]
</code></pre> <p>Demo:</p> <pre><code>&gt;&gt;&gt; from collections import Counter
&gt;&gt;&gt; l1 = [('aa', 1),('de', 1),('ac', 3),('ab', 2),('de', 2),('bc', 4)]
&gt;&gt;&gt; counts = Counter(id_ for s, id_ in l1)
&gt;&gt;&gt; [(s, id_) for s, id_ in l1 if counts[id_] &gt; 1]
[('aa', 1), ('de', 1), ('ab', 2), ('de', 2)]
</code></pre> <p>Your code goes wrong with <code>l1[:][0]</code>; <code>l1[:]</code> just creates a <em>shallow copy</em> of the list, and then <code>[0]</code> takes the first element from that copy. Even if it worked, your approach would have to check <em>every other element in your list</em> for every tuple you consider, which is really inefficient (the code would take N**2 steps for a list of N tuples).</p>
3
2016-10-13T09:01:25Z
[ "python", "python-2.7" ]
How to schedule a job to execute Python script in cloud to load data into bigquery?
40,016,694
<p>I am trying to set up a scheduled job/process in the cloud to load CSV data into BigQuery from Google buckets using a Python script. I have managed to get hold of the Python code to do this, but I am not sure where I need to save the code so that the task runs as an automated process, rather than me running the gsutil commands manually. </p>
2
2016-10-13T09:07:02Z
40,018,591
<p><a href="https://cloud.google.com/solutions/reliable-task-scheduling-compute-engine" rel="nofollow">Reliable Task Scheduling on Google Compute Engine | Solutions | Google Cloud Platform</a>, the 1st link on Google for "google cloud schedule a cron job", gives a high-level overview. <a href="https://cloud.google.com/appengine/docs/python/config/cron" rel="nofollow">Scheduling Tasks With Cron for Python | App Engine standard environment for Python | Google Cloud Platform</a>, the 2nd link, has step-by-step instructions. They boil down to:</p> <ul> <li>Create <code>cron.yaml</code> in the specified format alongside your <code>app.yaml</code></li> <li>optionally test it at a development server</li> <li>upload it to the Google Cloud with <code>appcfg.py update</code> or <code>update_cron</code></li> </ul>
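For reference, a `cron.yaml` in the documented App Engine format looks like the sketch below. The URL is hypothetical; it would point at a handler in your app (for example, one that runs the BigQuery load script) and App Engine hits it on the given schedule:

```yaml
cron:
- description: load CSV from GCS into BigQuery
  url: /tasks/load_csv        # hypothetical handler that runs your load script
  schedule: every 24 hours
```

The `schedule` field accepts phrases like `every 5 minutes` or `every day 04:00` per the cron.yaml documentation.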
1
2016-10-13T10:34:38Z
[ "python", "google-bigquery", "google-cloud-platform", "gsutil" ]
How to detect if a string is already utf8-encoded?
40,016,832
<p>I have some strings like this:</p> <pre><code>u'ThaÃ¯lande' </code></pre> <p>This was "Thaïlande" and I don't know how it's been encoded, but I need to bring it back to "Thaïlande", then URL-encode it.</p> <p>Is there a way to guess if a string has already been encoded with Python 2?</p>
0
2016-10-13T09:13:44Z
40,016,915
<p>You have what is called a <a href="https://en.wikipedia.org/wiki/Mojibake" rel="nofollow">Mojibake</a>. You could use statistical analysis to see if there is a unusual number of Latin-1 characters in there in a combination typical of UTF-8 bytes, or if there are any CP1252-specific characters in there.</p> <p>There already is a package that does this for you <em>and</em> repairs the damage if a Mojibake is detected: <a href="https://ftfy.readthedocs.io/en/latest/" rel="nofollow"><code>ftfy</code></a>:</p> <blockquote> <p>The goal of ftfy is to take in bad Unicode and output good Unicode, for use in your Unicode-aware code.</p> </blockquote> <p>and</p> <blockquote> <p>The ftfy.fix_encoding() function will look for evidence of mojibake and, when possible, it will undo the process that produced it to get back the text that was supposed to be there.</p> <p>Does this sound impossible? It’s really not. UTF-8 is a well-designed encoding that makes it obvious when it’s being misused, and a string of mojibake usually contains all the information we need to recover the original string.</p> </blockquote>
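For this particular string the damage is the classic UTF-8-read-as-Latin-1 round trip, which you can also undo by hand. This assumes Latin-1 (or CP1252) really was the wrong codec used; figuring that out automatically is exactly what `ftfy` does for you:

```python
mojibake = u'Tha\xc3\xaflande'  # displays as 'ThaÃ¯lande'

# Re-encode with the codec that was wrongly applied, then decode as UTF-8
fixed = mojibake.encode('latin-1').decode('utf-8')
print(fixed)  # Thaïlande
```

If the round trip raises a `UnicodeDecodeError`, the string most likely was not a Latin-1 Mojibake in the first place, which is itself a useful signal.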
1
2016-10-13T09:16:57Z
[ "python", "python-2.7", "character-encoding" ]
Change Python version for evaluating file with SublimeREPL plugin
40,016,838
<p>I am using Sublim Text 3 on Mac OS X El Capitan. What I need to do is to evaluate a Python file within Sublime Text 3.</p> <p>I have installed <code>Package Control</code> and then <code>SublimREPL</code> plugins.</p> <p>I have set up a 2 rows layout (<code>View &gt; Layout &gt; Rows: 2</code>) in order to display a Python interpreter in the second part of the screen.</p> <p>I then launch the Python interpreter with <code>Tools &gt; Command Palette... &gt; SublimeREPL: Python</code>.</p> <p>The interpreter starts correctly and I get this:</p> <p><a href="https://i.stack.imgur.com/TXBS0.png" rel="nofollow"><img src="https://i.stack.imgur.com/TXBS0.png" alt="enter image description here"></a></p> <p>I can't find how to start with Python 3.5 that I have downloaded manually (thus installed in <code>/usr/local/bin/</code>). I have tried to modify this file: <code>/Library/Application Support/Sublime Text 3/Packages/SublimeREPL/Config/Python/Main.sublime-menu</code> following <a href="http://stackoverflow.com/a/17575258/2508539">this post</a> instructions, but this did not change anything (Python 2.7.10 still launched).</p> <p>Here is the content of my <code>Main.sublime-menu</code>:</p> <pre><code>[ { "id": "tools", "children": [{ "caption": "SublimeREPL", "mnemonic": "R", "id": "SublimeREPL", "children": [ {"caption": "Python", "id": "Python", "children":[ {"command": "repl_open", "caption": "Python", "id": "repl_python", "mnemonic": "P", "args": { "type": "subprocess", "encoding": "utf8", "cmd": ["python", "-i", "-u"], "cwd": "$file_path", "syntax": "Packages/Python/Python.tmLanguage", "external_id": "python", "extend_env": {"PYTHONIOENCODING": "utf-8"} } }, {"command": "python_virtualenv_repl", "id": "python_virtualenv_repl", "caption": "Python - virtualenv"}, {"command": "repl_open", "caption": "Python - PDB current file", "id": "repl_python_pdb", "mnemonic": "D", "args": { "type": "subprocess", "encoding": "utf8", "cmd": ["python", "-i", "-u", "-m", "pdb", 
"$file_basename"], "cwd": "$file_path", "syntax": "Packages/Python/Python.tmLanguage", "external_id": "python", "extend_env": {"PYTHONIOENCODING": "utf-8"} } }, {"command": "repl_open", "caption": "Python - RUN current file", "id": "repl_python_run", "mnemonic": "R", "args": { "type": "subprocess", "encoding": "utf8", "cmd": ["python", "-u", "$file_basename"], "cwd": "$file_path", "syntax": "Packages/Python/Python.tmLanguage", "external_id": "python", "extend_env": {"PYTHONIOENCODING": "utf-8"} } }, {"command": "repl_open", "caption": "Python - IPython", "id": "repl_python_ipython", "mnemonic": "I", "args": { "type": "subprocess", "encoding": "utf8", "autocomplete_server": true, "cmd": { "osx": ["python", "-u", "${packages}/SublimeREPL/config/Python/ipy_repl.py"], "linux": ["python", "-u", "${packages}/SublimeREPL/config/Python/ipy_repl.py"], "windows": ["python", "-u", "${packages}/SublimeREPL/config/Python/ipy_repl.py"] }, "cwd": "$file_path", "syntax": "Packages/Python/Python.tmLanguage", "external_id": "python", "extend_env": { "PYTHONIOENCODING": "utf-8", "SUBLIMEREPL_EDITOR": "$editor" } } } ]} ] }] } ] </code></pre> <p>Still following <a href="http://stackoverflow.com/a/17575258/2508539">this post</a>'s advice, I modified the part of code below, but I can't find any exe file in the folder <code>/usr/local/bin/</code>:</p> <pre><code>{"command": "repl_open", "caption": "Python - PDB current file", "id": "repl_python_pdb", "mnemonic": "D", "args": { "type": "subprocess", "encoding": "utf8", "cmd": ["/usr/local/bin/python3", "-i", "-u", "-m", "pdb", "$file_basename"], "cwd": "$file_path", "syntax": "Packages/Python/Python.tmLanguage", "external_id": "python", "extend_env": {"PYTHONIOENCODING": "utf-8"} } } </code></pre> <p>When I press <code>Ctrl</code> + <code>,</code> + <code>f</code> (according to the <a href="https://github.com/wuub/SublimeREPL/blob/master/README.md" rel="nofollow">doc</a>), the interpreter still starts with Python 2.7.10.</p>
0
2016-10-13T09:13:56Z
40,131,394
<p>It appears you are modifying the incorrect configuration files and perhaps a few things are causing issues.</p> <blockquote> <p>I then launch the Python interpreter with <code>Tools &gt; Command Palette... &gt; SublimeREPL: Python</code></p> </blockquote> <p>There is no command palette item "SublimeREPL: Python", so I'm assuming you mean <code>Tools &gt; SublimeREPL &gt; Python &gt; Python</code>. That opens in a tab something like the following:</p> <pre><code># *REPL* [python] Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; </code></pre> <p>In fact <code>Tools &gt; SublimeREPL &gt; Python</code> displays a menu something like the following:</p> <pre><code>Python - execnet Python Python - virtualenv Python - PDB current file Python - RUN current file Python - IPython </code></pre> <p>So far so good. But how to change the Python version?</p> <p>Ideally we could either configure the global python to use (that doesn't seem possible) or add a version variant to the menus described above (that doesn't seem possible either).</p> <p>The most straightforward workaround is adding a custom version variant.</p> <p>Create a file named <code>Packages/User/Menu.sublime-menu</code> (if it doesn't already exist). Find the User packages directory via <code>Menu &gt; Preferences &gt; Browse Packages...</code>. Then create your menu in that file. 
This menu will be <strong>added</strong> to the existing menus.</p> <p>For example:</p> <p><code>Packages/User/Menu.sublime-menu</code></p> <pre><code>[ { "id": "tools", "children": [{ "caption": "SublimeREPL Variants", "id": "SublimeREPLVariants", "children": [ { "command": "repl_open", "caption": "Python - PDB current file", "id": "repl_python_pdb", "mnemonic": "D", "args": { "type": "subprocess", "encoding": "utf8", "cmd": ["/usr/bin/python3", "-i", "-u", "-m", "pdb", "$file_basename"], "cwd": "$file_path", "syntax": "Packages/Python/Python.tmLanguage", "external_id": "python", "extend_env": {"PYTHONIOENCODING": "utf-8"} } } ] }] } ] </code></pre> <p>It is important that:</p> <ol> <li><p>The caption and id are not the same as any other menu otherwise it will replace the other menu, hence I've named the one above "SublimeREPL Variants".</p></li> <li><p>The "cmd" uses an absolute path to a valid binary. You can check you are using the correct path with a commands like <code>which</code>:</p> <p>Find the location of Python via which:</p> <pre><code>$ which python /usr/bin/python $ which python3 /usr/bin/python3 $ which python2 /usr/bin/python2 </code></pre> <p>See also <a href="https://stackoverflow.com/questions/40024713/possible-to-switch-between-python-2-and-3-in-sublime-text-3-build-systems-wind/40031389#40031389">Possible to switch between python 2 and 3 in Sublime Text 3 build systems? (Windows)</a> for additional details.</p></li> </ol>
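Once the custom variant is wired up, a quick way to confirm which interpreter a REPL tab actually launched is to ask Python itself from inside the tab (plain Python, independent of the menu configuration):

```python
import sys

# Shows which binary and which version this REPL/interpreter is running,
# so you can tell at a glance whether the 3.x variant took effect.
print(sys.executable)
print(".".join(map(str, sys.version_info[:3])))
```

If the output still points at the system Python 2.7, the menu entry's absolute `cmd` path is the first thing to re-check.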
0
2016-10-19T12:27:29Z
[ "python", "osx", "sublimetext3", "osx-elcapitan", "sublimerepl" ]
Scraping dynamic website to get elements in <script tag> using BeautifulSoup and Selenium
40,016,884
<p>I am trying to scrape a dynamic website using BeautifulSoup and Selenium. The attributes I would like to filter and put into a CSV are contained within a <code>&lt;script&gt;</code> tag. I would like to extract the data contained in the script below.</p> <p>Script: window.IS24 = window.IS24 || {}; IS24.ssoAppName = "search"; IS24.applicationContext = "/Suche/error-reporter"; IS24.ab = {}; IS24.feature = {"SEARCH_BY_TELEKOM_SPEED_ENABLED":true, IS24.resultList = { angularDebugInfoEnabled: false, navigationBarUrl: "/Suche/S-T/Haus-Kauf",</p> <pre><code> nextPage: "/Suche/S-T/P-2/Haus-Kauf?pagerReporting=true", searchUrl: "/Haus-Kauf", isMobile: false, isTablet: false, query: {"realEstateType":"HOUSE_BUY","otpEnabled":true,"sortingCode":0,"location": {"isGeoHierarchySearch":true, Schulze","referrer":["RESULT_LIST_GROUPED"],"attributes":[ {"title":"Kaufpreis","value":"249.012,75 €"}, {"title":"Wohnfläche","value":"129,87 m²"},{"title":"Zimmer","value":"4"}, {"title":"Grundstück","value":"400 m²"}],"checkedAttributes":["Gäste- </code></pre> <p>I am not sure how to extract the attributes at the end into a CSV. Can you please help me with the code?</p>
0
2016-10-13T09:15:53Z
40,017,181
<p>Here is how you can pull out attribute values from a tag with BeautifulSoup (note that <code>urllib2</code> is Python 2 only; on Python 3 use <code>urllib.request</code> instead):</p> <pre><code>import urllib2 from bs4 import BeautifulSoup req = urllib2.Request('http://website_to_grab_things_from.com') response = urllib2.urlopen(req) html = response.read() soup = BeautifulSoup(html, "html.parser") alltext = soup.getText() #soup.findAll('TAGNAME', {'ATTR_NAME' :'ATTR_VALUE'}) result = soup.findAll('div', {'class' :'teaser-text'}) </code></pre>
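Since the values the question needs live inside inline JavaScript rather than in HTML attributes, one option after grabbing the script text is to pull the embedded JSON array out with a regular expression and parse it with the `json` module. A minimal sketch — the `script` string here is a trimmed, hypothetical stand-in for the page's real inline script:

```python
import json
import re

# Simplified stand-in for the inline script text from the page.
script = ('IS24.resultList = { "attributes":['
          '{"title":"Kaufpreis","value":"249.012,75"},'
          '{"title":"Zimmer","value":"4"}] };')

# Grab the JSON array that follows "attributes": (non-greedy, up to
# the first closing bracket) and parse it with the json module.
match = re.search(r'"attributes":(\[.*?\])', script)
attributes = json.loads(match.group(1))
rows = [(a["title"], a["value"]) for a in attributes]
# rows is now a list of (title, value) pairs, ready for csv.writer
```

The non-greedy `\[.*?\]` works here because the objects inside the array contain no nested `]`; for deeply nested structures a proper JavaScript/JSON parser would be safer.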
0
2016-10-13T09:27:58Z
[ "jquery", "python", "selenium", "web-scraping", "beautifulsoup" ]
Python to update JSON file
40,016,895
<p>After googling around and checking some other posts I still haven't found a solution for my issue.</p> <p>Let me quickly explain what I'm after:</p> <p>I've got a JSON configuration file with the following syntax:</p> <pre><code> [ { "name": "Name 1", "provider": "Provider 1", "url": "/1", "source": "URL" }, { "name": "Name 2", "provider": "Provider 2", "url": "/1", "source": "URL 2" } ] </code></pre> <p>My issue is that the key <code>source</code> changes, and I've got a script generating these; however, I'm not able to find a way to update them automatically. Ideally I would tell my Python script to look out for "Provider 1" and then update the source with the new URL for Provider 1, but my JSON format does not include individual keys, hence I need to search via the provider name and update the source.</p> <p>Hope this makes sense; any help appreciated.</p>
0
2016-10-13T09:16:23Z
40,017,401
<p>You can try something similar to the code below:</p> <pre><code> myList = [ { "name": "Name 1", "provider": "Provider 1", "url": "/1", "source": "URL" }, { "name": "Name 2", "provider": "Provider 2", "url": "/1", "source": "URL 2" } ] def insertOrUpdate(updatedObj): existing = [x for x in myList if x['provider'] == updatedObj['provider']] if len(existing) == 1: existing[0].update(updatedObj) # overwrite url/source on the matching entry else: myList.append(updatedObj) insertOrUpdate({"name": "Name 1", "provider": "Provider 1", "url": "updated Url", "source": "URL"}) insertOrUpdate({"name": "Name X", "provider": "Provider X", "url": "new Url", "source": "URL"}) </code></pre>
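If lookups by provider happen often, it can also be convenient to index the entries by provider in memory; since the dict objects are shared, the original list-of-dicts layout still reflects every change when you write it back out. A small sketch of that idea (the replacement URL is made up):

```python
config = [
    {"name": "Name 1", "provider": "Provider 1", "url": "/1", "source": "URL"},
    {"name": "Name 2", "provider": "Provider 2", "url": "/1", "source": "URL 2"},
]

# Index entries by provider for O(1) updates.
by_provider = {entry["provider"]: entry for entry in config}
by_provider["Provider 1"]["source"] = "http://new.example/url"

# The original list sees the change, because both structures
# hold references to the same dict objects.
updated = [entry["source"] for entry in config]
```

This assumes provider names are unique; with duplicates, the dict comprehension keeps only the last entry per provider.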
0
2016-10-13T09:37:41Z
[ "python", "json" ]
Python to update JSON file
40,016,895
<p>After googling around and checking some other posts I still haven't found a solution for my issue.</p> <p>Let me quickly explain what I'm after:</p> <p>I've got a JSON configuration file with the following syntax:</p> <pre><code> [ { "name": "Name 1", "provider": "Provider 1", "url": "/1", "source": "URL" }, { "name": "Name 2", "provider": "Provider 2", "url": "/1", "source": "URL 2" } ] </code></pre> <p>My issue is that the key <code>source</code> changes, and I've got a script generating these; however, I'm not able to find a way to update them automatically. Ideally I would tell my Python script to look out for "Provider 1" and then update the source with the new URL for Provider 1, but my JSON format does not include individual keys, hence I need to search via the provider name and update the source.</p> <p>Hope this makes sense; any help appreciated.</p>
0
2016-10-13T09:16:23Z
40,017,726
<p>Assuming <code>test.json</code> is your json file containing the text you inserted in the question, you can do the following:</p> <pre><code>import json with open('test.json') as my_file: json_content = json.load(my_file) for ele in json_content: print(ele) </code></pre> <p>{'name': 'Name 1', 'provider': 'Provider 1', 'source': 'URL', 'url': '/1'}</p> <p>{'name': 'Name 2', 'provider': 'Provider 2', 'source': 'URL 2', 'url': '/1'}</p> <pre><code>new_url = 'something. Define it as you want.' for ele in json_content: if ele['provider'] == 'Provider 1': ele['source'] = new_url for ele in json_content: print(ele) </code></pre> <p>{'name': 'Name 1', 'provider': 'Provider 1', 'source': 'something. Define it as you want.', 'url': '/1'}</p> <p>{'name': 'Name 2', 'provider': 'Provider 2', 'source': 'URL 2', 'url': '/1'}</p>
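Loading and modifying only changes the data in memory; to update the JSON file itself, write the modified list back out with `json.dump`. A self-contained round-trip sketch using a temporary file in place of the real config:

```python
import json
import os
import tempfile

data = [{"name": "Name 1", "provider": "Provider 1",
         "url": "/1", "source": "URL"}]

# Create a throwaway file standing in for the real config file.
path = os.path.join(tempfile.mkdtemp(), "test.json")
with open(path, "w") as fh:
    json.dump(data, fh)

# Load, update the entry for Provider 1, and persist the change.
with open(path) as fh:
    content = json.load(fh)
for ele in content:
    if ele["provider"] == "Provider 1":
        ele["source"] = "new URL"
with open(path, "w") as fh:
    json.dump(content, fh, indent=4)

with open(path) as fh:
    reloaded = json.load(fh)
```

`indent=4` keeps the file human-readable, matching the formatting shown in the question.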
0
2016-10-13T09:51:37Z
[ "python", "json" ]
Python to update JSON file
40,016,895
<p>After googling around and checking some other posts I still haven't found a solution for my issue.</p> <p>Let me quickly explain what I'm after:</p> <p>I've got a JSON configuration file with the following syntax:</p> <pre><code> [ { "name": "Name 1", "provider": "Provider 1", "url": "/1", "source": "URL" }, { "name": "Name 2", "provider": "Provider 2", "url": "/1", "source": "URL 2" } ] </code></pre> <p>My issue is that the key <code>source</code> changes, and I've got a script generating these; however, I'm not able to find a way to update them automatically. Ideally I would tell my Python script to look out for "Provider 1" and then update the source with the new URL for Provider 1, but my JSON format does not include individual keys, hence I need to search via the provider name and update the source.</p> <p>Hope this makes sense; any help appreciated.</p>
0
2016-10-13T09:16:23Z
40,018,098
<p>Thank you very much, both answers work. I just need to adapt some more code in order to make it ready; as @MMF suggested, the script should then run via cron.</p> <p>Thanks a lot for your help guys!!</p>
0
2016-10-13T10:09:28Z
[ "python", "json" ]
Tensorflow: Fine tune Inception model
40,016,933
<p>For a few days I have been following the instructions here: <a href="https://github.com/tensorflow/models/tree/master/inception" rel="nofollow">https://github.com/tensorflow/models/tree/master/inception</a> for fine-tuning the Inception model. The problem is that my dataset is huge, so converting it to TFRecords format would fill my entire hard-disk space. Is there a way of fine-tuning without using this format? Thanks!</p>
0
2016-10-13T09:17:34Z
40,031,748
<p>Fine-tuning is independent of the data format; you're fine there. TFRecords improves training and scoring speed; it shouldn't affect the number of iterations or epochs needed, nor the final classification accuracy.</p>
0
2016-10-13T22:00:53Z
[ "python", "machine-learning", "tensorflow", "deep-learning" ]
Regular expression finding '\n'
40,016,950
<p>I'm in the process of making a program to pattern match phone numbers in text.</p> <p>I'm loading this text:</p> <pre><code>(01111-222222)fdf 01111222222 (01111)222222 01111 222222 01111.222222 </code></pre> <p>Into a variable, and using "findall" it's returning this:</p> <pre><code>('(01111-222222)', '(01111', '-', '222222)') ('\n011112', '', '\n', '011112') ('(01111)222222', '(01111)', '', '222222') ('01111 222222', '01111', ' ', '222222') ('01111.222222', '01111', '.', '222222') </code></pre> <p>This is my expression:</p> <pre><code>ex = re.compile(r"""( (\(?0\d{4}\)?)? # Area code (\s*\-*\.*)? # seperator (\(?\d{6}\)?) # Local number )""", re.VERBOSE) </code></pre> <p>I don't understand why the '\n' is being caught.</p> <p>If the <code>*</code> in '<code>\\.*</code>' is replaced by '<code>+</code>', the expression works as I want it to. Or if I simply remove the <code>*</code> (and am happy to find the two sets of numbers separated by only a single period), the expression works.</p>
4
2016-10-13T09:18:24Z
40,017,060
<p>The <code>\s</code> matches both <em>horizontal</em> and <em>vertical</em> whitespace symbols. If you use <code>re.VERBOSE</code>, you can match a normal space with an escaped space <code>\ </code>. Or, you may exclude <code>\r</code> and <code>\n</code> from <code>\s</code> with <code>[^\S\r\n]</code> to match horizontal whitespace.</p> <p>Use</p> <pre><code>ex = re.compile(r"""( (\(?0\d{4}\)?)? # Area code ([^\S\r\n]*-*\.*)? # separator ((HERE)) (\(?\d{6}\)?) # Local number )""", re.VERBOSE) </code></pre> <p>See the <a href="https://regex101.com/r/TefKkm/3" rel="nofollow">regex demo</a>.</p> <p>Also, the <code>-</code> outside a character class does not require escaping.</p>
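A quick demonstration of the difference, assuming a number split across two lines as in the question's stray `\n` matches:

```python
import re

text = "01111\n222222"  # area code, newline, local number

# \s* can consume the newline, so one match spans both lines:
with_s = re.fullmatch(r"0\d{4}\s*\d{6}", text)

# [^\S\r\n]* ("whitespace that is not \r or \n") matches only
# horizontal whitespace, so the newline blocks the match:
horizontal_only = re.fullmatch(r"0\d{4}[^\S\r\n]*\d{6}", text)
```

Here `with_s` is a match object while `horizontal_only` is `None`, which is exactly why the character-class fix stops numbers on separate lines from being glued together.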
4
2016-10-13T09:22:54Z
[ "python", "regex" ]
If I terminate the process, the thread would also stop?
40,017,328
<p>I've got the following code:</p> <pre><code>#!/usr/bin/python3 import os import subprocess import time import threading class StartJar(threading.Thread): def run(self): os.system("java -jar &lt;nazwa_appki&gt;.jar") jarFileRun = StartJar current_location = subprocess.getoutput("pwd") while True: x = subprocess.getstatusoutput("git checkout master &amp;&amp; git reset --hard &amp;&amp; git fetch &amp;&amp; git pull") if x[0] is 0: os.system("git checkout master &amp;&amp; git reset --hard &amp;&amp; git fetch &amp;&amp; git pull") os.system("mvn clean package ~/your/path") try: process_pid = subprocess.check_output(['pgrep', '-f', 'tu_podaj_nazwe_procesu']).decode() except subprocess.CalledProcessError as e: print("pgrep failed because ({}):".format(e.returncode), e.output.decode()) else: try: os.kill(int(process_pid), 9) print('Process with ' + process_pid + ' PID was killed.') except ProcessLookupError as e: print("Process were old...") except ValueError as e: print("There is no such process!") os.system("cp ~/your/path" + ' ' + current_location) jarFileRun.start() else: print('No changes was made...') time.sleep(1800) </code></pre> <p>And I wonder: if I kill the process that my thread runs, will the thread close as well? If not, how do I terminate the thread so that I can run it once more with the new changes to the file I want to execute? I tried to find something on Google that stops a thread, but it didn't work for me when I added it to the first line of the while statement.</p>
0
2016-10-13T09:34:44Z
40,018,246
<p>If you terminate the process, all threads belonging to that process terminate as well. The process will keep running, however, if you terminate only one of its threads.</p>
0
2016-10-13T10:17:07Z
[ "python", "multithreading", "python-3.x", "python-3.4", "python-multithreading" ]
If I terminate the process, the thread would also stop?
40,017,328
<p>I've got the following code:</p> <pre><code>#!/usr/bin/python3 import os import subprocess import time import threading class StartJar(threading.Thread): def run(self): os.system("java -jar &lt;nazwa_appki&gt;.jar") jarFileRun = StartJar current_location = subprocess.getoutput("pwd") while True: x = subprocess.getstatusoutput("git checkout master &amp;&amp; git reset --hard &amp;&amp; git fetch &amp;&amp; git pull") if x[0] is 0: os.system("git checkout master &amp;&amp; git reset --hard &amp;&amp; git fetch &amp;&amp; git pull") os.system("mvn clean package ~/your/path") try: process_pid = subprocess.check_output(['pgrep', '-f', 'tu_podaj_nazwe_procesu']).decode() except subprocess.CalledProcessError as e: print("pgrep failed because ({}):".format(e.returncode), e.output.decode()) else: try: os.kill(int(process_pid), 9) print('Process with ' + process_pid + ' PID was killed.') except ProcessLookupError as e: print("Process were old...") except ValueError as e: print("There is no such process!") os.system("cp ~/your/path" + ' ' + current_location) jarFileRun.start() else: print('No changes was made...') time.sleep(1800) </code></pre> <p>And I wonder: if I kill the process that my thread runs, will the thread close as well? If not, how do I terminate the thread so that I can run it once more with the new changes to the file I want to execute? I tried to find something on Google that stops a thread, but it didn't work for me when I added it to the first line of the while statement.</p>
0
2016-10-13T09:34:44Z
40,018,361
<p>No, it will not terminate, because <code>daemon</code> is False by default, which means the thread doesn't stop when the main process finishes; it's totally independent. However, if you set the thread's <code>daemon</code> value to <code>True</code>, it will only run as long as the main process runs.</p> <p>In order to do so you can delete your <code>StartJar</code> class and define <code>jarFileRun</code> like this:</p> <pre><code>jarFileRun = threading.Thread(target=os.system, args=("java -jar &lt;nazwa_appki&gt;.jar",), daemon=True) </code></pre> <p>Or set <code>daemon</code> in an <code>__init__</code> definition inside the class.</p>
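A runnable sketch of the same idea, with a short `time.sleep` standing in for the real `java -jar` command so it works anywhere:

```python
import threading
import time

# daemon=True ties the thread's lifetime to the interpreter: it is
# killed automatically when the main program exits, instead of
# keeping the process alive.
jar_file_run = threading.Thread(target=time.sleep, args=(0.1,), daemon=True)
jar_file_run.start()
is_daemon = jar_file_run.daemon   # True
jar_file_run.join()               # here the target finishes normally
```

The `daemon=True` keyword argument to `Thread()` requires Python 3.3+, which matches the question's python-3.4 tag.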
-1
2016-10-13T10:23:41Z
[ "python", "multithreading", "python-3.x", "python-3.4", "python-multithreading" ]
Is numpy.polyfit with 1 degree of fitting, TLS or OLS?
40,017,357
<p>I have two timeseries x and y, both have the same length. Using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.polyfit.html" rel="nofollow">numpy.polyfit</a>, I fit a straight line through the data with:</p> <pre><code>numpy.polyfit(x,y,1) </code></pre> <p>Is this <a href="https://en.wikipedia.org/wiki/Total_least_squares" rel="nofollow">Total Least Squares (TLS)</a> or <a href="https://en.wikipedia.org/wiki/Ordinary_least_squares" rel="nofollow">Ordinary Least Squares (OLS)</a>? I want to fit a TLS in python, how can this be done?</p>
2
2016-10-13T09:35:46Z
40,017,928
<p>You can use <a href="http://docs.scipy.org/doc/scipy/reference/odr.html" rel="nofollow">scipy.odr</a>: it computes an orthogonal distance regression, which is equivalent to TLS for this problem. (And to answer the first part: <code>numpy.polyfit</code> performs ordinary least squares, minimizing the squared errors in y only.)</p>
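If adding scipy is not an option, a TLS line can also be fitted with plain numpy using the standard SVD construction (this is a generic TLS recipe, not something taken from scipy.odr): after centring the data, the right singular vector with the smallest singular value is the normal of the best-fit line.

```python
import numpy as np

def tls_line(x, y):
    # Total least squares line fit via SVD of the centred data.
    # Assumes the fitted line is not vertical (normal[1] != 0).
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    data = np.column_stack([x - x.mean(), y - y.mean()])
    normal = np.linalg.svd(data)[2][-1]   # smallest right singular vector
    slope = -normal[0] / normal[1]
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# On exactly collinear data, TLS and OLS give the same line y = 2x + 1.
slope, intercept = tls_line([0, 1, 2, 3], [1, 3, 5, 7])
```

On noisy data the TLS slope generally differs from `numpy.polyfit(x, y, 1)`, since TLS accounts for errors in both x and y rather than in y alone.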
1
2016-10-13T10:00:44Z
[ "python", "numpy", "scipy" ]
Passing parameter (directory) to %cd command in ipython notebook
40,017,360
<p>I'm trying to pass a parameter (directory) to a <code>%cd</code> command in an IPython notebook, as below:</p> <pre><code> rootdir = "D:\mydoc" %cd rootdir </code></pre> <p>but I get the following error:</p> <pre><code> [Error 2] The system cannot find the file specified: u'rootdir' D:\mydoc </code></pre> <p>when I'm doing</p> <pre><code> %cd D:\mydoc </code></pre> <p>This obviously works, but I want to be able to specify my working directories using parameters...</p> <p>Many thanks to anyone who can help me.</p> <p>Best Wishes</p>
0
2016-10-13T09:35:59Z
40,017,458
<p>You can use <code>$</code> to substitute the value of a Python variable into the magic command:</p> <pre><code>%cd $rootdir </code></pre>
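For reference, the plain-Python equivalent without IPython magics is `os.chdir` — shown here with a temporary directory standing in for the question's Windows path so it runs anywhere:

```python
import os
import tempfile

rootdir = tempfile.mkdtemp()  # stands in for r"D:\mydoc"
os.chdir(rootdir)             # plain-Python equivalent of %cd $rootdir
current = os.getcwd()
```

On Windows it is also safer to write the path as a raw string (`r"D:\mydoc"`) so the backslash is never interpreted as an escape sequence.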
1
2016-10-13T09:39:55Z
[ "python", "directory", "ipython", "parameter-passing", "jupyter-notebook" ]
xml ns0: prefix cant make the previous solutions work
40,017,379
<p>I know there are several questions already answered about this topic, but for some reason I can't make it work when I read the file.</p> <p>This is the code I have:</p> <pre><code>import xml.etree.ElementTree as etree import copy etree.register_namespace("","http://www.w3.org/2001/XMLSchema") etree.register_namespace("","http://www.w3.org/2001/XMLSchema-instance") #Extract the search set that's necessary tree=etree.ElementTree() tree.parse('Selection Sets.xml') root = tree.getroot() #QTO with one set name and properties as needed, same properties will be applied to the new elements itree=etree.ElementTree() itree.parse('New QTO one set.xml') iroot=itree.getroot() print (etree.tostring(iroot)) #----Extract the names to be used in the new sets catcher=[] temp=tree.findall('selectionsets/selectionset') for child in temp: catcher.append(str(child.get("name"))) # print (catcher) #----create new elements inside the QTO xml '''itemp=iroot.findall("Takeoff/Catalog") for isets in itemp: for icatch in catcher: col = copy.deepcopy(isets) col.set('Name','%s'%(icatch)) iroot.find('Takeoff/Catalog/Item').append(col)''' print (catcher[0]) otree = etree.ElementTree(iroot) otree.write('Set QTO.xml') </code></pre> <hr> <p>This is the original xml file structure:</p> <pre><code> &lt;?xml version="1.0" encoding="utf-8"?&gt; &lt;Takeoff xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://download.autodesk.com/us/navisworks/schemas/nw-TakeoffCatalog-10.0.xsd"&gt; &lt;Catalog&gt; &lt;Item Name="Column Location D" WBS="1" Transparency="0.3" Color="-3887691" LineThickness="0.1" CatalogId="f9f30bfc-6c43-4b7b-a335-5c5c093b4165"&gt; &lt;VariableCollection&gt; &lt;Variable Name="Length" Formula="=ModelLength" Units="Meter" /&gt; &lt;Variable Name="Width" Formula="=ModelWidth" Units="Meter" /&gt; &lt;Variable Name="Thickness" Formula="=ModelThickness" Units="Meter" /&gt; &lt;Variable Name="Height" Formula="=ModelHeight" Units="Meter" /&gt; &lt;Variable Name="Perimeter" 
Formula="=ModelPerimeter" Units="Meter" /&gt; &lt;Variable Name="Area" Formula="=ModelArea" Units="SquareMeter" /&gt; &lt;Variable Name="Volume" Formula="=ModelVolume" Units="CubicMeter" /&gt; &lt;Variable Name="Weight" Formula="=ModelWeight" Units="Kilogram" /&gt; &lt;Variable Name="Count" Formula="=1" /&gt; &lt;/VariableCollection&gt; &lt;/Item&gt; &lt;/Catalog&gt; &lt;ConfigFile&gt; &lt;Workbook xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://download.autodesk.com/us/navisworks/schemas/nw-TakeoffConfiguration-10.0.xsd" xmlns=""&gt; &lt;GlobalConfiguration&gt; &lt;ConfigureFileType&gt;ItemResources&lt;/ConfigureFileType&gt; &lt;UnitsSystem&gt;Metric&lt;/UnitsSystem&gt; &lt;Currency&gt; &lt;Name&gt;&lt;/Name&gt; &lt;Code&gt;&lt;/Code&gt; &lt;Symbol&gt;&lt;/Symbol&gt; &lt;/Currency&gt; &lt;Column Name="Object"&gt; &lt;Type&gt;String&lt;/Type&gt; &lt;Purpose&gt;Input&lt;/Purpose&gt; &lt;Formula varies="0"&gt;&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="0" group="any"&gt;&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="Description1"&gt; &lt;Type&gt;String&lt;/Type&gt; &lt;Purpose&gt;Input&lt;/Purpose&gt; &lt;Formula varies="0"&gt;&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="0" group="any"&gt;&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="Description2"&gt; &lt;Type&gt;String&lt;/Type&gt; &lt;Purpose&gt;Input&lt;/Purpose&gt; &lt;Formula varies="0"&gt;&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="0" group="any"&gt;&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="ModelLength"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;Input&lt;/Purpose&gt; &lt;Formula varies="0"&gt;&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="0" group="length"&gt;meter&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="ModelWidth"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;Input&lt;/Purpose&gt; &lt;Formula varies="0"&gt;&lt;/Formula&gt; 
&lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="0" group="length"&gt;meter&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="ModelThickness"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;Input&lt;/Purpose&gt; &lt;Formula varies="0"&gt;&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="0" group="length"&gt;meter&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="ModelHeight"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;Input&lt;/Purpose&gt; &lt;Formula varies="0"&gt;&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="0" group="length"&gt;meter&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="ModelPerimeter"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;Input&lt;/Purpose&gt; &lt;Formula varies="0"&gt;&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="0" group="length"&gt;meter&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="ModelArea"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;Input&lt;/Purpose&gt; &lt;Formula varies="0"&gt;&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="0" group="area"&gt;squaremeter&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="ModelVolume"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;Input&lt;/Purpose&gt; &lt;Formula varies="0"&gt;&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="0" group="volume"&gt;cubicmeter&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="ModelWeight"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;Input&lt;/Purpose&gt; &lt;Formula varies="0"&gt;&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="0" group="weight"&gt;kilogram&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="Length"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;Calculation&lt;/Purpose&gt; &lt;Formula varies="1"&gt;=ModelLength&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="1" group="length"&gt;meter&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="Width"&gt; 
&lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;Calculation&lt;/Purpose&gt; &lt;Formula varies="1"&gt;=ModelWidth&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="1" group="length"&gt;meter&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="Thickness"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;Calculation&lt;/Purpose&gt; &lt;Formula varies="1"&gt;=ModelThickness&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="1" group="length"&gt;meter&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="Height"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;Calculation&lt;/Purpose&gt; &lt;Formula varies="1"&gt;=ModelHeight&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="1" group="length"&gt;meter&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="Perimeter"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;Calculation&lt;/Purpose&gt; &lt;Formula varies="1"&gt;=ModelPerimeter&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="1" group="length"&gt;meter&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="Area"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;Calculation&lt;/Purpose&gt; &lt;Formula varies="1"&gt;=ModelArea&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="1" group="area"&gt;squaremeter&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="Volume"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;Calculation&lt;/Purpose&gt; &lt;Formula varies="1"&gt;=ModelVolume&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="1" group="volume"&gt;cubicmeter&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="Weight"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;Calculation&lt;/Purpose&gt; &lt;Formula varies="1"&gt;=ModelWeight&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="1" group="weight"&gt;kilogram&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="Count"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; 
&lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="0" group="any"&gt;count&lt;/Units&gt; &lt;/Column&gt; &lt;Column Name="PrimaryQuantity"&gt; &lt;Type&gt;Number&lt;/Type&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Value varies="1"&gt;&lt;/Value&gt; &lt;Units varies="1" group="any"&gt;&lt;/Units&gt; &lt;/Column&gt; &lt;/GlobalConfiguration&gt; &lt;Table Name="ObjectResource"&gt; &lt;ColumnRef Name="Object" /&gt; &lt;ColumnRef Name="Description1" /&gt; &lt;ColumnRef Name="Description2" /&gt; &lt;ColumnRef Name="ModelLength" /&gt; &lt;ColumnRef Name="ModelWidth" /&gt; &lt;ColumnRef Name="ModelThickness" /&gt; &lt;ColumnRef Name="ModelHeight" /&gt; &lt;ColumnRef Name="ModelPerimeter" /&gt; &lt;ColumnRef Name="ModelArea" /&gt; &lt;ColumnRef Name="ModelVolume" /&gt; &lt;ColumnRef Name="ModelWeight" /&gt; &lt;ColumnRef Name="Length"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Width"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Thickness"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Height"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Perimeter"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Area"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="area"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Volume"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="volume"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Weight"&gt; &lt;Formula 
varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="weight"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Count"&gt; &lt;Purpose&gt;Calculation&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="PrimaryQuantity"&gt; &lt;Purpose&gt;Calculation&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;/Table&gt; &lt;Table Name="StepResource"&gt; &lt;ColumnRef Name="Length"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Width"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Thickness"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Height"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Perimeter"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Area"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="area"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Volume"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="volume"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Weight"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="weight"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Count" /&gt; &lt;ColumnRef Name="PrimaryQuantity" /&gt; &lt;/Table&gt; &lt;Table Name="ObjectStep"&gt; &lt;ColumnRef Name="Object" /&gt; 
&lt;ColumnRef Name="ModelLength" /&gt; &lt;ColumnRef Name="ModelWidth" /&gt; &lt;ColumnRef Name="ModelThickness" /&gt; &lt;ColumnRef Name="ModelHeight" /&gt; &lt;ColumnRef Name="ModelPerimeter" /&gt; &lt;ColumnRef Name="ModelArea" /&gt; &lt;ColumnRef Name="ModelVolume" /&gt; &lt;ColumnRef Name="ModelWeight" /&gt; &lt;ColumnRef Name="Length"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Width"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Thickness"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Height"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Perimeter"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Area"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="area"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Volume"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="volume"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Weight"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="weight"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;/Table&gt; &lt;Table Name="Step"&gt; &lt;ColumnRef Name="Length"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Width"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Thickness"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Height"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Perimeter"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; 
&lt;ColumnRef Name="Area"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Volume"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Weight"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;/Table&gt; &lt;Table Name="Object"&gt; &lt;ColumnRef Name="Object" /&gt; &lt;ColumnRef Name="Description1" /&gt; &lt;ColumnRef Name="Description2" /&gt; &lt;ColumnRef Name="ModelLength" /&gt; &lt;ColumnRef Name="ModelWidth" /&gt; &lt;ColumnRef Name="ModelThickness" /&gt; &lt;ColumnRef Name="ModelHeight" /&gt; &lt;ColumnRef Name="ModelPerimeter" /&gt; &lt;ColumnRef Name="ModelArea" /&gt; &lt;ColumnRef Name="ModelVolume" /&gt; &lt;ColumnRef Name="ModelWeight" /&gt; &lt;ColumnRef Name="Length"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Width"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Thickness"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Height"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Perimeter"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="length"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Area"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="area"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Volume"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="volume"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Weight"&gt; &lt;Formula varies="1"&gt;&lt;/Formula&gt; &lt;Units varies="1" group="weight"&gt;&lt;/Units&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Count"&gt; &lt;Purpose&gt;Calculation&lt;/Purpose&gt; 
&lt;/ColumnRef&gt; &lt;ColumnRef Name="PrimaryQuantity"&gt; &lt;Purpose&gt;Calculation&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;/Table&gt; &lt;Table Name="Item"&gt; &lt;ColumnRef Name="Length"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Width"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Thickness"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Height"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Perimeter"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Area"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Volume"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Weight"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Count"&gt; &lt;Formula varies="1"&gt;=1&lt;/Formula&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="PrimaryQuantity" /&gt; &lt;/Table&gt; &lt;Table Name="ItemGroup" /&gt; &lt;Table Name="Resource"&gt; &lt;ColumnRef Name="Length"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Width"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Thickness"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Height"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Perimeter"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Area"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Volume"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Weight"&gt; &lt;Purpose&gt;RollUp&lt;/Purpose&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="Count"&gt; &lt;Formula varies="1"&gt;=1&lt;/Formula&gt; &lt;/ColumnRef&gt; &lt;ColumnRef Name="PrimaryQuantity" /&gt; &lt;/Table&gt; &lt;Table Name="ResourceGroup" /&gt; 
&lt;/Workbook&gt; &lt;/ConfigFile&gt; &lt;/Takeoff&gt; </code></pre> <hr> <p>Part of the result so far</p> <pre><code>&lt;ns0:Takeoff xmlns="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns0="http://download.autodesk.com/us/navisworks/schemas/nw-TakeoffCatalog-10.0.xsd"&gt; &lt;ns0:Catalog&gt; &lt;ns0:Item CatalogId="f9f30bfc-6c43-4b7b-a335-5c5c093b4165" Color="-3887691" LineThickness="0.1" Name="Column Location D" Transparency="0.3" WBS="1"&gt; &lt;ns0:VariableCollection&gt; &lt;ns0:Variable Formula="=ModelLength" Name="Length" Units="Meter" /&gt; &lt;ns0:Variable Formula="=ModelWidth" Name="Width" Units="Meter" /&gt; &lt;ns0:Variable Formula="=ModelThickness" Name="Thickness" Units="Meter" /&gt; &lt;ns0:Variable Formula="=ModelHeight" Name="Height" Units="Meter" /&gt; &lt;ns0:Variable Formula="=ModelPerimeter" Name="Perimeter" Units="Meter" /&gt; &lt;ns0:Variable Formula="=ModelArea" Name="Area" Units="SquareMeter" /&gt; &lt;ns0:Variable Formula="=ModelVolume" Name="Volume" Units="CubicMeter" /&gt; &lt;ns0:Variable Formula="=ModelWeight" Name="Weight" Units="Kilogram" /&gt; &lt;ns0:Variable Formula="=1" Name="Count" /&gt; &lt;/ns0:VariableCollection&gt; &lt;/ns0:Item&gt; &lt;/ns0:Catalog&gt; &lt;ns0:ConfigFile&gt; &lt;Workbook noNamespaceSchemaLocation="http://download.autodesk.com/us/navisworks/schemas/nw-TakeoffConfiguration-10.0.xsd"&gt; &lt;/Workbook&gt; &lt;/ns0:ConfigFile&gt; &lt;/ns0:Takeoff&gt; </code></pre>
1
2016-10-13T09:36:54Z
40,040,868
<p>I understand the problem now, you have unwanted ns0 prefixes in your output. In order to avoid them you may need to register all the namespaces with their corresponding prefixes:</p> <pre><code>etree.register_namespace("xs","http://www.w3.org/2001/XMLSchema") etree.register_namespace("xsi","http://www.w3.org/2001/XMLSchema-instance") etree.register_namespace("","http://download.autodesk.com/us/navisworks/schemas/nw-TakeoffCatalog-10.0.xsd") </code></pre>
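As a minimal, self-contained illustration of the same idea (the namespace URI below is a made-up stand-in, not the Navisworks one), registering the empty prefix before serializing makes ElementTree emit a default `xmlns` declaration instead of generated `ns0:` prefixes:

```python
import xml.etree.ElementTree as etree

NS = "http://example.com/takeoff"  # hypothetical namespace URI

# Register the empty prefix BEFORE serializing, so the namespace is
# written as a default xmlns declaration instead of a generated ns0:.
etree.register_namespace("", NS)

root = etree.Element("{%s}Takeoff" % NS)
etree.SubElement(root, "{%s}Catalog" % NS)

serialized = etree.tostring(root).decode()
print(serialized)
```

Note that `register_namespace` mutates module-global state, so one call before `tostring()`/`write()` affects all subsequent serialization.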
0
2016-10-14T10:24:38Z
[ "python", "xml", "elementtree" ]
Saving Django 400 bad requests for analyzing purposes
40,017,391
<p>I have a Django-based API.</p> <p>Sometimes my customers inform me that some of the their requests return 400 (bad request) even that they're not supposed to.</p> <p>I thought about a good way to handle and debug that exact problem, by saving the failed-requests, with all of the headers (such as the Access Token) and a timestamp, so when I'll see problematic pattern I'll be able to debug it and find the reason.</p> <p>My question regarding to my method is - how can I "trigger" a function that will collect the request with all of its headers everytime that I'm returning a response with 400 status?</p>
2
2016-10-13T09:37:20Z
40,019,364
<p>If the 400 response is generated by Django, you can use a <a href="https://docs.djangoproject.com/en/1.10/topics/http/middleware/" rel="nofollow">custom middleware</a> (beware: the middleware API changed in 1.10; for Django &lt;= 1.9 the doc is <a href="https://docs.djangoproject.com/en/1.9/topics/http/middleware/" rel="nofollow">here</a>) and the <a href="https://docs.python.org/2/library/logging.html" rel="nofollow"><code>logging</code> module</a> - just make sure your logging is <a href="https://docs.djangoproject.com/en/1.10/topics/logging/" rel="nofollow">correctly configured</a> in your Django settings.</p> <p>If the 400 response happens at a higher level, then you'll have to check the relevant documentation according to how your project is deployed.</p> <p>Also, FYI, <code>django.core.WSGIHandler</code> yields a 400 response when a <code>UnicodeDecodeError</code> happens while instantiating the <code>HttpRequest</code> object - in this case AFAICT the middlewares won't even be invoked. You should already find traces of these occurrences in the logger configured for <code>django.requests</code> (cf <a href="https://docs.djangoproject.com/en/1.10/topics/logging/#django-request" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/logging/#django-request</a>)</p>
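A minimal sketch of such a middleware (new-style, Django &gt;= 1.10; the logger name and function names are illustrative, not part of any Django API) that logs the headers of every 400 response. It is plain Python — the real request/response objects are supplied by Django at runtime:

```python
import logging

logger = logging.getLogger("bad_requests")

def log_bad_requests(get_response):
    # New-style (Django >= 1.10) middleware factory: wraps the view
    # chain and logs request details whenever a 400 comes back.
    def middleware(request):
        response = get_response(request)
        if response.status_code == 400:
            logger.warning(
                "400 Bad Request: %s %s headers=%r",
                request.method, request.path, dict(request.META),
            )
        return response
    return middleware
```

You would add the dotted path of `log_bad_requests` to `MIDDLEWARE` in settings. As noted above, 400s raised before the middleware stack runs (e.g. the `UnicodeDecodeError` case) will not pass through it.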
1
2016-10-13T11:09:41Z
[ "python", "django" ]
Moviepy - TypeError: Can't convert 'bytes' object to str implicitly
40,017,394
<pre><code>from moviepy.editor import * clip = VideoFileClip("vid.mov") clip.write_videofile("movie.mp4") </code></pre> <p>^ Gives the error </p> <pre><code>TypeError: Can't convert 'bytes' object to str implicitly. </code></pre> <p>It prints "Building video movie.mp4" and "Writing audio in movieTEMP_MPY_wvf_snd.mp3" normally.</p> <p>I am using Python 3.2 with Raspbian Wheezy. What is wrong? Surely it should be a simple program...</p> <p>EDIT: If you add audio=False to the write_videofile parameters, it works fine. The problem is somewhere in the audio.</p>
0
2016-10-13T09:37:28Z
40,024,390
<p>As per <a href="https://github.com/Zulko/moviepy/issues/330" rel="nofollow">this</a> answer, the issue was that there is an error in the moviepy script which generates an incorrect error output. The correct output indicates that I had not installed the libmp3lame codec when I installed ffmpeg, so it could not write audio. See <a href="http://stackoverflow.com/questions/40021623/installing-libmp3lame-to-work-with-ffmpeg-on-raspberry-pi/40022016#40022016">this question</a> for details on how to correctly install ffmpeg with the aforementioned codec.</p>
0
2016-10-13T14:52:37Z
[ "python", "raspberry-pi", "raspbian", "moviepy" ]
What am I doing wrong with this negative lookahead? Filtering out certain numbers in a regex
40,017,590
<p>I have a big piece of code produced by a software tool. Each instruction has an identifier number and I have to modify only certain numbers:</p> <pre><code>grr.add(new GenericRuleResult(RULEX_RULES.get(String.valueOf(11)), new Result(0,Boolean.FALSE,"ROSSO"))); grr.add(new GenericRuleResult(RULEX_RULES.get(String.valueOf(12)), new Result(0,Boolean.FALSE,"£££"))); etc... </code></pre> <p>Now, I am using SublimeText3 to rapidly change all of the wrong lines with this regex:</p> <pre><code>Of\((11|14|19|20|21|27|28|31)\)\), new Result\( </code></pre> <p>This regex above allowed me to put "ROSSO" (red) in each line containing those numbers. Now I have to put "VERDE" (green) in the remaining lines. My idea was to add a <code>?!</code> in the regex to look for all of the lines <em>NOT CONTAINING</em> those numbers. </p> <p>From the website <a href="https://regex101.com/" rel="nofollow">Regex101</a>, I get this description of the regex:</p> <pre><code>Of matches the characters Of literally (case sensitive) \( matches the character ( literally (case sensitive) Negative Lookahead (?!11|14|19|20|21|27|28|31) Assert that the Regex below does not match 1st Alternative 11 etc... </code></pre> <p>So why am I not finding the lines containing 12, 13, 14 etc?</p> <p>Edit: the actual regex: <code>Of\((?!11|14|19|20|21|27|28|31)\)\), new Result\(</code></p>
0
2016-10-13T09:45:54Z
40,018,064
<p>Your problem is that you are assuming a negative look ahead changes the cursor position, it does not. </p> <p>That is, a negative lookahead of the form <code>(?!xy)</code> merely verifies that the <em>next</em> two characters are not <code>xy</code>. It does not then swallow two characters from the text. As its name suggests, it merely <em>looks ahead</em> from where you are, without moving ahead!</p> <p>Thus, if you wish to match further things beyond that assertion you must:</p> <ul> <li>negatively assert it is not <code>xy</code>;</li> <li>then consume the two characters for whatever they are;</li> <li>then continue your match.</li> </ul> <p>So try something like:</p> <pre><code>Of\((?!11|14|19|20|21|27|28|31)..\)\), new Result\( </code></pre>
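A quick demonstration of the difference, using the question's pattern and a sample line: the assertion-only pattern fails on a line with an unlisted number precisely because the lookahead leaves the cursor sitting before the digits.

```python
import re

line = 'RULEX_RULES.get(String.valueOf(12)), new Result('

# Assertion only: after (?!...) the cursor is still before "12",
# so \) immediately tries to match the digit "1" and fails.
assertion_only = re.compile(r'Of\((?!11|14|19|20|21|27|28|31)\)\), new Result\(')

# Assertion plus ".." to actually consume the two digits.
with_consume = re.compile(r'Of\((?!11|14|19|20|21|27|28|31)..\)\), new Result\(')

print(assertion_only.search(line))  # None
print(with_consume.search(line))    # a match object
```

(The `..` assumes all the identifier numbers are exactly two digits, as they are in the question.)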
1
2016-10-13T10:07:22Z
[ "python", "regex", "sublimetext3" ]
Getting the correct dimensions after using np.nonzero on array
40,017,986
<p>When using np.nonzero on a 1d array: </p> <pre><code>l = np.array([0, 1, 2, 3, 0]) np.nonzero(l) (array([1, 2, 3], dtype=int64),) </code></pre> <p>How do I get the index information as a 1d array and not as a tuple in the most efficient way? </p>
1
2016-10-13T10:03:12Z
40,018,005
<p><code>np.nonzero</code> always returns the index information as a tuple of arrays (one per dimension). However, if you are looking for an alternative, use <code>np.argwhere</code>:</p> <pre><code>import numpy as np l = np.array([0, 1, 2, 3, 0]) non_zero = np.argwhere(l!=0) </code></pre> <p>However, this returns a column vector of shape <code>(N, 1)</code>. To flatten it into a 1d array:</p> <pre><code>non_zero = non_zero.T[0] </code></pre>
0
2016-10-13T10:04:14Z
[ "python", "arrays", "numpy", "shape", "zero" ]
Getting the correct dimensions after using np.nonzero on array
40,017,986
<p>When using np.nonzero on a 1d array: </p> <pre><code>l = np.array([0, 1, 2, 3, 0]) np.nonzero(l) (array([1, 2, 3], dtype=int64),) </code></pre> <p>How do I get the index information as a 1d array and not as a tuple in the most efficient way? </p>
1
2016-10-13T10:03:12Z
40,018,540
<p>You are looking for <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.flatnonzero.html" rel="nofollow"><code>np.flatnonzero</code></a> for use on a <code>1D</code> array -</p> <pre><code>In [5]: l Out[5]: array([0, 1, 2, 3, 0]) In [6]: np.flatnonzero(l) Out[6]: array([1, 2, 3]) </code></pre>
0
2016-10-13T10:32:12Z
[ "python", "arrays", "numpy", "shape", "zero" ]
Getting the correct dimensions after using np.nonzero on array
40,017,986
<p>When using np.nonzero on a 1d array: </p> <pre><code>l = np.array([0, 1, 2, 3, 0]) np.nonzero(l) (array([1, 2, 3], dtype=int64),) </code></pre> <p>How do I get the index information as a 1d array and not as a tuple in the most efficient way? </p>
1
2016-10-13T10:03:12Z
40,019,626
<p>Just index the tuple:</p> <pre><code>Idn = np.nonzero(l)[0] </code></pre> <p>This is effectively what <code>flatnonzero</code> does internally:</p> <pre><code>return a.ravel().nonzero()[0] </code></pre>
0
2016-10-13T11:23:10Z
[ "python", "arrays", "numpy", "shape", "zero" ]
prettify adding extra lines in xml
40,018,043
<p>I'm using Prettify to make my XML file readable. I am adding some new info into an existing XML file, but when I save it to a file I get extra lines in between the lines. Is there a way of removing these lines? Below is the code I'm using: </p> <pre><code>import xml.etree.ElementTree as xml import xml.dom.minidom as minidom from lxml import etree def prettify(elem): rough_string = xml.tostring(elem, 'utf-8') reparsed = minidom.parseString(rough_string) return reparsed.toprettyxml(indent="\t") cid = "[123,123,123,123,123]" doc = xml.parse('test.xml') root = doc.getroot() root.getchildren().index(root.find('card')) e = xml.Element('card') e.set('id', cid) n = xml.SubElement(e, "name") n.text = "FOLDER" r = xml.SubElement(e, "red") r.text = "FILE.AVI" g = xml.SubElement(e, "green") g.text = "FILE.AVI" b = xml.SubElement(e, "blue") b.text = "FILE.AVI" root.insert(0, e) doc2 = prettify(root) with open("testnew.xml", "w") as f: f.write(doc2) </code></pre> <p>Below is what I get in the file:</p> <pre><code>&lt;data&gt; &lt;card id="[123,123,123,123,123]"&gt; &lt;name&gt;FOLDER&lt;/name&gt; &lt;red&gt;FILE.AVI&lt;/red&gt; &lt;green&gt;FILE.AVI&lt;/green&gt; &lt;blue&gt;FILE.AVI&lt;/blue&gt; &lt;/card&gt; &lt;card id="[000,000,000,000,000]"&gt; &lt;name&gt;Colours&lt;/name&gt; &lt;red&gt;/media/usb/cow.avi&lt;/red&gt; &lt;green&gt;/media/usb/pig.avi&lt;/green&gt; &lt;blue&gt;/media/usb/cat.avi&lt;/blue&gt; &lt;/card&gt; &lt;/data&gt; </code></pre> <p>The input file "test.xml" looks like: </p> <pre><code>&lt;data&gt; &lt;card id="[000,000,000,000,000]"&gt; &lt;name&gt;Colours&lt;/name&gt; &lt;red&gt;/media/usb/cow.avi&lt;/red&gt; &lt;green&gt;/media/usb/pig.avi&lt;/green&gt; &lt;blue&gt;/media/usb/cat.avi&lt;/blue&gt; &lt;/card&gt; &lt;/data&gt; </code></pre>
0
2016-10-13T10:06:21Z
40,019,783
<p>The new content added is being printed fine. Removing any "prettification" of the existing text solves the issue </p> <p>Add </p> <pre><code>for elem in root.iter('*'): if elem == e: print "Added XML node does not need to be stripped" continue if elem.text is not None: elem.text = elem.text.strip() if elem.tail is not None: elem.tail = elem.tail.strip() </code></pre> <p>before calling</p> <pre><code> doc2 = prettify(root) </code></pre> <p>Related answer: <a href="http://stackoverflow.com/questions/19288469/python-how-to-strip-white-spaces-from-xml-text-nodes">Python how to strip white-spaces from xml text nodes</a></p>
0
2016-10-13T11:31:22Z
[ "python", "xml", "prettify" ]
Binning of data along one axis in numpy
40,018,125
<p>I have a large two dimensional array <code>arr</code> which I would like to bin over the second axis using numpy. Because <code>np.histogram</code> flattens the array I'm currently using a for loop:</p> <pre><code>import numpy as np arr = np.random.randn(100, 100) nbins = 10 binned = np.empty((arr.shape[0], nbins)) for i in range(arr.shape[0]): binned[i,:] = np.histogram(arr[i,:], bins=nbins)[0] </code></pre> <p>I feel like there should be a more direct and more efficient way to do that within numpy but I failed to find one.</p>
2
2016-10-13T10:10:57Z
40,018,765
<p>You could use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.apply_along_axis.html" rel="nofollow"><code>np.apply_along_axis</code></a>:</p> <pre><code>x = np.array([range(20), range(1, 21), range(2, 22)]) nbins = 2 &gt;&gt;&gt; np.apply_along_axis(lambda a: np.histogram(a, bins=nbins)[0], 1, x) array([[10, 10], [10, 10], [10, 10]]) </code></pre> <p>The main advantage (if any) is that it's slightly shorter, but I wouldn't expect much of a performance gain. It's possibly marginally more efficient in the assembly of the per-row results.</p>
1
2016-10-13T10:42:12Z
[ "python", "numpy", "histogram", "binning" ]
Binning of data along one axis in numpy
40,018,125
<p>I have a large two dimensional array <code>arr</code> which I would like to bin over the second axis using numpy. Because <code>np.histogram</code> flattens the array I'm currently using a for loop:</p> <pre><code>import numpy as np arr = np.random.randn(100, 100) nbins = 10 binned = np.empty((arr.shape[0], nbins)) for i in range(arr.shape[0]): binned[i,:] = np.histogram(arr[i,:], bins=nbins)[0] </code></pre> <p>I feel like there should be a more direct and more efficient way to do that within numpy but I failed to find one.</p>
2
2016-10-13T10:10:57Z
40,059,479
<p>You can use <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.histogramdd.html#numpy.histogramdd" rel="nofollow">numpy.histogramdd</a>, which is specifically meant for multidimensional data like this. </p>
0
2016-10-15T13:16:40Z
[ "python", "numpy", "histogram", "binning" ]
Run function after t time
40,018,172
<p>I have a problem: there is a total time <code>t_total</code>. After some time within <code>t_total</code> I want to run a function, while the timer continues counting up (or down) to the end of <code>t_total</code>. It looks like <code>---t1---t2--</code>: when the time equals t1, run one function, and at t2 run another function. I tried to make a count function:</p> <pre><code>for t in range(t_total): t = t + 1 time.sleep(1) if t == t1: function1() </code></pre> <p>but with that code, the count stops while function1 is running. Is there any way to keep the count running so that function2 is also reached?</p>
0
2016-10-13T10:13:03Z
40,018,239
<p>Start a new timer inside function1, beginning at time t. You can pass the values of t, t1 and t2 into function1 and continue counting from there.</p>
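One way to sketch this idea with the standard library (the function names and times are placeholders from the question) is `threading.Timer`, which fires each callback after its delay while the main count keeps running:

```python
import threading
import time

events = []  # records what fired, for demonstration

def function1():
    events.append("function1")

def function2():
    events.append("function2")

t1, t2, t_total = 1, 2, 3  # seconds, placeholder values

# Schedule both callbacks up front; each Timer runs in its own
# background thread, so the counting loop below is never blocked.
threading.Timer(t1, function1).start()
threading.Timer(t2, function2).start()

for t in range(1, t_total + 1):
    time.sleep(1)  # the count keeps running regardless of the timers
```

By the time the loop finishes at `t_total`, both timers have fired without ever pausing the count.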
0
2016-10-13T10:16:42Z
[ "python" ]
Run function after t time
40,018,172
<p>I have a problem: there is a total time <code>t_total</code>. After some time within <code>t_total</code> I want to run a function, while the timer continues counting up (or down) to the end of <code>t_total</code>. It looks like <code>---t1---t2--</code>: when the time equals t1, run one function, and at t2 run another function. I tried to make a count function:</p> <pre><code>for t in range(t_total): t = t + 1 time.sleep(1) if t == t1: function1() </code></pre> <p>but with that code, the count stops while function1 is running. Is there any way to keep the count running so that function2 is also reached?</p>
0
2016-10-13T10:13:03Z
40,018,369
<p>You can do this using a <a href="https://docs.python.org/2/library/threading.html" rel="nofollow">thread</a>:</p> <pre><code>import threading for t in range(t_total): t = t + 1 time.sleep(1) if t == t1: threading.Thread(target=function1).start() </code></pre>
4
2016-10-13T10:24:18Z
[ "python" ]
Python How to open an URL and get source code at the same time?
40,018,348
<p>What I have tried is the following:</p> <p>1)</p> <pre><code>response = urllib2.urlopen(url) html = response.read() </code></pre> <p>This way, I can't open the url in a browser.</p> <p>2)</p> <pre><code>webbrowser.open(url) </code></pre> <p>This way, I can't get the source code of the url.</p> <p>So, how can I open a URL and get its source code at the same time?</p> <p>Thanks for your help.</p>
0
2016-10-13T10:22:59Z
40,018,429
<p>Have a look at BeautifulSoup: <a href="https://www.crummy.com/software/BeautifulSoup/" rel="nofollow">https://www.crummy.com/software/BeautifulSoup/</a></p> <p>You can request a website and then read the HTML source code from it:</p> <pre><code>import requests from bs4 import BeautifulSoup r = requests.get(YourURL) soup = BeautifulSoup(r.content) print soup.prettify() </code></pre> <p>If you need content rendered by JavaScript, look into headless browsers.</p>
1
2016-10-13T10:26:37Z
[ "python" ]
Converting a string into list of desired tokens using python
40,018,391
<p>I have ingredients for thousands of products for example:</p> <pre><code>Ingredient = 'Beef stock (beef bones, water, onion, carrot, beef meat, parsnip, thyme, parsley, clove, black pepper, bay leaf), low lactose cream (28%), onion, mustard, modified maize starch,tomato puree, modified potato starch, butter sugar, salt (0,8%), burnt sugar, blackcurrant, peppercorns (black, pink, green, all spice, white) 0,4%.' </code></pre> <p>I want this ingredient in the form of a list like the following:</p> <pre><code>listOfIngredients = ['Beef Stock', 'low lactose cream', 'onion', 'mustard', 'modified maize starch','tomato puree', 'modified potato starch', 'butter sugar', 'salt', 'burnt sugar', 'blackcurrant', 'peppercorns'] </code></pre> <p>So in the listOfIngredients I do not have any explanations of the product in percentage or even further products that one ingredient itself contains. Regex is a good way of doing this but I am not good at making regex. Can someone help me in making regex to get the desired output. Thanks in advance. </p>
1
2016-10-13T10:25:09Z
40,019,132
<p>You might try two approaches.</p> <p>The first one is to remove all <code>(...)</code> substrings and anything that is not <code>,</code> after (that is not followed with non-word boundary).</p> <pre><code>\s*\([^()]*\)[^,]*(?:,\b[^,]*)* </code></pre> <p>See the <a href="https://regex101.com/r/XvpB0t/1" rel="nofollow">regex demo</a></p> <p><strong>Details</strong>:</p> <ul> <li><code>\s*</code> - 0+ whitespaces</li> <li><code>\([^()]*\)</code> - a <code>(...)</code> substring having no <code>(</code> and <code>)</code> inside: <ul> <li><code>\(</code> - a literal <code>(</code></li> <li><code>[^()]*</code> - 0+ chars other than <code>(</code> and <code>)</code> (a <code>[^...]</code> is a <em>negated character class</em>)</li> </ul></li> <li><code>[^,]*</code> - 0+ chars other than <code>,</code></li> <li><code>(?:,\b[^,]*)*</code> - zero or more sequences of: <ul> <li><code>,\b</code> - a comma that is followed with a letter/digit/underscore</li> <li><code>[^,]*</code> - 0+ chars other than <code>,</code>.</li> </ul></li> </ul> <p>These matches are removed, and then <code>,\s*</code> regex is used to split the string with a comma and 0+ whitespaces to get the final result.</p> <p>The second one is based on matching and <em>capturing</em> words consisting of letters (and <code>_</code>) only, and just matching <code>(...)</code> substrings.</p> <pre><code>\([^()]*\)|([^\W\d]+(?:\s+[^\W\d]+)*) </code></pre> <p>See the <a href="http://%5C([%5E()]*%5C)%7C([%5E%5CW%5Cd]+(?:%5Cs%20[%5E%5CW%5Cd]%20)*)" rel="nofollow">second regex demo</a></p> <p><strong>Details</strong>:</p> <ul> <li><code>\([^()]*\)</code> - a <code>(...)</code> substring having no <code>(</code> and <code>)</code> inside</li> <li><code>|</code> - or </li> <li><code>([^\W\d]+(?:\s+[^\W\d]+)*)</code> - Group 1 capturing: <ul> <li><code>[^\W\d]+</code> - 1+ letters or underscores (you may add <code>_</code> after <code>\d</code> to exclude underscores)</li> <li><code>(?:\s+[^\W\d]+)*</code> - 0+ 
sequences of: <ul> <li><code>\s+</code> - 1 or more whitespaces</li> <li><code>[^\W\d]+</code> - 1+ letters or underscores</li> </ul></li> </ul></li> </ul> <p>Both return the same results for the current string, but you may want to adjust it in future.</p> <p>See <a href="https://ideone.com/KJdUOb" rel="nofollow">Python demo</a>:</p> <pre><code>import re Ingredient = 'Beef stock (beef bones, water, onion, carrot, beef meat, parsnip, thyme, parsley, clove, black pepper, bay leaf), low lactose cream (28%), onion, mustard, modified maize starch,tomato puree, modified potato starch, butter sugar, salt (0,8%), burnt sugar, blackcurrant, peppercorns (black, pink, green, all spice, white) 0,4%.' res = re.sub(r'\s*\([^()]*\)[^,]*(?:,\b[^,]*)*', "", Ingredient) print(re.split(r',\s*', res)) vals = re.findall(r'\([^()]*\)|([^\W\d]+(?:\s+[^\W\d]+)*)', Ingredient) vals = [x for x in vals if x] print(vals) </code></pre>
1
2016-10-13T10:59:25Z
[ "python", "regex" ]
list() uses more memory than list comprehension
40,018,398
<p>So I was playing with <code>list</code> objects and found a strange thing: if a <code>list</code> is created with <code>list()</code>, it uses more memory than a list comprehension. I'm using Python 3.5.2</p> <pre><code>In [1]: import sys In [2]: a = list(range(100)) In [3]: sys.getsizeof(a) Out[3]: 1008 In [4]: b = [i for i in range(100)] In [5]: sys.getsizeof(b) Out[5]: 912 In [6]: type(a) == type(b) Out[6]: True In [7]: a == b Out[7]: True In [8]: sys.getsizeof(list(b)) Out[8]: 1008 </code></pre> <p>From the <a href="https://docs.python.org/3.5/library/stdtypes.html#list">docs</a>:</p> <blockquote> <p>Lists may be constructed in several ways:</p> <ul> <li>Using a pair of square brackets to denote the empty list: <code>[]</code></li> <li>Using square brackets, separating items with commas: <code>[a]</code>, <code>[a, b, c]</code></li> <li>Using a list comprehension: <code>[x for x in iterable]</code></li> <li>Using the type constructor: <code>list()</code> or <code>list(iterable)</code></li> </ul> </blockquote> <p>But it seems that using <code>list()</code> uses more memory.</p> <p>And the bigger the <code>list</code> is, the more the gap increases.</p> <p><a href="https://i.stack.imgur.com/VVHJL.png"><img src="https://i.stack.imgur.com/VVHJL.png" alt="Difference in memory"></a></p> <p>Why does this happen?</p> <p><strong>UPDATE #1</strong></p> <p>Test with Python 3.6.0b2:</p> <pre><code>Python 3.6.0b2 (default, Oct 11 2016, 11:52:53) [GCC 5.4.0 20160609] on linux Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import sys &gt;&gt;&gt; sys.getsizeof(list(range(100))) 1008 &gt;&gt;&gt; sys.getsizeof([i for i in range(100)]) 912 </code></pre> <p><strong>UPDATE #2</strong></p> <p>Test with Python 2.7.12:</p> <pre><code>Python 2.7.12 (default, Jul 1 2016, 15:12:24) [GCC 5.4.0 20160609] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
&gt;&gt;&gt; import sys &gt;&gt;&gt; sys.getsizeof(list(xrange(100))) 1016 &gt;&gt;&gt; sys.getsizeof([i for i in xrange(100)]) 920 </code></pre>
54
2016-10-13T10:25:25Z
40,018,719
<p>I think you're seeing over-allocation patterns; this is a <a href="https://github.com/python/cpython/blob/3.5/Objects/listobject.c#L42" rel="nofollow">sample from the source</a>:</p> <pre><code>/* This over-allocates proportional to the list size, making room * for additional growth. The over-allocation is mild, but is * enough to give linear-time amortized behavior over a long * sequence of appends() in the presence of a poorly-performing * system realloc(). * The growth pattern is: 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ... */ new_allocated = (newsize &gt;&gt; 3) + (newsize &lt; 9 ? 3 : 6); </code></pre> <hr> <p>Printing the sizes of list comprehensions of lengths 0-88, you can see the pattern matches:</p> <pre><code># create comprehensions for sizes 0-88 comprehensions = [sys.getsizeof([1 for _ in range(l)]) for l in range(90)] # only take those that resulted in growth compared to previous length steps = zip(comprehensions, comprehensions[1:]) growths = [x for x in list(enumerate(steps)) if x[1][0] != x[1][1]] # print the results: for growth in growths: print(growth) </code></pre> <p>Results (format is <code>(list length, (old total size, new total size))</code>):</p> <pre><code>(0, (64, 96)) (4, (96, 128)) (8, (128, 192)) (16, (192, 264)) (25, (264, 344)) (35, (344, 432)) (46, (432, 528)) (58, (528, 640)) (72, (640, 768)) (88, (768, 912)) </code></pre> <hr> <p>The over-allocation is done for performance reasons, allowing lists to grow without allocating more memory on every growth (better <a href="https://en.wikipedia.org/wiki/Amortized_analysis" rel="nofollow">amortized</a> performance).</p> <p>A probable reason for the difference with list comprehension is that a list comprehension cannot deterministically calculate the size of the generated list, but <code>list()</code> can. 
This means comprehensions will continuously grow the list as it fills it using over-allocation until finally filling it.</p> <p>It is possible that it will not grow the over-allocation buffer with unused allocated nodes once it's done (in fact, in most cases it won't; that would defeat the over-allocation purpose).</p> <p><code>list()</code>, however, can add some buffer no matter the list size since it knows the final list size in advance.</p> <hr> <p>More backing evidence, also from the source, is that we see <a href="https://github.com/python/cpython/blob/3.5/Python/compile.c#L3374" rel="nofollow">list comprehensions invoking <code>LIST_APPEND</code></a>, which indicates usage of <code>list.resize</code>, which in turn indicates consuming the pre-allocation buffer without knowing how much of it will be filled. This is consistent with the behavior you're seeing.</p> <hr> <p>To conclude, <code>list()</code> will pre-allocate more nodes as a function of the list size
41
2016-10-13T10:40:13Z
[ "python", "list", "list-comprehension" ]
list() uses more memory than list comprehension
40,018,398
<p>So I was playing with <code>list</code> objects and found a strange thing: if a <code>list</code> is created with <code>list()</code>, it uses more memory than one built with a list comprehension. I'm using Python 3.5.2.</p> <pre><code>In [1]: import sys
In [2]: a = list(range(100))
In [3]: sys.getsizeof(a)
Out[3]: 1008
In [4]: b = [i for i in range(100)]
In [5]: sys.getsizeof(b)
Out[5]: 912
In [6]: type(a) == type(b)
Out[6]: True
In [7]: a == b
Out[7]: True
In [8]: sys.getsizeof(list(b))
Out[8]: 1008
</code></pre> <p>From the <a href="https://docs.python.org/3.5/library/stdtypes.html#list">docs</a>:</p> <blockquote> <p>Lists may be constructed in several ways:</p> <ul> <li>Using a pair of square brackets to denote the empty list: <code>[]</code></li> <li>Using square brackets, separating items with commas: <code>[a]</code>, <code>[a, b, c]</code></li> <li>Using a list comprehension: <code>[x for x in iterable]</code></li> <li>Using the type constructor: <code>list()</code> or <code>list(iterable)</code></li> </ul> </blockquote> <p>But it seems that using <code>list()</code> uses more memory.</p> <p>And the bigger the <code>list</code> is, the more the gap increases.</p> <p><a href="https://i.stack.imgur.com/VVHJL.png"><img src="https://i.stack.imgur.com/VVHJL.png" alt="Difference in memory"></a></p> <p>Why does this happen?</p> <p><strong>UPDATE #1</strong></p> <p>Test with Python 3.6.0b2:</p> <pre><code>Python 3.6.0b2 (default, Oct 11 2016, 11:52:53) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
&gt;&gt;&gt; import sys
&gt;&gt;&gt; sys.getsizeof(list(range(100)))
1008
&gt;&gt;&gt; sys.getsizeof([i for i in range(100)])
912
</code></pre> <p><strong>UPDATE #2</strong></p> <p>Test with Python 2.7.12:</p> <pre><code>Python 2.7.12 (default, Jul 1 2016, 15:12:24) 
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
&gt;&gt;&gt; import sys
&gt;&gt;&gt; sys.getsizeof(list(xrange(100)))
1016
&gt;&gt;&gt; sys.getsizeof([i for i in xrange(100)])
920
</code></pre>
54
2016-10-13T10:25:25Z
40,019,900
<p>Thanks everyone for helping me understand this awesome Python behavior.</p> <p>I don't want to make the question that massive (that's why I'm posting an answer); I just want to show and share my thoughts.</p> <p>As @ReutSharabani noted correctly: "list() deterministically determines list size". You can see it from this graph.</p> <p><a href="https://i.stack.imgur.com/JrqC9.png"><img src="https://i.stack.imgur.com/JrqC9.png" alt="graph of sizes"></a></p> <p>When you <code>append</code> or use a list comprehension, you always have some sort of boundary that extends when you reach some point. With <code>list()</code> you have almost the same boundaries, but they are floating.</p> <p><strong>UPDATE</strong></p> <p>So thanks to @ReutSharabani, @tavo, @SvenFestersen</p> <p>To sum up: <code>list()</code> preallocates memory depending on the list size; a list comprehension cannot do that (it requests more memory when needed, like <code>.append()</code>). That's why <code>list()</code> stores more memory.</p> <p>One more graph, showing that <code>list()</code> preallocates memory. The green line shows <code>list(range(830))</code> appending element by element; for a while the memory does not change.</p> <p><a href="https://i.stack.imgur.com/yoV85.png"><img src="https://i.stack.imgur.com/yoV85.png" alt="list() preallocates memory"></a></p> <p><strong>UPDATE 2</strong></p> <p>As @Barmar noted in the comments below, <code>list()</code> must be faster than a list comprehension, so I ran <code>timeit()</code> with <code>number=1000</code> for list lengths from <code>4**0</code> to <code>4**10</code>, and the results are</p> <p><a href="https://i.stack.imgur.com/WNSnO.png"><img src="https://i.stack.imgur.com/WNSnO.png" alt="time measurements"></a></p>
23
2016-10-13T11:37:10Z
[ "python", "list", "list-comprehension" ]
Cython- Cannot open include file: 'io.h': No such file or directory
40,018,405
<p>I'm just starting to learn Cython, and I was trying to compile a simple .pyx file.</p> <pre><code>print("hello")
</code></pre> <p>Here's my setup.py:</p> <pre><code>from distutils.core import setup
from Cython.Build import cythonize

setup(
    ext_modules = cythonize("hello.pyx")
)
</code></pre> <p>Then I run the command:</p> <pre><code>python setup.py build_ext --inplace
</code></pre> <p>The error is below. I've struggled googling it and found nothing helpful.</p> <pre><code>running build_ext
building 'hello' extension
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -IC:\Users\Jackie\AppData\Local\Continuum\Anaconda3\include -IC:\Users\Jackie\AppData\Local\Continuum\Anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\wdf\ucrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tchello.c /Fobuild\temp.win32-3.5\Release\hello.obj
hello.c
c:\users\jackie\appdata\local\continuum\anaconda3\include\pyconfig.h(68): fatal error C1083: Cannot open include file: 'io.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\cl.exe' failed with exit status 2
</code></pre> <p>Can someone help me to resolve the error, please?</p> <p>I have Anaconda3 4.1.1, Python 3.5 and VS Express 2015 installed.</p> <p>It's really frustrating...</p>
0
2016-10-13T10:25:36Z
40,037,613
<p>Well... the error went away after I uninstalled all Microsoft and Python related software and installed Anaconda and VS2015 Express again. However, another error came along... </p>
0
2016-10-14T07:40:54Z
[ "python", "cython" ]
How to pretty print dictionaries in iPython
40,018,594
<p>I'm currently using RethinkDB, which has a nice web UI with a Data Explorer which allows the user to print out the contents of the database like this:</p> <p><a href="https://i.stack.imgur.com/bKKgU.png" rel="nofollow"><img src="https://i.stack.imgur.com/bKKgU.png" alt="enter image description here"></a></p> <p>Note that each key-value pair starts on a new line, and the keys and values (mostly) have different colors. By contrast, if I print out the same using iPython, I get an almost illegible result:</p> <p><a href="https://i.stack.imgur.com/4jLcS.png" rel="nofollow"><img src="https://i.stack.imgur.com/4jLcS.png" alt="enter image description here"></a></p> <p>This is slightly ameliorated if I iterate over the cursor and <code>print</code> each item, like so:</p> <p><a href="https://i.stack.imgur.com/VegvQ.png" rel="nofollow"><img src="https://i.stack.imgur.com/VegvQ.png" alt="enter image description here"></a></p> <p>However, this requires more typing and still doesn't look as good as the RethinkDB web UI. Is there perhaps an iPython plugin that I can install to improve the appearance of the printed output?</p> <p>(I had a look at <a href="https://docs.python.org/2/library/pprint.html" rel="nofollow">pprint</a>, but this seems to control only the positioning of the text and not its color).</p>
0
2016-10-13T10:34:44Z
40,019,015
<p>You could use <a href="https://docs.python.org/3/library/json.html#json.dumps" rel="nofollow"><code>json.dumps()</code></a>:</p> <pre><code>import json

for row in r.db(....).run(conn):
    print(json.dumps(row, indent=4))
</code></pre> <p>Although this does not display the keys in sorted order, as appears to be the case in the example, it might be sufficient for your needs. As pointed out by @coder, <code>json.dumps()</code> can sort the keys by specifying the <code>sort_keys=True</code> parameter.</p> <pre><code>for row in r.db(....).run(conn):
    print(json.dumps(row, indent=4, sort_keys=True))
</code></pre> <p>It might also be possible to print the object directly (haven't tested this):</p> <pre><code>print(json.dumps(r.db(....).run(conn), indent=4, sort_keys=True))
</code></pre> <p>which might also print out the surrounding "list" object.</p> <hr> <p>To handle objects that do not support serialisation to JSON you can use a custom <a href="https://docs.python.org/3/library/json.html#json.JSONEncoder" rel="nofollow"><code>JSONEncoder</code></a>. Here is an example which handles <code>datetime.datetime</code> objects:</p> <pre><code>from datetime import datetime

class DateTimeAwareJSONEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, datetime):
            tz = obj.tzname()
            return obj.ctime() + (' {}'.format(tz) if tz else '')
        return super(DateTimeAwareJSONEncoder, self).default(obj)

for row in r.db(....).run(conn):
    print(json.dumps(row, indent=4, sort_keys=True, cls=DateTimeAwareJSONEncoder))
</code></pre> <p>You can use <a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow"><code>datetime.strftime()</code></a> to format the date time string as required.</p>
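<p>A self-contained variant of the same idea, with a plain dict standing in for a cursor row and <code>isoformat()</code> used for the datetime (the names and sample values are made up):</p>

```python
import json
from datetime import datetime

class DateTimeAwareJSONEncoder(json.JSONEncoder):
    def default(self, obj):
        # fall back to an ISO-8601 string for datetimes
        if isinstance(obj, datetime):
            return obj.isoformat()
        return super(DateTimeAwareJSONEncoder, self).default(obj)

row = {"name": "alice", "id": 1, "joined": datetime(2016, 10, 13, 10, 58)}
pretty = json.dumps(row, indent=4, sort_keys=True, cls=DateTimeAwareJSONEncoder)
print(pretty)
```

<p>With <code>sort_keys=True</code> the keys come out as <code>id</code>, <code>joined</code>, <code>name</code>, one per indented line.</p>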
3
2016-10-13T10:54:14Z
[ "python", "rethinkdb" ]
How to pretty print dictionaries in iPython
40,018,594
<p>I'm currently using RethinkDB, which has a nice web UI with a Data Explorer which allows the user to print out the contents of the database like this:</p> <p><a href="https://i.stack.imgur.com/bKKgU.png" rel="nofollow"><img src="https://i.stack.imgur.com/bKKgU.png" alt="enter image description here"></a></p> <p>Note that each key-value pair starts on a new line, and the keys and values (mostly) have different colors. By contrast, if I print out the same using iPython, I get an almost illegible result:</p> <p><a href="https://i.stack.imgur.com/4jLcS.png" rel="nofollow"><img src="https://i.stack.imgur.com/4jLcS.png" alt="enter image description here"></a></p> <p>This is slightly ameliorated if I iterate over the cursor and <code>print</code> each item, like so:</p> <p><a href="https://i.stack.imgur.com/VegvQ.png" rel="nofollow"><img src="https://i.stack.imgur.com/VegvQ.png" alt="enter image description here"></a></p> <p>However, this requires more typing and still doesn't look as good as the RethinkDB web UI. Is there perhaps an iPython plugin that I can install to improve the appearance of the printed output?</p> <p>(I had a look at <a href="https://docs.python.org/2/library/pprint.html" rel="nofollow">pprint</a>, but this seems to control only the positioning of the text and not its color).</p>
0
2016-10-13T10:34:44Z
40,023,364
<p><a href="http://stackoverflow.com/users/21945/mhawke">mhawke</a>'s answer works if one adds the keyword argument <code>time_format="raw"</code> to RethinkDB's <code>run()</code> command. (Otherwise, you get a <code>TypeError</code> because RethinkDB's object containing the time zone is not JSON serializable). The result looks like this:</p> <p><a href="https://i.stack.imgur.com/xFINb.png" rel="nofollow"><img src="https://i.stack.imgur.com/xFINb.png" alt="enter image description here"></a></p> <p>which is much more legible. A slight drawback is that the <code>epoch_time</code> is more difficult to interpret than the original time format.</p>
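<p>If the <code>epoch_time</code> values need to be human-readable again, they can be converted back with the standard library; a small sketch (the sample value below is made up for illustration):</p>

```python
from datetime import datetime

epoch_time = 1476354000                       # an illustrative epoch_time value
readable = datetime.utcfromtimestamp(epoch_time)
print(readable.isoformat())                   # 2016-10-13T10:20:00
```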
0
2016-10-13T14:09:05Z
[ "python", "rethinkdb" ]
Sphinx cross referencing breaks for inherited objects imported and documented in a parent module
40,018,681
<p>I'm trying to get my Sphinx documentation to build correctly and have cross-references (including those from inherited relations) work right.</p> <p>In my project, I have a situation which is depicted in the example below, which I replicated for convenience on <a href="https://github.com/anjos/sphinx-broken-xref" rel="nofollow">this github repo</a>:</p> <pre><code>$ tree .
.
├── a
│   ├── b
│   │   └── __init__.py
│   └── __init__.py
├── conf.py
├── index.rst
└── README.md
</code></pre> <p>In <code>a.b.__init__</code>, I declare classes <code>A</code> and <code>B</code>. <code>B</code> inherits from <code>A</code>. In <code>a.__init__</code>, I import <code>A</code> and <code>B</code> like: <code>from .b import A, B</code>. The reason I do this in my real projects is to reduce the import paths on modules while keeping the implementation of specific classes in separate files.</p> <p>Then, in my rst files, I <em>autodoc</em> module <code>a</code> with <code>.. automodule:: a</code>. Because <code>a.b</code> is just an auxiliary module, I don't <em>autodoc</em> it, since I don't want to get repeated references to the same classes or to confuse the user about what they should really be doing. I also set <code>show-inheritance</code>, expecting that <code>a.B</code> will have a back link to <code>a.A</code>.</p> <p>If I try to sphinx-build this in nit-picky mode, I'll get the following warning:</p> <pre><code>WARNING: py:class reference target not found: a.b.A
</code></pre> <p>If I look at the generated documentation for class <code>B</code>, I verify it is not properly linked against class <code>A</code>, which just confirms the warning above.</p> <p>Question: how do I fix this?</p>
2
2016-10-13T10:38:34Z
40,040,000
<p>Sphinx uses the value of the <code>__module__</code> attribute to figure out the name of the module in which a class/function/method was defined (see <a href="https://docs.python.org/2/reference/datamodel.html#the-standard-type-hierarchy" rel="nofollow">https://docs.python.org/2/reference/datamodel.html#the-standard-type-hierarchy</a>). Sometimes this is not what you want in the documentation. </p> <p>The attribute is writable. Your problem can be fixed by adding this line in a/__init__.py: </p> <pre><code>A.__module__ = "a" </code></pre>
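<p>The effect can be sketched in a couple of lines (run outside the package, so the original value here is whatever module the class was defined in, rather than <code>'a.b'</code>):</p>

```python
class A(object):
    pass

original = A.__module__   # where Python thinks A was defined
A.__module__ = "a"        # pretend A lives directly in package 'a'
print(original, A.__module__)
```

<p>After the assignment, any introspection-based tool (Sphinx included) that reads <code>A.__module__</code> will report <code>a</code>.</p>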
1
2016-10-14T09:43:31Z
[ "python", "python-sphinx" ]
What is the "role" argument in QTableWidgetItem.data(self, int *role*)?
40,018,723
<p>I am looking through the documentation at:</p> <ul> <li><a href="http://pyqt.sourceforge.net/Docs/PyQt4/qtablewidgetitem.html#data" rel="nofollow">http://pyqt.sourceforge.net/Docs/PyQt4/qtablewidgetitem.html#data</a></li> </ul> <p>I can put the line <code>IDList.append(item.data())</code> into my code, and print that as the correct value. But weirdly, after that line, it gives me this error:</p> <blockquote> <p>TypeError: QTableWidgetItem.data(int): not enough arguments</p> </blockquote> <p>I don't know why the error message comes after the print line, but I don't think that should be important. What does the documentation mean by "int <em>role</em>"? Can you give an example, please?</p>
1
2016-10-13T10:40:16Z
40,021,010
<p>An item has many kinds of data associated with it, such as text, font, background, tooltip, etc. The <code>role</code> is a value from the <a href="https://doc.qt.io/qt-4.8/qt.html#ItemDataRole-enum" rel="nofollow">ItemDataRole enum</a> that allows you to specify which kind of data you want.</p> <p>Some of these items of data also have an accessor function. So these two lines of code are equivalent:</p> <pre><code>font = item.font() font = item.data(QtCore.Qt.FontRole) </code></pre> <p>The data roles are extensible. You can use any values starting at <code>Qt.UserRole</code> to associate a custom role with your own data:</p> <pre><code>MyRole = QtCore.Qt.UserRole + 2 item.setData(MyRole, [1, 2, 3]) ... data = item.data(MyRole) </code></pre>
2
2016-10-13T12:29:02Z
[ "python", "pyqt", "qtablewidgetitem" ]
how do you map a pandas dataframe column to a function which requires more than one parameter
40,018,755
<p>I have a function which has two arguments, the first argument is some text the second a regex pattern. </p> <p>I want to pass each row of a certain column from my dataframe to a function using .map however I am not sure how to direct the data from the dataframe to be the first argument and the regex (which will be the same pattern for each row) to be the second argument?</p> <pre><code>def regex(pattern,text): p = pattern t = str(text) results = re.findall(p,t) return results df['new_column'] = df['source_code'].map(some_function) </code></pre>
1
2016-10-13T10:41:58Z
40,019,301
<p>I think you need a <code>lambda</code> function with parameters <code>pattern</code> and <code>x</code>:</p> <pre><code>df['new_column'] = df['source_code'].map(lambda x: some_function(pattern, x))

df['new_column'] = df['source_code'].apply(lambda x: some_function(pattern, x))
</code></pre> <p>Thank you <a href="http://stackoverflow.com/questions/40018755/how-do-you-map-a-pandas-dataframe-column-to-a-function-which-requires-more-than#comment67315401_40019301">Jon Clements</a> for another solution:</p> <pre><code>df['new_column'] = df['source_code'].apply(some_function, args=(pattern,))
</code></pre> <p>Note that <code>apply</code> passes each cell value as the first positional argument, so for the <code>args=(pattern,)</code> variant the function signature needs to be <code>some_function(text, pattern)</code>, i.e. with the arguments swapped.</p>
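<p>A runnable sketch putting the pieces together (the sample data is made up; <code>some_function</code> here reimplements the question's <code>regex</code> helper):</p>

```python
import re

import pandas as pd

def some_function(pattern, text):
    # the question's regex helper: all matches of `pattern` in `text`
    return re.findall(pattern, str(text))

pattern = r'\d+'
df = pd.DataFrame({'source_code': ['abc123', 'no digits', 'x9y8']})
df['new_column'] = df['source_code'].map(lambda x: some_function(pattern, x))
print(df['new_column'].tolist())  # [['123'], [], ['9', '8']]
```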
1
2016-10-13T11:06:26Z
[ "python", "regex", "pandas" ]
SQLalchemy slow with Redshift
40,018,778
<p>I have a 44k rows table in a pandas Data Frame. When I try to export this table (or any other table) to a Redshift database, the process takes ages. I'm using sqlalchemy to create a conexion like this:</p> <pre><code>import sqlalchemy as sal engine = sal.create_engine('redshift+psycopg2://blablamyhost/myschema') </code></pre> <p>The method I use to export the tables is Pandas <code>to_sql</code> like this:</p> <pre><code>dat.to_sql(name="olap_comercial",con=eng,schema="monetization",index=False,if_exists="replace" ,dtype={"description":sal.types.String(length=271),"date_postoffer":sal.types.DATE}) </code></pre> <p>Is it normal that it is so slow? I'm talking about more than 15 minutes.</p>
0
2016-10-13T10:42:46Z
40,106,504
<p>Yes, it is normal to be that slow (and possibly slower for large clusters). Regular sql inserts (as generated by sqlalchemy) are very slow for Redshift, and should be avoided.</p> <p>You should consider using S3 as an intermediate staging layer, your data flow will be: dataframe->S3->redshift</p> <p>Ideally, you should also gzip your data before uploading to S3, this will improve your performance as well.</p> <p>This can be coordinated from your python script using BOTO3 and psycopg2 <a href="https://boto3.readthedocs.io/en/latest/" rel="nofollow">https://boto3.readthedocs.io/en/latest/</a></p>
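<p>The gzip staging step can be sketched in pure Python; the actual S3 upload (e.g. with boto3) and the Redshift <code>COPY ... GZIP</code> statement need live credentials, so they are only indicated in the comments:</p>

```python
import gzip

# stand-in for rows exported from the dataframe
rows = [(1, 'a'), (2, 'b')]
csv_text = ''.join('{0},{1}\n'.format(i, s) for i, s in rows)
payload = gzip.compress(csv_text.encode('utf-8'))

# upload `payload` to S3 here (boto3), then issue a COPY ... GZIP in Redshift
print(gzip.decompress(payload).decode('utf-8'))
```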
1
2016-10-18T11:03:48Z
[ "python", "sqlalchemy" ]
Python - Interactive phonebook
40,018,832
<p>I need help with adding a name and phone number to a dictionary in ONE line with <code>raw_input</code>. It should look like this: <code>add John 123</code> (adds the name John with the number 123). Here is my code:</p> <pre><code>def phonebook():
    pb={}
    while True:
        val,q,w=raw_input().split(" ")
        if val=='add':
            if q in pb:
                print
                print "This name already exists"
                print
            else:
                pb[q]=w  # adds name + number to the dictionary
        if val=='lookup':
            if q in pb:
                print
                print pb[q]
                print
            else:
                print "Name is not in phonebook"
                print
</code></pre> <p>I'm getting the unpack error. Any tips? Is there another way to do it?</p>
0
2016-10-13T10:45:44Z
40,019,202
<p>The following line assumes that you type exactly 3 words, each separated by a space character:</p> <pre><code>val, q, w = raw_input().split(" ")
</code></pre> <p>If you have fewer or more than 3 words (which is the case when you use the lookup command, isn't it?), you will get an unpack error.</p> <p>You could capture the input in a single variable and then test its first element to avoid the error:</p> <pre><code>in_ = raw_input().split(" ")
if in_[0] == 'add':
    # process add action
if in_[0] == 'lookup':
    # process lookup action
</code></pre> <hr> <p>Bonus tip: you do not need to give the space character to the <code>split</code> method, since it is the default value:</p> <pre><code>raw_input().split()  # will work as well
</code></pre>
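<p>A compact sketch of the same approach with explicit length checks, so malformed input never triggers an unpack error (the command names follow the question; the return messages are illustrative):</p>

```python
def handle(pb, line):
    parts = line.split()
    if not parts:
        return "Empty input"
    if parts[0] == 'add' and len(parts) == 3:
        name, number = parts[1], parts[2]
        if name in pb:
            return "This name already exists"
        pb[name] = number
        return "Added"
    if parts[0] == 'lookup' and len(parts) == 2:
        return pb.get(parts[1], "Name is not in phonebook")
    return "Unknown command"

pb = {}
print(handle(pb, "add John 123"))   # Added
print(handle(pb, "lookup John"))    # 123
print(handle(pb, "lookup Jane"))    # Name is not in phonebook
```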
2
2016-10-13T11:02:12Z
[ "python", "split", "interactive" ]
Python - Interactive phonebook
40,018,832
<p>I need help with adding a name and phone number to a dictionary in ONE line with <code>raw_input</code>. It should look like this: <code>add John 123</code> (adds the name John with the number 123). Here is my code:</p> <pre><code>def phonebook():
    pb={}
    while True:
        val,q,w=raw_input().split(" ")
        if val=='add':
            if q in pb:
                print
                print "This name already exists"
                print
            else:
                pb[q]=w  # adds name + number to the dictionary
        if val=='lookup':
            if q in pb:
                print
                print pb[q]
                print
            else:
                print "Name is not in phonebook"
                print
</code></pre> <p>I'm getting the unpack error. Any tips? Is there another way to do it?</p>
0
2016-10-13T10:45:44Z
40,019,287
<p>I think you get an unpack error when you look someone up with "lookup john". The code looks for a third value in the <code>split(" ")</code> but doesn't find one.</p> <p>I'm not sure if this might help:</p> <pre><code>def phonebook():
    pb={}
    while True:
        vals = raw_input().split(" ")
        if vals[0] == 'add':
            q = vals[1]
            w = vals[2]
            if q in pb:
                print "This name already exists"
            else:
                pb[q]=w  # adds name + number to the dictionary
        elif vals[0]=='lookup':
            q = vals[1]
            if q in pb:
                print
                print str(q) + "'s number is: " + str(pb[q])
                print
            else:
                print "Name is not in phonebook"
                print
</code></pre> <p>which returns:</p> <pre><code>&gt;add j 12
&gt;add j 12
This name already exists
&gt;lookup j

j's number is: 12

&gt;lookup j 12

j's number is: 12

&gt;lookup k
Name is not in phonebook
</code></pre>
1
2016-10-13T11:05:58Z
[ "python", "split", "interactive" ]
How do I resize rows with setRowHeight and resizeRowToContents in PyQt4?
40,019,022
<p>I have a small issue with proper resizing of rows in my tableview. I have a vertical header and no horizontal header. I tried:</p> <pre><code>self.Popup.table.setModel(notesTableModel(datainput)) self.Popup.table.horizontalHeader().setVisible(False) self.Popup.table.verticalHeader().setFixedWidth(200) for n in xrange(self.Popup.table.model().columnCount()): self.Popup.table.setColumnWidth(n,150) </code></pre> <p>And this works fine, but when I try:</p> <pre><code> for n in xrange(self.Popup.table.model().rowCount()): self.Popup.table.setRowHeight(n,100) </code></pre> <p>or </p> <pre><code>for n in xrange(self.Popup.table.model().rowCount()): self.Popup.table.resizeRowToContents(n) </code></pre> <p>No row is resized, even if the text exceeds the length of the cell.</p> <p>How can I force the rows to fit the data?</p>
0
2016-10-13T10:54:43Z
40,043,620
<p>For me, both <code>setRowHeight</code> and <code>resizeRowsToContents</code> work as expected. Here's the test script I used:</p> <pre><code>from PyQt4 import QtCore, QtGui class Window(QtGui.QWidget): def __init__(self, rows, columns): super(Window, self).__init__() self.table = QtGui.QTableView(self) self.table.horizontalHeader().setVisible(False) layout = QtGui.QVBoxLayout(self) layout.addWidget(self.table) model = QtGui.QStandardItemModel(rows, columns, self.table) self.table.setModel(model) text = 'some long item of text that requires word-wrapping' for column in range(model.columnCount()): self.table.setColumnWidth(column, 150) for row in range(model.rowCount()): item = QtGui.QStandardItem(text) model.setItem(row, column, item) # self.table.setRowHeight(row, 100) self.table.resizeRowsToContents() if __name__ == '__main__': import sys app = QtGui.QApplication(sys.argv) window = Window(4, 3) window.setGeometry(800, 150, 500, 250) window.show() sys.exit(app.exec_()) </code></pre> <p>And here's what it looks like:</p> <p><a href="https://i.stack.imgur.com/mAAA6.png" rel="nofollow"><img src="https://i.stack.imgur.com/mAAA6.png" alt="enter image description here"></a> </p>
0
2016-10-14T12:50:20Z
[ "python", "pyqt4", "qtableview" ]
PHP exec command is not returning full data from Python script
40,019,042
<p>I am connecting to a server through PHP SSH and then using exec to run a python program on that server.</p> <p>If I connect to that server through putty and execute the same command on the command line, I get a result like:</p> <blockquote> <p>Evaluating....</p> <p>Connecting....</p> <p>Retrieving data....</p> <p>1) Statement 1</p> <p>2) Statement 2</p> <p>.</p> <p>.</p> <p>.</p> <p>N) Statement N</p> </blockquote> <p>The Python program was written by somebody else...</p> <p>When I connect through SSH in PHP, I can execute <code>$ssh-&gt;exec("ls")</code> and get the full results, exactly as on the server command line. But when I try <code>$ssh-&gt;exec("python myscript.py -s statement 0 0 0");</code> I can't get the full results; I get a random line as the output.</p> <p>In general, if somebody has experienced the same issue and solved it, please let me know.</p> <p>Thanks</p>
0
2016-10-13T10:55:17Z
40,019,200
<p>Perhaps it's caused by buffering of the output. Try adding the <code>-u</code> option to your Python command - this forces stdout, stdin and stderr to be unbuffered.</p>
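<p>If changing the command line is not an option, the script itself can flush after each write, which has the same effect for stdout; a sketch (the helper name is made up, and a <code>StringIO</code> stands in for <code>sys.stdout</code> so the example is self-contained):</p>

```python
import io

def report(stream, message):
    # write and flush immediately so a remote exec() sees output as it happens
    stream.write(message + "\n")
    stream.flush()

buf = io.StringIO()          # in the real script this would be sys.stdout
report(buf, "Evaluating....")
print(buf.getvalue(), end="")
```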
1
2016-10-13T11:02:07Z
[ "php", "python", "ssh", "centos", "exec" ]
Build nested dictionary in shell script
40,019,081
<p>My aim is to create a dictionary structure as mentioned below in a shell script and pass that as an argument to a python script. I could build that dictionary in the python script itself by passing all the variable values from the shell, but that would be too many arguments for my python script.</p> <pre><code>{
  "a": "Hello",
  "b": {
    "c": {
      "key": "value1"
    },
    "d": {
      "key2": "value2"
    }
  }
}
</code></pre> <p>Where <code>a, b, c, d, key, key2</code> are string values stored in different variables.</p> <p>Is there any way to achieve this?</p>
1
2016-10-13T10:57:13Z
40,019,857
<p>You can create a json file with your dictionary and pass it to the python script with argparse.</p> <p>For example:</p> <pre><code>import os
import argparse
import json

def execute(json_file):
    if os.path.isfile(json_file):
        with open(json_file) as json_data:
            data = json.load(json_data)
            # here you can process your dictionary data
            print data

parser = argparse.ArgumentParser(description="This script will take a json file as argument")
parser.add_argument('json')
arg = parser.parse_args()
json_file = arg.json
execute(json_file)
</code></pre> <p>Open a command shell and enter:</p> <pre><code>C:\Python27\python.exe argDict.py C:\temp.json
</code></pre>
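<p>For reference, the nested structure from the question can also be rebuilt inside the script from a handful of scalar values and dumped as JSON; a sketch (the variable names are illustrative stand-ins for values arriving from the shell):</p>

```python
import json

a, c_key, d_key2 = "Hello", "value1", "value2"   # stand-ins for shell variables
data = {
    "a": a,
    "b": {
        "c": {"key": c_key},
        "d": {"key2": d_key2},
    },
}
print(json.dumps(data, sort_keys=True))
```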
0
2016-10-13T11:35:01Z
[ "python", "shell", "dictionary" ]
Unable to use JSON.parse in my django template
40,019,107
<p>I have a variable data1 in my Django view which has been returned in the following way -</p> <pre><code>def dashboard(request): df1 = pd.read_excel("file.xlsx") str1= df1.to_json(orient = 'records') data1 = json.loads(str1) return render(request, 'dashboard/new 1.html',{'data1' : data1}) </code></pre> <p>The variable is then called in the template using javascript</p> <pre><code> &lt;script type = text/javascript&gt; var ob2 = JSON.parse( {{ data1 }} ); document.write(ob2); &lt;/script&gt; </code></pre> <p>This does not show anything on the HTML webpage created. Is there anything wrong in the code?</p>
1
2016-10-13T10:58:14Z
40,019,153
<p>Try outputting it as a string:</p> <pre><code>&lt;script type = text/javascript&gt; var ob2 = JSON.parse( "{{ data1 }}" ); document.write(ob2); &lt;/script&gt; </code></pre> <p>If this is not producing the results, I suggest just printing <code>{{ data1 }}</code> on screen and seeing exactly what is being returned by Django.</p>
1
2016-10-13T11:00:15Z
[ "javascript", "python", "json", "django" ]
Unable to use JSON.parse in my django template
40,019,107
<p>I have a variable data1 in my Django view which has been returned in the following way -</p> <pre><code>def dashboard(request): df1 = pd.read_excel("file.xlsx") str1= df1.to_json(orient = 'records') data1 = json.loads(str1) return render(request, 'dashboard/new 1.html',{'data1' : data1}) </code></pre> <p>The variable is then called in the template using javascript</p> <pre><code> &lt;script type = text/javascript&gt; var ob2 = JSON.parse( {{ data1 }} ); document.write(ob2); &lt;/script&gt; </code></pre> <p>This does not show anything on the HTML webpage created. Is there anything wrong in the code?</p>
1
2016-10-13T10:58:14Z
40,019,199
<p>Besides The Brewmaster's answer, the other problems are:</p> <pre><code>data1 = json.loads(str1) </code></pre> <p>That turns the JSON string back into a Python data structure. Just send <code>str1</code> itself to the template, and call it <code>a</code> as that's what you use in the template:</p> <pre><code>return render(request, 'dashboard/new 1.html',{'a' : str1}) </code></pre>
1
2016-10-13T11:02:04Z
[ "javascript", "python", "json", "django" ]
Unable to use JSON.parse in my django template
40,019,107
<p>I have a variable data1 in my Django view which has been returned in the following way -</p> <pre><code>def dashboard(request): df1 = pd.read_excel("file.xlsx") str1= df1.to_json(orient = 'records') data1 = json.loads(str1) return render(request, 'dashboard/new 1.html',{'data1' : data1}) </code></pre> <p>The variable is then called in the template using javascript</p> <pre><code> &lt;script type = text/javascript&gt; var ob2 = JSON.parse( {{ data1 }} ); document.write(ob2); &lt;/script&gt; </code></pre> <p>This does not show anything on the HTML webpage created. Is there anything wrong in the code?</p>
1
2016-10-13T10:58:14Z
40,019,543
<p>Try this, removing <code>Parse</code>:</p> <pre><code>&lt;script type = text/javascript&gt;
    document.write("{{ data1 }}");
&lt;/script&gt;
</code></pre>
0
2016-10-13T11:19:24Z
[ "javascript", "python", "json", "django" ]
parse xml files in subdirectories using beautifulsoup in python
40,019,122
<p>I have more than 5000 XML files in multiple sub directories named f1, f2, f3, f4,... Each folder contains more than 200 files. At the moment I want to extract all the files using BeautifulSoup only, as I have already tried lxml, ElementTree and minidom but am struggling to get it done through BeautifulSoup.</p> <p>I can extract a single file in a sub directory, but I am not able to get all the files through BeautifulSoup.</p> <p>I have checked the below posts:</p> <p><a href="http://stackoverflow.com/questions/7785831/xml-parsing-in-python-using-beautifulsoup">XML parsing in Python using BeautifulSoup</a> (Extract Single File)</p> <p><a href="http://stackoverflow.com/questions/38211588/parsing-all-xml-files-in-directory-and-all-subdirectories">Parsing all XML files in directory and all subdirectories</a> (This is minidom)</p> <p><a href="http://stackoverflow.com/questions/30267755/reading-1000s-of-xml-documents-with-beautifulsoup">Reading 1000s of XML documents with BeautifulSoup</a> (Unable to get the files through this post)</p> <p>Here is the code which I have written to extract a single file:</p> <pre><code>from bs4 import BeautifulSoup

file = BeautifulSoup(open('./Folder/SubFolder1/file1.XML'),'lxml-xml')
print(file.prettify())
</code></pre> <p>When I try to get all files in all folders I am using the below code:</p> <pre><code>from bs4 import BeautifulSoup

file = BeautifulSoup('//Folder/*/*.XML','lxml-xml')
print(file.prettify())
</code></pre> <p>Then I am only getting the XML version line and nothing else. I know that I have to use a for loop, but I am not sure how to use it in order to parse all the files through the loop.</p> <p>I know that it will be very slow, but for the sake of learning I want to use BeautifulSoup to parse all the files. If a for loop is not recommended, I would be grateful for a better solution, but only using BeautifulSoup.</p> <p>Regards,</p>
0
2016-10-13T10:58:54Z
40,019,427
<p>Use <code>glob.glob</code> to find the XML documents:</p> <pre><code>import glob
from bs4 import BeautifulSoup

for filename in glob.glob('//Folder/*/*.XML'):
    with open(filename) as f:
        content = BeautifulSoup(f, 'lxml-xml')
    print(content.prettify())
</code></pre> <p><em>note</em>: don't shadow the builtin function/class <code>file</code>. Also note that <code>BeautifulSoup</code> must be given the open file object (or its contents), not the path string, otherwise it will just parse the filename itself.</p> <p>Read the <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow">BeautifulSoup Quick Start</a></p>
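<p>The glob pattern itself can be sanity-checked without BeautifulSoup by building a throwaway directory tree that mimics the f1, f2, ... layout; a sketch:</p>

```python
import glob
import os
import tempfile

root = tempfile.mkdtemp()
for sub in ('f1', 'f2', 'f3'):
    os.makedirs(os.path.join(root, sub))
    with open(os.path.join(root, sub, 'file1.XML'), 'w') as f:
        f.write('<root/>')

# one '*' per directory level: sub-directory, then file name
matches = glob.glob(os.path.join(root, '*', '*.XML'))
print(len(matches))  # 3
```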
0
2016-10-13T11:12:47Z
[ "python", "xml-parsing", "beautifulsoup" ]
parse xml files in subdirectories using beautifulsoup in python
40,019,122
<p>I have more than 5000 XML files in multiple sub directories named f1, f2, f3, f4,... Each folder contains more than 200 files. At the moment I want to extract all the files using BeautifulSoup only, as I have already tried lxml, ElementTree and minidom but am struggling to get it done through BeautifulSoup.</p> <p>I can extract a single file in a sub directory, but I am not able to get all the files through BeautifulSoup.</p> <p>I have checked the below posts:</p> <p><a href="http://stackoverflow.com/questions/7785831/xml-parsing-in-python-using-beautifulsoup">XML parsing in Python using BeautifulSoup</a> (Extract Single File)</p> <p><a href="http://stackoverflow.com/questions/38211588/parsing-all-xml-files-in-directory-and-all-subdirectories">Parsing all XML files in directory and all subdirectories</a> (This is minidom)</p> <p><a href="http://stackoverflow.com/questions/30267755/reading-1000s-of-xml-documents-with-beautifulsoup">Reading 1000s of XML documents with BeautifulSoup</a> (Unable to get the files through this post)</p> <p>Here is the code which I have written to extract a single file:</p> <pre><code>from bs4 import BeautifulSoup

file = BeautifulSoup(open('./Folder/SubFolder1/file1.XML'),'lxml-xml')
print(file.prettify())
</code></pre> <p>When I try to get all files in all folders I am using the below code:</p> <pre><code>from bs4 import BeautifulSoup

file = BeautifulSoup('//Folder/*/*.XML','lxml-xml')
print(file.prettify())
</code></pre> <p>Then I am only getting the XML version line and nothing else. I know that I have to use a for loop, but I am not sure how to use it in order to parse all the files through the loop.</p> <p>I know that it will be very slow, but for the sake of learning I want to use BeautifulSoup to parse all the files. If a for loop is not recommended, I would be grateful for a better solution, but only using BeautifulSoup.</p> <p>Regards,</p>
0
2016-10-13T10:58:54Z
40,019,432
<p>If I understood you correctly, then you do need to loop through the files, as you had already thought:</p> <pre><code>from bs4 import BeautifulSoup from pathlib import Path for filepath in Path('./Folder').glob('*/*.XML'): with filepath.open() as f: soup = BeautifulSoup(f,'lxml-xml') print(soup.prettify()) </code></pre> <p><a href="https://docs.python.org/3/library/pathlib.html" rel="nofollow"><code>pathlib</code></a> is just one approach to handling paths, on a higher level using objects. You could achieve the same with <a href="https://docs.python.org/3/library/glob.html" rel="nofollow"><code>glob</code></a> and string paths.</p>
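The answer mentions that the same result can be achieved with `glob` and string paths; a minimal sketch of just that path-collection step (the BeautifulSoup parsing step is unchanged; the helper name `xml_paths` is my own, for illustration):

```python
import glob
import os

def xml_paths(root):
    # Match every .XML file exactly one directory below root,
    # e.g. Folder/f1/file1.XML -- the same pattern as Path.glob('*/*.XML').
    return sorted(glob.glob(os.path.join(root, '*', '*.XML')))
```

Each returned path can then be opened and handed to `BeautifulSoup(f, 'lxml-xml')` exactly as in the `pathlib` version above.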
1
2016-10-13T11:13:03Z
[ "python", "xml-parsing", "beautifulsoup" ]
putting bounding box around text in a image
40,019,315
<p>I want to compare two screenshots containing text. Basically both the screenshots contains some pretty formatted text. I want to compare if the same formatting being reflected in both the pictures as well as same text appearing at same location in both images. </p> <p>How I am doing it right now is - </p> <pre><code>1. Apply bilateral filters to remove the underlines of text. 2. Apply threshold with value 180 as min value and clear them out 3. Apply Gaussian blur on the image to remove the unfilled space between the characters. 4. Apply threshold again with value 250 as min value. 5. Compute contours in the images 6. Draw rectangle bounding box around contours 7. use O(n^2) algo to find out max overlapped rectangle and compare text within it. </code></pre> <p>However the problem is the contours appearing in both the images are different, i.e. in one of the <code>image number of contours are 38 while other contains 53</code>. I want to have a generic solution and don't want to depend upon the image content. However one thing for sure is the image is containing a well formatted text.</p> <p>Thanks</p>
0
2016-10-13T11:07:13Z
40,022,760
<p>I'm not sure I understand exactly what you want, but to get a bounding box around each word in an image I would do this:</p> <ol> <li>Apply processing to get a good thresholding: only text, background in black, text in white. This step depends on the type and quality of your image.</li> <li>Compute the sum of each row. The sum should be different from 0 where there is text, and the rows in the gaps between text lines should be zero (you can set a threshold on this value if there is some noise). This gives you the top/bottom row of each text line.</li> <li>For each text line found in step 2, compute the sum of each column. As in step 2, columns containing a word should be different from 0, so you can find all the spaces between words and letters. Remove all spaces which are too small to be a space between two words.</li> <li>Congratulations, you have the top/bottom row and first/last column of each word.</li> </ol>
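A minimal pure-Python sketch of steps 2 and 3 (the projection-profile idea), run on a made-up toy binary image that is already thresholded (1 = text pixel, 0 = background); the helper name `find_runs` is my own, not from any library:

```python
def find_runs(profile, min_len=1):
    """Return (start, end) index pairs of nonzero runs in a 1-D profile."""
    runs, start = [], None
    for i, v in enumerate(profile):
        if v and start is None:
            start = i                      # a run of text begins
        elif not v and start is not None:
            if i - start >= min_len:
                runs.append((start, i - 1))  # the run just ended
            start = None
    if start is not None and len(profile) - start >= min_len:
        runs.append((start, len(profile) - 1))
    return runs

# Toy binary image: one text line containing two "words"
img = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
]
row_sums = [sum(r) for r in img]            # step 2: where are the text lines?
line_runs = find_runs(row_sums)             # [(1, 2)] -> top/bottom rows
top, bottom = line_runs[0]
col_sums = [sum(img[y][x] for y in range(top, bottom + 1))
            for x in range(len(img[0]))]    # step 3: where are the words?
word_runs = find_runs(col_sums)             # [(1, 2), (4, 4)]
```

Together, `(top, bottom)` and each `(left, right)` pair in `word_runs` give a bounding box per word; on real images you would use `min_len` (and a noise threshold on the sums) to drop letter gaps.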
0
2016-10-13T13:44:17Z
[ "python", "opencv", "image-processing" ]
How to insert ' into postgresql?
40,019,347
<p>I use <code>python</code> package <code>psycopg2</code> to update database.</p> <pre><code>cur.execute("UPDATE scholars SET name='{}' WHERE id={} and name is null".format(author, scholar_id)) </code></pre> <p><code>psycopg2.ProgrammingError: syntax error at or near "Neill" LINE 1: UPDATE scholars SET name='O'Neill, Kevin' WHERE id=12403 and...</code></p> <p>data should be:<code>O'Neill, Kevin</code></p>
0
2016-10-13T11:08:22Z
40,021,767
<p>Use <a href="http://initd.org/psycopg/docs/usage.html#query-parameters" rel="nofollow">Psycopg's parameter passing functionality</a>:</p> <pre><code>cur.execute (""" UPDATE scholars SET name = %s WHERE id = %s and name is null """, (author, scholar_id) ) </code></pre> <p><a href="https://docs.python.org/3/tutorial/introduction.html#strings" rel="nofollow">Triple quotes make it clearer</a></p>
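The same idea can be illustrated with the stdlib `sqlite3` module, so the sketch runs without a PostgreSQL server (note the placeholder style differs: psycopg2 uses `%s`, sqlite3 uses `?`; the table here is a toy stand-in for the real schema):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE scholars (id INTEGER, name TEXT)")
conn.execute("INSERT INTO scholars VALUES (1, NULL)")

author = "O'Neill, Kevin"   # the embedded quote is handled safely by the driver
conn.execute(
    "UPDATE scholars SET name = ? WHERE id = ? AND name IS NULL",
    (author, 1),
)
name = conn.execute("SELECT name FROM scholars WHERE id = 1").fetchone()[0]
```

Because the value travels as a bound parameter rather than being pasted into the SQL string, no quoting or escaping is needed, and SQL injection is avoided for free.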
1
2016-10-13T13:00:19Z
[ "python", "postgresql", "psycopg2" ]
Python Iterate over a list
40,019,371
<p>trying to figure this one out, i have 3 dictionaries and i am after adding them to a list, but how can i loop over list and print out that list.</p> <pre><code>def_person = {'name' : 'TEST1','JobRole' : 'TESTJOB'} def_person1 = {'name' : 'TEST2','JobRole' : 'TESTJOB1'} def_person2 = {'name' : 'TEST3','JobRole' : 'TESTJOB2'} person = list(def_person.keys()) person.append(def_person.values()) person.append(def_person1.values()) person.append(def_person2.values()) for persons in person: print() </code></pre> <p>so i the output would be like Name jobrole with all the names under Name and job roles under jobrole.</p>
-3
2016-10-13T11:10:05Z
40,068,266
<p>Actually you cannot assume that all dictionaries having the same keys would display the keys in the same order - this is in no way guaranteed. Instead you should have a list of keys that you'd print in order, and then iterate over this key list for each dictionary, pulling values for each key.</p> <p>I show how to make the two-dimensional list correctly first:</p> <pre><code>person_keys = list(def_person.keys()) # add the heading row result = [person_keys] for person in [def_person, def_person1, def_person2]: # make a list of current person's values in the # order of keys in person_keys result.append([person[key] for key in person_keys]) </code></pre> <p>Now, to print this list nicely, in Python 3 you can do:</p> <pre><code>for row in result: for cell in row: # print each cell as at least 20 characters wide print('{:20}'.format(cell), end='') # empty line after each row print() </code></pre> <p>Result:</p> <pre><code>JobRole name TESTJOB TEST1 TESTJOB1 TEST2 TESTJOB2 TEST3 </code></pre> <hr> <p>Above, the ordering of columns is arbitrary. Sometimes you will get <code>JobRole</code> first, sometimes <code>name</code> first. If you want to have a fixed order with <code>name</code> first, then just use <code>person_keys = ['name', 'JobRole']</code>,</p>
2
2016-10-16T08:11:02Z
[ "python", "python-3.x" ]
HTTP Server with active TCP connections in python
40,019,418
<p>I am writing a pseudo-http application in python, the requirements for which are:</p> <ol> <li>It should handle HTTP requests.</li> <li>The connections between the client and the server outlive the request-response, i.e. the underlying TCP connection remains alive after a response has been sent to a client.</li> <li>The server needs to be able to send data to a particular client for which it already has an opened connection.</li> </ol> <p>I looked at twisted and python's TCPServer/BaseHTTPServer, but they don't quite fit the bill. The way I see it, I have two options:</p> <ol> <li>Start from a HTTP server implementation and override my way down to connection management.</li> <li>Have a simple socket server that will manage the connections and pass data between the "http" server and the client.</li> </ol> <p>Has anyone tackled a similar issue? Any ideas on other approaches or which one will be a better option?</p> <p>Thanks!</p> <p><strong>EDIT 1</strong> I cannot use HTTP 2 or web sockets; HTTP &lt;2 over TCP is a hard requirement.</p>
1
2016-10-13T11:12:33Z
40,063,132
<p>Since you cannot use websockets or http/2, and you require the ability to push data from the server to the client, then long-polling is probably the best option remaining.</p> <p>See Nevow, at <a href="https://github.com/twisted/nevow" rel="nofollow">https://github.com/twisted/nevow</a>, for one possible implementation of long-polling, via the athena module.</p>
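To show the shape of the technique, here is a hypothetical minimal long-polling sketch using only the stdlib (not the Nevow/Athena implementation the answer points to): each GET blocks until the server has data to push, and the HTTP/1.1 keep-alive connection is then reused for the client's next poll.

```python
import http.server
import queue
import threading

pending = queue.Queue()  # server -> client messages (single client, toy version)

class LongPollHandler(http.server.BaseHTTPRequestHandler):
    protocol_version = 'HTTP/1.1'  # keep the TCP connection open between polls

    def do_GET(self):
        # Hold the request open until the server has something to push.
        # A real server would also time out here and let the client re-poll.
        body = pending.get().encode()
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging in this sketch
```

A production version would key one queue per client and bound how long a request may hang, but the core trick (deferring the response until data is ready) is all there is to long-polling.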
1
2016-10-15T19:08:51Z
[ "python", "tcp", "twisted", "httpserver" ]
Python tkinter displaying images as movie stream
40,019,449
<p>I am trying to screen-grab and display the image quickly like a recording. It seems to all function well except the display window is "blinking" occasionally with a white frame. It doesn't appear to be every update or every other frame, but rather every 5 or so. Any thoughts on the cause?</p> <pre><code>from tkinter import * from PIL import Image, ImageGrab, ImageTk import threading from collections import deque from io import BytesIO class buildFrame: def __init__(self): self.root = Tk() self.land = Canvas(self.root, width=800, height=600) self.land.pack() self.genObj() self.thsObj = self.land.create_image(0,0, anchor='nw', image=self.imgObj) self.sStream = deque() self.spinning = True prQ = threading.Thread(target=self.procQ) prQ.start() t1 = threading.Thread(target=self.snapS, args=[100]) t1.start() def genObj(self): tmp = Image.new('RGBA', (800, 600), color=(0, 0, 0)) self.imgObj = ImageTk.PhotoImage(image=tmp) def procQ(self): while self.spinning == True: if self.sStream: self.land.itemconfig(self.thsObj, image=self.sStream[0]) self.sStream.popleft() def snapS(self, shtCount): quality_val = 70 for i in range(shtCount): mem_file = BytesIO() ImageGrab.grab().save(mem_file, format="JPEG", quality=quality_val) mem_file.seek(0) tmp = Image.open(mem_file) tmp.thumbnail([800, 600]) img = ImageTk.PhotoImage(tmp) self.sStream.append(img) mem_file.close() world = buildFrame() world.root.mainloop() </code></pre>
0
2016-10-13T11:14:26Z
40,021,002
<p>You should avoid making Tk calls on non GUI threads. This works much more smoothly if you get rid of the threads entirely and use <code>after</code> to schedule the image capture.</p> <pre><code>from tkinter import * from PIL import Image, ImageGrab, ImageTk from io import BytesIO class buildFrame: def __init__(self): self.root = Tk() self.land = Canvas(self.root, width=800, height=600) self.land.pack() tmp = Image.new('RGBA', (800, 600), color=(0, 0, 0)) self.imgObj = ImageTk.PhotoImage(image=tmp) self.thsObj = self.land.create_image(0,0, anchor='nw', image=self.imgObj) self.root.after("idle", self.snapS) def snapS(self): quality_val = 70 mem_file = BytesIO() ImageGrab.grab().save(mem_file, format="JPEG", quality=quality_val) mem_file.seek(0) tmp = Image.open(mem_file) tmp.thumbnail([800, 600]) self.image = ImageTk.PhotoImage(tmp) self.land.itemconfig(self.thsObj, image=self.image) mem_file.close() self.root.after(10, self.snapS) world = buildFrame() world.root.mainloop() </code></pre> <p>If you really want to use threads, you should queue the image stream and have the UI thread deserialize the tkinter image from the stream and display it. So one thread for capture and the main thread doing display.</p> <h2>EDIT</h2> <p>The following version keeps using a thread for the capture and passes the data via the deque but ensures that only the Tk UI thread operates on Tk objects. 
This needs some work to avoid accumulating images in the queue but a delay of 100ms between images works fine for now.</p> <pre><code>from tkinter import * from PIL import Image, ImageGrab, ImageTk import sys, threading, time from collections import deque from io import BytesIO class buildFrame: def __init__(self): self.root = Tk() self.root.wm_protocol("WM_DELETE_WINDOW", self.on_destroy) self.land = Canvas(self.root, width=800, height=600) self.land.pack() self.genObj() self.thsObj = self.land.create_image(0,0, anchor='nw', image=self.imgObj) self.sStream = deque() self.image_ready = threading.Event() self.spinning = True self.prQ = threading.Thread(target=self.procQ) self.prQ.start() self.t1 = threading.Thread(target=self.snapS, args=[100]) self.t1.start() def on_destroy(self): self.spinning = False self.root.after_cancel(self.afterid) self.prQ.join() self.t1.join() self.root.destroy() def genObj(self): tmp = Image.new('RGBA', (800, 600), color=(0, 0, 0)) self.imgObj = ImageTk.PhotoImage(image=tmp) def procQ(self): while self.spinning == True: if self.image_ready.wait(0.1): print(len(self.sStream)) self.image_ready.clear() self.afterid = self.root.after(1, self.show_image) def show_image(self): stream = self.sStream[0] self.sStream.popleft() tmp = Image.open(stream) tmp.thumbnail([800, 600]) self.image = ImageTk.PhotoImage(tmp) self.land.itemconfig(self.thsObj, image=self.image) stream.close() def snapS(self, shtCount): quality_val = 70 while self.spinning: mem_file = BytesIO() ImageGrab.grab().save(mem_file, format="JPEG", quality=quality_val) mem_file.seek(0) self.sStream.append(mem_file) self.image_ready.set() time.sleep(0.1) def main(): world = buildFrame() world.root.mainloop() return 0 if __name__ == '__main__': sys.exit(main()) </code></pre>
0
2016-10-13T12:28:39Z
[ "python", "tkinter", "pillow" ]
While loop only responding to second condition
40,019,501
<p>My while loop is only ending when it matches the second condition, the first one is being ignored, I have no idea what I am doing wrong</p> <pre><code>while (response !=0) or (cont != 5): response = os.system("ping -c 2 " + hostname) cont=cont+1 print cont print response </code></pre>
0
2016-10-13T11:17:06Z
40,019,668
<p>Change your <code>or</code> to <code>and</code>. With <code>or</code>, if the first condition is false but the second condition is true, the whole expression is still true - <code>or</code> only requires that at least one of the two conditions is true.</p> <pre><code>While ( false or true ) will be while ( true ) </code></pre> <p>To require both conditions you should use <code>and</code>, which needs both conditions to be <code>true</code> for the expression to be <code>true</code>.</p> <pre><code>while ( false and true ) will be while ( false ) while (response !=0) and (cont != 5): response = os.system("ping -c 2 " + hostname) cont=cont+1 print cont print response </code></pre>
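The difference can be checked directly with the loop's two conditions (values below chosen to mimic a ping that succeeded on the third attempt):

```python
response, cont = 0, 3  # ping succeeded (exit code 0) after 3 attempts

# `or` needs only ONE true operand, so the loop keeps spinning
# even though the ping already succeeded:
keep_looping_or = (response != 0) or (cont != 5)     # True

# `and` needs BOTH operands true, so the first success ends the loop:
keep_looping_and = (response != 0) and (cont != 5)   # False
```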
0
2016-10-13T11:25:23Z
[ "python" ]
While loop only responding to second condition
40,019,501
<p>My while loop is only ending when it matches the second condition, the first one is being ignored, I have no idea what I am doing wrong</p> <pre><code>while (response !=0) or (cont != 5): response = os.system("ping -c 2 " + hostname) cont=cont+1 print cont print response </code></pre>
0
2016-10-13T11:17:06Z
40,019,809
<p>With <code>subprocess.call</code>:</p> <pre><code>import subprocess for count in range(5): response = subprocess.call(["ping", "-c", "2", hostname]) if not response: break </code></pre> <p>Prefer iterating with <code>range</code> (or <code>xrange</code> in Python 2) over managing a counter by hand.</p>
0
2016-10-13T11:32:38Z
[ "python" ]
Play audio file when keyboard is pressed
40,019,533
<p>I have the following code in which I capture the user input and then I want to parse it and evaluate each character in string using ASCII code to play a certain .mp3 file: </p> <p>The problem is that this code works only for the first character. For example, if I have the input as <code>ab</code> I only hear the audio file for <code>a</code> and not <code>b</code></p> <pre><code>import os wrd=raw_input("Please write something: ") wrd=(str(wrd)).lower() wrd=list(wrd) i=0 print (wrd[0:len(wrd):1]) for wrd[i] in wrd: print wrd[i] if ord((wrd[i]))==97: os.system("start C:/Users/letters/a(1).mp3") i+=1 if ord((wrd[i]))==98: os.system("start C:/Users//letters/b(1).mp3") i+=1 </code></pre>
0
2016-10-13T11:18:55Z
40,020,420
<p>If the mp3 file name always follows the pattern "letter(1).mp3" you can do something like:</p> <pre><code>import os wrd = input("Please write something: ") wrd = wrd.lower() for char in wrd: try: os.system("start " + char + '(1).mp3') except ValueError: pass </code></pre> <p>otherwise you can use a dictionary and fill it with the right filename for each letter:</p> <pre><code>db = { 'a': 'C:/Users//letters/a(1).mp3', 'b': 'C:/Users//letters/b(1).mp3' # etc } ... try: os.system("start " + db[char]) except KeyError: pass # character with no matching file ... </code></pre>
0
2016-10-13T12:00:32Z
[ "python", "os.system" ]
Python regex alpha-numeric string with numeric part between two values
40,019,664
<p>I am terrible at regex in general, but I would be interested to know if there is a method to check if the numeric part of an alpha-numeric string is between two values, or less/greater than a certain value?</p> <p>For example if I have a string to search in a file which has multiple numeric variations like below:</p> <pre><code>key_string (870 bytes) key_string (1500 bytes) key_string (70 bytes) </code></pre> <p>Is it possible to extract the 'key_string' string only on whether the '(xxxx bytes)' part is between a certain threshold, or less/greater than a certain value?</p> <p>For example if I want to find all the above 'key_string' example where the second part is below 1200 bytes, can I print out:</p> <pre><code>key_string (870 bytes) key_string (70 bytes) </code></pre> <p>and ignore the string below in one regular expression? :</p> <pre><code>key_string (1500 bytes) </code></pre>
0
2016-10-13T11:25:10Z
40,019,874
<p>You can use <code>re.findall()</code> together with a regex to extract the byte count.</p> <p><strong>Explanation of the regex:</strong></p> <pre><code>key_string\s+\((\d+)\s+bytes\) </code></pre> <p><img src="https://www.debuggex.com/i/H8MpT5EfCmy2_MaX.png" alt="Regular expression visualization"></p> <p><a href="https://www.debuggex.com/r/H8MpT5EfCmy2_MaX" rel="nofollow">Debuggex Demo</a></p> <p><strong>Code:</strong></p> <pre><code>import re with open('result.txt') as fh: for l in fh: a = re.findall(r"key_string\s+\((\d+)\s+bytes\)",l.strip()) if len(a) &gt; 0 and int(a[0]) &lt; 1200: print (l) </code></pre> <p><strong>Output:</strong></p> <pre><code>C:\Users\dinesh_pundkar\Desktop&gt;python c.py key_string (870 bytes) key_string (70 bytes) C:\Users\dinesh_pundkar\Desktop&gt; </code></pre> <p><strong>Code 2 as suggested by <em>@WiktorStribiżew</em>:</strong></p> <pre><code>import re pattern = r'key_string\s+\((\d+)\s+bytes\)' regex = re.compile(pattern, re.IGNORECASE) with open('result.txt') as fh: for match in regex.finditer(fh.read()): if int(match.group(1)) &lt; 1200: print((match.group())) </code></pre>
1
2016-10-13T11:35:55Z
[ "python", "regex" ]
Methods for finding the Eigenvalues of a matrix using python?
40,019,734
<p>I am currently attempting to find the eigenvalues of a matrix H. I have tried using both numpy.linalg.eig and scipy.linalg.eig, although both apparently use the same underlying method.</p> <p>The problem is that my matrix H is purely real, and the eigenvalues have to be real and also positive.</p> <p>But the scipy and numpy methods return complex Eigenvalues, both positive and negative, which because they are complex and negative, cannot be correct. EDIT I know that the Eigenvalues must be real as the matrix represents a physical system where a complex eigenvalue would have no meaning \end EDIT</p> <p>Does anyone know of any other way that I can obtain the correct, purely real, Eigenvalues of a matrix in python?</p> <p>Thank you for your time! EDIT 3: Corrected H matrix gives purely real eigenvalues, so my imaginary problem disappears. Now I just need to figure out why the eigenvalues are too big, but that is another question!</p> <p>Many Thanks to all those who responded! </p> <p>Corrected H matrix is below for interest. Notice that now my problem is that the eigenvalues are too large. I expected values in the range 0-1. Not ~10^50!
</p> <p>CORRECTED H MATRIX EIGENVALUES:</p> <pre><code>[ -1.56079757e-02 -6.70247389e+59 -1.31298702e+56 -3.64404066e+52 -9.70803701e+48 -1.85917866e+45 -1.65895844e+41 -5.61503911e+39 -7.19768059e+36 -4.58657021e+32 -4.98763491e+28 -3.08561491e+27 -3.63383072e+25 -2.58033979e+25 -3.45930959e+23 -2.13272853e+18 -4.25175990e+21 -1.93387466e+22] </code></pre> <p>CORRECTED H MATRIX:</p> <pre><code>[[ -1.56079757e-02 -1.96247112e-02 -2.02799782e-02 -1.99695485e-02 -1.93678897e-02 -1.86944625e-02 -1.30222438e+04 -3.54051869e+05 -4.91571514e+06 -4.51159690e+07 -3.09207669e+08 -1.69913322e+09 -2.76231241e+15 -4.29262866e+17 -3.76558847e+19 -2.27013318e+21 -1.03308991e+23 -3.75607123e+24] [ -1.96247112e-02 -3.16659228e-02 -3.73018152e-02 -3.99083810e-02 -4.09801356e-02 -4.12397330e-02 -9.25855152e+03 -2.52585509e+05 -3.52145205e+06 -3.24749687e+07 -2.23781425e+08 -1.23712026e+09 -1.95621015e+15 -3.04176626e+17 -2.67015928e+19 -1.61101326e+21 -7.33788197e+22 -2.67049818e+24] [ -2.02799782e-02 -3.73018152e-02 -4.77923287e-02 -5.41249519e-02 -5.79464638e-02 -6.01988341e-02 -7.57318263e+03 -2.06839231e+05 -2.88760361e+06 -2.66717677e+07 -1.84121508e+08 -1.01989311e+09 -1.59803489e+15 -2.48531861e+17 -2.18219073e+19 -1.31694511e+21 -6.00020265e+22 -2.18437720e+24] [ -1.99695485e-02 -3.99083810e-02 -5.41249519e-02 -6.39296468e-02 -7.06496425e-02 -7.52593492e-02 -6.56444085e+03 -1.79388958e+05 -2.50607920e+06 -2.31660126e+07 -1.60063118e+08 -8.87505427e+08 -1.38428190e+15 -2.15309349e+17 -1.89070134e+19 -1.14117996e+21 -5.20014426e+22 -1.89341950e+24] [ -1.93678897e-02 -4.09801356e-02 -5.79464638e-02 -7.06496425e-02 -8.00703376e-02 -8.70367786e-02 -5.87456014e+03 -1.60590211e+05 -2.24436978e+06 -2.07565818e+07 -1.43492007e+08 -7.96094702e+08 -1.23832305e+15 -1.92618393e+17 -1.69155984e+19 -1.02106226e+21 -4.65319430e+22 -1.69443289e+24] [ -1.86944625e-02 -4.12397330e-02 -6.01988341e-02 -7.52593492e-02 -8.70367786e-02 -9.62124393e-02 -5.36462746e+03 -1.46683176e+05 -2.05056240e+06 
-1.89701536e+07 -1.31188910e+08 -7.28124191e+08 -1.13054072e+15 -1.75859951e+17 -1.54445848e+19 -9.32316767e+20 -4.24900807e+22 -1.54734986e+24] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -4.12478326e+18 -5.45644679e+19 -2.90876009e+20 -8.98307694e+20 -1.93571800e+21 -3.25655840e+21 -1.23009840e+30 -2.34880436e+32 -2.19696316e+34 -1.25767256e+36 -4.92737192e+37 -1.41676103e+39] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -5.45644679e+19 -1.18364260e+21 -9.55137274e+21 -4.18185914e+22 -1.20837111e+23 -2.59872572e+23 -4.88154308e+30 -1.23670123e+33 -1.52633071e+35 -1.14675488e+37 -5.86768809e+38 -2.19383952e+40] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -2.90876009e+20 -9.55137274e+21 -1.10203112e+23 -6.57480361e+23 -2.48279601e+24 -6.72600978e+24 -9.55655956e+30 -2.93055192e+33 -4.39290725e+35 -4.01427998e+37 -2.49882367e+39 -1.13605487e+41] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -8.98307694e+20 -4.18185914e+22 -6.57480361e+23 -5.15935422e+24 -2.48241611e+25 -8.32646595e+25 -1.32363927e+31 -4.60841402e+33 -7.90547208e+35 -8.31277603e+37 -5.97680290e+39 -3.14644094e+41] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -1.93571800e+21 -1.20837111e+23 -2.48279601e+24 -2.48241611e+25 -1.48582210e+26 -6.06263231e+26 -1.52615891e+31 -5.77316621e+33 -1.08616056e+36 -1.26177499e+38 -1.00789238e+40 -5.92030497e+41] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.25655840e+21 -2.59872572e+23 -6.72600978e+24 -8.32646595e+25 -6.06263231e+26 -2.95791640e+27 -1.59124215e+31 -6.34774909e+33 -1.27102306e+36 -1.58346849e+38 -1.36500133e+40 -8.69716304e+41] [ -2.76231241e+15 -1.95621015e+15 -1.59803489e+15 -1.38428190e+15 -1.23832305e+15 -1.13054072e+15 0.00000000e+00 0.00000000e+00 0.00000000e+00 
0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.91170589e+42 -3.71477021e+44 -1.55100113e+46 -3.65410576e+47 -5.53824601e+48 -5.87586247e+49] [ -4.29262866e+17 -3.04176626e+17 -2.48531861e+17 -2.15309349e+17 -1.92618393e+17 -1.75859951e+17 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.71477021e+44 -5.04566596e+46 -2.92377802e+48 -9.33903419e+49 -1.88272072e+51 -2.61414916e+52] [ -3.76558847e+19 -2.67015928e+19 -2.18219073e+19 -1.89070134e+19 -1.69155984e+19 -1.54445848e+19 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -1.55100113e+46 -2.92377802e+48 -2.28558880e+50 -9.63567387e+51 -2.51888438e+53 -4.46829479e+54] [ -2.27013318e+21 -1.61101326e+21 -1.31694511e+21 -1.14117996e+21 -1.02106226e+21 -9.32316767e+20 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.65410576e+47 -9.33903419e+49 -9.63567387e+51 -5.25195965e+53 -1.74576666e+55 -3.88366439e+56] [ -1.03308991e+23 -7.33788197e+22 -6.00020265e+22 -5.20014426e+22 -4.65319430e+22 -4.24900807e+22 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -5.53824601e+48 -1.88272072e+51 -2.51888438e+53 -1.74576666e+55 -7.26381158e+56 -1.99648815e+58] [ -3.75607123e+24 -2.67049818e+24 -2.18437720e+24 -1.89341950e+24 -1.69443289e+24 -1.54734986e+24 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -5.87586247e+49 -2.61414916e+52 -4.46829479e+54 -3.88366439e+56 -1.99648815e+58 -6.69651817e+59]] </code></pre> <p>I have left the prior incorrect H matrix so that already existing answers make sense to any future readers.</p> <p>EDIT 2: old H matrix that is definitely not right.</p> <pre><code>[[ 9.84292024e+03 -8.31470427e+03 1.28883548e+04 -1.42234052e+03 6.39075781e+03 1.68134522e+03 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -5.93837816e+16 6.38322749e+16 -6.85843186e+16 
5.75338966e+16 -4.88603241e+16 3.50805052e+16] [ -8.31470427e+03 1.16557521e+05 -3.57981876e+05 7.96363898e+05 -1.49026732e+06 2.53900589e+06 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 8.06918956e+18 -3.72079304e+19 1.23800418e+20 -3.42505937e+20 8.36989008e+20 -1.86726751e+21] [ 1.28883548e+04 -3.57981876e+05 3.15391321e+06 -1.63653726e+07 6.55556033e+07 -2.25027001e+08 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -2.29647856e+20 2.23060743e+21 -1.47751020e+22 7.86504336e+22 -3.61027130e+23 1.48623808e+24] [ -1.42234052e+03 7.96363898e+05 -1.63653726e+07 1.68187967e+08 -1.22007429e+09 7.18684022e+09 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 3.46309077e+21 -5.84715936e+22 6.46859079e+23 -5.59189865e+24 4.08308120e+25 -2.63166392e+26] [ 6.39075781e+03 -1.49026732e+06 6.55556033e+07 -1.22007429e+09 1.47164022e+10 -1.36810088e+11 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.69586675e+22 9.89735934e+23 -1.67505077e+25 2.15859810e+26 -2.30442140e+27 2.13906412e+28] [ 1.68134522e+03 2.53900589e+06 -2.25027001e+08 7.18684022e+09 -1.36810088e+11 1.90724566e+12 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 3.17341100e+23 -1.27716724e+25 3.13774985e+26 -5.72503211e+27 8.49214835e+28 -1.07936569e+30] [ -2.62366128e+07 3.12867102e+08 -2.07586348e+09 9.55718390e+09 -3.58688215e+10 1.18206299e+11 -3.72545099e+19 3.55377485e+20 -2.19797302e+21 1.06820421e+22 -4.43482421e+22 1.64613799e+23 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 6.21899934e+06 -1.35300269e+09 2.25199661e+10 -2.08147670e+11 1.41978312e+12 -8.03720030e+12 3.55377485e+20 -6.92933885e+21 7.86285194e+22 -6.60223225e+23 4.55617308e+24 -2.73627888e+25 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] 
[ -2.10320449e+07 4.11734924e+09 -1.49402973e+11 2.51974540e+12 -2.86573004e+13 2.56306446e+14 -2.19797302e+21 7.86285194e+22 -1.49349605e+24 1.98682041e+25 -2.09455082e+26 1.87262719e+27 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 2.94574146e+06 -1.02367345e+10 7.58502833e+11 -2.20591701e+13 3.96780330e+14 -5.32688366e+15 1.06820421e+22 -6.60223225e+23 1.98682041e+25 -3.97295506e+26 6.07807050e+27 -7.69005025e+28 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ -1.34143013e+07 2.23461195e+10 -3.24369808e+12 1.56693459e+14 -4.30207139e+15 8.37537551e+16 -4.43482421e+22 4.55617308e+24 -2.09455082e+26 6.07807050e+27 -1.30367761e+29 2.25611610e+30 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ -1.29818367e+06 -4.45010813e+10 1.22995908e+13 -9.61118583e+14 3.92675368e+16 -1.08141277e+18 1.64613799e+23 -2.73627888e+25 1.87262719e+27 -7.69005025e+28 2.25611610e+30 -5.21172115e+31 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ -5.93837816e+16 8.06918956e+18 -2.29647856e+20 3.46309077e+21 -3.69586675e+22 3.17341100e+23 -4.12843622e+30 1.31023908e+32 -2.32666430e+33 2.96546578e+34 -3.03538906e+35 2.65606764e+36 -3.91170589e+42 9.60158418e+43 -1.53011091e+45 1.88717441e+46 -1.94862667e+47 1.76215269e+48] [ 6.38322749e+16 -3.72079304e+19 2.23060743e+21 -5.84715936e+22 9.89735934e+23 -1.27716724e+25 3.81005492e+31 -2.11434722e+33 5.97475670e+34 -1.14689052e+36 1.70519806e+37 -2.11260012e+38 9.60158418e+43 -3.55965236e+45 8.25594136e+46 -1.44373334e+48 2.07304224e+49 -2.56816531e+50] [ -6.85843186e+16 1.23800418e+20 -1.47751020e+22 6.46859079e+23 -1.67505077e+25 3.13774985e+26 -2.44128687e+32 2.26787992e+34 -9.81982973e+35 2.73945925e+37 -5.71489855e+38 9.68877560e+39 -1.53011091e+45 8.25594136e+46 -2.68951294e+48 6.44111112e+49 -1.24291248e+51 2.03914940e+52] [ 5.75338966e+16 -3.42505937e+20 
7.86504336e+22 -5.59189865e+24 2.15859810e+26 -5.72503211e+27 1.26953404e+33 -1.91719765e+35 1.23852494e+37 -4.89510985e+38 1.39768692e+40 -3.16418412e+41 1.88717441e+46 -1.44373334e+48 6.44111112e+49 -2.06086355e+51 5.21505047e+52 -1.10591734e+54] [ -4.88603241e+16 8.36989008e+20 -3.61027130e+23 4.08308120e+25 -2.30442140e+27 8.49214835e+28 -5.72744451e+33 1.37649297e+36 -1.30066841e+38 7.14559647e+39 -2.74085484e+41 8.13459136e+42 -1.94862667e+47 2.07304224e+49 -1.24291248e+51 5.21505047e+52 -1.69908728e+54 4.57315759e+55] [ 3.50805052e+16 -1.86726751e+21 1.48623808e+24 -2.63166392e+26 2.13906412e+28 -1.07936569e+30 2.32655148e+34 -8.75599541e+36 1.19185355e+39 -8.96773338e+40 4.55433688e+42 -1.74680817e+44 1.76215269e+48 -2.56816531e+50 2.03914940e+52 -1.10591734e+54 4.57315759e+55 -1.54023048e+57]] </code></pre>
1
2016-10-13T11:28:08Z
40,020,062
<p>The standard method is to use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html" rel="nofollow">numpy.linalg.eig</a>.</p> <pre><code>import numpy as np from numpy import linalg as LA w, v = LA.eig(np.diag((1, 2, 3))) # w: # array([ 1., 2., 3.]) # v: # array([[ 1., 0., 0.], # [ 0., 1., 0.], # [ 0., 0., 1.]]) </code></pre> <p>Obviously, if your matrix is very specific (say, very large and sparse), you usually want to use an iterative approach (e.g. a <em>Krylov subspace method</em>) to find the most dominant eigenvalues. See <a href="http://docs.scipy.org/doc/scipy/reference/tutorial/arpack.html" rel="nofollow">this discussion</a> for more details.</p>
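If the matrix is known to be real symmetric (as a physical Hamiltonian usually is), `numpy.linalg.eigh`/`eigvalsh` is the better tool: it assumes symmetry and returns guaranteed-real eigenvalues in ascending order. Comparing `H` with `H.T` first is also a quick way to see why plain `eig` produced complex values. A sketch on a toy 2x2 matrix:

```python
import numpy as np

H = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # toy symmetric matrix, eigenvalues 1 and 3

assert np.allclose(H, H.T)   # eigh silently assumes symmetry; verify it first
w = np.linalg.eigvalsh(H)    # real eigenvalues, ascending order
```

If this symmetry check fails on a matrix that should be Hermitian, the matrix construction (not the eigensolver) is where to look, which is consistent with EDIT 3 in the question.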
2
2016-10-13T11:45:20Z
[ "python", "numpy", "scipy", "eigenvalue" ]
Methods for finding the Eigenvalues of a matrix using python?
40,019,734
<p>I am currently attempting to find the eigenvalues of a matrix H. I have tried using both numpy.linalg.eig and scipy.linalg.eig, although both apparently use the same underlying method.</p> <p>The problem is that my matrix H is purely real, and the eigenvalues have to be real and also positive.</p> <p>But the scipy and numpy methods return complex Eigenvalues, both positive and negative, which because they are complex and negative, cannot be correct. EDIT I know that the Eigenvalues must be real as the matrix represents a physical system where a complex eigenvalue would have no meaning \end EDIT</p> <p>Does anyone know of any other way that I can obtain the correct, purely real, Eigenvalues of a matrix in python?</p> <p>Thank you for your time! EDIT 3: Corrected H matrix gives purely real eigenvalues, so my imaginary problem disappears. Now I just need to figure out why the eigenvalues are too big, but that is another question!</p> <p>Many Thanks to all those who responded! </p> <p>Corrected H matrix is below for interest. Notice that now my problem is that the eigenvalues are too large. I expected values in the range 0-1. Not ~10^50!
</p> <p>CORRECTED H MATRIX EIGENVALUES:</p> <pre><code>[ -1.56079757e-02 -6.70247389e+59 -1.31298702e+56 -3.64404066e+52 -9.70803701e+48 -1.85917866e+45 -1.65895844e+41 -5.61503911e+39 -7.19768059e+36 -4.58657021e+32 -4.98763491e+28 -3.08561491e+27 -3.63383072e+25 -2.58033979e+25 -3.45930959e+23 -2.13272853e+18 -4.25175990e+21 -1.93387466e+22] </code></pre> <p>CORRECTED H MATRIX:</p> <pre><code>[[ -1.56079757e-02 -1.96247112e-02 -2.02799782e-02 -1.99695485e-02 -1.93678897e-02 -1.86944625e-02 -1.30222438e+04 -3.54051869e+05 -4.91571514e+06 -4.51159690e+07 -3.09207669e+08 -1.69913322e+09 -2.76231241e+15 -4.29262866e+17 -3.76558847e+19 -2.27013318e+21 -1.03308991e+23 -3.75607123e+24] [ -1.96247112e-02 -3.16659228e-02 -3.73018152e-02 -3.99083810e-02 -4.09801356e-02 -4.12397330e-02 -9.25855152e+03 -2.52585509e+05 -3.52145205e+06 -3.24749687e+07 -2.23781425e+08 -1.23712026e+09 -1.95621015e+15 -3.04176626e+17 -2.67015928e+19 -1.61101326e+21 -7.33788197e+22 -2.67049818e+24] [ -2.02799782e-02 -3.73018152e-02 -4.77923287e-02 -5.41249519e-02 -5.79464638e-02 -6.01988341e-02 -7.57318263e+03 -2.06839231e+05 -2.88760361e+06 -2.66717677e+07 -1.84121508e+08 -1.01989311e+09 -1.59803489e+15 -2.48531861e+17 -2.18219073e+19 -1.31694511e+21 -6.00020265e+22 -2.18437720e+24] [ -1.99695485e-02 -3.99083810e-02 -5.41249519e-02 -6.39296468e-02 -7.06496425e-02 -7.52593492e-02 -6.56444085e+03 -1.79388958e+05 -2.50607920e+06 -2.31660126e+07 -1.60063118e+08 -8.87505427e+08 -1.38428190e+15 -2.15309349e+17 -1.89070134e+19 -1.14117996e+21 -5.20014426e+22 -1.89341950e+24] [ -1.93678897e-02 -4.09801356e-02 -5.79464638e-02 -7.06496425e-02 -8.00703376e-02 -8.70367786e-02 -5.87456014e+03 -1.60590211e+05 -2.24436978e+06 -2.07565818e+07 -1.43492007e+08 -7.96094702e+08 -1.23832305e+15 -1.92618393e+17 -1.69155984e+19 -1.02106226e+21 -4.65319430e+22 -1.69443289e+24] [ -1.86944625e-02 -4.12397330e-02 -6.01988341e-02 -7.52593492e-02 -8.70367786e-02 -9.62124393e-02 -5.36462746e+03 -1.46683176e+05 -2.05056240e+06 
-1.89701536e+07 -1.31188910e+08 -7.28124191e+08 -1.13054072e+15 -1.75859951e+17 -1.54445848e+19 -9.32316767e+20 -4.24900807e+22 -1.54734986e+24] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -4.12478326e+18 -5.45644679e+19 -2.90876009e+20 -8.98307694e+20 -1.93571800e+21 -3.25655840e+21 -1.23009840e+30 -2.34880436e+32 -2.19696316e+34 -1.25767256e+36 -4.92737192e+37 -1.41676103e+39] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -5.45644679e+19 -1.18364260e+21 -9.55137274e+21 -4.18185914e+22 -1.20837111e+23 -2.59872572e+23 -4.88154308e+30 -1.23670123e+33 -1.52633071e+35 -1.14675488e+37 -5.86768809e+38 -2.19383952e+40] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -2.90876009e+20 -9.55137274e+21 -1.10203112e+23 -6.57480361e+23 -2.48279601e+24 -6.72600978e+24 -9.55655956e+30 -2.93055192e+33 -4.39290725e+35 -4.01427998e+37 -2.49882367e+39 -1.13605487e+41] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -8.98307694e+20 -4.18185914e+22 -6.57480361e+23 -5.15935422e+24 -2.48241611e+25 -8.32646595e+25 -1.32363927e+31 -4.60841402e+33 -7.90547208e+35 -8.31277603e+37 -5.97680290e+39 -3.14644094e+41] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -1.93571800e+21 -1.20837111e+23 -2.48279601e+24 -2.48241611e+25 -1.48582210e+26 -6.06263231e+26 -1.52615891e+31 -5.77316621e+33 -1.08616056e+36 -1.26177499e+38 -1.00789238e+40 -5.92030497e+41] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.25655840e+21 -2.59872572e+23 -6.72600978e+24 -8.32646595e+25 -6.06263231e+26 -2.95791640e+27 -1.59124215e+31 -6.34774909e+33 -1.27102306e+36 -1.58346849e+38 -1.36500133e+40 -8.69716304e+41] [ -2.76231241e+15 -1.95621015e+15 -1.59803489e+15 -1.38428190e+15 -1.23832305e+15 -1.13054072e+15 0.00000000e+00 0.00000000e+00 0.00000000e+00 
0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.91170589e+42 -3.71477021e+44 -1.55100113e+46 -3.65410576e+47 -5.53824601e+48 -5.87586247e+49] [ -4.29262866e+17 -3.04176626e+17 -2.48531861e+17 -2.15309349e+17 -1.92618393e+17 -1.75859951e+17 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.71477021e+44 -5.04566596e+46 -2.92377802e+48 -9.33903419e+49 -1.88272072e+51 -2.61414916e+52] [ -3.76558847e+19 -2.67015928e+19 -2.18219073e+19 -1.89070134e+19 -1.69155984e+19 -1.54445848e+19 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -1.55100113e+46 -2.92377802e+48 -2.28558880e+50 -9.63567387e+51 -2.51888438e+53 -4.46829479e+54] [ -2.27013318e+21 -1.61101326e+21 -1.31694511e+21 -1.14117996e+21 -1.02106226e+21 -9.32316767e+20 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.65410576e+47 -9.33903419e+49 -9.63567387e+51 -5.25195965e+53 -1.74576666e+55 -3.88366439e+56] [ -1.03308991e+23 -7.33788197e+22 -6.00020265e+22 -5.20014426e+22 -4.65319430e+22 -4.24900807e+22 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -5.53824601e+48 -1.88272072e+51 -2.51888438e+53 -1.74576666e+55 -7.26381158e+56 -1.99648815e+58] [ -3.75607123e+24 -2.67049818e+24 -2.18437720e+24 -1.89341950e+24 -1.69443289e+24 -1.54734986e+24 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -5.87586247e+49 -2.61414916e+52 -4.46829479e+54 -3.88366439e+56 -1.99648815e+58 -6.69651817e+59]] </code></pre> <p>I have left the prior incorrect H matrix so that already existing answers make sense to any future readers.</p> <p>EDIT 2: old H matrix that is definitely not right.</p> <pre><code>[[ 9.84292024e+03 -8.31470427e+03 1.28883548e+04 -1.42234052e+03 6.39075781e+03 1.68134522e+03 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -5.93837816e+16 6.38322749e+16 -6.85843186e+16 
5.75338966e+16 -4.88603241e+16 3.50805052e+16] [ -8.31470427e+03 1.16557521e+05 -3.57981876e+05 7.96363898e+05 -1.49026732e+06 2.53900589e+06 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 8.06918956e+18 -3.72079304e+19 1.23800418e+20 -3.42505937e+20 8.36989008e+20 -1.86726751e+21] [ 1.28883548e+04 -3.57981876e+05 3.15391321e+06 -1.63653726e+07 6.55556033e+07 -2.25027001e+08 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -2.29647856e+20 2.23060743e+21 -1.47751020e+22 7.86504336e+22 -3.61027130e+23 1.48623808e+24] [ -1.42234052e+03 7.96363898e+05 -1.63653726e+07 1.68187967e+08 -1.22007429e+09 7.18684022e+09 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 3.46309077e+21 -5.84715936e+22 6.46859079e+23 -5.59189865e+24 4.08308120e+25 -2.63166392e+26] [ 6.39075781e+03 -1.49026732e+06 6.55556033e+07 -1.22007429e+09 1.47164022e+10 -1.36810088e+11 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.69586675e+22 9.89735934e+23 -1.67505077e+25 2.15859810e+26 -2.30442140e+27 2.13906412e+28] [ 1.68134522e+03 2.53900589e+06 -2.25027001e+08 7.18684022e+09 -1.36810088e+11 1.90724566e+12 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 3.17341100e+23 -1.27716724e+25 3.13774985e+26 -5.72503211e+27 8.49214835e+28 -1.07936569e+30] [ -2.62366128e+07 3.12867102e+08 -2.07586348e+09 9.55718390e+09 -3.58688215e+10 1.18206299e+11 -3.72545099e+19 3.55377485e+20 -2.19797302e+21 1.06820421e+22 -4.43482421e+22 1.64613799e+23 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 6.21899934e+06 -1.35300269e+09 2.25199661e+10 -2.08147670e+11 1.41978312e+12 -8.03720030e+12 3.55377485e+20 -6.92933885e+21 7.86285194e+22 -6.60223225e+23 4.55617308e+24 -2.73627888e+25 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] 
[ -2.10320449e+07 4.11734924e+09 -1.49402973e+11 2.51974540e+12 -2.86573004e+13 2.56306446e+14 -2.19797302e+21 7.86285194e+22 -1.49349605e+24 1.98682041e+25 -2.09455082e+26 1.87262719e+27 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 2.94574146e+06 -1.02367345e+10 7.58502833e+11 -2.20591701e+13 3.96780330e+14 -5.32688366e+15 1.06820421e+22 -6.60223225e+23 1.98682041e+25 -3.97295506e+26 6.07807050e+27 -7.69005025e+28 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ -1.34143013e+07 2.23461195e+10 -3.24369808e+12 1.56693459e+14 -4.30207139e+15 8.37537551e+16 -4.43482421e+22 4.55617308e+24 -2.09455082e+26 6.07807050e+27 -1.30367761e+29 2.25611610e+30 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ -1.29818367e+06 -4.45010813e+10 1.22995908e+13 -9.61118583e+14 3.92675368e+16 -1.08141277e+18 1.64613799e+23 -2.73627888e+25 1.87262719e+27 -7.69005025e+28 2.25611610e+30 -5.21172115e+31 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ -5.93837816e+16 8.06918956e+18 -2.29647856e+20 3.46309077e+21 -3.69586675e+22 3.17341100e+23 -4.12843622e+30 1.31023908e+32 -2.32666430e+33 2.96546578e+34 -3.03538906e+35 2.65606764e+36 -3.91170589e+42 9.60158418e+43 -1.53011091e+45 1.88717441e+46 -1.94862667e+47 1.76215269e+48] [ 6.38322749e+16 -3.72079304e+19 2.23060743e+21 -5.84715936e+22 9.89735934e+23 -1.27716724e+25 3.81005492e+31 -2.11434722e+33 5.97475670e+34 -1.14689052e+36 1.70519806e+37 -2.11260012e+38 9.60158418e+43 -3.55965236e+45 8.25594136e+46 -1.44373334e+48 2.07304224e+49 -2.56816531e+50] [ -6.85843186e+16 1.23800418e+20 -1.47751020e+22 6.46859079e+23 -1.67505077e+25 3.13774985e+26 -2.44128687e+32 2.26787992e+34 -9.81982973e+35 2.73945925e+37 -5.71489855e+38 9.68877560e+39 -1.53011091e+45 8.25594136e+46 -2.68951294e+48 6.44111112e+49 -1.24291248e+51 2.03914940e+52] [ 5.75338966e+16 -3.42505937e+20 
7.86504336e+22 -5.59189865e+24 2.15859810e+26 -5.72503211e+27 1.26953404e+33 -1.91719765e+35 1.23852494e+37 -4.89510985e+38 1.39768692e+40 -3.16418412e+41 1.88717441e+46 -1.44373334e+48 6.44111112e+49 -2.06086355e+51 5.21505047e+52 -1.10591734e+54] [ -4.88603241e+16 8.36989008e+20 -3.61027130e+23 4.08308120e+25 -2.30442140e+27 8.49214835e+28 -5.72744451e+33 1.37649297e+36 -1.30066841e+38 7.14559647e+39 -2.74085484e+41 8.13459136e+42 -1.94862667e+47 2.07304224e+49 -1.24291248e+51 5.21505047e+52 -1.69908728e+54 4.57315759e+55] [ 3.50805052e+16 -1.86726751e+21 1.48623808e+24 -2.63166392e+26 2.13906412e+28 -1.07936569e+30 2.32655148e+34 -8.75599541e+36 1.19185355e+39 -8.96773338e+40 4.55433688e+42 -1.74680817e+44 1.76215269e+48 -2.56816531e+50 2.03914940e+52 -1.10591734e+54 4.57315759e+55 -1.54023048e+57]] </code></pre>
1
2016-10-13T11:28:08Z
40,020,171
<p>Have you tried with <a href="http://eigen.tuxfamily.org" rel="nofollow">libeigen</a>? There's a nice Python wrapper for it, called <a href="https://pypi.python.org/pypi/minieigen" rel="nofollow">minieigen</a>.</p> <pre><code>#!/usr/bin/env python3
import minieigen as eigen

M = eigen.Matrix3(1, 0, 0,
                  0, 2, 0,
                  0, 0, 3)
print(M.spectralDecomposition())
</code></pre>
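A note worth adding here (my own addition, not part of the original answer): if the matrix in question is real and symmetric — as a physical Hamiltonian should be — NumPy's `numpy.linalg.eigh` already guarantees real eigenvalues, unlike the general-purpose `eig`. A minimal sketch:

```python
import numpy as np

# eigh is for symmetric/Hermitian matrices and returns real eigenvalues
# in ascending order; the general-purpose eig would return a complex
# array, with tiny imaginary parts arising from rounding error.
H = np.diag([1.0, 2.0, 3.0])  # trivially symmetric stand-in for the real H
eigenvalues, eigenvectors = np.linalg.eigh(H)
print(eigenvalues)  # [1. 2. 3.]
```

Caveat: if `eigh` is handed a matrix that is not actually symmetric, it silently uses only one triangle of it, so it is worth checking `np.allclose(H, H.T)` first.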
1
2016-10-13T11:49:35Z
[ "python", "numpy", "scipy", "eigenvalue" ]
Methods for finding the Eigenvalues of a matrix using python?
40,019,734
<p>I am currently attempting to find the eigenvalues of a matrix H. I have tried using both numpy.linalg.eig and scipy.linalg.eig, although both apparently use the same underlying method.</p> <p>The problem is that my matrix H is purely real, and the eigenvalues have to be real and also positive.</p> <p>But the scipy and numpy methods return complex eigenvalues, both positive and negative, which, being complex and negative, cannot be correct. EDIT: I know that the eigenvalues must be real, as the matrix represents a physical system where a complex eigenvalue would have no meaning. \end EDIT</p> <p>Does anyone know of any other way that I can obtain the correct, purely real, eigenvalues of a matrix in Python?</p> <p>Thank you for your time! EDIT 3: The corrected H matrix gives purely real eigenvalues, so my imaginary problem disappears. Now I just need to figure out why the eigenvalues are too big, but that is another question!</p> <p>Many thanks to all those who responded! </p> <p>The corrected H matrix is below for interest. Notice that my problem now is that the eigenvalues are too large. I expected values in the range 0-1, not ~10^50! 
</p> <p>CORRECTED H MATRIX EIGENVALUES:</p> <pre><code>[ -1.56079757e-02 -6.70247389e+59 -1.31298702e+56 -3.64404066e+52 -9.70803701e+48 -1.85917866e+45 -1.65895844e+41 -5.61503911e+39 -7.19768059e+36 -4.58657021e+32 -4.98763491e+28 -3.08561491e+27 -3.63383072e+25 -2.58033979e+25 -3.45930959e+23 -2.13272853e+18 -4.25175990e+21 -1.93387466e+22] </code></pre> <p>CORRECTED H MATRIX:</p> <pre><code>[[ -1.56079757e-02 -1.96247112e-02 -2.02799782e-02 -1.99695485e-02 -1.93678897e-02 -1.86944625e-02 -1.30222438e+04 -3.54051869e+05 -4.91571514e+06 -4.51159690e+07 -3.09207669e+08 -1.69913322e+09 -2.76231241e+15 -4.29262866e+17 -3.76558847e+19 -2.27013318e+21 -1.03308991e+23 -3.75607123e+24] [ -1.96247112e-02 -3.16659228e-02 -3.73018152e-02 -3.99083810e-02 -4.09801356e-02 -4.12397330e-02 -9.25855152e+03 -2.52585509e+05 -3.52145205e+06 -3.24749687e+07 -2.23781425e+08 -1.23712026e+09 -1.95621015e+15 -3.04176626e+17 -2.67015928e+19 -1.61101326e+21 -7.33788197e+22 -2.67049818e+24] [ -2.02799782e-02 -3.73018152e-02 -4.77923287e-02 -5.41249519e-02 -5.79464638e-02 -6.01988341e-02 -7.57318263e+03 -2.06839231e+05 -2.88760361e+06 -2.66717677e+07 -1.84121508e+08 -1.01989311e+09 -1.59803489e+15 -2.48531861e+17 -2.18219073e+19 -1.31694511e+21 -6.00020265e+22 -2.18437720e+24] [ -1.99695485e-02 -3.99083810e-02 -5.41249519e-02 -6.39296468e-02 -7.06496425e-02 -7.52593492e-02 -6.56444085e+03 -1.79388958e+05 -2.50607920e+06 -2.31660126e+07 -1.60063118e+08 -8.87505427e+08 -1.38428190e+15 -2.15309349e+17 -1.89070134e+19 -1.14117996e+21 -5.20014426e+22 -1.89341950e+24] [ -1.93678897e-02 -4.09801356e-02 -5.79464638e-02 -7.06496425e-02 -8.00703376e-02 -8.70367786e-02 -5.87456014e+03 -1.60590211e+05 -2.24436978e+06 -2.07565818e+07 -1.43492007e+08 -7.96094702e+08 -1.23832305e+15 -1.92618393e+17 -1.69155984e+19 -1.02106226e+21 -4.65319430e+22 -1.69443289e+24] [ -1.86944625e-02 -4.12397330e-02 -6.01988341e-02 -7.52593492e-02 -8.70367786e-02 -9.62124393e-02 -5.36462746e+03 -1.46683176e+05 -2.05056240e+06 
-1.89701536e+07 -1.31188910e+08 -7.28124191e+08 -1.13054072e+15 -1.75859951e+17 -1.54445848e+19 -9.32316767e+20 -4.24900807e+22 -1.54734986e+24] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -4.12478326e+18 -5.45644679e+19 -2.90876009e+20 -8.98307694e+20 -1.93571800e+21 -3.25655840e+21 -1.23009840e+30 -2.34880436e+32 -2.19696316e+34 -1.25767256e+36 -4.92737192e+37 -1.41676103e+39] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -5.45644679e+19 -1.18364260e+21 -9.55137274e+21 -4.18185914e+22 -1.20837111e+23 -2.59872572e+23 -4.88154308e+30 -1.23670123e+33 -1.52633071e+35 -1.14675488e+37 -5.86768809e+38 -2.19383952e+40] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -2.90876009e+20 -9.55137274e+21 -1.10203112e+23 -6.57480361e+23 -2.48279601e+24 -6.72600978e+24 -9.55655956e+30 -2.93055192e+33 -4.39290725e+35 -4.01427998e+37 -2.49882367e+39 -1.13605487e+41] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -8.98307694e+20 -4.18185914e+22 -6.57480361e+23 -5.15935422e+24 -2.48241611e+25 -8.32646595e+25 -1.32363927e+31 -4.60841402e+33 -7.90547208e+35 -8.31277603e+37 -5.97680290e+39 -3.14644094e+41] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -1.93571800e+21 -1.20837111e+23 -2.48279601e+24 -2.48241611e+25 -1.48582210e+26 -6.06263231e+26 -1.52615891e+31 -5.77316621e+33 -1.08616056e+36 -1.26177499e+38 -1.00789238e+40 -5.92030497e+41] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.25655840e+21 -2.59872572e+23 -6.72600978e+24 -8.32646595e+25 -6.06263231e+26 -2.95791640e+27 -1.59124215e+31 -6.34774909e+33 -1.27102306e+36 -1.58346849e+38 -1.36500133e+40 -8.69716304e+41] [ -2.76231241e+15 -1.95621015e+15 -1.59803489e+15 -1.38428190e+15 -1.23832305e+15 -1.13054072e+15 0.00000000e+00 0.00000000e+00 0.00000000e+00 
0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.91170589e+42 -3.71477021e+44 -1.55100113e+46 -3.65410576e+47 -5.53824601e+48 -5.87586247e+49] [ -4.29262866e+17 -3.04176626e+17 -2.48531861e+17 -2.15309349e+17 -1.92618393e+17 -1.75859951e+17 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.71477021e+44 -5.04566596e+46 -2.92377802e+48 -9.33903419e+49 -1.88272072e+51 -2.61414916e+52] [ -3.76558847e+19 -2.67015928e+19 -2.18219073e+19 -1.89070134e+19 -1.69155984e+19 -1.54445848e+19 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -1.55100113e+46 -2.92377802e+48 -2.28558880e+50 -9.63567387e+51 -2.51888438e+53 -4.46829479e+54] [ -2.27013318e+21 -1.61101326e+21 -1.31694511e+21 -1.14117996e+21 -1.02106226e+21 -9.32316767e+20 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.65410576e+47 -9.33903419e+49 -9.63567387e+51 -5.25195965e+53 -1.74576666e+55 -3.88366439e+56] [ -1.03308991e+23 -7.33788197e+22 -6.00020265e+22 -5.20014426e+22 -4.65319430e+22 -4.24900807e+22 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -5.53824601e+48 -1.88272072e+51 -2.51888438e+53 -1.74576666e+55 -7.26381158e+56 -1.99648815e+58] [ -3.75607123e+24 -2.67049818e+24 -2.18437720e+24 -1.89341950e+24 -1.69443289e+24 -1.54734986e+24 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -5.87586247e+49 -2.61414916e+52 -4.46829479e+54 -3.88366439e+56 -1.99648815e+58 -6.69651817e+59]] </code></pre> <p>I have left the prior incorrect H matrix so that already existing answers make sense to any future readers.</p> <p>EDIT 2: old H matrix that is definitely not right.</p> <pre><code>[[ 9.84292024e+03 -8.31470427e+03 1.28883548e+04 -1.42234052e+03 6.39075781e+03 1.68134522e+03 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -5.93837816e+16 6.38322749e+16 -6.85843186e+16 
5.75338966e+16 -4.88603241e+16 3.50805052e+16] [ -8.31470427e+03 1.16557521e+05 -3.57981876e+05 7.96363898e+05 -1.49026732e+06 2.53900589e+06 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 8.06918956e+18 -3.72079304e+19 1.23800418e+20 -3.42505937e+20 8.36989008e+20 -1.86726751e+21] [ 1.28883548e+04 -3.57981876e+05 3.15391321e+06 -1.63653726e+07 6.55556033e+07 -2.25027001e+08 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -2.29647856e+20 2.23060743e+21 -1.47751020e+22 7.86504336e+22 -3.61027130e+23 1.48623808e+24] [ -1.42234052e+03 7.96363898e+05 -1.63653726e+07 1.68187967e+08 -1.22007429e+09 7.18684022e+09 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 3.46309077e+21 -5.84715936e+22 6.46859079e+23 -5.59189865e+24 4.08308120e+25 -2.63166392e+26] [ 6.39075781e+03 -1.49026732e+06 6.55556033e+07 -1.22007429e+09 1.47164022e+10 -1.36810088e+11 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.69586675e+22 9.89735934e+23 -1.67505077e+25 2.15859810e+26 -2.30442140e+27 2.13906412e+28] [ 1.68134522e+03 2.53900589e+06 -2.25027001e+08 7.18684022e+09 -1.36810088e+11 1.90724566e+12 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 3.17341100e+23 -1.27716724e+25 3.13774985e+26 -5.72503211e+27 8.49214835e+28 -1.07936569e+30] [ -2.62366128e+07 3.12867102e+08 -2.07586348e+09 9.55718390e+09 -3.58688215e+10 1.18206299e+11 -3.72545099e+19 3.55377485e+20 -2.19797302e+21 1.06820421e+22 -4.43482421e+22 1.64613799e+23 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 6.21899934e+06 -1.35300269e+09 2.25199661e+10 -2.08147670e+11 1.41978312e+12 -8.03720030e+12 3.55377485e+20 -6.92933885e+21 7.86285194e+22 -6.60223225e+23 4.55617308e+24 -2.73627888e+25 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] 
[ -2.10320449e+07 4.11734924e+09 -1.49402973e+11 2.51974540e+12 -2.86573004e+13 2.56306446e+14 -2.19797302e+21 7.86285194e+22 -1.49349605e+24 1.98682041e+25 -2.09455082e+26 1.87262719e+27 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 2.94574146e+06 -1.02367345e+10 7.58502833e+11 -2.20591701e+13 3.96780330e+14 -5.32688366e+15 1.06820421e+22 -6.60223225e+23 1.98682041e+25 -3.97295506e+26 6.07807050e+27 -7.69005025e+28 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ -1.34143013e+07 2.23461195e+10 -3.24369808e+12 1.56693459e+14 -4.30207139e+15 8.37537551e+16 -4.43482421e+22 4.55617308e+24 -2.09455082e+26 6.07807050e+27 -1.30367761e+29 2.25611610e+30 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ -1.29818367e+06 -4.45010813e+10 1.22995908e+13 -9.61118583e+14 3.92675368e+16 -1.08141277e+18 1.64613799e+23 -2.73627888e+25 1.87262719e+27 -7.69005025e+28 2.25611610e+30 -5.21172115e+31 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ -5.93837816e+16 8.06918956e+18 -2.29647856e+20 3.46309077e+21 -3.69586675e+22 3.17341100e+23 -4.12843622e+30 1.31023908e+32 -2.32666430e+33 2.96546578e+34 -3.03538906e+35 2.65606764e+36 -3.91170589e+42 9.60158418e+43 -1.53011091e+45 1.88717441e+46 -1.94862667e+47 1.76215269e+48] [ 6.38322749e+16 -3.72079304e+19 2.23060743e+21 -5.84715936e+22 9.89735934e+23 -1.27716724e+25 3.81005492e+31 -2.11434722e+33 5.97475670e+34 -1.14689052e+36 1.70519806e+37 -2.11260012e+38 9.60158418e+43 -3.55965236e+45 8.25594136e+46 -1.44373334e+48 2.07304224e+49 -2.56816531e+50] [ -6.85843186e+16 1.23800418e+20 -1.47751020e+22 6.46859079e+23 -1.67505077e+25 3.13774985e+26 -2.44128687e+32 2.26787992e+34 -9.81982973e+35 2.73945925e+37 -5.71489855e+38 9.68877560e+39 -1.53011091e+45 8.25594136e+46 -2.68951294e+48 6.44111112e+49 -1.24291248e+51 2.03914940e+52] [ 5.75338966e+16 -3.42505937e+20 
7.86504336e+22 -5.59189865e+24 2.15859810e+26 -5.72503211e+27 1.26953404e+33 -1.91719765e+35 1.23852494e+37 -4.89510985e+38 1.39768692e+40 -3.16418412e+41 1.88717441e+46 -1.44373334e+48 6.44111112e+49 -2.06086355e+51 5.21505047e+52 -1.10591734e+54] [ -4.88603241e+16 8.36989008e+20 -3.61027130e+23 4.08308120e+25 -2.30442140e+27 8.49214835e+28 -5.72744451e+33 1.37649297e+36 -1.30066841e+38 7.14559647e+39 -2.74085484e+41 8.13459136e+42 -1.94862667e+47 2.07304224e+49 -1.24291248e+51 5.21505047e+52 -1.69908728e+54 4.57315759e+55] [ 3.50805052e+16 -1.86726751e+21 1.48623808e+24 -2.63166392e+26 2.13906412e+28 -1.07936569e+30 2.32655148e+34 -8.75599541e+36 1.19185355e+39 -8.96773338e+40 4.55433688e+42 -1.74680817e+44 1.76215269e+48 -2.56816531e+50 2.03914940e+52 -1.10591734e+54 4.57315759e+55 -1.54023048e+57]] </code></pre>
1
2016-10-13T11:28:08Z
40,060,880
<p>Your matrix does not have any Hamiltonian structure, as </p> <pre><code>J.dot(A).dot(J.T) + A.T
</code></pre> <p>is not zero, where <code>J</code> is given by</p> <pre><code>import numpy as np
import scipy as sp
import scipy.linalg

# J = [ 0  I]
#     [-I  0]
J = np.rot90(sp.linalg.block_diag(np.rot90(-np.eye(9)), np.rot90(np.eye(9))))
</code></pre> <p>(A real matrix <code>A</code> is Hamiltonian iff <code>J.dot(A)</code> is symmetric, which is equivalent to the expression above vanishing.) Hence there is no condition forcing the eigenvalues to be symmetric with respect to the imaginary axis. Also, you can scale down your matrix by the smallest power of ten appearing in it. </p> <p>Do you maybe have the extended Hamiltonian pencil in mind? </p>
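A quick numerical sanity check of this kind of structure test, on a small matrix that is Hamiltonian by construction. The 2×2 example and the sign convention `J A Jᵀ + Aᵀ = 0` are my own illustration, not taken from the answer above:

```python
import numpy as np

# A real matrix A is Hamiltonian iff J @ A is symmetric, with
# J = [[0, I], [-I, 0]]; equivalently, J @ A @ J.T + A.T == 0.
n = 1
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# For n = 1, any [[b, c], [d, -b]] is Hamiltonian.
A = np.array([[2.0, 5.0],
              [7.0, -2.0]])

residual = J @ A @ J.T + A.T
print(np.allclose(residual, 0))  # True
```

Running the same residual on a matrix without this block structure gives a nonzero result, which is the kind of check the answer applies to the poster's H.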
1
2016-10-15T15:28:41Z
[ "python", "numpy", "scipy", "eigenvalue" ]
How to use transactions with Django REST framework?
40,019,769
<p>I wish to use Django REST framework to create a number of model objects "together" -- i.e. in a single transaction. </p> <p>The objective is that each of the objects will only be visible at the (successful) end of the transaction.</p> <p>How can I do that?</p>
0
2016-10-13T11:30:41Z
40,019,862
<p>Use <code>atomic</code> from <code>django.db.transaction</code> as a decorator around a function performing the database operations you are after:</p> <pre><code>from django.db.transaction import atomic

@atomic
def save_multiple_objects(obj_list):
    for o in obj_list:
        o.save()
</code></pre> <p>If <code>obj_list</code> contains a list of populated (but not saved) model objects, this will execute all the saves as part of one transaction.</p> <p>If you want to save multiple objects as part of the same API request, then (for example), if they are all of the same type, you could POST a list of objects to an API endpoint - see <a href="http://stackoverflow.com/questions/22881067/django-rest-framework-post-array-of-objects">Django REST framework post array of objects</a></p>
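The all-or-nothing visibility the question asks about can be demonstrated outside Django with the standard library's sqlite3, whose connection object is also a transaction context manager. This is a stand-in sketch of the semantics, not DRF code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (name TEXT)")

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("INSERT INTO item VALUES ('a')")
        conn.execute("INSERT INTO item VALUES ('b')")
        raise RuntimeError("something went wrong mid-batch")
except RuntimeError:
    pass

# Neither row is visible: the failed transaction rolled back as a unit.
count = conn.execute("SELECT COUNT(*) FROM item").fetchone()[0]
print(count)  # 0
```

Django's `atomic` (as decorator or context manager) gives the same guarantee at the ORM level: nothing saved inside the block becomes visible unless the whole block succeeds.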
0
2016-10-13T11:35:15Z
[ "python", "django", "rest", "django-models", "django-rest-framework" ]
How to use transactions with Django REST framework?
40,019,769
<p>I wish to use Django REST framework to create a number of model objects "together" -- i.e. in a single transaction. </p> <p>The objective is that each of the objects will only be visible at the (successful) end of the transaction.</p> <p>How can I do that?</p>
0
2016-10-13T11:30:41Z
40,019,924
<p>You can achieve this by using Django DB transactions. Refer to the code below:</p> <pre><code>from django.db import transaction

with transaction.atomic():
    model_instance = form.save(commit=False)
    model_instance.creator = self.request.user
    model_instance.img_field.field.upload_to = 'directory/' + model_instance.name + '/logo'
    self.object = form.save()
</code></pre> <p>This example is taken from my own answer to this <a href="http://stackoverflow.com/questions/11796383/django-set-the-upload-to-in-the-view/40018311#40018311">SO post</a>. This way, before calling <code>save()</code> you can save/edit other dependencies.</p>
0
2016-10-13T11:38:23Z
[ "python", "django", "rest", "django-models", "django-rest-framework" ]
Get Pixel count using RGB values in Python
40,019,830
<p>I have to find the count of pixels in an image that match given RGB values, and I need the logic for that.</p> <p>Example:</p> <p>I have an image called image.jpg with width = 100 and height = 100. The red value is 128, green is 0 and blue is 128, and I need to find how many pixels of the image have that RGB value. Can anyone help?</p>
0
2016-10-13T11:33:50Z
40,020,490
<p>As already mentioned in <a href="http://stackoverflow.com/questions/138250/how-can-i-read-the-rgb-value-of-a-given-pixel-in-python">this</a> question, by using <code>Pillow</code> you can do the following:</p> <pre><code>from PIL import Image

im = Image.open('image.jpg', 'r')
</code></pre> <p>If successful, this function returns an Image object. You can now use instance attributes to examine the file contents:</p> <pre><code>width, height = im.size
pixel_values = list(im.getdata())
print(im.format, im.size, im.mode)
</code></pre> <p>The <code>format</code> attribute identifies the source of an image. If the image was not read from a file, it is set to None. The <code>mode</code> attribute defines the number and names of the bands in the image, and also the pixel type and depth. Common modes are “L” (luminance) for greyscale images, “RGB” for true color images, and “CMYK” for pre-press images.</p>
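Building on that, counting the pixels that match one particular RGB value is a plain tally over the `getdata()` tuples. A sketch with a hand-built pixel list standing in for a real image:

```python
# Stand-in for list(im.getdata()) on an RGB image:
# one (R, G, B) tuple per pixel.
pixel_values = [(128, 0, 128), (255, 255, 255),
                (128, 0, 128), (0, 0, 0)]

target = (128, 0, 128)
count = sum(1 for p in pixel_values if p == target)
print(count)  # 2
```

For many different colours at once, `collections.Counter(pixel_values)` gives the full per-colour histogram in one pass.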
0
2016-10-13T12:03:42Z
[ "python" ]
sympy - cos of a list of variables
40,019,839
<p>I am trying to apply the cos function to a list of expressions containing variables with sympy. Here is an easy example: </p> <pre><code>from sympy import *

x = Symbol('x')
cos([x+1, x+2, x+3])
</code></pre> <p>But then the error </p> <pre><code>AttributeError: 'list' object has no attribute 'is_Number'
</code></pre> <p>occurs, and not what I expected: <code>[cos(x+1), cos(x+2), cos(x+3)]</code>. Is there an easy way to use <code>cos</code> like <code>numpy.cos()</code>?</p>
-1
2016-10-13T11:34:33Z
40,019,910
<p>Use the <a href="https://docs.python.org/3/library/functions.html#map" rel="nofollow">Python builtin <code>map</code> function</a> to apply <code>sympy.cos</code> to each element in the list:</p> <pre><code>import sympy as sy

x = sy.Symbol('x')
print(list(map(sy.cos, [x+1, x+2, x+3])))
</code></pre> <p>yields</p> <pre><code>[cos(x + 1), cos(x + 2), cos(x + 3)]
</code></pre>
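One detail hiding in that answer: on Python 3, `map` returns a lazy iterator, which is why the result is wrapped in `list()` before printing. The same holds for any mapped function, sympy or not:

```python
# map is lazy on Python 3: nothing is computed until it is consumed.
result = map(lambda n: n + 1, [1, 2, 3])
print(type(result).__name__)  # map

# Materialize the iterator to get an actual list of values.
materialized = list(result)
print(materialized)  # [2, 3, 4]
```

A list comprehension, `[f(e) for e in items]`, produces the list directly and is often considered more idiomatic for this.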
2
2016-10-13T11:37:44Z
[ "python", "sympy" ]
Django - distinct rows/objects distinguished by date/day from datetime field
40,019,886
<p>I've searched quite a while now and know about several answers on SO, but none of the solutions works at my end, even though my problem is pretty simple:</p> <p>What I need (using Postgres + Django 1.10): I have many rows with many duplicate dates (=days) within a datetime field. I want a queryset containing one row/object per date/day.</p> <pre><code>fk | col1 | colX | created (type: datetime)
----------------------------------------------
1  | info | info | 2016-09-03 08:25:52.142617+00:00  &lt;- get it (time does not matter)
1  | info | info | 2016-09-03 16:26:52.142617+00:00
2  | info | info | 2016-09-03 11:25:52.142617+00:00
1  | info | info | 2016-09-14 16:26:52.142617+00:00  &lt;- get it (time does not matter)
3  | info | info | 2016-09-14 11:25:52.142617+00:00
1  | info | info | 2016-09-25 23:25:52.142617+00:00  &lt;- get it (time does not matter)
1  | info | info | 2016-09-25 16:26:52.142617+00:00
1  | info | info | 2016-09-25 11:25:52.142617+00:00
2  | info | info | 2016-09-25 14:27:52.142617+00:00
2  | info | info | 2016-09-25 16:26:52.142617+00:00
3  | info | info | 2016-09-25 11:25:52.142617+00:00
etc.
</code></pre> <p>What's the best (performance + Pythonic/Django) way to do this? My model/table is going to have many rows (&gt;million).</p> <p><strong>EDIT 1</strong></p> <p>The results must be filtered by a fk (e.g. WHERE fk = 1) first.</p> <p>I already tried the most obvious things such as</p> <pre><code>MyModel.objects.filter(fk=1).order_by('created__date').distinct('created__date')
</code></pre> <p>but got the following error:</p> <blockquote> <p>django.core.exceptions.FieldError: Cannot resolve keyword 'date' into field. Join on 'created' not permitted.</p> </blockquote> <p>...same error with all() and the respective ordering through class Meta instead of the query-method order_by()...</p> <p>Does somebody maybe know more about this error in this specific case?</p>
0
2016-10-13T11:36:32Z
40,020,269
<p>You can use a queryset to get the results from your table with a distinct on the created value, because you are using PostgreSQL.</p> <p>A query like this should do the work:</p> <pre><code>MyModel.objects.all().distinct('created__date')
</code></pre> <p>I refer you to the queryset documentation of Django: <a href="https://docs.djangoproject.com/fr/1.10/ref/models/querysets/#distinct" rel="nofollow">https://docs.djangoproject.com/fr/1.10/ref/models/querysets/#distinct</a></p>
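For comparison, the "first row per day" reduction that PostgreSQL's `DISTINCT ON` performs can be sketched in plain Python — hypothetical `(fk, created)` row tuples here, not the ORM:

```python
from datetime import datetime

rows = [
    (1, datetime(2016, 9, 3, 8, 25)),
    (1, datetime(2016, 9, 3, 16, 26)),
    (1, datetime(2016, 9, 14, 16, 26)),
    (1, datetime(2016, 9, 25, 23, 25)),
    (1, datetime(2016, 9, 25, 16, 26)),
]

# Keep the earliest row per calendar day: sort by the datetime, then
# let setdefault ignore every later row for an already-seen date.
first_per_day = {}
for fk, created in sorted(rows, key=lambda r: r[1]):
    first_per_day.setdefault(created.date(), (fk, created))

print(len(first_per_day))  # 3
```

This is the fallback when the database backend (e.g. SQLite or MySQL) does not support `distinct(*fields)` at all.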
0
2016-10-13T11:53:55Z
[ "python", "django", "postgresql", "datetime" ]
How to map a column with dask
40,019,905
<p>I want to apply a mapping on a DataFrame column. With Pandas this is straightforward:</p> <pre><code>df["infos"] = df2["numbers"].map(lambda nr: custom_map(nr, hashmap))
</code></pre> <p>This writes the <code>infos</code> column, based on the <code>custom_map</code> function, and uses the rows in numbers for the <code>lambda</code> statement.</p> <p>With <a href="http://dask.pydata.org/en/latest/dataframe-api.html" rel="nofollow">dask</a> this isn't that simple. <code>ddf</code> is a dask DataFrame. <code>map_partitions</code> is the equivalent, executing the mapping in parallel on the parts of the DataFrame.</p> <p>This does <strong>not</strong> work because you don't define columns like that in dask.</p> <pre><code>ddf["infos"] = ddf2["numbers"].map_partitions(lambda nr: custom_map(nr, hashmap))
</code></pre> <p>Does anyone know how I can use columns here? I don't understand their <a href="http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.map_partitions" rel="nofollow">API documentation</a> at all. </p>
0
2016-10-13T11:37:32Z
40,020,855
<p>You can use the <a href="http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.Series.map" rel="nofollow">.map</a> method, exactly as in Pandas</p> <pre><code>In [1]: import dask.dataframe as dd

In [2]: import pandas as pd

In [3]: df = pd.DataFrame({'x': [1, 2, 3]})

In [4]: ddf = dd.from_pandas(df, npartitions=2)

In [5]: df.x.map(lambda x: x + 1)
Out[5]:
0    2
1    3
2    4
Name: x, dtype: int64

In [6]: ddf.x.map(lambda x: x + 1).compute()
Out[6]:
0    2
1    3
2    4
Name: x, dtype: int64
</code></pre> <h3>Metadata</h3> <p>You may be asked to provide a <code>meta=</code> keyword. This lets dask.dataframe know the output name and type of your function. Copying the docstring from <code>map_partitions</code> here:</p> <pre><code>meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
    An empty pd.DataFrame or pd.Series that matches the dtypes and
    column names of the output. This metadata is necessary for many
    algorithms in dask dataframe to work. For ease of use, some
    alternative inputs are also available. Instead of a DataFrame,
    a dict of {name: dtype} or iterable of (name, dtype) can be
    provided. Instead of a series, a tuple of (name, dtype) can be
    used. If not provided, dask will try to infer the metadata.
    This may lead to unexpected results, so providing meta is
    recommended. For more information, see
    dask.dataframe.utils.make_meta.
</code></pre> <p>So in the example above, where my output will be a series with name <code>'x'</code> and dtype <code>int</code>, I can do either of the following to be more explicit</p> <pre><code>&gt;&gt;&gt; ddf.x.map(lambda x: x + 1, meta=('x', int))
</code></pre> <p>or </p> <pre><code>&gt;&gt;&gt; ddf.x.map(lambda x: x + 1, meta=pd.Series([], dtype=int, name='x'))
</code></pre> <p>This tells dask.dataframe what to expect from our function. If no meta is given then dask.dataframe will try running your function on a little piece of data. It will raise an error asking for help if this fails.</p>
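As for the `custom_map(nr, hashmap)` in the question: it is never defined there, but a dict-backed lookup is the obvious shape for it. A pure-Python sketch of just the mapping step — no dask required to see the semantics that `.map` applies element-wise:

```python
def custom_map(nr, hashmap):
    # Hypothetical implementation: look the number up, with a default
    # for numbers the hashmap does not know about.
    return hashmap.get(nr, "unknown")

hashmap = {1: "one", 2: "two"}
numbers = [1, 2, 3]

# Element-wise mapping, as Series.map would do per value.
infos = [custom_map(nr, hashmap) for nr in numbers]
print(infos)  # ['one', 'two', 'unknown']
```

With a plain dict and no default logic, `ddf.x.map(hashmap)` also works directly, since Pandas/dask `map` accepts a dict as the mapper.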
2
2016-10-13T12:21:18Z
[ "python", "pandas", "dask" ]
Collecting Revit Warnings
40,019,955
<p>Where are warnings in the Revit database? </p> <p>I'd like to use Python to create my own error report (similar to the HTML export), but I'm not sure where to find this information.</p> <p>I can't find anything in the Revit API (Revit 2015) referring to warnings. How would I collect these?</p> <p>I suspected that warnings might be a parameter of an element (such as GroupId), but using RevitSnoop I'm coming up empty.</p>
0
2016-10-13T11:40:12Z
40,049,892
<p>Sadly, not possible. Errors "happen" during opening, audits and other events. You can catch them sometimes, but not very cleanly. Jeremy Tammik has at least one blog post with a partial unsupported workaround.</p> <p>Vote for my enhancement request on this topic: <a href="https://forums.autodesk.com/t5/revit-ideas/revit-api-access-to-the-revit-warnings-errors/idi-p/6463710" rel="nofollow">Revit Ideastation</a></p>
0
2016-10-14T18:31:32Z
[ "python", "warnings", "revit-api" ]