Column schema (for int64 columns, min/max are value ranges; for string columns, they are character lengths):

column      dtype   min   max
Unnamed: 0  int64   0     1.91M
id          int64   337   73.8M
title       string  10    150
question    string  21    64.2k
answer      string  19    59.4k
tags        string  5     112
score       int64   -10   17.3k
200
71,740,466
Sympy: How to calculate the t value for a point on a 3D Line
<p>Using sympy how would one go about to solve for the t value for a specific point on a line or line segment?</p> <pre><code>p1 = sympy.Point3D(0,0,0) p2 = sympy.Point3D(1,1,1) p3 = sympy.Point3D(0.5,0.5,0.5) lineSegment = sympy.Segment(p1,p2) eqnV = lineSegment.arbitrary_point() if lineSegment.contains(p3): t = SolveForT(lineSegment, p3) </code></pre>
<p>You can get a list of coordinate equations and pass them to sympy's solve function:</p> <pre><code>In [112]: solve((lineSegment.arbitrary_point() - p3).coordinates) Out[112]: {t: 1/2} </code></pre>
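<p>Put together as a runnable sketch (using exact rationals for the point so the solve comes out exact; floats work too but may return 0.5 rather than 1/2):</p> <pre><code>import sympy

p1 = sympy.Point3D(0, 0, 0)
p2 = sympy.Point3D(1, 1, 1)
p3 = sympy.Point3D(sympy.Rational(1, 2), sympy.Rational(1, 2), sympy.Rational(1, 2))

lineSegment = sympy.Segment(p1, p2)
if lineSegment.contains(p3):
    # each coordinate of arbitrary_point() - p3 must be zero; solve recovers t
    print(sympy.solve((lineSegment.arbitrary_point() - p3).coordinates))
    # {t: 1/2}
</code></pre>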
python-3.x|sympy
1
201
62,547,186
Python Dataframe add new row based on column name
<p>How do I add a new row to my dataframe, with values that are based on the column names?</p> <p>For example</p> <pre><code>Dog = 'happy' Cat = 'sad' df = pd.DataFrame(columns=['Dog', 'Cat']) </code></pre> <p>I want to add a new line to the dataframe where is pulls in the variable of the column heading</p> <pre><code> Dog Cat 0 happy sad </code></pre>
<p>You can try <code>append</code>:</p> <pre><code>df.append({'Dog':Dog,'Cat':Cat}, ignore_index=True) </code></pre> <p>Output:</p> <pre><code> Dog Cat 0 happy sad </code></pre>
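<p>Note that <code>DataFrame.append</code> was deprecated in pandas 1.4 and removed in 2.0; on newer versions, an equivalent sketch using <code>pd.concat</code> is:</p> <pre><code>import pandas as pd

Dog = 'happy'
Cat = 'sad'

# build a one-row frame from the variables and concatenate it
new_row = pd.DataFrame([{'Dog': Dog, 'Cat': Cat}])

df = pd.DataFrame(columns=['Dog', 'Cat'])
df = pd.concat([df, new_row], ignore_index=True)
print(df)
#      Dog  Cat
# 0  happy  sad
</code></pre>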
python|pandas
1
202
62,541,592
Write Data to BigQuery table using load_table_from_dataframe method ERROR - 'str' object has no attribute 'to_api_repr'
<p>I am trying to read the data from Cloud storage and write the data into BigQuery table. Used Pandas library for reading the data from GCS and to write the data used client.load_table_from_dataframe method. I am executing this code as python operator in Google cloud composer. Got below error when i execute the code.</p> <pre><code>[2020-06-23 17:09:36,119] {taskinstance.py:1059} ERROR - 'str' object has no attribute 'to_api_repr'@-@{&quot;workflow&quot;: &quot;DataTransformationSample1&quot;, &quot;task-id&quot;: &quot;dag_init&quot;, &quot;execution-date&quot;: &quot;2020-06-23T17:03:42.202219+00:00&quot;} Traceback (most recent call last): File &quot;/usr/local/lib/airflow/airflow/models/taskinstance.py&quot;, line 930, in _run_raw_task result = task_copy.execute(context=context) File &quot;/usr/local/lib/airflow/airflow/operators/python_operator.py&quot;, line 113, in execute return_value = self.execute_callable() File &quot;/usr/local/lib/airflow/airflow/operators/python_operator.py&quot;, line 118, in execute_callable return self.python_callable(*self.op_args, **self.op_kwargs) File &quot;/home/airflow/gcs/dags/DataTransformationSample1.py&quot;, line 225, in dag_initialization destination=table_id, job_config=job_config) File &quot;/opt/python3.6/lib/python3.6/site-packages/google/cloud/bigquery/client.py&quot;, line 968, in load_table_from_dataframe job_config=job_config, File &quot;/opt/python3.6/lib/python3.6/site-packages/google/cloud/bigquery/client.py&quot;, line 887, in load_table_from_file job_resource = load_job._build_resource() File &quot;/opt/python3.6/lib/python3.6/site-packages/google/cloud/bigquery/job.py&quot;, line 1379, in _build_resource self.destination.to_api_repr()) AttributeError: 'str' object has no attribute 'to_api_repr' [2020-06-23 17:09:36,122] {base_task_runner.py:115} INFO - Job 202544: Subtask dag_init [2020-06-23 17:09:36,119] {taskinstance.py:1059} ERROR - 'str' object has no attribute 'to_api_repr'@-@{&quot;workflow&quot;: &quot;DataTransformationSample1&quot;, &quot;task-id&quot;: &quot;dag_init&quot;, &quot;execution-date&quot;: &quot;2020-06-23T17:03:42.202219+00:00&quot;} </code></pre> <p>Below code i used,</p> <pre><code>client = bigquery.Client() table_id = 'project.dataset.table' job_config = bigquery.LoadJobConfig() job_config.schema = [ bigquery.SchemaField(name=&quot;Code&quot;, field_type=&quot;STRING&quot;, mode=&quot;NULLABLE&quot;), bigquery.SchemaField(name=&quot;Value&quot;, field_type=&quot;STRING&quot;, mode=&quot;NULLABLE&quot;) ] job_config.create_disposition = &quot;CREATE_IF_NEEDED&quot; job_config.write_disposition = &quot;WRITE_TRUNCATE&quot; load_result = client.load_table_from_dataframe(dataframe=concatenated_df, destination=table_id, job_config=job_config) load_result.result() </code></pre> <p>Someone please help to solve this case.</p>
<p>Basically, pandas stores strings with the generic <code>object</code> dtype, and BigQuery does not know how to interpret that. We need to explicitly convert the column to string with pandas in order to load the data into the BQ table.</p> <pre><code>df[columnname] = df[columnname].astype(str) </code></pre>
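<p>The traceback itself also points at <code>self.destination.to_api_repr()</code>, which suggests the installed client version expects <code>destination</code> to be a table object rather than the plain <code>&quot;project.dataset.table&quot;</code> string. If the <code>astype</code> fix alone does not help, converting the id first is worth a try (a sketch reusing <code>concatenated_df</code> and <code>job_config</code> from the question):</p> <pre><code>from google.cloud import bigquery

client = bigquery.Client()
# some client versions require a TableReference here, not a string
table_ref = bigquery.TableReference.from_string('project.dataset.table')
load_result = client.load_table_from_dataframe(concatenated_df, table_ref,
                                               job_config=job_config)
load_result.result()
</code></pre>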
python|google-bigquery|google-cloud-composer
0
203
61,657,432
Python tracemalloc's "compare_to" function delivers always "StatisticDiff" objects with len(traceback)=1
<p>Using Python 3.5's tracemalloc module as follows</p> <pre><code>tracemalloc.start(25) # (I also tried PYTHONTRACEMALLOC=25) snapshot_start = tracemalloc.take_snapshot() ... # my code is running snapshot_stop = tracemalloc.take_snapshot() diff = snapshot_stop.compare_to(snapshot_start, 'lineno') tracemalloc.stop() </code></pre> <p>results in a list of StatisticDiff instances where each instance has a traceback with only 1 (the most recent) frame.</p> <p>Any hints on how to get the full stack trace for each StatisticDiff instance?</p> <p>Thank you! Michael</p>
<p>You need to use <code>'traceback'</code> instead of <code>'lineno'</code> when calling <code>compare_to()</code> to get more than one line.</p> <p>BTW, I also answered a similar question <a href="https://stackoverflow.com/questions/56935252/how-to-get-more-frames-from-backtrace-in-tracemalloc-snapshot-comparisons-pytho">here</a> with a little more detail.</p>
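<p>A minimal runnable sketch of that, printing the full stored stack for the top differences (the allocating list comprehension is just a stand-in for real work):</p> <pre><code>import tracemalloc

tracemalloc.start(25)  # keep up to 25 frames per allocation
snapshot_start = tracemalloc.take_snapshot()
data = [str(i) * 100 for i in range(10000)]  # stand-in for real work
snapshot_stop = tracemalloc.take_snapshot()

# 'traceback' groups statistics by the full traceback, not a single line
diff = snapshot_stop.compare_to(snapshot_start, 'traceback')
for stat in diff[:3]:
    print(stat)
    for line in stat.traceback.format():  # one formatted frame per line
        print(line)
tracemalloc.stop()
</code></pre>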
python|compare|diff|snapshot|tracemalloc
1
204
67,491,254
Filter objects manyTomany with users manyTomany
<p>I want to filter the model <code>Foo</code> by its manyTomany field <code>bar</code> against the user's <code>bar</code>.</p> <p>Models</p> <pre><code>class User(models.Model): bar = models.ManyToManyField(&quot;Bar&quot;, verbose_name=_(&quot;Bar&quot;), blank=True) class Foo(models.Model): bar = models.ManyToManyField(&quot;Bar&quot;, verbose_name=_(&quot;Bar&quot;), blank=True) class Bar(models.Model): fubar = models.CharField() </code></pre> <p>with this</p> <blockquote> <p>user = User.objects.get(id=user_id)</p> </blockquote> <p>I want to get all Foo's that have the same Bar's that the User has. I would like this to work:</p> <blockquote> <p>bar = Foo.objects.filter(foo=user.foo)</p> </blockquote> <p>but it doesn't work.</p>
<pre><code>foos = Foo.objects.filter(bar__in=user.bar.all()) </code></pre>
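<p>One detail worth knowing: because this filter goes through a join, a <code>Foo</code> that shares several <code>Bar</code>s with the user can come back more than once, so deduplicating is usually wanted:</p> <pre><code># distinct() removes the duplicate rows the M2M join can produce
foos = Foo.objects.filter(bar__in=user.bar.all()).distinct()
</code></pre>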
python|django|django-models
1
205
60,981,187
Not able to align specific patterns side by side in this grid
<p><a href="https://i.stack.imgur.com/CZzNP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CZzNP.png" alt=""></a></p> <p>So I tried different methods to do this like:</p> <pre><code>a = ("+ " + "- "*4) b = ("|\n"*4) print(a + a + "\n" + b + a + a + "\n" + b + a + a) </code></pre> <p>But the basic problem I am facing is how to print the vertical pattern on the sixth column i.e in the middle as well as, at the last</p>
<p>I actually got it and thought of posting the solution since it might help others: we ought to make use of the do_twice and do_four functions:</p> <pre><code>def draw_grid_art(): a = "+ - - - - + - - - - +" def do_twice(f): f() f() def do_four(f): do_twice(f) do_twice(f) def vertical(): b = "| | |" print(b) print(a) do_four(vertical) print(a) do_four(vertical) print(a) </code></pre> <p>I was able to come up with only this. As always, anyone is free to shorten/organize my code, as I think it is long.</p>
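<p>Taking up that invitation, a shorter sketch that prints the same grid with plain loops and string repetition (the 9-space gap between the bars is an assumption read off the picture):</p> <pre><code>def draw_grid():
    horizontal = "+ - - - - + - - - - +"
    vertical = "|" + " " * 9 + "|" + " " * 9 + "|"
    for _ in range(2):       # two rows of cells
        print(horizontal)
        for _ in range(4):   # four vertical segments per row
            print(vertical)
    print(horizontal)        # closing border

draw_grid()
</code></pre>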
python|function
0
206
60,948,399
How to remove rows that contains a repeating number pandas python
<p>I have a dataframe like:</p> <pre><code>'a' 'b' 'c' 'd' 0 1 2 3 3 3 4 5 9 8 8 8 </code></pre> <p>and I want to remove rows that have a number that repeats more than once. So the answer is :</p> <pre><code>'a' 'b' 'c' 'd' 0 1 2 3 </code></pre> <p>Thanks.</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.nunique.html" rel="noreferrer"><code>DataFrame.nunique</code></a> with compare length of columns ad filter by <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="noreferrer"><code>boolean indexing</code></a>:</p> <pre><code>df = df[df.nunique(axis=1) == len(df.columns)] print (df) 'a' 'b' 'c' 'd' 0 0 1 2 3 </code></pre>
python|pandas|dataframe
5
207
69,002,851
Slowly updating global window side inputs In Python
<p>I try to get the updating sideinputs working in python as stated in the Documentation (there is only a java example provided) [https://beam.apache.org/documentation/patterns/side-inputs/]</p> <p>I already found this thread here on Stackoverflow: [https://stackoverflow.com/questions/63812879/how-to-implement-the-slowly-updating-side-inputs-in-python] and tried the code and solution from there...</p> <p>But when I try:</p> <pre><code> pipeline | &quot;generate sequence&quot; &gt;&gt; PeriodicImpulse(0,90,30) | beam.WindowInto( GlobalWindows(), trigger=Repeatedly(AfterProcessingTime(1*30)), accumulation_mode=AccumulationMode.DISCARDING ) | beam.Map(lambda _: print(&quot;fired&quot;)) ) </code></pre> <p>There are 3 events fired as expected... the only thing is that those 3 events are fired instant and not every 30 seconds as I would be expecting.</p> <p>To get it working I'm currently don't use it as a sideinput but just run it in pytest via:</p> <pre><code>def test_updating_sideinput(): pipeline = beam.Pipeline() res = ( pipeline | &quot;generate sequence&quot; &gt;&gt; PeriodicImpulse(0, 90, 30) | beam.Map(lambda _: print(&quot;fired&quot;)) | beam.WindowInto( GlobalWindows(), trigger=Repeatedly(AfterProcessingTime(1*30)), accumulation_mode=AccumulationMode.DISCARDING ) ) pipeline.run() </code></pre> <p>What would be the correct way to have a sideInput Updated triggered periodically using python?</p> <p>thanks and regards</p>
<p>The reason why all of the elements from <code>PeriodicImpulse</code> are emitted at the same time is the parameters you use when creating the transform. The documentation of the transform states that the arguments <code>start_timestamp</code> and <code>stop_timestamp</code> are timestamps, and (despite the documentation not stating that) <code>interval</code> is then interpreted as a number of seconds.</p> <p>Since the implementation of <code>PeriodicImpulse</code> is based on Splittable DoFn with <code>OffsetRange</code>, every time a single output is processed, the remainder of all (future) outputs is deferred to a later time, which is specified by the current timestamp + interval. This causes all the deferred timestamps generated to be in the past (lower than <code>Timestamp.now()</code>), therefore triggering processing of the remainder immediately. You can see the implementation in <a href="https://beam.apache.org/releases/pydoc/2.32.0/_modules/apache_beam/transforms/periodicsequence.html#ImpulseSeqGenDoFn" rel="nofollow noreferrer">https://beam.apache.org/releases/pydoc/2.32.0/_modules/apache_beam/transforms/periodicsequence.html#ImpulseSeqGenDoFn</a>.</p> <p>Using <code>Timestamp</code>s instead of absolute numbers in <code>PeriodicImpulse</code> should solve your problem.</p> <pre class="lang-py prettyprint-override"><code>start = Timestamp.now() stop = start + Duration(seconds=60) ... pipeline | &quot;generate sequence&quot; &gt;&gt; PeriodicImpulse(start, stop, 30) </code></pre> <p>But keep in mind that once you are using a runner, <code>Timestamp.now()</code> is called when constructing the pipeline, and by the time the pipeline is executed it may already be well in the past, possibly triggering several minutes' worth of data immediately.</p> <p>Also note that <code>PeriodicImpulse</code> already supports windowing into <code>FixedWindows</code> based on the <code>interval</code> param.</p>
python|apache-beam
1
208
69,042,451
StaleElementReferenceException while looping over list
<p>I'm trying to make a webscraper for <a href="https://opendata-dashboard.cijfersoverwonen.nl/dashboard/opendata-dashboard/beleidswaarde" rel="nofollow noreferrer">this</a> website. The idea is that code iterates over all institutions by selecting the institution's name (3B-Wonen at first instance), closes the pop-up screen, clicks the download button, and does it all again for all items in the list.</p> <p>However, after the first loop it throws the <code>StaleElementReferenceException</code> when selecting the second institution in the loop. From what I read about it this implies that the elements defined in the first loop are no longer accessible. I've read multiple posts but I've no idea to overcome this particular case.</p> <p>Can anybody point me in the right directon? Btw, I'm using Pythons selenium and I'm quite a beginner in programming so I'm still learning. If you could point me in a general direction that would help me a lot! The code I have is te following:</p> <pre><code>#importing and setting up parameters for geckodriver/firefox ... # webpage driver.get(&quot;https://opendata-dashboard.cijfersoverwonen.nl/dashboard/opendata-dashboard/beleidswaarde&quot;) WebDriverWait(driver, 30) # Get rid of cookie notification # driver.find_element_by_class_name(&quot;cc-compliance&quot;).click() # Store position of download button element_to_select = driver.find_element_by_id(&quot;utilsmenu&quot;) action = ActionChains(driver) WebDriverWait(driver, 30) # Drop down menu driver.find_element_by_id(&quot;baseGeo&quot;).click() # Add institutions to array corporaties=[] corporaties = driver.find_elements_by_xpath(&quot;//button[@role='option']&quot;) # Iteration for i in corporaties: i.click() # select institution driver.find_element_by_class_name(&quot;close-button&quot;).click() # close pop-up screen action.move_to_element(element_to_select).perform() # select download button driver.find_element_by_id(&quot;utilsmenu&quot;).click() # click download button driver.find_element_by_id(&quot;utils-export-spreadsheet&quot;).click() # pick export to excel driver.find_element_by_id(&quot;baseGeo&quot;).click() # select drop down menu for next iteration </code></pre>
<p>This code worked for me. But I am not doing <code>driver.find_element_by_id(&quot;utils-export-spreadsheet&quot;).click()</code></p> <pre><code>from selenium import webdriver import time from selenium.webdriver.common.action_chains import ActionChains driver = webdriver.Chrome(executable_path=&quot;path&quot;) driver.maximize_window() driver.implicitly_wait(10) driver.get(&quot;https://opendata-dashboard.cijfersoverwonen.nl/dashboard/opendata-dashboard/beleidswaarde&quot;) act = ActionChains(driver) driver.find_element_by_xpath(&quot;//a[text()='Sluiten en niet meer tonen']&quot;).click() # Close pop-up # Get the count of options driver.find_element_by_id(&quot;baseGeoContent&quot;).click() cor_len = len(driver.find_elements_by_xpath(&quot;//button[contains(@class,'sel-listitem')]&quot;)) print(cor_len) driver.find_element_by_class_name(&quot;close-button&quot;).click() # No need to start from 0, since 1st option is already selected. Start from downloading and then move to next items. for i in range(1,cor_len-288): # Tried only for 5 items act.move_to_element(driver.find_element_by_id(&quot;utilsmenu&quot;)).click().perform() #Code to click on downloading option print(&quot;Downloaded:{}&quot;.format(driver.find_element_by_id(&quot;baseGeoContent&quot;).get_attribute(&quot;innerText&quot;))) driver.find_element_by_id(&quot;baseGeoContent&quot;).click() time.sleep(3) # Takes time to load. coritems = driver.find_elements_by_xpath(&quot;//button[contains(@class,'sel-listitem')]&quot;) coritems[i].click() driver.find_element_by_class_name(&quot;close-button&quot;).click() driver.quit() </code></pre> <p>Output:</p> <pre><code>295 Downloaded:3B-Wonen Downloaded:Acantus Downloaded:Accolade Downloaded:Actium Downloaded:Almelose Woningstichting Beter Wonen Downloaded:Alwel </code></pre>
python|selenium|staleelementreferenceexception
1
209
72,741,276
SQLite|Pandas|Python: Select rows that contain values in any column?
<p>I have an SQLite table with 13500 rows with the following SQL schema:</p> <pre><code>PRAGMA foreign_keys = false; -- ---------------------------- -- Table structure for numbers -- ---------------------------- DROP TABLE IF EXISTS &quot;numbers&quot;; CREATE TABLE &quot;numbers&quot; ( &quot;RowId&quot; INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, &quot;Date&quot; TEXT NOT NULL, &quot;Hour&quot; TEXT NOT NULL, &quot;N1&quot; INTEGER NOT NULL, &quot;N2&quot; INTEGER NOT NULL, &quot;N3&quot; INTEGER NOT NULL, &quot;N4&quot; INTEGER NOT NULL, &quot;N5&quot; INTEGER NOT NULL, &quot;N6&quot; INTEGER NOT NULL, &quot;N7&quot; INTEGER NOT NULL, &quot;N8&quot; INTEGER NOT NULL, &quot;N9&quot; INTEGER NOT NULL, &quot;N10&quot; INTEGER NOT NULL, &quot;N11&quot; INTEGER NOT NULL, &quot;N12&quot; INTEGER NOT NULL, &quot;N13&quot; INTEGER NOT NULL, &quot;N14&quot; INTEGER NOT NULL, &quot;N15&quot; INTEGER NOT NULL, &quot;N16&quot; INTEGER NOT NULL, &quot;N17&quot; INTEGER NOT NULL, &quot;N18&quot; INTEGER NOT NULL, &quot;N19&quot; INTEGER NOT NULL, &quot;N20&quot; INTEGER NOT NULL, UNIQUE (&quot;RowId&quot; ASC) ); PRAGMA foreign_keys = true; </code></pre> <p>Each row contain non repeating numbers from 1 to 80, sorted in ascending order.</p> <p><strong>I want to select from this table only the rows that contain numbers only these numbers: 10,20,30,40,50,60,70,80 but not more than 3 of them (I mean EXACTLY 3 and not more and not less).</strong></p> <p>I did the following:</p> <p><strong>First step:</strong></p> <p>e.g. for selecting only the rows that contains ANY of these numbers on the column N1 I did this command:</p> <pre><code>SELECT * FROM numbers WHERE N1 IN (10,20,30,40,50,60,70,80); </code></pre> <p>Of course that this is giving to me rows with just one of these numbers but also rows with let's say 5 or even all these numbers which I do not want, I want exactly 3 of these numbers on ANY column.</p> <p><strong>Second step:</strong></p> <p>For selecting rows which contain any of these numbers on columns N1 and N2 we just run this command:</p> <pre><code>SELECT * FROM numbers WHERE N1 IN (10,20,30,40,50,60,70,80) AND N2 IN (10,20,30,40,50,60,70,80); </code></pre> <p>But this will give also columns with 2 or more (even all numbers) which I do not want because this is not exactly 3 of this numbers on any of this columns.</p> <p><strong>Third step:</strong></p> <p>Retrieving rows that contain any of these numbers on N1, N2 and N3 with this command:</p> <pre><code>SELECT * FROM numbers WHERE N1 IN (10,20,30,40,50,60,70,80) AND N2 IN (10,20,30,40,50,60,70,80) AND N3 IN (10,20,30,40,50,60,70,80); </code></pre> <p>This is almost good because of giving the rows with any 3 of these numbers but also gives rows that could have more than 3 of these numbers like 4, 5 or even all numbers which I don't need.</p> <p>Also, one idea is to modify this command by adding <strong>AND NOT N4 IN (10,20,30,40,50,60,70,80) AND NOT N5 IN (10,20,30,40,50,60,70,80) and so on until reach the N20.</strong></p> <p>On the other hand, any of these numbers (10,20,30,40,50,60,70,80) could be on N1, N2,N3 but also in any given column like N1, N12, N18 and any other combination of columns which means I should create any possible combination of 3 columns taken from 20 columns in order to get what I need.</p> <p>Is there any smarter way to do this?</p> <p>Thank you in advance!</p> <p><strong>P.S.</strong></p> <ol> <li>I have already read <a 
href="https://stackoverflow.com/questions/17096452/select-rows-from-sqlite-which-contains-given-values">this</a> which is somehow something I need but I want to avoid because of to many combinations (and also it is in the Java language section), <a href="https://stackoverflow.com/questions/57208954/select-rows-that-match-values-in-multiple-columns-in-pandas">this</a> which is doing what I need (I think) but it is in Python and pandas not SQLite syntax and I think <a href="https://stackoverflow.com/questions/66037923/python-selecting-rows-containing-a-string-in-any-column">this</a> one is the same but also in Python and pandas, also, keep in mind that the last two do not look for any possible combination but just for a give combination to look for in any given column which partially what I need.</li> <li>Also, If you can do it in Python and pandas it is very good too because I could use that too (so, I'm adding tags for these in order to be seen as well maybe there is someone which is looking for that solution too, if you don't mind).</li> </ol>
<p>Here's an SQLite query that will give you the results you want. It creates a CTE of all the values of interest, then joins your <code>numbers</code> table to the CTE if any of the columns contain the value from the CTE, selecting only <code>RowId</code> values from <code>numbers</code> where the number of rows in the join is exactly 3 (using <code>GROUP BY</code> and <code>HAVING</code>) and then finally selecting all the data from the rows which match that criteria:</p> <pre class="lang-sql prettyprint-override"><code>WITH CTE(n) AS ( VALUES (10),(20),(30),(40),(50),(60),(70),(80) ), rowids AS ( SELECT RowId FROM numbers JOIN CTE ON n IN (n1, n2, n3, n4, n5, n6, n7, n8, n9, n10, n11, n12, n13, n14, n15, n16, n17, n18, n19, n20) GROUP BY RowId HAVING COUNT(*) = 3 ) SELECT n.* FROM numbers n JOIN rowids r ON n.RowId = r.RowId </code></pre> <p>I've made a small <a href="https://www.db-fiddle.com/f/3fx9J76uzNm315xqYa2N7c/1" rel="nofollow noreferrer">demo on db-fiddle</a>.</p>
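<p>And since the question explicitly welcomes a pandas version too: the same &quot;exactly 3 of these numbers anywhere in the row&quot; filter can be written by counting membership across the twenty N columns (a sketch; the database filename is hypothetical):</p> <pre><code>import sqlite3
import pandas as pd

conn = sqlite3.connect('numbers.db')  # hypothetical filename
df = pd.read_sql('SELECT * FROM numbers', conn)

targets = [10, 20, 30, 40, 50, 60, 70, 80]
n_cols = ['N%d' % i for i in range(1, 21)]  # N1 .. N20

# count per row how many of the twenty columns hold a target value,
# then keep only the rows where that count is exactly 3
hits = df[n_cols].isin(targets).sum(axis=1)
result = df[hits == 3]
</code></pre>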
python|python-3.x|pandas|dataframe|sqlite
2
210
59,263,662
How to configure logging with colour, format etc in separate setting file in python?
<p>I am trying to call python script from bash script. (Note: I am using python version 3.7) Following is the Directory structure (so_test is a directory)</p> <pre><code>so_test shell_script_to_call_py.sh main_file.py log_settings.py </code></pre> <p>files are as below,</p> <p><strong>shell_script_to_call_py.sh</strong></p> <pre><code>#!/bin/bash echo "...Enable Debug..." python $(cd $(dirname ${BASH_SOURCE[0]}) &amp;&amp; pwd)/main_file.py "input_1" --debug echo "...No Debug..." python $(cd $(dirname ${BASH_SOURCE[0]}) &amp;&amp; pwd)/main_file.py "input_2" </code></pre> <p><strong>main_file.py</strong></p> <pre><code>import argparse import importlib importlib.import_module("log_settings.py") from so_test import log_settings def func1(): log.info("INFO Test") log.debug("DEBUG Test") log.warning("WARN Test") def func2(): log.info("INFO Test") log.debug("DEBUG Test") log.warning("WARN Test") def main(): parser = argparse.ArgumentParser() parser.add_argument("input", type=str, help="input argument 1 is missing") parser.add_argument("--debug", help="to print debug logs", action="store_true") args = parser.parse_args() log_settings.log_conf(args.debug) log.info("INFO Test") log.debug("DEBUG Test") log.warning("WARN Test") func1() func2() if __name__ == "__main__": main() </code></pre> <p><strong>log_settings.py</strong></p> <pre><code>import logging from colorlog import ColoredFormatter def log_config(is_debug_level): log_format = "%(log_color)s %(levelname)s %(message)s" if is_debug_level: logging.root.setLevel(logging.DEBUG) else: logging.root.setLevel(logging.INFO) stream = logging.StreamHandler() stream.setFormatter(ColoredFormatter(log_format)) global log log = logging.getLogger('pythonConfig') log.addHandler(stream) </code></pre> <p>Following are 2 issues I am facing. (as a newbie to python)</p> <ol> <li>I am not able to import the log_settings.py properly in main_file.py</li> <li>I want to access use log.debug, log.info etc. in main_file (and other .py file) across different functions, for which the settings (format, color etc.) is declared in log_settings.py file.</li> </ol>
<p>I got the code working with the following changes:</p> <ol> <li><p>Declare 'log' variable outside the function in log_settings.py, so that it can be imported by other programs.</p></li> <li><p>Rename the function named log_config to log_conf, which is referred in the main program.</p></li> <li><p>In the main program, update the import statements to import 'log' and 'log_conf' from log_settings</p></li> </ol> <p>Working code:</p> <p><strong>1. log_settings.py</strong></p> <pre><code>import logging from colorlog import ColoredFormatter global log log = logging.getLogger('pythonConfig') def log_conf(is_debug_level): log_format = "%(log_color)s %(levelname)s %(message)s" if is_debug_level: logging.root.setLevel(logging.DEBUG) else: logging.root.setLevel(logging.INFO) stream = logging.StreamHandler() stream.setFormatter(ColoredFormatter(log_format)) log.addHandler(stream) </code></pre> <p><strong>2. main_file.py</strong></p> <pre><code>import argparse import importlib from log_settings import log_conf, log def func1(): log.info("INFO Test") log.debug("DEBUG Test") log.warning("WARN Test") def func2(): log.info("INFO Test") log.debug("DEBUG Test") log.warning("WARN Test") def main(): parser = argparse.ArgumentParser() parser.add_argument("input", type=str, help="input argument 1 is missing") parser.add_argument("--debug", help="to print debug logs", action="store_true") args = parser.parse_args() log_conf(args.debug) log.info("INFO Test") log.debug("DEBUG Test") log.warning("WARN Test") func1() func2() if __name__ == "__main__": main() </code></pre> <p><strong>Testing</strong></p> <p>$ python3 main_file.py "input_1" --debug</p> <p>INFO INFO Test (Shows in Green)</p> <p>DEBUG DEBUG Test (Shows in White)</p> <p>WARNING WARN Test (Shows in Yellow)</p> <p>INFO INFO Test</p> <p>DEBUG DEBUG Test</p> <p>WARNING WARN Test</p> <p>INFO INFO Test</p> <p>DEBUG DEBUG Test</p> <p>WARNING WARN Test</p>
python-3.x|logging|python-logging
1
211
63,222,141
How to slice Data frame with Pandas, and operate on each slice
<p>I'm new to pandas and Python in general and I want to <strong>know your opinion about the best way</strong> to create a new data frame using slices of an &quot;original&quot; data frame.</p> <p>input (original df):</p> <pre><code> date author_id time_spent 0 2020-01-02 1 2.5 1 2020-01-02 2 0.5 2 2020-01-02 1 1.5 3 2020-01-01 1 2 4 2020-01-01 1 1 5 2020-01-01 3 3.5 6 2020-01-01 2 1.5 7 2020-01-01 2 1.5 </code></pre> <p>expected output (new df):</p> <pre><code> date author_id total_time_spent 0 2020-01-01 1 3 1 2020-01-01 2 3 2 2020-01-01 3 3.5 3 2020-01-02 1 4 4 2020-01-02 2 0.5 </code></pre> <p>I want:</p> <ul> <li>Slice the original df by day.</li> <li>Operate on each day to get the total_time_spent</li> <li>Create a new df with this data</li> </ul> <p>What do you think is the most efficient way?</p> <p>Thanks for sharing your answer!</p>
<p>What we will do is group by both columns and sum the time spent:</p> <pre><code>df = df.groupby(['date','author_id'])['time_spent'].sum().reset_index() date author_id time_spent 0 2020-01-01 1 3.0 1 2020-01-01 2 3.0 2 2020-01-01 3 3.5 3 2020-01-02 1 4.0 4 2020-01-02 2 0.5 </code></pre>
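<p>Equivalently, <code>as_index=False</code> keeps the grouping keys as columns without the extra <code>reset_index</code>, and a <code>rename</code> gives the column name from the expected output:</p> <pre><code>df = (df.groupby(['date', 'author_id'], as_index=False)['time_spent']
        .sum()
        .rename(columns={'time_spent': 'total_time_spent'}))
</code></pre>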
python|pandas|dataframe
2
212
63,196,745
Convert one-hot encoded data-frame columns into one column
<p>In the pandas data frame, the one-hot encoded vectors are present as columns, i.e.:</p> <pre><code>Rows A B C D E 0 0 0 0 1 0 1 0 0 1 0 0 2 0 1 0 0 0 3 0 0 0 1 0 4 1 0 0 0 0 4 0 0 0 0 1 </code></pre> <p>How can these columns be converted into one data frame column by label encoding them in Python? i.e.:</p> <pre><code>Rows A 0 4 1 3 2 2 3 4 4 1 5 5 </code></pre> <p>I also need a suggestion on one point: some rows have multiple 1s - how should those rows be handled, given that we can have only one category at a time?</p>
<p>Try with <code>argmax</code></p> <pre><code>#df=df.set_index('Rows') df['New']=df.values.argmax(1)+1 df Out[231]: A B C D E New Rows 0 0 0 0 1 0 4 1 0 0 1 0 0 3 2 0 1 0 0 0 2 3 0 0 0 1 0 4 4 1 0 0 0 0 1 4 0 0 0 0 1 5 </code></pre>
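<p>For the second part of the question - rows with more than one 1 - those can be flagged up front and handled however fits (a sketch):</p> <pre><code>cols = ['A', 'B', 'C', 'D', 'E']

# rows where more than one category is hot
ambiguous = df[df[cols].sum(axis=1) > 1]

# idxmax(axis=1) gives the column label of the first 1, if letters are wanted
df['Label'] = df[cols].idxmax(axis=1)
</code></pre>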
python|pandas|numpy|dataframe
6
213
59,608,406
Odoo 11 - Action Server
<p>Here is my code for a custom action declaration:</p> <pre><code> &lt;record id="scheduler_synchronization_update_school_and_grade" model="ir.cron"&gt; &lt;field name="name"&gt;Action automatisee ...&lt;/field&gt; &lt;field name="user_id" ref="base.user_root"/&gt; &lt;field name="interval_number"&gt;1&lt;/field&gt; &lt;field name="interval_type"&gt;days&lt;/field&gt; &lt;field name="numbercall"&gt;-1&lt;/field&gt; &lt;field name="doall" eval="False"/&gt; &lt;field name="model_id" ref="model_ecole_partner_school"/&gt; &lt;field name="code"&gt;model.run_grade_establishment_smartbambi()&lt;/field&gt; &lt;field name="active" eval="False"/&gt; &lt;/record&gt; </code></pre> <p>Here is the start of my function which is called:</p> <p><a href="https://i.stack.imgur.com/MyKtt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MyKtt.jpg" alt="enter image description here"></a></p> <p>Here is the error message when I update my custom module on the server:</p> <pre><code>odoo.tools.convert.ParseError: "ERREUR: une valeur NULL viole la contrainte NOT NULL de la colonne « use_relational_model » DETAIL: La ligne en échec contient (516559, 1, null, 1, 2020-01-02 14:56:39.02145, null, 2020-01-02 14:56:39.02145, ir.actions.server, Action automatisee ..., null, action, model.run_grade_establishment_smartbambi(), 5, null, null, null, null, null, null, null, null, object_write, null, null, 397, null, null, null, null, null, null, null, null, null, null, null, null, f, null, null, ir_cron, null) " while parsing /opt/odoo11/addons-odoo/Odoo/ecole/data/actions.xml:33, near &lt;record id="scheduler_synchronization_update_school_and_grade" model="ir.cron"&gt; &lt;field name="name"&gt;Action automatisee ...&lt;/field&gt; &lt;field name="user_id" ref="base.user_root"/&gt; &lt;field name="interval_number"&gt;1&lt;/field&gt; &lt;field name="interval_type"&gt;days&lt;/field&gt; &lt;field name="numbercall"&gt;-1&lt;/field&gt; &lt;field name="doall" eval="False"/&gt; &lt;field name="model_id" ref="model_ecole_partner_school"/&gt; &lt;field name="code"&gt;model.run_grade_establishment_smartbambi()&lt;/field&gt; &lt;field name="active" eval="False"/&gt; &lt;/record&gt; </code></pre> <p>Do you have an idea of ​​the problem ? I can't find anything on the internet</p> <p>thank you so much</p> <p>EDIT : </p> <p>I have solved my problem. With PGAdmin 4, the use_relational_model field was required. I have deactivate the required. </p> <p>Thanks</p>
<p>You missed the <code>state</code> field in the cron definition. This is the "Action To Do" field. Try following:</p> <pre><code> &lt;record id="scheduler_synchronization_update_school_and_grade" model="ir.cron"&gt; &lt;field name="name"&gt;Action automatisee ...&lt;/field&gt; &lt;field name="user_id" ref="base.user_root"/&gt; &lt;field name="interval_number"&gt;1&lt;/field&gt; &lt;field name="interval_type"&gt;days&lt;/field&gt; &lt;field name="numbercall"&gt;-1&lt;/field&gt; &lt;field name="doall" eval="False"/&gt; &lt;field name="model_id" ref="model_ecole_partner_school"/&gt; &lt;field name="state"&gt;code&lt;/field&gt; &lt;field name="code"&gt;model.run_grade_establishment_smartbambi()&lt;/field&gt; &lt;field name="active" eval="False"/&gt; &lt;/record&gt; </code></pre>
python|xml|odoo
1
214
48,993,334
Execute python script in Qlik Sense load script
<p>I am trying to run python script inside my load script in Qlik Sense app.</p> <p>I know that I need to put <code>OverrideScriptSecurity=1</code> in <code>Settings.ini</code></p> <p>I put</p> <pre><code>Execute py lib://python/getSolution.py 100 'bla'; // 100 and 'bla' are parameters </code></pre> <p>and I get no error in qlik sense, but script is not executed (I think) because inside the script I have</p> <pre><code>f = open("file.xml", "wb") f.write(xml) f.close </code></pre> <p>and file is not saved.</p> <p>If I run script from terminal, then script is properly executed.</p> <p>What could go wrong?</p> <p>By the way, my full path to python interpreter is</p> <pre><code>C:\Users\Marko Z\AppData\Local\Programs\Python\Python37-32\python.exe </code></pre> <h2>EDIT :</h2> <p>Even if I add this</p> <pre><code>Set vPythonPath = "C:\Users\Marko Z\AppData\Local\Programs\Python\Python37-32\python.exe"; Set vPythonFile = "C:\Users\Marko Z\Documents\Qlik\Sense\....\getSolution.py"; Execute $(vPythonPath) $(vPythonFile); </code></pre> <p>I get the same behaviour. No error, but not working,... I even see that if I change path (incorrect path) it give me an error, but incorrect file it doesn't give me an error.... (but I am sure it is the right file path...)</p> <p>My python code is</p> <pre><code>xml = "Marko" xml = xml.encode('utf-8') f = open("C:\\Users\\Marko Z\\Test.xml", "wb") f.write(xml) f.close </code></pre>
<p>I figured out what was wrong. For all others who have similar problems:</p> <p>The problem is the space in the path. If I move my script to c:\Windows\getSolution.py it works. I also need to change the python path to c:\Windows\py.exe</p> <p>So the end script looks like:</p> <pre><code>Execute c:\Windows\py.exe c:\Windows\getSolution.py 100 'bla'; </code></pre> <p>But I still need to figure out how to handle spaces in the path...</p>
python|qliksense
1
215
49,203,023
openAI Gym NameError in Google Colaboratory
<p>I've just installed openAI gym on Google Colab, but when I try to run 'CartPole-v0' environment as <a href="https://gym.openai.com/docs/" rel="noreferrer">explained here</a>.</p> <p>Code:</p> <pre><code>import gym env = gym.make('CartPole-v0') for i_episode in range(20): observation = env.reset() for t in range(100): env.render() print(observation) action = env.action_space.sample() observation, reward, done, info = env.step(action) if done: print("Episode finished after {} timesteps".format(t+1)) break </code></pre> <p>I get this:</p> <pre><code>WARN: gym.spaces.Box autodetected dtype as &lt;class 'numpy.float32'&gt;. Please provide explicit dtype. --------------------------------------------------------------------------- NameError Traceback (most recent call last) &lt;ipython-input-19-a81cbed23ce4&gt; in &lt;module&gt;() 4 observation = env.reset() 5 for t in range(100): ----&gt; 6 env.render() 7 print(observation) 8 action = env.action_space.sample() /content/gym/gym/core.py in render(self, mode) 282 283 def render(self, mode='human'): --&gt; 284 return self.env.render(mode) 285 286 def close(self): /content/gym/gym/envs/classic_control/cartpole.py in render(self, mode) 104 105 if self.viewer is None: --&gt; 106 from gym.envs.classic_control import rendering 107 self.viewer = rendering.Viewer(screen_width, screen_height) 108 l,r,t,b = -cartwidth/2, cartwidth/2, cartheight/2, -cartheight/2 /content/gym/gym/envs/classic_control/rendering.py in &lt;module&gt;() 21 22 try: ---&gt; 23 from pyglet.gl import * 24 except ImportError as e: 25 reraise(prefix="Error occured while running `from pyglet.gl import *`",suffix="HINT: make sure you have OpenGL install. On Ubuntu, you can run 'apt-get install python-opengl'. If you're running on a server, you may need a virtual frame buffer; something like this should work: 'xvfb-run -s \"-screen 0 1400x900x24\" python &lt;your_script.py&gt;'") /usr/local/lib/python3.6/dist-packages/pyglet/gl/__init__.py in &lt;module&gt;() 225 else: 226 from .carbon import CarbonConfig as Config --&gt; 227 del base 228 229 # XXX remove NameError: name 'base' is not defined </code></pre> <p>The problem is the same in <a href="https://stackoverflow.com/questions/44150310/open-ai-gym-nameerror">this question about NameError in openAI gym</a></p> <p>Nothing is being rendered. I don't know how I could use this in google colab: <code>'xvfb-run -s \"-screen 0 1400x900x24\" python &lt;your_script.py&gt;'"</code></p>
<p>One way to render gym environment in google colab is to use pyvirtualdisplay and store rgb frame array while running environment. Environment frames can be animated using animation feature of matplotlib and HTML function used for Ipython display module. You can find the implementation <a href="https://colab.research.google.com/drive/1XQaqLeUpn299ZdDDJ7CtGtJkZ0A-Tgii#scrollTo=r27PopmiPsAX&amp;line=7&amp;uniqifier=1" rel="noreferrer">here</a>. Make sure you install required libraries which you can find in the first cell of the colab. In case the first link for google colab doesn't work you can see <a href="https://colab.research.google.com/drive/1GLlB53gvZaUyqMYv8GmZQJmshRUzV_tg" rel="noreferrer">this one</a>. </p>
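<p>In case the notebook links go stale, the core of that approach is small enough to sketch here (the installs and the old gym API are assumptions tied to the library versions of that era):</p> <pre><code># in a Colab cell first:
#   !apt-get install -y xvfb python-opengl
#   !pip install gym pyvirtualdisplay
from pyvirtualdisplay import Display

display = Display(visible=0, size=(1400, 900))  # virtual frame buffer instead of a real screen
display.start()

import gym
env = gym.make('CartPole-v0')
env.reset()
frames = []
for _ in range(100):
    frames.append(env.render(mode='rgb_array'))  # grab pixels instead of opening a window
    observation, reward, done, info = env.step(env.action_space.sample())
    if done:
        break
env.close()
# frames can then be animated with matplotlib and shown via IPython's HTML display
</code></pre>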
python|google-colaboratory|openai-gym
12
216
70,940,598
I need example on how to mention using PTB
<p>I need further elaboration on this thread <a href="https://stackoverflow.com/questions/40905948/how-can-i-mention-telegram-users-without-a-username">How can I mention Telegram users without a username?</a></p> <p>Can someone give me an example of how to use the markdown style? I am also using the PTB library.</p> <p>The code I want modified:</p> <pre><code>context.bot.send_message(chat_id=-1111111111, text=&quot;hi&quot;) </code></pre>
<p>Alright, so I finally found the answer. The example below should work.</p> <pre><code>context.bot.send_message(chat_id=update.effective_chat.id, parse_mode = ParseMode.MARKDOWN_V2, text = &quot;[inline mention of a user](tg://user?id=123456789)&quot;) </code></pre>
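<p>One detail to add: <code>ParseMode</code> needs to be imported. In python-telegram-bot v13.x it is importable from the top-level package; in v20+ it moved to <code>telegram.constants</code>:</p> <pre><code>from telegram import ParseMode  # v13.x; on v20+: from telegram.constants import ParseMode

context.bot.send_message(
    chat_id=update.effective_chat.id,
    parse_mode=ParseMode.MARKDOWN_V2,
    text="[inline mention of a user](tg://user?id=123456789)",
)
</code></pre>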
python|python-telegram-bot
0
217
60,208,374
Increasing distances among nodes in NetworkX
<p>I'm trying to create a network of approximately 6500 nodes for retweets. The shape of the network looks bad, with very small distances between nodes. I've tried spring_layout to increase the distances but it didn't change anything.</p> <pre><code>nx.draw(G, with_labels=False, node_color=color_map_n, node_size=5,layout=nx.spring_layout(G,k=100000)) </code></pre> <p><a href="https://i.stack.imgur.com/Qlqqa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qlqqa.png" alt="my network"></a></p>
<p>I swapped "layout=..." with "pos=..." and it worked</p>
python|matplotlib|networkx|pos
0
218
67,953,320
Selenium not sending keys to input field
<p>I'm trying to scrape this url <a href="https://www.veikkaus.fi/fi/tulokset#!/tarkennettu-haku" rel="nofollow noreferrer">https://www.veikkaus.fi/fi/tulokset#!/tarkennettu-haku</a></p> <p>There's three main parts to the scrape:</p> <ol> <li>Select the correct game type from &quot;Valitse peli&quot; <br /> For this I want to choose &quot;Eurojackpot&quot;</li> <li>Set the date range from variables. In the full version I'll be generating dates based on the 12 week range limit. For now I've just chose two dates that are close enough. This date range needs to be inputted into the two input fields below &quot;Näytä tulokset aikaväliltä&quot;</li> <li>I need to click the show results button. (Labeled &quot;Näytä Tulokset&quot;)</li> </ol> <p>I believe my code does parts 1 and 3 correct, but I'm having trouble with part 2. For some reason the scraper isn't sending the dates to the elements. I've tried <code>click</code>, <code>clear</code> and then <code>send_keys</code>. I've also tried to first send <code>key_down(Keys.CONTROL)</code> then <code>send_keys(&quot;a&quot;)</code> and then <code>send_keys(date)</code>, but none of these are working. The site always goes back to the date it loads up with (current date).</p> <p>Here's my full code:</p> <pre><code># -*- coding: utf-8 -*- &quot;&quot;&quot; Created on Sat Jun 12 12:05:40 2021 @author: Samu Kaarlela &quot;&quot;&quot; from selenium import webdriver from selenium.webdriver import ActionChains from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By from selenium.webdriver.support.select import Select from selenium.webdriver.chrome.options import Options url = &quot;https://www.veikkaus.fi/fi/tulokset#!/tarkennettu-haku&quot; wd = r&quot;C:\Users\Oppilas\Desktop\EJ prediction\scraper\chromedriver&quot; chrome_options = Options() chrome_options.add_argument(&quot;--headless&quot;) webdriver = webdriver.Chrome( wd, options=chrome_options ) from_date = &quot;05.05.2021&quot; to_date = &quot;11.06.2021&quot; with webdriver as driver: wait = WebDriverWait(driver,10) driver.get(url) game_type_element = driver.find_element_by_css_selector( &quot;#choose-game&quot; ) slc = Select(game_type_element) slc.select_by_visible_text(&quot;Eurojackpot&quot;) from_date_element = WebDriverWait( driver, 20).until( EC.element_to_be_clickable( ( By.CSS_SELECTOR, &quot;#date-range div:nth-child(1) input&quot; ) ) ) ActionChains(driver). \ click(from_date_element). \ key_down(Keys.CONTROL). \ send_keys(&quot;a&quot;). \ send_keys(from_date). \ perform() print(from_date_element.get_attribute(&quot;value&quot;)) driver.save_screenshot(&quot;./image.png&quot;) driver.close() </code></pre> <p>EDIT:</p> <p>I just realized that when selected the input field goes from #date-range #from-date to #date-range #from-date #focus-visible</p>
<p>For me, simply doing the following works:</p> <pre><code>driver.find_element_by_css_selector('.date-input.from-date').send_keys(from_date) ActionChains(driver).send_keys(Keys.RETURN).perform() driver.find_element_by_css_selector('.date-input.to-date').send_keys(to_date) ActionChains(driver).send_keys(Keys.RETURN).perform() </code></pre>
python|css|selenium|web-scraping
1
219
67,015,296
Python Multiple Datetimes To One
<p>I have two types of datetime format in a Dataframe.</p> <pre><code>Date 2019-01-06 00:00:00 ('%Y-%d-%m %H:%M:%S') 07/17/2018 ('%m/%d/%Y') </code></pre> <p>I want to convert them into one specific datetime format. Below is the script that I am using:</p> <pre><code>d1 = pd.to_datetime(df1['DATE'], format='%m/%d/%Y',errors='coerce') d2 = pd.to_datetime(df1['DATE'], format='%Y-%d-%m %H:%M:%S',errors='coerce') df1['Date'] = d2.fillna(d1) </code></pre> <p>While doing this, the code is interpreting some of the dates with day and month swapped. For example: 7th January 2018 comes out as July 1st 2018. This problem is associated with the format ('%Y-%d-%m %H:%M:%S') after running the above script.</p>
<p>If the formats are mixed even within values like <code>2019-01-06 00:00:00</code> - meaning the date could be January 6 or June 1 - the only way is to prioritize one format. E.g. here months come first: apply format <code>d2</code> first and then <code>d3</code> in chained <code>fillna</code>:</p> <pre><code>d1 = pd.to_datetime(df1['DATE'], format='%m/%d/%Y',errors='coerce') d2 = pd.to_datetime(df1['DATE'], format='%Y-%m-%d %H:%M:%S',errors='coerce') d3 = pd.to_datetime(df1['DATE'], format='%Y-%d-%m %H:%M:%S',errors='coerce') df1['Date'] = d2.fillna(d1).fillna(d3) </code></pre> <p>If you need to prioritize days first:</p> <pre><code>df1['Date'] = d3.fillna(d1).fillna(d2) </code></pre> <p>With the sample data it is possible to check the difference:</p> <pre><code>print (df1) DATE 0 2019-01-06 00:00:00 1 2019-01-15 00:00:00 2 2019-20-10 00:00:00 3 07/17/2018 d1 = pd.to_datetime(df1['DATE'], format='%m/%d/%Y',errors='coerce') d2 = pd.to_datetime(df1['DATE'], format='%Y-%m-%d %H:%M:%S',errors='coerce') d3 = pd.to_datetime(df1['DATE'], format='%Y-%d-%m %H:%M:%S',errors='coerce') df1['Date1'] = d2.fillna(d1).fillna(d3) df1['Date2'] = d3.fillna(d1).fillna(d2) print (df1) DATE Date1 Date2 0 2019-01-06 00:00:00 2019-01-06 2019-06-01 &lt;- difference 1 2019-01-15 00:00:00 2019-01-15 2019-01-15 2 2019-20-10 00:00:00 2019-10-20 2019-10-20 3 07/17/2018 2018-07-17 2018-07-17 </code></pre>
python|pandas|dataframe|datetime|datetime-format
2
220
72,257,321
pandas change all rows with Type X if 1 Type X Result = 1
<p>Here is a simple pandas df:</p> <pre><code>&gt;&gt;&gt; df Type Var1 Result 0 A 1 NaN 1 A 2 NaN 2 A 3 NaN 3 B 4 NaN 4 B 5 NaN 5 B 6 NaN 6 C 1 NaN 7 C 2 NaN 8 C 3 NaN 9 D 4 NaN 10 D 5 NaN 11 D 6 NaN </code></pre> <p>The object of the exercise is: if column Var1 = 3, set Result = 1 for all that Type.</p> <p>This finds the rows with 3 in Var1 and sets Result to 1,</p> <pre><code>df['Result'] = df['Var1'].apply(lambda x: 1 if x == 3 else 0) </code></pre> <p>but I can't figure out how to then catch all the same Type and make them 1. In this case it should be all the As and all the Cs. Doesn't have to be a one-liner.</p> <p>Any tips please?</p>
<p>Create a boolean mask and, to map <code>True/False</code> to <code>1/0</code>, convert the values to integers:</p> <pre><code>df['Result'] = df['Type'].isin(df.loc[df['Var1'].eq(3), 'Type']).astype(int) #alternative df['Result'] = np.where(df['Type'].isin(df.loc[df['Var1'].eq(3), 'Type']), 1, 0) print (df) Type Var1 Result 0 A 1 1 1 A 2 1 2 A 3 1 3 B 4 0 4 B 5 0 5 B 6 0 6 C 1 1 7 C 2 1 8 C 3 1 9 D 4 0 10 D 5 0 11 D 6 0 </code></pre> <p><strong>Details</strong>:</p> <p>Get all <code>Type</code> values that match the condition:</p> <pre><code>print (df.loc[df['Var1'].eq(3), 'Type']) 2 A 8 C Name: Type, dtype: object </code></pre> <p>Test the original <code>Type</code> column against the filtered types:</p> <pre><code>print (df['Type'].isin(df.loc[df['Var1'].eq(3), 'Type'])) 0 True 1 True 2 True 3 False 4 False 5 False 6 True 7 True 8 True 9 False 10 False 11 False Name: Type, dtype: bool </code></pre> <p>Or use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with <code>any</code> to test whether at least one value matches; this solution is slower for larger DataFrames:</p> <pre><code>df['Result'] = df['Var1'].eq(3).groupby(df['Type']).transform('any').astype(int) </code></pre>
pandas
1
221
50,909,754
Referencing folder without absolute path
<p>I am writing a code that will be implemented alongside my company's software. My code is written in Python and requires access to a data file (<code>.ini</code> format) that will be stored on the user's desktop, inside the software's shortcuts folder.</p> <p>This being said, I want to be able to read/write from that file, but I can't simply reference the desktop as <code>C:\USERS\DESKTOP\Parameters\ParameterUpdate.ini</code>, since the absolute path will be different across different systems.</p> <p>Is there a way to ensure that I am referencing whatever the desktop's absolute path is?</p>
<p>In Windows, the desktop's absolute path looks like this:</p> <pre><code>%systemdrive%\users\%username%\Desktop </code></pre> <p>So this path will fit your requirements:</p> <pre><code>%systemdrive%\users\%username%\Desktop\Parameters\ParameterUpdate.ini </code></pre> <p>Please make sure you don't actually mean the public desktop path, which would be:</p> <pre><code>%public%\Desktop\Parameters\ParameterUpdate.ini </code></pre>
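<p>From Python, the same locations can be resolved without hard-coding anything (a sketch for Windows; <code>USERPROFILE</code> and <code>PUBLIC</code> are standard environment variables there):</p> <pre><code>import os

user_desktop = os.path.join(os.environ['USERPROFILE'], 'Desktop')
ini_path = os.path.join(user_desktop, 'Parameters', 'ParameterUpdate.ini')

# the all-users desktop, if the installer puts the shortcuts folder there
public_ini = os.path.join(os.environ['PUBLIC'], 'Desktop',
                          'Parameters', 'ParameterUpdate.ini')
</code></pre>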
python|path
1
222
35,267,743
Subscription modelling in Flask SQLAlchemy
<p>I am trying to model the following scenario in Flask SQLAlchemy:</p> <p>There are a list of <code>SubscriptionPacks</code> available for purchase. When a particular <code>User</code> buys a <code>SubscriptionPack</code> they start an instance of that <code>Subscription</code>.</p> <p>The model is as follows:</p> <p><a href="https://i.stack.imgur.com/CmldB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CmldB.png" alt="ERD"></a></p> <p>A <code>User</code> can have many <code>Subscriptions</code> (only one of which will be Active at a time) and each <code>Subscription</code> will be referencing one <code>SubscriptionPack</code>.</p> <p>How would this be modelled in SQLAlchemy?</p> <p>Currently I have the <code>User.id</code> and <code>SubscriptionPack.id</code> referenced as <code>db.ForeignKey</code> in the <code>Subscriptions</code> model. And I have <code>Subscriptions</code> referenced as a <code>db.Relationship</code> in the <code>Users</code> table. This seems inconsistent and wrong and is leading me to have to hand-code a lot of SQL statements to return the right results.</p> <p>Any help as to how to do this right?</p>
<p>For those who stumble upon this, what I was looking for was the <a href="http://docs.sqlalchemy.org/en/latest/orm/basic_relationships.html" rel="nofollow">bidirectional SQLAlchemy Association Object pattern</a>.</p> <p>This allows the intermediate table of a Many-to-Many to have its own stored details. In my instance above the <code>Subscription</code> table needed to be an Association Object (has its own class).</p>
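<p>A minimal sketch of that pattern in Flask-SQLAlchemy terms, with names following the question (the column details are assumptions, and <code>db</code> is the usual <code>SQLAlchemy(app)</code> instance):</p> <pre><code>class Subscription(db.Model):
    """Association object linking a User to a SubscriptionPack."""
    id = db.Column(db.Integer, primary_key=True)
    user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
    pack_id = db.Column(db.Integer, db.ForeignKey('subscription_pack.id'))
    active = db.Column(db.Boolean, default=True)  # only one active per user

    user = db.relationship('User', back_populates='subscriptions')
    pack = db.relationship('SubscriptionPack', back_populates='subscriptions')

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    subscriptions = db.relationship('Subscription', back_populates='user')

class SubscriptionPack(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    subscriptions = db.relationship('Subscription', back_populates='pack')
</code></pre>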
python|orm|flask|sqlalchemy|flask-sqlalchemy
1
223
26,545,188
Append information to failed tests
<p>I have some details I have to print out for a failed test. Right now I'm just outputting this information to STDOUT and I use the -s to see this information. But I would like to append this information to the test case details when it failed, and not need to use the -s option.</p>
<p>You can just keep printing to stdout and simply not use <code>-s</code>. If you do this, py.test will put the details you printed next to the assertion failure message when the test fails, in a "captured stdout" section.</p> <p>When using <code>-s</code> things get worse: the details are printed even when a test passes, and they appear during the test run instead of nicely in a section of the failure report.</p>
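<p>Concretely: with a test like the one below, a plain <code>pytest</code> run shows the printed details in a &quot;Captured stdout call&quot; section of the failure report, no <code>-s</code> required (a sketch):</p> <pre><code>def test_totals_match():
    expected, actual = 42, 41
    # extra details for the failure report; only shown if the test fails
    print("expected=%d actual=%d delta=%d" % (expected, actual, expected - actual))
    assert actual == expected
</code></pre>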
python|pytest
0
224
57,763,773
Install Numpy Requirement in a Dockerfile. Results in error
<p>I am attempting to install a numpy dependancy inside a docker container. (My code heavily uses it). On building the container the numpy library simply does not install and the build fails. This is on OS raspbian-buster/stretch. This does however work when building the container on MAC OS. </p> <p>I suspect some kind of python related issue, but can not for the life of me figure out how to make it work.</p> <p>I should point out that removing the pip install numpy from the requirements file and using it in its own RUN statement in the dockerfile does not solve the issue.</p> <p>The Dockerfile:</p> <pre><code>FROM python:3.6 ENV PYTHONUNBUFFERED 1 ENV APP /app RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &amp;&amp; echo $TZ &gt; /etc/timezone RUN mkdir $APP WORKDIR $APP ADD requirements.txt . RUN pip install -r requirements.txt COPY . . </code></pre> <p>The requirements.txt contains all the project requirements, amounf which is numpy.</p> <pre><code>Step 6/15 : RUN pip install numpy==1.14.3 ---&gt; Running in 266a2132b078 Collecting numpy==1.14.3 Downloading https://files.pythonhosted.org/packages/b0/2b/497c2bb7c660b2606d4a96e2035e92554429e139c6c71cdff67af66b58d2/numpy-1.14.3.zip (4.9MB) Building wheels for collected packages: numpy Building wheel for numpy (setup.py): started Building wheel for numpy (setup.py): still running... Building wheel for numpy (setup.py): still running... </code></pre> <p>EDIT:</p> <p>So after the comment by <a href="https://stackoverflow.com/users/8872639/skybunk">skybunk</a> and the suggestion to head to official docs, some more debugging on my part, the solution wound up being pretty simple. Thanks <a href="https://stackoverflow.com/users/8872639/skybunk">skybunk</a> to you go all the glory. Yay.</p> <p>Solution:</p> <p><strong>Use alpine and install python install package dependencies, upgrade pip before doing a pip install requirements.</strong></p> <p>This is my edited Dockerfile - working obviously...</p> <pre><code>FROM python:3.6-alpine3.7 RUN apk add --no-cache --update \ python3 python3-dev gcc \ gfortran musl-dev \ libffi-dev openssl-dev RUN pip install --upgrade pip ENV PYTHONUNBUFFERED 1 ENV APP /app RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &amp;&amp; echo $TZ &gt; /etc/timezone RUN mkdir $APP WORKDIR $APP ADD requirements.txt . RUN pip install -r requirements.txt COPY . . </code></pre>
<p>To use Numpy on python3 here, we first head over to the <a href="https://docs.scipy.org/doc/numpy/user/building.html" rel="noreferrer">official documentation</a> to find what dependencies are required to build Numpy.</p> <p>Mainly these 5 packages + their dependencies must be installed:</p> <ol> <li>Python3 - 70 mb</li> <li>Python3-dev - 25 mb</li> <li>gfortran - 20 mb</li> <li>gcc - 70 mb</li> <li>musl-dev -10 mb (used for tracking unexpected behaviour/debugging)</li> </ol> <p>An POC setup would look something like this -</p> <p>Dockerfile:</p> <pre><code>FROM gliderlabs/alpine ADD repositories.txt /etc/apk/repositories RUN apk add --no-cache --update \ python3 python3-dev gcc \ gfortran musl-dev ADD requirements-pip.txt . RUN pip3 install --upgrade pip setuptools &amp;&amp; \ pip3 install -r requirements-pip.txt ADD . /app WORKDIR /app ENV PYTHONPATH=/app/ ENTRYPOINT python3 testscript.py </code></pre> <p>repositories.txt</p> <pre><code>http://dl-5.alpinelinux.org/alpine/v3.4/main </code></pre> <p>requirements-pip.txt</p> <pre><code>numpy </code></pre> <p>testscript.py</p> <pre><code>import numpy as np def random_array(a, b): return np.random.random((a, b)) a = random_array(2,2) b = random_array(2,2) print(np.dot(a,b)) </code></pre> <p>To run this - clone <a href="https://github.com/gliderlabs/docker-alpine" rel="noreferrer">alpine</a>, build it using "docker build -t gliderlabs/alpine ."</p> <p>Build and Run your Dockerfile</p> <pre><code>docker build -t minidocker . docker run minidocker </code></pre> <p>Output should be something like this-</p> <pre><code>[[ 0.03573961 0.45351115] [ 0.28302967 0.62914049]] </code></pre> <p>Here's the <a href="https://github.com/vibhusheet/NumpyDocker" rel="noreferrer">git link</a>, if you want to test it out</p>
numpy|docker|docker-compose
7
225
28,535,121
Python program can not import dot parser
<p>I am trying to run a huge evolution simulating python software from the command line. The software is dependent on the following python packages:</p> <p>1-networkX </p> <p>2-pyparsing</p> <p>3-numpy</p> <p>4-pydot </p> <p>5-matplotlib</p> <p>6-graphviz</p> <p>The error I get is this:</p> <pre><code>Couldn't import dot_parser, loading of dot files will not be possible. initializing with file= initAdapt.py in model dir= ./Test_adaptation// Traceback (most recent call last): File "run_evolution.py", line 230, in &lt;module&gt; gr.write_dot( os.path.join(test_output_dir, 'test_net.dot') ) File "/Library/Python/2.7/site-packages/pydot.py", line 1602, in &lt;lambda&gt; lambda path, f=frmt, prog=self.prog : self.write(path, format=f, prog=prog)) File "/Library/Python/2.7/site-packages/pydot.py", line 1696, in write dot_fd.write(self.create(prog, format)) File "/Library/Python/2.7/site-packages/pydot.py", line 1740, in create self.write(tmp_name) File "/Library/Python/2.7/site-packages/pydot.py", line 1694, in write dot_fd.write(self.to_string()) File "/Library/Python/2.7/site-packages/pydot.py", line 1452, in to_string graph.append( node.to_string()+'\n' ) File "/Library/Python/2.7/site-packages/pydot.py", line 722, in to_string node_attr.append( attr + '=' + quote_if_necessary(value) ) TypeError: cannot concatenate 'str' and 'int' objects </code></pre> <p>I have already tried the solution suggested for a similar <a href="https://stackoverflow.com/questions/15951748/pydot-and-graphviz-error-couldnt-import-dot-parser-loading-of-dot-files-will">question</a> on stack overflow. I still get the same error. Here are the package versions I am using and my python version. </p> <ul> <li>I'm using python 2.7.6 </li> <li>typing the command <code>which -a python</code> yields the result: "/usr/bin/python".</li> </ul> <p>1-pyparsing (1.5.7)</p> <p>2-pydot (1.0.2)</p> <p>3-matplotlib (1.3.1)</p> <p>4-graphviz (0.4.2)</p> <p>5-networkx (0.37)</p> <p>6-numpy (1.8.0rc1)</p> <p>Any ideas? Seeing that the solution to similar questions is not working for me, I think the problem might be more fundamental in my case. Something wrong with the way I installed my python perhaps. </p>
<p>Any particular reason you're not using the newest version of pydot?</p> <p>This revision of 1.0.2 looks like it fixes exactly that problem:</p> <p><a href="https://code.google.com/p/pydot/source/diff?spec=svn10&amp;r=10&amp;format=side&amp;path=/trunk/pydot.py" rel="nofollow">https://code.google.com/p/pydot/source/diff?spec=svn10&amp;r=10&amp;format=side&amp;path=/trunk/pydot.py</a></p> <p>See line 722.</p>
python|numpy|graphviz|pydot
3
226
53,681,564
How to extract specific time period from Alpha Vantage in Python?
<p>outputsize='compact' is giving last 100 days, and outputsize='full' is giving whole history which is too much data. Any idea how to write a code that extract some specific period? </p> <pre><code>ts=TimeSeries(key='KEY', output_format='pandas') data, meta_data = ts.get_daily(symbol='MSFT', outputsize='compact') print(data) </code></pre> <p>Thanks.</p>
<p>This is how I was able to get the dates to work</p> <pre><code>ts = TimeSeries (key=api_key, output_format = &quot;pandas&quot;) data_daily, meta_data = ts.get_daily_adjusted(symbol=stock_ticker, outputsize ='full') start_date = datetime.datetime(2000, 1, 1) end_date = datetime.datetime(2019, 12, 31) # Create a filtered dataframe, and change the order it is displayed. date_filter = data_daily[(data_daily.index &gt; start_date) &amp; (data_daily.index &lt;= end_date)] date_filter = date_filter.sort_index(ascending=True) </code></pre> <p>If you want to iterate trough the rows in the new dataframe</p> <pre><code>for index, row in date_filter.iterrows(): </code></pre>
python|alpha-vantage
0
227
41,189,951
How do I get hundreds of DLL files?
<p>I am using python and I am trying to install the GDAL library. I kept having an error telling me that many DLL files were missing, so I used the software Dependency Walker and it showed me that 330 DLL files were missing...</p> <p>My question is: How do I get that many files without downloading them one by one from a website?</p>
<p>First of all, never download <code>.dll</code> files from shady websites.</p> <p>The best way of repairing missing dependencies is to reinstall the software that shipped the <code>.dll</code> files completely.</p>
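<p>For GDAL specifically, the least painful route is usually a distribution that bundles all the native dependencies. A hedged sketch, assuming you can use conda (the package and channel names below are the commonly used ones, so verify them for your setup):</p> <pre><code>pip uninstall gdal
conda install -c conda-forge gdal
</code></pre> <p>On Windows, the OSGeo4W installer is another option that ships GDAL together with the DLLs it depends on.</p>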
python|dll|gdal
2
228
54,477,877
How to change the performance metric from accuracy to precision, recall and other metrics in the code below?
<p>As a beginner in scikit-learn, and trying to classify the iris dataset, I'm having <em>problems with adjusting the scoring metric</em> from <code>scoring='accuracy'</code> to <em>others like precision, recall, f1</em> etc., in the cross-validation step. Below is the <strong>full</strong> code sample (<strong>enough to start at</strong> <code># Test options and evaluation metric</code>).</p> <pre class="lang-python prettyprint-override"><code># Load libraries import pandas from pandas.plotting import scatter_matrix import matplotlib.pyplot as plt from sklearn import model_selection # for command model_selection.cross_val_score from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.naive_bayes import GaussianNB from sklearn.svm import SVC # Load dataset url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/iris.csv" names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class'] dataset = pandas.read_csv(url, names=names) # Split-out validation dataset array = dataset.values X = array[:,0:4] Y = array[:,4] validation_size = 0.20 seed = 7 X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed) # Test options and evaluation metric seed = 7 scoring = 'accuracy' #Below, we build and evaluate 6 different models # Spot Check Algorithms models = [] models.append(('LR', LogisticRegression())) models.append(('LDA', LinearDiscriminantAnalysis())) models.append(('KNN', KNeighborsClassifier())) models.append(('CART', DecisionTreeClassifier())) models.append(('NB', GaussianNB())) models.append(('SVM', SVC())) # evaluate each model in turn, we calculate the cv-scores, ther mean and std for each model # results = [] names = [] for name, model in models: #below, we do k-fold cross-validation kfold = model_selection.KFold(n_splits=10, random_state=seed) cv_results = model_selection.cross_val_score(model, X_train, Y_train, cv=kfold, scoring=scoring) results.append(cv_results) names.append(name) msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std()) print(msg) </code></pre> <p>Now, apart from scoring ='accuracy', I'd like to evaluate other performance metrics for this multiclass classification problem. But when I use, scoring='precision', it raises:</p> <pre class="lang-python prettyprint-override"><code>ValueError: Target is multiclass but average='binary'. Please choose another average setting. </code></pre> <p><strong>My questions are:</strong></p> <p>1) I guess the above is happening because 'precision' and 'recall' are defined in scikit-learn only for binary classification-is that correct? If yes, then, which command(s) should replace <code>scoring='accuracy'</code> in the code above?</p> <p>2) If I want to compute the confusion matrix, precision and recall for each fold while performing the k-fold cross validation, what commands should I type? </p> <p>3) For the sake of experimentation, I tried scoring='balanced_accuracy', only to find:</p> <pre class="lang-python prettyprint-override"><code>ValueError: 'balanced_accuracy' is not a valid scoring value. 
</code></pre> <p><em>Why is this happening, when the model evaluation documentation (<a href="https://scikit-learn.org/stable/modules/model_evaluation.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/model_evaluation.html</a>) clearly says balanced_accuracy is a scoring method</em>? I'm quite confused here, so actual code showing how to evaluate other performance metrics would be appreciated! Thanks in advance!!</p>
<blockquote> <p>1) I guess the above is happening because 'precision' and 'recall' are defined in scikit-learn only for binary classification-is that correct?</p> </blockquote> <p>No. Precision &amp; recall are certainly valid for multi-class problems, too - see the docs for <a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html#sklearn.metrics.precision_score" rel="nofollow noreferrer">precision</a> &amp; <a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score" rel="nofollow noreferrer">recall</a>.</p> <blockquote> <p>If yes, then, which command(s) should replace scoring='accuracy' in the code above?</p> </blockquote> <p>The problem arises because, as you can see from the documentation links I have provided above, the default setting for these metrics is for binary classification (<code>average='binary'</code>). In your case of multi-class classification, you need to specify which exact "version" of the particular metric you are interested in (there are more than one); have a look at the <a href="https://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values" rel="nofollow noreferrer">relevant page</a> of the scikit-learn documentation, but some valid options for your <code>scoring</code> parameter could be:</p> <pre><code>'precision_macro' 'precision_micro' 'precision_weighted' 'recall_macro' 'recall_micro' 'recall_weighted' </code></pre> <p>The documentation link above contains even an example of using <code>'recall_macro'</code> with the iris data - be sure to check it.</p> <blockquote> <p>2) If I want to compute the confusion matrix, precision and recall for each fold while performing the k-fold cross validation, what commands should I type? </p> </blockquote> <p>This is not exactly trivial, but you can see a way in my answer for <a href="https://stackoverflow.com/questions/54201464/cross-validation-metrics-in-scikit-learn-for-each-data-split/54202609#54202609">Cross-validation metrics in scikit-learn for each data split</a></p> <blockquote> <p>3) For the sake of experimentation, I tried scoring='balanced_accuracy', only to find:</p> <pre><code> ValueError: 'balanced_accuracy' is not a valid scoring value. </code></pre> </blockquote> <p>This is because you are probably using an older version of scikit-learn. <code>balanced_accuracy</code> became available only in v0.20 - you can verify that <a href="https://scikit-learn.org/0.18/modules/model_evaluation.html#common-cases-predefined-values" rel="nofollow noreferrer">it is not available in v0.18</a>. Upgrade your scikit-learn to v0.20 and you should be fine.</p>
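<p>Concretely, in your own loop you would only need to change the <code>scoring</code> string; a minimal sketch using a macro-averaged metric:</p> <pre><code>scoring = 'precision_macro'  # or 'recall_macro', 'f1_macro', ...

kfold = model_selection.KFold(n_splits=10, random_state=seed)
cv_results = model_selection.cross_val_score(model, X_train, Y_train, cv=kfold, scoring=scoring)
</code></pre>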
python|machine-learning|scikit-learn|multiclass-classification
2
229
39,711,473
Cannot find django.views.generic. Where is generic? Looked in all folders for the file
<p>I know this is a strange question but I am lost on what to do. I cloned pinry... It is working and up. I am trying to find django.views.generic. I have searched the directory in my text editor and I have looked in django.views, but I cannot see generic (only a folder with the name "generic"). I can't understand where the generic file is. It is used in many imports and to extend classes, but I cannot find the file to see the imported functions. I have a good understanding of files and imports and I would say at this stage I am just above noob level. So is there something I am missing here? How come I cannot find this file? If I go to from django.core.urlresolvers import reverse, I can easily find this, but not e.g. from django.views.generic import CreateView.</p> <p>Where is generic?</p>
<p>Try running this from a Python interpreter: </p> <pre><code>&gt;&gt;&gt; import django.views.generic &gt;&gt;&gt; django.views.generic.__file__ </code></pre> <p>This will show you the location of the <code>gerneric</code> as a string path. In my case the output is:</p> <pre><code>'/.../python3.5/site-packages/django/views/generic/__init__.py' </code></pre> <p>If you look at this <code>__init__.py</code> you will not see the code for any of the generic <code>*View</code> classes. However, these classes can still be imported from the path <code>django.views.generic</code> (if I am not mistaken, this is because the <code>*View</code> classes are part of the <a href="https://stackoverflow.com/a/44843/3642398"><code>__all__</code></a> list in <code>django/views/generic/__init__.py</code>). In the case of <code>CreateView</code>, it is actually in <code>django/views/generic/edit.py</code>, although it can be imported from <code>django.views.generic</code>, because of the way the <code>__init__.py</code> is set up.</p> <p>This is technique is generally useful when you want to find the path to a <code>.py</code> file. Also useful: if you use it on its own in a script (<code>print(__file__)</code>), it will give you the path to the script itself.</p>
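<p>A related trick: if you want the file where a specific class is actually defined, rather than the package it is imported from, the standard library's <code>inspect</code> module can do it (small sketch; the printed path will of course differ on your machine):</p> <pre><code>&gt;&gt;&gt; import inspect
&gt;&gt;&gt; from django.views.generic import CreateView
&gt;&gt;&gt; inspect.getsourcefile(CreateView)
'/.../site-packages/django/views/generic/edit.py'
</code></pre>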
python|django|generics
3
230
38,022,480
Django- limit_choices_to using 2 different tables
<p>I fear that what I am trying to do might be impossible but here we go:</p> <p>Among my models, I have the following</p> <pre><code>class ParentCategory(models.Model):
    name = models.CharField(max_length=128)

    def __unicode__(self):
        return self.name

class Category(models.Model):
    parentCategory = models.ForeignKey(ParentCategory,
                                       on_delete=models.CASCADE,
                                       )
    name = models.CharField(max_length=128)

    def __unicode__(self):
        return self.name

class Achievement(models.Model):
    milestone = models.ForeignKey(Milestone, on_delete=models.CASCADE)
    description = models.TextField( )
    level_number = models.IntegerField()
    completeion_method = models.ForeignKey(Category,
                                           on_delete = models.CASCADE,
                                           limit_choices_to={'parentCategory.name':'comp method'})

    def __unicode__(self):
        # TODO: return description[0,75] + '...'
</code></pre> <p>I know the completion method field throws an error because it is not correct syntax. But is there a way to achieve the wanted result using a similar method?</p>
<p>Maybe this will work:</p> <pre><code>limit_choices_to={'parentCategory__name': 'comp method'} </code></pre>
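<p>In context, the field from your model would then look like this (Django's lookup syntax uses a double underscore to follow the foreign key to <code>ParentCategory</code> and filter on its <code>name</code>):</p> <pre><code>completeion_method = models.ForeignKey(Category,
                                       on_delete=models.CASCADE,
                                       limit_choices_to={'parentCategory__name': 'comp method'})
</code></pre>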
python|django|django-models
1
231
58,057,031
How to reduce the retry count for kubernetes cluster in kubernetes-client-python
<p>I need to reduce the retry count for an unavailable/deleted kubernetes cluster using kubernetes-client-python; currently it is 3 by default.</p> <pre><code>WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('&lt;urllib3.connection.VerifiedHTTPSConnection object at 0x00000000096E3860&gt;: Failed to establish a new connection: [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',)': /api/v1/pods
WARNING Retrying (Retry(total=1,....... /api/v1/pods
WARNING Retrying (Retry(total=0,....... /api/v1/pods
</code></pre> <p>After 3 retries it throws an exception.</p> <p>Is there any way to reduce the count?</p> <p>Example code:</p> <pre><code>from kubernetes import client, config

config.load_kube_config(config_file='location-for-kube-config')
v1 = client.CoreV1Api()
ret = v1.list_pod_for_all_namespaces()
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
<p>Sadly it seems that it's not possible because:</p> <p>Python client use urlib3 PoolManager to make requests as you can see there </p> <p><a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/client/rest.py#L162" rel="nofollow noreferrer">https://github.com/kubernetes-client/python/blob/master/kubernetes/client/rest.py#L162</a></p> <pre><code>r = self.pool_manager.request(method, url, body=request_body, preload_content=_preload_content, timeout=timeout, headers=headers) </code></pre> <p>and underhood it uses urlopen with default parameters as you can see there</p> <p><a href="https://urllib3.readthedocs.io/en/1.2.1/pools.html#urllib3.connectionpool.HTTPConnectionPool.urlopen" rel="nofollow noreferrer">https://urllib3.readthedocs.io/en/1.2.1/pools.html#urllib3.connectionpool.HTTPConnectionPool.urlopen</a></p> <p><code>urlopen(..., retries=3, ...)</code></p> <p>so there is now way to pass other value here - you must fork official lib to achieve that.</p>
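<p>If forking feels too heavy, one possible workaround is to monkey-patch the pool manager's connection arguments after creating the API object. This is an untested sketch that reaches into client internals, so the attribute names may differ between client versions:</p> <pre><code>import urllib3
from kubernetes import client, config

config.load_kube_config(config_file='location-for-kube-config')
v1 = client.CoreV1Api()

# ask every connection pool the client creates from now on to retry only once
pool = v1.api_client.rest_client.pool_manager
pool.connection_pool_kw['retries'] = urllib3.util.retry.Retry(total=1)
</code></pre>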
python|kubernetes|kubernetes-pod|kubernetes-python-client|kubeconfig
2
232
55,481,872
Sum values by group, cumulatively add to a separate list or numpy array, and use the last value in a conditional statement
<p>I want to sum the values for multi-level index pandas dataframe. I would then like to add this value to another value in a cumulative fashion. I would then like to use a conditional statement which is dependant on the last value of this cumulative list for the next index value of the same level.</p> <p>I have been able to sum the values for of the multi-level index but unable to add this cumulatively to a list which I have stored separately. </p> <p>Here is a snippet of my dataframe. There is rather a lot of code but I feel it is required to fully explain my problem:</p> <pre><code> import pandas as pd import numpy as np balance = [20000] data = {'EVENT_ID': [112335580,112335580,112335580,112335580,112335580,112335580,112335580,112335580, 112335582, 112335582,112335582,112335582,112335582,112335582,112335582,112335582,112335582,112335582, 112335582,112335582,112335582], 'SELECTION_ID': [6356576,2554439,2503211,6297034,4233251,2522967,5284417,7660920,8112876,7546023,8175276,8145908, 8175274,7300754,8065540,8175275,8106158,8086265,2291406,8065533,8125015], 'BSP': [5.080818565,6.651493872,6.374683435,24.69510797,7.776082305,11.73219964,270.0383021,4,8.294425408,335.3223613, 14.06040142,2.423340019,126.7205863,70.53780982,21.3328554,225.2711962,92.25113066,193.0151362,3.775394142, 95.3786641,17.86333041], 'WIN_LOSE':[0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0], 'INDICATOR': [1,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0], 'POT_BET': [2.258394,2.257205,2.255795,2.255495,2.254286,2.250119,2.237375,2.120843,2.256831,2.253802,2.244174,2.232902, 2.226021,2.220088,2.160382,2.143235,2.141063,2.122452,2.095736,2.086548,2.065200], 'LIABILITY': [2.258394,2.257205,12.124184,12.746919,15.275225,24.148729,53.014851,570.587899,2.256831,6.255188, 16.369963,29.162601,37.538122,45.140722,150.228225,195.572610,202.070630,266.835913,402.412997, 467.952670,690.442601]} df = pd.DataFrame(data, columns=['EVENT_ID','SELECTION_ID','BSP','WIN_LOSE','INDICATOR','POT_BET','LIABILITY']) df = df.sort_values(["EVENT_ID",'BSP']) df.set_index(['EVENT_ID', 'SELECTION_ID'], inplace=True) df['BET'] = np.where(df.groupby(level = 0)['LIABILITY'].transform('sum') &lt; 0.75*balance[-1], df['POT_BET'], 0) df.loc[(df.INDICATOR == 1) &amp; (df.WIN_LOSE == 1), 'RESULT'] = df['BSP'] * df['BET'] - df['BET'] df.loc[(df.INDICATOR == 1) &amp; (df.WIN_LOSE == 0), 'RESULT'] = - df['BET'] df.loc[(df.INDICATOR == 0) &amp; (df.WIN_LOSE == 0), 'RESULT'] = df['BET'] df.loc[(df.INDICATOR == 0) &amp; (df.WIN_LOSE == 1), 'RESULT'] = -df['BSP'] * df['BET'] + df['BET'] results = df.groupby('EVENT_ID')['RESULT'].sum() balance.append(results) </code></pre> <p>This yields the following result for the balance list:</p> <pre><code> [20000, EVENT_ID 112335580 23.872099 112335582 -22.304487 Name: RESULT, dtype: float64] </code></pre> <p>I expect the balance list to be:</p> <pre><code>balance = [20000, 20023.8721, 20001.56761] </code></pre> <p>It is important to note that the balance value should change for each iteration and this new value used in the conditional statement.</p> <p>I am also not sure that a list is the most efficient way to achieve my goals but that is a slightly different question. </p> <p>Cheers, Sandy</p>
<p>Let's change balance to a pd.Series:</p> <pre><code>balance = pd.Series([20000])

# ... your code as before, but change this line:
df['BET'] = np.where(df.groupby(level = 0)['LIABILITY'].transform('sum') &lt; 0.75*balance.values.tolist()[-1],
                     df['POT_BET'], 0)

# ... the rest of your code, and finally:
balance = pd.concat([balance, results]).cumsum().tolist()
</code></pre> <p>Output:</p> <pre><code>[20000.0, 20023.872099225347, 20001.567612410585]
</code></pre>
python|pandas|numpy
3
233
55,216,093
Shifting specific column to before/after specific column in dataframe
<p>Example dataframe:</p> <pre><code>  medcine_preg_oth medcine_preg_oth1 medcine_preg_oth2 medcine_preg_oth3
0          Berplex           Berplex              None              None
1              NaN               NaN               NaN               NaN
2              NaN               NaN               NaN               NaN
3            obmin             obmin              None              None
4              NaN               NaN               NaN               NaN
</code></pre> <p>The three columns 'medcine_preg_oth1', 'medcine_preg_oth2' and 'medcine_preg_oth3' sit somewhere in the dataframe among other columns. I want to move these three columns to the position right after 'medcine_preg_oth'.</p> <p>More generally, I want to be able to shift specific columns to a place before/after other specific columns in a dataframe. Please advise! Thanks</p>
<p>You can re-arrange your columns like this:</p> <pre><code>re_ordered_columns = ['medcine_preg_oth', 'medcine_preg_oth1', 'medcine_preg_oth2', 'medcine_preg_oth3']
df = df[re_ordered_columns + df.columns.difference(re_ordered_columns).tolist()]
</code></pre> <p><code>df.columns.difference(re_ordered_columns)</code> appends all the remaining columns after the four you listed, so nothing is dropped.</p>
python|pandas|dataframe|data-cleaning
0
234
45,646,569
padding a batch with 0 vectors in dynamic rnn
<p>I have a prediction task working with variable sequences of input data. Directly using a dynamic rnn will run into the trouble of splitting the outputs according to this post:</p> <p><a href="https://stackoverflow.com/questions/34970582/using-a-variable-for-num-splits-for-tf-split">Using a variable for num_splits for tf.split()</a></p> <p>So, I am wondering if is it possible to pad an entire batch of sequence to make all examples have the same number of sequences and then in <code>sequence_length</code> parameter of <code>tf.nn.dynamic_rnn</code> I give 0 length for the padded batch of sequence. Would this work?</p>
<p>These days (2022) two methods you can use to pad sequences in tensorflow are using a tf.data.Dataset pipeline, or preprocessing with tf.keras.utils.pad_sequences.</p> <h2>Method 1: Use Tensorflow Pipelines (tf.data.Dataset)</h2> <p>The padded_batch() method can be used in place of a normal batch() method to pad the elements of a tf.data.Dataset object when batching for model training: <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset#padded_batch" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/data/Dataset#padded_batch</a></p> <p>The 'batching tensors with padding' pipeline is also described here: <a href="https://www.tensorflow.org/guide/data#batching_tensors_with_padding" rel="nofollow noreferrer">https://www.tensorflow.org/guide/data#batching_tensors_with_padding</a></p> <p>The call signature is:</p> <pre><code>padded_batch(
    batch_size,
    padded_shapes=None,
    padding_values=None,
    drop_remainder=False,
    name=None
)
</code></pre> <p>An example for your use case of inputting to an RNN is:</p> <pre><code>import tensorflow as tf
import numpy as np

# input is a ragged tensor of different sequence lengths
inputs = tf.ragged.constant([[1], [2, 3], [4, 5, 6]], dtype = tf.float32)

# construct dataset using tf.data.Dataset
dataset = tf.data.Dataset.from_tensor_slices(inputs)

# convert ragged tensor to dense tensor to avoid TypeError
dataset = dataset.map(lambda x: x)

# pad sequences using padded_batch
dataset = dataset.padded_batch(3)

# take one padded batch and add a trailing feature dimension,
# since the RNN expects input of shape (batch, timesteps, features)
batch = next(iter(dataset))
batch = tf.expand_dims(batch, axis = -1)

# run the batch through a simple RNN model
simple_rnn = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(4)
])

output = simple_rnn(batch)
</code></pre> <p>Note that this method does not allow you to use pre-padding; the method is always post-padding. However, you can use the <code>padded_shapes</code> argument to specify the sequence length.</p> <h2>Method 2: Preprocess sequence as nested list using Keras pad_sequences</h2> <p>Keras (a package sitting on top of Tensorflow since version 2.0) provides a utility function to truncate and pad Python lists to a common length: <a href="https://www.tensorflow.org/api_docs/python/tf/keras/utils/pad_sequences" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/utils/pad_sequences</a></p> <p>The call signature is:</p> <pre><code>tf.keras.utils.pad_sequences(
    sequences,
    maxlen=None,
    dtype='int32',
    padding='pre',
    truncating='pre',
    value=0.0
)
</code></pre> <p>From the documentation:</p> <blockquote> <p>This function transforms a list (of length <code>num_samples</code>) of sequences (lists of integers) into a 2D Numpy array of shape <code>(num_samples,num_timesteps)</code>. <code>num_timesteps</code> is either the <code>maxlen</code> argument if provided, or the length of the longest sequence in the list.</p> <p>Sequences that are shorter than <code>num_timesteps</code> are padded with value until they are <code>num_timesteps</code> long.</p> <p>Sequences longer than <code>num_timesteps</code> are truncated so that they fit the desired length.</p> <p>The position where padding or truncation happens is determined by the arguments <code>padding</code> and <code>truncating</code>, respectively. Pre-padding or removing values from the beginning of the sequence is the default.</p> </blockquote> <p>An example for your use case of inputting to an RNN:</p> <pre><code>import tensorflow as tf
import numpy as np

# inputs is list of varying length sequences with batch size (list length) 3
inputs = [[1], [2, 3], [4, 5, 6]]

# pad the sequences with 0's using pre-padding (default values)
inputs = tf.keras.preprocessing.sequence.pad_sequences(inputs, dtype = np.float32)

# add a trailing feature dimension so the RNN input is (batch, timesteps, features)
inputs = tf.expand_dims(inputs, axis = -1)

# run the batch through a simple RNN layer
simple_rnn = tf.keras.layers.SimpleRNN(4)
output = simple_rnn(inputs)
</code></pre>
tensorflow|rnn
1
235
57,127,821
Login to a website then open it in browser
<p>I am trying to write Python 3 code that logs in to a website and then opens it in a web browser, to be able to take a screenshot of it. Looking online, I found that I could do webbrowser.open('example.com'). This opens the website, but cannot log in. Then I found that it is possible to log in to a website using the requests library, or urllib. But the problem with both is that they do not seem to provide the option of opening a web page.</p> <p>So how is it possible to log in to a web page and then display it, so that a screenshot of that page can be taken?</p> <p>Thanks</p>
<p>Have you considered <a href="https://www.seleniumhq.org/" rel="nofollow noreferrer">Selenium</a>? It drives a browser natively as a user would, and its Python client is pretty easy to use.</p> <p>Here is one of my latest works with Selenium. It is a script to scrape multiple pages from a certain website and save their data into a csv file:</p> <pre class="lang-py prettyprint-override"><code>import os
import time
import csv
from selenium import webdriver

cols = [
    'ies',
    'campus',
    'curso',
    'grau_turno',
    'modalidade',
    'classificacao',
    'nome',
    'inscricao',
    'nota'
]

codigos = [
    96518, 96519, 96520, 96521, 96522, 96523, 96524, 96525, 96527, 96528
]

if not os.path.exists('arquivos_csv'):
    os.makedirs('arquivos_csv')

options = webdriver.ChromeOptions()
prefs = {
    'profile.default_content_setting_values.automatic_downloads': 1,
    'profile.managed_default_content_settings.images': 2
}
options.add_experimental_option('prefs', prefs)

# Here you choose a webdriver ("the browser")
browser = webdriver.Chrome('chromedriver', chrome_options=options)

for codigo in codigos:
    time.sleep(0.1)

    # Here is where I set the URL
    browser.get(f'http://www.sisu.mec.gov.br/selecionados?co_oferta={codigo}')

    with open(f'arquivos_csv/sisu_resultados_usp_final.csv', 'a') as file:
        dw = csv.DictWriter(file, fieldnames=cols, lineterminator='\n')
        dw.writeheader()

        ies = browser.find_element_by_xpath('//div[@class ="nome_ies_p"]').text.strip()
        campus = browser.find_element_by_xpath('//div[@class ="nome_campus_p"]').text.strip()
        curso = browser.find_element_by_xpath('//div[@class ="nome_curso_p"]').text.strip()
        grau_turno = browser.find_element_by_xpath('//div[@class = "grau_turno_p"]').text.strip()

        tabelas = browser.find_elements_by_xpath('//table[@class = "resultado_selecionados"]')
        for t in tabelas:
            modalidade = t.find_element_by_xpath('tbody//tr//th[@colspan = "4"]').text.strip()
            aprovados = t.find_elements_by_xpath('tbody//tr')
            for a in aprovados[2:]:
                linha = a.find_elements_by_class_name('no_candidato')
                classificacao = linha[0].text.strip()
                nome = linha[1].text.strip()
                inscricao = linha[2].text.strip()
                nota = linha[3].text.strip().replace(',', '.')
                dw.writerow({
                    'ies': ies,
                    'campus': campus,
                    'curso': curso,
                    'grau_turno': grau_turno,
                    'modalidade': modalidade,
                    'classificacao': classificacao,
                    'nome': nome,
                    'inscricao': inscricao,
                    'nota': nota
                })

browser.quit()
</code></pre> <p>In short, you set preferences, choose a webdriver (I recommend Chrome), point to the URL and that's it. The browser is automatically opened and starts executing your instructions.</p> <p>I have tested using it to log in and it works fine, but I never tried to take a screenshot. It theoretically should work.</p>
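<p>For the screenshot part specifically, the webdriver exposes <code>save_screenshot</code>. A minimal hedged sketch - the login URL and the element locators below are placeholders, so you would have to inspect the real login form for its actual field names:</p> <pre><code>from selenium import webdriver

driver = webdriver.Chrome('chromedriver')
driver.get('https://example.com/login')  # hypothetical login page

# fill in and submit the login form (locators are assumptions)
driver.find_element_by_name('username').send_keys('my_user')
driver.find_element_by_name('password').send_keys('my_password')
driver.find_element_by_name('submit').click()

# the browser session is now logged in, so take the screenshot
driver.save_screenshot('page.png')
driver.quit()
</code></pre>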
python-3.x|request|urllib
0
236
44,734,655
Scrapy callback doesn't work in function
<p>When the first <strong>yield</strong> executes, the request never reaches the function <strong>parse_url</strong>, and when the second <strong>yield</strong> executes, the spider never goes back into the function <strong>parse</strong> - the crawl just ends. During the whole process there are no exceptions. I don't know how to deal with this problem; I need help.</p> <pre><code>import scrapy
import re
from crawlurl.items import CrawlurlItem


class HouseurlSpider(scrapy.Spider):
    name = 'houseurl'
    allowed_domains = ['qhd.58.com/ershoufang/']
    start_urls = ['http://qhd.58.com/ershoufang//']
    header = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.221 Safari/537.36 SE 2.X MetaSr 1.0'
    }

    def parse(self, response):
        urls = response.xpath('//div[@class="list-info"]/h2[@class="title"]/a/@href').extract()
        next_url = response.xpath('//a[@class="next"]/@href').extract()
        for url in urls:
            yield scrapy.Request(url, headers=self.header, callback=self.parse_url)
        if next_url:
            next_url = next_url[0]
            yield scrapy.Request(next_url, headers=self.header, callback=self.parse)

    def parse_url(self, response):
        item = CrawlurlItem()
        url_obj = re.search('(http://qhd.58.com/ershoufang/\d+x.shtml).*', response.url)
        url = url_obj.group(1)
        item['url'] = url
        yield item
</code></pre>
<p>If you carefully looked at the logs then you might have noticed that <code>scrapy</code> filtered offsite domain requests. This means when <code>scrapy</code> tried to ping <code>short.58.com</code> and <code>jxjump.58.com</code>, it did not follow through. You can add those domains to the <code>allowed_domains</code> filter in your Spider class and you will see the requests being sent.</p> <p>Replace:</p> <pre><code>allowed_domains = ['qhd.58.com/ershoufang/'] </code></pre> <p>With:</p> <pre><code>allowed_domains = ['qhd.58.com', 'short.58.com', 'jxjump.58.com'] </code></pre> <p>And it should work!</p>
python-3.x|scrapy
3
237
36,155,760
Splitting HTML text by <br> while using beautifulsoup
<p>HTML code:</p> <pre><code>&lt;td&gt; &lt;label class="identifier"&gt;Speed (avg./max):&lt;/label&gt; &lt;/td&gt; &lt;td class="value"&gt; &lt;span class="block"&gt;4.5 kn&lt;br&gt;7.1 kn&lt;/span&gt; &lt;/td&gt; </code></pre> <p>I need to get values 4.5 kn and 7.1 as separate list items so I could append them separately. I do not want to split it I wanted to split the text string using re.sub, but it does not work. I tried too use replace to replace br, but it did not work. Can anybody provide any insight?</p> <p>Python code:</p> <pre><code> def NameSearch(shipLink, mmsi, shipName): from bs4 import BeautifulSoup import urllib2 import csv import re values = [] values.append(mmsi) values.append(shipName) regex = re.compile(r'[\n\r\t]') i = 0 with open('Ship_indexname.csv', 'wb')as f: writer = csv.writer(f) while True: try: shipPage = urllib2.urlopen(shipLink, timeout=5) except urllib2.URLError: continue except: continue break soup = BeautifulSoup(shipPage, "html.parser") # Read the web page HTML #soup.find('br').replaceWith(' ') #for br in soup('br'): #br.extract() table = soup.find_all("table", {"id": "vessel-related"}) # Finds table with class table1 for mytable in table: #Loops tables with class table1 table_body = mytable.find_all('tbody') #Finds tbody section in table for body in table_body: rows = body.find_all('tr') #Finds all rows for tr in rows: #Loops rows cols = tr.find_all('td') #Finds the columns for td in cols: #Loops the columns checker = td.text.encode('ascii', 'ignore') check = regex.sub('', checker) if check == ' Speed (avg./max): ': i = 1 elif i == 1: print td.text pat=re.compile('&lt;br\s*/&gt;') print pat.sub(" ",td.text) values.append(td.text.strip("\n").encode('utf-8')) #Takes the second columns value and assigns it to a list called Values i = 0 #print values return values NameSearch('https://www.fleetmon.com/vessels/kind-of-magic_0_3478642/','230034570','KIND OF MAGIC') </code></pre>
<p>Locate the "Speed (avg./max)" label first and then go to the value via <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all-next-and-find-next" rel="nofollow"><code>.find_next()</code></a>:</p> <pre><code>from bs4 import BeautifulSoup data = '&lt;td&gt; &lt;label class="identifier"&gt;Speed (avg./max):&lt;/label&gt; &lt;/td&gt; &lt;td class="value"&gt; &lt;span class="block"&gt;4.5 kn&lt;br&gt;7.1 kn&lt;/span&gt; &lt;/td&gt;' soup = BeautifulSoup(data, "html.parser") label = soup.find("label", class_="identifier", text="Speed (avg./max):") value = label.find_next("td", class_="value").get_text(strip=True) print(value) # prints 4.5 kn7.1 kn </code></pre> <p>Now, you can extract the actual numbers from the string:</p> <pre><code>import re speed_values = re.findall(r"([0-9.]+) kn", value) print(speed_values) </code></pre> <p>Prints <code>['4.5', '7.1']</code>.</p> <p>You can then further convert the values to floats and unpack into separate variables:</p> <pre><code>avg_speed, max_speed = map(float, speed_values) </code></pre>
python|regex|beautifulsoup
0
238
46,356,238
Repeating if statement
<p>I am having a problem with my code mapping a random walk in 3D space. The purpose of this code is to simulate N steps of a random walk in 3 dimensions. At each step, a random direction is chosen (north, south, east, west, up, down) and a step of size 1 is taken in that direction. Here is my code:</p> <pre><code>import random  # this helps us generate random numbers

N = 30  # number of steps
n = random.random()  # generate a random number
x = 0
y = 0
z = 0
count = 0

while count &lt;= N:
    if n &lt; 1/6:
        x = x + 1  # move east
        n = random.random()  # generate a new random number
    if n &gt;= 1/6 and n &lt; 2/6:
        y = y + 1  # move north
        n = random.random()  # generate a new random number
    if n &gt;= 2/6 and n &lt; 3/6:
        z = z + 1  # move up
        n = random.random()  # generate a new random number
    if n &gt;= 3/6 and n &lt; 4/6:
        x = x - 1  # move west
        n = random.random()  # generate a new random number
    if n &gt;= 4/6 and n &lt; 5/6:
        y = y - 1  # move south
        n = random.random()  # generate a new random number
    if n &gt;= 5/6:
        z = z - 1  # move down
        n = random.random()  # generate a new random number
    print("(%d,%d,%d)" % (x,y,z))
    count = count + 1

print("squared distance = %d" % (x*x + y*y + z*z))
</code></pre> <p>The problem is I am getting more than a single step between each iteration. I've added comments showing the difference in steps between iterations.</p> <p>Here are the first 10 lines of the output:</p> <pre><code>(0,-1,0) #1 step
(0,-2,0) #1 step
(1,-3,1) #4 steps
(1,-4,1) #1 step
(1,-3,1) #1 step
(1,-2,1) #1 step
(2,-2,0) #2 steps
(2,-2,0) #0 steps
(2,-2,0) #0 steps
(2,-1,0) #1 step
</code></pre>
<p>If you remove the multiple <code>n = random.random()</code> from within the if statements and replace by a single <code>n = random.random()</code> at start of the while loop then there will be only one step per loop.</p>
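<p>A sketch of what the fixed loop would look like (note that <code>1/6</code> only behaves as a fraction on Python 3; on Python 2 you would need <code>1.0/6</code>). Using <code>elif</code> also makes it explicit that at most one branch can fire per step:</p> <pre><code>while count &lt;= N:
    n = random.random()  # generate exactly one random number per step
    if n &lt; 1/6:
        x = x + 1  # move east
    elif n &lt; 2/6:
        y = y + 1  # move north
    elif n &lt; 3/6:
        z = z + 1  # move up
    elif n &lt; 4/6:
        x = x - 1  # move west
    elif n &lt; 5/6:
        y = y - 1  # move south
    else:
        z = z - 1  # move down
    print("(%d,%d,%d)" % (x, y, z))
    count = count + 1
</code></pre>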
python
2
239
21,538,254
Trying to create and use a class; name 'is_empty' is not defined
<p>I'm trying to create a class called <code>Stack</code> (it's probably not very useful for writing actual programmes, I'm just doing it to learn about creating classes in general) and this is my code, identical to the example in the guide I'm following save for one function name:</p> <pre><code>class Stack:
    def __init__(self):
        self.items = []

    def is_empty(self):
        return self.items == []

    def push(self,item):
        self.items.append(item)

    def pop(self):
        return self.items.pop()

    def peek(self):
        return self.items[len(self.items)-1]

    def size(self):
        return len(self.items)
</code></pre> <p>I saved it in a file called <code>stack.py</code> and tested it with this:</p> <pre><code>from stack import Stack

my_stack = Stack()
print(is_empty(my_stack))
</code></pre> <p>but I got this error message:</p> <pre><code>Mac:python mac$ python3 stacktest.py
Traceback (most recent call last):
  File "stacktest.py", line 5, in &lt;module&gt;
    print(is_empty(my_stack))
NameError: name 'is_empty' is not defined
</code></pre> <p>The guide in question has something called activecode, which is basically Python installed on the browser so you can run example programmes on it, and <code>is_empty(my_stack)</code> returns <code>True</code> like it should. What am I doing wrong?</p> <p><strong>EDIT:</strong> Yeah, it's actually <code>my_stack.is_empty()</code>. I mixed classes up with functions AND misread the guide.</p>
<p>The method <code>is_empty()</code> is part of the class. To call it you need to <code>my_stack.is_empty()</code></p>
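<p>For example:</p> <pre><code>from stack import Stack

my_stack = Stack()
print(my_stack.is_empty())  # True

my_stack.push(42)
print(my_stack.is_empty())  # False
</code></pre>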
python|python-3.x
3
240
24,869,306
How to control if a component exists in a Tk object/window?
<p>I would like to know the most efficient way to check whether a certain component (label, button or entry) already exists on the Tk object/window.</p> <p>I have searched on the web for a while and the only thing I found is:</p> <p><code>if component.winfo_exists(): # But this doesn't work for me (I am using Python 3.4)</code></p> <p>I have also tried something (stupid, of course - label is not a boolean) like this:</p> <p><code>if not self.label: self.label = Label(self, text="Label")</code></p> <p>I have also tried to invent something cleverer like this:</p> <p><code>if not self.label.exists(): self.label = Label(self, text="Label")</code></p> <p>Since I am a noob using tkinter, I am probably missing something.</p> <p><strong>EDIT</strong></p> <p>This is the whole class:</p> <pre><code>class Form(Tk):

    def __init__(self):
        Tk.__init__(self)
        self.label_question = Label().pack(side=LEFT)
        self.text = StringVar()
        self.entry = Entry().pack(side=LEFT)
        self.button = Button(text="Show", command=self.showName).pack(side=LEFT)
        self.label = None # Initializing to None

    def showName(self):
        self.name = self.text.get()
        if not self.label:
            self.label = Label().pack(side=LEFT)
</code></pre>
<p>I think your second approach is good enough.</p> <pre><code>self.label = None # Initialize `self.label` as None somewhere. ... if not self.label: self.label = Label(self, text="Label") </code></pre> <p>This will work, because before the label creation, <code>self.label</code> is evaluated as false when used as predicate (<code>bool(None) is False</code>), and will be evaluated as truth value once the label is set.</p> <hr> <p><strong>UPDATE</strong></p> <p>Following line is not what you want, because <code>pack</code> does not return anything.</p> <pre><code>self.label = Label().pack(side=LEFT) # pack return nothing -&gt; None </code></pre> <p><code>self.label</code> become <code>None</code> after the statement.</p> <p>You should separate the label creation and packing:</p> <pre><code>self.label = Label() self.label.pack(side=LEFT) </code></pre>
python|python-3.x|tkinter|python-3.4
1
241
40,186,467
How to determine the version of PyJWT?
<p>I have two different software environments (<strong>Environment A</strong> and <strong>Environment B</strong>) and I'm trying to run PyJWT on both environments. It is working perfectly fine on one environment <strong>Environment A</strong> but fail on <strong>Environment B</strong>. </p> <p>The error I'm getting on <strong>Environment B</strong> when I call <code>jwt.encode()</code> with <code>algorithm</code> == <code>ES</code> is: <code>Algorithm not supported</code>.</p> <p>I'm trying to figure out why it works on <strong>Environment A</strong> but not <strong>Environment B</strong>. It seems like the two environments have different versions of PyJWT installed. But determining which version of PyJWT is installed on <strong>Environment B</strong> is proving difficult for me. How can I do it??</p> <p>I ran the following instrumented code on both <strong>Environment A</strong> and <strong>Environment B</strong>:</p> <pre><code>import jwt, cryptography, sys, pkg_resources my_private_key = """XXXXX""" my_public_key = """YYYYYY""" original = {"Hello": "World"} print "sys.version = {}".format(str(sys.version)) try: print "dir(jwt) = {}".format(str(dir(jwt))) except Exception as e: print "Failed to get dir of jwt module: {}".format(e) try: print "dir(cryptography) = {}".format(str(dir(cryptography))) except Exception as e: print "Failed to get dir of cryptography module: {}".format(e) try: print "jwt = {}".format(str(jwt.__version__)) except Exception as e: print "Failed to get version of jwt module using .__version: {}".format(e) try: print "cryptography = {}".format(str(cryptography.__version__)) except Exception as e: print "Failed to get version of cryptography module using .__version: {}".format(e) try: print "pkg_resources.require('jwt')[0].version = {}".format(str(pkg_resources.require("jwt")[0].version)) except Exception as e: print "Failed to get version of jwt module via pkg_resources: {}".format(e) try: print "pkg_resources.require('cryptography')[0].version = {}".format(str(pkg_resources.require("cryptography")[0].version)) except Exception as e: print "Failed to get version of cryptography module via pkg_resources: {}".format(e) try: print "original = {}".format(str(original)) encoded = jwt.encode(original, my_private_key, algorithm='ES256') except Exception as e: print "encoding exception = {}".format(str(e)) else: try: print "encoded = {}".format(str(encoded)) unencoded = jwt.decode(encoded, my_public_key, algorithms=['ES256']) except Exception as e: print "decoding exception = {}".format(str(e)) else: print "unencoded = {}".format(str(unencoded)) </code></pre> <p>On <strong>Environment A</strong>, the encoding succeeds:</p> <pre><code>sys.version = 2.7.12 (default, Sep 1 2016, 22:14:00) [GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] dir(jwt) = ['DecodeError', 'ExpiredSignature', 'ExpiredSignatureError', 'ImmatureSignatureError', 'InvalidAudience', 'InvalidAudienceError', 'InvalidIssuedAtError', 'InvalidIssuer', 'InvalidIssuerError', 'InvalidTokenError', 'MissingRequiredClaimError', 'PyJWS', 'PyJWT', '__author__', '__builtins__', '__copyright__', '__doc__', '__file__', '__license__', '__name__', '__package__', '__path__', '__title__', '__version__', 'algorithms', 'api_jws', 'api_jwt', 'compat', 'decode', 'encode', 'exceptions', 'get_unverified_header', 'register_algorithm', 'unregister_algorithm', 'utils'] dir(cryptography) = ['__about__', '__all__', '__author__', '__builtins__', '__copyright__', '__doc__', '__email__', '__file__', '__license__', '__name__', '__package__', '__path__', 
'__summary__', '__title__', '__uri__', '__version__', 'absolute_import', 'division', 'exceptions', 'hazmat', 'print_function', 'sys', 'utils', 'warnings']
jwt = 1.4.2
cryptography = 1.5.2
Failed to get version of jwt module via pkg_resources: jwt
pkg_resources.require('cryptography')[0].version = 1.5.2
original = {'Hello': 'World'}
encoded = eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJIZWxsbyI6IldvcmxkIn0.ciaXCcO2gTqsQ4JUEKj5q4YX6vfHu33XY32g2MNIVEDXHNllpuqDCj-cCrlGPf6hGNifAJbNI9kBaAyuCIwyJQ
unencoded = {u'Hello': u'World'}
</code></pre> <p>On <strong>Environment B</strong> the encoding fails. You can see that I cannot tell what version of PyJWT is running. However, this version of PyJWT doesn't have the algorithm <code>ES256</code> that I'm trying to use:</p> <pre><code>sys.version = 2.7.12 (default, Sep 1 2016, 22:14:00) [GCC 4.8.3 20140911 (Red Hat 4.8.3-9)]
dir(jwt) = ['DecodeError', 'ExpiredSignature', 'Mapping', 'PKCS1_v1_5', 'SHA256', 'SHA384', 'SHA512', '__all__', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', 'base64', 'base64url_decode', 'base64url_encode', 'binascii', 'constant_time_compare', 'datetime', 'decode', 'encode', 'hashlib', 'header', 'hmac', 'json', 'load', 'signing_methods', 'sys', 'timegm', 'unicode_literals', 'verify_methods', 'verify_signature']
dir(cryptography) = ['__about__', '__all__', '__author__', '__builtins__', '__copyright__', '__doc__', '__email__', '__file__', '__license__', '__name__', '__package__', '__path__', '__summary__', '__title__', '__uri__', '__version__', 'absolute_import', 'division', 'print_function', 'sys', 'warnings']
Failed to get version of jwt module using .__version: 'module' object has no attribute '__version__'
cryptography = 1.5.2
Failed to get version of jwt module via pkg_resources: jwt
pkg_resources.require('cryptography')[0].version = 1.5.2
original = {'Hello': 'World'}
encoding exception = Algorithm not supported
</code></pre>
<p>The PyJWT <code>.__version__</code> attribute appeared in <code>0.2.2</code> in <a href="https://github.com/jpadilla/pyjwt/commit/d626f7e034c5a19627ba7a65dacc25d1e21d6573" rel="nofollow">this</a> commit.</p> <p>Generally, to find the version of the package, that was installed via setuptools, you need to run following code:</p> <pre><code>import pkg_resources print pkg_resources.require("jwt")[0].version </code></pre> <p>If <code>pip</code> was used to install the package, you could try from linux shell:</p> <pre><code>pip show jwt | grep Version </code></pre> <p>Same thing from inside the python:</p> <pre><code>import pip print next(pip.commands.show.search_packages_info(['jwt']))['version'] </code></pre>
python|pyjwt
5
242
40,113,514
Setting up proxy with selenium / python
<p>I am using selenium with python. I need to configure a proxy.</p> <p>It is working for HTTP but not for HTTPS.</p> <p>The code I am using is:</p> <pre><code># configure firefox
profile = webdriver.FirefoxProfile()
profile.set_preference("network.proxy.type", 1)
profile.set_preference("network.proxy.http", '11.111.11.11')
profile.set_preference("network.proxy.http_port", int('80'))
profile.update_preferences()

# launch
driver = webdriver.Firefox(firefox_profile=profile)
driver.get('https://www.iplocation.net/find-ip-address')
</code></pre> <p>Also, is there a way for me to completely block any outgoing traffic from my IP and restrict it ONLY to the proxy IP, so that I don't accidentally mess up the test/stats by accidentally switching from the proxy to a direct connection?</p> <p>Any tips would help! Thanks :)</p>
<p>Check out <a href="https://github.com/AutomatedTester/browsermob-proxy-py" rel="nofollow">browsermob proxy</a> for setting up a proxies for use with <code>selenium</code></p> <pre><code>from browsermobproxy import Server server = Server("path/to/browsermob-proxy") server.start() proxy = server.create_proxy() from selenium import webdriver profile = webdriver.FirefoxProfile() profile.set_proxy(proxy.selenium_proxy()) driver = webdriver.Firefox(firefox_profile=profile) proxy.new_har("google") driver.get("http://www.google.co.uk") proxy.har # returns a HAR JSON blob server.stop() driver.quit() </code></pre> <p>You can use a remote proxy server with the <code>RemoteServer</code> class.</p> <blockquote> <p>Is there a way for me to completely block any outgoing traffic from my IP and restrict it ONLY to the proxy IP</p> </blockquote> <p>Yes, just look up how to setup proxies for whatever operating system you're using. Just use caution because some operating systems will ignore proxy rules based on certain conditions, for example, if using a VPN connection.</p>
python|selenium|proxy
1
243
51,588,981
Link in HTML does not function
<p>python 2.7, Django 1.11.14, Windows 7</p> <p>When I click the link in FWinstance_list_applied_user.html it is supposed to jump to FW_detail.html, but nothing happens.</p> <p>url.py</p> <pre><code>urlpatterns += [
    url(r'^myFWs/', views.LoanedFWsByUserListView.as_view(), name='my-applied'),
    url(r'^myFWs/(?P&lt;pk&gt;[0-9]+)$', views.FWDetailView.as_view(), name='FW-detail'),
</code></pre> <p>views.py:</p> <pre><code>class FWDetailView(LoginRequiredMixin,generic.ListView):
    model = FW
    template_name = 'FW_detail.html'
</code></pre> <p>models.py</p> <pre><code>class FW(models.Model):
    ODM_name = models.CharField(max_length=20)
    project_name = models.CharField(max_length=20)
</code></pre> <p>FW_detail.html</p> <pre><code>{% block content %}
  &lt;h1&gt;FW request information: {{ FW.ODM_name}};{{ FW.project_name}}&lt;/h1&gt;
  &lt;p&gt;&lt;strong&gt;please download using this link:&lt;/strong&gt; {{ FW.download }}&lt;/p&gt;
{% endblock %}
</code></pre> <p>FWinstance_list_applied_user.html</p> <pre><code>{% block content %}
  &lt;h1&gt;Applied FWs&lt;/h1&gt;
  {% if FW_list %}
  &lt;ul&gt;
    {% for FWinst in FW_list %}
      {% if FWinst.is_approved %}
        &lt;li class="{% if FWinst.is_approved %}text-danger{% endif %}"&gt;--&gt;
          &lt;a href="{% url 'FW-detail' FWinst.pk %}"&gt;{{FWinst.ODM_name}}&lt;/a&gt; ({{ FWinst.project_name }})
        &lt;/li&gt;
      {% endif %}
    {% endfor %}
  &lt;/ul&gt;
  {% else %}
    &lt;p&gt;Nothing.&lt;/p&gt;
  {% endif %}
{% endblock %}
</code></pre> <p>Here is the rendered page of FWinstance_list_applied_user.html; when I click the link CSR, nothing happens.<a href="https://i.stack.imgur.com/P2Oxy.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P2Oxy.jpg" alt="enter image description here"></a></p>
<p>You haven't terminated your "my-applied" URL pattern, so it matches everything <em>beginning</em> with "myFWs/" - including things that that would match the detail URL. Make sure you always use a terminating <code>$</code> with regex URLs.</p> <pre><code>url(r'^myFWs/$', views.LoanedFWsByUserListView.as_view(), name='my-applied'), </code></pre>
python|django
2
244
38,676,937
Allow_Other with fusepy?
<p>I have a 16.04 ubuntu server with <a href="https://github.com/sondree/b2_fuse/issues" rel="nofollow">b2_fuse</a> mounting my b2 cloud storage bucket which uses pyfuse. The problem is, I have no idea how I can pass the allow_other argument like with FUSE! This is an issue because other services running under different users cannot see the mounted drive.</p> <p>Does anybody here have some experience with this that could point me in the right direction?</p>
<p>Inside of file <code>b2fuse.py</code> if you change the line:</p> <pre><code>FUSE(filesystem, mountpoint, nothreads=True, foreground=True)
</code></pre> <p><em>to</em></p> <pre><code>FUSE(filesystem, mountpoint, nothreads=True, foreground=True, allow_other=True)
</code></pre> <p>the volume will be mounted with <code>allow_other</code>. (Passing <code>**{'allow_other': True}</code> is equivalent; the plain keyword argument is just easier to read.)</p>
python|fuse
5
245
40,703,458
Python3 + vagrant ubuntu 16.04 + ssl request = [Errno 104] Connection reset by peer
<p>I'm using Vagrant with the "bento/ubuntu-16.04" box on my Mac. I'm trying to use the Google AdWords API via its python library, but got the error <code>[Errno 104] Connection reset by peer</code>.</p> <p>I made a sample script to check whether it is possible to send requests:</p> <pre><code>import urllib.request

url ="https://adwords.google.com/api/adwords/mcm/v201609/ManagedCustomerService?wsdl"
f = urllib.request.urlopen(url)
print(f.read())
</code></pre> <p>If I try this request via python3 - I get <code>[Errno 104] Connection reset by peer</code>. But if I send the request via curl <code>curl https://adwords.google.com/api/adwords/mcm/v201609/ManagedCustomerService?wsdl</code> - I get some response (even if it is a 500 code) with a body.</p> <p>If I try this sample python script from my host Mac machine - I also receive a text response. I also tried this script from a VDS server with ubuntu 16.04 - it worked there too.</p> <p>So I assume the problem is possibly between Vagrant and the Mac.</p> <p>Maybe you can help me?</p> <p>Thanks.</p>
<p>I found a solution. It looks like a bug in VirtualBox 5.1.8. You can read about it <a href="https://github.com/mitchellh/vagrant/issues/7946" rel="nofollow noreferrer">here</a>.</p> <p>So, you can fix it by downgrading VirtualBox to a version &lt; 5.1.6.</p>
python|google-api|vagrant|python-3.5|ubuntu-16.04
0
246
44,287,861
While loop causing issues with CSV read
<p>Everything was going fine until I tried to combine a while loop with a CSV read, and I am just unsure where to go with this.</p> <p>The code that I am struggling with:</p> <pre><code>airport = input('Please input the airport ICAO code: ')

with open('airport-codes.csv', encoding='Latin-1') as f:
    reader = csv.reader(f, delimiter=',')
    for row in reader:
        if airport.lower() == row[0].lower():
            airportCode = row[2] + &quot;/&quot; + row[0]
            print(airportCode)
        else:
            print('Sorry, I don\'t recognise that airport.')
            print('Please try again.')
</code></pre> <p>Executing this code causes the 'else' to print continuously until the code is stopped, regardless of whether or not the input matches that in the CSV file. The moment I remove this statement the code runs fine (albeit doesn't print anything if the input doesn't match).</p> <p>What I am aiming to do is have the question loop until true. So my attempt was as follows:</p> <pre><code>with open('airport-codes.csv', encoding='Latin-1') as f:
    reader = csv.reader(f, delimiter=',')
    for row in reader:
        while True:
            airport = input('Please input the airport ICAO code: ')
            if airport.lower() == row[0].lower():
                airportCode = row[2] + &quot;/&quot; + row[0]
                print(airportCode)
                break
            else:
                print('Sorry, I don\'t recognise that airport.')
                print('Please try again.')
                False
</code></pre> <p>I'm pretty sure my limited experience is causing me to overlook an obvious issue, but I couldn't find anything similar with my search queries, so my next stop was here.</p> <p>As requested, a few lines of the CSV file:</p> <pre><code>EDQO  small_airport  Ottengrüner Heide Airport  50.22583389, 11.73166656
EDQP  small_airport  Rosenthal-Field Plössen Airport  49.86333466,
EDQR  small_airport  Ebern-Sendelbach Airport  50.03944397, 10.82277775
EDQS  small_airport  Suhl-Goldlauter Airport  50.63194275, 10.72749996
EDQT  small_airport  Haßfurt-Schweinfurt Airport  50.01805496,
EDQW  small_airport  Weiden in der Oberpfalz Airport  49.67890167,
</code></pre>
<p>I had a different suggestion using functions:</p> <pre><code>import csv

def findAirportCode(airport):
    with open('airport-codes.csv', encoding='Latin-1') as f:
        reader = csv.reader(f, delimiter=',')
        for row in reader:
            if airport.lower() == row[0].lower():
                airportCode = row[2] + "/" + row[0]
                return airportCode
    return None

airport = input('Please input the airport ICAO code: ')
code = findAirportCode(airport)
if code is not None:
    print(code)
else:
    print('Sorry, I don\'t recognise that airport.')
    print('Please try again.')
</code></pre>
python|csv|while-loop
0
247
47,409,456
Getting next Timestamp Value
<p>What is the proper solution in pandas to get the next timestamp value?</p> <p>I have the following timestamp:</p> <pre><code>Timestamp('2017-11-01 00:00:00', freq='MS') </code></pre> <p>I want to get this as the result for the next timestamp value:</p> <pre><code>Timestamp('2017-12-01 00:00:00', freq='MS') </code></pre> <p><strong>Edit:</strong></p> <p>I am working with multiple frequencies (1min, 5min, 15min, 60min, D, W-SUN, MS).</p> <p>Is there a generic command to get next value? </p> <p>Is the best approach to build a function that behaves accordingly to each one of the frequencies?</p>
<p>The general solution is to convert the frequency string to an offset and add it to the timestamp:</p> <pre><code>L = ['1min', '5min', '15min', '60min', 'D', 'W-SUN', 'MS']
t = pd.Timestamp('2017-11-01 00:00:00', freq='MS')

t1 = [t + pd.tseries.frequencies.to_offset(x) for x in L]
print(t1)

[Timestamp('2017-11-01 00:01:00', freq='MS'),
 Timestamp('2017-11-01 00:05:00', freq='MS'),
 Timestamp('2017-11-01 00:15:00', freq='MS'),
 Timestamp('2017-11-01 01:00:00', freq='MS'),
 Timestamp('2017-11-02 00:00:00', freq='MS'),
 Timestamp('2017-11-05 00:00:00'),
 Timestamp('2017-12-01 00:00:00')]
</code></pre>
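<p>A small extra sketch: if the Timestamp already carries its frequency, as in your example, you can also add that offset directly (this relies on the <code>freq</code> attribute, which newer pandas versions have deprecated, so treat it as version-dependent):</p> <pre><code>t = pd.Timestamp('2017-11-01 00:00:00', freq='MS')
t + t.freq  # Timestamp('2017-12-01 00:00:00')
</code></pre>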
python|pandas
3
248
43,403,894
Django 1.10 Count on Models ForeignKey
<p>I guess this must be simple, but I've been trying for hours and can't find anything to help.</p> <p>I have 2 models. One for a <strong>Template Categories</strong> and another for a <strong>Template</strong></p> <p>I'm listing the Template Categories on the Homepage and for each Category I want to show how many templates have that category as a Foreign Key.</p> <p>My code is as follows:</p> <p><strong>Models.py</strong></p> <pre><code>class TemplateType(models.Model): type_title = models.CharField(max_length=60) type_description = models.TextField() file_count = models.ForeignKey('TemplateFile') def __str__(self): return self.type_title def get_absolute_url(self): return "/templates/%s/" %(self.id) class TemplateFile(models.Model): template_type = models.ForeignKey(TemplateType, on_delete=models.DO_NOTHING) template_file_title = models.CharField(max_length=120) template_file_description = models.TextField() def __str__(self): return self.template_file_title </code></pre> <p><strong>Views.py</strong></p> <pre><code>from django.shortcuts import HttpResponse from django.shortcuts import render, get_object_or_404 from django.db.models import Count from .models import TemplateType from .models import TemplateFile def home(request): queryset = TemplateType.objects.all().order_by('type_title').annotate(Count('file_count')) context = { "object_list": queryset, "title": "Home", } return render(request, "index.html", context) </code></pre> <p><strong>index.html</strong></p> <pre><code>&lt;div class="row"&gt; {% for obj in object_list %} &lt;div class="template_type col-md-6"&gt; &lt;a href="{{ obj.get_absolute_url }}"&gt; &lt;h4&gt;{{ obj.type_title }}&lt;/h4&gt; &lt;/a&gt; &lt;p&gt;{{ obj.type_short_description }}&lt;/p&gt; &lt;button class="btn btn-primary" type="button"&gt;Templates &lt;span class="badge"&gt;{{ obj.file_count__count }}&lt;/span&gt;&lt;/button&gt; &lt;/div&gt; {% endfor %} &lt;/div&gt; </code></pre> <p>Can somebody help please?</p>
<p><strong>Views.py</strong></p> <pre><code>from django.shortcuts import HttpResponse from django.shortcuts import render, get_object_or_404 from django.db.models import Count from .models import TemplateType from .models import TemplateFile def home(request): queryset = TemplateType.objects.order_by('type_title').annotate(num_file=Count('file_count')) context = { "object_list": queryset, "title": "Home", } return render(request, "index.html", context) </code></pre> <p>Now object_list contains TemplateType objects. And you can acces num_file like : <code>object_list[0].num_file</code>. Use it in your template.</p> <p><strong>index.html</strong></p> <pre><code>&lt;div class="row"&gt; {% for obj in object_list %} &lt;div class="template_type col-md-6"&gt; &lt;a href="{{ obj.get_absolute_url }}"&gt; &lt;h4&gt;{{ obj.type_title }}&lt;/h4&gt; &lt;/a&gt; &lt;p&gt;{{ obj.type_short_description }}&lt;/p&gt; &lt;button class="btn btn-primary" type="button"&gt;Templates &lt;span class="badge"&gt;{{ obj.num_file }}&lt;/span&gt;&lt;/button&gt; &lt;/div&gt; {% endfor %} &lt;/div&gt; </code></pre>
python|django
0
249
43,291,347
Internal Error 500 when using Flask and Apache
<p>I am working on a small college project using Raspberry Pi. Basically, the project is to provide an HTML interface to control a sensor attached to the Pi. I wrote a very simple Python app together with a very basic HTML page, all placed in the path /var/www/NewTest. However, every time I try to access it, it throws a 500 internal error. I tried simple "Hello World" examples that worked for me, and set this example up the same way, but it didn't work.</p> <p>led.py</p> <pre><code>from gpiozero import LED
from time import sleep
from flask import Flask, render_template

app = Flask(__name__)

ledr = LED(17)
ledg = LED(27)
ledb = LED(22)

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/red/')
def red():
    ledr.off()
    ledg.off()
    ledb.off()
    ledr.on()
    return ' '

@app.route('/green/')
def green():
    ledr.off()
    ledg.off()
    ledb.off()
    ledg.on()
    return ' '

@app.route('/blue/')
def blue():
    ledr.off()
    ledg.off()
    ledb.off()
    ledb.on()
    return ' '

if __name__ == '__main__':
    app.run(debug=True)
</code></pre> <p>led.conf</p> <pre><code>&lt;virtualhost *:80&gt;
    ServerName 10.0.0.146

    WSGIDaemonProcess led user=www-data group=www-data threads=5 home=/var/www/NewTest/
    WSGIScriptAlias / /var/www/NewTest/led.wsgi

    &lt;directory /var/www/NewTest&gt;
        WSGIProcessGroup led
        WSGIApplicationGroup %{GLOBAL}
        WSGIScriptReloading On
        Order deny,allow
        Allow from all
    &lt;/directory&gt;
&lt;/virtualhost&gt;
</code></pre> <p>index.html</p> <pre><code>&lt;!doctype html&gt;
&lt;title&gt;Test&lt;/title&gt;
&lt;meta charset=utf-8&gt;
&lt;a href="/red/"&gt;RED&lt;/a&gt; &lt;br/&gt;
&lt;a href="/green/"&gt;GREEN&lt;/a&gt;&lt;br/&gt;
&lt;a href="/blue/"&gt;BLUE&lt;/a&gt;
</code></pre> <p>Any ideas? Thanks!</p>
<p>The problem was in led.conf: the WSGI daemon user needs to be <code>pi</code> (presumably because the <code>www-data</code> user lacks the permissions needed to access the GPIO).</p> <pre><code>&lt;virtualhost *:80&gt; ServerName 10.0.0.146 WSGIDaemonProcess led user=pi group=www-data threads=5 home=/var/www/NewTest/ WSGIScriptAlias / /var/www/NewTest/led.wsgi &lt;directory /var/www/NewTest&gt; WSGIProcessGroup led WSGIApplicationGroup %{GLOBAL} WSGIScriptReloading On Order deny,allow Allow from all &lt;/directory&gt; &lt;/virtualhost&gt; </code></pre>
python|apache|flask|raspberry-pi|raspbian
0
250
37,083,434
lmdb no locks available error
<p>I have a data.mdb and lock.mdb file in a test/ directory. I was trying to use the Python lmdb package to read/write data from the lmdb database. I tried</p> <pre><code>import lmdb env = lmdb.open('test', map_size=(1024**3), readonly=True) </code></pre> <p>but got the following error:</p> <pre><code>lmdb.Error: test: No locks available </code></pre> <p>Then I tried</p> <pre><code>mdb_stat test </code></pre> <p>with a separately installed lmdb library compiled from source and got the following error:</p> <pre><code>mdb_env_open failed, error 37 No locks available </code></pre> <p>However, in Python I also tried</p> <pre><code>env = lmdb.open('test', map_size=(1024**3), lock=False) </code></pre> <p>This works and I can read data from the database normally.</p> <p>I searched Google hard for "lmdb no locks available error" but found nothing. Does anyone have any idea where this error comes from?</p> <p>Thanks!</p>
<p>Use the -r option in mdb_stat to check the number of readers in the reader lock table. You may be hitting the max limit for number of readers. You can try setting this limit to a higher number.</p>
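<p>As a minimal sketch of that suggestion (assuming the py-lmdb bindings, where the size of the reader lock table is controlled by the <code>max_readers</code> argument; the default is 126 slots):</p> <pre><code>import lmdb

# Open the environment with a larger reader lock table so that more
# concurrent readers (processes/threads) can register themselves.
env = lmdb.open('test', map_size=(1024**3), readonly=True, max_readers=512)
</code></pre>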
python|lmdb
0
251
51,234,035
Neural networks pytorch
<p>I am very new to pytorch and am implementing my own image classifier network. However, for each epoch the training accuracy is very good but the validation accuracy is 0; I noted this up to the 5th epoch. I am using the Adam optimizer with a learning rate of .001, and I am also resampling the whole data set into training and validation sets after each epoch. Please help me find where I am going wrong.</p> <p>Here is my code:</p> <pre><code>### where is data? data_dir_train = '/home/sup/PycharmProjects/deep_learning/CNN_Data/training_set' data_dir_test = '/home/sup/PycharmProjects/deep_learning/CNN_Data/test_set' # Define your batch_size batch_size = 64 allData = datasets.ImageFolder(root=data_dir_train,transform=transformArr) # We need to further split our training dataset into training and validation sets. def split_train_validation(): # Define the indices num_train = len(allData) indices = list(range(num_train)) # start with all the indices in training set split = int(np.floor(0.2 * num_train)) # define the split size #train_idx, valid_idx = indices[split:], indices[:split] # Random, non-contiguous split validation_idx = np.random.choice(indices, size=split, replace=False) train_idx = list(set(indices) - set(validation_idx)) # define our samplers -- we use a SubsetRandomSampler because it will return # a random subset of the split defined by the given indices without replacement train_sampler = SubsetRandomSampler(train_idx) validation_sampler = SubsetRandomSampler(validation_idx) #train_loader = DataLoader(allData,batch_size=batch_size,sampler=train_sampler,shuffle=False,num_workers=4) #validation_loader = DataLoader(dataset=allData,batch_size=1, sampler=validation_sampler) return (train_sampler,validation_sampler) </code></pre> <h1>Training</h1> <pre><code>from torch.optim import Adam import torch import createNN import torch.nn as nn import loadData as ld from torch.autograd import Variable from torch.utils.data import DataLoader # check if cuda - GPU support available cuda = torch.cuda.is_available() #create model, optimizer and loss function model = createNN.ConvNet(class_num=2) optimizer = Adam(model.parameters(),lr=.001,weight_decay=.0001) loss_func = nn.CrossEntropyLoss() if cuda: model.cuda() # function to save model def save_model(epoch): torch.save(model.load_state_dict(),'imageClassifier_{}.model'.format(epoch)) print('saved model at epoch',epoch) def exp_lr_scheduler ( epoch , init_lr = args.lr, weight_decay = args.weight_decay, lr_decay_epoch = cf.lr_decay_epoch): lr = init_lr * ( 0.5 ** (epoch // lr_decay_epoch)) def train(num_epochs): best_acc = 0.0 for epoch in range(num_epochs): print('\n\nEpoch {}'.format(epoch)) train_sampler, validation_sampler = ld.split_train_validation() train_loader = DataLoader(ld.allData, batch_size=30, sampler=train_sampler, shuffle=False) validation_loader = DataLoader(dataset=ld.allData, batch_size=1, sampler=validation_sampler) model.train() acc = 0.0 loss = 0.0 total = 0 # train model with training data for i,(images,labels) in enumerate(train_loader): # if cuda then move to GPU if cuda: images = images.cuda() labels = labels.cuda() # Variable class wraps a tensor and we can calculate grad images = Variable(images) labels = Variable(labels) # reset accumulated gradients for each batch optimizer.zero_grad() # pass images to model which returns preiction output = model(images) #calculate the loss based on prediction and actual loss = loss_func(output,labels) # backpropagate the loss and compute gradient loss.backward() # update weights as per the computed gradients
optimizer.step() # prediction class predVal , predClass = torch.max(output.data, 1) acc += torch.sum(predClass == labels.data) loss += loss.cpu().data[0] total += labels.size(0) # print the statistics train_acc = acc/total train_loss = loss / total print('Mean train acc = {} over epoch = {}'.format(epoch,acc)) print('Mean train loss = {} over epoch = {}'.format(epoch, loss)) # Valid model with validataion data model.eval() acc = 0.0 loss = 0.0 total = 0 for i,(images,labels) in enumerate(validation_loader): # if cuda then move to GPU if cuda: images = images.cuda() labels = labels.cuda() # Variable class wraps a tensor and we can calculate grad images = Variable(images) labels = Variable(labels) # reset accumulated gradients for each batch optimizer.zero_grad() # pass images to model which returns preiction output = model(images) #calculate the loss based on prediction and actual loss = loss_func(output,labels) # backpropagate the loss and compute gradient loss.backward() # update weights as per the computed gradients optimizer.step() # prediction class predVal, predClass = torch.max(output.data, 1) acc += torch.sum(predClass == labels.data) loss += loss.cpu().data[0] total += labels.size(0) # print the statistics valid_acc = acc / total valid_loss = loss / total print('Mean train acc = {} over epoch = {}'.format(epoch, valid_acc)) print('Mean train loss = {} over epoch = {}'.format(epoch, valid_loss)) if(best_acc&lt;valid_acc): best_acc = valid_acc save_model(epoch) # at 30th epoch we save the model if (epoch == 30): save_model(epoch) train(20) </code></pre>
<p>I think you did not take into account that <code>acc += torch.sum(predClass == labels.data)</code> returns a tensor instead of a float value. Depending on the version of pytorch you are using I think you should change it to:</p> <pre><code>acc += torch.sum(predClass == labels.data).cpu().data[0] #pytorch 0.3 acc += torch.sum(predClass == labels.data).item() #pytorch 0.4 </code></pre> <p>Although your code seems to be working for old pytorch version, I would recommend you to upgrade to the 0.4 version.</p> <p>Also, I mentioned other problems/typos in your code. </p> <p>You are loading the dataset for every epoch. </p> <pre><code>for epoch in range(num_epochs): print('\n\nEpoch {}'.format(epoch)) train_sampler, validation_sampler = ld.split_train_validation() train_loader = DataLoader(ld.allData, batch_size=30, sampler=train_sampler, shuffle=False) validation_loader = DataLoader(dataset=ld.allData, batch_size=1, sampler=validation_sampler) ... </code></pre> <p>That should not happen, it should be enough loading it once</p> <pre><code>train_sampler, validation_sampler = ld.split_train_validation() train_loader = DataLoader(ld.allData, batch_size=30, sampler=train_sampler, shuffle=False) validation_loader = DataLoader(dataset=ld.allData, batch_size=1, sampler=validation_sampler) for epoch in range(num_epochs): print('\n\nEpoch {}'.format(epoch)) ... </code></pre> <p>In the training part you have (this does not happen in the validation):</p> <pre><code>train_acc = acc/total train_loss = loss / total print('Mean train acc = {} over epoch = {}'.format(epoch,acc)) print('Mean train loss = {} over epoch = {}'.format(epoch, loss)) </code></pre> <p>Where you are printing <code>acc</code> instead of <code>train_acc</code></p> <p>Also, in the validation part I mentioned that you are printing <code>print('Mean train acc = {} over epoch = {}'.format(epoch, valid_acc))</code> when it should be something like <code>'Mean val acc'</code>.</p> <p>Changing this lines of code, using a standard model I created and CIFAR dataset the training seems to converge, accuracy increases at every epoch while mean loss value decreases. </p> <p>I Hope I could help you!</p>
python|machine-learning|conv-neural-network|pytorch
2
252
24,787,962
How to feed weights into igraph community detection [Python/C/R]
<p>When using <code>community_leading_eigenvector</code> of <a href="http://igraph.org/python/doc/igraph.Graph-class.html#community_leading_eigenvector" rel="nofollow">igraph</a>, assuming a graph g has already been created, how do I pass the list of weights of graph g to <code>community_leading_eigenvector</code>?</p> <blockquote> <p>community_leading_eigenvector(clusters=None, weights=None, arpack_options=None)</p> </blockquote>
<p>You can either pass the name of the attribute containing the weights to the <code>weights</code> parameter, or retrieve all the weights into a list using <code>g.es["weight"]</code> and then pass that to the <code>weights</code> parameter. So, either of these would suffice, assuming that your weights are in the <code>weight</code> edge attribute:</p> <ol> <li><code>g.community_leading_eigenvector(weights="weight")</code></li> <li><code>g.community_leading_eigenvector(weights=g.es["weight"])</code> </li> </ol>
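<p>For instance, a minimal sketch (assuming python-igraph, with hypothetical edges and weights):</p> <pre><code>import igraph

# A small weighted graph; store the weights in the conventional "weight"
# edge attribute, then pass either the attribute name or the list itself.
g = igraph.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])
g.es["weight"] = [1.0, 2.0, 1.5, 0.5, 3.0, 2.5]

clusters = g.community_leading_eigenvector(weights="weight")
print(clusters.membership)
</code></pre>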
python|c|r|graph|igraph
3
253
40,845,169
Aggregation fails when using lambdas
<p>I'm trying to port parts of my application from pandas to dask and I hit a roadblock when using a lambda function in a groupby on a dask DataFrame.</p> <pre><code>import dask.dataframe as dd dask_df = dd.from_pandas(pandasDataFrame, npartitions=2) dask_df = dask_df.groupby( ['one', 'two', 'three', 'four'], sort=False ).agg({'AGE' : lambda x: x * x }) </code></pre> <p>This code fails with the following error: </p> <p><code>ValueError: unknown aggregate lambda</code> </p> <p>My lambda function is more complex in my application than here, but the content of the lambda doesn't matter; the error is always the same. There is a very similar example in the <a href="http://dask.pydata.org/en/latest/dataframe-api.html?highlight=agg#seriesgroupby" rel="noreferrer">documentation</a>, so this should actually work; I'm not sure what I'm missing. </p> <p>The same groupby works in pandas, but I need to improve its performance.</p> <p>I'm using dask 0.12.0 with Python 3.5.</p>
<p>From <a href="https://docs.dask.org/en/latest/dataframe-groupby.html#aggregate" rel="nofollow noreferrer">the Dask docs</a>:</p> <p>&quot;Dask supports Pandas’ aggregate syntax to run multiple reductions on the same groups. Common reductions such as max, sum, list and mean are directly supported.</p> <p>Dask also supports user defined reductions. To ensure proper performance, the reduction has to be formulated in terms of three independent steps. The chunk step is applied to each partition independently and reduces the data within a partition. The aggregate combines the within partition results. The optional finalize step combines the results returned from the aggregate step and should return a single final column. For Dask to recognize the reduction, it has to be passed as an instance of dask.dataframe.Aggregation.</p> <p>For example, sum could be implemented as:</p> <pre><code>custom_sum = dd.Aggregation('custom_sum', lambda s: s.sum(), lambda s0: s0.sum()) df.groupby('g').agg(custom_sum) </code></pre> <p>&quot;</p>
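<p>As a runnable sketch of that pattern applied to the question's <code>'AGE'</code> column (with hypothetical data; note that the original <code>lambda x: x * x</code> maps each value rather than reducing the group, so it is not an aggregation in this sense, but a reduction such as a sum of squares can be expressed as one):</p> <pre><code>import pandas as pd
import dask.dataframe as dd

# chunk reduces within each partition; agg combines the partition results
sum_of_squares = dd.Aggregation(
    'sum_of_squares',
    chunk=lambda s: s.apply(lambda part: (part ** 2).sum()),
    agg=lambda s0: s0.sum(),
)

pdf = pd.DataFrame({'one': list('aabb'), 'AGE': [1, 2, 3, 4]})
ddf = dd.from_pandas(pdf, npartitions=2)
print(ddf.groupby('one').agg({'AGE': sum_of_squares}).compute())
</code></pre>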
python|dask
0
254
38,468,549
how to convert pandas series to tuple of index and value
<p>I'm looking for an efficient way to convert a series to a tuple of its index with its values.</p> <pre><code>s = pd.Series([1, 2, 3], ['a', 'b', 'c']) </code></pre> <p>I want an array, list, series, some iterable:</p> <pre><code>[(1, 'a'), (2, 'b'), (3, 'c')] </code></pre>
<p>Well it seems simply <code>zip(s,s.index)</code> works too!</p> <p>For Python-3.x, we need to wrap it with <code>list</code> -</p> <pre><code>list(zip(s,s.index)) </code></pre> <p>To get a tuple of tuples, use <code>tuple()</code> : <code>tuple(zip(s,s.index))</code>.</p> <p>Sample run -</p> <pre><code>In [8]: s Out[8]: a 1 b 2 c 3 dtype: int64 In [9]: list(zip(s,s.index)) Out[9]: [(1, 'a'), (2, 'b'), (3, 'c')] In [10]: tuple(zip(s,s.index)) Out[10]: ((1, 'a'), (2, 'b'), (3, 'c')) </code></pre>
python|pandas|series|iterable
59
255
31,029,641
Python Kivy: Add Background loop
<p>I want to add a background loop to my Python-Kivy script. The problem is that I've only got an <code>App().run()</code> at the bottom of my script. So if I put a loop somewhere in the App class, the whole app stops updating and checking for events. Is there a function name, like <code>build(self)</code>, that's recognized by Kivy and represents a main/background loop?</p> <p><em>If you don't know what I'm talking about, feel free to ask.</em></p>
<p>In case you need to schedule a repeated activity in a loop, you can use <code>Clock.schedule_interval()</code> to call a function on a regular schedule:</p> <pre><code>from kivy.clock import Clock def my_repeated_function(dt): # dt is the elapsed time the Clock passes in print("My function called.") Clock.schedule_interval(my_repeated_function, 1.0 / 30) # no brackets on function reference # call it 30 times per second </code></pre> <p>There is a lot more information on how to schedule events on a regular, conditional or one-time basis with Kivy's event loop <a href="http://kivy.org/docs/guide/events.html" rel="nofollow">here</a>.</p>
android|python|infinite-loop|kivy
2
256
40,305,692
How to learn multi-class multi-output CNN with TensorFlow
<p>I want to train a convolutional neural network with TensorFlow to do multi-output multi-class classification.</p> <p>For example: we take the MNIST sample set and always combine two random images into a single one, and then want to classify the resulting image. The result of the classification should be the two digits shown in the image. </p> <p>So the output of the network could have the shape [-1, 2, 10] where the first dimension is the batch, the second represents the output (is it the first or the second digit) and the third is the "usual" classification of the shown digit. </p> <p>I have tried googling this for a while now, but wasn't able to find anything useful. Also, I don't know if multi-output multi-class classification is the correct naming for this task. If not, what is the correct naming? Do you have any links/tutorials/documentation/papers explaining what I'd need to do to build the loss function/training operations?</p> <p>What I tried was to split up the output of the network into the single outputs with tf.split and then use softmax_cross_entropy_with_logits on every single output. I averaged the result over all outputs, but it doesn't seem to work. Is this even a reasonable way?</p>
<p>For nomenclature of classification problems, you can have a look at this link: <a href="http://scikit-learn.org/stable/modules/multiclass.html" rel="nofollow noreferrer">http://scikit-learn.org/stable/modules/multiclass.html</a></p> <p>So your problem is called "Multilabel Classification". In normal TensorFlow multiclass classification (classic MNIST) you will have 10 output units and you will use <strong>softmax</strong> at the end for computing losses i.e. "tf.nn.softmax_cross_entropy_with_logits". </p> <p>Ex: If your image has "2", then groundtruth will be [0,0,1,0,0,0,0,0,0,0]</p> <p>But here, your network output will have 20 units and you will use <strong>sigmoid</strong> i.e. "tf.nn.sigmoid_cross_entropy_with_logits"</p> <p>Ex: If your image has "2" &amp; "4", then groundtruth will be [0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0], i.e. first ten bits to represent first digit class and second to represent second digit class.</p>
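<p>A minimal sketch of that loss (assuming the TF 1.x keyword API; the multi-hot groundtruth vectors are laid out as described above):</p> <pre><code>import tensorflow as tf

# 20 output units: the first 10 score the first digit, the last 10 the second.
logits = tf.placeholder(tf.float32, shape=[None, 20])
labels = tf.placeholder(tf.float32, shape=[None, 20])  # multi-hot groundtruth

# Independent per-unit sigmoid losses, averaged over units and the batch.
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))
</code></pre>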
tensorflow|conv-neural-network
3
257
29,043,138
Using Tweepy to determine the age of an account
<p>I'm looking to use Tweepy for a small project. I'd like to be able to write a bit of code that returns the age of a given Twitter account. The best way I can think of to do this is to return all Tweets from the very first page, find the earliest Tweet and check the date/timestamp on it. </p> <p>It's a bit hacky but I was wondering if anyone could think of an easier or cleaner way to accomplish this?</p>
<p>The get_user method returns a user object that includes a created_at field.</p> <p>Check <a href="https://dev.twitter.com/overview/api/users" rel="nofollow">https://dev.twitter.com/overview/api/users</a></p>
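<p>A minimal sketch (assuming the classic tweepy v1.1-style API, with placeholder credentials):</p> <pre><code>import datetime
import tweepy

# Hypothetical credentials; reuse whatever OAuth handler you already have.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

user = api.get_user("some_screen_name")
age = datetime.datetime.utcnow() - user.created_at  # created_at is UTC
print(user.created_at, age.days, "days old")
</code></pre>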
python|date|twitter|tweepy
1
258
58,619,136
how to remove \n and comma while extracting using response.css
<p>I am trying to crawl Amazon to get the product name, price and [savings information]. I am using response.css to extract the [savings information] as below.</p> <p>Python code to extract [savings information]:</p> <pre><code>savingsinfo = amzscrape.css(".a-color-secondary .a-row , .a-row.a-size-small.a-color-secondary span").css('::text').extract() </code></pre> <p>The above code returns the output below:</p> <pre><code>'savingsinfo_item': ['Save ', '$20.00', ' when you buy ', '$100.00', ' of select items'] </code></pre> <p>Expected output:</p> <pre><code>Save $20.00 when you buy $100 of select items </code></pre>
<pre class="lang-py prettyprint-override"><code>output = ''.join(savingsinfo['savingsinfo_item']) </code></pre>
python|css|web-scraping
2
259
52,358,022
BeautifulSoup not defined when called in function
<p>My web scraper is throwing <code>NameError: name 'BeautifulSoup' is not defined</code> when I call BeautifulSoup() inside my function, but it works normally when I call it outside the function and pass the Soup as an argument. </p> <p>Here is the working code:</p> <pre><code>from teams.models import * from bs4 import BeautifulSoup from django.conf import settings import requests, os, string soup = BeautifulSoup(open(os.path.join(settings.BASE_DIR, 'revolver.html')), 'html.parser') def scrapeTeamPage(soup): teamInfo = soup.find('div', 'profile_info') ... print(scrapeTeamPage(soup)) </code></pre> <p>But when I move the BeautifulSoup call inside my function, I get the error.</p> <pre><code>from teams.models import * from bs4 import BeautifulSoup from django.conf import settings import requests, os, string def scrapeTeamPage(url): soup = BeautifulSoup(open(os.path.join(settings.BASE_DIR, url)), 'html.parser') teamInfo = soup.find('div', 'profile_info') </code></pre>
<p>I guess you have a spelling mistake in BeautifulSoup somewhere; it's case sensitive. If not, use requests in your code like this:</p> <pre><code>from teams.models import * from bs4 import BeautifulSoup from django.conf import settings import requests, os, string def scrapeTeamPage(url): res = requests.get(url) soup = BeautifulSoup(res.content, 'html.parser') teamInfo = soup.find('div', 'profile_info') </code></pre>
python|beautifulsoup
2
260
19,037,703
Missing parameters when creating new table in Google BigQuery through Python API V2
<p>I'm trying to create new table using BigQuery's Python API:</p> <pre><code>bigquery.tables().insert( projectId="xxxxxxxxxxxxxx", datasetId="xxxxxxxxxxxxxx", body='{ "tableReference": { "projectId":"xxxxxxxxxxxxxx", "tableId":"xxxxxxxxxxxxxx", "datasetId":"accesslog"}, "schema": { "fields": [ {"type":"STRING", "name":"ip"}, {"type":"TIMESTAMP", "name":"ts"}, {"type":"STRING", "name":"event"}, {"type":"STRING", "name":"id"}, {"type":"STRING","name":"sh"}, {"type":"STRING", "name":"pub"}, {"type":"STRING", "name":"context"}, {"type":"STRING", "name":"brand"}, {"type":"STRING", "name":"product"} ] } }' ).execute() </code></pre> <p>The error I'm getting is:</p> <pre><code>(&lt;class 'apiclient.errors.HttpError'&gt;, &lt;HttpError 400 when requesting https://www.googleapis.com/bigquery/v2/projects/xxxxxxxxxxxxxx/datasets/xxxxxxxxxxxxxx/tables?alt=json returned "Required parameter is missing"&gt;, &lt;traceback object at 0x17e1c20&gt;) </code></pre> <p>I think all required parameters are included as far as this is documented at <a href="https://developers.google.com/resources/api-libraries/documentation/bigquery/v2/python/latest/bigquery_v2.tables.html#insert" rel="nofollow">https://developers.google.com/resources/api-libraries/documentation/bigquery/v2/python/latest/bigquery_v2.tables.html#insert</a></p> <p>What's missing?</p>
<p>The only required parameter for a <code>tables.insert</code> is the <code>tableReference</code>, which must have <code>tableId</code>, <code>datasetId</code>, and <code>projectId</code> fields. I think the actual issue may be that you're passing the JSON string when you could just pass a <code>dict</code> with the values. For instance, the following code works to create a table (note the <code>dataset_ref</code> is a Python trick to copy the contents to named arguments):</p> <pre><code>project_id = &lt;my project&gt; dataset_id = &lt;my dataset&gt; table_id = 'table_001' dataset_ref = {'datasetId': dataset_id, 'projectId': project_id} table_ref = {'tableId': table_id, 'datasetId': dataset_id, 'projectId': project_id} table = {'tableReference': table_ref} table = bigquery.tables().insert( body=table, **dataset_ref).execute(http) </code></pre>
python|google-bigquery
2
261
69,072,902
Loop does not iterate over all data
<p>I have code that produces the following df as output:</p> <pre><code> year month day category keywords 0 '2021' '09' '06' 'us' ['afghan, refugees, volunteers'] 1 '2021' '09' '05' 'us' ['politics' 'military, drone, strike, kabul'] 2 '2021' '09' '06' 'business' ['rto, return, to, office'] 3 '2021' '09' '06' 'nyregion' ['nyc, jewish, high, holy, days'] 4 '2021' '09' '06' 'world' ['americas' 'mexico, migrants, asylum, border'] 5 '2021' '09' '06' 'us' ['TAHOE, CALDORFIRE, WORKERS'] 6 '2021' '09' '06' 'nyregion' ['queens, flooding, cleanup'] 7 '2021' '09' '05' 'us' ['new, orleans, power, failure, traps, older, residents, in, homes'] 8 '2021' '09' '05' 'nyregion' ['biden, flood, new, york, new, jersey'] 9 '2021' '09' '06' 'technology' ['freedom, phone, smartphone, conservatives'] 10 '2021' '09' '06' 'sports' ['football' 'nfl, preview, nfc, predictions'] 11 '2021' '09' '06' 'sports' ['football' 'nfl, preview, afc, predictions'] 12 '2021' '09' '06' 'opinion' ['texas, abortion, september, 11'] 13 '2021' '09' '06' 'opinion' ['coronavirus, masks, school, board, meetings'] 14 '2021' '09' '06' 'opinion' ['south, republicans, vaccines, climate, change'] 15 '2021' '09' '06' 'opinion' ['labor, workers, rights'] 16 '2021' '09' '05' 'opinion' ['ku, kluxism, trumpism'] 17 '2021' '09' '05' 'opinion' ['culture' 'sexually, harassed, pentagon'] 18 '2021' '09' '05' 'opinion' ['parenting, college, empty, nest, pandemic'] 19 '2021' '09' '04' 'opinion' ['letters' 'coughlin, caregiving'] 20 '2021' '08' '24' 'opinion' ['kara, swisher, maggie, haberman, event'] 21 '2021' '09' '05' 'opinion' ['labor, day, us, history'] 22 '2021' '09' '04' 'opinion' ['drowning, our, future, in, the, past'] 23 '2021' '09' '04' 'opinion' ['biden, job, approval, rating'] 24 '2021' '09' '05' 'opinion' ['dorothy, day, christian, labor'] 25 '2021' '09' '03' 'business' ['goodbye, office, mom'] 26 '2021' '09' '06' 'business' ['media' 'burn, out, companies, pandemic'] 27 '2021' '08' '30' 'arts' ['music' 'popcast, lorde, solar, power'] 28 '2021' '09' '02' 'opinion' ['sway, kara, swisher, julie, cordua, ashton, kutcher'] 29 '2021' '08' '12' 'science' ['fauci, kids, and, covid, event'] 30 '2021' '09' '05' 'us' ['shooting, lakeland, florida'] 31 '2021' '09' '05' 'business' ['media' 'leah, finnegan, gawker'] 32 '2021' '09' '06' 'nyregion' ['piping, plovers, bird, rescue'] 33 '2021' '09' '05' 'us' ['anti, abortion, movement, texas, law'] 34 '2021' '09' '05' 'us' ['politics' 'bernie, sanders, budget, bill'] 35 '2021' '09' '05' 'world' ['africa' 'guinea, coup'] 36 '2021' '09' '05' 'sports' ['soccer' 'brazil, argentina, suspended'] 37 '2021' '09' '06' 'world' ['africa' 'south, africa, jacob, zuma, medical, parole'] 38 '2021' '09' '05' 'sports' ['nfl, social, justice'] 39 '2021' '09' '02' 'well' ['go, bag, essentials'] 40 '2021' '09' '01' 'parenting' ['raising, resilient, kids'] 41 '2021' '09' '03' 'books' ['911, anniversary, fiction, literature'] 42 '2021' '09' '01' 'arts' ['design' 'german, hygiene, museum'] 43 '2021' '09' '03' 'arts' ['music' 'opera, livestreams'] 44 '2021' '09' '04' 'style' ['the, return, of, the, dream, honeymoon'] &lt;class 'str'&gt; </code></pre> <p>I built a for loop to iterate over all the elements in the 'keyword' column and put them separately into a new df called df1. 
The loop looks like this:</p> <pre><code>df1 = pd.DataFrame(columns=['word']) i = 0 for p in df.loc[i, 'keywords']: teststr = df.loc[i, 'keywords'] splitstr = teststr.split() u = 0 for p1 in splitstr: dict_1 = {'word': splitstr[u]} df1.loc[len(df1)] = dict_1 u = u + 1 i = i + 1 print(df1) </code></pre> <p>The output it produces is:</p> <pre><code> word 0 ['afghan, 1 refugees, 2 volunteers'] 3 ['politics' 4 'military, 5 drone, 6 strike, 7 kabul'] 8 ['rto, 9 return, 10 to, 11 office'] 12 ['nyc, 13 jewish, 14 high, 15 holy, 16 days'] 17 ['americas' 18 'mexico, 19 migrants, 20 asylum, 21 border'] 22 ['TAHOE, 23 CALDORFIRE, 24 WORKERS'] 25 ['queens, 26 flooding, 27 cleanup'] 28 ['new, 29 orleans, 30 power, 31 failure, 32 traps, 33 older, 34 residents, 35 in, 36 homes'] 37 ['biden, 38 flood, 39 new, 40 york, 41 new, 42 jersey'] 43 ['freedom, 44 phone, 45 smartphone, 46 conservatives'] 47 ['football' 48 'nfl, 49 preview, 50 nfc, 51 predictions'] 52 ['football' 53 'nfl, 54 preview, 55 afc, 56 predictions'] 57 ['texas, 58 abortion, 59 september, 60 11'] 61 ['coronavirus, 62 masks, 63 school, 64 board, 65 meetings'] 66 ['south, 67 republicans, 68 vaccines, 69 climate, 70 change'] 71 ['labor, 72 workers, 73 rights'] 74 ['ku, 75 kluxism, 76 trumpism'] 77 ['culture' 78 'sexually, 79 harassed, 80 pentagon'] 81 ['parenting, 82 college, 83 empty, 84 nest, 85 pandemic'] 86 ['letters' 87 'coughlin, 88 caregiving'] 89 ['kara, 90 swisher, 91 maggie, 92 haberman, 93 event'] 94 ['labor, 95 day, 96 us, 97 history'] 98 ['drowning, 99 our, 100 future, 101 in, 102 the, 103 past'] 104 ['biden, 105 job, 106 approval, 107 rating'] 108 ['dorothy, 109 day, 110 christian, 111 labor'] 112 ['goodbye, 113 office, 114 mom'] 115 ['media' 116 'burn, 117 out, 118 companies, 119 pandemic'] 120 ['music' 121 'popcast, 122 lorde, 123 solar, 124 power'] 125 ['sway, 126 kara, 127 swisher, 128 julie, 129 cordua, 130 ashton, 131 kutcher'] 132 ['fauci, 133 kids, 134 and, 135 covid, 136 event'] 137 ['shooting, 138 lakeland, 139 florida'] 140 ['media' 141 'leah, 142 finnegan, 143 gawker'] </code></pre> <p>Although the for loop works fine, it does not iterate over all the rows of df and stops more or less in the middle (it doesn't always stop at the same spot).</p> <p>Do you have an idea why? Thanks in advance</p>
<p>I think the problem is that with:</p> <pre class="lang-py prettyprint-override"><code>for p in df.loc[i, 'keywords']: </code></pre> <p>you are iterating over the characters of the first entry, so the outer loop only runs that many times.</p> <p>This should work for you:</p> <pre class="lang-py prettyprint-override"><code>for teststr in df['keywords']: splitstr = teststr.split() for p1 in splitstr: dict_1 = {'word': p1} df1.loc[len(df1)] = dict_1 print(df1) </code></pre>
python|loops
1
262
68,898,700
How to use asyncio with PyQt6?
<p>qasync doesn't support PyQt6 yet, and I'm trying to run discord.py in the same loop as PyQt, but so far I haven't had much luck. I've tried multiprocessing, multithreading, and even running synchronous code from asynchronous code, but I either end up with blocking code that makes the PyQt program unresponsive, or it just outright doesn't work. Can somebody please point me in the right direction?</p>
<p><del>qasync does not currently support PyQt6 but I have created a <a href="https://github.com/CabbageDevelopment/qasync/pull/53" rel="nofollow noreferrer">PR</a> that implements it.</del></p> <p><del>At the moment you can install my version of qasync using the following command:</del></p> <pre><code>pip install git+https://github.com/eyllanesc/qasync.git@PyQt6 </code></pre> <p><del>Probably in future releases my PR will be accepted so there will already be support for PyQt6 </del></p> <p>They have accepted my PR, so you can now install the latest release of qasync, which has support for PyQt6.</p>
python|pyqt|python-asyncio|pyqt6
2
263
68,921,822
Used IDs are not available anymore in Selenium Python
<p>I am using Python and Selenium to <strong>scrape</strong> some data out of an website. This website has the following structure:</p> <p>First group item has the following base ID: <em><strong>frmGroupList_Label_GroupName</strong></em> and then you add <em><strong>_2</strong></em> or <em><strong>_3</strong></em> at the end of this base ID to get the 2nd/3rd group's ID.</p> <p>Same thing goes for the user item, it has the following base ID: <em><strong>frmGroupContacts_TextLabel3</strong></em> and then you add <em><strong>_2</strong></em> or <em><strong>_3</strong></em> at the end of this base ID to get the 2nd/3rd users's ID.</p> <p>What I am trying to do is to get all the users out of each group. And this is how I did it: find the first group, select it and grab all of it users, then, go back to the 2nd group, grab its users, and so on.</p> <pre><code>def grab_contact(number_of_members): groupContact = 'frmGroupContacts_TextLabel3' contact = browser.find_element_by_id(groupContact).text print(contact) i = 2 time.sleep(1) # write_to_excel(contact, group) while i &lt;= number_of_members: group_contact_string = groupContact + '_' + str(i) print(group_contact_string) try: contact = browser.find_element_by_id(group_contact_string).text print(contact) i = i + 1 time.sleep(1) # write_to_excel(contact, group) except NoSuchElementException: break time.sleep(3) </code></pre> <p>Same code applies for scraping the groups. And it works, up to a point!! Although the IDs of the groups are different, the IDs of the users are the same from one group to another. Example:</p> <p>group_id_1 = user_id_1, user_id_2</p> <p>group_id_2 = user_id_1, user_id_2, user_id_3, user_id_4, user_id_5</p> <p>group_id_3 = user_id_1, user_id_2, user_id_3</p> <p>The code runs, it goes to group_id_1, grabs user_id_1 and user_id_2 correctly, but when it gets to group_id_2, the user_id_1 and user_id_2 (which are different in matter of content) are EMPTY, and only user_id_3, user_id_4, user_id_5 are correct. Then, when it gets to group_id_3, all of the users are empty.</p> <p>This has to do with the users having same IDs. As soon as it gets to a certain user ID in a group, I cannot retrieve all the users before that ID in another group. I tried quitting the browser, and reopening a new browser (it doesn't work, the new browser doesn't open), tried refreshing the page (doesn't work), tried opening a new tab (doesn't work).</p> <p>I think the content of the IDs get stuck in memory when they are accessed, and are not freed when accessing a new group. Any ideas on how to get past this?</p> <p>Thanks!</p>
<p>As the saying goes... it ain't stupid, if it works.</p> <pre><code>def refresh(): # accessing the groups page url = &quot;https://google.com&quot; browser.get(url) time.sleep(5) url = &quot;https://my_url.com&quot; browser.get(url) time.sleep(5) </code></pre> <p>While trying to debug this and find a solution, I thought: &quot;what if you go to another website, then come back to yours, between group scrapings&quot;... and it works! Until I find another solution, I'll stick with this one.</p>
python|selenium|caching|memory|browser
0
264
62,368,281
Finding an unfilled circle in an image of finite size using Python
<p>Trying to find a circle in an <a href="https://1drv.ms/u/s!AtEnXvOorHZ4sp4moxmUXtErAc2lVw?e=7tfsYe" rel="nofollow noreferrer">image</a> that has finite radius. Started off using 'HoughCircles' method from OpenCV as the parameters for it seemed very much related to my situation. But it is failing to find it. Looks like the image may need more pre-processing for it to find reliably. So, started off playing with different thresholds in opencv to no success. <a href="https://1drv.ms/u/s!AtEnXvOorHZ4sp4moxmUXtErAc2lVw?e=7tfsYe" rel="nofollow noreferrer">Here</a> is an example of an image (note that the overall intensity of the image will vary, but the radius of the circle always remain the same ~45pixels)</p> <p>Here is what I have tried so far</p> <pre><code>image = cv2.imread('image1.bmp', 0) img_in = 255-image mean_val = int(np.mean(img_in)) ret, img_thresh = cv2.threshold(img_in, thresh=mean_val-30, maxval=255, type=cv2.THRESH_TOZERO) # detect circle circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.0, 100, minRadius=40, maxRadius=50) </code></pre> <p>If you look at the image, the circle is obvious, its a thin light gray circle in the center of the blob.</p> <p>Any suggestions? <em>Edited to show expected result</em> The expected result should be like <a href="https://1drv.ms/u/s!AtEnXvOorHZ4sp4lswhhLJGJ4qJoNQ" rel="nofollow noreferrer">this</a>, as you can see, the circle is very obvious for naked eye on the original image and is always of the same radius but not at the same location on the image. But there will be only <strong>one circle</strong> of this kind on any given image.</p> <p><strong>As of 8/20/2020, here is the code I am using to get the center and radii</strong></p> <pre><code>from numpy import zeros as np_zeros,\ full as np_full from cv2 import calcHist as cv2_calcHist,\ HoughCircles as cv2_HoughCircles,\ HOUGH_GRADIENT as cv2_HOUGH_GRADIENT def getCenter(img_in, saturated, minradius, maxradius): img_local = img_in[100:380,100:540,0] res = np_full(3, -1) # do some contrast enhancement img_local = stretchHistogram(img_local, saturated) circles = cv2_HoughCircles(img_local, cv2_HOUGH_GRADIENT, 1, 40, param1=70, param2=20, minRadius=minradius, maxRadius=maxradius) if circles is not None: # found some circles circles = sorted(circles[0], key=lambda x: x[2]) res[0] = circles[0][0]+100 res[1] = circles[0][1]+100 res[2] = circles[0][2] return res #x,y,radii def stretchHistogram(img_in, saturated=0.35, histMin=0.0, binSize=1.0): img_local = img_in.copy() img_out = img_in.copy() min, max = getMinAndMax(img_local, saturated) if max &gt; min: min = histMin+min * binSize max = histMin+max * binSize w, h = img_local.shape[::-1] #create a new lut lut = np_zeros(256) max2 = 255 for i in range(0, 256): if i &lt;= min: lut[i] = 0 elif i &gt;= max: lut[i] = max2 else: lut[i] = (round)(((float)(i - min) / (max - min)) * max2) #update image with new lut values for i in range(0, h): for j in range(0, w): img_out[i, j] = lut[img_local[i, j]] return img_out def getMinAndMax(img_in, saturated): img_local = img_in.copy() hist = cv2_calcHist([img_local], [0], None, [256], [0, 256]) w, h = img_local.shape[::-1] pixelCount = w * h saturated = 0.5 threshold = (int)(pixelCount * saturated / 200.0) found = False count = 0 i = 0 while not found and i &lt; 255: count += hist[i] found = count &gt; threshold i = i + 1 hmin = i i = 255 count = 0 while not found and i &gt; 0: count += hist[i] found = count &gt; threshold i = i - 1 hmax = i return hmin, hmax </code></pre> <p>and calling the above function 
as</p> <pre><code>getCenter(img, 5.0, 55, 62) </code></pre> <p>But it is still very unreliable. Not sure why it is so hard to get to an algorithm that works reliably for something that is very obvious to a naked eye. Not sure why there is so much variation in the result from frame to frame even though there is no change between them.</p> <p>Any suggestions are greatly appreciated. Here are some more <a href="https://1drv.ms/u/s!AtEnXvOorHZ4sqBv50JglDDaaxJHLA?e=ohrsKg" rel="nofollow noreferrer">samples</a> to play with</p>
<p>Simple: draw your circles. <code>cv2.HoughCircles</code> returns a list of circles.</p> <p>Take care of <code>maxRadius = 100</code>.</p> <pre><code>for i in circles[0,:]: # draw the outer circle cv2.circle(image,(i[0],i[1]),i[2],(255,255,0),2) # draw the center of the circle cv2.circle(image,(i[0],i[1]),2,(255,0,255),3) </code></pre> <p>Full working code (you have to change your thresholds):</p> <pre><code>import cv2 import numpy as np image = cv2.imread('0005.bmp', 0) height, width = image.shape print(image.shape) img_in = 255-image mean_val = int(np.mean(img_in)) blur = cv2.blur(img_in , (3,3)) ret, img_thresh = cv2.threshold(blur, thresh=100, maxval=255, type=cv2.THRESH_TOZERO) # detect circle circles = cv2.HoughCircles(img_thresh, cv2.HOUGH_GRADIENT,1,40,param1=70,param2=20,minRadius=60,maxRadius=0) print(circles) for i in circles[0,:]: # check if center is in middle of picture if(i[0] &gt; width/2-30 and i[0] &lt; width/2+30 \ and i[1] &gt; height/2-30 and i[1] &lt; height/2+30 ): # draw the outer circle cv2.circle(image,(i[0],i[1]),i[2],(255,255,0),2) # draw the center of the circle cv2.circle(image,(i[0],i[1]),2,(255,0,255),3) cv2.imshow("image", image ) while True: keyboard = cv2.waitKey(2320) if keyboard == 27: break cv2.destroyAllWindows() </code></pre> <p>Result: <a href="https://i.stack.imgur.com/Dl4sb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dl4sb.jpg" alt="enter image description here"></a></p>
python|opencv
2
265
56,418,087
How to Plot Time Stamps HH:MM on Python Matplotlib "Clock" Polar Plot
<p>I am trying to plot mammalian feeding data on time points on a polar plot. In the example below, there is only one day, but each day will eventually be plotted on the same graph (via different axes). I currently have all of the aesthetics worked out, but my data is not graphing correctly. How do I get the hours to plot correctly?</p> <p>I assume that the solution will likely have to do with pd.datetime and np.deg2rad, but I have not found the correct combo.</p> <p>I am importing my data from csv, and filtering each day based on the date as follows:</p> <pre><code>#Filtered portion: Day1 = df[df.Day == '5/22'] </code></pre> <p>This gives me the following data:</p> <pre><code> Day Time Feeding_Quality Feed_Num 0 5/22 16:15 G 2 1 5/22 19:50 G 2 2 5/22 20:15 G 2 3 5/22 21:00 F 1 4 5/22 23:30 G 2 </code></pre> <p>Here is the code:</p> <pre><code>fig = plt.figure(figsize=(7,7)) ax = plt.subplot(111, projection = 'polar') ax.bar(Day1['Time'], Day1['Feed_Num'], width = 0.1, alpha=0.3, color='red', label='Day 1') # Make the labels go clockwise ax.set_theta_direction(-1) #Place Zero at Top ax.set_theta_offset(np.pi/2) #Set the circumference ticks ax.set_xticks(np.linspace(0, 2*np.pi, 24, endpoint=False)) # set the label names ticks = ['12 AM', '1 AM', '2 AM', '3 AM', '4 AM', '5 AM', '6 AM', '7 AM','8 AM','9 AM','10 AM','11 AM','12 PM', '1 PM', '2 PM', '3 PM', '4 PM', '5 PM', '6 PM', '7 PM', '8 PM', '9 PM', '10 PM', '11 PM' ] ax.set_xticklabels(ticks) # suppress the radial labels plt.setp(ax.get_yticklabels(), visible=False) #Bars to the wall plt.ylim(0,2) plt.legend(bbox_to_anchor=(1,0), fancybox=True, shadow=True) plt.show() </code></pre> <p>As you can assume from the data, all bars plotted would be in the afternoon, but as you can see from the graph output, the data is all over the place.</p> <p><img src="https://i.imgur.com/eszgcIL.png" alt="polar clock plot"></p>
<pre><code>import numpy as np from matplotlib import pyplot as plt import datetime df = pd.DataFrame({'Day': {0: '5/22', 1: '5/22', 2: '5/22', 3: '5/22', 4: '5/22'}, 'Time': {0: '16:15', 1: '19:50', 2: '20:15', 3: '21:00', 4: '23:30'}, 'Feeding_Quality': {0: 'G', 1: 'G', 2: 'G', 3: 'F', 4: 'G'}, 'Feed_Num': {0: 2, 1: 2, 2: 2, 3: 1, 4: 2}}) </code></pre> <p>Create a series of datetime.datetime objects from the <code>'Time'</code> column; transform that into percentages of 24 hours; transform that into radians.</p> <pre><code>xs = pd.to_datetime(df['Time'],format= '%H:%M' ) xs = xs - datetime.datetime.strptime('00:00:00', '%H:%M:%S') xs = xs.dt.seconds / (24 * 3600) xs = xs * 2 * np.pi </code></pre> <p>Use that as the <em>x</em> values for the plot</p> <pre><code>fig = plt.figure(figsize=(7,7)) ax = plt.subplot(111, projection = 'polar') ax.bar(xs, df['Feed_Num'], width = 0.1, alpha=0.3, color='red', label='Day 1') # Make the labels go clockwise ax.set_theta_direction(-1) #Place Zero at Top ax.set_theta_offset(np.pi/2) #Set the circumference ticks ax.set_xticks(np.linspace(0, 2*np.pi, 24, endpoint=False)) # set the label names ticks = ['12 AM', '1 AM', '2 AM', '3 AM', '4 AM', '5 AM', '6 AM', '7 AM','8 AM','9 AM','10 AM','11 AM','12 PM', '1 PM', '2 PM', '3 PM', '4 PM', '5 PM', '6 PM', '7 PM', '8 PM', '9 PM', '10 PM', '11 PM' ] ax.set_xticklabels(ticks) # suppress the radial labels plt.setp(ax.get_yticklabels(), visible=False) #Bars to the wall plt.ylim(0,2) plt.legend(bbox_to_anchor=(1,0), fancybox=True, shadow=True) plt.show() </code></pre> <hr> <p>The <code>'Time'</code> column could also be transformed to radians with</p> <pre><code>def trans(x): h,m = map(int,x) return 2 * np.pi * (h + m/60)/24 xs = df['Time'].str.split(':') xs = xs.apply(trans) </code></pre> <p>Which is probably a little better than using timedelta's - that seemed a little convoluted.</p>
python|matplotlib
2
266
56,246,052
How to fix 'else' outputting more than 1 outcome
<p>Very basic problem: I am trying to output whether a number is divisible by 3, by 5, by both, or by neither, but the else branch prints a second statement even when one of the earlier conditions was true. How do I fix this?</p> <p>I've tried moving where the else is indented: the first time it wouldn't output anything for numbers that are not multiples of 3 or 5, and the second time it would output two answers.</p> <pre><code>while True: z = input("Please enter a number- to end the program enter z as -1 ") if z % 3 == 0 and z % 5 ==0: print("Your number is a multiple of 3 and 5") elif z % 3 == 0 and z % 5 != 0: print("Your number is a multiple of 3") elif z % 3 != 0 and z % 5 ==0: print("Your number is a multiple of 5") if z == -1: break else: print("Your number is not a multiple of 3 or 5") </code></pre> <p>I.e. if 67 is entered, <code>Your number is not a multiple of 3 or 5</code> is printed as expected. But if 15 is entered, <code>Your number is a multiple of 3 and 5</code> and <code>Your number is not a multiple of 3 or 5</code> are both output, which is unexpected.</p>
<p>If you incorporate all the comment suggestions so far you get something like this:</p> <pre><code>while True: z = input("Please enter a number- to end the program enter z as -1 ") # cast to int z = int(z) # break early if z == -1: break elif z % 3 == 0 and z % 5 == 0: print("Your number is a multiple of 3 and 5") elif z % 3 == 0: print("Your number is a multiple of 3") elif z % 5 == 0: print("Your number is a multiple of 5") else: print("Your number is not a multiple of 3 or 5") </code></pre>
python
2
267
63,572,310
pytest will not run the test files in subdirectories
<p>I am new to pytest and trying to run a simple test to check if pytest works. I'm using Windows 10, Python 3.8.5 and pytest 6.0.1.</p> <p>Here is my project directory:</p> <pre><code>projects/ tests/ __init__.py test_sample.py </code></pre> <p>Here is what I put in test_sample.py:</p> <pre><code>def func(x): return x + 1 def test_answer(): assert func(3) == 5 </code></pre> <p>If I do the following:</p> <pre><code>&gt; pytest the test runs fine (1 failed in 0.004s) &gt; pytest tests/test_sample.py the test runs fine (1 failed in 0.006s) </code></pre> <p>However, if I do this:</p> <pre><code>&gt; pytest test_sample.py </code></pre> <p>It will return a message like this:</p> <pre><code>no test ran in 0.000s ERROR: file not found: test_sample.py </code></pre> <p>I tried deleting the <code>__init__.py</code> file but the result was still the same. Also, I have tried this on 2 different computers but nothing changed. In case the problem can't be solved, can I just ignore it and move on with the invocations that do work?</p>
<p>The &quot;best practices&quot; approach to configuring a project with pytest is using <a href="https://docs.pytest.org/en/latest/customize.html#initialization-determining-rootdir-and-configfile" rel="nofollow noreferrer">a config file</a>. The simplest solution is a <code>pytest.ini</code> that looks like this:</p> <pre class="lang-py prettyprint-override"><code># pytest.ini [pytest] testpaths = tests </code></pre> <p>This configures the <a href="https://docs.pytest.org/en/latest/reference.html#confval-testpaths" rel="nofollow noreferrer"><code>testpaths</code></a> relative to your <a href="https://docs.pytest.org/en/latest/customize.html#finding-the-rootdir" rel="nofollow noreferrer"><code>rootdir</code></a> (pytest will tell you what both paths are whenever you run it). This answers the specific problem you raised in your question.</p> <pre><code>C:\YourProject &lt;&lt;-- Run pytest on this path and it will be considered your rootdir. │ │ pytest.ini │ your_module.py │ ├───tests &lt;&lt;-- This is the directory you configured as testpaths in pytest.ini │ __init__.py │ test_sample.py </code></pre> <p>Your example was about running specific tests from the command line. The complete set of rules for <a href="https://docs.pytest.org/en/latest/customize.html#finding-the-rootdir" rel="nofollow noreferrer">finding the <code>rootdir</code> from <code>args</code></a> is somewhat contrived.</p> <p>You should notice that pytest currently supports <a href="https://docs.pytest.org/en/latest/goodpractices.html#choosing-a-test-layout-import-rules" rel="nofollow noreferrer">two possible layouts</a> for your tests and modules. It's currently <a href="https://docs.pytest.org/en/latest/goodpractices.html#choosing-a-test-layout-import-rules" rel="nofollow noreferrer">strongly suggested by pytest documentation to use a <code>src</code> layout</a>. Answering about the importance of using <code>__init__.py</code> depends on the former to an extent, however choosing a configuration file and layout still takes precedence over how you choose to use <code>__init__.py</code> to define your packages.</p>
python|pytest
1
268
36,341,820
Updating R that is used within IPython/ Jupyter
<p>I wanted to use R within Jupyter Notebook so I installed via R Essentials (see: <a href="https://www.continuum.io/blog/developer/jupyter-and-conda-r" rel="nofollow">https://www.continuum.io/blog/developer/jupyter-and-conda-r</a>). The version that got installed is the following:</p> <pre><code>R.Version() Out[2]: $platform "x86_64-w64-mingw32" $arch "x86_64" $os "mingw32" $system "x86_64, mingw32" $status "" $major "3" $minor "1.3" $year "2015" $month "03" $day "09" $svn rev "67962" $language "R" $version.string "R version 3.1.3 (2015-03-09)" $nickname "Smooth Sidewalk" </code></pre> <p>I have attempted to update R and install some packages (like RWeka for example) to no avail. I have looked for various sources but nothing seems to point me in the right direction. Does anyone know what to do?</p> <p>My main motivation is trying to use R libaries but will get warnings like the following:</p> <pre><code>library("RWeka") Warning message: : package 'RWeka' was built under R version 3.2.4Warning message: In unique(paths): bytecode version mismatch; using eval </code></pre>
<p>If you want to stay with conda packages, try <code>conda update --all</code>, but I think there are still no R 3.2.x packages for Windows.</p> <p>You can also install R via the binary installer available at r-project.org and install the R kernel manually, e.g. via </p> <pre><code>install_github("irkernel/repr") install_github("irkernel/IRdisplay") install_github("irkernel/IRkernel") </code></pre> <p>and then make this kernel available in the notebook </p> <pre><code>IRkernel::installspec(name = 'ir32', displayname = 'R 3.2') </code></pre>
r|ipython|jupyter
5
269
19,361,740
How to find orphan process's pid
<p>How can I find a child process's PID after the parent process has died? I have a program that creates a child process that continues running after it (the parent) terminates.</p> <p>i.e.,</p> <p>I run a program from a Python script <code>(PID = 2)</code>.</p> <p>The script calls <code>program P (PID = 3, PPID = 2)</code></p> <p>P calls <code>fork()</code>, and now I have another instance of P named P` <code>(PID = 4 and PPID = 3)</code>.</p> <p>After P terminates, P`'s PID is 4 and its PPID is 1.</p> <p>Assuming that I have the PID of P (3), how can I find the <strong>PID</strong> of the child P`?</p> <p>Thanks.</p>
<p>The information is lost when a process-in-the-middle terminates. So in your situation there is no way to find this out.</p> <p>You can, of course, invent your own infrastructure to store this information at forking time. The middle process (PID 3 in your example) can save the information about which child PIDs it created (e.g. in a file, or by reporting back to the parent process (PID 2 in your example) via pipes or similar).</p>
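<p>A minimal sketch of the pipe idea (POSIX only; the role comments map onto the PIDs from the question):</p> <pre><code>import os

r, w = os.pipe()
pid_middle = os.fork()
if pid_middle == 0:                # middle process ("P", PID 3)
    os.close(r)
    pid_child = os.fork()
    if pid_child == 0:             # forked child ("P`", PID 4)
        os.close(w)
        # ... long-running work; reparented to init once the middle exits
        os._exit(0)
    os.write(w, str(pid_child).encode())  # report before the info is lost
    os.close(w)
    os._exit(0)                    # middle terminates, child is orphaned
else:                              # original parent (the script, PID 2)
    os.close(w)
    orphan_pid = int(os.read(r, 32))
    os.close(r)
    os.waitpid(pid_middle, 0)
    print("orphaned child PID:", orphan_pid)
</code></pre>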
python|linux|process
2
270
13,506,498
"Threading" in Python, plotting received data and sending simultaneously
<p>I am asking for some high level advice here. I am using Python to plot data that is received constantly through serial. At the same time, I want the user to be able to input data through a prompt (such as the Python shell). That data will then be sent through the same serial port to talk to the device that is also sending the data. </p> <p>My problem is that the plotting app.MainLoop() "Thread" seems to block and it won't show my raw_input portion until the window is closed. I've also tried putting those 4 lines inside my while loop but the same problem occurs: it lets me input all my information once, but once plotting starts it blocks forever until I close the graphing window.</p> <pre><code>if __name__ == '__main__': app = wx.App() window = DataLoggerWindow() window.Show() app.MainLoop() prompt_counter = "main" while(1): if prompt_counter == "main": ans = raw_input("Press f for freq, press a for amplitude: \n") if ans == "f": prompt_counter = "freq" elif ans == "a": prompt_counter = "amp" else: prompt_counter = "main" elif prompt_counter == "freq": freq = raw_input("Enter the frequency you want to sample at in Hz: \n") ser.write("f"+freq+"\n") prompt_counter = "main" elif prompt_counter == "amp": amp = raw_input("Type in selection") ser.write("a"+amp+"\n") prompt_counter = "main" </code></pre> <p>All the plotting portion does is read the serial port, and print the data received. Both portions work separately with the device on the backend. So I'm pretty sure this is a problem with how I wrote the Python code, but I'm not sure why.... Any ideas?</p>
<p>Disclaimer: I don't think that the following is good practice.</p> <p>You can put the execution of the wx stuff inside a separate thread.</p> <pre><code>app = wx.App() window = DataLoggerWindow() import threading class WindowThread(threading.Thread): def run(self): window.Show() app.MainLoop() WindowThread().start() </code></pre> <p>That way, the <code>MainLoop</code> is only blocking another thread and the main thread should still be usable.</p> <p>However, I think that this is not the optimal approach and you should rather use something like the App.OnIdle hook.</p>
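<p>As a minimal sketch of that last suggestion (assuming wxPython's idle events; <code>RequestMore()</code> keeps idle events coming so the polling continues):</p> <pre><code>import wx

def on_idle(event):
    # poll the serial port / check an input queue here, without blocking
    event.RequestMore()

app = wx.App()
frame = wx.Frame(None, title="Data logger")
frame.Bind(wx.EVT_IDLE, on_idle)
frame.Show()
app.MainLoop()
</code></pre>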
python|multithreading|wxpython|blocking|pyserial
1
271
22,163,797
building dictionary to be JSON encoded - python
<p>I have a list of class objects. Each object needs to be added to a dictionary so that it can be json encoded. I've already determined that I will need to use the json library and <code>dump</code> method. The objects look like this:</p> <pre><code>class Metro: def __init__(self, code, name, country, continent, timezone, coordinates, population, region): self.code = code #string self.name = name #string self.country = country #string self.continent = continent #string self.timezone = timezone #int self.coordinates = coordinates #dictionary as {"N" : 40, "W" : 88} self.population = population #int self.region = region #int </code></pre> <p>So the json will look like this: </p> <pre><code>{ "metros" : [ { "code" : "SCL" , "name" : "Santiago" , "country" : "CL" , "continent" : "South America" , "timezone" : -4 , "coordinates" : {"S" : 33, "W" : 71} , "population" : 6000000 , "region" : 1 } , { "code" : "LIM" , "name" : "Lima" , "country" : "PE" , "continent" : "South America" , "timezone" : -5 , "coordinates" : {"S" : 12, "W" : 77} , "population" : 9050000 , "region" : 1 } , {... </code></pre> <p>Is there a simple solution for this? I've been looking into dict comprehension but it seems it will be very complicated.</p>
<p>dict comprehension will not be very complicated.</p> <pre><code>import json list_of_metros = [Metro(...), Metro(...)] fields = ('code', 'name', 'country', 'continent', 'timezone', 'coordinates', 'population', 'region',) d = { 'metros': [ {f:getattr(metro, f) for f in fields} for metro in list_of_metros ] } json_output = json.dumps(d, indent=4) </code></pre>
python|json|dictionary
3
272
43,684,048
Tensorflow: building graph with batch sizes varying in dimension 1?
<p>I'm trying to build a CNN model in Tensorflow where all the inputs within a batch are of equal shape, but between batches the inputs vary in dimension 1 (i.e. minibatch sizes are the same but minibatch shapes are not). </p> <p>To make this clearer, I have data (Nx23x1) of various values N that I sort in ascending order first. In each batch (50 samples) I zero-pad every sample so that each N_i equals the max N within its minibatch. Now I have defined a Tensorflow placeholder for the batch input:</p> <pre><code>input = tf.placeholder(tf.float32, shape=(batch_size, None, IMAGE_WIDTH, NUM_CHANNELS)) </code></pre> <p>I use 'None' in the input placeholder because between batches this value varies, even though within a batch it doesn't. In running my training code, I use a feed_dict to pass in values for input (numpy matrix) as defined in the tutorials.</p> <p>My CNN code takes in this input; however, this is where I run into issues. I get a ValueError when trying to flatten the input just before my fully connected layers. It tries to flatten the array but one of the dimensions is still 'None'. So then I tried:</p> <pre><code>length = tf.shape(input)[1] reshaped = tf.reshape(input, [batch_size, length, IMAGE_WIDTH, NUM_CHANNELS]) </code></pre> <p>But still the value is 'None' and I am getting issues when trying to build the graph initially. My FC layer (and the flattening) explicitly takes in 'input_to_layer.get_shape()[1]' in building the weight and bias tensors, but it cannot handle the None input. </p> <p>I am quite lost as to how to proceed! Help would be much appreciated, thanks :) </p> <p><strong>## EDIT ##</strong></p> <p>Danevskyi points out below that this may not be possible. What if instead of the fully connected layer, I wanted to mean pool over the entire caption (i.e. for the 1024 flat filters of size (D,) outputted from the prior conv layer, I want to create a 1024-dim vector by mean pooling over the length D of each filter)? Is this possible with 1D Global Avg Pooling? Again between batches the value of D would vary...</p> <p><strong>## UPDATE ##</strong></p> <p>The global mean pooling method from tflearn (tflearn.layers.conv.global_avg_pool) doesn't need a specified window size; it uses the full input dimension, so it's compatible even with unknown 'None' dimensions in TensorFlow.</p>
<p>There is no way to do this, as you want to use a differently shaped matrix (for fully-connected layer) for every distinct batch. </p> <p>One possible solution is to use global average pooling (along all spatial dimensions) to get a tensor of shape <code>(batch_size, 1, 1, NUM_CHANNELS)</code> regardless of the second dimension.</p>
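<p>A minimal sketch of that idea in raw TF 1.x (hypothetical channel count; the mean is taken over the two spatial axes, so the <code>None</code> dimension disappears):</p> <pre><code>import tensorflow as tf

NUM_CHANNELS = 64  # hypothetical number of filters from the last conv layer

# (batch, None, width, channels) becomes (batch, 1, 1, channels)
feats = tf.placeholder(tf.float32, shape=(None, None, 23, NUM_CHANNELS))
pooled = tf.reduce_mean(feats, axis=[1, 2], keep_dims=True)
flat = tf.reshape(pooled, [-1, NUM_CHANNELS])  # fixed-size input for FC layers
</code></pre>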
tensorflow|conv-neural-network
2
273
54,677,982
How can I find out if a file-like object performs newline translation?
<p>I have a <a href="https://github.com/aptiko/textbisect" rel="nofollow noreferrer">library</a> that does some kind of binary search in a seekable open file that it receives as an argument.</p> <p>The file must have been opened with <code>open(..., newline="\n")</code>, otherwise <code>.seek()</code> and <code>.tell()</code> might not work properly if there's newline translation.</p> <p>The README of the library does make this clear, but it's still easy to miss. I missed it myself and spent time wondering why things weren't working properly. I'd therefore like to make the library raise an error or at least a warning if it receives a file-like object that performs text translation. Is it possible to make this check?</p>
<p>I see two ways around this. One is Python 3.7's <a href="https://docs.python.org/3/library/io.html#io.TextIOWrapper.reconfigure" rel="nofollow noreferrer">io.TextIOWrapper.reconfigure()</a> (thanks @martineau!).</p> <p>The second one is to make some tests to see whether <code>seek</code>/<code>tell</code> work as expected. A simple but inefficient way to do it is this:</p> <pre><code>from io import SEEK_END

def has_newlines_translated(f):
    f.seek(0)
    file_size_1 = len(f.read())        # character count, after any translation
    file_size_2 = f.seek(0, SEEK_END)  # end-of-file position reported by the stream
    return file_size_1 != file_size_2
</code></pre> <p>It may be possible to do it more efficiently by reading character by character (with <code>f.read(1)</code>) until past the first newline and playing with <code>seek()</code>/<code>tell()</code> to see whether results are consistent, but it's tricky and it wouldn't work in all cases (e.g. if the first newline is a lone <code>\n</code> whereas other newlines are <code>\r\n</code>).</p>
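<p>For the first option, a minimal sketch (Python 3.7+; the filename is a placeholder): rather than detecting translation after the fact, switch it off on the object you received, provided nothing has been read from it yet:</p> <pre><code>f = open("data.txt")         # opened with platform-dependent newline translation
f.reconfigure(newline="\n")  # must happen before any data is read from the stream
</code></pre>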
python|python-3.x
0
274
54,687,461
Opencv - Ellipse Contour Not fitting correctly
<p>I want to draw contours around the concentric ellipses shown in the image appended below. I am not getting the expected result. </p> <p><strong><em>I have tried the following steps:</em></strong></p> <ol> <li>Read the Image </li> <li>Convert Image to Grayscale.</li> <li>Apply GaussianBlur</li> <li>Get the Canny edges</li> <li>Draw the ellipse contour</li> </ol> <p><strong><em>Here is the Source code:</em></strong></p> <pre><code>import cv2 target=cv2.imread('./source image.png') targetgs = cv2.cvtColor(target,cv2.COLOR_BGRA2GRAY) targetGaussianBlurGreyScale=cv2.GaussianBlur(targetgs,(3,3),0) canny=cv2.Canny(targetGaussianBlurGreyScale,30,90) kernel=cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)) close=cv2.morphologyEx(canny,cv2.MORPH_CLOSE,kernel) _,contours,_=cv2.findContours(close,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE) if len(contours) != 0: for c in contours: if len(c) &gt;= 50: hull=cv2.convexHull(c) cv2.ellipse(target,cv2.fitEllipse(hull),(0,255,0),2) cv2.imshow('mask',target) cv2.waitKey(0) cv2.destroyAllWindows() </code></pre> <p><strong><em>The image below shows the Expected &amp; Actual result:</em></strong> <a href="https://i.stack.imgur.com/dt08E.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dt08E.jpg" alt="Expected &amp; Actual Result Image"></a></p> <p><strong><em>Source Image:</em></strong> </p> <p><a href="https://i.stack.imgur.com/qZyCg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qZyCg.png" alt="Source Image"></a></p>
<p>The algorithm can be simple:</p> <ol> <li><p>Convert RGB to HSV, split the channels and work with the V channel.</p></li> <li><p>Threshold it to drop all the colored lines.</p></li> <li><p>HoughLinesP to remove the remaining non-colored (straight) lines.</p></li> <li><p>Dilate + erode to close the holes in the ellipses.</p></li> <li><p>findContours + fitEllipse.</p></li> </ol> <p>Result:</p> <p><a href="https://i.stack.imgur.com/6H53K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6H53K.png" alt="Result image"></a></p> <p>With the new image (with the added black curve) my approach does not work. It seems that you need to use Hough ellipse detection instead of "findContours + fitEllipse". OpenCV doesn't have an implementation, but you can find one <a href="http://scikit-image.org/docs/dev/auto_examples/edges/plot_circular_elliptical_hough_transform.html" rel="nofollow noreferrer">here</a> or <a href="https://github.com/horiken4/ellipse-detection" rel="nofollow noreferrer">here</a>.</p> <p>If you aren't afraid of C++ code (for the OpenCV library, C++ is more expressive), then:</p> <pre><code>cv::Mat rgbImg = cv::imread("sqOOE.jpg", cv::IMREAD_COLOR);

cv::Mat hsvImg;
cv::cvtColor(rgbImg, hsvImg, cv::COLOR_BGR2HSV);

std::vector&lt;cv::Mat&gt; chans;
cv::split(hsvImg, chans);

cv::threshold(255 - chans[2], chans[2], 200, 255, cv::THRESH_BINARY);

std::vector&lt;cv::Vec4i&gt; linesP;
cv::HoughLinesP(chans[2], linesP, 1, CV_PI/180, 50, chans[2].rows / 4, 10);
for (auto l : linesP)
{
    cv::line(chans[2], cv::Point(l[0], l[1]), cv::Point(l[2], l[3]), cv::Scalar::all(0), 3, cv::LINE_AA);
}

cv::dilate(chans[2], chans[2], cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)), cv::Point(-1, -1), 4);
cv::erode(chans[2], chans[2], cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)), cv::Point(-1, -1), 3);

std::vector&lt;std::vector&lt;cv::Point&gt; &gt; contours;
std::vector&lt;cv::Vec4i&gt; hierarchy;
cv::findContours(chans[2], contours, hierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);

for (size_t i = 0; i &lt; contours.size(); i++)
{
    if (contours[i].size() &gt; 4)
    {
        cv::ellipse(rgbImg, cv::fitEllipse(contours[i]), cv::Scalar(255, 0, 255), 2);
    }
}

cv::imshow("rgbImg", rgbImg);
cv::waitKey(0);
</code></pre>
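<p>For the Hough-ellipse route, scikit-image ships an implementation. A rough Python sketch follows; the filename and all parameter values are placeholders that would need tuning for this image, and the transform is quite slow on large images:</p> <pre><code>from skimage import io, color, feature
from skimage.transform import hough_ellipse

# placeholder filename; the [..., :3] slice drops an alpha channel if present
img = color.rgb2gray(io.imread('source_image.png')[..., :3])
edges = feature.canny(img, sigma=2.0)

# candidate ellipses as (accumulator, yc, xc, a, b, orientation) records
result = hough_ellipse(edges, accuracy=20, threshold=250, min_size=50, max_size=200)
result.sort(order='accumulator')  # strongest candidate ends up last
print(result[-1])
</code></pre>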
python-3.x|opencv|computer-vision
4
275
71,166,697
How can I delete stopwords from a column in a df?
<p>I've been trying to delete the stopwords from a column in a df, but I'm having trouble doing it.</p> <pre><code>discografia[&quot;SSW&quot;] = [word for word in discografia.CANCIONES if not word in stopwords.words('spanish')] </code></pre> <p>But in the new column I just get the same words as in the column &quot;CANCIONES&quot;. What am I doing wrong? Thanks!</p>
<p>We can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><code>explode</code></a> in conjunction with grouping by the original index to assign back to the original DataFrame.</p> <pre><code>import pandas as pd

stopwords = [&quot;buzz&quot;]
df = pd.DataFrame({&quot;CANCIONES&quot;: [[&quot;fizz&quot;, &quot;buzz&quot;, &quot;foo&quot;], [&quot;baz&quot;, &quot;buzz&quot;]]})

words = &quot;|&quot;.join(stopwords)  # regex alternation matching any of the stopwords
exploded = df.explode(&quot;CANCIONES&quot;)
print(exploded)

  CANCIONES
0      fizz
0      buzz
0       foo
1       baz
1      buzz

df[&quot;SSW&quot;] = exploded.loc[~exploded.CANCIONES.str.contains(words)].reset_index().groupby(
    &quot;index&quot;, as_index=False
).agg({&quot;CANCIONES&quot;: list}).CANCIONES
print(df)

           CANCIONES          SSW
0  [fizz, buzz, foo]  [fizz, foo]
1        [baz, buzz]        [baz]
</code></pre>
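<p>If the regex route feels heavy, a sketch of a simpler per-row alternative (same sample data) does the filtering with a plain list comprehension inside <code>apply</code>:</p> <pre><code>df[&quot;SSW&quot;] = df.CANCIONES.apply(lambda words: [w for w in words if w not in stopwords])
</code></pre>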
python|dataframe
0
276
9,324,802
Running interactive python script from emacs
<p>I am a fairly proficient vim user, but friends of mine told me so much good stuff about emacs that I decided to give it a try -- especially after finding about the aptly-named evil mode...</p> <p>Anyways, I am currently working on a python script that requires user input (a subclass of cmd.Cmd). In vim, if I wanted to try it, I could simply do <code>:!python %</code> and then could interact with my script, until it quits. In emacs, I tried <code>M-! python script.py</code>, which would indeed run the script in a separate buffer, but then RETURNs seems not to be sent back to the script, but are caught by the emacs buffer instead. I also tried to have a look at python-mode's <code>C-c C-c</code>, but this runs the script in some temporary directory, whereas I just want to run it in <code>(pwd)</code>.</p> <p>So, is there any canonical way of doing that?</p>
<p>I don't know about <em>canonical</em>, but if I needed to interact with a script I'd do <kbd>M</kbd>-<kbd>x</kbd><code>shell</code><kbd>RET</kbd> and run the script from there.</p> <p>There's also <kbd>M</kbd>-<kbd>x</kbd><code>terminal-emulator</code> for more serious terminal emulation, not just shell stuff.</p>
python|emacs
4
277
39,372,778
How can I print the entire converted sentence on a single line?
<p>I am trying to expand on Codecademy's Pig Latin converter to practice basic programming concepts.</p> <p>I believe I have the logic nearly right (I'm sure it's not as concise as it could be!) and now I am trying to output the converted Pig Latin sentence entered by the user on a single line.</p> <p>If I print from inside the for loop it prints on new lines each time. If I print from outside it only prints the first word as it is not iterating through all the words.</p> <p>Could you please advise where I am going wrong?</p> <p>Many, many thanks for your help.</p> <pre><code>pyg = 'ay'

print ("Welcome to Matt's Pig Latin Converter!")

def convert(original):
    while True:
        if len(original) &gt; 0 and (original.isalpha() or " " in original):
            print "You entered \"%s\"." % original
            split_list = original.split()
            for word in split_list:
                first = word[0]
                new_sentence = word[1:] + first + pyg
                final_sentence = "".join(new_sentence)
                print final_sentence
            break
        else:
            print ("That's not a valid input. Please try again.")
            return convert(raw_input("Please enter a word: "))

convert(raw_input("Please enter a word: "))
</code></pre>
<p>Try:</p> <pre><code>pyg = 'ay'

print ("Welcome to Matt's Pig Latin Converter!")

def convert(original):
    while True:
        if len(original) &gt; 0 and (original.isalpha() or " " in original):
            final_sentence = ""
            print "You entered \"%s\"." % original
            split_list = original.split()
            for word in split_list:
                first = word[0]
                new_sentence = word[1:] + first + pyg
                # strings have no .append(); concatenate instead
                final_sentence += new_sentence + " "
            print final_sentence
            break
        else:
            print ("That's not a valid input. Please try again.")
            return convert(raw_input("Please enter a word: "))

convert(raw_input("Please enter a word: "))
</code></pre> <p>It's because you were remaking final_sentence every time in the for loop instead of adding to it.</p>
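<p>For what it's worth, a tidier sketch of the same loop builds the whole sentence with <code>join</code> instead of manual concatenation:</p> <pre><code>final_sentence = " ".join(word[1:] + word[0] + pyg for word in split_list)
print final_sentence
</code></pre>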
python|join|printing
0
278
52,854,560
How to use if statements on Tags in Beautiful Soup?
<p>I'm a beginner using Beautiful Soup and I have a question to do with 'if' statements.</p> <p>I am trying to scrape data from tables on a webpage, but there are preceding and following tables too.</p> <p>All the required tables have divisions of the same form, while the useless tables have various divisions.</p> <p>What I thought of doing was using find_all to search for all table divisions and then looping through the result, appending to a list all of the divisions whose .contents attribute has as its first item a tag with the attribute align = 'center'. But I didn't know how to do that, since the tag is a Beautiful Soup object and I don't know how to work with it.</p> <p>I have my attempted code below and if anyone could give me some tips it would be greatly appreciated.</p> <pre><code>import requests
from bs4 import BeautifulSoup

r = requests.get('https://afltables.com/afl/stats/2018.html')
soup = BeautifulSoup(r.text, 'html.parser')
results = soup.find_all('tr')

lists = []
for result in results:
    if result.contents[0] == 'align = centre':
        #append to some list
</code></pre>
<p>This would get you what you are looking for, I believe.</p> <pre><code>for result in results:
    if 'align="center"' in str(result.contents[0]):
        #append to some list
</code></pre>
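<p>A slightly more defensive sketch checks the attribute on the tag object itself instead of its string form, which also skips rows whose first child is plain text:</p> <pre><code>from bs4.element import Tag

for result in results:
    first = result.contents[0] if result.contents else None
    if isinstance(first, Tag) and first.get('align') == 'center':
        lists.append(result)
</code></pre>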
python|html|web-scraping|html-table|beautifulsoup
1
279
52,848,894
How to click HTML button in Python + Selenium
<p>I am trying to simulate a button click in Python using Selenium. </p> <pre><code>&lt;li class="next" role="button" aria-disabled="false"&gt;&lt;a href="www.abc.com"&gt;Next →&lt;/a&gt;&lt;/li&gt; </code></pre> <p>The Python script is <code>driver.find_element_by_class_name('next').click()</code>.</p> <p>This gives an error. Can someone suggest how I can click an element like this?</p>
<p>You can try the following code:</p> <pre><code>from selenium.webdriver.support import ui
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

ui.WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, ".next[role='button']"))
).click()
</code></pre> <p>Hope it helps you!</p>
python|python-3.x|selenium
1
280
47,974,874
Algorithm for grouping points in given distance
<p>I'm currently searching for an <strong>efficient</strong> algorithm that takes in a set of points from three dimensional space and groups them into classes (maybe represented by a list). A point should belong to a class if it is close to one or more other points from the class. Two classes are then the same if they share any point. Because I'm working with large data sets, I don't want to use recursive methods. Also, using something like a distance matrix with O(n^2) performance is what I try to avoid.</p> <p>I tried to check for some algorithms online, but most of them don't appeal to this specific purpose (e.g. k-d tree or other cluster algorithms). I thought about partitioning space into smaller parts, but that (potentially) results in an inexact result.</p> <p>I tried to write something myself, but it turned out to be flawed. I would sort my points by distance, append the distance as a fourth coordinate, and then repeat the following code segment:</p> <pre><code>def grouping_presorted(lst, distance):
    positions = [0]
    x = []
    while positions:
        curr_el = lst[positions[-1]]
        nn_i = HasNeighbor(lst, distance, positions[-1])

        if nn_i is None:
            x.append(lst.pop(positions[-1]))
            positions.pop(-1)
        else:
            positions.append(nn_i)
    return x

def HasNeighbor(lst, distance, index):
    i = index + 1
    while lst[i][3] - lst[index][3] &lt; distance:
        dist = (lst[i][0]-lst[index][0])**2 + (lst[i][1]-lst[index][1])**2 + (lst[i][2]-lst[index][2])**2
        if dist &lt; distance:
            return i
        i += 1
    return None
</code></pre> <p>Aside from a (probably easy to fix) overflow error, there's a bigger flaw in the logic of linking the points. If you think of my points as describing lines in space, the algorithm only works for lines that strictly point outwards from the origin, but not for circles or similar structures.</p> <p>Does anybody know of prewritten code for this, or have an idea what I could try?</p> <p>Thanks in advance.</p> <p><strong>Edit:</strong> It seems my spelling and maybe confusion of some terms has sparked some misunderstandings. I hope that this (badly-made) sketch helps. In this example, I marked my reference distance as d and circled the two containers I want to end up with in red. <a href="https://i.stack.imgur.com/g46ju.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g46ju.png" alt="sample"></a></p>
<h2>What I ended up doing</h2> <p>After following all the suggestions of your comments, help from cs.stackexchange and doing some research, I was able to write down two different methods for solving this problem. In case someone might be interested, I decided to share them here. Again, the problem is to write a program that takes in a set of coordinate tuples and groups them into clusters. Two points x, y belong to the same cluster if there is a sequence of elements x=x_1, .., y=x_N such that d(x_i, x_{i+1}) &lt; r for every i.</p> <hr> <p><strong>DBSCAN:</strong> By fixing the euclidean metric, minPts = 2 and grouping distance epsilon = r. scikit-learn provides a nice implementation of this algorithm. A minimal code snippet for the task would be:</p> <pre><code>from sklearn.cluster import DBSCAN
from sklearn.datasets.samples_generator import make_blobs
import networkx as nx
import scipy.spatial as sp

def cluster(data, epsilon, N): #DBSCAN, euclidean distance
    db = DBSCAN(eps=epsilon, min_samples=N).fit(data)
    labels = db.labels_ #labels of the found clusters
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0) #number of clusters
    clusters = [data[labels == i] for i in range(n_clusters)] #list of clusters
    return clusters, n_clusters

N = 20000       # number of sample points, as in the timing below
epsilon = 0.1   # grouping distance r
centers = [[1, 1, 1], [-1, -1, 1], [1, -1, 1]]
X, _ = make_blobs(n_samples=N, centers=centers, cluster_std=0.4, random_state=0)
cluster(X, epsilon, 2)  # minPts = 2, as fixed above
</code></pre> <p>On my machine, <em>N=20000</em> for this clustering variation with an epsilon of <em>epsilon = 0.1</em> takes just <em>290ms</em>, so this seems really quick to me.</p> <hr> <p><strong>Graph components:</strong> One can think of this problem as follows: The coordinates define nodes of a graph, and two nodes are adjacent if their distance is smaller than epsilon/r. A cluster is then given as a connected component of this graph. At first I had problems implementing this graph, but there are many ways to write a linear time algorithm to do this. The easiest and fastest way however, for me, was to use scipy.spatial's cKDTree data structure and the corresponding query_pairs() method, which returns a list of index tuples of points that are within the given distance. One could for example write it like this:</p> <pre><code>class IGraph:
    def __init__(self, nodelst=[], radius=1):
        self.igraph = nx.Graph()
        self.radii = radius
        self.nodelst = nodelst #nodelst is an array of coordinate tuples, graph contains indices as nodes
        self.__make_edges__()

    def __make_edges__(self):
        self.igraph.add_edges_from(sp.cKDTree(self.nodelst).query_pairs(r=self.radii))

    def get_conn_comp(self):
        ind = [list(x) for x in nx.connected_components(self.igraph) if len(x) &gt; 1]
        return [self.nodelst[indlist] for indlist in ind]

def graph_cluster(data, epsilon):
    graph = IGraph(nodelst=data, radius=epsilon)
    clusters = graph.get_conn_comp()
    return clusters, len(clusters)
</code></pre> <p>For the same dataset mentioned above, this method takes <em>420ms</em> to find the connected components. However, for smaller clusters, e.g. N=700, this snippet runs faster. It also seems to have an advantage for finding smaller clusters (that is, when given smaller epsilon values) and a vast disadvantage in the other direction (all on this specific dataset of course). 
I think, depending on the given situation, both methods are worth considering.</p> <p>Hope this is of use for somebody.</p> <p><em>Edit:</em> Theoretically, DBSCAN has computational complexity O(n log n) when properly implemented (according to wikipedia...), while constructing the graph as well as finding its connected components runs linear in time. I'm not sure how well these statements hold for the given implementations though.</p>
python|algorithm|performance
2
281
34,401,791
Please help. I get this error: "SyntaxError: Unexpected EOF while parsing"
<pre><code>try:
    f1=int(input("enter first digit"))
    f2=int(input("enter second digit"))
    answ=(f1/f2)
    print (answ)
except ZeroDivisionError:
</code></pre>
<p>You can't have an <code>except</code> line with nothing after it. You have to have <em>some</em> code there, even if it doesn't do anything.</p> <pre><code>try:
    f1=int(input("enter first digit"))
    f2=int(input("enter second digit"))
    answ=(f1/f2)
    print (answ)
except ZeroDivisionError:
    pass
</code></pre>
python
1
282
34,034,812
What is the role of magic methods in Python?
<p>Based on my understanding, magic methods such as <code>__str__</code>, <code>__next__</code> and <code>__setattr__</code> are built-in features of Python. They are called automatically, for example when an instance object is created. They also play a role in overriding. What other important features of magic methods am I omitting or ignoring?</p>
<p>"magic" methods in python do specific things in specific contexts.</p> <p>For example, to "override" the addition operator (+), you'd define a <code>__add__</code> method. subtraction is <code>__sub__</code>, etc.</p> <p>Other methods are called during object creation (<code>__new__</code>, <code>__init__</code>). Other methods are used with specific language constructs (<code>__enter__</code>, <code>__exit__</code> and you might argue <code>__init__</code> and <code>__next__</code>).</p> <p>Really, there's nothing <em>special</em> about magic methods other than they are guaranteed to be called by the language at specific times. As the programmer, you're given the power to hook into structure and change the way an object behaves in those circumstances.</p> <p>For a near complete summary, have a look at the python <a href="https://docs.python.org/2/reference/datamodel.html" rel="nofollow">data model</a>.</p>
python
4
283
7,187,493
Persisting test data across apps
<p>My Django site has two apps, <code>Authors</code> and <code>Books</code>. My <code>Books</code> app has a model with a foreign key to a model in <code>Authors</code>. I have some tests for the <code>Authors</code> app which test all my models and managers, and these work fine. However, my <code>Books</code> app requires some data from the <code>Authors</code> app in order to function.</p> <p>Can I specify the order in which my tests are run, and make the generated test data from app <code>Authors</code> persist, so that I can test my <code>Books</code> app without having to copy over the tests which generate data from the <code>Authors</code> app?</p> <p>I might be doing this all wrong. Am I?</p> <p>Thanks.</p>
<p>Create a <a href="https://docs.djangoproject.com/en/dev/howto/initial-data/#providing-initial-data-with-fixtures" rel="nofollow">fixture</a> containing the test data you need. You can then load the same data for both your <code>Authors</code> and <code>Books</code> tests.</p> <p>For details, see the <a href="https://docs.djangoproject.com/en/dev/topics/testing/#django.test.TestCase.fixtures" rel="nofollow">docs on TestCase.fixtures</a> and <a href="http://ericholscher.com/blog/2008/nov/5/introduction-pythondjango-testing-fixtures/" rel="nofollow">Introduction to Python/Django tests: Fixtures</a>.</p>
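<p>A minimal sketch of what that looks like in the <code>Books</code> tests; <code>authors.json</code> is a hypothetical fixture file, e.g. produced with <code>manage.py dumpdata</code> from the <code>Authors</code> app:</p> <pre><code># books/tests.py
from django.test import TestCase

class BookTests(TestCase):
    # loaded into a fresh test database before every test method
    fixtures = ['authors.json']

    def test_book_has_author(self):
        pass  # Author rows from the fixture are available here
</code></pre>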
python|django|unit-testing|testing|integration-testing
0
284
39,694,357
loop through numpy arrays, plot all arrays to single figure (matplotlib)
<p>The functions below each plot a single numpy array.<br /> plot1D, plot2D, and plot3D take arrays with 1, 2, and 3 columns, respectively.</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

def plot1D(data):
    x = np.arange(len(data))
    plot2D(np.hstack((np.transpose(x), data)))

def plot2D(data):
    # type: (object) -&gt; object
    #if 2d, make a scatter
    plt.plot(data[:,0], data[:,1], *args, **kwargs)

def plot3D(data):
    #if 3d, make a 3d scatter
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ax.plot(data[:,0], data[:,1], data[:,2], *args, **kwargs)
</code></pre> <p>I would like the ability to input a list of 1, 2, or 3d arrays and plot all arrays from the list onto one figure.</p> <p>I have added the looping elements, but am unsure how to hold a figure and add additional plots...</p> <pre><code>def plot1D_list(data):
    for i in range(0, len(data)):
        x = np.arange(len(data[i]))
        plot2D(np.hstack((np.transpose(x), data[i])))

def plot2D_list(data):
    # type: (object) -&gt; object
    #if 2d, make a scatter
    for i in range(0, len(data)):
        plt.plot(data[i][:,0], data[i][:,1], *args, **kwargs)

def plot3D_list(data):
    #if 3d, make a 3d scatter
    for i in range(0, len(data)):
        fig = plt.figure()
        ax = fig.add_subplot(111, projection='3d')
        ax.plot(data[i][:,0], data[i][:,1], data[i][:,2], *args, **kwargs)
</code></pre>
<p>To plot multiple data sets on the same axes, you can do something like this:</p> <pre><code>def plot2D_list(data, *args, **kwargs):
    # type: (object) -&gt; object
    #if 2d, make a scatter
    n = len(data)
    fig, ax = plt.subplots() #create figure and axes
    for i in range(n):
        #now plot data set i
        ax.plot(data[i][:,0], data[i][:,1], *args, **kwargs)
</code></pre> <p>Your other functions can be generalised in the same way. Here's an example of using the above function with 5 sets of randomly generated x-y coordinates, each with length 100 (each of the 5 data sets appears as a different color):</p> <pre><code>import numpy as np

X = np.random.randn(5, 100, 2)
plot2D_list(X, 'o')
plt.show()
</code></pre> <p><a href="https://i.stack.imgur.com/gm2eK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gm2eK.png" alt="enter image description here"></a></p>
python|arrays|numpy|matplotlib
1
285
16,529,524
remove arguments passed to chrome by selenium / chromedriver
<p>I'm using selenium with python and chromium / chromedriver. I want to REMOVE switches passed to chrome (e.g. --full-memory-crash-report), but so far I could only find out how to add further switches.</p> <p>My current setup:</p> <pre><code>from selenium import webdriver driver = webdriver.Chrome(executable_path="/path/to/chromedriver") driver.get(someurl) </code></pre> <p>As far as I understand this can be used to add arguments:</p> <pre><code>from selenium.webdriver.chrome.options import Options chrome_options = Options() chrome_options.add_argument("--some-switch") driver = webdriver.Chrome(chrome_options=chrome_options) </code></pre> <p>So, how do I get rid of default arguments or wipe all default arguments clean and pass only a custom list?</p>
<p>This helped me:</p> <pre><code>options = webdriver.ChromeOptions()
options.add_experimental_option("excludeSwitches", ["test-type"])
options.add_argument("--incognito")
driver = webdriver.Chrome(options=options)
</code></pre> <p>Found the solution here <a href="https://help.applitools.com/hc/en-us/articles/360007189411--Chrome-is-being-controlled-by-automated-test-software-notification" rel="nofollow noreferrer">https://help.applitools.com/hc/en-us/articles/360007189411--Chrome-is-being-controlled-by-automated-test-software-notification</a></p>
python|selenium|webdriver|selenium-webdriver|selenium-chromedriver
4
286
40,690,674
importing from a text file to a dictionary
<p>filename:<code>dictionary.txt</code></p> <pre><code>YAHOO:YHOO GOOGLE INC:GOOG Harley-Davidson:HOG Yamana Gold:AUY Sotheby’s:BID inBev:BUD </code></pre> <p>code:</p> <pre><code>infile = open('dictionary.txt', 'r') content= infile.readlines() infile.close() counters ={} for line in content: counters.append(content) print(counters) </code></pre> <p>i am trying to import contents of the file.txt to the dictionary. I have searched through stack overflow but please an answer in a simple way (not with open...) </p>
<p>First off, instead of opening and closing the files explicitly you can use the <code>with</code> statement for opening files, which closes the file automatically at the end of the block.</p> <p>Secondly, as file objects are iterator-like objects (one-shot iterables), you can loop over the lines and split them at the <code>:</code> character. You can do all of this in a <a href="https://www.python.org/dev/peps/pep-0289/" rel="nofollow noreferrer"><em>generator expression</em></a> within the <code>dict</code> function:</p> <pre><code>with open('dictionary.txt') as infile:
    my_dict = dict(line.strip().split(':') for line in infile)
</code></pre>
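<p>For the sample file above this yields (dict ordering may differ before Python 3.7):</p> <pre><code>&gt;&gt;&gt; my_dict
{'YAHOO': 'YHOO', 'GOOGLE INC': 'GOOG', 'Harley-Davidson': 'HOG',
 'Yamana Gold': 'AUY', 'Sotheby’s': 'BID', 'inBev': 'BUD'}
</code></pre>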
python|python-3.x|dictionary
0
287
26,123,137
Python Daemon: checking to have one daemon run at all times
<p>myalert.py</p> <pre><code>from daemon import Daemon
import os, time, sys

class alertDaemon(Daemon):
    def run(self):
        while True:
            time.sleep(1)

if __name__ == "__main__":
    alert_pid = '/tmp/ex.pid'

    # run only if the pid file doesn't exist yet
    if os.path.isfile(alert_pid):  # is this check enough?
        sys.exit(0)

    daemon = alertDaemon(alert_pid)
    daemon.start()
</code></pre> <p>Given that no other programs or users will create the pid file:</p> <p>1) Is there a case where the pid file does not exist, yet the daemon process is still running?<br> 2) Is there a case where the pid file exists, yet the daemon isn't running?</p> <p>Because if the answer is yes to at least one of the questions above, then simply checking for the existence of the pid file isn't enough, given that my goal is to have one daemon running at all times.</p> <p>Q: If I have to check for the process, I am hoping to avoid something like a <code>ps -ef</code> system call and grepping for the name of the script. Is there a standard way of doing this?</p> <p>Note: the script, myalert.py, will be a cronjob</p>
<p>The <a href="https://pypi.python.org/pypi/python-daemon/" rel="nofollow"><code>python-daemon</code></a> library, which is the reference implementation for <a href="http://legacy.python.org/dev/peps/pep-3143/" rel="nofollow">PEP 3143</a>: "Standard daemon process library", handles this by using a file lock (via the <a href="https://pypi.python.org/pypi/lockfile" rel="nofollow"><code>lockfile</code></a> library) on the pid file you pass to the <code>DaemonContext</code> object. The underlying OS guarantees that the file lock will be released when the daemon process exits, even if it exited uncleanly. Here's a simple usage example:</p> <pre><code>import daemon
from daemon.pidfile import PIDLockFile

context = daemon.DaemonContext(
    pidfile=PIDLockFile('/var/run/spam.pid'),
)

with context:
    main()
</code></pre> <p>So, if a new instance starts up, it doesn't have to determine if the process that created the existing pid file is still running via the pid itself; if it can acquire the file lock, then no other instances are running (since they'd have acquired the lock). If it can't acquire the lock, then another daemon instance must be running.</p> <p>The only way you'd run into trouble is if someone came along and manually deleted the pid file while the daemon was running. But I don't think you need to worry about someone deliberately breaking things in that way.</p> <p>Ideally, <code>python-daemon</code> would be part of the standard library, as was the original goal of PEP 3143. Unfortunately, the PEP got deferred, essentially because there was no one willing to actually do the remaining work needed to get it added to the standard library:</p> <blockquote> <p>Further exploration of the concepts covered in this PEP has been deferred for lack of a current champion interested in promoting the goals of the PEP and collecting and incorporating feedback, and with sufficient available time to do so effectively.</p> </blockquote>
python|daemon|python-daemon
2
288
2,084,292
Where (at which point in the code) does pyAMF client accept SSL certificate?
<p>I've set up a server listening on an SSL port. I am able to connect to it and with proper credentials I am able to access the services (echo service in the example below)</p> <p>The code below works fine, but I don't understand <b>at which point the client accepts the certificate</b></p> <p>Server:</p> <pre><code>import os.path import logging import cherrypy from pyamf.remoting.gateway.wsgi import WSGIGateway logging.basicConfig( level=logging.DEBUG, format='%(asctime)s %(levelname)-5.5s [%(name)s] %(message)s' ) def auth(username, password): users = {"user": "pwd"} if (users.has_key(username) and users[username] == password): return True return False def echo(data): return data class Root(object): @cherrypy.expose def index(self): return "This is your main website" gateway = WSGIGateway({'myservice.echo': echo,}, logger=logging, debug=True, authenticator=auth) localDir = os.path.abspath(os.path.dirname(__file__)) CA = os.path.join(localDir, 'new.cert.cert') KEY = os.path.join(localDir, 'new.cert.key') global_conf = {'global': {'server.socket_port': 8443, 'environment': 'production', 'log.screen': True, 'server.ssl_certificate': CA, 'server.ssl_private_key': KEY}} cherrypy.tree.graft(gateway, '/gateway/') cherrypy.quickstart(Root(), config=global_conf) </code></pre> <p>Client:</p> <pre><code>import logging from pyamf.remoting.client import RemotingService logging.basicConfig( level=logging.DEBUG, format='%(asctime)s %(levelname)-5.5s [%(name)s] %(message)s' ) client = RemotingService('https://localhost:8443/gateway', logger=logging) client.setCredentials('user', 'pwd') service = client.getService('myservice') print service.echo('Echo this') </code></pre> <p>Now, when I run this, it runs <b>OK</b>, the client log is below:</p> <pre><code>2010-01-18 00:50:56,323 INFO [root] Connecting to https://localhost:8443/gateway 2010-01-18 00:50:56,323 DEBUG [root] Referer: None 2010-01-18 00:50:56,323 DEBUG [root] User-Agent: PyAMF/0.5.1 2010-01-18 00:50:56,323 DEBUG [root] Adding request myservice.echo('Echo this',) 2010-01-18 00:50:56,324 DEBUG [root] Executing single request: /1 2010-01-18 00:50:56,324 DEBUG [root] AMF version: 0 2010-01-18 00:50:56,324 DEBUG [root] Client type: 0 2010-01-18 00:50:56,326 DEBUG [root] Sending POST request to /gateway 2010-01-18 00:50:56,412 DEBUG [root] Waiting for response... 2010-01-18 00:50:56,467 DEBUG [root] Got response status: 200 2010-01-18 00:50:56,467 DEBUG [root] Content-Type: application/x-amf 2010-01-18 00:50:56,467 DEBUG [root] Content-Length: 41 2010-01-18 00:50:56,467 DEBUG [root] Server: PyAMF/0.5.1 Python/2.5.2 2010-01-18 00:50:56,467 DEBUG [root] Read 41 bytes for the response 2010-01-18 00:50:56,468 DEBUG [root] Response: &lt;Envelope amfVersion=0 clientType=0&gt; (u'/1', &lt;Response status=/onResult&gt;u'Echo this'&lt;/Response&gt;) &lt;/Envelope&gt; 2010-01-18 00:50:56,468 DEBUG [root] Removing request: /1 Echo this </code></pre> <p>The line <b>2010-01-18 00:50:56,467 DEBUG [root] Read 41 bytes for the response</b> looks suspicious, since the response is too short (the certificate is ~1K) and I'd expect the cert transfer to be in the debug log.</p> <p><b>Question: At which point does the client accept the certificate? Where would it be stored by default? Which config parameter sets the default location?</b> </p>
<p>PyAMF uses <code>httplib</code> under the hood to power the remoting requests. When connecting via <code>https://</code>, <a href="http://docs.python.org/library/httplib.html#httplib.HTTPSConnection" rel="nofollow noreferrer">httplib.HTTPSConnection</a> is used as the <code>connection</code> attribute to the <code>RemotingService</code>.</p> <p>It states in the docs that (in reference to HTTPSConnection):</p> <blockquote> <p>Note: This does not do any certificate verification</p> </blockquote> <p>So, in answer to your question certificates are basically ignored, even if you supply <code>key_file</code>/<code>cert_file</code> arguments to <code>connection</code>.</p> <p>The actual ignoring is done when the <code>connect</code> method is called - when the request is actually made to the gateway ..</p> <blockquote> <p>[root] Sending POST request to /gateway</p> </blockquote> <p>The <code>Read 41 bytes for the response</code> is the unencrypted http response length.</p> <p>This answer may not contain all the info you require but should go some way to explaining the behaviour you're seeing.</p>
python|ssl|certificate|cherrypy|pyamf
2
289
1,802,971
NameError: name 'self' is not defined
<p>Why such structure</p> <pre><code>class A: def __init__(self, a): self.a = a def p(self, b=self.a): print b </code></pre> <p>gives an error <code>NameError: name 'self' is not defined</code>?</p>
<p>Default argument values are evaluated at function define-time, but <code>self</code> is an argument only available at function call time. Thus arguments in the argument list cannot refer to each other.</p> <p>It's a common pattern to default an argument to <code>None</code> and add a test for that in code:</p> <pre><code>def p(self, b=None):
    if b is None:
        b = self.a
    print b
</code></pre> <p><strong>Update 2022:</strong> Python developers are now <a href="https://www.python.org/dev/peps/pep-0671/" rel="noreferrer">considering late-bound argument defaults</a> for future Python versions.</p>
python|nameerror
199
290
63,087,983
How to send post requests using multi threading in python?
<p>I'm trying to use multi threading to send POST requests with tokens from a txt file.</p> <p>I only managed to send GET requests; if I try to send POST requests it results in an error. I tried modifying the GET to POST but it gets an error.</p> <p>I want to send POST requests with the tokens in them and verify for each token whether it is true or false (json response).</p> <p>Here is the code:</p> <pre><code>import threading
import time
from queue import Queue

import requests

file_lines = open(&quot;tokens.txt&quot;, &quot;r&quot;).readlines()  # Gets the tokens from the txt file.
for line in file_lines:
    param = {
        &quot;Token&quot;: line.replace('/n', '')
    }

def make_request(url):
    &quot;&quot;&quot;Makes a web request, prints the thread name, URL, and response text.
    &quot;&quot;&quot;
    resp = requests.get(url)
    with print_lock:
        print(&quot;Thread name: {}&quot;.format(threading.current_thread().name))
        print(&quot;Url: {}&quot;.format(url))
        print(&quot;Response code: {}\n&quot;.format(resp.text))

def manage_queue():
    &quot;&quot;&quot;Manages the url_queue and calls the make request function&quot;&quot;&quot;
    while True:
        # Stores the URL and removes it from the queue so no
        # other threads will use it.
        current_url = url_queue.get()

        # Calls the make_request function
        make_request(current_url)

        # Tells the queue that the processing on the task is complete.
        url_queue.task_done()

if __name__ == '__main__':
    # Set the number of threads.
    number_of_threads = 5

    # Needed to safely print in multi-threaded programs.
    print_lock = threading.Lock()

    # Initializes the queue that all threads will pull from.
    url_queue = Queue()

    # The list of URLs that will go into the queue.
    urls = [&quot;https://www.google.com&quot;] * 30

    # Start the threads.
    for i in range(number_of_threads):
        # Send the threads to the function that manages the queue.
        t = threading.Thread(target=manage_queue)

        # Makes the thread a daemon so it exits when the program finishes.
        t.daemon = True
        t.start()

    start = time.time()

    # Puts the URLs in the queue
    for current_url in urls:
        url_queue.put(current_url)

    # Wait until all threads have finished before continuing the program.
    url_queue.join()

    print(&quot;Execution time = {0:.5f}&quot;.format(time.time() - start))
</code></pre> <p>I want to send a POST request for each token in the txt file.</p> <p>The error I get when replacing GET with POST:</p> <pre><code>Traceback (most recent call last):
  File &quot;C:\Users\Creative\Desktop\multithreading.py&quot;, line 40, in
    url_queue = Queue()
NameError: name 'Queue' is not defined
current_url = url_queue.post()
AttributeError: 'Queue' object has no attribute 'post'
  File &quot;C:\Users\Creative\Desktop\multithreading.py&quot;, line 22, in manage_queue
</code></pre> <p>I also tried a solution using tornado and async, but without success.</p>
<p>I finally managed to do POST requests using multi threading.</p> <p>If anyone sees an error, or can suggest an improvement to my code, feel free to do so :)</p> <pre><code>import requests
from concurrent.futures import ThreadPoolExecutor, as_completed
from time import time

url_list = [
    &quot;https://www.google.com/api/&quot;
]
tokens = {'Token': '326729'}

def download_file(url):
    html = requests.post(url, stream=True, data=tokens)
    return html.content

start = time()

processes = []
with ThreadPoolExecutor(max_workers=200) as executor:
    for url in url_list:
        processes.append(executor.submit(download_file, url))

for task in as_completed(processes):
    print(task.result())

print(f'Time taken: {time() - start}')
</code></pre>
python|multithreading|post|python-requests
1
291
32,232,462
Scrolled Panel not working in wxPython
<pre><code>class Frame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, "SCSM Observatory Log", size=(700, 700))

        panel = wxScrolledPanel.ScrolledPanel(self, -1, size=(800, 10000))
        panel.SetupScrolling()
</code></pre> <p>Could someone please explain why this code is not working? I am not getting any errors, but it's like the scrolling commands are not being initialized, possibly?</p> <p>Edit: The scrolling works but I have to resize the window and make it smaller to enable the scrolling capabilities. Also, it will not scroll all the way to the bottom.</p> <p>Edit 2: Apparently the scroll bar only scrolls as far as the vertical size of the frame. So if I set the frame y-size to 1000, it will scroll to 1000. The only problem is that a window that large would be too big for the monitor this is used on. Is there a way to force the scrollbar to go to a distance that is larger than the size of the frame? For example, I would like the window to open with size of (700,700), but I need the scrollbar to go to 1000.</p>
<p>Not sure why it is not working for you, following a sample which works for me. I like using sized_controls as they handle sizers nicely (in my view).</p> <pre><code>#!/usr/bin/env python # -*- coding: utf-8 -*- import wx print(wx.VERSION_STRING) import wx.lib.sized_controls as SC class MyCtrl(SC.SizedPanel): def __init__(self, parent): super(MyCtrl, self).__init__(parent) tx1 = wx.TextCtrl(self) tx1.SetSizerProps(expand=True, proportion=1) tx2 = wx.TextCtrl(self) tx2.SetSizerProps(expand=True, proportion=1) class MyFrame(SC.SizedFrame): def __init__(self, parent): super(MyFrame, self).__init__(parent, style=wx.RESIZE_BORDER|wx.DEFAULT_DIALOG_STYLE) pane = self.GetContentsPane() st = wx.StaticText(pane, label='Text') sp = SC.SizedScrolledPanel(pane) sp.SetSizerProps(expand=True, proportion=1) mc1 = MyCtrl(sp) mc2 = MyCtrl(sp) if __name__ == '__main__': import wx.lib.mixins.inspection as WIT app = WIT.InspectableApp() frame = MyFrame(None) frame.Show() app.MainLoop() </code></pre>
python|wxpython
1
292
27,988,429
Not able to add a column from a pandas data frame to mysql in python
<p>I have connected to mysql from python and I can add a whole data frame to sql by using the <code>df.to_sql</code> command. But when I try to add/update a single column from a pandas DataFrame, I am not able to update/add it.</p> <p>Here is the information about the dataset, <code>result</code>:</p> <pre><code>In [221]: result.shape
Out[221]: (226, 5)

In [223]: result.columns
Out[223]: Index([u'id', u'name', u'height', u'weight', u'categories'], dtype='object')
</code></pre> <p>I have the table already in the database with all the columns except categories, so I just need to add the column to the table. Based on these,</p> <p><a href="https://stackoverflow.com/questions/1307378/python-mysql-update-statement">Python MYSQL update statement</a></p> <p><a href="https://stackoverflow.com/questions/19288842/programmingerror-1064-you-have-an-error-in-your-sql-syntax-check-the-manual">ProgrammingError: (1064, &#39;You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax</a></p> <p>I tried</p> <pre><code>cursor.execute("ALTER TABLE content_detail ADD category VARCHAR(255)" % result["categories"])
</code></pre> <p>This successfully adds the column, but with all NULL values, and when I was trying this</p> <pre><code>cursor.execute("ALTER TABLE content_detail ADD category=%s VARCHAR(255)" % result["categories"])
</code></pre> <p>it ends with the following error</p> <pre><code>ProgrammingError                          Traceback (most recent call last)
&lt;ipython-input-227-ab21171eee50&gt; in &lt;module&gt;()
----&gt; 1 cur.execute("ALTER TABLE content_detail ADD category=%s VARCHAR(255)" % result["categories"])

/usr/lib/python2.7/dist-packages/mysql/connector/cursor.pyc in execute(self, operation, params, multi)
    505         self._executed = stmt
    506         try:
--&gt; 507             self._handle_result(self._connection.cmd_query(stmt))
    508         except errors.InterfaceError:
    509             if self._connection._have_next_result:  # pylint: disable=W0212

/usr/lib/python2.7/dist-packages/mysql/connector/connection.pyc in cmd_query(self, query)
    720         if not isinstance(query, bytes):
    721             query = query.encode('utf-8')
--&gt; 722         result = self._handle_result(self._send_cmd(ServerCmd.QUERY, query))
    723
    724         if self._have_next_result:

/usr/lib/python2.7/dist-packages/mysql/connector/connection.pyc in _handle_result(self, packet)
    638             return self._handle_eof(packet)
    639         elif packet[4] == 255:
--&gt; 640             raise errors.get_exception(packet)
    641
    642         # We have a text result set

ProgrammingError: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '=0        corporate
1         corporate
</code></pre> <p>I think there is something I am missing with the datatype; please help me to sort this out, thanks.</p>
<p>You cannot add a column to your table with data in it all in one step. You must use at least two separate statements to perform the DDL first (<code>ALTER TABLE</code>) and the DML second (<code>UPDATE</code> or <code>INSERT ... ON DUPLICATE KEY UPDATE</code>).</p> <p>This means that to add a column with a <code>NOT NULL</code> constraint requires three steps:</p> <ol> <li>Add nullable column</li> <li>Populate column with values in every row</li> <li>Add the <code>NOT NULL</code> constraint to the column</li> </ol> <p>Alternatively, by using a "dummy" default value, you can do it in two steps (just be careful not to leave any "dummy" values floating around, or use values that are meaningful/well-documented):</p> <ol> <li>Add column as <code>NOT NULL DEFAULT ''</code> (or use e.g. <code>0</code> for numeric types)</li> <li>Populate column with values in every row</li> </ol> <p>You can optionally alter the table again to remove the <code>DEFAULT</code> value. Personally, I prefer the first method because it doesn't introduce meaningless values into your table and it's more likely to throw an error if the second step has a problem. I <em>might</em> go with the second method when a column lends itself to a certain natural <code>DEFAULT</code> value and I plan to keep that in the final table definition.</p> <p>Additionally, you are not parameterizing your query correctly; you should <em>pass the parameter values to the method</em> rather than formatting the string argument inside the method call. In other words:</p> <pre><code>cursor.execute("Query with %s, %s, ...", iterable_with_values)  # Do this!
cursor.execute("Query with %s, %s, ..." % iterable_with_values) # NOT this!
</code></pre>
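<p>A sketch of the two-statement version, using the table and column names from the question (<code>cnx</code> stands for whatever your open connection object is called):</p> <pre><code># Step 1: DDL - add the (nullable) column; no data is involved yet
cursor.execute("ALTER TABLE content_detail ADD category VARCHAR(255)")

# Step 2: DML - fill it row by row with properly parameterized values
params = list(zip(result["categories"], result["id"]))
cursor.executemany("UPDATE content_detail SET category = %s WHERE id = %s", params)
cnx.commit()
</code></pre>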
mysql|python-2.7|pandas
2
293
32,845,601
count how often each field point is inside a contour
<p>I'm working with 2D geographical data. I have a long list of contour paths. Now I want to determine, for every point in my domain, inside how many contours it resides (i.e. I want to compute the spatial frequency distribution of the features represented by the contours).</p> <p>To illustrate what I want to do, here's a first very naive implementation:</p> <pre><code>import numpy as np
from shapely.geometry import Polygon, Point

def comp_frequency(paths, lonlat):
    """
    - paths: list of contour paths, made up of (lon,lat) tuples
    - lonlat: array containing the lon/lat coordinates; shape (nx,ny,2)
    """
    frequency = np.zeros(lonlat.shape[:2])
    contours = [Polygon(path) for path in paths]

    # Very naive and accordingly slow implementation
    for (i,j),v in np.ndenumerate(frequency):
        pt = Point(lonlat[i,j,:])
        for contour in contours:
            if contour.contains(pt):
                frequency[i,j] += 1

    return frequency

lon = np.array([
    [-1.10e+1,-7.82e+0,-4.52e+0,-1.18e+0, 2.19e+0, 5.59e+0, 9.01e+0, 1.24e+1, 1.58e+1, 1.92e+1, 2.26e+1],
    [-1.20e+1,-8.65e+0,-5.21e+0,-1.71e+0, 1.81e+0, 5.38e+0, 8.97e+0, 1.25e+1, 1.61e+1, 1.96e+1, 2.32e+1],
    [-1.30e+1,-9.53e+0,-5.94e+0,-2.29e+0, 1.41e+0, 5.15e+0, 8.91e+0, 1.26e+1, 1.64e+1, 2.01e+1, 2.38e+1],
    [-1.41e+1,-1.04e+1,-6.74e+0,-2.91e+0, 9.76e-1, 4.90e+0, 8.86e+0, 1.28e+1, 1.67e+1, 2.06e+1, 2.45e+1],
    [-1.53e+1,-1.15e+1,-7.60e+0,-3.58e+0, 4.98e-1, 4.63e+0, 8.80e+0, 1.29e+1, 1.71e+1, 2.12e+1, 2.53e+1],
    [-1.66e+1,-1.26e+1,-8.55e+0,-4.33e+0,-3.00e-2, 4.33e+0, 8.73e+0, 1.31e+1, 1.75e+1, 2.18e+1, 2.61e+1],
    [-1.81e+1,-1.39e+1,-9.60e+0,-5.16e+0,-6.20e-1, 3.99e+0, 8.66e+0, 1.33e+1, 1.79e+1, 2.25e+1, 2.70e+1],
    [-1.97e+1,-1.53e+1,-1.07e+1,-6.10e+0,-1.28e+0, 3.61e+0, 8.57e+0, 1.35e+1, 1.84e+1, 2.33e+1, 2.81e+1],
    [-2.14e+1,-1.69e+1,-1.21e+1,-7.16e+0,-2.05e+0, 3.17e+0, 8.47e+0, 1.37e+1, 1.90e+1, 2.42e+1, 2.93e+1],
    [-2.35e+1,-1.87e+1,-1.36e+1,-8.40e+0,-2.94e+0, 2.66e+0, 8.36e+0, 1.40e+1, 1.97e+1, 2.52e+1, 3.06e+1],
    [-2.58e+1,-2.08e+1,-1.54e+1,-9.86e+0,-3.99e+0, 2.05e+0, 8.22e+0, 1.44e+1, 2.05e+1, 2.65e+1, 3.22e+1]])

lat = np.array([
    [ 29.6, 30.3, 30.9, 31.4, 31.7, 32.0, 32.1, 32.1, 31.9, 31.6, 31.2],
    [ 32.4, 33.2, 33.8, 34.4, 34.7, 35.0, 35.1, 35.1, 34.9, 34.6, 34.2],
    [ 35.3, 36.1, 36.8, 37.3, 37.7, 38.0, 38.1, 38.1, 37.9, 37.6, 37.1],
    [ 38.2, 39.0, 39.7, 40.3, 40.7, 41.0, 41.1, 41.1, 40.9, 40.5, 40.1],
    [ 41.0, 41.9, 42.6, 43.2, 43.7, 44.0, 44.1, 44.0, 43.9, 43.5, 43.0],
    [ 43.9, 44.8, 45.6, 46.2, 46.7, 47.0, 47.1, 47.0, 46.8, 46.5, 45.9],
    [ 46.7, 47.7, 48.5, 49.1, 49.6, 49.9, 50.1, 50.0, 49.8, 49.4, 48.9],
    [ 49.5, 50.5, 51.4, 52.1, 52.6, 52.9, 53.1, 53.0, 52.8, 52.4, 51.8],
    [ 52.3, 53.4, 54.3, 55.0, 55.6, 55.9, 56.1, 56.0, 55.8, 55.3, 54.7],
    [ 55.0, 56.2, 57.1, 57.9, 58.5, 58.9, 59.1, 59.0, 58.8, 58.3, 57.6],
    [ 57.7, 59.0, 60.0, 60.8, 61.5, 61.9, 62.1, 62.0, 61.7, 61.2, 60.5]])

lonlat = np.dstack((lon,lat))

paths = [
    [(-1.71,34.4),(1.81,34.7),(5.15,38.0),(4.9,41.0),(4.63,44.0),(-0.03,46.7),(-4.33,46.2),(-9.6,48.5),(-8.55,45.6),(-3.58,43.2),(-2.91,40.3),(-2.29,37.3),(-1.71,34.4)],
    [(0.976,40.7),(-4.33,46.2),(-0.62,49.6),(3.99,49.9),(4.33,47.0),(4.63,44.0),(0.976,40.7)],
    [(2.9,55.8),(2.37,56.0),(8.47,56.1),(3.17,55.9),(-2.05,55.6),(-1.28,52.6),(-0.62,49.6),(4.33,47.0),(8.8,44.1),(2.29,44.0),(2.71,43.9),(3.18,46.5),(3.25,49.4),(3.33,52.4),(2.9,55.8)],
    [(2.25,35.1),(2.26,38.1),(8.86,41.1),(5.15,38.0),(5.38,35.0),(9.01,32.1),(2.25,35.1)]]

frequency = comp_frequency(paths, lonlat)
</code></pre> <p>Of course this is about as inefficiently written as possible, with all the explicit loops, and accordingly takes forever.</p> <p><strong>How can I do this efficiently?</strong></p> <p>Edit: Added some sample data on request. 
Note that my real domain is 150**2 larger (in terms of resolution), as I've created the sample coordinates by slicing the original arrays: <code>lon[::150]</code>.</p>
<p>If your input polygons are actually contours, then you're better off working directly with your input grids than calculating contours and testing if a point is inside them.</p> <p>Contours follow a constant value of gridded data. Each contour is a polygon enclosing areas of the input grid greater than that value.</p> <p>If you need to know how many contours a given point is inside, it's faster to sample the input grid at the point's location and operate on the returned "z" value. The number of contours that it's inside can be extracted directly from it if you know what values you created contours at.</p> <p>For example:</p> <pre><code>import numpy as np
from scipy.interpolate import RegularGridInterpolator
import matplotlib.pyplot as plt

# One of your input gridded datasets
y, x = np.mgrid[-5:5:100j, -5:5:100j]
z = np.sin(np.hypot(x, y)) + np.hypot(x, y) / 10

contour_values = [-1, -0.5, 0, 0.5, 1, 1.5, 2]

# A point location...
x0, y0 = np.random.normal(0, 2, 2)

# Visualize what's happening...
fig, ax = plt.subplots()
cont = ax.contourf(x, y, z, contour_values, cmap='gist_earth')
ax.plot([x0], [y0], marker='o', ls='none', color='salmon', ms=12)
fig.colorbar(cont)

# Instead of working with whether or not the point intersects the
# contour polygons we generated, we'll turn the problem on its head:

# Sample the grid at the point location
interp = RegularGridInterpolator((x[0,:], y[:,0]), z)
z0 = interp([x0, y0])

# How many contours would the point be inside?
num_inside = sum(z0 &gt; c for c in contour_values)[0]

ax.set(title='Point is inside {} contours'.format(num_inside))
plt.show()
</code></pre> <p><a href="https://i.stack.imgur.com/bNa0E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bNa0E.png" alt="enter image description here"></a></p>
python|numpy|scipy|shapely
4
294
12,622,038
Sending raw bytes over ZeroMQ in Python
<p>I'm porting some Python code that uses raw TCP sockets to ZeroMQ for better stability and a cleaner interface.</p> <p>Right off the bat I can see that a single packet of raw bytes is not sent as I'm expecting.</p> <p>In raw sockets:</p> <pre><code>import socket sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.connect((HOST, PORT)) sock.send('\x00\x01\x02 and some more raw bytes') </code></pre> <p>Which is the current working code. This is the same code using ZeroMQ:</p> <pre><code>import zmq context = zmq.Context() sock = context.socket(zmq.REQ) # this connection utilizes REQ/REP sock.connect('tcp://{0}:{1}'.format(HOST, PORT)) sock.send('\x00\x01\x02 and some more raw bytes') </code></pre> <p>But when I inspect the packets going over the net, they're definitely not what I'm expecting. What am I missing here?</p> <p>Also, when testing this code on the loopback interface (<code>127.0.0.1</code>) with a dummy server it seems to work just fine.</p> <p>Using Python 2.7 if it matters (unicode or whatnot).</p>
<p>Oh. Wow. I overlooked a major flaw in my test, the remote server I was testing on was expecting a raw TCP connection, not a ZMQ connection.</p> <p>Of course ZMQ wasn't able to transfer the message, it didn't even negotiate the connection successfully. When I tested locally I was testing with a dummy ZMQ server, so it worked fine.</p> <p>If I'd have posted the server code it would have immediately made sense that that was the problem.</p> <p>In any case, sorry for the false alarm.</p>
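<p>For reference, the matching server side has to speak ZeroMQ too; a minimal echo-server sketch for the REQ/REP pair used above (<code>PORT</code> is a placeholder, as in the question):</p> <pre><code>import zmq

context = zmq.Context()
sock = context.socket(zmq.REP)
sock.bind('tcp://*:{0}'.format(PORT))

while True:
    msg = sock.recv()  # the raw bytes sent by the REQ client
    sock.send(msg)     # a REP socket must reply before the next recv()
</code></pre>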
python|sockets|tcp|zeromq
4
295
23,248,996
How to filter for specific objects in a HDF5 file
<p>Learning the <a href="http://ilnumerics.net/hdf5-interface.html" rel="nofollow">ILNumerics HDF5 API</a>. I really like the option to set up a complex HDF5 file in one expression using C# object initializers. I created the following file:</p> <pre><code>using (var f = new H5File("myFile.h5")) {
    f.Add(new H5Group("myTopNode") {
        new H5Dataset("dsNo1", ILMath.vec&lt;float&gt;(1,200)),  // no attributes
        new H5Group("myGroup") {
            new H5Dataset("dsYes", ILMath.rand(100,200)) {  // matching dataset
                Attributes = {
                    { "att1", 1 },
                    { "att2", 2 }
                }
            },
            new H5Dataset("dsNo2") {  // attributes but wrong name
                Attributes = {
                    { "wrong1", -100 },
                    { "wrong2", -200 }
                }
            }
        }
    });
}
</code></pre> <p>Now I am searching for a clever way to iterate over the file and filter for datasets with specific properties. <strong>I want to find all datasets having at least one attribute with "att" in its name, collect and return their content.</strong> This is what I have made so far:</p> <pre><code>IList&lt;ILArray&lt;double&gt;&gt; list = new List&lt;ILArray&lt;double&gt;&gt;();
using (var f = new H5File("myFile.h5")) {
    var groups = f.Groups;
    foreach (var g in groups) {
        foreach (var obj in g) {
            if (obj.H5Type == H5ObjectTypes.Dataset &amp;&amp; obj.Name.Contains("ds")) {
                var ds = obj as H5Dataset;
                // look for attributes
                foreach (var att in ds.Attributes) {
                    if (att.Name.Contains("att")) {
                        list.Add(ds.Get&lt;double&gt;());
                    }
                }
            }
        }
    }
}
return list;
</code></pre> <p>But it does not work recursively. I could adapt it, but ILNumerics claims to be convenient, so there must be some better way? Something similar to h5py in Python?</p>
<p><code>H5Group</code> provides the <code>Find&lt;T&gt;</code> method which does just what you are looking for. It iterates over the whole subtree, taking arbitrary predicates into account: </p> <pre><code>var matches = f.Find&lt;H5Dataset&gt;( predicate: ds =&gt; ds.Attributes.Any(a =&gt; a.Name.Contains("att"))); </code></pre> <p>Why not make your function return 'ILCell' instead of a 'List'? This more nicely integrates into the ILNumerics memory management (there will be no storage laying around and waiting for the garbage collector to come by): </p> <pre><code>using (var f = new H5File("myFile.h5")) { // create container for the dataset contents ILCell c = cell(size(1, 1)); // one element init // retrieve datasets filtered var matches = f.Find&lt;H5Dataset&gt;(predicate: ds =&gt; { if (ds.Attributes.Any(a =&gt; a.Name.Contains("att"))) { c[end + 1] = ds.Get&lt;double&gt;(); return true; } return false; }); return c; } </code></pre> <p>Some links: </p> <p><a href="http://ilnumerics.net/hdf5-interface.html" rel="nofollow">http://ilnumerics.net/hdf5-interface.html</a></p> <p><a href="http://ilnumerics.net/Cells.html" rel="nofollow">http://ilnumerics.net/Cells.html</a> </p> <p><a href="http://ilnumerics.net/GeneralRules.html" rel="nofollow">http://ilnumerics.net/GeneralRules.html</a></p>
c#|python|hdf5|ilnumerics|hdf
1
296
23,245,915
Total/Average/Changing Salary 1,2,3,4 Menu
<p>Change your program so there is a main menu for the manager to select from with four options: </p> <ol> <li>Print the total weekly salaries bill.</li> <li>Print the average salary.</li> <li>Change a player’s salary.</li> <li>Quit</li> </ol> <p>When I run the program, I enter the number 1 and the program stops. How do I link it to the 4 below programs?</p> <p><strong>Program:</strong></p> <pre><code>Chelsea_Salaries_2014 = {'Jose Mourinho':[53, 163500, 'Unknown']} Chelsea_Salaries_2014['Eden Hazard']=[22, 185000, 'June 2017'] Chelsea_Salaries_2014['Fernando Torres']=[29, 175000, 'June 2016'] Chelsea_Salaries_2014['John Terry']=[32, 175000, 'June 2015'] Chelsea_Salaries_2014['Frank Lampard']=[35, 125000, 'June 2014'] Chelsea_Salaries_2014['Ashley Cole']=[32, 120000, 'June 2014'] Chelsea_Salaries_2014['Petr Cech']=[31, 100000, 'June 2016'] Chelsea_Salaries_2014['Gary Cahill']=[27, 80000, 'June 2017'] Chelsea_Salaries_2014['David Luiz']=[26, 75000, 'June 2017'] Chelsea_Salaries_2014['John Obi Mikel']=[26, 75000, 'June 2017'] Chelsea_Salaries_2014['Nemanja Matic']=[25, 75000, 'June 2019'] Chelsea_Salaries_2014['Marco Van Ginkel']=[20, 30000, 'June 2018'] Chelsea_Salaries_2014['Ramires']=[26, 60000, 'June 2017'] Chelsea_Salaries_2014['Oscar']=[21, 67500, 'June 2017'] Chelsea_Salaries_2014['Lucas Piazon']=[19, 15000, 'June 2017'] Chelsea_Salaries_2014['Ryan Bertrand']=[23, 35000, 'June 2017'] Chelsea_Salaries_2014['Marko Marin']=[27, 35000, 'June 2017'] Chelsea_Salaries_2014['Cesar Azpilicueta']=[23, 55000, 'June 2017'] Chelsea_Salaries_2014['Branislav Ivanovic']=[29, 67500, 'June 2016'] Chelsea_Salaries_2014['Ross Turnbull']=[22, 17000, 'June 2017'] Chelsea_Salaries_2014['Demba Ba']=[28, 65000, 'June 2016'] Chelsea_Salaries_2014['Oriol Romeu']=[22, 15000, 'June 2015'] user_input = (int('Welcome! What would you like to do? 1: Print the total salaries bill. 2: Print the average salary. 3: Change a players salary. 4: Quit. ')) if user_input == 1: print(sum(i[1] for i in Chelsea_Salaries_2014.values())) else: if user_input == 2: print(sum(i[1] for i in Chelsea_Salaries_2014.values()))/len(Chelsea_Salaries_2014) else: if user_input == 3: def change_salary(Chelsea_Salaries_2014): search_input = input('What player would you like to search for? ') print('His Current Salary is £{0:,}'.format(Chelsea_Salaries_2014[search_input][1])) new_salary = int(input('What would you like to change his salary to? ')) if new_salary &lt;= 200000: Chelsea_Salaries_2014[search_input][1] = new_salary print('Salary has been changed to £{0:,}'.format(new_salary)) else: print('This salary is ridiculous!') while True: change_salary(Chelsea_Salaries_2014) choice = input("Go again? y/n ") if choice.lower() in ('n', 'no'): break else: if user_input == 4: print('Goodbye!') </code></pre>
<p>Put the <code>input</code> call in a while loop. Note that on Python 3 the function is <code>input</code>, not <code>raw_input</code>, and it returns a string, so compare against <code>'1'</code>, <code>'2'</code>, ... (or convert with <code>int</code> first).</p>

<pre><code>while True:
    user_input = input("Welcome!...")
    if user_input == '1':
        ...
    elif user_input == '2':
        ...
    else:
        print("Please pick 1, 2, 3 or 4")
</code></pre>

<p>After completing option 1, 2 or 3, ask the user if they would like to do something else (y/n). If n: <code>break</code>, which ends the loop. If y, the loop begins again and asks for another choice.</p>
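<p>For reference, here is a minimal runnable sketch of the whole menu. It assumes the <code>Chelsea_Salaries_2014</code> dict from the question is already defined; the prompts and messages are illustrative only:</p>

<pre><code>def run_menu(salaries):
    while True:
        choice = input('1: total bill  2: average salary  3: change a salary  4: quit ')
        if choice == '1':
            # each value is [age, salary, contract end], so index 1 is the salary
            print(sum(v[1] for v in salaries.values()))
        elif choice == '2':
            print(sum(v[1] for v in salaries.values()) / len(salaries))
        elif choice == '3':
            name = input('Which player? ')
            if name in salaries:
                salaries[name][1] = int(input('New salary? '))
            else:
                print('Unknown player')
        elif choice == '4':
            print('Goodbye!')
            break
        else:
            print('Please enter 1, 2, 3 or 4')

run_menu(Chelsea_Salaries_2014)
</code></pre>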
python|python-3.3
0
297
57,579,911
How to get python's json module to cope with right quotation marks?
<p>I am trying to load a UTF-8 encoded JSON file using Python's json module. The file contains several <a href="https://www.utf8-chartable.de/unicode-utf8-table.pl?start=8192&amp;number=128" rel="nofollow noreferrer">right quotation marks, encoded as <code>E2 80 9D</code></a>. When I call</p> <pre><code>json.load(f, encoding='utf-8') </code></pre> <p>I receive the message: UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 212068: character maps to &lt;undefined&gt;</p> <p>How can I convince the json module to decode this properly?</p> <p>EDIT: Here's a minimal example:</p> <pre><code>[ { "aQuote": "“A quote”" } ] </code></pre>
<p>There is no <code>encoding</code> parameter in the signature of <code>json.load</code>. The <code>'charmap'</code> codec in the traceback shows the failure happens while reading the file, because it was opened with the platform default encoding (cp1252 on Windows), not inside the JSON parser. Pass the encoding to <code>open()</code> instead:</p>

<pre><code>with open(filename, encoding='utf-8') as f:
    x = json.load(f)
</code></pre>
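<p>A quick round-trip demo of the fix, using a hypothetical file name (<code>quotes.json</code>) and first writing out the minimal example from the question:</p>

<pre><code>import json

# write the sample as real UTF-8 bytes (the curly quotes are U+201C/U+201D)
with open('quotes.json', 'w', encoding='utf-8') as f:
    f.write('[{"aQuote": "\u201cA quote\u201d"}]')

# reading with an explicit encoding avoids the 'charmap' codec entirely
with open('quotes.json', encoding='utf-8') as f:
    data = json.load(f)

print(data[0]['aQuote'])  # “A quote”
</code></pre>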
python|json
1
298
70,792,656
How do I get pending windows updates in python?
<p>I am trying to get pending Windows updates in Python, but every module I have tried returns only the Windows update history, not the pending updates. I don't need details about each update; I just need to know whether there are pending updates or not. I'm trying to use this code:</p>

<pre><code>from windows_tools.updates import get_windows_updates
import os

for update in get_windows_updates(filter_duplicates=True, include_all_states=False):
    print(update)
</code></pre>

<p>It returns:</p>

<pre><code> {'kb': None, 'date': '2022-01-14 20:18:21', 'title': '9PLFNLNT3G5G-AppUp.IntelGraphicsExperience', 'description': '9PLFNLNT3G5G-1152921505694231446', 'supporturl': '', 'operation': 'installation', 'result': 'succeeded'}
{'kb': None, 'date': '2022-01-14 20:18:21', 'title': '9NBLGGH3FRZM-Microsoft.VCLibs.140.00', 'description': '9NBLGGH3FRZM-1152921505694106457', 'supporturl': '', 'operation': 'installation', 'result': 'succeeded'}
{'kb': None, 'date': '2022-01-14 20:18:21', 'title': '9MW2LKJ0TPJF-Microsoft.NET.Native.Framework.2.2', 'description': '9MW2LKJ0TPJF-1152921505692414645', 'supporturl': '', 'operation': 'installation', 'result': 'succeeded'}
{'kb': None, 'date': '2022-01-14 20:18:21', 'title': '9PLL735RFDSM-Microsoft.NET.Native.Runtime.2.2', 'description': '9PLL735RFDSM-1152921505689378154', 'supporturl': '', 'operation': 'installation', 'result': 'succeeded'}
{'kb': None, 'date': '2022-01-14 20:18:15', 'title': 'HP Inc. - HIDClass - 2.1.16.30156', 'description': 'HP Inc. HIDClass driver update released in November 2021', 'supporturl': 'http://support.microsoft.com/select/?target=hub', 'operation': 'installation', 'result': 'succeeded'}
{'kb': None, 'date': '2022-01-14 20:18:03', 'title': 'Intel Corporation - Bluetooth - 20.100.7.1', 'description': 'Intel Corporation Bluetooth driver update released in July 2020', 'supporturl': 'http://support.microsoft.com/select/?target=hub', 'operation': 'installation', 'result': 'succeeded'}
{'kb': None, 'date': '2022-01-14 20:18:01', 'title': 'Intel Corporation - Extension - 12/16/2018 12:00:00 AM - 20.110.1.1', 'description': 'Intel Corporation Extension driver update released in December 2018', 'supporturl': 'http://support.microsoft.com/select/?target=hub', 'operation': 'installation', 'result': 'succeeded'}
{'kb': None, 'date': '2022-01-14 20:17:50', 'title': 'Intel Corporation - Display - 27.20.100.8681', 'description': 'Intel Corporation Display driver update released in September 2020', 'supporturl': 'http://support.microsoft.com/select/?target=hub', 'operation': 'installation', 'result': 'succeeded'}
{'kb': None, 'date': '2022-01-14 20:15:12', 'title': 'Realtek Semiconductor Corp. - MEDIA - 6.0.8940.1', 'description': 'Realtek Semiconductor Corp. MEDIA driver update released in April 2020', 'supporturl': 'http://support.microsoft.com/select/?target=hub', 'operation': 'installation', 'result': 'succeeded'}
{'kb': 'KB4591272', 'date': '2022-01-14 20:13:19', 'title': '2021-11 Atualização do Windows 10 Version 21H2 para sistemas baseados em x64 (KB4591272)', 'description': 'Instale esta atualização para resolver problemas no Windows. Para obter a lista completa dos problemas incluídos nesta atualização, consulte o artigo da Base de Dados de Conhecimento Microsoft associado.
Talvez seja necessário reiniciar o computador após instalar este item.', 'supporturl': 'http://support.microsoft.com', 'operation': 'installation', 'result': 'succeeded'} {'kb': 'KB5003791', 'date': '2021-10-06 00:00:00', 'title': None, 'description': 'Update', 'supporturl': 'https://support.microsoft.com/help/5003791', 'operation': None, 'result': None} {'kb': 'KB5009636', 'date': '2022-01-20 00:00:00', 'title': None, 'description': 'Update', 'supporturl': None, 'operation': None, 'result': None} {'kb': 'KB5005699', 'date': '2021-10-06 00:00:00', 'title': None, 'description': 'Security Update', 'supporturl': None, 'operation': None, 'result': None} </code></pre> <p>This lists all my installed updates, not the pending ones. How can I find the pending ones programmatically?</p> <p>I'm using Python 3.10.</p>
<p>I found no ready-made solution in Python at the time, so I wrote a VBScript and called it from inside my function. The script asks the Windows Update Agent for updates that are not yet installed and writes the count to a file:</p>

<pre><code>Set updateSession = CreateObject(&quot;Microsoft.Update.Session&quot;)
Set updateSearcher = updateSession.CreateUpdateSearcher()
Set searchResult = updateSearcher.Search(&quot;IsInstalled=0 and Type='Software'&quot;)

Set fso = CreateObject(&quot;Scripting.FileSystemObject&quot;)
Set fs = fso.CreateTextFile(&quot;output.txt&quot;, True)
fs.Write searchResult.Updates.Count
fs.Close
</code></pre>

<p>It writes the number of pending updates to <code>output.txt</code>, which I then read from inside my function like this:</p>

<pre><code>import os
import time

class update_monitor():
    def __init__(self):
        self.output = 'output.txt'

    def updates_restando(self):
        # run the VBScript; it writes the pending-update count to output.txt
        os.system('cscript //nologo script.vbs')
        time.sleep(10)  # give the search time to finish
        with open(self.output) as file:
            count = file.read().strip()
        if count == '0':
            print('No updates available')
            return 'No updates available'
        else:
            print('There are pending updates')
            return 'There are pending updates'

a = update_monitor()
a.updates_restando()
</code></pre>

<p>This solution worked perfectly fine.</p>
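<p>For completeness: the same Windows Update Agent COM interface the VBScript uses can also be driven directly from Python, which avoids the external script and the temp file. This is a minimal sketch, assuming the pywin32 package (<code>pip install pywin32</code>) is installed:</p>

<pre><code>import win32com.client

# same COM objects as in the VBScript above
session = win32com.client.Dispatch('Microsoft.Update.Session')
searcher = session.CreateUpdateSearcher()

# IsInstalled=0 selects updates that are not installed yet, i.e. pending ones
result = searcher.Search(&quot;IsInstalled=0 and Type='Software'&quot;)

if result.Updates.Count == 0:
    print('No updates available')
else:
    print(f'{result.Updates.Count} pending update(s)')
</code></pre>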
python|python-3.x|windows
0
299
70,928,435
Python: get values from list of dictionaries
<p>I am using <a href="https://github.com/broadinstitute/python-sudoers" rel="nofollow noreferrer">python-sudoers</a> to parse a massive load of sudoers files; alas, this library returns some weird data.</p> <p>It looks like a list of dictionaries, but I don't really know.</p> <pre><code>[{'run_as': ['ALL'], 'tags': ['NOPASSWD'], 'command': 'TSM_SSI'}, {'run_as': ['ALL'], 'tags': ['NOPASSWD'], 'command': 'SU_TSMWIN'}, {'run_as': ['ALL'], 'tags': ['NOPASSWD'], 'command': 'SU_TSMUNIX'}, {'run_as': ['ALL'], 'tags': ['NOPASSWD'], 'command': 'SU_TSMLIBMGR'}] </code></pre> <p>The following works, but I need the single values in variables, like extracted_runas = &quot;ALL&quot;, and so on...</p> <pre><code>&gt;&gt;&gt; lst = [{'run_as': ['ALL'], 'tags': ['NOPASSWD'], 'command': 'TSM_SSI'}, {'run_as': ['ALL'], 'tags': ['NOPASSWD'], 'command': 'SU_TSMWIN'}, {'run_as': ['ALL'], 'tags': ['NOPASSWD'], 'command': 'SU_TSMUNIX'}, {'run_as': ['ALL'], 'tags': ['NOPASSWD'], 'command': 'SU_TSMLIBMGR'}]
&gt;&gt;&gt; print(*[val for dic in lst for val in dic.values()], sep='\n')
['ALL']
['NOPASSWD']
TSM_SSI
['ALL']
['NOPASSWD']
SU_TSMWIN
['ALL']
['NOPASSWD']
SU_TSMUNIX
['ALL']
['NOPASSWD']
SU_TSMLIBMGR
</code></pre>
<p>Because the same keys repeat in every dict, the generated variable names need an index suffix: <code>extracted_run_as_0 = 'ALL'</code>, <code>extracted_run_as_1 = 'ALL'</code>, etc.</p>

<pre class="lang-py prettyprint-override"><code>for i, dictionary in enumerate(lst):
    for k, v in dictionary.items():
        # unwrap one-element lists such as ['ALL'] -&gt; 'ALL'
        v = v[0] if isinstance(v, list) else v
        exec(f&quot;extracted_{k}_{i} = {v!r}&quot;)

print(extracted_run_as_0, extracted_tags_0, extracted_run_as_1)  # etc..
</code></pre>
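<p>Note that <code>exec</code>-created variables are hard to work with (linters and IDEs cannot see them). A sketch of a less fragile alternative is to keep the extracted values in one dict, using the same <code>name_index</code> scheme as keys, assuming the <code>lst</code> from the question:</p>

<pre><code>extracted = {
    f&quot;{k}_{i}&quot;: (v[0] if isinstance(v, list) else v)
    for i, d in enumerate(lst)
    for k, v in d.items()
}

print(extracted[&quot;run_as_0&quot;], extracted[&quot;command_3&quot;])  # ALL SU_TSMLIBMGR
</code></pre>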
python-3.x
-1