index | id | title | question | answer | tags | score |
---|---|---|---|---|---|---|
800 | 71,988,512 |
Is there any way to bind a different click handler to QPushButton in PyQt5?
|
<p>I have a <code>QPushbutton</code>:</p>
<pre class="lang-py prettyprint-override"><code>btn = QPushButton("Click me")
btn.clicked.connect(lambda: print("one"))
</code></pre>
<p>Later in my program, I want to rebind its click handler, I tried to achieve this by calling <code>connect</code> again:</p>
<pre class="lang-py prettyprint-override"><code>btn.clicked.connect(lambda: print("two"))
</code></pre>
<p>I expected to see that the console only prints <code>two</code>, but actually it printed both <code>one</code> and <code>two</code>. In other words, I actually bound two click handlers to the button.</p>
<p>How can I rebind the click handler?</p>
|
<p>Signals and slots in Qt are an observer-pattern (pub-sub) implementation: many objects can subscribe to the same signal, and each can subscribe multiple times. They can unsubscribe with the <code>disconnect</code> function.</p>
<pre><code>from PyQt5 import QtWidgets, QtCore

if __name__ == "__main__":
    app = QtWidgets.QApplication([])

    def handler1():
        print("one")

    def handler2():
        print("two")

    button = QtWidgets.QPushButton("test")
    button.clicked.connect(handler1)
    button.show()

    def change_handler():
        print("change_handler")
        button.clicked.disconnect(handler1)
        button.clicked.connect(handler2)

    QtCore.QTimer.singleShot(2000, change_handler)
    app.exec()
</code></pre>
<p>In the case of a lambda you can only disconnect all subscribers at once with <code>disconnect()</code> (without arguments), which is fine for a button.</p>
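<p>For reference, a minimal sketch (assumed code, not part of the original answer) of rebinding when lambdas were used: since a lambda cannot be passed back to <code>disconnect</code>, drop all handlers first and reconnect.</p>
<pre><code>btn = QPushButton("Click me")
btn.clicked.connect(lambda: print("one"))

btn.clicked.disconnect()                    # removes every connected handler
btn.clicked.connect(lambda: print("two"))   # now only "two" is printed
</code></pre>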
|
python|qt|pyqt|pyqt5
| 1 |
801 | 61,973,633 |
Python class not recognizing list
|
<p>I am attempting to build one of my first classes ever and after checking some documentation and other StackOverflow questions I cannot figure out why I am getting <code>NameError: name 'executed_trades' is not defined</code> in the code listed below:</p>
<pre class="lang-py prettyprint-override"><code>class Position:
def __init__(self):
self.executed_trades = []
def add_position(self, execution):
if execution not in executed_trades:
executed_trades.append(execution)
</code></pre>
<p>Does it not belong under <code>__init__()</code>? Is there something different about declaration in classes I am missing? It feels like a relatively simple error but I cannot seem to figure it out.</p>
|
<p>You are missing <code>self</code> in the <code>add_position</code> method when you refer to <code>executed_trades</code>:</p>
<pre><code>class Position:
    def __init__(self):
        self.executed_trades = []

    def add_position(self, execution):
        if execution not in self.executed_trades:
            self.executed_trades.append(execution)
</code></pre>
|
python|list|class
| 2 |
802 | 67,283,961 |
Decrypting a message with cryptography.fernet does not work
|
<p>I just tried my hand at encrypting and decrypting data. I first generated a key, then encrypted data with it and saved it to an XML file. Now this data is read and should be decrypted again.</p>
<p>But now I get the error message "cryptography.fernet.InvalidToken".</p>
<pre><code>import xml.etree.cElementTree as ET
from cryptography.fernet import Fernet
from pathlib import Path


def load_key():
    """
    Load the previously generated key
    """
    return open("../login/secret.key", "rb").read()


def generate_key():
    """
    Generates a key and save it into a file
    """
    key = Fernet.generate_key()
    with open("../login/secret.key", "wb") as key_file:
        key_file.write(key)


def decrypt_message(encrypted_message):
    """
    Decrypts an encrypted message
    """
    key = load_key()
    f = Fernet(key)
    message = encrypted_message.encode('utf-8')
    decrypted_message = f.decrypt(message)
    return(decrypted_message)


def decryptMessage(StringToDecrypt):
    decryptedMessage = decrypt_message(StringToDecrypt)
    return decryptedMessage


def loginToRoster(chrome):
    credentials = readXML()
    user = decryptMessage(credentials[0])
    pw = decryptMessage(credentials[1])
    userName = chrome.find_element_by_id('UserName')
    userName.send_keys(user)
    password = chrome.find_element_by_id('Password')
    password.send_keys(pw)
</code></pre>
<p>In the tuple "credentials" there are 2 encrypted strings.</p>
<p>Please help - I have already tried changing the formats in every way I could think of, with no luck.</p>
<p>Edit:</p>
<p>Errormessage:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/r/Documents/GitHub/ServiceEvaluationRK/source/main.py", line 27, in <module>
login.loginToRoster(chrome)
File "C:\Users\r\Documents\GitHub\ServiceEvaluationRK\source\login.py", line 106, in loginToRoster
user = decryptMessage(credentials[0])
File "C:\Users\r\Documents\GitHub\ServiceEvaluationRK\source\login.py", line 49, in decryptMessage
decryptedMessage = decrypt_message(StringToDecrypt)
File "C:\Users\r\Documents\GitHub\ServiceEvaluationRK\source\login.py", line 43, in decrypt_message
decrypted_message = f.decrypt(message)
File "C:\Users\r\Documents\GitHub\ServiceEvaluationRK\venv\lib\site-packages\cryptography\fernet.py", line 75, in decrypt
timestamp, data = Fernet._get_unverified_token_data(token)
File "C:\Users\r\Documents\GitHub\ServiceEvaluationRK\venv\lib\site-packages\cryptography\fernet.py", line 107, in _get_unverified_token_data
raise InvalidToken
cryptography.fernet.InvalidToken
</code></pre>
|
<p>I found an answer to my problem:</p>
<p>I used <code>ASCII</code> instead of <code>utf-8</code>, and I added a <code>.decode('ASCII')</code> in the "loginToRoster" function to both variables 'user' and 'pw'.</p>
<p>Now the encryption and decryption works fine.</p>
<p>So the 'loginToRoster' function looks like:</p>
<pre><code>def loginToRoster(chrome):
    credentials = readXML()
    user = decryptMessage(credentials[0]).decode('ASCII')
    pw = decryptMessage(credentials[1]).decode('ASCII')
    userName = chrome.find_element_by_id('UserName')
    userName.send_keys(user)
    password = chrome.find_element_by_id('Password')
    password.send_keys(pw)
</code></pre>
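<p>For context, a minimal sketch (my assumption, not shown in the original answer) of how the matching <code>decrypt_message</code> looks with the <code>ASCII</code> change applied:</p>
<pre><code>def decrypt_message(encrypted_message):
    """
    Decrypts an encrypted message (sketch: encoding switched to ASCII)
    """
    key = load_key()
    f = Fernet(key)
    message = encrypted_message.encode('ASCII')  # Fernet tokens are ASCII-safe
    return f.decrypt(message)  # returns bytes; the caller decodes with .decode('ASCII')
</code></pre>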
|
python|python-3.x|encryption|fernet
| 1 |
803 | 70,225,084 |
I tried to create a program to handle an error in entering the input, but after that it does not accept new input and continues in a loop
|
<p>After I type 5 it continues to loop and doesn't get to the if statement.</p>
<pre><code>def facility():
    global user
    while user != 1 and user != 2 and user != 3 and user != 4:
        user = input("please choose between this four number. \n[1/2/3/4]\n")
    if user == 1:
        y = ("PBl Classroom")
    elif user == 2:
        y = ("meeting room")
    elif user == 3:
        y = ("Workstation Computer Lab,ITMS")
    elif user == 4:
        y = ("swimming pool")
    print("you have choose", y)


user = int(input("please choose your facility..\n "))
</code></pre>
|
<p>You use <code>int(input(...))</code> on your first call, but <code>input(...)</code> in the function. Thus the values are strings, not integers, and your comparisons will fail.</p>
<p>Here is a fix with minor improvements:</p>
<pre><code>def facility():
    user = int(input("please choose your facility..\n "))
    while user not in (1, 2, 3, 4):
        user = int(input("please choose between this four number. \n[1/2/3/4]\n"))
    if user == 1:
        y = ("PBl Classroom")
    elif user == 2:
        y = ("meeting room")
    elif user == 3:
        y = ("Workstation Computer Lab,ITMS")
    elif user == 4:
        y = ("swimming pool")
    print("you have chosen", y)


facility()
</code></pre>
|
python|loops
| 1 |
804 | 11,245,031 |
Importing CSV into MySQL Database (Django Webapp)
|
<p>I'm developing a webapp in Django, and for it's database I need to import a CSV file into a particular MySQL database.</p>
<p>I searched around a bit, and found many pages which listed how to do this, but I'm a bit confused.</p>
<p>Most pages say to do this:</p>
<pre><code>LOAD DATA INFILE '<file>' INTO TABLE <tablenname>
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
</code></pre>
<p>But I'm confused about how Django would interpret this, since we haven't mentioned any <strong>column names</strong> here.
I'm new to Django and even newer to databases, so I don't really know how this would work out.</p>
|
<p>It looks like you are in the database admin (i.e. PostgreSQL/MySQL). Others above have given a good explanation for that.</p>
<p>But if you want to import data into Django itself -- Python has its own csv implementation, like so: <code>import csv</code>.</p>
<p>But if you're new to Django, then I recommend installing something like the Django CSV Importer: <a href="http://django-csv-importer.readthedocs.org/en/latest/index.html" rel="nofollow">http://django-csv-importer.readthedocs.org/en/latest/index.html</a>. (You install the add-ons into your Python library.)</p>
<p>The author, unfortunately, has a typo in the docs, though. You have to do <code>from csvImporter.model import CsvDbModel</code>, not <code>from csv_importer.model import CsvDbModel</code>.</p>
<p>In your models.py file, create something like:</p>
<pre><code>class MyCSVModel(CsvDbModel):
    pass

    class Meta:
        dbModel = Model_You_Want_To_Reference
        delimiter = ","
        has_header = True
</code></pre>
<p>Then, go into your Python shell and do the following command:
<code>my_csv = MyCsvModel.import_data(data = open("my_csv_file_name.csv"))</code></p>
|
python|mysql|django
| 2 |
805 | 63,702,650 |
Is there a way in Robot Framework to log a keyword only if its condition is True?
|
<p>I know that there are no switch statements in RF, so I have 50 if-keywords instead.
My log file is very long because all 50 if statements are logged (even those that are not true).
Is there a way to log only the statements that are true?</p>
<p>here is how my code is written (there are 50 keywords like these) :</p>
<pre><code># Access Apply ImportExportParams
\ Run Keyword If '${Type}' == 'ImportExportParams' and '${DealId}' != 'None' and '${ScenarioId}' != 'None' Call_API_ImportExportParams ${DealId} ${ScenarioId} ${ProductId}
# Access bulk apply cheapest quote
\ Run Keyword If '${Type}' == 'BulkApplyCheapest' and '${DealId}' != 'None' and '${ScenarioId}' != 'None' Call_API_BulkApplyCheapest ${DealId} ${ScenarioId} ${ProductId}
# SiteSelection
\ Run Keyword If '${Type}' == 'SiteSelection' and '${DealId}' != 'None' and '${ScenarioId}' != 'None' Call_API_SiteSelection ${ProductId} ${DealId} ${ScenarioId} ${Name}
# SiteSelectionFile
\ Run Keyword If '${Type}' == 'SiteSelectionFile' and '${DealId}' != 'None' and '${ScenarioId}' != 'None' Call_API_SiteSelectionFile ${ProductId} ${DealId} ${ScenarioId}
\ Run Keyword If '${Type}' == 'SiteSelectionFile2' and '${DealId}' != 'None' and '${ScenarioId}' != 'None' Call_API_SiteSelectionFile2 ${ProductId} ${DealId} ${ScenarioId}
# SiteSelectionMultiple
\ Run Keyword If '${Type}' == 'SiteSelectionMultiple' and '${DealId}' != 'None' and '${ScenarioId}' != 'None' Call_API_SiteSelectionMultiple ${ProductId} ${DealId} ${ScenarioId}
</code></pre>
<p><a href="https://i.stack.imgur.com/CgifX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CgifX.png" alt="here is a screenshot of a part of my actual log" /></a></p>
<p>thanks for your help :)</p>
|
<p>Maybe you are looking for the <code>--removekeywords</code> and <code>--flattenkeywords</code> command line options.
For more details have a look at <a href="http://robotframework.org/robotframework/2.9.2/RobotFrameworkUserGuide.html#removing-and-flattening-keywords" rel="nofollow noreferrer">Removing and flattening keywords</a>.</p>
<p>Your code suggests that all these conditions run inside a for loop (the <code>\</code> prefix is the older <code>FOR</code> loop syntax). So running</p>
<p><code>robot --removekeywords FOR testsuitefilename.robot</code> will produce output like the screenshot below. In most cases the passed steps are not needed, and I think this suffices for your requirement.</p>
<p><code>FOR</code> - This mode removes all passed iterations from for loops except the last one.</p>
<p><a href="https://i.stack.imgur.com/kcpcd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kcpcd.png" alt="enter image description here" /></a></p>
<p>Another possibility is to approach the problem differently, instead of looking for an option that suppresses logging of the false-condition keywords. For example:</p>
<ul>
<li>From the code I can see that the condition <code>'${DealId}' != 'None' and '${ScenarioId}' != 'None'</code> is common, so instead of checking it everywhere, check it once.</li>
<li>Instead of looping over everything, use <code>IN</code> with a <code>List</code> or <code>Dictionary</code> to check whether the <code>${type}</code> variable value exists in the collection, and get that value using keywords.</li>
<li>Check the condition <code>'${DealId}' != 'None' and '${ScenarioId}' != 'None'</code> beforehand, and then use the variable as part of the keyword name to call the specific keyword.</li>
</ul>
<p>With this, the code could be reduced to:</p>
<pre><code>*Test cases
Run Keyword If '${DealId}' != 'None' and '${ScenarioId}' !='None' Execute the type of Keyword SiteSelection
*Keywords
Execute the type of Keyword
[Arguments] ${type}
${Type_list}= Create List ImportExportParams BulkApplyCheapest SiteSelection SiteSelectionFile
... SiteSelectionFile2 SiteSelectionMultiple
${Status} ${index} Run Keyword And Ignore Error Get Index From List ${Type_list} ${type}
${Keyword_type} Get From List ${Type_list} ${index}
log ${Status} ${index}
Run Keyword Call_API_${Keyword_type}
</code></pre>
<p><strong>Output</strong></p>
<p><a href="https://i.stack.imgur.com/3Qy47.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Qy47.png" alt="enter image description here" /></a></p>
|
python|logging|automated-tests|testng|robotframework
| 1 |
806 | 56,826,441 |
Tensorflow: customise LSTM cell with subtractive gating
|
<p>I want to use subtractive gating, which is explained in <a href="https://arxiv.org/pdf/1711.02448.pdf" rel="nofollow noreferrer">this paper</a>.
I'm using Tensorflow, and currently the code is (using CPU):</p>
<pre><code>import tensorflow.contrib.rnn as RNNCell
tgt_cell = RNNCell.LSTMCell(num_units=flags.hidden_size, state_is_tuple=True)
tgt_dropout_cell = RNNCell.DropoutWrapper(tgt_cell, output_keep_prob=self.keep_prob)
tgt_stacked_cell= RNNCell.MultiRNNCell([tgt_dropout_cell] * self.opt.num_layers, state_is_tuple=True)
</code></pre>
<p>According to the paper the changes are as follows, where the standard LSTM is:</p>
<p><a href="https://i.stack.imgur.com/MdhVA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MdhVA.png" alt="enter image description here"></a></p>
<p>The gating should be subtractive rather than multiplicative:</p>
<p><a href="https://i.stack.imgur.com/kfg9u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kfg9u.png" alt="enter image description here"></a></p>
<p>when I click on "LSTMCell" in my code, it opens rnn_cells.py and I'm not sure which part should be changed. May someone please help to make changes?</p>
|
<p>That's fairly advanced. Look at how <code>RNNCell.LSTMCell</code> is implemented and write your own cell with the changes you want. If you look here <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/rnn/python/ops/rnn_cell.py" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/rnn/python/ops/rnn_cell.py</a>, the operations for the cells are defined in <code>call</code>, starting around line 220; find the ops you need there and modify them.</p>
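<p>To make that pointer more concrete, below is a minimal, untested sketch of a standalone cell with subtractive gating written against the TF 1.x <code>RNNCell</code> interface (not a patch to <code>rnn_cell.py</code>). The update rule follows my reading of the paper's subLSTM equations (sigmoidal candidate and gates, with input and output gates subtracted instead of multiplied) -- verify it against the paper before relying on it:</p>
<pre><code>import tensorflow as tf


class SubtractiveLSTMCell(tf.contrib.rnn.RNNCell):
    """Sketch of an LSTM-like cell with subtractive gating (subLSTM-style)."""

    def __init__(self, num_units, **kwargs):
        super(SubtractiveLSTMCell, self).__init__(**kwargs)
        self._num_units = num_units

    @property
    def state_size(self):
        return tf.contrib.rnn.LSTMStateTuple(self._num_units, self._num_units)

    @property
    def output_size(self):
        return self._num_units

    def build(self, inputs_shape):
        input_depth = inputs_shape[1].value
        # one weight matrix for candidate z and gates i, f, o
        self._kernel = self.add_variable(
            "kernel", shape=[input_depth + self._num_units, 4 * self._num_units])
        self._bias = self.add_variable(
            "bias", shape=[4 * self._num_units],
            initializer=tf.zeros_initializer())
        self.built = True

    def call(self, inputs, state):
        c, h = state
        gates = tf.matmul(tf.concat([inputs, h], 1), self._kernel) + self._bias
        z, i, f, o = tf.split(tf.sigmoid(gates), 4, axis=1)
        new_c = c * f + z - i            # subtractive input gating
        new_h = tf.sigmoid(new_c) - o    # subtractive output gating
        return new_h, tf.contrib.rnn.LSTMStateTuple(new_c, new_h)
</code></pre>
<p>You could then drop this cell into the existing <code>DropoutWrapper</code>/<code>MultiRNNCell</code> stack in place of <code>RNNCell.LSTMCell</code>.</p>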
|
python|tensorflow|machine-learning|neural-network|lstm
| 0 |
807 | 60,850,639 |
'list' object cannot be interpreted as an integer in RandomForest code
|
<p>I have used the following code from a machine learning book:</p>
<pre><code>from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import mglearn

X, y = make_moons(n_samples=100, noise=0.25, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# sketch random forest
forest = RandomForestClassifier(n_estimators=5, random_state=2)
forest.fit(X_train, y_train)

# draw random forest
fix, axes = plt.subplots(2, 3, figsize=(20, 10))
for i, (ax, tree) in enumerate(list(zip(axes.ravel())), forest.estimators_):
    ax.set_title("tree{}".format(i))
    mglearn.plots.plot_tree_partition(X_train, y_train, tree, ax=ax)

mglearn.plots.plot_2d_separator(forest, X_train, fill=True, ax=axes[-1, -1], alpha=.4)
axes[-1, -1].set_title("Random Forest")
mglearn.discrete_scatter(X_train[:, 0], X_train[:, 1], y_train)
</code></pre>
<p>It gives the following error:</p>
<pre><code>TypeError: 'list' object cannot be interpreted as an integer
</code></pre>
<p>I know that in Python 3 zip needs to be wrapped in a list call, so in the book it was originally written as</p>
<pre><code>for i, (ax, tree) in enumerate(zip(axes.ravel(), forest.estimators_)):
</code></pre>
<p>and I have added the list call, but it still shows this error. Can you help me clarify what is wrong?</p>
|
<p>In</p>
<pre><code>enumerate(list(zip(axes.ravel())),forest.estimators_)
</code></pre>
<p><code>forest.estimators_</code> is outside your <code>list(zip())</code> call and is treated as the second argument for <code>enumerate</code>, which, <a href="https://docs.python.org/3/library/functions.html#enumerate" rel="nofollow noreferrer">from the docs</a>, represents the start index. Since <code>forest.estimators_</code> is a list, this will fail as an integer is required.</p>
<p>What you mean to write is:</p>
<pre><code>enumerate(list(zip(axes.ravel(), forest.estimators_)))
</code></pre>
|
python|random-forest
| 1 |
808 | 72,623,140 |
get name attr value for formset with appropriate prefix and formset form index
|
<p>I am manually displaying modelformset_factory values in a Django template using the snippet below. One of the inputs is using the select type and I'm populating the options using another context value passed from the view after making external API calls, so there is no model relationship between the data and the form I'm trying to display.</p>
<p>view</p>
<pre><code>my_form = modelformset_factory(MyModel, MyModelForm, fields=("col1", "col2"), extra=0, min_num=1, can_delete=True,)
</code></pre>
<p>template</p>
<pre><code>{{ my_form.management_form }}
{% for form in my_form %}
    <label for="{{ form.col1.id_for_label }}">{{ form.col1.label }}</label>
    {{ form.col1 }}
    <label for="{{ form.col2.id_for_label }}">{{ form.col2.label }}</label>
    <select id="{{ form.col2.id_for_label }}" name="{{ form.col2.name }}">
        <option disabled selected value> ---- </option>
        {% for ctx in other_ctx %}
            <option value="{{ ctx.1 }}">{{ ctx.2 }}</option>
        {% endfor %}
    </select>
{% endfor %}
</code></pre>
<p>The <code>other_ctx</code> populating the select option is a List[Tuple]</p>
<p>I am trying to get the <code>name</code> value for the <code>col2</code> input using <code>{{ form.col2.name }}</code> but only <code>col2</code> is getting returned instead of <code>form-0-col2</code>. I could prepend the <code>form-0-</code> value to the <code>{{ form.col2.name }}</code> but wondering if:</p>
<ol>
<li>I could do the above automatically? I'm assuming the formset should be aware of the initial formset names with appropriate formset index coming from the view.</li>
<li>Is there a way to include the select options in the initial formset that is sent to the template so that I can simply use <code>{{ form.col2 }}</code>?</li>
</ol>
<p>Just saw that I could use <code>form-{{ forloop.counter0 }}-{{ form.col2.name }}</code> as an alternative as well, if getting it automatically does not work.</p>
|
<p>I think what you are after is form.col2.html_name - <a href="https://docs.djangoproject.com/en/4.0/ref/forms/api/#django.forms.BoundField.html_name" rel="nofollow noreferrer">docs</a></p>
<p>This is the name that will be used in the widget’s HTML name attribute. It takes the form prefix into account.</p>
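<p>Applied to the template in the question, that would look something like this (a sketch based on the question's own markup):</p>
<pre><code><select id="{{ form.col2.id_for_label }}" name="{{ form.col2.html_name }}">
    <option disabled selected value> ---- </option>
    {% for ctx in other_ctx %}
        <option value="{{ ctx.1 }}">{{ ctx.2 }}</option>
    {% endfor %}
</select>
</code></pre>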
|
python|html|django|formset
| 1 |
809 | 68,416,126 |
Identify values in column A not in column B and column C using Python
|
<p>Python newbie looking for help. A dataset has 3 numerical columns: A, B, C. How do I find the values that exist only in A but not in B or C?</p>
|
<p>Your question needs more details, but you can adapt the code below:</p>
<pre><code>A = [1, 2, 3]
B = [1, 3, 4]
C = [1, 4, 5]
>>> set(A).difference(set(B).union(C))
{2}
</code></pre>
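<p>If the three columns live in a pandas DataFrame (an assumption, since the question mentions a dataset), the same idea can be written as:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [1, 3, 4], "C": [1, 4, 5]})

# values present in A but in neither B nor C
only_in_a = set(df["A"]).difference(set(df["B"]).union(df["C"]))
print(only_in_a)  # {2}
</code></pre>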
|
python|python-3.x
| 0 |
810 | 68,407,571 |
BranchPythonOperator not running with past skipped state task
|
<p>This is what my Airflow DAG looks like:</p>
<p><a href="https://i.stack.imgur.com/GVuWn.jpg" rel="nofollow noreferrer">1</a>:<a href="https://i.stack.imgur.com/GVuWn.jpg" rel="nofollow noreferrer">Airflow dag</a></p>
<p>There is a branch task which checks for a condition and then either:</p>
<p>runs task B directly, skipping task A, or</p>
<p>runs task A and then runs task B.</p>
<p>When task A is skipped, in the next (future) run of the DAG the branch task never runs (execution stops at the main task), although the default trigger rule is 'none_failed' and no task in the DAG has failed, only skipped.</p>
<pre><code>default_args = {
    'owner': 'airflow',
    'depends_on_past': True,
    'wait_for_downstream': True,
    'email_on_failure': True,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=2),
    'trigger_rule': 'none_failed'
}

dag = DAG(
    dag_id='main_task',
    default_args=default_args,
    schedule_interval='0 2 * * *',
    start_date=datetime(2021, 6, 2),
    max_active_runs=8,
)


def check_condition():
    if (conditionA == conditionB):
        return ['task_A', 'task_B']
    else:
        return 'task_B'


branch_task = BranchPythonOperator(
    task_id='branching',
    python_callable=check_condition,
    dag=dag,
    depends_on_past=False,
)
</code></pre>
<p>Using Airflow 1.10.12.
Could someone explain why the branch task never runs after task A was skipped in the previous run?</p>
|
<p>The reason this happens isn't related to trigger rules.
It happens because <code>default_args</code> in the DAG constructor contains <code>wait_for_downstream=True</code>,
so when you do:</p>
<pre><code>branch_task = BranchPythonOperator(
    task_id='branching',
    python_callable=check_condition,
    dag=dag,
    depends_on_past=False,
)
</code></pre>
<p>What actually happens is that <code>depends_on_past</code> is set to <code>True</code> by the <a href="https://github.com/apache/airflow/blob/7529546939250266ccf404c2eea98b298365ef46/airflow/models/baseoperator.py#L559-L560" rel="nofollow noreferrer">constructor</a> of <code>BaseOperator</code>.
Since <code>wait_for_downstream=True</code> causes a task instance to also wait for <strong>all</strong> task instances immediately downstream of the previous task instance to succeed, the <code>BranchPythonOperator</code> never starts running. This is a problem, as a branch operator usually has direct downstream tasks in Skipped status.</p>
<p>You can fix it by:</p>
<pre><code>branch_task = BranchPythonOperator(
    task_id='branching',
    python_callable=check_condition,
    dag=dag,
    depends_on_past=False,
    wait_for_downstream=False
)
</code></pre>
<p>I'd like to note that this is an issue only in Airflow<2.0.0, because <code>wait_for_downstream</code> considers only the Success status as accepted (<a href="https://github.com/apache/airflow/blob/d3b066931191b82880d216af103517ea941c74ba/airflow/models/baseoperator.py#L132" rel="nofollow noreferrer">1.10 operator description</a>).</p>
<p>For Airflow > 2.0.0 this issue won't happen as the behavior was changed in <a href="https://github.com/apache/airflow/pull/7735" rel="nofollow noreferrer">PR</a> by making <code>wait_for_downstream</code> consider both Successful and Skipped tasks as accepted statuses (<a href="https://github.com/apache/airflow/blob/e0a41971a1c57221a5e03c70fc670a4c09f19d8a/airflow/models/baseoperator.py#L271" rel="nofollow noreferrer">2.0 operator description</a>).</p>
|
python|airflow
| 0 |
811 | 62,400,540 |
User authentication for Spotify in Python using Spotipy on AWS
|
<p>I am currently building a web-app that requires a Spotify user to login using their credentials in order to access their playlists</p>
<p>I'm using the Spotipy python wrapper for Spotify's Web API and generating an access token using, </p>
<pre><code>token = util.prompt_for_user_token(username,scope,client_id,client_secret,redirect_uri)
</code></pre>
<p>The code runs without any issues on my local machine. But when I deploy the web-app on AWS, it does not proceed to the redirect URI and allow the user to log in.</p>
<p>I have tried transferring the ".cache-username" file via SCP to my AWS machine instance and gotten it to work in a limited fashion.</p>
<p>Is there a solution to this issue? I'm fairly new to AWS and hence don't have much to go on or any idea where to look. Any help would be greatly appreciated. Thanks in advance!!</p>
|
<h3>The quick way</h3>
<ol>
<li>Run the script locally so the user can sign in once</li>
<li>In the local project folder, you will find a file <code>.cache-{userid}</code></li>
<li>Copy this file to your project folder on AWS</li>
<li>It should work</li>
</ol>
<hr>
<h3>The database way</h3>
<p>There is currently an open feature request on Github that suggests storing tokens in a DB. Feel free to subscribe to the issue or to contribute: <a href="https://github.com/plamere/spotipy/issues/51" rel="nofollow noreferrer">https://github.com/plamere/spotipy/issues/51</a></p>
<p>It's also possible to write a bit of code to persist new tokens into a DB and then read from it. That's what I'm doing as part of an AWS Lambda using DynamoDB; it's not very nice but it works perfectly: <a href="https://github.com/resident-archive/resident-archive/blob/a869b73f1f64538343be1604d43693b6165cc58a/functions/to-spotify/main.py#L129..L157" rel="nofollow noreferrer">https://github.com/resident-archive/resident-archive/blob/a869b73f1f64538343be1604d43693b6165cc58a/functions/to-spotify/main.py#L129..L157</a></p>
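<p>As a rough illustration only (the table name, key schema, and token fields are assumptions, not taken from the linked code), persisting the Spotipy token info in DynamoDB can look like this:</p>
<pre><code>import boto3

table = boto3.resource('dynamodb').Table('spotify_tokens')  # hypothetical table


def save_token(user_id, token_info):
    # token_info is the dict spotipy returns (access_token, refresh_token, expires_at, ...)
    table.put_item(Item={'user_id': user_id, 'token_info': token_info})


def load_token(user_id):
    item = table.get_item(Key={'user_id': user_id}).get('Item')
    return item['token_info'] if item else None
</code></pre>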
<hr>
<h3>The API way</h3>
<p>This is probably the best way, as it allows multiple users to sign in simultaneously. However it is a bit more complex and requires you to host a server that's accessible by URL.</p>
<p>This example uses Flask but one could adapt it to Django for example <a href="https://github.com/plamere/spotipy/blob/master/examples/app.py" rel="nofollow noreferrer">https://github.com/plamere/spotipy/blob/master/examples/app.py</a></p>
|
python|amazon-web-services|authentication|spotify|spotipy
| 2 |
812 | 35,449,968 |
Python not in dict condition sentence performance
|
<p>Does anybody know which is better to use in terms of speed and resources? A link to some trusted sources would be much appreciated.</p>
<pre><code>if key not in dictionary.keys():
</code></pre>
<p>or</p>
<pre><code>if not dictionary.get(key):
</code></pre>
|
<p>Firstly, you'd do</p>
<pre><code>if key not in dictionary:
</code></pre>
<p>since the <code>in</code> operator on a dict checks the keys directly.</p>
<p>Secondly, the two statements are not equivalent - the second condition would be true if the corresponding value is falsy (<code>0</code>, <code>""</code>, <code>[]</code> etc.), not only if the key doesn't exist.</p>
<p>Lastly, the first method is definitely faster and more pythonic. Function/method calls are expensive. If you're unsure, <code>timeit</code>.</p>
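<p>For example, a quick (illustrative, numbers will vary by machine) comparison with <code>timeit</code>:</p>
<pre><code>import timeit

setup = "d = dict.fromkeys(range(1000))"

print(timeit.timeit("-1 not in d", setup=setup))          # plain membership test
print(timeit.timeit("not d.get(-1)", setup=setup))        # method call
print(timeit.timeit("-1 not in d.keys()", setup=setup))   # extra method call
</code></pre>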
|
python|performance|dictionary
| 6 |
813 | 31,293,854 |
Python apply a func to two lists of lists, store the result in a Dataframe
|
<p>To simplify my problem, say I have two lists of lists and a function shown below:</p>
<pre><code>OP = [[1,2,3],[6,2,7,4],[4,1],[8,2,6,3,1],[6,2,3,1,5], [3,1],[3,2,5,4]]
AP = [[2,4], [2,3,1]]

def f(listA, listB):
    return len(listA+listB) # my real f returns a number as well
</code></pre>
<p>I want to get the <code>f(OP[i],AP[j]) for each i, j</code>, so my idea is to create a <code>pandas.Dataframe</code> which looks like this:</p>
<pre><code> AP[0] AP[1]
OP[0] f(AP[0],OP[0]) f(AP[1],OP[0])
OP[1] f(AP[0],OP[1]) f(AP[1],OP[1])
OP[2] f(AP[0],OP[2]) f(AP[1],OP[2])
OP[3] f(AP[0],OP[3]) f(AP[1],OP[3])
OP[4] f(AP[0],OP[4]) f(AP[1],OP[4])
OP[5] f(AP[0],OP[5]) f(AP[1],OP[5])
OP[6] f(AP[0],OP[6]) f(AP[1],OP[6])
</code></pre>
<p>My real data actually has around 80,000 lists in OP and 20 lists in AP, and the function <code>f</code> is a little time consuming, so computational cost is a concern.</p>
<p>My idea to achieve the goal would be constructing a <code>pandas.Series</code> of length <code>len(AP)</code> for each <code>OP</code>, and then appending the <code>Series</code> to the final <code>Dataframe</code>.
For example, for <code>OP[0]</code>, first create a <code>Series</code> which has all the information for <code>f(OP[0],AP[i]) for each i</code>.</p>
<p>I am stuck on constructing the <code>Series</code>. I tried <code>pandas.Series.apply()</code> and <code>map()</code> but neither of them worked since my function <code>f</code> needs two parameters.</p>
<p>I'm also open to any other suggestions to get <code>f(OP[i],AP[j]) for each i, j</code>, thanks. </p>
|
<p>You could do so with some nested <a href="https://docs.python.org/2/tutorial/datastructures.html" rel="nofollow">list comprehension</a>, followed by an application of <a href="http://pandas-docs.github.io/pandas-docs-travis/dsintro.html?highlight=from_records" rel="nofollow"><code>pandas.DataFrame.from_records</code></a>:</p>
<pre><code>import pandas as pd
records = [tuple(f(A, O) for A in AP) for O in OP]
pd.DataFrame.from_records(records)
</code></pre>
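<p>If you also want the row and column labels from the table in the question (my reading of the desired layout), you can pass them explicitly:</p>
<pre><code>index = ['OP[%d]' % i for i in range(len(OP))]
columns = ['AP[%d]' % j for j in range(len(AP))]
pd.DataFrame.from_records(records, index=index, columns=columns)
</code></pre>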
|
python|pandas|dataframe|series
| 1 |
814 | 49,258,681 |
Share functions across colaboratory files
|
<p>I'm sharing a colaboratory file with my colleagues and we are having fun with it. But it's getting bigger and bigger, so we want to offload some of the functions to another colaboratory file. How can we load one colaboratory file into another? </p>
|
<p>There's no way to do this right now, unfortunately: you'll need to move the code into a .py file that you load (say by cloning from github).</p>
|
python-3.x|google-colaboratory
| 3 |
815 | 42,874,853 |
Cannot return a float value of -1.00
|
<p>I am currently doing an assignment for a computer science paper at university. I am in my first year.</p>
<p>In one of the questions, if the gender is incorrect the function is supposed to return a value of -1. But in the testing column it says the expected value is -1.00, and I cannot seem to return the value '-1.00'; it always returns -1.0 (with one zero). I used .format to give the value two decimal places (so it would appear with two zeros), but when converting it to a float the value always comes back as "-1.0".</p>
<pre><code>return float('{:.2f}'.format(-1))
</code></pre>
|
<p>This isn’t as clear as it could be. Does your instructor or testing
software expect a string <code>'-1.00'</code>? If so, just return that. Is a
<code>float</code> type expected? Then return <code>-1.0</code>; the number of digits shown does
not affect the value.</p>
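<p>A quick illustration of the difference:</p>
<pre><code>>>> float('{:.2f}'.format(-1))   # float value; its repr shows -1.0
-1.0
>>> '{:.2f}'.format(-1)          # string with two decimal places
'-1.00'
</code></pre>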
|
python-3.x
| 1 |
816 | 42,738,126 |
nosetests default encoding is ascii, main program is utf-8
|
<p>All my files start with <code>#-*- coding: utf-8 -*-</code></p>
<p>My virtualenv is set to python 3.5, <code>virtualenv -p python3 venv</code></p>
<p>My app hierarchy looks like this :</p>
<pre><code>app/app/[file].py
__init__.py
/tests/test_[file].py
/__init__.py
main.py
</code></pre>
<p><code>python --version</code> is 3.5 (venv is active)</p>
<p>If I run <code>python main.py</code> and use <code>sys.getdefaultencoding()</code> and <code>print("é")</code>, everything is fine. I get:</p>
<p><code>> utf-8</code> and </p>
<p><code>> é</code></p>
<p>Under /tests, if I run <code>nosetests</code> I get errors related to unicode, which makes sense since <code>sys.getdefaultencoding()</code> gives me:</p>
<p><code>ascii</code></p>
<p><code>which pip</code>, <code>which nosetests</code> and <code>which python</code> all points to my venv.</p>
<p>Why would nose default to ascii when everything else does not?</p>
<p><code>pip freeze</code> :</p>
<pre><code>appdirs==1.4.2
beautifulsoup4==4.5.3
nose==1.3.7
packaging==16.8
pkg-resources==0.0.0
pyparsing==2.1.10
requests==2.13.0
six==1.10.0
</code></pre>
<p>Edit:
An example of a nose error would be:
<code>TypeError: descriptor 'strip' requires a 'str' object but received a 'unicode'</code>. I understand why the error is happening; what I don't understand is why only nose is doing it. I'm on Ubuntu 16.04.</p>
|
<p>The default Python 2.7 nose installation on my system was at fault.</p>
<p>Outside the venv, I ran <code>pip uninstall nose</code>. Then I activated my virtualenv, which uses Python 3.5. Inside the venv, nose could then only pick up nosetests from it. It worked!</p>
<p>It seems nosetests was prioritizing the "global" nose over the venv-specific one. I still don't know why it behaved this way.</p>
|
python-3.x|unicode|nose
| 1 |
817 | 51,091,576 |
Undefined is not an object (tensorflow imagerecognition)
|
<p>When trying to integrate a pretrained tensorflow model with expo (react-native), the following error occurs within these lines:</p>
<pre><code>async classify(photo) {
    try {
        const tfImageRecognition = new TfImageRecognition({
            model: require('./assets/output_graph.pb'),
            labels: require('./assets/output_labels.txt')
        });

        const results = await tfImageRecognition.recognize({
            image: photo,
            inputName: "input", //Optional, defaults to "input"
            inputSize: 224, //Optional, defaults to 224
            outputName: "output", //Optional, defaults to "output"
            maxResults: 3, //Optional, defaults to 3
            threshold: 0.1, //Optional, defaults to 0.1
        });

        results.forEach(result =>
            console.log(
                result.id, // Id of the result
                result.name, // Name of the result
                result.confidence // Confidence value between 0 - 1
            )
        );

        await tfImageRecognition.close(); // Necessary in order to release objects on native side
    } catch (e) {
        console.log(e);
    }
}
</code></pre>
<p>Which generates the following error</p>
<pre><code>[23:30:09] undefined is not an object (evaluating 'RNImageRecognition.initImageRecognizer')
- node_modules\react-native-tensorflow\index.js:121:35 in TfImageRecognition
</code></pre>
<p>I have been trying to find the reason why this is not working but I cannot find a definite solution. The relative paths linking to the assets are correct and the extensions are present in the app.json. Furthermore the model is trained using the tensorflow api which should make it compatible with the react-native implementation.</p>
<p>I am using expo SDK version 28.0.0 and react-native-tensorflow version ^0.1.8</p>
|
<p>I had the same problem; in my case I forgot to link the library.</p>
<p><strong>Linking</strong></p>
<p><code>$ react-native link react-native-tensorflow</code></p>
|
react-native|tensorflow|object-detection|expo
| 0 |
818 | 50,603,815 |
Appending two texts keeping the line structure
|
<p>Sorry in advance if my question is not smart enough, but I am new to Python.
I have two text files: file A and file B. They are something like this.
File A:</p>
<pre><code>File A is the master file{
sdfsf
sdfsdf
sdfsd
sdfdf
}
</code></pre>
<p>File B is similar.
I want to append file A to file B (and to other files later), but when I try to append it with "with open" everything ends up on one line. I want to manipulate it line by line (to add or remove lines, so I need it as a list), so I split it into a list of lines, but later, when I try to append it to the other file, the line structure is not preserved or the text is on one line.
So I have tried this and again it doesn't work:</p>
<p>import os</p>
<pre><code>file_A=open('C:\\Users\\admin\\Desktop\\...\\Sofa.txt').readlines()
file_B = open('C:\\Users\\admin\\Desktop\\.... ....\\....\\...\\view_1.txt', 'a')
for line in File_A:
    write.line
file.close()
</code></pre>
|
<p>To append the contents of File_A to File_B, you can just treat it as a single string.</p>
<pre><code>with open('C:\\Users\\admin\\Desktop\\...\\Sofa.txt') as file_a:
    contents_a = file_a.read()

with open('C:\\Users\\admin\\Desktop\\.... ....\\....\\...\\view_1.txt', 'a') as file_b:
    file_b.write(contents_a)
</code></pre>
|
python
| 1 |
819 | 44,934,876 |
Making my cython code more efficient
|
<p>I've written a Python program which I am trying to cythonize.
Are there any suggestions for making the for-loop more efficient, as it is taking 99% of the time?</p>
<p>This is the for-loop:</p>
<pre><code>for i in range(l):
    b1[i] = np.nanargmin(locator[i,:]) # Closer point
    locator[i, b1[i]] = NAN # Do not consider Closer point
    b2[i] = np.nanargmin(locator[i,:]) # 2nd Closer point
    Adjacents[i,0] = np.array((Existed_Pips[b1[i]]), dtype=np.double)
    Adjacents[i,1] = np.array((Existed_Pips[b2[i]]), dtype=np.double)
</code></pre>
<p>This is the rest of the code:</p>
<pre><code>import numpy as np
cimport numpy as np
from libc.math cimport NAN #, isnan

def PIPs(np.ndarray[np.double_t, ndim=1, mode='c'] ys, unsigned int nofPIPs, unsigned int typeofdist):
    cdef:
        unsigned int currentstate, j, i
        np.ndarray[np.double_t, ndim=1, mode="c"] D
        np.ndarray[np.int64_t, ndim=1, mode="c"] Existed_Pips
        np.ndarray[np.int_t, ndim=1, mode="c"] xs
        np.ndarray[np.double_t, ndim=2] Adjacents, locator, Adjy, Adjx, Raw_Fire_PIPs, Raw_Fem_PIPs
        np.ndarray[np.int_t, ndim=2, mode="c"] PIP_points, b1, b2

    cdef unsigned int l = len(ys)
    xs = np.arange(0,l, dtype=np.int) # Column vector with xs
    PIP_points = np.zeros((l,1), dtype=np.int) # Binary indexation
    PIP_points[0] = 1 # One indicate the PIP points.The first two PIPs are the first and the last observation.
    PIP_points[-1] = 1
    Adjacents = np.zeros((l,2), dtype=np.double)
    currentstate = 2 # Initial PIPs

    while currentstate <= nofPIPs: # for eachPIPs in range(nofPIPs)
        Existed_Pips = np.flatnonzero(PIP_points)
        currentstate = len(Existed_Pips)
        locator = np.full((l,currentstate), NAN, dtype=np.double) #np.int*
        for j in range(currentstate):
            locator[:,j] = np.absolute(xs-Existed_Pips[j])

        b1 = np.zeros((l,1), dtype=np.int)
        b2 = np.zeros((l,1), dtype=np.int)
        for i in range(l):
            b1[i] = np.nanargmin(locator[i,:]) # Closer point
            locator[i, b1[i]] = NAN # Do not consider Closer point
            b2[i] = np.nanargmin(locator[i,:]) # 2nd Closer point
            Adjacents[i,0] = np.array((Existed_Pips[b1[i]]), dtype=np.double)
            Adjacents[i,1] = np.array((Existed_Pips[b2[i]]), dtype=np.double)

        ##Calculate Distance
        Adjx = Adjacents
        Adjy = np.array([ys[np.array(Adjacents[:,0], dtype=np.int)], ys[np.array(Adjacents[:,1], dtype=np.int)]]).transpose()
        Adjx[Existed_Pips,:] = NAN # Existed PIPs are not candidates for new PIP.
        Adjy[Existed_Pips,:] = NAN

        if typeofdist == 1: #Euclidean Distance
            ##[D] = EDist(ys,xs,Adjx,Adjy)
            ED = np.power(np.power((Adjx[:,1]-xs),2) + np.power((Adjy[:,1]-ys),2),(0.5)) + np.power(np.power((Adjx[:,0]-xs),2) + np.power((Adjy[:,0]-ys),2),(0.5))
            EDmax = np.nanargmax(ED)
            PIP_points[EDmax]=1

        currentstate=currentstate+1

    return np.array([Existed_Pips, ys[Existed_Pips]]).transpose()
</code></pre>
|
<p>A couple of suggestions:</p>
<ol>
<li><p>Take the calls to <code>np.nanargmin</code> out of the loop (use the <code>axis</code> parameter to let you operate on the whole array at once). This reduces the number of Python function calls you have to make:</p>
<pre><code>b1 = np.nanargmin(locator,axis=1)
locator[np.arange(locator.shape[0]),b1] = np.nan
b2 = np.nanargmin(locator,axis=1)
</code></pre></li>
<li><p>Your assignment to <code>Adjacents</code> is odd - you seem to be creating a length-1 array for the right-hand side first. Instead just do</p>
<pre><code>Adjacents[i,0] = Existed_Pips[b1[i]]
# ...
</code></pre>
<p>However, in this case, you can also take both lines outside the loop, eliminating the entire loop:</p>
<pre><code>Adjacents = np.vstack((Existed_Pips[b1], Existed_Pips[b2])).T
</code></pre></li>
</ol>
<p>All of this is relying on numpy, rather than Cython, for the speed-up, but it probably beats your version.</p>
|
numpy|cython|cythonize
| 1 |
820 | 45,251,046 |
Referencing results with Python in Maya
|
<p>I've been working on a script in Maya that will allow me to work with the cameras without having to go into the <code>Attribute Editor</code> all the time. Currently I have a menu with a menu item, and within that menu item I have the check box flag active as well. When the check box button is toggled it runs a command that prints out the result of the check box. What I would like to do is have an <code>if statement</code> that will toggle the <code>dof</code> attribute on any camera, but does this by reading the result of the checkbox flag. I know how to properly work with <code>if statements</code> and also find the correct camera, but I don't know how to query the result. Some of the script is below, and line four, the <code>if statement</code>, is where I am having issues. Thank you for your help!</p>
<pre><code>#Window Functions go here
def dofToggle(self):
    print(cmds.menuItem("dof", q=1, cb=1))
    # query the result
    if (cmds.menuItem("dof") == 1):
        cmds.setAttr(camera1.dof=True)
# window settings go here
if (cmds.window("Camera Tools", exists=True)):
    cmds.deleteUI("Camera Tools")
cmds.window(title="Camera Tools", nestedDockingEnabled=True, rtf=True, sizeable=False, menuBar=True, menuBarResize=True, menuBarVisible=True)
cmds.menu(label="dof")
cmds.menuItem("dof", label="on/off", checkBox=True, command=dofToggle)
</code></pre>
|
<p>To get the DOF of the camera use this command:</p>
<pre><code>import maya.cmds as cmds
print(cmds.camera('cameraShape1', q=True, dof=True))
</code></pre>
<p>To disable the DOF of the camera use this command:</p>
<pre><code>cmds.camera('cameraShape1', e=True, dof=False)
</code></pre>
<p>So your <code>if statement</code> should look like this:</p>
<pre><code>if (cmds.camera('cameraShape1', q=True, dof=True) == 1):
    cmds.camera('cameraShape1', e=True, dof=False)
</code></pre>
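<p>If you want the menu item's check box to drive the camera attribute directly (a sketch based on the question's own code; the camera shape name is an assumption), you can query the check box state inside the callback:</p>
<pre><code>def dofToggle(*args):
    # query the check box state of the "dof" menu item and apply it to the camera
    state = cmds.menuItem("dof", q=True, cb=True)
    cmds.camera('cameraShape1', e=True, dof=state)
</code></pre>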
|
python|scripting|maya
| 1 |
821 | 57,872,850 |
Executing a command outside of a conda env
|
<p>I'm activating a conda environment at the beginning of the script, but inside a loop I want to execute one command using os.system() outside of the conda environment.</p>
<p>Example:</p>
<pre><code># code continues ...
for n in range(5):
    # Some code here within the conda environment
    # Only the following command should be executed outside the current conda environment
    os.system('some command ...')
    # Some code here within the same conda environment
# code continues ...
</code></pre>
<p>Is this possible?</p>
|
<p>Commands run with <code>os.system</code> will inherit the environment variables, and hence run in the activated Conda env:</p>
<pre class="lang-bash prettyprint-override"><code>$ which python
/usr/bin/python
$ python -c "import os; os.system('which python')"
/usr/bin/python
$ conda activate
(base) $ which python
/Users/user/miniconda3/bin/python
(base) $ python -c "import os; os.system('which python')"
/Users/user/miniconda3/bin/python
</code></pre>
<p>and there aren't any options to manipulate the environment variables without actually manipulating the current environment, which you likely don't want to do.</p>
<p>Instead, you want <a href="https://docs.python.org/3/library/subprocess.html#subprocess.run" rel="nofollow noreferrer">the <code>subprocess</code> module</a>, which provides more control over how the subprocess is run. As a simple example, let's strip the <code>$PATH</code> of any entries with <code>"conda"</code> in them and rerun with this reduced <code>$PATH</code></p>
<pre class="lang-python prettyprint-override"><code>import os
import subprocess
path_cur = os.environ['PATH']
# remove '*conda*' entries
path_new = ':'.join(p for p in path_cur.split(':') if 'conda' not in p)
subprocess.run(['which', 'python'], env={'PATH': path_cur})
# /Users/user/miniconda3/bin/python
# CompletedProcess(args=['which', 'python'], returncode=0)
subprocess.run(['which', 'python'], env={'PATH': path_new})
# /usr/bin/python
# CompletedProcess(args=['which', 'python'], returncode=0)
</code></pre>
|
python|shell|anaconda
| 1 |
822 | 54,002,432 |
(Thai language) I have a problem reading a CSV file and uploading it with Flask
|
<p>I have just started learning Flask and Python. I have problems when I upload a CSV file and want to display the data from the file on a webpage (generated with HTML).<br>
Right now my webpage shows:</p>
<p>Timestamp ... เลือกข้อที่ถูกที่สุด 0 2561/12/25 2:30:50 หลังเที่ยง GMT+7 ... NaN 1 2561/12/25 2:31:40 หลังเที่ยง GMT+7 ... NaN 2 2561/12/25 2:32:01 หลังเที่ยง GMT+7 ... NaN 3 2561/12/25 2:32:15 หลังเที่ยง GMT+7 ... NaN 4 2561/12/25 2:33:18 หลังเที่ยง GMT+7 ... NaN 5 2561/12/25 2:39:02 หลังเที่ยง GMT+7 ... ตัวเลือก 1 6 2561/12/25 2:40:19 หลังเที่ยง GMT+7 ... NaN 7 NaN ... NaN 8 NaN ... NaN 9 ,ขอโทษค่ะ,ตามนั้ค่ะ ... NaN 10 NaN ... NaN 11 NaN ... NaN 12 NaN ... NaN [13 rows x 16 columns]</p>
<p>but I want something like this:
<a href="https://i.stack.imgur.com/gap2h.png" rel="nofollow noreferrer">screenshot of the desired table</a></p>
<p>Thank you for help.</p>
|
<p>Try this <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_html.html#pandas-dataframe-to-html" rel="nofollow noreferrer">pandas.DataFrame.to_html</a></p>
<p>For example</p>
<pre><code>>> print(yourdataframe.to_html())
</code></pre>
<p>Remember that Python and HTML are different structures. You have to set up the HTML table properly.</p>
<p>The output looks like:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>foo1</th>
<th>foo2</th>
<th>foo3</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>-0.623329</td>
<td>0.086472</td>
<td>0.506933</td>
</tr>
<tr>
<th>1</th>
<td>0.988126</td>
<td>0.172142</td>
<td>0.903697</td>
</tr>
</tbody>
</table></code></pre>
</div>
</div>
</p>
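<p>To show that table in Flask (a minimal sketch; the route, template name and variable names are assumptions), pass the HTML string to your template and mark it safe in Jinja:</p>
<pre><code>from flask import Flask, render_template
import pandas as pd

app = Flask(__name__)

@app.route("/show")
def show():
    df = pd.read_csv("uploaded.csv")  # hypothetical uploaded file
    return render_template("show.html", table=df.to_html())

# In templates/show.html render it without escaping:
#   {{ table | safe }}
</code></pre>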
|
python|html|csv|flask
| 0 |
823 | 53,939,222 |
python pandas: replace a str value of column in another str column with a special character
|
<p>There is a dataframe like the following.</p>
<pre><code> id num text
1 1.2 price is 1.2
1 2.3 price is 1.2 or 2.3
2 3 The total value is $3 and $130
3 5 The apple value is 5dollar and $150
</code></pre>
<p>I want to replace the num value in the text with the string 'UNK',</p>
<p>and the new dataframe is changed to:</p>
<pre><code> id num text
1 1.2 price is UNK
1 2.3 price is 1.2 or UNK
2 3 The total value is UNK and 130
3 5 The apple value is UNK dollar and $150
</code></pre>
<p>z
My current code is as following</p>
<pre><code>df_dev['text'].str.replace(df_dev['num'], 'UNK')
</code></pre>
<p>and there is an error:</p>
<pre><code>TypeError: 'Series' objects are mutable, thus they cannot be hashed
</code></pre>
|
<p>Let us use <code>regex</code> and <code>replace</code>:</p>
<pre><code>df.text.replace(regex=r'(?i)'+ df.num.astype(str),value="UNK")
0 price is UNK
1 price is 1.2 or UNK
2 The total value is UNK
Name: text, dtype: object
#df.text=df.text.replace(regex=r'(?i)'+ df.num.astype(str),value="UNK")
</code></pre>
<p>Update </p>
<pre><code>(df.text+' ').replace(regex=r'(?i) '+ df.num.astype(str)+' ',value=" UNK ")
0 price is UNK
1 price is 1.2 or UNK
2 The total value is UNK and 130
Name: text, dtype: object
</code></pre>
|
python|python-3.x|string|pandas
| 2 |
824 | 53,859,101 |
concatenate list of lists of dataframes from pd.read_html DF[0] format
|
<p>I have DF[number] = pd.read_html(url.text).</p>
<p>I want to concatenate or join the DF lists (there are hundreds, e.g. DF[400]) into a single pandas dataframe.</p>
<p>The dataframes are stored as lists, so a list of lists, but Python indexes the lists like a pandas dataframe.</p>
<pre><code> [ Vessel Built GT DWT Size (m) Unnamed: 5
0 x XIN HUA Bulk Carrier 2012 44543 82269 229 x 32
1 b FRANCESCO CORRADO Bulk Carrier 2008 40154 77061 225 x 32
2 5 NAN XIN 17 Bulk Carrier 2001 40570 75220 225 x 32
3 p DIAMOND INDAH Bulk Carrier 2002 43321 77830 229 x 37
4 NaN PRIME LILY Bulk Carrier 2012 44485 81507 229 x 32
5 s EVGENIA Bulk Carrier 2011 92183 176000 292 x 45
df[number] = pd.read_html(url.text)

for number in range(494):
    df = pd.concat(df[number])
</code></pre>
<p>That method doesn't seem to work, and neither does this:</p>
<pre><code> df1=pd.concat(df[1])
df2=pd.concat(df[2])
df3=pd.concat(df[3])
dfx=pd.concat([df1,df2,df3],ignore_index=True)
</code></pre>
<p>This is not what I want, as there are hundreds of [] Python list dataframes.</p>
<p>I want one pandas dataframe that joins all of the list dataframes into one.</p>
<p>Just to be clear, the df[] container of the lists is a dict type, while df[1] is a list.</p>
|
<p>You can use list comprehension:</p>
<pre><code>pd.concat([dfs[i] for i in range(len(dfs))])
</code></pre>
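<p>Since each <code>pd.read_html</code> call itself returns a list of tables, you may need to flatten one level first (a sketch, assuming the dict keys run from 0 upward):</p>
<pre><code>all_tables = [table for i in range(len(dfs)) for table in dfs[i]]
combined = pd.concat(all_tables, ignore_index=True)
</code></pre>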
|
python|pandas|list|concat|scrape
| 0 |
825 | 58,396,435 |
I am not getting output for map widget in jupyter notebook
|
<p>I am working in a Jupyter Notebook and installed the ArcGIS API. When I call the map from that API, the map widget does not show. All the features of the ArcGIS API work quite well, except its map widget.</p>
<blockquote>
<p>Following is the code:-</p>
</blockquote>
<pre><code>from arcgis.gis import GIS
myGIS = GIS()
myGIS.map()
</code></pre>
<blockquote>
<p>The above-mentioned code shows only the following:</p>
</blockquote>
<pre><code>MapView(layout=Layout(height='400px', width='100%'))
</code></pre>
<p>A world map should appear, but it's only showing that line of text, i.e. "MapView(layout=Layout(height='400px', width='100%'))".</p>
|
<p>Using Chrome was my answer. Everything works fine as long as Chrome is the browser. Cheers.</p>
|
python-3.x|jupyter-notebook|arcgis
| 0 |
826 | 28,629,910 |
Python Class: Global/Local variable name not defined
|
<p>I have two sets of code: in one I use a class (the second piece of code) to manage my code, and in the other I just define functions. In my second piece of code I get a NameError: name '...' is not defined.
Both pieces of code are for the same purpose.</p>
<pre><code>from Tkinter import *
import ttk
import csv

USER_LOGIN = "user_login.csv"


class Login:
    def __init__(self, master):
        frame = Frame(master)
        frame.pack()
        lment1 = StringVar()
        lment2 = StringVar()
        self.usernameLabel = Label(frame, text="Username:")
        self.usernameLabel.grid(row=0, sticky=E)
        self.passwordLabel = Label(frame, text="Password:")
        self.passwordLabel.grid(row=1, sticky=E)
        self.usernameEntry = Entry(frame, textvariable=lment1)
        self.usernameEntry.grid(row=0, column=1)
        self.passwordEntry = Entry(frame, textvariable=lment2)
        self.passwordEntry.grid(row=1, column=1)
        self.loginButton = ttk.Button(frame, text="Login", command=self.login_try)
        self.loginButton.grid(row=2)
        self.cancelButton = ttk.Button(frame, text="Cancel", command=frame.quit)
        self.cancelButton.grid(row=2, column=1)

    def login_try(self):
        ltext1 = lment1.get()
        ltext2 = lment2.get()
        if in_csv(USER_LOGIN, [ltext1, ltext2]):
            login_success()
        else:
            login_failed()


def in_csv(fname, row, **kwargs):
    with open(fname) as inf:
        incsv = csv.reader(inf, **kwargs)
        return any(r == row for r in incsv)


def login_success():
    print 'Login successful'
    tkMessageBox.showwarning(title="Login successful", message="Welcome back")


def login_failed():
    print 'Failed to login'
    tkMessageBox.showwarning(title="Failed login", message="You have entered an invalid Username or Password")


root = Tk()
root.geometry("200x70")
root.title("title")
app = Login(root)
root.mainloop()
</code></pre>
<p>That is the second piece of code ^^^</p>
<pre><code># **** Import modules ****
import csv
from Tkinter import *
import ttk
import tkMessageBox

# **** Declare Classes ****
lGUI = Tk()
lment1 = StringVar()
lment2 = StringVar()
USER_LOGIN = "user_login.csv"


def in_csv(fname, row, **kwargs):
    with open(fname) as inf:
        incsv = csv.reader(inf, **kwargs)
        return any(r==row for r in incsv)


def login_try():
    ltext1 = lment1.get()
    ltext2 = lment2.get()
    if in_csv(USER_LOGIN, [ltext1, ltext2]):
        login_success()
    else:
        login_failed()


def login_success():
    print 'Login successful'
    tkMessageBox.showwarning(title="Login successful", message="Welcome back")


def login_failed():
    print 'Failed to login'
    tkMessageBox.showwarning(title="Failed login", message="You have entered an invalid Username or Password")


lGUI.geometry('200x100+500+300')
lGUI.title('PVH')
lButton = Button(lGUI, text="Login", command=login_try)
lButton.grid(row=3)

label_1 = Label(lGUI, text="Username")
label_2 = Label(lGUI, text="Password")
entry_1 = Entry(lGUI, textvariable=lment1)
entry_2 = Entry(lGUI, textvariable=lment2)

label_1.grid(row=0)
label_2.grid(row=1)
entry_1.grid(row=0, column=1)
entry_2.grid(row=1, column=1)

lGUI.mainloop()
</code></pre>
<p>And that is the piece of code that works^</p>
<p>I get the error:</p>
<pre><code>Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Python27\lib\lib-tk\Tkinter.py", line 1486, in __call__
return self.func(*args)
File "C:/Users/User/Desktop/PVH_work/PVH_program/blu.py", line 33, in login_try
ltext1 = lment1.get()
NameError: global name 'lment1' is not defined
</code></pre>
<p>Any help would be appreciated :D</p>
|
<p>In your first code piece, you define the variable 'lment1' inside the __init__ method, making it local to that single method.
When you then try to access the same variable in the 'login_try' method, Python doesn't know what it is.</p>
<p>If you wish to access the variable from anywhere in the class, you should define it on the instance by setting it on 'self':</p>
<pre><code>def __init__(self, master):
    [...]
    self.lment1 = StringVar()
    [...]
</code></pre>
<p>That way, you can access it later with:</p>
<pre><code>def login_try(self):
    [...]
    ltext1 = self.lment1.get()
    [...]
</code></pre>
<p>The reason it works in your second code sample is that you defined it outside of any class, making it globally available.</p>
|
python|class|tkinter
| 2 |
827 | 25,864,955 |
Regex to select and replace spaces inside double brackets
|
<p>I'm writing a script which is used to tidy up MediaWiki files prior to conversion to Confluence markup. In this particular scenario I need to fix page links, which in MediaWiki look something like this:</p>
<pre><code>[[this is a page]]
</code></pre>
<p>The problem is that the actual page link would be this_is_a_page; the universal wiki converter isn't smart enough to realise this when it converts to Confluence markup, so you end up with broken links.</p>
<p>I've been trying to create a regex as part of my Python script (I've already stripped out HTML and some other tags like &lt;gallery&gt; etc.). The following regex selects all the links in question:</p>
<pre><code>'\[\[(.*?)\]\]'
</code></pre>
<p>I just can't find a programmatic way to select only the spaces inside the [[ ]] so I can substitute them with underscores. I've attempted using matches with no success.</p>
|
<p>Try <code>re.sub</code> with a lambda expression:</p>
<pre><code>>>> import re
>>> test = '[[this is a page]] bla bla [[this is another page]]'
>>> re.sub(r'\[\[.+?\]\]', lambda x:x.group().replace(" ","_"), test)
'[[this_is_a_page]] bla bla [[this_is_another_page]]'
</code></pre>
|
python|regex
| 3 |
828 | 53,638,832 |
Bold, underlining, and Iterations with python-docx
|
<p>I am writing a program to take data from an ASCII file and place the data in the appropriate place in the Word document, making only particular words bold and underlined. I am new to Python, but I have extensive experience in Matlab programming. My code is:</p>
<pre><code>#IMPORT ASCII DATA AND MAKE IT USEABLE
#Alternatively Pandas - gives better table display results
import pandas as pd
data = pd.read_csv('203792_M-51_Niles_control_SD_ACSF.txt', sep=",",
                   header=None)
#print data
#data[1][3] gives value at particular data points within matrix
i = len(data[1])
print 'Number of Points imported =', i

#IMPORT WORD DOCUMENT
import docx #Opens Python Word document tool
from docx import Document #Invokes Document command from docx
document = Document('test_iteration.docx') #Imports Word Document to Modify
t = len(document.paragraphs) #gives the number of lines in document
print 'Total Number of lines =', t
#for paragraph in document.paragraphs:
#    print(para.text) #Prints the text in the entire document

font = document.styles['Normal'].font
font.name = 'Arial'
from docx.shared import Pt
font.size = Pt(8)
#font.bold = True
#font.underline = True

for paragraph in document.paragraphs:
    if 'NORTHING:' in paragraph.text:
        #print paragraph.text
        paragraph.text = 'NORTHING: \t', str(data[1][0])
        print paragraph.text
    elif 'EASTING:' in paragraph.text:
        #print paragraph.text
        paragraph.text = 'EASTING: \t', str(data[2][0])
        print paragraph.text
    elif 'ELEV:' in paragraph.text:
        #print paragraph.text
        paragraph.text = 'ELEV: \t', str(data[3][0])
        print paragraph.text
    elif 'CSF:' in paragraph.text:
        #print paragraph.text
        paragraph.text = 'CSF: \t', str(data[8][0])
        print paragraph.text
    elif 'STD. DEV.:' in paragraph.text:
        #print paragraph.text
        paragraph.text = 'STD. DEV.: ', 'N: ', str(data[5][0]), '\t E: ', str(data[6][0]), '\t EL: ', str(data[7][0])
        print paragraph.text

#for paragraph in document.paragraphs:
#    print(paragraph.text) #Prints the text in the entire document
#document.save('test1_save.docx') #Saves as Word Document after Modification
</code></pre>
<p>My question is how to make only the "NORTHING:" bold and underlined in:</p>
<pre><code> paragraph.text = 'NORTHING: \t', str(data[1][0])
print paragraph.text
</code></pre>
<p>So I wrote a pseudo "find and replace" command that works great if all the values being replaced are exactly the same. However, I need to replace the values in the second paragraph with the values from the second array of the ASCII file, and the third paragraph with the values from the third array...etc. (I have to use find and replace because the formatting of the document is too advanced for me to replicate in a program, unless there is a program that can read the Word file and write the programming back as a Python script...reverse engineer it.) </p>
<p>I am still just learning, so the code may seem crude to you. I am just trying to automate this boring process of copy and pasting.</p>
|
<p>Untested, but assuming python-docx is similar to python-pptx (it should be, it's maintained by the same developer, and a cursory review of the documentation suggests that the way it interfaces with the PPT/DOC files is the same, uses the same methods, etc.)</p>
<p>In order to manipulate substrings of paragraphs or words, you need to use the <code>run</code> object:</p>
<p><a href="https://python-docx.readthedocs.io/en/latest/api/text.html#run-objects" rel="nofollow noreferrer">https://python-docx.readthedocs.io/en/latest/api/text.html#run-objects</a></p>
<p>In practice, this looks something like:</p>
<pre><code>for paragraph in document.paragraphs:
if 'NORTHING:' in paragraph.text:
paragraph.clear()
run = paragraph.add_run()
run.text = 'NORTHING: \t'
run.font.bold = True
run.font.underline = True
run = paragraph.add_run()
run.text = str(data[1][0])
</code></pre>
<p>Conceptually, you create a <code>run</code> instance for each <em>part</em> of the paragraph/text that you need to manipulate. So, first we create a <code>run</code> with the bolded font, then we add another run (which I think will not be bold/underline, but if it is just set those to <code>False</code>).</p>
<p>Note: it's preferable to put all of your <code>import</code> statements at the top of a module. </p>
<p>This can be optimized a bit by using a mapping object like a dictionary, which you can use to associate the matching values ("NORTHING") as <code>keys</code> and the remainder of the paragraph text as <code>values</code>. <strong>ALSO UNTESTED</strong></p>
<pre><code>import pandas as pd
from docx import Document
from docx.shared import Pt
data = pd.read_csv('203792_M-51_Niles_control_SD_ACSF.txt', sep=",",
header=None)
i=len(data[1])
print 'Number of Points imported =', i
document = Document('test_iteration.docx') #Imports Word Document to Modify
t = len(document.paragraphs) #gives the number of lines in document
print 'Total Number of lines =', t
font = document.styles['Normal'].font
font.name = 'Arial'
font.size = Pt(8)
# This maps the matching strings to the data array values
data_dict = {
'NORTHING:': data[1][0],
'EASTING:': data[2][0],
'ELEV:': data[3][0],
'CSF:': data[8][0],
'STD. DEV.:': 'N: {0}\t E: {1}\t EL: {2}'.format(data[5][0], data[6][0], data[7][0])
}
for paragraph in document.paragraphs:
for k,v in data_dict.items():
if k in paragraph.text:
paragraph.clear()
run = paragraph.add_run()
run.text = k + '\t'
run.font.bold = True
run.font.underline = True
run = paragraph.add_run()
run.text = '{0}'.format(v)
</code></pre>
|
python|ascii|python-docx
| 4 |
829 | 24,899,785 |
django - view returning no value?
|
<p>I have the following basic views.py to test out doing queries based on the user.</p>
<pre><code>def Vendor_Matrix(request):
username = request.session.get('username','')
queryset = User.objects.filter(username=username).values_list('user_permissions', 'username', 'first_name')
return JSONResponse(queryset)
</code></pre>
<p>I'm logged in (using Mezzanine) into my site. I then have that view referenced in the following urls.py</p>
<pre><code>from django.conf.urls import patterns, url
from api import views
urlpatterns = patterns('',
url(r'^your-data/vendor-matrix/$', 'api.views.Vendor_Matrix'),
)
</code></pre>
<p>When I go to the URL it comes up with a blank page, specifically this:</p>
<pre><code>[]
</code></pre>
<p>I can only imagine it's not registering the logged in user?</p>
<p>I've simplified my views.py even further - Definitely not registering the username that is logged in. It still returns nothing.</p>
<pre><code>def Vendor_Matrix(request):
username = request.session.get('username','')
return HttpResponse(username)
</code></pre>
|
<p><a href="https://docs.djangoproject.com/en/dev/ref/request-response/#django.http.HttpRequest.user" rel="nofollow">That's not where Django keeps the logged-in user...</a></p>
<pre><code># requires "import operator" at the top of views.py
return JSONResponse(operator.attrgetter('user_permissions', 'username', 'first_name')(request.user))
</code></pre>
|
python|django
| 1 |
830 | 38,426,414 |
Tensorflow Inception Android
|
<p>I am trying to build the TensorFlow Android Camera Demo.<br>
As I understand the error, something is wrong with build-tools/23.0.1; I removed it and reinstalled it, but to no effect. What is wrong, or does anyone have thoughts on how to find out what the problem is?</p>
<p>used:<br>
ndk: android-ndk-r12b<br>
tensorflow: master branch ( tried 0.8 and 0.9 as well ) </p>
<p>I tried to use buildToolsVersion 24.0.0 and got a different error (included below).</p>
<p>WORKSPACE file:</p>
<pre><code># Uncomment and update the paths in these entries to build the Android demo.
android_sdk_repository(
name = "androidsdk",
api_level = 23,
build_tools_version = "23.0.1",
# Replace with path to Android SDK on your system
path = "/home/boss/Android/Sdk",
)
android_ndk_repository(
name="androidndk",
path="/home/boss/Downloads/android-ndk-r12b",
api_level=21)
</code></pre>
<p>Error: buildtool 23.0.1</p>
<pre><code>ERROR: /home/boss/Downloads/tensorflow-master/tensorflow/examples/android/BUILD:47:1: Processing Android resources for //tensorflow/examples/android:tensorflow_demo failed: namespace-sandbox failed: error executing command
(cd /home/boss/.cache/bazel/_bazel_boss/f65f721012b7fd201233c0708275aaf3/execroot/tensorflow-master && \
exec env - \
/home/boss/.cache/bazel/_bazel_boss/f65f721012b7fd201233c0708275aaf3/execroot/tensorflow-master/_bin/namespace-sandbox @/home/boss/.cache/bazel/_bazel_boss/f65f721012b7fd201233c0708275aaf3/execroot/tensorflow-master/bazel-sandbox/565ee075-9d3c-4af1-adce-59fc5a2f3c06-0.params -- bazel-out/host/bin/external/bazel_tools/tools/android/resources_processor --buildToolsVersion 23.0.1 --aapt bazel-out/host/bin/external/androidsdk/aapt_binary --annotationJar external/androidsdk/tools/support/annotations.jar --androidJar external/androidsdk/platforms/android-23/android.jar --primaryData tensorflow/examples/android/res:tensorflow/examples/android/assets:tensorflow/examples/android/AndroidManifest.xml --rOutput bazel-out/local-fastbuild/bin/tensorflow/examples/android/tensorflow_demo_symbols/R.txt --srcJarOutput bazel-out/local-fastbuild/bin/tensorflow/examples/android/tensorflow_demo.srcjar --proguardOutput bazel-out/local-fastbuild/bin/tensorflow/examples/android/proguard/tensorflow_demo/_tensorflow_demo_proguard.cfg --manifestOutput bazel-out/local-fastbuild/bin/tensorflow/examples/android/tensorflow_demo_processed_manifest/AndroidManifest.xml --resourcesOutput bazel-out/local-fastbuild/bin/tensorflow/examples/android/tensorflow_demo_files/resource_files.zip --packagePath bazel-out/local-fastbuild/bin/tensorflow/examples/android/tensorflow_demo.ap_ --debug --packageForR org.tensorflow.demo).
Error: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory
Error: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory
Error: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory
Error: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory
Error: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory
Error: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory
Error: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory
Error: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory
Error: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory
Jul 17, 2016 11:51:48 PM com.google.devtools.build.android.AndroidResourceProcessingAction main
SEVERE: Error during merging resources
Error: Failed to run command:
bazel-out/host/bin/external/androidsdk/aapt_binary s -i /tmp/android_resources_tmp1770729823994372609/tmp-deduplicated/tensorflow/examples/android/res/drawable-xxhdpi/ic_launcher.png -o /tmp/android_resources_tmp1770729823994372609/merged_resources/drawable-xxhdpi-v4/ic_launcher.png
Error Code:
127
Output:
bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory
at com.android.ide.common.res2.MergeWriter.end(MergeWriter.java:54)
at com.android.ide.common.res2.MergedResourceWriter.end(MergedResourceWriter.java:113)
at com.android.ide.common.res2.DataMerger.mergeData(DataMerger.java:291)
at com.android.ide.common.res2.ResourceMerger.mergeData(ResourceMerger.java:48)
at com.google.devtools.build.android.AndroidResourceProcessor.mergeData(AndroidResourceProcessor.java:724)
at com.google.devtools.build.android.AndroidResourceProcessingAction.main(AndroidResourceProcessingAction.java:254)
Caused by: com.android.ide.common.internal.LoggedErrorException: Failed to run command:
bazel-out/host/bin/external/androidsdk/aapt_binary s -i /tmp/android_resources_tmp1770729823994372609/tmp-deduplicated/tensorflow/examples/android/res/drawable-xxhdpi/ic_launcher.png -o /tmp/android_resources_tmp1770729823994372609/merged_resources/drawable-xxhdpi-v4/ic_launcher.png
Error Code:
127
Output:
bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory
at com.android.ide.common.internal.CommandLineRunner.runCmdLine(CommandLineRunner.java:123)
at com.android.ide.common.internal.CommandLineRunner.runCmdLine(CommandLineRunner.java:96)
at com.android.ide.common.internal.AaptCruncher.crunchPng(AaptCruncher.java:58)
at com.android.ide.common.res2.MergedResourceWriter$1.call(MergedResourceWriter.java:188)
at com.android.ide.common.res2.MergedResourceWriter$1.call(MergedResourceWriter.java:139)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "main" Error: Failed to run command:
bazel-out/host/bin/external/androidsdk/aapt_binary s -i /tmp/android_resources_tmp1770729823994372609/tmp-deduplicated/tensorflow/examples/android/res/drawable-xxhdpi/ic_launcher.png -o /tmp/android_resources_tmp1770729823994372609/merged_resources/drawable-xxhdpi-v4/ic_launcher.png
Error Code:
127
Output:
bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory
at com.android.ide.common.res2.MergeWriter.end(MergeWriter.java:54)
at com.android.ide.common.res2.MergedResourceWriter.end(MergedResourceWriter.java:113)
at com.android.ide.common.res2.DataMerger.mergeData(DataMerger.java:291)
at com.android.ide.common.res2.ResourceMerger.mergeData(ResourceMerger.java:48)
at com.google.devtools.build.android.AndroidResourceProcessor.mergeData(AndroidResourceProcessor.java:724)
at com.google.devtools.build.android.AndroidResourceProcessingAction.main(AndroidResourceProcessingAction.java:254)
Caused by: com.android.ide.common.internal.LoggedErrorException: Failed to run command:
bazel-out/host/bin/external/androidsdk/aapt_binary s -i /tmp/android_resources_tmp1770729823994372609/tmp-deduplicated/tensorflow/examples/android/res/drawable-xxhdpi/ic_launcher.png -o /tmp/android_resources_tmp1770729823994372609/merged_resources/drawable-xxhdpi-v4/ic_launcher.png
Error Code:
127
Output:
bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory
at com.android.ide.common.internal.CommandLineRunner.runCmdLine(CommandLineRunner.java:123)
at com.android.ide.common.internal.CommandLineRunner.runCmdLine(CommandLineRunner.java:96)
at com.android.ide.common.internal.AaptCruncher.crunchPng(AaptCruncher.java:58)
at com.android.ide.common.res2.MergedResourceWriter$1.call(MergedResourceWriter.java:188)
at com.android.ide.common.res2.MergedResourceWriter$1.call(MergedResourceWriter.java:139)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Target //tensorflow/examples/android:tensorflow_demo failed to build
</code></pre>
<p>error: buildtool 24.0.0</p>
<pre><code>ERROR: /home/boss/.cache/bazel/_bazel_boss/f65f721012b7fd201233c0708275aaf3/external/gif_archive/BUILD:14:1: C++ compilation of rule '@gif_archive//:gif' failed: namespace-sandbox failed: error executing command
(cd /home/boss/.cache/bazel/_bazel_boss/f65f721012b7fd201233c0708275aaf3/execroot/tensorflow-master && \
exec env - \
PATH=/home/boss/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin \
/home/boss/.cache/bazel/_bazel_boss/f65f721012b7fd201233c0708275aaf3/execroot/tensorflow-master/_bin/namespace-sandbox @/home/boss/.cache/bazel/_bazel_boss/f65f721012b7fd201233c0708275aaf3/execroot/tensorflow-master/bazel-sandbox/937cd00e-9340-4e7e-b3fe-a3006d83a7e6-2.params -- /usr/bin/gcc -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -Wall -Wl,-z,-relro,-z,now -B/usr/bin -B/usr/bin -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections -g0 -DHAVE_CONFIG_H -iquote external/gif_archive -iquote bazel-out/host/genfiles/external/gif_archive -iquote external/bazel_tools -iquote bazel-out/host/genfiles/external/bazel_tools -isystem external/gif_archive/giflib-5.1.4/lib -isystem bazel-out/host/genfiles/external/gif_archive/giflib-5.1.4/lib -isystem external/bazel_tools/tools/cpp/gcc3 -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -MD -MF bazel-out/host/bin/external/gif_archive/_objs/gif/external/gif_archive/giflib-5.1.4/lib/quantize.d -c external/gif_archive/giflib-5.1.4/lib/quantize.c -o bazel-out/host/bin/external/gif_archive/_objs/gif/external/gif_archive/giflib-5.1.4/lib/quantize.o).
external/gif_archive/giflib-5.1.4/lib/quantize.c:17:29: fatal error: gif_lib_private.h: No such file or directory
compilation terminated.
Target //tensorflow/examples/android:tensorflow_demo failed to build
</code></pre>
|
<p>Actual problem: the 64-bit machine was missing 32-bit compatibility libraries (hence the <code>libz.so.1</code> errors).
Solution found in <a href="https://stackoverflow.com/questions/17020298/android-sdks-build-tools-17-0-0-aapt-error-while-loading-shared-libraries-libz">this post</a>.</p>
|
android|android-ndk|tensorflow
| 0 |
831 | 36,467,613 |
Need help,writing a python BMI calc
|
<p>I am new to python and currently learning to use functions properly.</p>
<pre><code>h = 1.75
w = 70.5
bmi = float(w / h ** 2)
if bmi < 18.5:
print('过轻')
elif 18.5 <= bmi < 25:
print('正常')
elif 25 <= bmi < 28:
print('过重')
elif 28 <= bmi < 32:
print('肥胖')
else bmi >= 32:
print('严重肥胖')
</code></pre>
<p>Every time I try to run this program I get this error:</p>
<pre><code>File "/Users/frank/Coding/bmimyself.py", line 17
else bmi >= 32:
^
SyntaxError: invalid syntax
</code></pre>
<p>I would appreciate any assistance with the coding errors I must have made.</p>
|
<p>An <code>else</code> clause cannot take a condition, so <code>else bmi >= 32:</code> is invalid syntax. Use another <code>elif</code> with the condition, or a bare <code>else:</code>.</p>
<pre><code>elif bmi >= 32:
    print('foo')
else:
    print('bar')
</code></pre>
|
python
| 0 |
832 | 34,160,995 |
Python Turtle fill the triangle with color?
|
<p>I am currently using the <code>turtle.goto</code> coords from a text file. I have the triangle drawn and everything, but I don't know how to fill it.</p>
|
<p>You are ending fill after every new coordinate. You need to call <code>t.begin_fill()</code> before your <code>for</code> loop and call <code>t.end_fill()</code> after the last coordinate, otherwise you are just filling in your single line with each iteration.</p>
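<p>A minimal sketch of that ordering (the turtle name and the coordinates below are just placeholders for the ones read from your text file):</p>
<pre><code>import turtle

t = turtle.Turtle()
coords = [(0, 0), (100, 0), (50, 87)]   # placeholder triangle corners

t.fillcolor("red")
t.penup()
t.goto(coords[0])
t.pendown()

t.begin_fill()               # start the fill before drawing the outline
for x, y in coords[1:]:
    t.goto(x, y)
t.goto(coords[0])            # close the triangle
t.end_fill()                 # fill once the whole outline is drawn

turtle.done()
</code></pre>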
|
python|colors|turtle-graphics
| 2 |
833 | 38,668,814 |
constrain a series or array to a range of values
|
<p>I have a series of values that I want to have constrained to be within +1 and -1.</p>
<pre><code>s = pd.Series(np.random.randn(10000))
</code></pre>
<p>I know I can use <code>apply</code>, but is there a simple vectorized approach?</p>
<pre><code>s_ = s.apply(lambda x: min(max(x, -1), 1))
s_.head()
0 -0.256117
1 0.879797
2 1.000000
3 -0.711397
4 -0.400339
dtype: float64
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/version/0.13.1/generated/pandas.Series.clip.html" rel="nofollow"><code>clip</code></a>:</p>
<pre><code>s = s.clip(-1,1)
</code></pre>
<p>Example Input:</p>
<pre><code>s = pd.Series([-1.2, -0.5, 1, 1.1])
0 -1.2
1 -0.5
2 1.0
3 1.1
</code></pre>
<p>Example Output:</p>
<pre><code>0 -1.0
1 -0.5
2 1.0
3 1.0
</code></pre>
|
python|numpy|pandas
| 4 |
834 | 40,552,485 |
PyQt5 equivalent of QtWebKitWidgets.QWebView.page.mainFrame() for QtWebEngineWidgets.QWebEngineView()?
|
<p>I am very new to PyQt and started to play around with the following code (which originally comes from <a href="http://pythoncentral.io/pyside-pyqt-tutorial-qwebview/" rel="nofollow noreferrer">this blog post</a>):</p>
<pre><code># Create an application
app = QApplication([])
# And a window
win = QWidget()
win.setWindowTitle('QWebView Interactive Demo')
# And give it a layout
layout = QVBoxLayout()
win.setLayout(layout)
# Create and fill a QWebView
view = QWebView()
view.setHtml('''
<html>
<head>
<title>A Demo Page</title>
<script language="javascript">
// Completes the full-name control and
// shows the submit button
function completeAndReturnName() {
var fname = document.getElementById('fname').value;
var lname = document.getElementById('lname').value;
var full = fname + ' ' + lname;
document.getElementById('fullname').value = full;
document.getElementById('submit-btn').style.display = 'block';
return full;
}
</script>
</head>
<body>
<form>
<label for="fname">First name:</label>
<input type="text" name="fname" id="fname"></input>
<br />
<label for="lname">Last name:</label>
<input type="text" name="lname" id="lname"></input>
<br />
<label for="fullname">Full name:</label>
<input disabled type="text" name="fullname" id="fullname"></input>
<br />
<input style="display: none;" type="submit" id="submit-btn"></input>
</form>
</body>
</html>
''')
# A button to call our JavaScript
button = QPushButton('Set Full Name')
# Interact with the HTML page by calling the completeAndReturnName
# function; print its return value to the console
def complete_name():
frame = view.page().mainFrame()
print frame.evaluateJavaScript('completeAndReturnName();')
# Connect 'complete_name' to the button's 'clicked' signal
button.clicked.connect(complete_name)
# Add the QWebView and button to the layout
layout.addWidget(view)
layout.addWidget(button)
# Show the window and run the app
win.show()
app.exec_()
</code></pre>
<p>I made some slight changes in order to try to make it run with the latest pyqt5 version, but I don't understand how I should change</p>
<pre><code>frame = view.page().mainFrame()
</code></pre>
<p>in order to make the script run without errors. Here is the code I have so far:</p>
<pre><code>from PyQt5 import QtWidgets, QtGui, QtCore
from PyQt5 import QtWebEngineWidgets
# Create an application
app = QtWidgets.QApplication([])
# And a window5
win = QtWidgets.QWidget()
win.setWindowTitle('QWebView Interactive Demo')
# And give it a layout
layout = QtWidgets.QVBoxLayout()
win.setLayout(layout)
# Create and fill a QWebView
#view = QtWebKitWidgets.QWebView() # depecated?
view = QtWebEngineWidgets.QWebEngineView()
view.setHtml('''
<html>
<head>
<title>A Demo Page</title>
<script language="javascript">
// Completes the full-name control and
// shows the submit button
function completeAndReturnName() {
var fname = document.getElementById('fname').value;
var lname = document.getElementById('lname').value;
var full = fname + ' ' + lname;
document.getElementById('fullname').value = full;
document.getElementById('submit-btn').style.display = 'block';
return full;
}
</script>
</head>
<body>
<form>
<label for="fname">First name:</label>
<input type="text" name="fname" id="fname"></input>
<br />
<label for="lname">Last name:</label>
<input type="text" name="lname" id="lname"></input>
<br />
<label for="fullname">Full name:</label>
<input disabled type="text" name="fullname" id="fullname"></input>
<br />
<input style="display: none;" type="submit" id="submit-btn"></input>
</form>
</body>
</html>
''')
# A button to call our JavaScript
button = QtWidgets.QPushButton('Set Full Name')
# Interact with the HTML page by calling the completeAndReturnName
# function; print its return value to the console
def complete_name():
frame = view.page().mainFrame() # THIS raises an error. I'm stuck here.
print(frame.evaluateJavaScript('completeAndReturnName();'))
# Connect 'complete_name' to the button's 'clicked' signal
button.clicked.connect(complete_name)
# Add the QWebView and button to the layout
layout.addWidget(view)
layout.addWidget(button)
# Show the window and run the app
win.show()
app.exec_()
</code></pre>
<p>I have seen <a href="https://stackoverflow.com/questions/37754138/how-to-render-html-with-pyqt5s-qwebengineview">this post</a>, which I thought might help, but unfortunately I'm still stuck with it. Does anyone know how to make this work with the latest PyQt5 version? Help would be very much appreciated.</p>
|
<p>There is nothing equivalent to the Qt WebKit <code>QWebFrame</code> class in Qt Web Engine. Frames are just considered part of the content, so there are no dedicated APIs for dealing with them - there is just a single <code>QWebEnginePage</code>, which provides access to the whole web document.</p>
<p>There is also no <code>evaluateJavaScript</code> method. Instead, there is an asynchronous <code>runJavaScript</code> method, which needs a callback to receive the result. So your code should be re-written like this:</p>
<pre><code>def js_callback(result):
print(result)
def complete_name():
view.page().runJavaScript('completeAndReturnName();', js_callback)
</code></pre>
|
python|pyqt|pyqt5|qtwebengine
| 4 |
835 | 32,722,671 |
Combining multiple columns in a DataFrame
|
<p>I have a DataFrame with 40 columns (columns 0 through 39) and I want to group them four at a time: </p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.binomial(1, 0.2, (100, 40)))
</code></pre>
<hr>
<pre><code>new_df["0-3"] = df[0] + df[1] + df[2] + df[3]
new_df["4-7"] = df[4] + df[5] + df[6] + df[7]
...
new_df["36-39"] = df[36] + df[37] + df[38] + df[39]
</code></pre>
<p>Can I do this in a single statement (or in a better way than summing them separately)? The column names in the new DataFrame are not important.</p>
|
<p>You could select out the columns and sum on the row axis, like this.</p>
<pre><code>df['0-3'] = df.loc[:, 0:3].sum(axis=1)
</code></pre>
<p>A couple things to note:</p>
<ol>
<li>Summing like this will ignore missing data while <code>df[0] + df[1] ...</code> propagates it. Pass <code>skipna=False</code> if you want that behavior.</li>
<li>Not necessarily any performance benefit, may actually be a little slower.</li>
</ol>
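<p>If you want all ten groups in a single statement, a rough sketch using a column-axis groupby could look like this (the group labels are just integer division of the column positions; <code>axis=1</code> groupby works in the pandas versions current at the time, though very recent pandas prefers transposing instead):</p>
<pre><code>import numpy as np

groups = np.arange(df.shape[1]) // 4            # 0,0,0,0,1,1,1,1,...,9,9,9,9
new_df = df.groupby(groups, axis=1).sum()
new_df.columns = ['{}-{}'.format(4 * g, 4 * g + 3) for g in new_df.columns]
</code></pre>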
|
python|pandas|dataframe
| 2 |
836 | 30,012,795 |
I'm designing a flow rate based traffic controller on raspberry pi, it runs in an infinite loop
|
<p>I'm designing a flow-rate-based traffic controller on a Raspberry Pi, using buttons as traffic simulators. The problem I'm facing is that the maximum value gets selected at the start, no further increments to the other counts are possible, and the code runs in an infinite loop.</p>
<p>For example, if I press the button at road 1 on the circuit, it is taken as the maximum value because the counts of the other three roads (road two, road three, road four) are zero; the loop then continues with only the traffic lights at road 1 going green and red, and the button presses at the other three streets are not considered at all.
Please help me with the logic, as I'm a newbie with Python.
Here's my code.</p>
<pre><code>#!/usr/bin/python
import os
import time
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
count = 0
count2 = 0
count3 = 0
count4 = 0
GPIO.setwarnings(False)
GPIO.setup(20, GPIO.IN)
GPIO.setup(21, GPIO.IN)
GPIO.setup(19, GPIO.IN)
GPIO.setup(25, GPIO.IN)
#red1
GPIO.setup(17,GPIO.OUT)
#yellow1
GPIO.setup(27,GPIO.OUT)
#green1
GPIO.setup(22,GPIO.OUT)
#RT1
GPIO.setup(12,GPIO.OUT)
#red2
GPIO.setup(14,GPIO.OUT)
#yellow2
GPIO.setup(15,GPIO.OUT)
#green2
GPIO.setup(18,GPIO.OUT)
#RT2
GPIO.setup(16,GPIO.OUT)
#red3
GPIO.setup(10,GPIO.OUT)
#yellow3
GPIO.setup(9,GPIO.OUT)
#green3
GPIO.setup(11,GPIO.OUT)
#RT3
GPIO.setup(24,GPIO.OUT)
#red4
GPIO.setup(2,GPIO.OUT)
#yellow4
GPIO.setup(3,GPIO.OUT)
#green4
GPIO.setup(13,GPIO.OUT)
#RT4
GPIO.setup(23,GPIO.OUT)
while True:
if (GPIO.input(20) == False):
count=count+1
time.sleep(1)
print(count)
if (GPIO.input(21) == False):
count2=count2+1
time.sleep(1)
print(count2)
if (GPIO.input(19) == False):
count3=count3+1
time.sleep(1)
print(count3)
if (GPIO.input(25) == False):
count4=count4+1
time.sleep(1)
print(count4)
if count > count2 :
if count > count3:
if count > count4:
print ("Traffic on road 1 highest")
#go go go...RT1+Str1
GPIO.output(17,False)
GPIO.output(27,False)
GPIO.output(22,True)
GPIO.output(12,True)
GPIO.output(14,True)
GPIO.output(15,False)
GPIO.output(18,False)
GPIO.output(16,False)
GPIO.output(10,True)
GPIO.output(9,False)
GPIO.output(11,False)
GPIO.output(24,False)
GPIO.output(2,True)
GPIO.output(3,False)
GPIO.output(13,False)
GPIO.output(23,False)
time.sleep(10)
#RT1 blinks
GPIO.output(12,False)
time.sleep(1)
GPIO.output(12,True)
time.sleep(1)
GPIO.output(12,False)
time.sleep(1)
GPIO.output(12,True)
time.sleep(1)
GPIO.output(12,False)
time.sleep(1)
GPIO.output(27,True) #yellow
time.sleep(3)
GPIO.output(27,False)
GPIO.output(17,True)
time.sleep(1) #red
elif count2 > count:
if count2 > count3:
if count2 > count4:
print ("Traffic on road 2 highest")
GPIO.output(11,False)
GPIO.output(10,True)
GPIO.output(18,True)
GPIO.output(16,True)
GPIO.output(14,False)
GPIO.output(24,False)
GPIO.output(17,True)
GPIO.output(27,False)
GPIO.output(22,False)
GPIO.output(12,False)
GPIO.output(15,False)
GPIO.output(9,False)
GPIO.output(11,False)
GPIO.output(24,False)
GPIO.output(2,True)
GPIO.output(3,False)
GPIO.output(13,False)
GPIO.output(23,False)
time.sleep(10)
#RT2 blinks
GPIO.output(16,False)
time.sleep(1)
GPIO.output(16,True)
time.sleep(1)
GPIO.output(16,False)
time.sleep(1)
GPIO.output(16,True)
time.sleep(1)
GPIO.output(16,False)
time.sleep(1)
GPIO.output(15,True)
time.sleep(3)
GPIO.output(15,False) #yellow
time.sleep(1)
GPIO.output(14,True) #red
time.sleep(1)
elif count3 > count:
if count3 > count2:
if count3 > count4:
print ("Traffic on road 3 highest")
GPIO.output(11,False)
GPIO.output(10,False)
GPIO.output(18,False)
GPIO.output(16,False)
GPIO.output(14,True)
GPIO.output(24,False)
GPIO.output(17,True)
GPIO.output(27,False)
GPIO.output(22,False)
GPIO.output(12,False)
GPIO.output(15,False)
GPIO.output(9,False)
GPIO.output(11,True)
GPIO.output(24,True)
GPIO.output(2,True)
GPIO.output(3,False)
GPIO.output(13,False)
GPIO.output(23,False)
time.sleep(10)
#RT3 blinks
GPIO.output(24,False)
time.sleep(1)
GPIO.output(24,True)
time.sleep(1)
GPIO.output(24,False)
time.sleep(1)
GPIO.output(24,True)
time.sleep(1)
GPIO.output(24,False)
time.sleep(1)
GPIO.output(9,True)
time.sleep(3)
GPIO.output(9,False) #yellow
time.sleep(1)
GPIO.output(10,True) #red
time.sleep(1)
elif count4 > count:
if count4 > count2:
if count4 > count3:
print ("Traffic on road 4 highest")
GPIO.output(11,False)
GPIO.output(10,True)
GPIO.output(18,False)
GPIO.output(16,False)
GPIO.output(14,True)
GPIO.output(24,False)
GPIO.output(17,True)
GPIO.output(27,False)
GPIO.output(22,False)
GPIO.output(12,False)
GPIO.output(15,False)
GPIO.output(9,False)
GPIO.output(11,False)
GPIO.output(24,False)
GPIO.output(2,False)
GPIO.output(3,False)
GPIO.output(13,True)
GPIO.output(23,True)
time.sleep(10)
#RT2 blinks
GPIO.output(23,False)
time.sleep(1)
GPIO.output(23,True)
time.sleep(1)
GPIO.output(23,False)
time.sleep(1)
GPIO.output(23,True)
time.sleep(1)
GPIO.output(23,False)
time.sleep(1)
GPIO.output(3,True)
time.sleep(3)
GPIO.output(3,False) #yellow
time.sleep(1)
GPIO.output(2,True) #red
time.sleep(1)
</code></pre>
|
<p>I was getting confused by all the layered if statements. You can use the keyword "and" between them to combine each stack of ifs into one. Still, I didn't see any fault that would cause your problem. Also note that a <code>global</code> statement is only needed when you reassign these variables from inside a <em>function</em>; in a plain top-level <code>while</code> loop it has no effect, so the lines below are optional. Here's some corrected code:</p>
<pre><code>#import statements
count1 = 0
count2 = 0
count3 = 0
count4 = 0
#declare gpio pins
while True:
global count1
global count2
global count3
global count4
#logic
</code></pre>
<p>However, it is advised to not use global variables in your code. So, if you find another solution, use it instead.</p>
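<p>For the "combine the ifs with <code>and</code>" part, a minimal sketch (using the same count variables as in your code, with the light-driving details left out) would be:</p>
<pre><code>if count > count2 and count > count3 and count > count4:
    print("Traffic on road 1 highest")
    # ... drive the road-1 lights here ...
elif count2 > count and count2 > count3 and count2 > count4:
    print("Traffic on road 2 highest")
    # ... drive the road-2 lights here ...
# and similarly for roads 3 and 4
</code></pre>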
|
python|raspberry-pi
| 0 |
837 | 43,291,801 |
extracting chars from string using regex and pythonic way
|
<p>I have a string like this: "32H74312"
I want to extract some parts and put them in different variables. </p>
<pre><code>first_part = 32 # always 2 digits
second_part = H     # always 1 char
third_part = 743    # always 3 digits
fourth_part = 12    # always 2 digits
</code></pre>
<p>Is there a way to do this in a Pythonic way? </p>
|
<p>There's no reason to use a regex for such a simple task.
The <em>pythonic</em> way could be something like:</p>
<pre><code>string = "32H74312"
part1 = string[:2]
part2 = string[2:3]
part3 = string[3:6]
part4 = string[6:]
</code></pre>
|
python|regex
| 2 |
838 | 43,128,229 |
TimeoutException using selenium with python
|
<p>I'm getting a TimeoutException when using this code to fill in the CardNum textbox with a number:</p>
<pre><code>CardNUM = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.XPATH, '//*[@id="number"]')))
CardNUM.send_keys(cardNum)
</code></pre>
<p>The XPath is taken directly from right-clicking the textbox, inspecting it, and copying the XPath for this block:</p>
<pre><code><input autocomplete="cc-number" id="number" name="number" type="tel" aria-describedby="error-for-number" data-current-field="number" class="input-placeholder-color--lvl-30" placeholder="Card number" style="color: rgb(151, 151, 151); font-family: &quot;Helvetica Neue&quot;; padding: 0.94em 0.8em; transition: padding 0.2s ease-out;">
</code></pre>
<p>Is there something else I need to do to be able to fill in the box, for example is the text box hidden and is there some manipulation that I would need to do beforehand to be able to find the text box?</p>
|
<p>Most likely the element is inside an IFRAME, especially since it seems to be a credit card number. The payment portion of payment pages is typically in an IFRAME for security. Try switching to the IFRAME first, and then your code should work.</p>
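<p>A sketch of how that might look (the iframe locator below is hypothetical; inspect the page to find the real one):</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# switch into the payment iframe first
WebDriverWait(browser, 10).until(
    EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR, "iframe[src*='payment']")))

CardNUM = WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.XPATH, '//*[@id="number"]')))
CardNUM.send_keys(cardNum)

# switch back to the main document when done
browser.switch_to.default_content()
</code></pre>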
|
python|selenium|xpath
| 0 |
839 | 48,704,369 |
How to apply linear transform on a 3D feature vector in Tensorflow?
|
<p>Imagine there is a tensor with the following dimensions <code>(32, 20, 3)</code> where <strong>batch_size</strong> = 32, <strong>num_steps</strong> = 20 and <strong>features</strong> = 3. The features are taken from a .csv file that has the following format:</p>
<pre><code>feat1, feat2, feat3
200, 100, 0
5.5, 200, 0.5
23.2, 1, 9.3
</code></pre>
<p>Each row is transformed into 3-dim vector (numpy array): <code>[200, 100, 0]</code>, <code>[5.5, 200, 0.5]</code>, <code>[23.2, 1, 9.3]</code>.</p>
<p>We want to use these features in a recurrent neural network but directly feeding them into rnn won't do, we'd like to process these feature vectors first by applying linear transformation to each 3-dim vector inside the batch sample and reshape the input tensor into <code>(32, 20, 100)</code>. </p>
<p>This is easily done in Torch for example via: <code>nn.MapTable():add(nn.Linear(3, 100))</code> which is applied on the input batch tensor of size <code>20 x 32 x 3</code> (num_steps and batch_size are switched in Torch). We split it into 20 arrays each <code>32x3</code> in size </p>
<pre><code> 1 : DoubleTensor - size: 32x3
2 : DoubleTensor - size: 32x3
3 : DoubleTensor - size: 32x3
...
</code></pre>
<p>and use <code>nn.Linear(3, 100)</code> to transform them into <code>32x100</code> vectors. We then pack them up back into <code>20 x 32 x 100</code> tensor. How can we implement the same operation in Tensorflow?</p>
|
<p>You could reshape into [batchsize*num_steps, features], apply a TensorFlow linear (dense) layer with 100 outputs, and then reshape back:</p>
<pre><code>reshaped_tensor = tf.reshape(your_input, [batchsize*num_steps, features])
linear_out = tf.layers.dense(inputs=reshaped_tensor, units=100)
reshaped_back = tf.reshape(linear_out, [batchsize, num_steps, 100])  # last dim is now 100, not features
</code></pre>
|
python|tensorflow
| 2 |
840 | 51,493,185 |
matching similar elements in between two lists
|
<p>I'm new to python so apologies if it's a silly question.</p>
<p>I have two lists<br>
<code>L1=['marvel','audi','mercedez','honda']</code> and </p>
<p><code>L2=['marvel comics','bmw','mercedez benz','audi']</code>.</p>
<p>I want to extract the elements of <code>list L2</code> that match elements of <code>list L1</code>. This is what I did:</p>
<pre><code>for i in L1:
for j in L2:
if j in i:
print (j)
output is ['audi']
</code></pre>
<p>But I also want to return elements if they contain any matching word, like <code>mercedez</code> in <code>mercedez benz</code> and <code>marvel</code> in <code>marvel comics</code>, so the final output would be:</p>
<pre><code>j=['audi','mercedez benz','marvel comics']
</code></pre>
|
<p>I think what you really want here is the elements of <code>L2</code> that contain any elements in <code>L1</code>. So simply replace <code>if j in i</code> with <code>if i in j</code>:</p>
<pre><code>for i in L1:
for j in L2:
if i in j:
print (j)
</code></pre>
<p>This outputs:</p>
<pre><code>marvel comics
audi
mercedez benz
</code></pre>
|
python|arrays|python-3.x|pandas|keyword-search
| 4 |
841 | 65,887,235 |
Generate a list from list of dicts if value not exists in another list
|
<p>I'm trying to filter a list of dicts using the elements of a list.</p>
<pre><code>a=[{"item_id": "ITEM2090", "seller_id":1009954},
{"item_id": "ITEM2050", "seller_id":1009920},
{"item_id": "ITEM2032", "seller_id":1009960},
{"item_id": "ITEM2080", "seller_id":1009954}]
b=["ITEM2032","ITEM2060","ITEM2070","ITEM2090"]
</code></pre>
<p>Expected result (the two dicts from list a whose <code>item_id</code> values do not exist in list b):</p>
<pre><code>c=[{"item_id": "ITEM2050", "seller_id":1009920},
{"item_id": "ITEM2080", "seller_id":1009954}]
</code></pre>
<p>I've tried:</p>
<pre><code>c=[x["item_id"] for x in a if x["item_id"] not in b]
</code></pre>
<p>My problem is that it returns a list of the <code>item_id</code> values, not a list of dicts as I would like.</p>
|
<pre><code>c = [item for item in a if item["item_id"] not in b]
</code></pre>
<p>It will be faster to turn "b" into a set if there is a large number of items.</p>
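<p>A minimal sketch of that set-based variant (same names as above):</p>
<pre><code>b_set = set(b)                   # O(1) membership tests instead of scanning the list
c = [item for item in a if item["item_id"] not in b_set]
</code></pre>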
|
python|dictionary
| 2 |
842 | 65,615,634 |
Trying to predict with a loaded classification model with .h5 on Tensorflow, returning IndexError: list index out of range
|
<p>I created a classification model with both saved_model format and .h5 format. I am trying to load the model so I can deploy it with</p>
<p><code>new_model = tf.keras.models.load_model('my_model.h5')</code></p>
<p>Then I predict</p>
<pre><code>print(new_model.predict('/content/images/image.jpg'))
</code></pre>
<p>Then it returns</p>
<pre><code>IndexError                                Traceback (most recent call last)
<ipython-input-26-749bd8c0774b> in <module>()
      1 new_model = tf.keras.models.load_model('my_model.h5')
----> 2 print(new_model.predict('/content/images/image.jpg'))
5 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_shape.py in __getitem__(self, key)
    887       else:
    888         if self._v2_behavior:
--> 889           return self._dims[key].value
    890         else:
    891           return self._dims[key]
IndexError: list index out of range
</code></pre>
<p>I've tried other similar solutions but they don't work. Do I need to retrain the model? What do I do so I can predict on one image in a clean environment?</p>
|
<p>For <code>model.predict</code> to produce proper predictions, the input must be of the same nature as the inputs the model was trained on. For example, in training you read in an image from the training set, then typically rescale the pixel values (usually to the range 0 to +1, or in some cases -1 to +1), and resize the images so all training images are the same size. When you want to predict on an image you should follow the same process: read in the image, rescale it and resize it as you did for the training images.</p>
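<p>A minimal sketch of that preprocessing for a single image (this assumes the model was trained on 224x224 RGB images rescaled to the range 0-1; adjust the size and scaling to whatever your training pipeline actually used):</p>
<pre><code>import numpy as np
import tensorflow as tf

new_model = tf.keras.models.load_model('my_model.h5')

# Load and preprocess the image the same way the training images were processed
img = tf.keras.preprocessing.image.load_img('/content/images/image.jpg',
                                            target_size=(224, 224))   # training input size (assumed)
x = tf.keras.preprocessing.image.img_to_array(img) / 255.0            # same rescaling as training (assumed)
x = np.expand_dims(x, axis=0)                                         # add the batch dimension

print(new_model.predict(x))
</code></pre>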
|
python|tensorflow|deployment
| 0 |
843 | 72,336,739 |
Setting the minimum value of a pandas column using clip
|
<p>I want to set the <code>minimum</code> value of a column of <code>pandas</code> dataframe using <code>clip</code> method. Below is my code</p>
<pre><code>import pandas as pd
data = pd.DataFrame({'date' : pd.to_datetime(['2010-12-31', '2012-12-31', '2012-12-31']), 'val' : [1,2, 5]})
data.clip(lower=pd.Series({'val': 4}), axis=1)
</code></pre>
<p>The above code is giving an error. Could you please help me eliminate it?</p>
|
<p>You can try <code>Series.clip</code> or set the <code>date</code> column as index then <code>DataFrame.clip</code>.</p>
<pre class="lang-py prettyprint-override"><code>data['val'] = data['val'].clip(4)
# or
data = (data.set_index('date')
.clip(4)
.reset_index())
</code></pre>
<pre><code>print(data)
date val
0 2010-12-31 4
1 2012-12-31 4
2 2012-12-31 5
</code></pre>
|
python-3.x|pandas
| 1 |
844 | 37,015,648 |
python plot large dimension data
|
<p>I have a 1800*100000000 matrix, and I want to plot it in python using code below:</p>
<pre><code>import matplotlib.pyplot as plt
plt.spy(m)
plt.show()
</code></pre>
<p>The result is disappointing: it looks like a line because the number of rows is tiny compared to the number of columns.</p>
<p><a href="https://i.stack.imgur.com/4Iobm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4Iobm.png" alt="enter image description here"></a></p>
<p>How can I do it correctly?</p>
|
<p><a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.spy" rel="nofollow noreferrer"><code>spy()</code> accepts a number of keyword arguments, <code>aspect</code> in particular is interesting...</a></p>
<pre><code>In [1]: import numpy as np
In [2]: import matplotlib.pyplot as plt
In [3]: a = np.random.random((25,250))>0.6
In [4]: %matplotlib
Using matplotlib backend: Qt4Agg
In [5]: plt.spy(a)
Out[5]: <matplotlib.image.AxesImage at 0x7f9ad1a790b8>
</code></pre>
<p><a href="https://i.stack.imgur.com/BGWY9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BGWY9.png" alt="standard"></a></p>
<pre><code>In [6]: plt.spy(a, aspect='auto')
Out[6]: <matplotlib.image.AxesImage at 0x7f9ad1139d30>
</code></pre>
<p><a href="https://i.stack.imgur.com/J7YnH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J7YnH.png" alt="enter image description here"></a></p>
|
python|matplotlib
| 1 |
845 | 48,698,062 |
Set schema in pyspark dataframe read.csv with null elements
|
<p>I have a data set (example) that when imported with </p>
<pre><code>df = spark.read.csv(filename, header=True, inferSchema=True)
df.show()
</code></pre>
<p>will assign the column with 'NA' as a stringType(), where I would like it to be IntegerType() (or ByteType()).</p>
<p><a href="https://i.stack.imgur.com/XggfZ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XggfZ.png" alt="inferSchema"></a></p>
<p>I then tried to set </p>
<pre><code>schema = StructType([
StructField("col_01", IntegerType()),
StructField("col_02", DateType()),
StructField("col_03", IntegerType())
])
df = spark.read.csv(filename, header=True, schema=schema)
df.show()
</code></pre>
<p>The output shows the entire row with <em>'col_03' = null</em> to be null.</p>
<p><a href="https://i.stack.imgur.com/59B8w.png" rel="noreferrer"><img src="https://i.stack.imgur.com/59B8w.png" alt="entire_row_null"></a></p>
<p>However <em>col_01</em> and <em>col_02</em> return appropriate data if they are called with</p>
<pre><code>df.select(['col_01','col_02']).show()
</code></pre>
<p><a href="https://i.stack.imgur.com/FTOGn.png" rel="noreferrer"><img src="https://i.stack.imgur.com/FTOGn.png" alt="row_actually_not_null"></a></p>
<p>I can find a way around this by post casting the data type of <em>col_3</em></p>
<pre><code>df = spark.read.csv(filename, header=True, inferSchema=True)
df = df.withColumn('col_3',df['col_3'].cast(IntegerType()))
df.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/7xyWd.png" rel="noreferrer"><img src="https://i.stack.imgur.com/7xyWd.png" alt="import_then_cast"></a></p>
<p>, but I think it is not ideal, and it would be much better if I could assign the data type for each column directly by setting the schema.</p>
<p>Would anyone be able to point out what I am doing incorrectly? Or is casting the data types after importing the only solution? Any comment regarding the performance of the two approaches (if we can make assigning the schema work) is also welcome.</p>
<p>Thank you,</p>
|
<p>You can set a new null value in spark's csv loader using <code>nullValue</code>:</p>
<p>for a csv file looking like this:</p>
<pre class="lang-py prettyprint-override"><code>col_01,col_02,col_03
111,2007-11-18,3
112,2002-12-03,4
113,2007-02-14,5
114,2003-04-16,NA
115,2011-08-24,2
116,2003-05-03,3
117,2001-06-11,4
118,2004-05-06,NA
119,2012-03-25,5
120,2006-10-13,4
</code></pre>
<p>and forcing schema:</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql.types import StructType, IntegerType, DateType
schema = StructType([
StructField("col_01", IntegerType()),
StructField("col_02", DateType()),
StructField("col_03", IntegerType())
])
</code></pre>
<p>You'll get:</p>
<pre class="lang-py prettyprint-override"><code>df = spark.read.csv(filename, header=True, nullValue='NA', schema=schema)
df.show()
df.printSchema()
+------+----------+------+
|col_01| col_02|col_03|
+------+----------+------+
| 111|2007-11-18| 3|
| 112|2002-12-03| 4|
| 113|2007-02-14| 5|
| 114|2003-04-16| null|
| 115|2011-08-24| 2|
| 116|2003-05-03| 3|
| 117|2001-06-11| 4|
| 118|2004-05-06| null|
| 119|2012-03-25| 5|
| 120|2006-10-13| 4|
+------+----------+------+
root
|-- col_01: integer (nullable = true)
|-- col_02: date (nullable = true)
|-- col_03: integer (nullable = true)
</code></pre>
|
python-3.x|pyspark|spark-dataframe|pyspark-sql
| 9 |
846 | 20,338,539 |
ValueError: could not convert string to float in simple code
|
<pre><code> # -*- coding: cp1250 -*-
print ('euklides alpha 1.0')
a = raw_input('podaj liczbę A : ')
b = raw_input('podaj liczbę B : ')
a = float('a')
b = float('b')
if 'a'=='b':
print 'a'
elif 'a' > 'b':
while 'a' > 'b':
print('a'-'b')
if 'a'=='b': break
if 'a' > 'b': continue
elif 'b' > 'a':
while 'b' > 'a':
print('b'-'a')
if 'b'=='a': break
if 'b' > 'a': continue
</code></pre>
<p>So, this is code I made a few hours ago. Now I get a <code>ValueError: could not convert string to float: a</code>, and I have no idea why. Can you explain it to me? I'm a beginner.</p>
|
<p>The float function can take a string, but it must contain a possibly signed decimal or floating point number. You want to make the variable <code>a</code> a float, not the string <code>'a'</code>. </p>
<p>You don't need all the <code>'</code> around your variable names. When you put quotes around them, like <code>'b'</code>, you are making them a string. </p>
<p>On another note, once you reach one of those <code>while</code> statements there's nothing that will get you out of there.</p>
<pre><code>a = float(a)
if a == b: # you need to get rid of all the ' unless you are talking about chars
# -*- coding: cp1250 -*-
print ('euklides alpha 1.0')
a = raw_input('podaj liczbę A : ')
b = raw_input('podaj liczbę B : ')
a = float(a)
b = float(b)
if a==b:
print a
elif a > b:
while a > b: # nothing will get you out of the while loop
print(a-b)
if a == b:
break
if a > b: # no need for this if, the while loop will do that check for you
continue
elif b > a:
while b > a: # nothing will get you out of the while loop
print(b-a)
if b==a:
break
if b > a: # no need for this if, the while loop will do that check for you
continue
</code></pre>
|
python|string|python-2.7|floating-point
| 2 |
847 | 48,024,098 |
Can the print function be used reliably in GCE apps?
|
<p>I have a GCE app consisting of a single Python script that has some long-running functions (most of these are querying databases and sending the results somewhere). It seems that when the script hangs on one of these longer-running tasks, nothing is printed to Stackdriver Logging, <strong>even <code>print()</code> statements that come before the place the script is hanging</strong>. This seems like a bug in Compute Engine or Stackdriver and makes debugging scripts very difficult (e.g. I can't see where the last successful <code>print</code> statement occurred).</p>
<p>I'd prefer this bug to just be fixed instead of having to add the <code>logging</code> module as it seems there's a good amount of overhead to set that up.</p>
|
<p>Per <a href="https://unix.stackexchange.com/a/182541">this answer</a> from <a href="https://unix.stackexchange.com">unix.stackexchange.com</a>, when a process's output is redirected to something other than a terminal, the output may be temporarily stored in a buffer by the operating system. Buffering output increases efficiency by reducing the number of system calls and IO operations. </p>
<p>Buffered output can be flushed manually from within a python script or application.</p>
<ul>
<li>In python3, set the <code>flush</code> flag on the <code>print</code> function.
<ul>
<li><code>print('foo', flush=True)</code></li>
</ul></li>
<li>In python2, flush <code>sys.stdout</code> after printing.
<ul>
<li><code>print 'foo'; sys.stdout.flush()</code></li>
</ul></li>
</ul>
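<p>A small Python 3 sketch of how this looks in a long-running script (the sleep just stands in for the slow database work):</p>
<pre><code>import time

for i in range(10):
    print('starting step %d' % i, flush=True)  # flushed immediately, so it appears in Stackdriver right away
    time.sleep(60)                             # placeholder for the long-running query
</code></pre>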
|
python|google-compute-engine|stackdriver
| 1 |
848 | 48,112,036 |
SQLAlchemy: session.query.one() and session.add() in one transaction
|
<p>I want to add a row with <code>VALUE1</code> to table <code>TABLE1</code> only if <code>TABLE2</code> has a row with <code>VALUE2</code>.</p>
<p>I can do something like that:</p>
<pre><code>session.query(TABLE2)
.filter(TABLE2.FIELD2 == VALUE2)
.update({TABLE2.FIELD2: VALUE2}) # without change. only for check
session.add(TABLE1(FIELD1=VALUE1))
session.commit()
</code></pre>
<p>But I think it is strange that I use <code>update</code> without any update.</p>
<p>I want to use <code>one</code> instead of <code>update</code> but it doesn't support transactions.</p>
<p>UPDATED: this solution is wrong...</p>
<p>This simple solution is wrong also:</p>
<pre><code>my_flag = session.query(TABLE2).filter(TABLE2.FIELD2 == VALUE2).first()
# database can be updated here!
if my_flag:
session.add(TABLE1(FIELD1=VALUE1))
session.commit()
</code></pre>
|
<p>Provided you have a unique key on <code>TABLE1.VALUE1</code>, you could first query
<code>TABLE2</code> and try to insert to <code>TABLE1</code>. In case the <code>VALUE1</code> already exists in <code>TABLE1</code>, the error will be thrown and you will be able to rollback the transaction. </p>
<pre><code>from sqlalchemy.sql import exists
value_exists = session.query(exists().where(TABLE2.KEY2 == VALUE2)).scalar()
if not value_exists:
return
try:
dbsession.add(...)
dbsession.commit()
except IntegrityError as e:
dbsession.rollback()
raise e
</code></pre>
|
python|sql|sqlalchemy
| 0 |
849 | 51,391,271 |
Using .txt file as a Dictionary
|
<p>I have a <strong>.txt</strong> file formatted like a dictionary, for example:</p>
<p><code>{'planet': 'earth', "country": "uk"}</code></p>
<p>Just that, that's all. I would want to add more to this later. At the moment, I can save more keys to it and have it saved but...</p>
<p>How can I import this <strong>.txt</strong> file and use it as a dictionary?</p>
|
<p>You can use <a href="https://docs.python.org/2/library/ast.html#ast.literal_eval" rel="nofollow noreferrer"><code>ast.literal_eval</code></a></p>
<pre><code>import ast
with open('myfile.txt') as f:
mydict = ast.literal_eval(f.read())
</code></pre>
<p>Some <a href="https://stackoverflow.com/questions/15197673/using-pythons-eval-vs-ast-literal-eval">extra reading</a> on using <code>eval</code> vs <code>ast.literal_eval</code>.</p>
|
python|file|dictionary
| 1 |
850 | 55,973,952 |
Error during backward migration of DeleteModel in Django
|
<p>I have two models with one-to-one relationship in Django 1.11 with PostgreSQL. These two models are defined in <code>models.py</code> as follows:</p>
<pre class="lang-py prettyprint-override"><code>class Book(models.Model):
info = JSONField(default={})
class Author(models.Model):
book = models.OneToOneField(Book, on_delete=models.CASCADE)
</code></pre>
<p>The auto-created migration file regarding these models is like:</p>
<pre class="lang-py prettyprint-override"><code>class Migration(migrations.Migration):
dependencies = [
('manager', '0018_some_migration_dependency'),
]
operations = [
migrations.CreateModel(
name='Book',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('info', JSONField(default={})),
],
),
migrations.AddField(
model_name='author',
name='book',
field=models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to='manager.Book'),
),
]
</code></pre>
<p>These implementations have worked successfully. In addition to this migration, we also had some other additional migrations related to other tasks of our project.</p>
<p>Due to our design changes we made today, we decided to move all of the Book info data into our cloud storage. In order to do that, I have implemented a custom migration code as follows:</p>
<pre class="lang-py prettyprint-override"><code>def push_info_to_cloud(apps, schema_editor):
Author = apps.get_model('manager', 'Author')
for author in Author.objects.all():
if author.book.info is not None and author.book.info != "":
# push author.book.info to cloud storage
author.book.info = {}
author.book.save()
def pull_info_from_cloud(apps, schema_editor):
Author = apps.get_model('manager', 'Author')
Book = apps.get_model('manager', 'Book')
for author in Author.objects.all():
# pull author.book.info back from cloud storage
book = Book.objects.create(info=info)
author.book = book
author.save()
class Migration(migrations.Migration):
dependencies = [
('manager', '0024_some_migration_dependency'),
]
operations = [
migrations.RunPython(push_info_to_cloud, pull_info_from_cloud)
]
</code></pre>
<p>As the code shows, this migration pushes each non-null book info to our cloud storage and replaces it with an empty dict in the database. I have tested this migration back and forth and made sure that both the forward and backward migrations work successfully.</p>
<p>Then, to get rid of the redundant <code>Book</code> table and <code>book</code> column in <code>Author</code> table, I deleted the <code>Book</code> model and the <code>OneToOneField</code> book field in the <code>Author</code> model and run <code>manage.py makemigrations</code>, which resulted in the following auto-generated migration code:</p>
<pre class="lang-py prettyprint-override"><code>class Migration(migrations.Migration):
dependencies = [
('manager', '0025_some_migration_dependency'),
]
operations = [
migrations.RemoveField(
model_name='user',
name='book',
),
migrations.DeleteModel(
name='Book',
),
]
</code></pre>
<p>Running <code>manage.py migrate</code> did work. In the end, the <code>Book</code> table and the <code>book</code> column of the <code>Author</code> table were deleted.</p>
<p>Now, the problem is: when I want to migrate back to <code>0024_some_migration_dependency</code>, I get the following error during the execution of the latest migration file:</p>
<pre><code> Unapplying manager.0026_auto_20190503_1702...Traceback (most recent call last):
File "/home/cagrias/Workspace/Project/backend/venv/lib/python3.6/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.IntegrityError: column "book_id" contains null values
</code></pre>
<p>I have seen this <a href="https://stackoverflow.com/a/37244199/4665915">answer</a>. To try it, I manually re-created the <code>Book</code> model and the OneToOneField <code>book</code> field of the <code>Author</code> model, this time using the <code>blank=True, null=True</code> parameters. But after I applied the migrations above successfully, I got the same exception when migrating backwards.</p>
<p>What might be the problem?</p>
|
<p>I managed to solve the problem by changing the order of the migrations. </p>
<p>As I mentioned in my question, I applied <a href="https://stackoverflow.com/a/37244199/4665915">this answer</a> by adding the <code>blank=True, null=True</code> parameters to both the <code>info</code> and <code>book</code> fields. But its related migration file was created after the migration file that moves our book info to the cloud storage. When I changed the order of these two migration files, the problem was solved.</p>
|
python|django|postgresql|psycopg2|django-migrations
| 0 |
851 | 71,853,039 |
How do I convert Python scripts files to images files representing the code with highlighting?
|
<p>In short, how do I get this:</p>
<p><a href="https://i.stack.imgur.com/JBBws.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JBBws.jpg" alt="enter image description here" /></a></p>
<p>From this:</p>
<pre class="lang-py prettyprint-override"><code>def fiblike(ls, n):
store = []
for i in range(n):
a = ls.pop(0)
ls.append(sum(ls)+a)
store.append(a)
return store
</code></pre>
<p>With all the indentation guide and code highlighting.</p>
<p>I have written hundreds of Python scripts and I need to convert all of them to images...</p>
<p>I have seen this:</p>
<pre class="lang-py prettyprint-override"><code>import Image
import ImageDraw
import ImageFont
def getSize(txt, font):
testImg = Image.new('RGB', (1, 1))
testDraw = ImageDraw.Draw(testImg)
return testDraw.textsize(txt, font)
if __name__ == '__main__':
fontname = "Arial.ttf"
fontsize = 11
text = "[email protected]"
colorText = "black"
colorOutline = "red"
colorBackground = "white"
font = ImageFont.truetype(fontname, fontsize)
width, height = getSize(text, font)
img = Image.new('RGB', (width+4, height+4), colorBackground)
d = ImageDraw.Draw(img)
d.text((2, height/2), text, fill=colorText, font=font)
d.rectangle((0, 0, width+3, height+3), outline=colorOutline)
img.save("D:/image.png")
</code></pre>
<p>from <a href="https://www.codegrepper.com/code-examples/python/how+to+convert+text+file+to+image+in+python" rel="nofollow noreferrer">here</a></p>
<p>But it does not do code highlighting and I want either a <code>numpy</code> or <code>cv2</code> based solution.</p>
<p>How can I do it?</p>
|
<p><a href="https://marketplace.visualstudio.com/items?itemName=adpyke.codesnap" rel="nofollow noreferrer">CodeSnap</a> is a very nice tool to do just that for VSCode.</p>
|
python|python-3.x
| 0 |
852 | 71,368,471 |
in discord.py How can i make my bot that the commands only can be use in specific channel or specific server?
|
<p>So, anyone can invite my personal bot to their server. I want the commands to work only in a specific channel or a specific server, using @bot.event, not client.</p>
|
<p>if you use <code>await bot.process_commands(message)</code> you can try this</p>
<pre><code>@bot.event
async def on_message(message):
    if message.channel.id == yourchannelid:
        await bot.process_commands(message)
    elif message.guild.id == yourguildid:
        await bot.process_commands(message)
</code></pre>
<p>you can add checks in on_message so the bot doesn't reply to itself</p>
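<p>for example, a minimal sketch of that self-reply check:</p>
<pre><code>@bot.event
async def on_message(message):
    # ignore the bot's own messages so it never triggers itself
    if message.author == bot.user:
        return
    await bot.process_commands(message)
</code></pre>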
|
javascript|python|discord
| 1 |
853 | 63,358,767 |
How to filter rows and words in lower case in pandas dataframe?
|
<p>Hi, I would like to know how to select rows which contain lowercase words in the following dataframe:</p>
<pre><code>ID Name Note
1 Fin there IS A dog outside
2 Mik NOTHING TO DECLARE
3 Lau no house
</code></pre>
<p>What I would like to do is to filter rows where Note column contains at least one word in lower case:</p>
<pre><code>ID Name Note
1 Fin there IS A dog outside
3 Lau no house
</code></pre>
<p>and collect in a list all the words in lower case: <code>my_list=['there','dog','outside','no','house']</code></p>
<p>What I have tried in order to filter rows is:</p>
<pre><code>df1=df['Note'].str.lower()
</code></pre>
<p>For appending words in the list, I think I should first tokenise the string, then select all the terms in lower case. Am I right?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>Series.str.contains</code></a> for filter at least one lowercase character in <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p>
<pre><code>df1 = df[df['Note'].str.contains(r'[a-z]')]
print (df1)
ID Name Note
0 1 Fin there IS A dog outside
2 3 Lau no house
</code></pre>
<p>And then <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.extractall.html" rel="nofollow noreferrer"><code>Series.str.extractall</code></a> for extract lowercase words:</p>
<pre><code>my_list = df1['Note'].str.extractall(r'(\b[a-z]+\b)')[0].tolist()
print (my_list)
['there', 'dog', 'outside', 'no', 'house']
</code></pre>
<p>Or use list comprehension with split sentences and filter by <code>islower</code>:</p>
<pre><code>my_list = [y for x in df1['Note'] for y in x.split() if y.islower()]
print (my_list)
['there', 'dog', 'outside', 'no', 'house']
</code></pre>
|
python|pandas
| 2 |
854 | 69,285,376 |
Problem importing matrix in Python from Excel and maybe some problems with if elif statments
|
<p>I'm trying to run this code and have some problems to solve. At first I'm trying to insert "BOD" as the name of the output and "6" as the number of input parameters.</p>
<pre class="lang-py prettyprint-override"><code> import os
import numpy as np
import pandas as pd
from pandas import ExcelWriter
from numpy import *
OutputName = input('please enter the name of the output (BOD,COD,TSS)');
InputNum = input('please enter the number of input parameters (6 or 12) = ');
file_name = 'biowin_withMalfunction.xlsx'
if OutputName == 'BOD':
Output_num=1
if InputNum == 6:
Data = pd.read_excel(open(r'C:\Users\Elisa\test_conv_Fatone\biowin_withMalfunction.xlsx', 'rb'), sheet_name='ANN full data for BOD_6Params')
print (Data)
elif InputNum ==12:
Data = pd.read_excel(open(r'C:\Users\Elisa\test_conv_Fatone\biowin_withMalfunction.xlsx', 'rb'), sheet_name='ANN full data for BOD')
elif OutputName == 'COD':
Output_num=2
if InputNum == 6:
Data = pd.read_excel(open(r'C:\Users\Elisa\test_conv_Fatone\biowin_withMalfunction.xlsx', 'rb'), sheet_name='ANN full data for COD_6ParamsD')
elif InputNum ==12:
Data = pd.read_excel(open(r'C:\Users\Elisa\test_conv_Fatone\biowin_withMalfunction.xlsx', 'rb'), sheet_name='ANN full data for COD')
else:
Output_num=3
if InputNum == 6:
Data = pd.read_excel(open(r'C:\Users\Elisa\test_conv_Fatone\biowin_withMalfunction.xlsx', 'rb'), sheet_name='ANN full data for TSS_6Params')
elif InputNum ==12:
Data = pd.read_excel(file_name, sheet_name="ANN full data for TSS")
index = Output_num -3;
X = Data[0:end-2,0:end]
</code></pre>
<p>the error is:</p>
<pre class="lang-py prettyprint-override"><code> Traceback (most recent call last):
File "C:\Users\Elisa\test_conv\ANN_Converted.py", line 42, in <module>
X = Data[0:end-2,0:end]
NameError: name 'Data' is not defined
</code></pre>
<p>It seems that the variable Data is not created with pd.reading, in fact, if I try print(Data) it does not exist. Can anybody help me finding the problem/problems?
Can I share the input excel file? How?</p>
|
<p>Think about your conditions. What happens if every individual test is <code>False</code>? What happens if <em>all</em> your tests are <code>False</code>?</p>
<p>There is a path through your decision tree in which <em>no file is opened</em>. This is what is currently happening, so <code>Data</code> doesn't exist, as you determined.</p>
<p>In this case the problem is likely that <code>input()</code> returns a <em>string</em>, whereas you are testing for an <em>integer</em>.</p>
<p>Thus either test for strings:</p>
<pre class="lang-py prettyprint-override"><code>if inputNum == "5"
</code></pre>
<p>or cast inputNum to an int:</p>
<pre class="lang-py prettyprint-override"><code>inputNum = int(inputNum)
</code></pre>
<p><em>before</em> you do any testing.</p>
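<p>For example, a small sketch with the cast applied right after reading the input (the prompts mirror the ones in your script):</p>
<pre class="lang-py prettyprint-override"><code>OutputName = input('please enter the name of the output (BOD,COD,TSS) = ')
InputNum = int(input('please enter the number of input parameters (6 or 12) = '))

if InputNum == 6:
    print('reading the 6-parameter sheet')
elif InputNum == 12:
    print('reading the 12-parameter sheet')
else:
    print('unexpected value:', InputNum)
</code></pre>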
|
python|excel|pandas|if-statement
| 1 |
855 | 68,435,475 |
My scrollbar is not working with mouse's scroller
|
<p>I have reused the code.<br />
I am trying to scroll this frame and the scrollbar is working, but
I want it to be scrolled using the mouse wheel.
What should I do?
I want it to be scrolled vertically only.</p>
<pre><code>from tkinter import *
root = Tk()
root['bg'] = 'wheat'
frame_container=Frame(root, width = 1000)
frame_container['bg'] = 'wheat'
canvas_container=Canvas(frame_container, width = 1000)
canvas_container['bg'] = 'wheat'
frame2=Frame(canvas_container, width = 1000)
frame2['bg'] = 'wheat'
scrollbar_tk = Scrollbar(frame_container, orient="vertical",
                         command=canvas_container.yview)
# yscrollcommand=scrollbar_tk.set is configured later;
# the scrollbar will be visible if frame2 is too big for the canvas
canvas_container.create_window((0,0),window=frame2,anchor='nw')
naan = IntVar()
roti=IntVar()
dal=IntVar()
manchurian = IntVar()
makhani=IntVar()
masala_bhindi = IntVar()
chole = IntVar()
rajma = IntVar()
shahi_panneer = IntVar()
kadahi_paneer = IntVar()
masala_gobhi = IntVar()
allo_gobhi = IntVar()
matar_paneer = IntVar()
menu_roti = "Tava Roti 25 ₹/piece"
menu_dal = "Dal 80 ₹/bowl"
menu_makhani = "Dal Makhni 110 ₹/bowl"
menu_naan = "Naan 50 ₹/piece"
menu_manchurian = "Manchurian 110 ₹/plate"
menu_shahi_panneer = "Shahi paneer 110₹/bowl"
menu_kadahi_paneer = "Kadhai paneer 150/bowl"
menu_masala_gobhi = "Masala gobhi 130₹/bowl"
menu_allo_gobhi = "Aloo gobhi 120₹/bowl"
menu_matar_paneer = "Matar paneer 135₹/bowl"
menu_masala_bhindi = "Masala bhindi 110₹/bowl"
menu_chole = "Chole 100₹/bowl"
menu_rajma = "Rajama 150₹/bowl"
menu_chaap = "Chaap 125₹/bowl"
menu_aloo_parntha = "Aloo parantha 35₹/peice"
menu_cheele = "Cheele 55₹/peice "
listItems = [menu_roti,menu_dal,menu_makhani, menu_naan,
menu_manchurian, menu_shahi_panneer,
menu_kadahi_paneer, menu_masala_gobhi,
menu_allo_gobhi, menu_matar_paneer, menu_masala_bhindi,
menu_chole, menu_rajma, menu_chaap, menu_aloo_parntha,
menu_cheele]
Title = Label(frame2, text = " Food Items
Prices Quantities", fg = 'red', bg = 'wheat', font=
("arial", 30))
Title.grid()
for item in listItems:
label = Label(frame2,text=item, fg = 'yellow', bg =
'wheat', font=("arial", 30))
label.grid(column=0, row=listItems.index(item)+1)
q_roti = Entry(frame2, font=("arial",20), textvariable = roti,
fg="Black", width=10)
q_roti.grid(column = 1, row = 1)
q_dal = Entry(frame2, font=("arial",20), textvariable = dal,
fg="black", width=10)
q_dal.grid(column = 1, row = 2)
q_makhani = Entry(frame2, font=("arial",20), textvariable =
makhani, fg="black", width=10)
q_makhani.grid(column = 1, row = 3)
q_naan = Entry(frame2, font=("arial",20), textvariable = naan,
fg="black", width=10)
q_naan.grid(column = 1, row = 4)
q_manchurian = Entry(frame2,font=("arial",20), textvariable =
manchurian, fg="black", width=10)
q_manchurian.grid(column = 1, row = 5)
q_shahi_panneer = Entry(frame2, font=("arial",20), textvariable
= shahi_panneer, fg="black", width=10)
q_shahi_panneer.grid(column = 1, row = 6)
q_kadahi_panneer = Entry(frame2, font=("arial",20),
textvariable = kadahi_paneer, fg="black", width=10)
q_kadahi_panneer.grid(column = 1, row = 7)
q_masala_gobhi = Entry(frame2, font=("arial",20), textvariable
= masala_gobhi, fg="black", width=10)
q_masala_gobhi.grid(column = 1, row = 8)
q_allo_gobhi = Entry(frame2, font=("arial",20), textvariable =
allo_gobhi, fg="black", width=10)
q_allo_gobhi.grid(column = 1, row = 9)
q_matar_panneer = Entry(frame2, font=("arial",20), textvariable
= matar_paneer, fg="black", width=10)
q_matar_panneer.grid(column = 1, row = 10)
q_masala_bhindi = Entry(frame2, font=("arial",20), textvariable
= masala_bhindi, fg="black", width=10)
q_masala_bhindi.grid(column = 1, row = 11)
q_cholle = Entry(frame2,font=("arial",20), textvariable =
chole, fg="black", width=10)
q_cholle.grid(column = 1, row = 12)
q_rajma = Entry(frame2,font=("arial",20), textvariable = rajma,
fg="black", width=10)
q_rajma.grid(column = 1, row = 13)
frame2.update()  # update frame2 height so it's no longer 0 (height is 0 when it has just been created)
canvas_container.configure(yscrollcommand=scrollbar_tk.set,
                           scrollregion="0 0 0 %s" % frame2.winfo_height())
# the scrollregion must be the size of the frame inside it,
# in this case "x=0 y=0 width=0 height=frame2height"
# width 0 because we only scroll vertically so don't mind about the width.
canvas_container.grid(column = 1, row = 0)
scrollbar_tk.grid(column = 0, row = 0, sticky='ns')
frame_container.grid()#.pack(expand=True, fill='both')
root.mainloop()
</code></pre>
<p>Sorry for this code. It is not very readable, but maybe it is sufficient for someone of my level. Please give me some advice on how to improve my skills.</p>
|
<p>You can use <code><MouseWheel></code> virtual event to scroll the canvas and ultimately the frame.</p>
<pre><code>canvas_container.create_window((0,0),window=frame2,anchor='nw')
def _on_mousewheel(event):
canvas_container.yview_scroll(-1*int(event.delta/120), "units")
canvas_container.bind_all("<MouseWheel>", _on_mousewheel)
</code></pre>
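<p>Note that on Linux the wheel is usually delivered as <code>&lt;Button-4&gt;</code>/<code>&lt;Button-5&gt;</code> events instead of <code>&lt;MouseWheel&gt;</code>; if you need to cover that case too, a sketch of a combined handler could look like this (the platform assumption is yours to check):</p>
<pre><code>def _on_mousewheel(event):
    if event.num == 4:      # Linux: scroll up
        canvas_container.yview_scroll(-1, "units")
    elif event.num == 5:    # Linux: scroll down
        canvas_container.yview_scroll(1, "units")
    else:                   # Windows: event.delta comes in multiples of 120
        canvas_container.yview_scroll(-1*int(event.delta/120), "units")

canvas_container.bind_all("<MouseWheel>", _on_mousewheel)
canvas_container.bind_all("<Button-4>", _on_mousewheel)
canvas_container.bind_all("<Button-5>", _on_mousewheel)
</code></pre>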
|
python|tkinter|scrollbar|mouse
| 0 |
856 | 71,035,037 |
Authorization header of GET request in python/wsgi
|
<p>I'm in the process of creating a POST/GET API in Python 3. I'm running Apache2 connected to a WSGI script. I've managed to retrieve very simple GET requests successfully. My code so far:</p>
<pre><code>def application(environ, start_response):
status = '200 OK'
output = b'Hello'
print(environ)
# print(environ['HTTP_AUTHORIZATION'])
response_headers = [('Content-type', 'text/plain'),('Content-Length', str(len(output)))]
start_response(status, response_headers)
return [output]
</code></pre>
<p>I use <a href="https://reqbin.com/" rel="nofollow noreferrer">reqbin</a> to test-send GET requests to my server. When you enter a token inside the Bearer token field, it is automatically added to the headers. I tested this with a server I have a bearer token for and validation completes succesfully, so I know reqbin is actually sending the token.</p>
<p>However, I seem to be unable to acces the authorization header on my server. Apparently, it should be inside the environ object prefixed by HTTP_. But printing <code>environ['HTTP_AUTHORIZATION']</code> yields a KeyError. I then tried printing the full environ object and retrieved it from the apache log:</p>
<pre><code>{
'mod_wsgi.listener_port': '443',
'CONTEXT_DOCUMENT_ROOT': '/var/www/gosharing',
'SERVER_SOFTWARE': 'Apache/2.4.41 (Ubuntu)',
'SCRIPT_NAME': '',
'mod_wsgi.enable_sendfile': '0',
'mod_wsgi.handler_script': '',
'SERVER_SIGNATURE': '<address>Apache/2.4.41 (Ubuntu) Server at domain.ext Port 443</address>\\n',
'REQUEST_METHOD': 'GET',
'PATH_INFO': '/',
'SERVER_PROTOCOL': 'HTTP/1.1',
'QUERY_STRING': '',
'wsgi.errors': <mod_wsgi.Log object at 0x7f0b517c0c10>,
'HTTP_X_REAL_IP': '2a02:a44a:ea1e:1:9053:2c7a:daaa:16',
'HTTP_USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36',
'SERVER_NAME': 'domain.ext',
'REMOTE_ADDR': '206.189.205.251',
'mod_wsgi.queue_start': '1644325870796726',
'mod_wsgi.request_handler': 'wsgi-script',
'apache.version': (2, 4, 41),
'mod_wsgi.version': (4, 6, 8),
'wsgi.url_scheme': 'https',
'PATH_TRANSLATED': '/var/www/gosharing/gosharing.wsgi/',
'SERVER_PORT': '443',
'mod_wsgi.total_requests': 0L,
'wsgi.multiprocess': False,
'SERVER_ADDR': '185.45.113.35',
'DOCUMENT_ROOT': '/var/www/gosharing',
'mod_wsgi.process_group': 'gosharing',
'mod_wsgi.thread_requests': 0L,
'mod_wsgi.daemon_connects': '1',
'mod_wsgi.request_id': 'sn1scyGWCVM',
'SCRIPT_FILENAME': '/var/www/gosharing/gosharing.wsgi',
'SERVER_ADMIN': 'webmaster@localhost',
'mod_wsgi.ignore_activity': '0',
'wsgi.input': <mod_wsgi.Input object at 0x7f0b48f01030>,
'HTTP_HOST': 'domain.ext',
'CONTEXT_PREFIX': '',
'wsgi.multithread': True,
'mod_wsgi.callable_object': 'application',
'mod_wsgi.daemon_restarts': '0',
'REQUEST_URI': '/',
'HTTP_ACCEPT': '*/*',
'mod_wsgi.path_info': '/',
'wsgi.file_wrapper': <type 'mod_wsgi.FileWrapper'>,
'wsgi.version': (1, 0),
'GATEWAY_INTERFACE': 'CGI/1.1',
'wsgi.run_once': False,
'mod_wsgi.script_name': '',
'REMOTE_PORT': '39762',
'mod_wsgi.listener_host': '',
'REQUEST_SCHEME': 'https',
'SSL_TLS_SNI': 'domain.ext',
'wsgi.input_terminated': True,
'mod_wsgi.script_start': '1644325870815229',
'mod_wsgi.application_group': '',
'mod_wsgi.script_reloading': '1',
'mod_wsgi.thread_id': 1,
'mod_wsgi.request_start': '1644325870796210',
'HTTP_ACCEPT_ENCODING': 'deflate, gzip',
'mod_wsgi.daemon_start': '1644325870800682'
}
</code></pre>
<p>In fact, I can add any header on reqbin and be able to see it in my apache log, except for the authorization header. Maybe it is in a more protected place? Please help me out here.</p>
|
<p>I figured it out. In your <em>000-default-le-ssl.conf</em> or <em>000-default.conf</em> file (depending on whether you use a secure connection or not) you're supposed to turn on authorization passing manually by writing <strong>WSGIPassAuthorization On</strong> inside your <strong>VirtualHost</strong> tag:</p>
<pre><code><VirtualHost *:443> # or port 80 if you are using an insecure connection
# [...]
WSGIPassAuthorization On
# [...]
</VirtualHost>
</code></pre>
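<p>With that in place the header shows up in <code>environ</code>, and you can read it defensively in the app; a small sketch (the token format is an assumption about your client):</p>
<pre><code>def application(environ, start_response):
    auth = environ.get('HTTP_AUTHORIZATION', '')        # e.g. "Bearer abc123"
    token = auth.split(' ', 1)[1] if ' ' in auth else None

    status = '200 OK' if token else '401 Unauthorized'
    output = (token or 'missing token').encode('utf-8')
    response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
</code></pre>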
|
python|get|header|apache2|wsgi
| 1 |
857 | 45,224,527 |
How to separate a string of repeating characters?
|
<p>All continuous groups of characters must be grouped together and put into a list. For example, if I have this string:</p>
<pre><code>1112221121
</code></pre>
<p>I would want to split this into a list:</p>
<pre><code>['111', '222', '11', '2', '1']
</code></pre>
<p>Another example would be </p>
<pre><code>0011100000
</code></pre>
<p>Output: <code>['00', '111', '00000']</code></p>
<p>This is what I've come up with:</p>
<pre><code>In [146]: t = '0011100000'
...: out = []
...: prev = None
...: for c in t:
...: if c != prev:
...: prev = c
...: out.append('')
...: out[-1] += c
...:
In [147]: out
Out[147]: ['00', '111', '00000']
</code></pre>
<p>Is there a simpler solution? I think I am overthinking this.</p>
|
<p><code>itertools.groupby</code> does just that:</p>
<pre><code>>>> from itertools import groupby
>>> [''.join(g) for _, g in groupby('1112221121')]
['111', '222', '11', '2', '1']
</code></pre>
|
python
| 3 |
858 | 51,840,156 |
How to execute script in Anaconda with different installed Python versions?
|
<p>I want to run a script in Anaconda using Python 2.7.
I am using Windows 8 with Anaconda 3 and Python 3.6.5.
I created another environment with python 2.7.15 and activated it in Anaconda Prompt like advised here: <a href="https://conda.io/docs/user-guide/tasks/manage-python.html" rel="nofollow noreferrer">https://conda.io/docs/user-guide/tasks/manage-python.html</a></p>
<p>How can I run this:
<code>print "HellWorld!"</code></p>
<p>I remember there was a way to run the script from the spyder console just adding the version to the command line but I cannot remember the syntax.</p>
<p>What I did so far:</p>
<p>I activated py27 by typing into Anaconda Prompt:</p>
<pre><code>conda activate py27
</code></pre>
<p>I checked if it was correctly activated (yes, it was) by:</p>
<pre><code>python --version
</code></pre>
|
<p>Using conda and environments is easy, once you get to know how to manage environments. </p>
<p>When creating an environment you may choose the python version to use and also what other libraries. </p>
<p>Let's begin creating two different environments.</p>
<pre><code>jalazbe@DESKTOP:~$ conda create --name my-py27-env python=2.7
jalazbe@DESKTOP:~$ conda create --name my-py36-env python=3.6
</code></pre>
<p>It might prompt you with a message like:
The following NEW packages will be INSTALLED:</p>
<pre><code> ca-certificates: 2018.03.07-0
certifi: 2018.8.13-py27_0
libedit: 3.1.20170329-h6b74fdf_2
libffi: 3.2.1-hd88cf55_4
Proceed ([y]/n)?
</code></pre>
<p>Just type <code>Y</code> and press <code>Enter</code></p>
<p>So now you have two environments. One of them with python 2.7 and the other one with python 3.6 </p>
<p>Before executing any script you need to select which environment to use. In this example I'll you the environment with python 2.7</p>
<pre><code>jalazbe@DESKTOP:~$ conda activate my-py27-env
</code></pre>
<p>Once you activate an environment you will see it at the left side of the prompt and between parenthesis like this <code>(environment-name)</code></p>
<pre><code>(my-py27-env) jalazbe@DESKTOP:~$
</code></pre>
<p>So now everything to execute will use the libraries in the environment.
if you execute python -V</p>
<pre><code>(my-py27-env) jalazbe@DESKTOP:~$ python -V
</code></pre>
<p>The output will be:</p>
<pre><code>Python 2.7.15 :: Anaconda, Inc.
</code></pre>
<p>Then you may change to another environment by first: exit the one you are on (called deactivate) and entering (activating) the other environment like this:</p>
<pre><code>(my-py27-env) jalazbe@DESKTOP:~$ conda deactivate
jalazbe@DESKTOP:~$
jalazbe@DESKTOP:~$ conda activate my-py36-env
(my-py36-env) jalazbe@DESKTOP:~$
</code></pre>
<p>At this point if you execute python -V you get</p>
<pre><code>(my-py36-env) jalazbe@DESKTOP:~$ python -V
Python 3.6.5 :: Anaconda, Inc.
</code></pre>
<p>To answer your question you need two environments with different libraries and python versions. When executing an script you have to choose which environment to use.</p>
<p>For further usage of conda commands see <a href="https://conda.io/docs/_downloads/conda-cheatsheet.pdf" rel="nofollow noreferrer">conda cheet sheet</a> or read documentation about <a href="https://conda.io/docs/" rel="nofollow noreferrer">conda</a></p>
|
python-3.x|python-2.7|console|anaconda|conda
| 1 |
859 | 54,534,516 |
How to stop Scrapy Selector wrap an xml with html?
|
<p>I do this:</p>
<pre><code>xmlstr="<root><first>info</first></root>"
res = Selector(text=xmlstr).xpath('.').getall()
print(res)
</code></pre>
<p>The output is:</p>
<pre><code>['<html><body><root><first>info</first></root></body></html>']
</code></pre>
<p>How can I stop Selector wrapping the xml with html and body? Thanks</p>
|
<p><a href="http://doc.scrapy.org/en/latest/topics/selectors.html#selector-objects" rel="nofollow noreferrer">scrapy.Selector</a> assumes html, but takes a <code>type</code> argument to change that.</p>
<blockquote>
<p><code>type</code> defines the selector type, it can be <code>"html"</code>, <code>"xml"</code> or <code>None</code> (default).</p>
<p>If <code>type</code> is <code>None</code>, the selector automatically chooses the best type based on <code>response</code> type (see below), or defaults to <code>"html"</code> in case it is used together with text.</p>
</blockquote>
<p>So, to make an xml selector, simply use <code>Selector(text=xmlstr, type='xml')</code></p>
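<p>For example, with the snippet from the question:</p>
<pre><code>from scrapy import Selector

xmlstr = "<root><first>info</first></root>"
res = Selector(text=xmlstr, type='xml').xpath('.').getall()
print(res)  # no html/body wrapper any more
</code></pre>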
|
python|xpath|scrapy
| 3 |
860 | 53,700,234 |
Assigning current 'User' as foreign key to nested serializers
|
<p>I am trying to assign current 'User' to two models using nested serializers.</p>
<pre><code>class UserAddressSerializer(serializers.ModelSerializer):
class Meta:
model = UserAddress
fields = ('user', 'address_1', 'address_2', 'country',
'state_province', 'city', 'zip_code')
class UserProfileSerializer(serializers.ModelSerializer):
user_address = UserAddressSerializer()
user = serializers.HiddenField(default=serializers.CurrentUserDefault())
class Meta:
model = UserProfile
fields = ('user', 'first_name', 'middle_name', 'last_name',
'title', 'display_name', 'time_zone', 'user_address', 'default_office')
def create(self, validated_data):
user = validated_data.pop('user')
user_address_data = validated_data.pop('user_address')
user_address_object = UserAddress.objects.create(
user=user, **user_address_data)
user_profile_object = UserProfile.objects.create(
user=user, **validated_data)
return user
</code></pre>
<p>What I am getting is this output in Postman.</p>
<pre><code>{
"user_address": {
"user": [
"This field is required."
]
}
}
</code></pre>
<p>I want to know a way to pass 'User' to the creation of both of these models.</p>
|
<p>You need to remove <code>user</code> from fields of <code>UserAddressSerializer</code>:</p>
<pre><code>class UserAddressSerializer(serializers.ModelSerializer):
class Meta:
model = UserAddress
fields = ('address_1', 'address_2', 'country', # <-- Here
'state_province', 'city', 'zip_code')
</code></pre>
|
python|django|python-3.x|django-rest-framework
| 1 |
861 | 53,398,884 |
Pandas series giving incorrect sum
|
<p>Why is this Pandas series giving sum = .99999999 whereas the answer is 1? In my program, I need to assert that 'sum is equal to 1', and the assertion is failing even though the condition is correct.</p>
<pre><code>s = pd.Series([0.41,0.25,0.25,0.09])
print("Pandas version = " + pd.__version__)
print(s)
print(type(s))
print(type(s.values))
print(s.values.sum())
</code></pre>
<p>The output is:</p>
<pre><code>Pandas version = 0.23.4
0 0.41
1 0.25
2 0.25
3 0.09
dtype: float64
<class 'pandas.core.series.Series'>
<class 'numpy.ndarray'>
0.9999999999999999
</code></pre>
|
<p>Use <a href="https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.isclose.html" rel="nofollow noreferrer">np.isclose</a> to determine if two values are arbitrarily close. It's a remnant of how floats are stored in the machine</p>
|
python|pandas|series|numpy-ndarray
| 3 |
862 | 53,792,144 |
print text inside parent div beautifulsoup
|
<p>i'm trying to fetch each product's name and price from
<a href="https://www.daraz.pk/catalog/?q=risk" rel="nofollow noreferrer">https://www.daraz.pk/catalog/?q=risk</a> but nothing shows up.</p>
<pre><code>containers = page_soup.find_all("div",{"class":"c2p6A5"})
for container in containers:
pname = container.findAll("div", {"class": "c29Vt5"})
name = pname[0].text
price1 = container.findAll("span", {"class": "c29VZV"})
price = price1[0].text
print(name)
print(price)
</code></pre>
|
<p>if the page is dynamic, Selenium should take care of that</p>
<pre><code>from bs4 import BeautifulSoup
import requests
from selenium import webdriver
browser = webdriver.Chrome()
browser.get('https://www.daraz.pk/catalog/?q=risk')
r = browser.page_source
page_soup = bs4.BeautifulSoup(r,'html.parser')
containers = page_soup.find_all("div",{"class":"c2p6A5"})
for container in containers:
pname = container.findAll("div", {"class": "c29Vt5"})
name = pname[0].text
price1 = container.findAll("span", {"class": "c29VZV"})
price = price1[0].text
print(name)
print(price)
browser.close()
</code></pre>
<p>output:</p>
<pre><code>Risk Strategy Game
Rs. 5,900
Risk Classic Board Game
Rs. 945
RISK - The Game of Global Domination
Rs. 1,295
Risk Board Game
Rs. 1,950
Risk Board Game - Yellow
Rs. 3,184
Risk Board Game - Yellow
Rs. 1,814
Risk Board Game - Yellow
Rs. 2,086
Risk Board Game - The Game of Global Domination
Rs. 975
...
</code></pre>
|
python|web-scraping|beautifulsoup
| 3 |
863 | 45,956,128 |
Save unittest results in text file
|
<p>I'm writing code that tests via unittest if several elements exist on a certain homepage. After the test I want the results to be saved in a text file. But the results in the text file look like this:</p>
<pre><code>......................
.........
------------------------------------------
Ran 12 tests in 22.562s
OK.
</code></pre>
<p>But i want that the output looks like this:</p>
<pre><code>test_test1 (HomepageTest.HomePageTest) ... ok
test_test2 (HomepageTest.HomePageTest) ... ok
test_test3 (HomepageTest.HomePageTest) ... ok
etc....
-------------------------------------------------
Ran 12 tests in ...s
OK
</code></pre>
<p>This is the code I use for saving the output into a text file:</p>
<pre><code>class SaveTestResults(object):
def save(self):
self.f = open(log_file, 'w')
runner = unittest.TextTestRunner(self.f)
unittest.main(testRunner = runner, defaultTest ='suite', verbosity = 2)
def main():
STR = SaveTestResults()
STR.save()
if __name__ == '__main__':
main()
</code></pre>
<p>What am I missing or doing wrong?</p>
|
<p>If the output you wish to save in a file corresponds to what is printed out to the console, you have two main options.</p>
<h3>1 - You're using Linux</h3>
<p>Then just redirect the output to a file:</p>
<pre><code>python script.py > output.txt
</code></pre>
<p>However, the output will not be printed out to the console anymore.
If you want to keep the console output, use the <code>tee</code> unix command:</p>
<pre><code>python script.py | tee output.txt
</code></pre>
<h3>2 - You're using Windows, or you don't want to redirect the whole output to a file</h3>
<p>You can achieve more or less the same thing using exclusively Python.
You need to set the value of <code>sys.stdout</code> to the file descriptor where you want the output to be written.</p>
<pre><code>import sys
sys.stdout = open("output.txt", 'w')
run_tests()
</code></pre>
<p>This will set the output stream <code>stdout</code> to the given file for the whole script.
I would suggest defining a decorator instead:</p>
<pre><code>def redirect_to_file(func):
def decorated(*args, **kwargs):
actualStdout = sys.stdout
sys.stdout = open("log.txt", 'a')
result = func(*args, **kwargs)
sys.stdout = actualStdout
return result
return decorated
</code></pre>
<p>Then, just decorate the functions whose you want to write the output to a file:</p>
<pre><code>@redirect_to_file
def run_test():
...
</code></pre>
<p>If you want a similar behaviour to <code>tee</code>, have a look at <a href="https://stackoverflow.com/a/616686/7051394">this post</a>.
The idea is to define a <code>Tee</code> class that holds the two desired streams:</p>
<pre><code>class Tee:
def __init__(self, stream1, stream2):
self.stream1 = stream1
self.stream2 = stream2
def write(self, data):
self.stream1.write(data)
self.stream2.write(data)
def close(self):
self.stream1.close()
self.stream2.close()
</code></pre>
<p>Then, set <code>sys.stdout</code> to a <code>Tee</code> instance whose one of the stream is the actual <code>stdout</code>:</p>
<pre><code>tee = Tee(sys.stdout, open("output.txt", 'w'))
sys.stdout = tee
</code></pre>
<p>Don't forget to close the <code>tee</code> instance at the end of your script; else, the data written to <code>output.txt</code> will not be saved:</p>
<pre><code>tee.close()
</code></pre>
|
python|unit-testing
| 3 |
864 | 54,750,890 |
Multiple metrics to specific inputs
|
<p>I have multiple losses and metrics whether custom or imported from keras. Is there a way to specify which model outputs could be inputted to which metric instead of all of them being printed or calculated?</p>
|
<p>Yes, you can pass the losses/metrics as a dictionary that maps <strong>layer name</strong> to a loss/metrics.</p>
<p>A quote from the <a href="https://keras.io/models/model/" rel="noreferrer">documentation</a>:</p>
<blockquote>
<p>loss: ... If the model has multiple outputs, you can use a different
loss on each output by passing a dictionary or a list of losses. The
loss value that will be minimized by the model will then be the sum of
all individual losses. </p>
</blockquote>
<p>and </p>
<blockquote>
<p>metrics: ... To specify different metrics for different
outputs of a multi-output model, you could also pass a dictionary,
such as metrics={'output_a': 'accuracy'}.</p>
</blockquote>
<p>Example:</p>
<pre><code>model.compile(
optimizer='rmsprop',
loss={'output_1': 'loss_1', 'output_2': 'loss_2'},
loss_weights={'output_1': 1., 'output_2': 0.2},
metrics={'output_1': 'metric_1', 'output_2': ['metric_2', 'metric_3']})
</code></pre>
<p>You can read more about multi-output model with Keras in: <a href="https://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models" rel="noreferrer">https://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models</a></p>
|
python|tensorflow|keras
| 10 |
865 | 54,853,238 |
Power function from math module seems to stop working in Python
|
<p>So i'm trying to write a program which finds a Pythagorean triplet, checks if all the numbers which make up the triplet add up to 1000, and if they do then multiply the 3 numbers together and output the result. Here is my sample code:</p>
<pre><code> import math
numbers = [1,2,3]
found = False
while not found:
if (math.pow(numbers[0], 2) + math.pow(numbers[1], 2)) == (math.pow(numbers[2], 2)): #Checks to see if its a pythag triplet
total = 0
for x in numbers:#adds the 3 numbers together
total += x
if total == 1000: #if the total of the three numbers is 1000, multiply them all together
product = 1
for y in numbers:
product *= y
print (product)
found = True #print the product total and end the while loop
else:
numbers = [z+1 for z in numbers] #if the total isnt 100, then just add 1 to each of the three numbers
print (numbers)
else:
numbers = [z+1 for z in numbers]#if the three numbers arent pythag triplet, then add 1 to each number
</code></pre>
<p>When the first triplet has been found the program seems to stop working. It doesn't seem to be able to identify any Pythagorean triplets anymore, so I guess this is due to the "pow" function not working correctly anymore? I am new to programming so would appreciate any advice on how to overcome this and also how I could improve efficiency as well!</p>
|
<p>Turns out, your math is incorrect.</p>
<ol>
<li>On each iteration, <em>every</em> number in the triplet is increased by 1</li>
<li><p>After <code>a</code> iterations, in order for it to be a Pythagorean triplet, the following must hold true:</p>
<pre><code>(a + 1)**2 + (a + 2)**2 == (a + 3)**2
</code></pre>
<p>Here 1, 2 and 3 inside the parentheses are the initial contents of the list <code>numbers</code>.</p></li>
<li><p>This simplifies to <code>2 * a**2 + 6 * a + 5 == a **2 + 6 * a + 9</code></p></li>
<li>Which is true only for <code>a == 2</code></li>
</ol>
<p>So, your code executes <code>print (numbers)</code> on the <em>third (<code>a + 1</code>)</em> iteration and will never terminate since <code>a</code> is always increasing.</p>
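<p>If the underlying goal is the classic "find the Pythagorean triplet whose sum is 1000" exercise, a brute-force sketch that searches two sides and derives the third (instead of incrementing all three numbers together) is one way to get there — note this replaces your approach rather than patching it:</p>
<pre><code>def find_triplet(total=1000):
    for a in range(1, total):
        for b in range(a + 1, total - a):
            c = total - a - b
            if a * a + b * b == c * c:
                return a, b, c, a * b * c

print(find_triplet())  # (200, 375, 425, 31875000)
</code></pre>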
|
python
| 1 |
866 | 55,064,651 |
Cannot parse address which contain ".html#/something" using bs4 in python3
|
<p>My goal is to parse images from the second page. I am using bs4 and Python 3 for this.
Please look at these two pages:</p>
<p>1) Only <a href="https://azzardo.com.pl/lampy-techniczne/2111-bross-1-tuba-lampa-techniczna-azzardo.html" rel="nofollow noreferrer">page</a> with images for all 4 colors (I can parse this page);</p>
<p>2) And <a href="https://azzardo.com.pl/lampy-techniczne/2111-bross-1-tuba-lampa-techniczna-azzardo.html#/kolor-chrom" rel="nofollow noreferrer">page</a> which contain images only for 1 color (chrom color in this example). I need to parse this page.</p>
<p>Using browser I can see that second page different from the first one. But, using bs4 I got similar results for first and second page as python didn't recognize this ".html#/kolor-chrom" in second page address.</p>
<p>First page address: "<a href="https://azzardo.com.pl/lampy-techniczne/2111-bross-1-tuba-lampa-techniczna-azzardo.html" rel="nofollow noreferrer">https://azzardo.com.pl/lampy-techniczne/2111-bross-1-tuba-lampa-techniczna-azzardo.html</a>".</p>
<p>Second page address: "<a href="https://azzardo.com.pl/lampy-techniczne/2111-bross-1-tuba-lampa-techniczna-azzardo.html#/kolor-chrom" rel="nofollow noreferrer">https://azzardo.com.pl/lampy-techniczne/2111-bross-1-tuba-lampa-techniczna-azzardo.html#/kolor-chrom</a>".</p>
<p>Code to reproduce:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
adres1 = "https://azzardo.com.pl/lampy-techniczne/2111-bross-1-tuba-lampa-techniczna-azzardo.html"
adres2 = "https://azzardo.com.pl/lampy-techniczne/2111-bross-1-tuba-lampa-techniczna-azzardo.html#/kolor-chrom"
def parse_one_page(adres):
"""Parse one page and get all the img src from adres"""
# Use headers to prevent hide our script
headers = {'User-Agent': 'Mozilla/5.0'}
# Get page
page = requests.get(adres, headers=headers) # read_timeout=5
# Get all of the html code
soup = BeautifulSoup(page.content, 'html.parser')
# Find div
divclear = soup.find_all("div", class_="clearfix")
divclear = divclear[9]
# Find img tag
imgtag = [i.find_all("img") for i in divclear][0]
# Find src
src = [i["src"] for i in imgtag]
# See how much images are here
print(len(src))
# return list with img src
return src
print(parse_one_page(adres1))
print(parse_one_page(adres2))
</code></pre>
<p>After running this code you will see that the output from those two addresses is the same: 24 images from both addresses. On the first page there are 24 images (that's correct). But the second page should have only 2 images, not 24 (incorrect)!</p>
<p>So I hope that someone can help me parse the second page in Python 3 using bs4 correctly.</p>
|
<p>Yep, it looks like it's not possible to parse such a dynamic page using bs4 alone. The <code>#/kolor-chrom</code> fragment is never sent to the server, and the colour filtering is most likely done client-side by JavaScript, so <code>requests</code> receives the same HTML for both addresses. To see the filtered page you would need something that executes the JavaScript, such as Selenium.</p>
|
python|beautifulsoup|html-parsing
| 0 |
867 | 21,764,475 |
Python: Scaling numbers column by column with pandas
|
<p>I have a Pandas data frame 'df' in which I'd like to perform some scalings column by column.</p>
<ul>
<li>In column 'a', I need the maximum number to be 1, the minimum number to be 0, and all other to be spread accordingly.</li>
<li>In column 'b', however, I need the <strong>minimum number to be 1</strong>, the <strong>maximum number to be 0</strong>, and all other to be spread accordingly.</li>
</ul>
<p>Is there a Pandas function to perform these two operations? If not, numpy would certainly do.</p>
<pre><code> a b
A 14 103
B 90 107
C 90 110
D 96 114
E 91 114
</code></pre>
|
<p>This is how you can do it using <code>sklearn</code> and the <code>preprocessing</code> module. Sci-Kit Learn has many pre-processing functions for scaling and centering data.</p>
<pre><code>In [0]: from sklearn.preprocessing import MinMaxScaler
In [1]: df = pd.DataFrame({'A':[14,90,90,96,91],
'B':[103,107,110,114,114]}).astype(float)
In [2]: df
Out[2]:
A B
0 14 103
1 90 107
2 90 110
3 96 114
4 91 114
In [3]: scaler = MinMaxScaler()
In [4]: df_scaled = pd.DataFrame(scaler.fit_transform(df), columns=df.columns)
In [5]: df_scaled
Out[5]:
A B
0 0.000000 0.000000
1 0.926829 0.363636
2 0.926829 0.636364
3 1.000000 1.000000
4 0.939024 1.000000
</code></pre>
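<p>Note that the question also wants column 'b' scaled the other way round (minimum mapped to 1, maximum to 0); assuming the scaling above, one simple way is to flip that column afterwards:</p>
<pre><code>In [6]: df_scaled['B'] = 1 - df_scaled['B']

In [7]: df_scaled
Out[7]:
          A         B
0  0.000000  1.000000
1  0.926829  0.636364
2  0.926829  0.363636
3  1.000000  0.000000
4  0.939024  0.000000
</code></pre>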
|
python|pandas
| 85 |
868 | 31,037,751 |
Reading variable number of columns in pandas
|
<p>I have a poorly formatted delimited file, in which there are errors with the delimiter, so it sometimes appears that there are an inconsistent number of columns in different rows.</p>
<p>When I run</p>
<pre><code>pd.read_csv('patentHeader.txt', sep="|", header=0)
</code></pre>
<p>the process dies with this error:</p>
<blockquote>
<p>CParserError: Error tokenizing data. C error: Expected 9 fields in line 1034558, saw 15</p>
</blockquote>
<p>Is there a way to have pandas skip these lines and continuing? Or put differently, is there some way to make <code>read_csv</code> be more flexible about how many columns it encounters?</p>
|
<p>Try this.</p>
<pre><code>pd.read_csv('patentHeader.txt', sep="|", header=0, error_bad_lines=False)
</code></pre>
<p><code>error_bad_lines</code>: if False, then any lines causing an error (the "bad lines") will be skipped, and they will be reported once the reading process is done.</p>
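<p>In newer pandas versions (1.3 and later) <code>error_bad_lines</code> is deprecated in favour of <code>on_bad_lines</code>, so depending on your version you may need:</p>
<pre><code>pd.read_csv('patentHeader.txt', sep="|", header=0, on_bad_lines='skip')
</code></pre>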
|
pandas
| 2 |
869 | 30,920,380 |
Displaying ggplot2 graphs from R in Jupyter
|
<p>When I create a plot in Jupyter using the <code>ggplot2</code> R package, I get a link to the chart that says "View PDF" instead of the chart being presented inline.</p>
<p>I know that traditionally in IPython Notebook you were able to show the charts inline using the <code>%matplotlib</code> magic function. Does Jupyter have something similar for R and ggplot2?</p>
<p>What do I need to do to show the graph inline versus as a link to a PDF?</p>
|
<p>You can show the graphs inline with this option.</p>
<pre><code>options(jupyter.plot_mimetypes = 'image/png')
</code></pre>
<p>You can also produce pdf files as you would regularly in R, e.g.</p>
<pre><code>pdf("test.pdf")
ggplot(data.frame(a=rnorm(100,1,10)),aes(a))+geom_histogram()
dev.off()
</code></pre>
|
r|ggplot2|ipython-notebook|jupyter
| 5 |
870 | 29,319,933 |
Best practise to apply several rules on 1 string
|
<p>I'm getting a URL as a string and need to apply several rules to it. The first rule is to remove anchors, then remove the '../' notation, because urljoin joins URLs incorrectly in some cases, and finally remove the trailing slash. For now I have this code:</p>
<pre><code>def construct_url(parent_url, child_url):
url = urljoin(parent_url, child_url)
url = url.split('#')[0]
url = url.replace('../', '')
url = url.rstrip('/')
return url
</code></pre>
<p>But I don't think this is the best practice. I think it can be done in a simpler way. Could you help me please? Thanks.</p>
|
<p>Unfortunately, there isn't much that really could make your function <em>simpler</em> here, since you're dealing with some pretty odd cases.</p>
<p>But you can make it more <em>robust</em> by using Python's <a href="https://docs.python.org/2/library/urlparse.html#urlparse.urlsplit" rel="nofollow noreferrer"><code>urlparse.urlsplit()</code></a> to split the URL in well-defined components, do your processing, and put it back together by using <a href="https://docs.python.org/2/library/urlparse.html#urlparse.urlunsplit" rel="nofollow noreferrer"><code>urlparse.urlunsplit()</code></a>:</p>
<pre><code>from urlparse import urljoin
from urlparse import urlsplit
from urlparse import urlunsplit
def construct_url(parent_url, child_url):
url = urljoin(parent_url, child_url)
scheme, netloc, path, query, fragment = urlsplit(url)
path = path.replace('../', '')
path = path.rstrip('/')
url = urlunsplit((scheme, netloc, path, query, ''))
return url
parent_url = 'http://user:[email protected]'
child_url = '../../../chrome/#foo'
print construct_url(parent_url, child_url)
</code></pre>
<p>Output:</p>
<pre><code>http://user:[email protected]/chrome
</code></pre>
<p>Using the tools from <code>urlparse</code> has the advantage that you know exactly what your processing operates on (path and fragment in your case), and it handles all the things like user credentials, query strings, parameters etc. for you.</p>
<hr />
<p><strong>Note</strong>: Contrary to what I suggested in the comments, <code>urljoin</code> does in fact normalize URLs:</p>
<pre><code>>>> from urlparse import urljoin
>>> urljoin('http://google.com/foo/bar', '../qux')
'http://google.com/qux'
</code></pre>
<p>But it does so by strictly following RFC 1808.</p>
<p>From <a href="https://www.rfc-editor.org/rfc/rfc1808.html#section-5.2" rel="nofollow noreferrer">RFC 1808 Section 5.2: Abnormal Examples</a>:</p>
<blockquote>
<p>Within an object with a well-defined base URL of</p>
<p>Base: <code><URL:http://a/b/c/d;p?q#f></code></p>
<p>[...]</p>
<p>Parsers must be careful in handling the case where there are more
relative path <code>".."</code> segments than there are hierarchical levels in the
base URL's path. Note that the <code>".."</code> syntax cannot be used to change
the <code><net_loc></code> of a URL.</p>
<pre><code>../../../g = <URL:http://a/../g>
../../../../g = <URL:http://a/../../g>
</code></pre>
</blockquote>
<p>So <code>urljoin</code> does exactly the right thing by preserving those extraneous <code>../</code>, therefore you need to remove them by manual processing.</p>
|
python
| 0 |
871 | 8,874,276 |
Trouble with making background of image transparent in python using pygame
|
<p>I have a rather confusing problem in running our game. I am trying to make a game using Python's pygame and I am using images downloaded from the internet. The problem is that some images have a white background and some have a colored background. I used Photoshop to get rid of the white background and re-saved the image. However, when I ran the simulation in Python, it gave me the original picture with the original background. This is slightly perplexing to me. </p>
<p>Here's the part of the code using pygame I used to implement the image:</p>
<pre><code> self.image = pygame.image.load("jellyfishBad.png").convert()
self.image.set_colorkey(white)
self.rect = self.image.get_rect()
</code></pre>
<p>Thanks.</p>
|
<p>You need to use the .convert_alpha() method when loading the image for per pixel transparency.</p>
<p>So,</p>
<pre><code>self.image = pygame.image.load("jellyfishBad.png").convert_alpha()
</code></pre>
|
python|image|pygame|transparent
| 2 |
872 | 8,936,297 |
Attempting to display total amount_won for each user in database via For loop
|
<p>I'm trying to display the Sum of amount_won for each user_name in the database. My database is:</p>
<p>Stakes table</p>
<pre><code>id
player_id
stakes
amount_won
last_play_date
</code></pre>
<p>Player table</p>
<pre><code>id
user_name
real_name
site_played
</code></pre>
<p>models.py</p>
<pre><code>class Player(models.Model):
user_name = models.CharField(max_length=200)
real_name = models.CharField(max_length=200)
SITE_CHOICES = (
('FTP', 'Full Tilt Poker'),
('Stars', 'Pokerstars'),
('UB', 'Ultimate Bet'),
)
site_played = models.CharField(max_length=5, choices=SITE_CHOICES)
def __unicode__(self):
return self.user_name
def was_created_today(self):
return self.pub_date.date() == datetime.date.today()
class Stakes(models.Model):
player = models.ForeignKey(Player)
stakes = models.CharField(max_length=200)
amount_won = models.DecimalField(max_digits=12, decimal_places=2)
last_play_date = models.DateTimeField('Date Last Updated')
def __unicode__(self):
return self.stakes
class PlayerForm(ModelForm):
class Meta:
model = Player
class StakesForm(ModelForm):
class Meta:
model = Stakes
</code></pre>
<p>Views.py</p>
<pre><code>def index(request):
latest_player_list = Player.objects.all().order_by('id')[:20]
total_amount_won = Stakes.objects.filter(player__user_name='test_username').aggregate(Sum('amount_won'))
return render_to_response('stakeme/index.html', {
'latest_player_list': latest_player_list,
'total_amount_won': total_amount_won
})
</code></pre>
<p>and index.html</p>
<pre><code><h1> Players </h1>
{% if latest_player_list %}
<ul>
{% for player in latest_player_list %}
<li><a href="/stakeme/{{ player.id }}/">{{ player.user_name }} </a><br>Total Won: {{ total_amount_won }}
</li>
{% endfor %}
</ul>
<br>
{% else %}
<p>No players are available.</p>
{% endif %}
<h3><a href="/stakeme/new/">New Player</a></h3>
</code></pre>
<p>If I leave the views.py section as <code>(player__user_name='test_username')</code> it will display Amount Won: as follows <code>Total Won: {'amount_won__sum': Decimal('4225.00')}</code> using the test_username's amount_won (4225.00) for EVERY user name. Ideally, I'd like it to display Amount Won: for each user name in the for loop and display it as "Amount Won: 4225.00" only.</p>
<p>I'm starting to understand this is way over my head, but I've read the docs regarding the differences between aggregate and annotate and I can't wrap my head around it. I'm thinking my DB is not setup correctly to use annotate for this, but I obviously could be wrong.</p>
|
<p>Check out: <a href="https://docs.djangoproject.com/en/dev/topics/db/aggregation/" rel="nofollow">https://docs.djangoproject.com/en/dev/topics/db/aggregation/</a></p>
<pre><code>players = Player.objects.annotate(total_amount_won=Sum('stakes__amount_won'))
players[0].total_amount_won # This will return the 'total amount won' for the 0th player
</code></pre>
<p>So you could pass <code>players</code> to your template and loop over it.</p>
<p><strong>EDIT</strong></p>
<p>Your views.py would look like:</p>
<pre><code>def index(request):
players = Player.objects.annotate(total_amount_won=Sum('stakes__amount_won'))
return render_to_response('stakeme/index.html', {'players': players,})
</code></pre>
<p>The template would look like:</p>
<pre><code><h1> Players </h1>
{% if players %}
<ul>
{% for player in players %}
<li>
<a href="/stakeme/{{ player.id }}/">{{ player.user_name }} </a><br>Total Won: {{ player.total_amount_won }}
</li>
{% endfor %}
</ul>
<br />
{% else %}
<p>No players are available.</p>
{% endif %}
<h3><a href="/stakeme/new/">New Player</a></h3>
</code></pre>
|
python|mysql|django
| 2 |
873 | 58,735,125 |
issue with for loop in python only gets the last item
|
<p>I'm a beginner in python, currently I'm trying to automate filling website field using <code>selenium</code>.</p>
<p>I'm trying to iterate over nested lists using <code>for</code> loop but always get only the last element. Any suggestions why?</p>
<pre class="lang-py prettyprint-override"><code>fields = [['a','b','c'],['x','y','z']]
for i in range(len(fields)):
driver.find_element_by_xpath("element").send_keys(fields[i][0],fields[i[1],fields[i][2])
driver.find_element_by_xpath("element_save").click()
#then loop and iterate through 2nd nested list
# OUTPUT = x,y,z
</code></pre>
<p>I expect to iterate starting with index 0 to the end of the list. </p>
|
<p>You don't need <code>range(len(list_))</code> for iterating over indeces only.</p>
<p>Usual <code>for</code> will do. You can also unpack list with <code>*</code>:</p>
<pre class="lang-py prettyprint-override"><code>fields = [['a','b','c'],['x','y','z']]
len_ = len(fields)
for i in range(len_):
driver.find_element_by_xpath("element").send_keys(*fields[i])
</code></pre>
<p>You could also iterate trhrough the values of the <code>fields</code> itself:</p>
<pre class="lang-py prettyprint-override"><code>fields = [['a','b','c'],['x','y','z']]
for field in fields:
driver.find_element_by_xpath("element").send_keys(*field)
</code></pre>
|
python|loops|for-loop|iteration|enumerate
| 3 |
874 | 52,242,843 |
How to render my Sudoku generator results to html table using Django?
|
<p>I am pretty new to Django and currently making a Sudoku web app. I wrote a Python program to generate the Sudoku games; here is an example of what the result/matrix looks like when I run the code (Sudoku Generator.py).</p>
<pre><code>[[3, 8, 2, 7, 5, 6, 1, 4, 9],[1, 4, 5, 2, 3, 9, 6, 7, 8],[6, 7, 9, 1, 4, 8, 2, 3, 5],[2, 1, 3, 4, 6, 5, 8, 9, 7],[4, 5, 6, 8, 9, 7, 3, 1, 2],[7, 9, 8, 3, 1, 2, 4, 5, 6],[5, 2, 1, 6, 7, 3, 9, 8, 4],[8, 3, 7, 9, 2, 4, 5, 6, 1],[9, 6, 4, 5, 8, 1, 7, 2, 3]]
</code></pre>
<p>My question is, how can I render all these generated numbers to my html file? here is the html codes i've created under the templates: </p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>{% extends 'base.html' %}
{% block sudoku %}
<style>
table { border-collapse: collapse; font-family: Calibri, sans-serif; }
colgroup, tbody { border: solid medium; }
td { border: solid thin; height: 1.4em; width: 1.4em; text-align: center; padding: 0; }
</style>
<table>
<caption>Sudoku of the day</caption>
<colgroup><col><col><col>
<colgroup><col><col><col>
<colgroup><col><col><col>
<tbody>
<tr> <td> <td> <td> <td> <td> <td> <td> <td> <td>
<tr> <td> <td> <td> <td> <td> <td> <td> <td> <td>
<tr> <td> <td> <td> <td> <td> <td> <td> <td> <td>
<tbody>
<tr> <td> <td> <td> <td> <td> <td> <td> <td> <td>
<tr> <td> <td> <td> <td> <td> <td> <td> <td> <td>
<tr> <td> <td> <td> <td> <td> <td> <td> <td> <td>
<tbody>
<tr> <td> <td> <td> <td> <td> <td> <td> <td> <td>
<tr> <td> <td> <td> <td> <td> <td> <td> <td> <td>
<tr> <td> <td> <td> <td> <td> <td> <td> <td> <td>
</table>
{% endblock %}</code></pre>
</div>
</div>
</p>
<p>Basically what I want is to get each number populated into each table cell accordingly; also, whenever the user clicks the "next game" button, the board will refresh and generate another bunch of numbers to form a new game.</p>
<p>Attached is the screen shot of my Django work project directory so far:
<a href="https://i.stack.imgur.com/YAbCg.png" rel="nofollow noreferrer">mysite directory</a></p>
<p>Now I am totally stuck, not sure if what I've done so far is correct, and I don't know what to do next... Can anyone help?</p>
|
<p>Asuming you use the variable <code>sudoku_numbers</code> to store your array of numbers, then in the template you can use something like this:</p>
<pre><code><table>
{% for row in sudoku_numbers %}
<tr>
{% for col in row %}
<td>{{ col }}</td>
{% endfor %}
</tr>
{% endfor %}
</table>
</code></pre>
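<p>On the view side, a minimal sketch of passing the matrix into this template might look like the following — <code>generate_board</code> stands in for whatever function your Sudoku Generator.py exposes:</p>
<pre><code>from django.shortcuts import render

from .sudoku_generator import generate_board  # assumption: your generator module and function

def sudoku(request):
    sudoku_numbers = generate_board()  # the 9x9 list of lists shown in the question
    return render(request, 'sudoku.html', {'sudoku_numbers': sudoku_numbers})
</code></pre>
<p>Pointing the "next game" button at the same URL will then re-run the generator on every request.</p>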
|
javascript|python|html|django|sudoku
| 0 |
875 | 69,135,482 |
How to convert input stdin to list data structure in python
|
<p>I have a stdin data in this format:<br />
100 <br />
85 92 <br />
292 42<br />
88 33<br />
500<br />
350 36<br />
800 45<br />
0</p>
<p>I want something like this: [[100, [85, 92], [292, 42], [88, 33]], [500, [350, 36], [800, 45], [0]]]</p>
|
<p>Something like the following (I have tested) should do it:</p>
<pre><code>import sys

lst = []
sublst = []
for line in sys.stdin:
lineLst = [int(x) for x in line.split()]
if len(lineLst) == 1:
if sublst: lst.append(sublst)
sublst = lineLst
else:
sublst.append(lineLst)
if sublst[0] == 0: lst.append(sublst)
</code></pre>
|
arrays|python-3.x|stdin
| 0 |
876 | 62,415,649 |
How do I save to a specific directory using openpyxl?
|
<p>I am trying to save an Excel workbook I created using openpyxl to a specific directory that the user inputs via a Tkinter "browse" button. I have the workbook saving at the inputted "save spot," but I am getting an error saying that it is a directory. </p>
<p>Within the function that is producing the workbook, I have:</p>
<pre><code>wb.save(save_spot)
</code></pre>
<p>The "save spot" is generated via a function:</p>
<pre><code>def set_save_destination():
global save_spot
save_spot = filedialog.askdirectory()
save_spot = str(save_spot)
</code></pre>
<p>The user gets to select the directory by the following Tkinter GUI code, within my GUI class:</p>
<pre><code>monthly_browse = ttk.Button(self, text='Select Save Destination', command=set_save_destination)
</code></pre>
<p>The error message that I receive is an "IsADirectoryError" but I am unsure what the issue is, as is says that you can directly enter the directory into the save method. I am new to programming and completely self-taught, so any help would be great! Thank you!</p>
|
<p>You need to provide the full path including a file name, not just the folder; please see the example below:</p>
<pre><code>from openpyxl import Workbook
wb = Workbook()
ws1 = wb.active
ws1.title = "1st Hour"
wb.save('/home/user/Desktop/FileName.xlsx')
</code></pre>
<p>so you might additionally append a file name to the save_spot variable:</p>
<pre><code> save_spot = str(save_spot)+'/filename.xlsx'
</code></pre>
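<p>A slightly more robust sketch uses <code>os.path.join</code>, so it works regardless of trailing slashes or operating system:</p>
<pre><code>import os

wb.save(os.path.join(save_spot, 'filename.xlsx'))
</code></pre>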
|
python|python-3.x|tkinter|openpyxl
| 3 |
877 | 62,426,416 |
Definining `fac` with generators. And: Why no stack overflow with generators?
|
<p>Is there a way we can define the following code (a classic example for recursion) via generators in Python? I am using Python 3.</p>
<pre class="lang-py prettyprint-override"><code>def fac(n):
if n==0:
return 1
else:
return n * fac(n-1)
</code></pre>
<p>I tried this, no success:</p>
<pre class="lang-py prettyprint-override"><code>In [1]: def fib(n):
...: if n == 0:
...: yield 1
...: else:
...: n * yield (n-1)
File "<ipython-input-1-bb0068f2d061>", line 5
n * yield (n-1)
^
SyntaxError: invalid syntax
</code></pre>
<h2>Classic recursion in Python leads to Stack Overflow</h2>
<p>This classic example leads to a stack overflow on my machine for an input of <code>n=3000</code>. In the Lisp dialect "Scheme" I'd use tail recursion and avoid stack overflow. Not possible in Python. That's why generators come in handy in Python. But I wonder:</p>
<h2>Why no stack overflow with generators?</h2>
<p>Why is there no stack overflow with generators in Python? How do they work internally? Doing some research leads me always to examples showing how generators are used in Python, but not much about the inner workings.</p>
<h2>Update 1: <code>yield from my_function(...)</code></h2>
<p>As I tried to explain in the comments section, maybe my example above was a poor choice for making a point. My actual question was targeted at the inner workings of generators used recursively in <code>yield from</code> statements in Python 3.</p>
<p>Below is an (incomplete) example code that I use to process JSON files generated by Firefox bookmark backups. At several points I use <code>yield from process_json(...)</code> to recursively call the function again via generators.</p>
<p>Exactly in this example, how is stack overflow avoided? Or is it?</p>
<pre class="lang-py prettyprint-override"><code>
# (snip)
FOLDERS_AND_BOOKMARKS = {}
FOLDERS_DATES = {}
def process_json(json_input, folder_path=""):
global FOLDERS_AND_BOOKMARKS
# Process the json with a generator
# (to avoid recursion use generators)
# https://stackoverflow.com/a/39016088/5115219
# Is node a dict?
if isinstance(json_input, dict):
# we have a dict
guid = json_input['guid']
title = json_input['title']
idx = json_input['index']
date_added = to_datetime_applescript(json_input['dateAdded'])
last_modified = to_datetime_applescript(json_input['lastModified'])
# do we have a container or a bookmark?
#
# is there a "uri" in the dict?
# if not, we have a container
if "uri" in json_input.keys():
uri = json_input['uri']
# return URL with folder or container (= prev_title)
# bookmark = [guid, title, idx, uri, date_added, last_modified]
bookmark = {'title': title,
'uri': uri,
'date_added': date_added,
'last_modified': last_modified}
FOLDERS_AND_BOOKMARKS[folder_path].append(bookmark)
yield bookmark
elif "children" in json_input.keys():
# So we have a container (aka folder).
#
# Create a new folder
if title != "": # we are not at the root
folder_path = f"{folder_path}/{title}"
if folder_path in FOLDERS_AND_BOOKMARKS:
pass
else:
FOLDERS_AND_BOOKMARKS[folder_path] = []
FOLDERS_DATES[folder_path] = {'date_added': date_added, 'last_modified': last_modified}
# run process_json on list of children
# json_input['children'] : list of dicts
yield from process_json(json_input['children'], folder_path)
# Or is node a list of dicts?
elif isinstance(json_input, list):
# Process children of container.
dict_list = json_input
for d in dict_list:
yield from process_json(d, folder_path)
</code></pre>
<h2>Update 2: <code>yield</code> vs <code>yield from</code></h2>
<p>Ok, I get it. Thanks to all the comments.</p>
<ul>
<li>So generators via <code>yield</code> create iterators. That has nothing to do with recursion, so no stack overflow here.</li>
<li>But generators via <code>yield from my_function(...)</code> are indeed recursive calls of my function, albeit delayed, and only evaluated if demanded. </li>
</ul>
<p>This second example can indeed cause a stack overflow. </p>
|
<p>OK, after your comments I have completely rewritten my answer.</p>
<ol>
<li>How does recursion work and why do we get a stack overflow?</li>
</ol>
<p>Recursion is often an elegant way to solve a problem. In most programming languages, every time you call a function, all the information and state needed for the call are put on the stack - a so-called "stack frame". The stack is a special per-thread memory region and limited in size.</p>
<p>Now recursive functions implicitly use these stack frames to store state/intermediate results. E.g., the factorial function is n * (n-1) * (n-2) * ... * 1, and all these intermediate values are stored on the stack.</p>
<p>An <strong>iterative</strong> solution has to store these intermediate results explicitly in a variable (that often sits in a single stack frame).</p>
<ol start="2">
<li>How do generators avoid stack overflow?</li>
</ol>
<p>Simply: They are not recursive. They are implemented like iterator objects. They store the current state of the computation and return a new result every time you request it (implicitly or with next()).</p>
<p>If it looks recursive, that's just syntactic sugar. "Yield" is not like return. It yields the current value and then "pauses" the computation. That's all wrapped up in one object and not in a gazillion stack frames.</p>
<p>This will give you a series from <code>1</code> to <code>n!</code>:</p>
<pre><code>def fac(n):
if (n <= 0):
yield 1
else:
v = 1
for i in range(1, n+1):
v = v * i
yield v
</code></pre>
<p>There is no recursion, the intermediate results are stored in <code>v</code> which is most likely stored in one object (on the heap, probably).</p>
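<p>Consuming it is plain iteration, with no recursion involved (a quick check):</p>
<pre><code>for value in fac(5):
    print(value)   # 1, 2, 6, 24, 120
</code></pre>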
<ol start="3">
<li>What about <code>yield from</code></li>
</ol>
<p>OK, that's interesting, since that was only added in Python 3.3.
<code>yield from</code> can be used to delegate to another generator.</p>
<p>You gave an example like:</p>
<pre><code>def process_json(json_input, folder_path=""):
# Some code
yield from process_json(json_input['children'], folder_path)
</code></pre>
<p>This looks recursive, but instead it's a combination of two generator objects. You have your "inner" generator (which only uses the space of one object) and with <code>yield from</code> you say "I'd like to forward all the values from that generator to my caller".</p>
<p>So it doesn't generate one stack frame per generator result, instead it creates one object per generator used.</p>
<p>In this example, you are creating one generator object per child JSON-object. That is probably about the same number of objects as the stack frames you would need if you did it recursively. You won't see a stack overflow though, because objects are allocated on the heap and you have a very different size limit there - depending on your operating system and settings. On my laptop, using Ubuntu Linux, <code>ulimit -s</code> gives me 8 MB for the default stack size, while my process memory size is unlimited (although I have only 8GB of physical memory).</p>
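<p>For intuition, here is a minimal sketch of <code>yield from</code> delegation with hypothetical generators (unrelated to the bookmark code):</p>
<pre><code>def inner():
    yield 1
    yield 2

def outer():
    yield 0
    yield from inner()   # one extra generator object, not a pile of stack frames
    yield 3

print(list(outer()))     # [0, 1, 2, 3]
</code></pre>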
<p>Look at this documentation page on generators: <a href="https://wiki.python.org/moin/Generators" rel="nofollow noreferrer">https://wiki.python.org/moin/Generators</a></p>
<p>And this QA: <a href="https://stackoverflow.com/questions/1756096/understanding-generators-in-python">Understanding generators in Python</a></p>
<p>Some nice examples, also for <code>yield from</code>:
<a href="https://www.python-course.eu/python3_generators.php" rel="nofollow noreferrer">https://www.python-course.eu/python3_generators.php</a></p>
<p>TL;DR: Generators are objects, they don't use recursion. Not even <code>yield from</code>, which just delegates to another generator object. Recursion is only practical when the number of calls is bounded and small, or your compiler supports tail call optimization.</p>
|
python
| 1 |
878 | 36,547,848 |
Python Request JSON
|
<p>I would like to check each JSON content type with my expectation type. I receive JSON in my python code like this:</p>
<pre><code>a = request.json['a']
b = request.json['b']
</code></pre>
<p>when I check the type of a and b, it always returns Unicode. I checked it like this:</p>
<pre><code>type(a) # or
type(b) # (always return: type 'unicode')
</code></pre>
<p>How do I check if <code>request.json['a']</code> is <code>str</code>, if <code>request.json['a']</code> is always <code>unicode</code>?</p>
|
<p>I suspect you are on Python 2.x and not Python 3 (because in Python 3 both <code>type('a')</code> and <code>type(u'a')</code> are <code>str</code>, not <code>unicode</code>)</p>
<p>So in Python 2, what you should know is <code>str</code> and <code>unicode</code> both are subclasses of <code>basestring</code> so instead of testing with</p>
<pre><code>if isinstance(x, (str, unicode)): # equiv. to type(x) is str or type(x) is unicode
# something
</code></pre>
<p>you can do (Python 2.x)</p>
<pre><code>if isinstance(x, basestring):
# do something
</code></pre>
<p>In Python 3 you don't have to distinguish between <code>str</code> and <code>unicode</code>, just use</p>
<pre><code>if isinstance(x, str):
# do something
</code></pre>
|
python|json|types
| 3 |
879 | 19,686,429 |
Python does not sort sql query result
|
<pre><code>results = conn.execute(SEARCH_SQL, dict(fingerprint="{"+fp_str+"}")).fetchall()
print sorted(results)
</code></pre>
<p>I retrieve some datas from database by using sql alchemy. <code>results</code> is like that:</p>
<pre><code>[(0.515625, u'str1'), (0.625, u'str2'), (0.901042, u'str3')]
</code></pre>
<p>However, the sort function does not seem to work here - it does not appear to change the order of the list returned from the SQL query. How can I sort the result list?</p>
|
<p>You have a list of tuples. How would you like to sort them?</p>
<p>For example, if you want to sort them according to the first key:</p>
<pre><code>sorted(results, key=lambda t:t[0])
</code></pre>
<p>or in reverse order:</p>
<pre><code>sorted(results, key=lambda t:t[0], reverse=True)
</code></pre>
|
python|sqlalchemy
| 1 |
880 | 13,253,792 |
list of class objects (birds). each bird has a color. how do I most efficiently get a set of all colors?
|
<p>I have a list of class objects, say birds. Each bird has a color. I want to easily get a set of bird colors from this list of birds. What is the quickest, most efficient way to do this?</p>
|
<p>That would probably be:</p>
<pre><code>set(bird.color for bird in birds)
</code></pre>
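<p>Equivalently, with a set comprehension (Python 2.7+):</p>
<pre><code>{bird.color for bird in birds}
</code></pre>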
|
python
| 5 |
881 | 54,433,453 |
Create multiple Dataframe from XML based on Specific Value
|
<p>I am trying to parse an XML file and save the results in a Pandas DataFrame. I have succeeded in saving the details in one specific DataFrame. However, now I am trying to save the results in multiple DataFrames based on one specific class value.</p>
<pre><code>import pandas as pd
import xml.etree.ElementTree as ET
import os
from collections import defaultdict, OrderedDict
tree = ET.parse('PowerChange_76.xml')
root = tree.getroot()
df_list = []
for i, child in enumerate(root):
for subchildren in child.findall('{raml20.xsd}header'):
for subchildren in child.findall('{raml20.xsd}managedObject'):
match_found = 0
xml_class_name = subchildren.get('class')
xml_dist_name = subchildren.get('distName')
print(xml_class_name)
df_dict = OrderedDict()
for subchild in subchildren:
header = subchild.attrib.get('name')
df_dict['Class'] = xml_class_name
df_dict['CellDN'] = xml_dist_name
df_dict[header]=subchild.text
df_list.append(df_dict)
df_cm = pd.DataFrame(df_list)
</code></pre>
<p>The expected result is the creation of multiple DataFrames, one per distinct 'class' value.</p>
<p>Current Output:</p>
<p><a href="https://i.stack.imgur.com/EPjL7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EPjL7.jpg" alt="enter image description here"></a></p>
<p><a href="https://ufile.io/nf8gu" rel="nofollow noreferrer">XML File</a></p>
|
<p>This is being answered with below method:</p>
<pre><code>def ExtractMOParam(xmlfile2):
tree2=etree.parse(xmlfile2)
root2=tree2.getroot()
df_list2=[]
for i, child in enumerate(root2):
for subchildren in (child.findall('{raml21.xsd}header') or child.findall('{raml20.xsd}header')):
for subchildren in (child.findall('{raml21.xsd}managedObject') or child.findall('{raml20.xsd}managedObject')):
xml_class_name2 = subchildren.get('class')
xml_dist_name2 = subchildren.get('distName')
if ((xml_class_name2 in GetMOClass) and (xml_dist_name2 in GetCellDN)):
#xml_dist_name2 = subchildren.get('distName')
#df_list1.append(xml_class_name1)
for subchild in subchildren:
df_dict2=OrderedDict()
header2=subchild.attrib.get('name')
df_dict2['MOClass']=xml_class_name2
df_dict2['CellDN']=xml_dist_name2
df_dict2['Parameter']=header2
df_dict2['CurrentValue']=subchild.text
df_list2.append(df_dict2)
return df_list2
ExtractDump=pd.DataFrame(ExtractMOParam(inputdfile))
d = dict(tuple(ExtractDump.groupby('MOClass')))
for key in d:
d[key]=d[key].reset_index().groupby(['CellDN','MOClass','Parameter'])['CurrentValue'].aggregate('first').unstack()
d[key].reset_index(level=0, inplace=True)
d[key].reset_index(level=0, inplace=True)
writer = pd.ExcelWriter('ExtractedDump.xlsx', engine='xlsxwriter')
for tab_name, dframe in d.items():
dframe.to_excel(writer, sheet_name=tab_name,index=False)
writer.save()
</code></pre>
<p>Hope this will help others as well.</p>
|
python-3.x|pandas|elementtree
| 0 |
882 | 71,396,254 |
If my function returns list index, what should it return if position does not exist
|
<p>I wrote a function that returns the index of an item in a list if that item exists, otherwise it returns False:</p>
<pre><code>def student_exists(ID):
for student in students:
if student.id == ID:
return students.index(student)
return False
</code></pre>
<p>But then I realised that this can be an issue, since later in my code I did something like this:</p>
<pre><code>if student_exists(ID) == False
</code></pre>
<p>which will be true if the function returned index 0.</p>
<p>What can I use instead of false to represent the item not existing in the list?</p>
|
<p>You can return <code>None</code> if the item does not exist.</p>
<p>When you return <code>None</code>, you will avoid the location 0 problem. Note that when trying to ask if something is <code>None</code> you should use: <code>if x is None</code>.</p>
<p>Note that the <code>is</code> operator should be used for checking <code>None</code>, otherwise you can experience unexpected behavior (see <a href="https://stackoverflow.com/q/306313/6045800">"is" operator behaves unexpectedly with integers</a>)</p>
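<p>A minimal sketch of the pattern, reusing the function from the question:</p>
<pre><code>def student_exists(ID):
    for student in students:
        if student.id == ID:
            return students.index(student)
    return None

index = student_exists(ID)
if index is None:            # works even when the match is at position 0
    print("student not found")
else:
    print("found at index", index)
</code></pre>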
|
python
| 2 |
883 | 39,203,422 |
Scikit Learn Categorical data with random forests
|
<p>I am trying to work with the titanic survival challenge in kaggle <a href="https://www.kaggle.com/c/titanic" rel="nofollow">https://www.kaggle.com/c/titanic</a>.</p>
<p>I am not experienced in R, so I am using Python and Scikit Learn for the <strong>Random Forest Classifier</strong>.</p>
<p>I see many people using scikit-learn to convert their categorical features with many levels into dummy variables.</p>
<p>I don't understand the point of doing this, why can't we just map the levels into a numeric value and be done with it.</p>
<p>I also saw someone do the following:
there was a categorical feature <strong>Pclass</strong> with three levels; he created 3 dummy variables for it and dropped the one with the lowest survival rate. I couldn't understand this either; I thought decision trees didn't care about correlated features.</p>
|
<p>If you just map levels to numeric values, python will treat your values as numeric. That is, numerically <code>1<2</code> and so on, even if your levels were initially unordered. Think about the "distance" problem. The distance between 1 and 2 is 1; between 1 and 3 it is 2. But what were the original distances between your categorical values? For example, what are the distances between "banana", "peach" and "apple"? Do you suppose that they are all equal? </p>
<p>About the dummy variables: if you have 3 classes and create 3 dummy variables, they are not just correlated, they are linearly dependent. This is never good.</p>
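<p>For a concrete feel, here is a small sketch with made-up data using pandas (<code>drop_first</code> needs a reasonably recent pandas version):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'fruit': ['banana', 'peach', 'apple']})

# Label encoding imposes an artificial order/distance between the levels
df['fruit_code'] = df['fruit'].astype('category').cat.codes

# One-hot (dummy) encoding avoids that; drop_first=True drops one column
# to break the linear dependence mentioned above
dummies = pd.get_dummies(df['fruit'], prefix='fruit', drop_first=True)
print(pd.concat([df, dummies], axis=1))
</code></pre>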
|
python|scikit-learn|random-forest
| 6 |
884 | 55,309,793 |
Python - Enforce specific method signature for subclasses?
|
<p>I would like to create a class which defines a particular interface, and then require all subclasses to conform to this interface. For example, I would like to define a class</p>
<pre class="lang-py prettyprint-override"><code>class Interface:
def __init__(self, arg1):
pass
def foo(self, bar):
pass
</code></pre>
<p>and then be assured that if I am holding any element <code>a</code> which has type <code>A</code>, a subclass of <code>Interface</code>, then I can call <code>a.foo(2)</code> and it will work.</p>
<p>It looked like <a href="https://stackoverflow.com/questions/23255808/how-to-enforce-method-signature-for-child-classes">this question</a> almost addressed the problem, but in that case it is up to the <em>subclass</em> to explicitly change its metaclass.</p>
<p>Ideally what I'm looking for is something similar to Traits and Impls from Rust, where I can specify a particular Trait and a list of methods that trait needs to define, and then I can be assured that any object with that Trait has those methods defined.</p>
<p>Is there any way to do this in Python?</p>
|
<p>So, first, just to state the obvious - Python has a built-in mechanism to test for the <em>existence</em> of methods and <em>attributes</em> in derived classes - it just does not check their signature.</p>
<p>Second, a nice package to look at is <a href="https://zopeinterface.readthedocs.io/en/latest/" rel="noreferrer"><code>zope.interface</code></a>. Despite the <code>zope</code> namespace, it is a complete stand-alone package that allows really neat ways of having objects expose multiple interfaces, but just when needed - and then frees up the namespaces. It does involve some learning until one gets used to it, but it can be quite powerful and provides very nice patterns for large projects.</p>
<p>It was devised for Python 2, when Python had a lot fewer features than nowadays - and I think it does not perform automatic interface checking (one has to manually call a method to find out whether a class is compliant) - but automating that call would be easy, nonetheless.</p>
<p>Third, the linked accepted answer at <a href="https://stackoverflow.com/questions/23255808/how-to-enforce-method-signature-for-child-classes">How to enforce method signature for child classes?</a> almost works, and could be good enough with just one change. The problem with that example is that it hardcodes a call to <code>type</code> to create the new class, and does not pass <code>type.__new__</code> information about the metaclass itself. Replace the line:</p>
<pre><code>return type(name, baseClasses, d)
</code></pre>
<p>for:</p>
<pre><code>return super().__new__(cls, name, baseClasses, d)
</code></pre>
<p>And then, make the baseclass - the one defining your required methods use the metaclass - it will be inherited normally by any subclasses. (just use Python's 3 syntax for specifying metaclasses).</p>
<p>Sorry - that example is Python 2 - it requires change in another line as well, I better repost it:</p>
<pre><code>import inspect

class BadSignatureException(Exception):
    pass

# from https://stackoverflow.com/a/23257774/108205
class SignatureCheckerMeta(type):
def __new__(mcls, name, baseClasses, d):
#For each method in d, check to see if any base class already
#defined a method with that name. If so, make sure the
#signatures are the same.
for methodName in d:
f = d[methodName]
for baseClass in baseClasses:
try:
fBase = getattr(baseClass, methodName)
if not inspect.getargspec(f) == inspect.getargspec(fBase):
raise BadSignatureException(str(methodName))
except AttributeError:
#This method was not defined in this base class,
#So just go to the next base class.
continue
return super().__new__(mcls, name, baseClasses, d)
</code></pre>
<p>On reviewing that, I see that there is no mechanism in it to enforce that a method is <em>actually</em> implemented. I.e. if a method with the same name exists in the derived class, its signature is enforced, but if it does not exist at all in the derived class, the code above won't find out about it (and the method on the superclass will be called - that might be a desired behavior).</p>
<h1>The answer:</h1>
<p>Fourth -
Although that will work, it can be a bit rough - with it, <em>any</em> method that overrides another method in any superclass has to conform to its signature, and even compatible signatures would break. Maybe it would be nicer to build upon the existing <code>ABCMeta</code> and <code>@abstractmethod</code> mechanisms, as those already handle all the corner cases. Note however that this example is based on the code above and checks signatures at <em>class</em> creation time, while the abstract-class mechanism in Python performs its check when the class is instantiated. Leaving that untouched lets you work with a large class hierarchy, which might keep some abstract methods in intermediate classes, so that only the final, concrete classes have to implement all methods.
Just use this instead of <code>ABCMeta</code> as the metaclass for your interface classes, and mark the methods whose interface you want to check as <code>@abstractmethod</code> as usual. </p>
<pre><code>import inspect
from abc import ABCMeta

class M(ABCMeta):
def __init__(cls, name, bases, attrs):
errors = []
for base_cls in bases:
for meth_name in getattr(base_cls, "__abstractmethods__", ()):
orig_argspec = inspect.getfullargspec(getattr(base_cls, meth_name))
target_argspec = inspect.getfullargspec(getattr(cls, meth_name))
if orig_argspec != target_argspec:
errors.append(f"Abstract method {meth_name!r} not implemented with correct signature in {cls.__name__!r}. Expected {orig_argspec}.")
if errors:
raise TypeError("\n".join(errors))
super().__init__(name, bases, attrs)
</code></pre>
|
python|class|metaclass
| 7 |
885 | 55,291,859 |
Not all parameters were used in the SQL statement when using python and mysql
|
<p>Hi, I am doing the Python/MySQL part of this project. I initialize the database and try to create the table <code>record</code>, but it seems I cannot load data into the table. Can anyone here help me out with this?</p>
<pre><code>import mysql.connector
mydb = mysql.connector.connect( host="localhost",user="root",password="asd619248636",database="mydatabase")
mycursor = mydb.cursor()
mycursor.excute=("CREATE TABLE record (temperature FLOAT(20) , humidity FLOAT(20))")
sql = "INSERT INTO record (temperature,humidity) VALUES (%d, %d)"
val = (2.3,4.5)
mycursor.execute(sql,val)
mydb.commit()
print(mycursor.rowcount, "record inserted.")
</code></pre>
<p>and the error shows:
<code>mysql.connector.errors.ProgrammingError: Not all parameters were used in the SQL statement</code></p>
|
<p>Changing the following should fix your problem:</p>
<pre><code>sql = "INSERT INTO record (temperature,humidity) VALUES (%s, %s)"
val = ("2.3","4.5") # You can also use (2.3, 4.5)
mycursor.execute(sql,val)
</code></pre>
<p>The database API takes strings as arguments, and later converts them to the appropriate datatype. Your code is throwing an error because it isn't expecting <code>%d</code> or <code>%f</code> (int or float) datatypes.</p>
<p>For more info on this you can look <a href="http://mysql-python.sourceforge.net/MySQLdb.html#some-examples" rel="nofollow noreferrer">here</a></p>
|
python|mysql|python-3.x
| 4 |
886 | 37,506,824 |
syntax_error:update for dictionary
|
<p>How can I fix this?</p>
<pre><code># E.g. word_count("I am that I am") gets back a dictionary like:
# {'i': 2, 'am': 2, 'that': 1}
# Lowercase the string to make it easier.
# Using .split() on the sentence will give you a list of words.
# In a for loop of that list, you'll have a word that you can
# check for inclusion in the dict (with "if word in dict"-style syntax).
# Or add it to the dict with something like word_dict[word] = 1.
def word_count(string):
word_list = string.split()
word_dict = {}
for word in word_list:
if word in word_dict:
word_dict.update(word:word_dict(word)+1)
else:
word_dict[word]=1
return word_dict
</code></pre>
<p><a href="https://i.stack.imgur.com/cArP0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cArP0.png" alt="enter image description here"></a></p>
<p>Disclaimer: A total newbie in Python</p>
|
<p>To update a key in a dictionary, just assign to the key using <code>[...]</code> subscription syntax:</p>
<pre><code>word_dict[word] = word_dict[word] + 1
</code></pre>
<p>or even</p>
<pre><code>word_dict[word] += 1
</code></pre>
<p>Your attempt is not valid syntax, for two reasons:</p>
<ul>
<li><code>word_dict.update()</code> is a method call, everything inside the <code>(...)</code> call syntax must be a valid expression. <code>key: value</code> is not a stand-alone expression, that only is valid within a <code>{key: value}</code> dictionary display. <code>word_dict.update()</code> takes either a dictionary object, or a sequence of <code>(key, value)</code> pairs.</li>
<li><code>word_dict(word)</code> would try to <em>call</em> the dictionary rather than try to retrieve the value for the key <code>word</code>.</li>
</ul>
<p>Using <code>word_dict.update()</code> to update just <em>one</em> key is a little overkill, because it requires creating another dictionary or sequence. Either one of the following would work:</p>
<pre><code>word_dict.update({word: word_dict[word] + 1})
</code></pre>
<p>or</p>
<pre><code>word_dict.update([(word, word_dict[word] + 1)])
</code></pre>
<p>Note that the Python standard library comes with a better solution for counting words: the <a href="https://docs.python.org/2/library/collections.html#collections.Counter" rel="nofollow"><code>collections.Counter()</code> class</a>:</p>
<pre><code>from collections import Counter
def word_count(string):
return Counter(string.split())
</code></pre>
<p>A <code>Counter()</code> is a subclass of <code>dict</code>.</p>
|
python|string|dictionary
| 2 |
887 | 7,438,666 |
Python opencv not receiving camera feed
|
<p>I've been trying to use the SimpleCV (<a href="http://simplecv.org" rel="nofollow">www.simplecv.org</a>) module to run image recognition and manipulation. Unfortunately, my incoming video feed has been quite finicky, and I'm not sure what I did wrong. Just using some basic sample code:</p>
<pre><code>import cv

window = cv.NamedWindow("camera", 1)
capture = cv.CreateCameraCapture(0)
width = int(cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH))
height = int(cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT))
while 1:
img = cv.QueryFrame(capture)
cv.ShowImage("camera", img)
k = cv.WaitKey(1)
if(k == 102):
cv.destroyWindow("camera")
break
</code></pre>
<p>Which works perfectly when I plug in my Logitech Webcam 500. However, when I attempt to use my Vimicro Altair camera, I get a grey screen, and when saving to file, the file is empty.</p>
<p>I also attempted to use SimpleCV code, based off their <a href="http://simplecv.org/doc/cookbook.html#using-a-camera-kinect-or-virtualcamera" rel="nofollow">cookbook</a> along the lines of:</p>
<pre><code>mycam = Camera()
img = mycam.getImage()
</code></pre>
<p>which was equally unsuccessful; however, instead of returning no data, it simply returned an image that was completely black.</p>
<p>I'm at quite a loss of what is causing this, I tried the exact same system on my laptop, which failed to even get an image from the Logitech cam. I'm running Windows 7 64-bit with Python 2.7 and SimpleCV 1.1.</p>
<p>Thanks</p>
|
<p>I'm one of the SimpleCV developers. It appears you are trying to use the standard python openCV wrapper.</p>
<p>What I recommend doing is just run the example here:
<a href="https://github.com/sightmachine/SimpleCV/blob/develop/SimpleCV/examples/display/simplecam.py" rel="nofollow">https://github.com/sightmachine/SimpleCV/blob/develop/SimpleCV/examples/display/simplecam.py</a></p>
<p>Or here is the code as well:</p>
<pre><code>import time, webbrowser
from SimpleCV import *
#create JPEG streamers
js = JpegStreamer(8080)
cam = Camera()
cam.getImage().save(js)
webbrowser.open("http://localhost:8080", 2)
while (1):
i = cam.getImage()
i.save(js)
time.sleep(0.01) #yield to the webserver
</code></pre>
|
python|opencv|camera|simplecv
| 5 |
888 | 38,584,184 |
Imputer on some Dataframe columns in Python
|
<p>I am learning how to use Imputer on Python.</p>
<p>This is my code:</p>
<pre><code>df=pd.DataFrame([["XXL", 8, "black", "class 1", 22],
["L", np.nan, "gray", "class 2", 20],
["XL", 10, "blue", "class 2", 19],
["M", np.nan, "orange", "class 1", 17],
["M", 11, "green", "class 3", np.nan],
["M", 7, "red", "class 1", 22]])
df.columns=["size", "price", "color", "class", "boh"]
from sklearn.preprocessing import Imputer
imp=Imputer(missing_values="NaN", strategy="mean" )
imp.fit(df["price"])
df["price"]=imp.transform(df["price"])
</code></pre>
<p>However this raises the following error:
<code>ValueError: Length of values does not match length of index</code></p>
<p>What's wrong with my code???</p>
<p>Thanks for helping</p>
|
<p>This is because <code>Imputer</code> is usually used with DataFrames rather than Series. A possible solution is:</p>
<pre><code>imp=Imputer(missing_values="NaN", strategy="mean" )
imp.fit(df[["price"]])
df["price"]=imp.transform(df[["price"]]).ravel()
# Or even
imp=Imputer(missing_values="NaN", strategy="mean" )
df["price"]=imp.fit_transform(df[["price"]]).ravel()
</code></pre>
|
python|scikit-learn|missing-data|imputation
| 17 |
889 | 40,446,650 |
Why is it dataframe.head() in python and head(dataframe) in R? Why is python like this in general?
|
<p>Beginner here. Shouldn't the required variables be passed as arguments to the function? Why is it <code>variable.function()</code> in Python?</p>
|
<p>It's simple:</p>
<p><code>foo.bar()</code> does the same thing as <code>foo.__class__.bar(foo)</code></p>
<p>so it <em>is</em> a function, and the argument <em>is</em> passed to it, but the function is stored attached to the object via its class (type), so to say. The <code>foo.bar()</code> notation is just shorthand for the above.</p>
<p>The advantage is that different functions of the same name can be attached to many objects, depending on the object type. So the caller of <code>foo.bar()</code> is calling whatever function is attached to the object by the name "bar". This is called polymorphism and can be used for all sorts of things, such as generic programming. Such functions are called methods. </p>
<p>The style is called object orientation, although object orientation, as well as generic programming, can also be achieved using more familiar-looking function (method) call notation (e.g. multimethods in Common Lisp and Julia, or classes in Haskell). </p>
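<p>A tiny sketch of that equivalence (made-up class, just for illustration):</p>
<pre><code>class Dog:
    def speak(self):
        return "woof"

d = Dog()
print(d.speak())         # the usual method-call syntax
print(Dog.speak(d))      # the same function, looked up on the class,
                         # with the instance passed explicitly
print(type(d).speak(d))  # equivalent to foo.__class__.bar(foo) above
</code></pre>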
|
python
| 0 |
890 | 26,169,593 |
Adding default file directory to FileDialog in Traits
|
<p>I am using the FileDialog class within TraitsUI, which works pretty well, except that, for the life of me, I have not been able to figure out how to pass a <em>default</em> directory for the dialogue to use. </p>
<p>Ideally, the dialogue box would open at a point in the local file system other than the top of the tree...</p>
<p>Any insight or direction very gratefully appreciated from a newbie.</p>
<p>Base code pretty generic/standard as follows.</p>
<pre><code>demo_id = 'traitsui.demo.standard_editors.file_dialog.file_info'
class FileDialog ( HasTraits ):
# The name of the selected file:
file_name = File
# The button used to display the file dialog:
open = Button( 'Open...' )
#-- Traits View Definitions ------------------------------------------------
view = View(
HGroup(
Item( 'open', show_label = False ),
'_',
Item( 'file_name', style = 'readonly', springy = True )
),
width = 0.5
)
#-- Traits Event Handlers --------------------------------------------------
def _open_changed ( self ):
""" Handles the user clicking the 'Open...' button.
"""
file_name = open_file( extensions = FileInfo(), id = demo_id )
if file_name != '':
self.file_name = file_name
</code></pre>
|
<p>I suggest <em>not</em> using the TraitsUI FileDialog. I think you'll do better with pyface.api.FileDialog (toolkit-specific; for the API, see <a href="https://github.com/enthought/pyface/blob/master/pyface/i_file_dialog.py" rel="nofollow">https://github.com/enthought/pyface/blob/master/pyface/i_file_dialog.py</a>).</p>
|
python|file|enthought|traitsui
| 2 |
891 | 26,086,365 |
Why is PySide's exception handling extending this object's lifetime?
|
<p><strong>tl;dr -- In a PySide application, an object whose method throws an exception will remain alive even when all other references have been deleted. Why? And what, if anything, should one do about this?</strong></p>
<p>In the course of building a simple CRUDish app using a Model-View-Presenter architecture with a PySide GUI, I discovered some curious behavior. In my case:</p>
<ul>
<li>The interface is divided into multiple Views -- i.e., each tab page displaying a different aspect of data might be its own class of View</li>
<li>Views are instantiated first, and in their initialization, they instantiate their own Presenter, keeping a normal reference to it</li>
<li>A Presenter receives a reference to the View it drives, but stores this as a weak reference (<code>weakref.ref</code>) to avoid circularity</li>
<li>No other strong references to a Presenter exist. (Presenters can communicate indirectly with the <code>pypubsub</code> messaging library, but this also stores only weak references to listeners, and is not a factor in the MCVE below.)</li>
<li>Thus, in normal operation, when a View is deleted (e.g., when a tab is closed), its Presenter is subsequently deleted as its reference count becomes 0</li>
</ul>
<p>However, a Presenter of which a method has thrown an exception does not get deleted as expected. The application continues to function, because PySide employs <a href="https://stackoverflow.com/questions/14493081/pyqt-event-handlers-snarf-exceptions">some magic</a> to catch exceptions. The Presenter in question continues to receive and respond to any View events bound to it. But when the View is deleted, the exception-throwing Presenter remains alive until the whole application is closed. An MCVE (<a href="http://pastebin.com/CvVFnjAJ" rel="nofollow noreferrer">link for readability</a>):</p>
<pre><code>import logging
import sys
import weakref
from PySide import QtGui
class InnerPresenter:
def __init__(self, view):
self._view = weakref.ref(view)
self.logger = logging.getLogger('InnerPresenter')
self.logger.debug('Initializing InnerPresenter (id:%s)' % id(self))
def __del__(self):
self.logger.debug('Deleting InnerPresenter (id:%s)' % id(self))
@property
def view(self):
return self._view()
def on_alert(self):
self.view.show_alert()
def on_raise_exception(self):
raise Exception('From InnerPresenter (id:%s)' % id(self))
class OuterView(QtGui.QMainWindow):
def __init__(self, *args, **kwargs):
super(OuterView, self).__init__(*args, **kwargs)
self.logger = logging.getLogger('OuterView')
# Menus
menu_bar = self.menuBar()
test_menu = menu_bar.addMenu('&Test')
self.open_action = QtGui.QAction('&Open inner', self, triggered=self.on_open, enabled=True)
test_menu.addAction(self.open_action)
self.close_action = QtGui.QAction('&Close inner', self, triggered=self.on_close, enabled=False)
test_menu.addAction(self.close_action)
def closeEvent(self, event, *args, **kwargs):
self.logger.debug('Exiting application')
event.accept()
def on_open(self):
self.setCentralWidget(InnerView(self))
self.open_action.setEnabled(False)
self.close_action.setEnabled(True)
def on_close(self):
self.setCentralWidget(None)
self.open_action.setEnabled(True)
self.close_action.setEnabled(False)
class InnerView(QtGui.QWidget):
def __init__(self, *args, **kwargs):
super(InnerView, self).__init__(*args, **kwargs)
self.logger = logging.getLogger('InnerView')
self.logger.debug('Initializing InnerView (id:%s)' % id(self))
self.presenter = InnerPresenter(self)
# Layout
layout = QtGui.QHBoxLayout(self)
alert_button = QtGui.QPushButton('Alert!', self, clicked=self.presenter.on_alert)
layout.addWidget(alert_button)
raise_button = QtGui.QPushButton('Raise exception!', self, clicked=self.presenter.on_raise_exception)
layout.addWidget(raise_button)
self.setLayout(layout)
def __del__(self):
super(InnerView, self).__del__()
self.logger.debug('Deleting InnerView (id:%s)' % id(self))
def show_alert(self):
QtGui.QMessageBox(text='Here is an alert').exec_()
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG)
app = QtGui.QApplication(sys.argv)
view = OuterView()
view.show()
sys.exit(app.exec_())
</code></pre>
<p>Open and close the inner view, and you'll see both view and presenter are deleted as expected. Open the inner view, click the button to trigger an exception on the presenter, then close the inner view. The view will be deleted, but the presenter won't until the application exits.</p>
<p><strong>Why?</strong> Presumably whatever it is that catches all exceptions on behalf of PySide is storing a reference to the object that threw it. Why would it need to do that?</p>
<p><strong>How</strong> should I proceed (aside from writing code that never causes exceptions, of course)? I have enough sense not to rely on <code>__del__</code> for resource management. I get that I have no right to expect anything subsequent to a caught-but-not-really-handled exception to go ideally but this just strikes me as unnecessarily ugly. How should I approach this in general?</p>
|
<p>The problem is <code>sys.last_traceback</code> and <code>sys.last_value</code>.</p>
<p>When an exception is raised interactively, and this seems to be what is emulated, the last exception and its traceback are stored in <code>sys.last_value</code> and <code>sys.last_traceback</code> respectively.</p>
<p>Doing</p>
<pre><code>del sys.last_value
del sys.last_traceback
# for consistency, see
# https://docs.python.org/3/library/sys.html#sys.last_type
del sys.last_type
</code></pre>
<p>will free the memory.</p>
<p>It's worth noting that at most <em>one</em> exception and traceback pair can get cached. This means that, because you're sane and don't rely on <code>del</code>, there isn't a massive amount of damage to be done.</p>
<p>But if you want to reclaim the memory, just delete those values. </p>
|
python|exception-handling|garbage-collection|pyside
| 3 |
892 | 32,342,729 |
Pass object along with object method to function
|
<p>I know that in Python, if, say, you want to pass two parameters to a function - one an object, and another that specifies the instance method that must be called on the object - you can easily pass the object itself along with the name of the method (as a string), and then use the <code>getattr</code> function on the object and the string to call the method on the object. </p>
<p>Now, I want to know if there is a way (as in C++, for those who know) where you pass the object, as well as the actual method (or rather a reference to the method, but not the method name as a string). An example:</p>
<pre><code>def func(obj, method):
obj.method();
</code></pre>
<p>I have tried passing it as follows:</p>
<pre><code>func(obj, obj.method)
</code></pre>
<p>or as</p>
<pre><code>func(obj, classname.method)
</code></pre>
<p>but neither works (the second one I know was a bit of a long shot, but I tried it anyway)</p>
<p>I know that you can also just define a function that just accepts the method, then call it as </p>
<pre><code>func2(obj.method)
</code></pre>
<p>but I am specifically asking about instances where you want a reference to the object itself, as well as a <em>reference</em> to a desired class instance (not static) method to be called on the object.</p>
<p>EDIT:</p>
<p>For those that are interested, I found quite an elegant way 'inspired' by the accepted answer below. I simply defined <code>func</code> as</p>
<pre><code>def func(obj, method):
#more code here
method(obj, parameter); #call method on object obj
</code></pre>
<p>and called it as</p>
<pre><code>func(obj_ref, obj_class.method);
</code></pre>
<p>where obj_class is the actual class that obj_ref is an instance of.</p>
|
<p>A method is just a function with the first parameter bound to an instance. As such you can do things like: </p>
<pre><code># normal_call
result = "abc".startswith("a")
# creating a bound method
method = "abc".startswith
result = method("a")
# using the raw function
function = str.startswith
string = "abc"
result = function(string, "a")
</code></pre>
|
python|class|object|parameters
| 9 |
893 | 32,410,103 |
Django rest framework is taking too long to return nested serialized data
|
<p>We have four related models. When returning the queryset, serializing the data is too slow (serializer.data). Below are our models and serializer.</p>
<p>Why is the Django nested serializer taking so long to return the rendered response? What are we doing wrong here?</p>
<p>Note: our DB is on AWS; when connected from an EC2 instance it is OK, but when tried from my localhost it is insanely slow. And the size of the JSON it returns is 700KB.</p>
<p>models.py</p>
<pre><code>class ServiceType(models.Model):
service_name = models.CharField(max_length = 100)
description = models.TextField()
is_active = models.BooleanField(default = 1)
class Service(models.Model):
service_name = models.CharField(max_length = 100)
service_type = models.ForeignKey(ServiceType, related_name = "type_of_service")
min_duration = models.IntegerField() ##duration in mins
class StudioProfile(models.Model):
studio_group = models.ForeignKey(StudioGroup, related_name = "studio_of_group")
name = models.CharField(max_length = 120)
class StudioServices(models.Model):
studio_profile = models.ForeignKey(StudioProfile, related_name = "studio_detail_for_activity")
service = models.ForeignKey(Service, related_name = "service_in_studio")
class StudioPicture(models.Model):
studio_profile = models.ForeignKey(StudioProfile, related_name = "pic_of_studio")
picture = models.ImageField(upload_to = 'img_gallery', null = True, blank = True)
</code></pre>
<p>serializers.py</p>
<pre><code>class ServiceTypeSerializer(serializers.ModelSerializer):
class Meta:
model = ServiceType
fields = ('id', 'service_name')
class ServiceSerializer(serializers.ModelSerializer):
service_type = ServiceTypeSerializer()
class Meta:
model = Service
fields = ('id', 'service_type', 'service_name')
class StudioServicesSerializer(serializers.ModelSerializer):
service = ServiceSerializer()
class Meta:
model = StudioServices
fields = ('service','price','is_active','mins_takes')
class StudioPictureSerializer(serializers.ModelSerializer):
class Meta:
model = StudioPicture
fields = ('picture',)
class StudioProfileSerializer(serializers.ModelSerializer):
studio_detail_for_activity = StudioServicesSerializer(many = True)
pic_of_studio = StudioPictureSerializer(many = True)
class Meta:
model = StudioProfile
fields = ('id', 'name','studio_detail_for_activity','pic_of_studio')
</code></pre>
<p>views.py</p>
<pre><code>class StudioProfileView(ListAPIView):
serializer_class = StudioProfileSerializer
model = StudioProfile
def get_queryset(self):
try:
queryset = self.model.objects.all()
except Exception ,e:
logger_error.error(traceback.format_exc())
return None
else:
return queryset
</code></pre>
|
<p>Did you check which part is the slow one? For example, how many records do you have in that DB? I would try to run the query and check whether the query itself is slow, then check the serializers with fewer than 100 records, and so on.</p>
<p>I'd recommend reading this article <a href="http://www.dabapps.com/blog/api-performance-profiling-django-rest-framework/" rel="nofollow">http://www.dabapps.com/blog/api-performance-profiling-django-rest-framework/</a> in order to evaluate how to profile your API.</p>
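<p>As a rough first cut (just a sketch, not from the article; note that <code>connection.queries</code> is only populated when <code>DEBUG=True</code>), you could time the query and the serialization separately:</p>
<pre><code>import time
from django.db import connection, reset_queries

reset_queries()
t0 = time.time()
qs = list(StudioProfile.objects.all())                # force the main query
t1 = time.time()
data = StudioProfileSerializer(qs, many=True).data    # serialization only
t2 = time.time()

print("query: %.3fs, serialize: %.3fs, %d SQL statements"
      % (t1 - t0, t2 - t1, len(connection.queries)))
</code></pre>
<p>If the SQL statement count explodes, that points at an N+1 problem in the nested serializers rather than at the network.</p>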
<p>Regards</p>
|
python|django|api|rest|django-rest-framework
| 2 |
894 | 44,050,853 |
Pandas json_normalize and null values in JSON
|
<p>I have this sample JSON</p>
<pre><code>{
"name":"John",
"age":30,
"cars": [
{ "name":"Ford", "models":[ "Fiesta", "Focus", "Mustang" ] },
{ "name":"BMW", "models":[ "320", "X3", "X5" ] },
{ "name":"Fiat", "models":[ "500", "Panda" ] }
]
}
</code></pre>
<p>When I need to convert JSON to pandas DataFrame I use following code </p>
<pre><code>import json
from pandas.io.json import json_normalize
from pprint import pprint
with open('example.json', encoding="utf8") as data_file:
data = json.load(data_file)
normalized = json_normalize(data['cars'])
</code></pre>
<p>This code works well, but in the case of some empty cars (null values) I am not able to normalize the JSON.</p>
<p>Example of json</p>
<pre><code>{
"name":"John",
"age":30,
"cars": [
{ "name":"Ford", "models":[ "Fiesta", "Focus", "Mustang" ] },
null,
{ "name":"Fiat", "models":[ "500", "Panda" ] }
]
}
</code></pre>
<p>Error that was thrown</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'keys'
</code></pre>
<p>I tried to ignore errors in json_normalize, but it didn't help:</p>
<pre><code>normalized = json_normalize(data['cars'], errors='ignore')
</code></pre>
<p>How should I handle null values in JSON?</p>
|
<p>You can fill <code>cars</code> with empty dicts to prevent this error:</p>
<pre><code>data['cars'] = [{} if c is None else c for c in data['cars']]
</code></pre>
|
python|json|pandas
| 10 |
895 | 44,129,680 |
extracting data from json using python
|
<p>Extracting Data from JSON</p>
<p>The program will prompt for a URL, read the JSON data from that URL using urllib, then parse and extract the comment counts from the JSON data, and compute the sum of the numbers in the file.</p>
<p>Sample data: <a href="http://python-data.dr-chuck.net/comments_42.json" rel="nofollow noreferrer">http://python-data.dr-chuck.net/comments_42.json</a> (Sum=2553)</p>
<p>Data Format
The data consists of a number of names and comment counts in JSON as follows:</p>
<pre><code>{
comments: [
{
name: "Matthias"
count: 97
},
{
name: "Geomer"
count: 97
}
...
]
}
</code></pre>
<p>Basically, the JSON file reads as a dictionary. The second element of the dictionary is a list; this list contains dictionaries, and I need to find values from them. </p>
<p>My code, where I am stuck, is:</p>
<pre><code>import json
import urllib
total = 0
url='http://python-data.dr-chuck.net/comments_42.json'
uh=urllib.urlopen(url).read()
info =json.loads(uh)
for items in info[1]:
#print items
print items[1:]
</code></pre>
|
<p>You could try:</p>
<pre><code>import json
import urllib
total = 0
url='http://python-data.dr-chuck.net/comments_42.json'
uh=urllib.urlopen(url).read()
info =json.loads(uh)
count_values = [ el['count'] for el in info['comments'] ]
name_values = [ el['name'] for el in info['comments'] ]
print count_values
print name_values
</code></pre>
<p>output of count_values:</p>
<pre><code>[97, 97, 90, 90, 88, 87, 87, 80, 79, 79, 78, 76, 76, 72, 72, 66, 66, 65, 65, 64, 61, 61, 59, 58, 57, 57, 54, 51, 49, 47, 40, 38, 37, 36, 36, 32, 25, 24, 22, 21, 19, 18, 18, 14, 12, 12, 9, 7, 3, 2]
</code></pre>
<p>output of name_values:</p>
<pre><code>[u'Romina', u'Laurie', u'Bayli', u'Siyona', u'Taisha', u'Alanda', u'Ameelia', u'Prasheeta', u'Asif', u'Risa', u'Zi', u'Danyil', u'Ediomi', u'Barry', u'Lance', u'Hattie', u'Mathu', u'Bowie', u'Samara', u'Uchenna', u'Shauni', u'Georgia', u'Rivan', u'Kenan', u'Hassan', u'Isma', u'Samanthalee', u'Alexa', u'Caine', u'Grady', u'Anne', u'Rihan', u'Alexei', u'Indie', u'Rhuairidh', u'Annoushka', u'Kenzi', u'Shahd', u'Irvine', u'Carys', u'Skye', u'Atiya', u'Rohan', u'Nuala', u'Maram', u'Carlo', u'Japleen', u'Breeanna', u'Zaaine', u'Inika']
</code></pre>
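<p>To get the sum the assignment asks for, you can then just sum the extracted counts:</p>
<pre><code>print sum(count_values)   # 2553 for the sample file
</code></pre>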
|
python
| 1 |
896 | 44,275,166 |
Simpler way with datetime time deltas?
|
<p>and thanks in advance! I've got a function I wrote that generates and appends a url to a list in the form of "<a href="http://www.examplesite.com/" rel="nofollow noreferrer">http://www.examplesite.com/</a>'year' + '-' + 'month'", appending a string format of the given year for each month. The function works just fine for what I'm trying to do, but I'm wondering if there's a simpler way to go about it using Python 3's datetime module, possibly working with time deltas.</p>
<pre><code> source = 'https://www.examplesite.com/'
year = 2017
month = ['12', '11', '10', '09', '08', '07', '06', '05', '04', '03', '02', '01']
while year >= 1989:
for entry in month:
         page = source + str(year) + '-' + entry
pageRepository.append(page)
year -= 1
</code></pre>
|
<p>You have to subtract 1 to decrease the year, even when using a datetime object:</p>
<pre><code>>>> from datetime import date
>>> print date.today().year - 1
</code></pre>
<p>result is 2016. I think the way you process year is good enough.</p>
<p>I just want to simplify the months, using <code>range()</code> rather than a hard-coded month list:</p>
<pre><code>>>> for month in range(12,0,-1):
... str(month).zfill(2)
...
'12'
'11'
'10'
'09'
'08'
'07'
'06'
'05'
'04'
'03'
'02'
'01'
</code></pre>
<p><strong>str.zfill(width):</strong></p>
<blockquote>
<p>Return a copy of the string left filled with ASCII '0' digits to make a string of length width.</p>
</blockquote>
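<p>Putting the two together, a sketch of the whole loop without the hard-coded list (reusing the names from the question):</p>
<pre><code>source = 'https://www.examplesite.com/'
pageRepository = []
for year in range(2017, 1988, -1):        # 2017 down to and including 1989
    for month in range(12, 0, -1):
        pageRepository.append(source + str(year) + '-' + str(month).zfill(2))
</code></pre>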
|
python|python-3.x|datetime
| 0 |
897 | 44,254,759 |
Ajax query not working in python django?
|
<p>I want to change the status of data coming from the table, but it seems like I have messed up some code in it.</p>
<p>my ajax request:-</p>
<pre><code>function changeStatusDataById(object) {
var baseURL = location.protocol + '//' + location.hostname + (location.port ? ':' + location.port : '');
var r = confirm("Are You sure we want to change status ?");
if (r == true) {
var requestData = {};
var action = object.getAttribute("action");
var id = object.getAttribute("id");
requestData.action = action;
requestData.id = id;
$.ajax({
url: baseURL + 'promoted-user/list/changeStatus/',
method: 'POST',
dataType: "json",
contentType: "application/json",
data: JSON.stringify(requestData),
beforeSend: function () {
var text = 'changing status . please wait..';
ajaxLoaderStart(text);
},
success: function (data) {
ajaxLoaderStop();
location.reload();
},
error: function (jqXHR, ex) {
ajaxLoaderStop();
}
});
}
return false;
}
</code></pre>
<p>my django url:-</p>
<pre><code>url(r'^promoted-user/list/changeStatus/$', delete.promoter_change_status, name='promoter-change-status')
</code></pre>
<p>my views:-</p>
<pre><code>@login_required
@csrf_exempt
def promoter_change_status(request):
response_data = dict()
message = ''
status = "ERROR"
if request.method == "GET":
message = "GET method is not allowed"
if request.method == "DELETE":
message = "Delete method is not allowed"
if request.method == "POST":
request_data = body_loader(request.body)
print 'hello'
try:
action = request_data['action']
id = request_data['id']
if action is not None and id is not None and action != '' and id != '':
status = "OK"
message = "Status Changed successfully........."
if action == "newsDelete":
object_data = News.objects.using("cms").get(id=id)
object_data.status = not object_data.status
object_data.save()
messages.success(request, 'Status Changed successfully')
else:
message = "action and id is required field................."
except ObjectDoesNotExist:
status = "ERROR"
message = "id does not exist..........."
except Exception as e:
print e
message = e.message + " is required field................."
response_data['message'] = message
response_data['status'] = status
return HttpResponse(json.dumps(response_data))
</code></pre>
<p>calling ajax on td of table:-</p>
<pre><code><td class="text-center">
<a href="#" class="fg_red changeStatusDataById" data-toggle="modal"
action="{{ object_name }}" id="{{ item.newsId.id }}">
<i class="fa fa-trash"></i>
</a>
</td>
</code></pre>
<p>but it's not working.
Even the 'hello' is not printed from my view.</p>
|
<p>I was just missing a / in the URL when I was calling Ajax.</p>
<pre><code> $.ajax({
url: baseURL + '/promoted-user/list/changeStatus/',
method: 'POST',
dataType: "json",
contentType: "application/json",
data: JSON.stringify(requestData),
beforeSend: function () {
var text = 'changing status . please wait..';
ajaxLoaderStart(text);
},
</code></pre>
<p>The rest of the code is fine.</p>
|
jquery|python|ajax|django
| 0 |
898 | 32,885,411 |
Access coastal outlines (e.g. from Basemap, or somewhere else) without installing Basemap
|
<p>I would like to have polygons or vertices of coastlines on the Earth to manipulate in Blender (and in stand-alone Python), but I would like to avoid installing packages into each of the multiple Pythons on my computer. Basically it looks a bit tricky to do once, much less four times.</p>
<p>All I want is points along coastline contours, say at 1 or even 10 kilometer (1000m or 10000m) resolution. I'm assuming they would be in latitude/longitude, and in that case I would just convert to x, y, z in space myself.</p>
<p>I've downloaded Basemap - is there any way I can access the contours directly in the <code>data</code> folder?</p>
<p>An alternate data source would also be acceptable.</p>
|
<p>I found a simple solution which does not involve Basemaps or the like, thanks to the answer in GIS.stackexchange <a href="https://gis.stackexchange.com/q/164930/60078">here</a></p>
<p>I am reposting some of the info here:</p>
<p><em>The answer by @artwork21 is the accepted answer. I am just adding some supplementary information that others may find useful.</em></p>
<p><em>I downloaded some coastline data from the link provided in the answer. In this example, I used physical vector data from <a href="http://www.naturalearthdata.com/downloads/50m-physical-vectors/" rel="nofollow noreferrer">here</a>. Then reading about <a href="https://code.google.com/p/pyshp/" rel="nofollow noreferrer">pyshp</a> I just copy/pasted the script <a href="http://pyshp.googlecode.com/svn/trunk/shapefile.py" rel="nofollow noreferrer">shapefile.py</a> and then did the following:</em></p>
<pre><code>coast = Reader("ne_50m_coastline") # defined in shapefile.py
plt.figure()
for shape in coast.shapes()[:20]: # first 20 shapes out of 1428 total
x, y = zip(*shape.points)
plt.plot(x, y)
plt.xlim(110, 180)
plt.ylim(-40, 20)
plt.savefig("Australia Australia Australia Australia we love ya' Amen")
# https://www.youtube.com/watch?v=_f_p0CgPeyA&feature=youtu.be&t=121
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/ByCHC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ByCHC.png" alt="Australia!"></a></p>
|
python|blender|matplotlib-basemap
| 0 |
899 | 34,827,141 |
Evaluating the performance gain from multi-threading in python
|
<p>I tried to compare the performance gain from parallel computing using the threading module against normal sequential computing, but couldn't find any real difference. Here's what I did:</p>
<pre><code>import time, threading, Queue
q=Queue.Queue()
def calc(_range):
exponent=(x**5 for x in _range)
q.put([x**0.5 for x in exponent])
def calc1(_range):
exponent=(x**5 for x in _range)
return [x**0.5 for x in exponent]
def multithreds(threadlist):
d=[]
for x in threadlist:
t=threading.Thread(target=calc, args=([x]))
t.start()
t.join()
s=q.get()
d.append(s)
return d
threads=[range(100000), range(200000)]
start=time.time()
#out=multithreads(threads)
out1=[calc1(x)for x in threads]
end=time.time()
print end-start
</code></pre>
<p>Timing using threading:<code>0.9390001297</code>
Timing running in sequence:<code>0.911999940872</code></p>
<p>The timing running in sequence was consistently lower than using multithreading.
I have a feeling there's something wrong with my multithreading code.</p>
<p>Can someone point me in the right direction please.
Thanks.</p>
|
<p>The reference implementation of Python (CPython) has a so-called global interpreter lock (GIL), under which only one thread executes Python byte-code at a time. You can switch, for example, to IronPython, which has no GIL, or you can take a look at the multiprocessing module, which spawns several Python processes that can execute your code independently. In some scenarios using threads in Python can even be slower than a single thread, because the context switches between threads on the CPU also introduce some overhead.</p>
<p>Take a look <a href="http://python-notes.curiousefficiency.org/en/latest/python3/multicore_python.html" rel="nofollow">at this page</a> for some deeper insights and help.</p>
<p>If you want to dive much deeper into this topic I can highly recommend <a href="https://www.youtube.com/watch?v=Obt-vMVdM8s" rel="nofollow">this talk</a> by David Beazley.</p>
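<p>Separately, note that the posted benchmark never actually runs the threads concurrently: each thread is joined immediately after it is started, so the work is sequential even before the GIL comes into play. A sketch of the intended pattern (same names as in the question):</p>
<pre><code>def multithreads(threadlist):
    threads = []
    for x in threadlist:
        t = threading.Thread(target=calc, args=(x,))
        t.start()
        threads.append(t)
    for t in threads:          # join only after every thread has been started
        t.join()
    return [q.get() for _ in threads]
</code></pre>
<p>Because of the GIL this still won't beat the sequential version for CPU-bound work, but at least the threads then overlap.</p>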
|
python|multithreading
| 1 |