Unnamed: 0 (int64) | id (int64) | title (string, 10-150 chars) | question (string) | answer (string) | tags (string) | score (int64)
---|---|---|---|---|---|---
1,200 | 67,843,528 |
Searching in JSON list
|
<p>I have this JSON code:</p>
<pre><code>{"intents": [
{"tag": "greeting",
"patterns": ["Hi", "How are you", "Is anyone there?", "Hello", "Good day"],
"responses": ["Hello, thanks for visiting", "Good to see you again", "Hi there, how can I help?"],
"context_set": ""
},
{"tag": "goodbye",
"patterns": ["Bye", "See you later", "Goodbye"],
"responses": ["See you later, thanks for visiting", "Have a nice day", "Bye! Come back again soon."]
},
{"tag": "thanks",
"patterns": ["Thanks", "Thank you", "That's helpful"],
"responses": ["Happy to help!", "Any time!", "My pleasure"]
},
{"tag": "hours",
"patterns": ["What hours are you open?", "What are your hours?", "When are you open?" ],
"responses": ["We're open every day 9am-9pm", "Our hours are 9am-9pm every day"]
},
{"tag": "payments",
"patterns": ["Do you take credit cards?", "Do you accept Mastercard?", "Are you cash only?" ],
"responses": ["We accept VISA, Mastercard and AMEX", "We accept most major credit cards"]
},
{"tag": "opentoday",
"patterns": ["Are you open today?", "When do you open today?", "What are your hours today?"],
"responses": ["We're open every day from 9am-9pm", "Our hours are 9am-9pm every day"]
}
]
}
</code></pre>
<p>And I have this Python code so far:</p>
<pre><code>import json

inp = input()
with open('Python\intents.json') as file:
    data = json.load(file)

for intent in data['intents']:
    for pattern in intent['patterns']:
        if inp in pattern:
</code></pre>
<p>What I want is, if input is found in one of those patterns, it should print randomized responses from the same tag section based on the input. If I greet, I should receive a greetings back. If I say bye, I should get a bye back, etc.</p>
<p>Thanks in advance.</p>
|
<p>You can use <code>random.choice</code> to select a random value from a list.</p>
<p>Here you go:</p>
<pre><code>import json
import random

inp = input()
with open('Python\intents.json') as file:
    data = json.load(file)

for intent in data['intents']:
    if inp in intent['patterns']:
        print(random.choice(intent['responses']))
        break
</code></pre>
<p>Couple of improvements I would suggest:</p>
<ol>
<li>Separate data loading from processing</li>
<li>Make input case insensitive (you can do this by lower casing your initial patterns and lower casing the input)</li>
</ol>
<pre><code>import json
import random

# Load
with open('Python\intents.json') as file:
    data = json.load(file)

# Parse
for intent in data['intents']:
    intent['patterns'] = [pattern.lower() for pattern in intent['patterns']]

# Process
inp = input().lower()
for intent in data['intents']:
    if inp in intent['patterns']:
        print(random.choice(intent['responses']))
        break
</code></pre>
<blockquote>
<p>Do you mind telling me how to do that input should contain minimum two words or more which are also in "patterns? Because I am doing this chatbot in German and our language is a little bit tricky. For example "lower" and "price". Some people ask in Ebay "Hello, can you lower the price" and if I input "Can you lower the price" and patterns include "lower" and "price", it should do x.</p>
</blockquote>
<p>For this you can use sets to determine the number of same words between input and pattern.</p>
<pre><code>import json
import random

def find_response(inp):
    inp = set(inp.lower().split(' '))
    for intent in data['intents']:
        for processed_pattern in intent['processed_patterns']:
            same_words_count = len(inp.intersection(processed_pattern))
            if same_words_count &gt;= 2:
                return random.choice(intent['responses'])

with open('Python\intents.json') as file:
    data = json.load(file)

for intent in data['intents']:
    intent['processed_patterns'] = [set(pattern.lower().split()) for pattern in intent['patterns']]

inp = input()
response = find_response(inp)
print(response)
</code></pre>
<p>This solution will however lead to a whole series of problems with special characters like <code>.,!?</code> and with general words which can be found in any pattern like <code>the, I, how, when ...</code>.</p>
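<p>One way to soften those problems (a sketch that is not part of the original answer) is to strip punctuation before building the word sets, for example with <code>str.translate</code>:</p>

```python
import string

def tokenize(text):
    # Drop punctuation such as .,!? then lower-case and split into a word set
    cleaned = text.translate(str.maketrans('', '', string.punctuation))
    return set(cleaned.lower().split())

print(tokenize("Hello, can you lower the price?"))
```

<p>Common filler words ("stop words") would still need a separate filter list; that part is left out here.</p>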
|
python
| 3 |
1,201 | 67,671,263 |
Why does the Gmail API return a 401 error?
|
<p>The response after I send out my batch request to the Gmail API is the same as described in the documentation (<a href="https://developers.google.com/gmail/api/guides/handle-errors#exponential-backoff" rel="nofollow noreferrer">https://developers.google.com/gmail/api/guides/handle-errors#exponential-backoff</a>):</p>
<pre><code>{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "authError",
        "message": "Invalid Credentials",
        "locationType": "header",
        "location": "Authorization"
      }
    ],
    "code": 401,
    "message": "Invalid Credentials"
  }
}
</code></pre>
<p>My request code is:</p>
<pre><code>creds = None
# The file token.pickle stores the user's access and refresh tokens, and is
# created automatically when the authorization flow completes for the first
# time.
if os.path.exists('token.pickle'):
    with open('token.pickle', 'rb') as token:
        creds = pickle.load(token)
# If there are no (valid) credentials available, let the user log in.
elif not creds or not creds.valid:
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())
    else:
        flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
        creds = flow.run_local_server(port=0)
    # Save the credentials for the next run.
    with open('token.pickle', 'wb') as token:
        pickle.dump(creds, token)

gmailUrl = "https://gmail.googleapis.com/batch/gmail/v1"
request_header = {"Authorization": f"Bearer {creds}", "Host": "www.googleapis.com", "Content-Type": "multipart/mixed; boundary=boundary"}
body = []
for n in message_Ids:
    boundary = "--boundary\n"
    content_type = "Content-Type: application/http\n\n"
    request = f'GET /gmail/v1/users/me/messages/{n}\n'
    requestObj = boundary + content_type + request + "Accept: application/json; charset=UTF-8\n"
    body.append(requestObj)
body.append("--boundary")
body = "\n".join(body)

response = req.post(url=gmailUrl, headers=request_header, data=body, auth=)
pprint(response)
pprint(response.text)
</code></pre>
<p>Since I managed to get a response from the Gmail server, I suppose my request was accepted,
but I do not understand why I get the 401 error.
If I send the GET requests individually, my application works fine.</p>
<p>What do I have to put in the <code>"Authorization": f"Bearer {creds}"</code> line?</p>
<p>Thanks in advance!</p>
|
<p>As mentioned in <a href="https://stackoverflow.com/questions/67671263/why-does-the-gmail-api-returns-401-error#comment119636583_67671263">comments</a>, you were providing an instance of <code>credentials</code> in the <code>Authorization</code> header:</p>
<pre><code>"Authorization": f"Bearer {creds}"
</code></pre>
<p>You should provide the credentials' <code>access_token</code> instead:</p>
<pre><code>"Authorization": f"Bearer {creds.token}"
</code></pre>
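<p>To make the difference concrete, here is a minimal illustration with a stand-in object (hypothetical; in the real code <code>creds</code> comes from google-auth and <code>creds.token</code> holds the OAuth2 access token string):</p>

```python
# FakeCreds stands in for the google-auth credentials object; interpolating
# the object itself would embed its repr in the header, not the token.
class FakeCreds:
    token = "ya29.fake-access-token"

creds = FakeCreds()
request_header = {"Authorization": f"Bearer {creds.token}"}
print(request_header["Authorization"])  # Bearer ya29.fake-access-token
```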
<h3>Reference:</h3>
<ul>
<li><a href="https://google-auth.readthedocs.io/en/stable/reference/google.oauth2.credentials.html" rel="nofollow noreferrer">google.oauth2.credentials module</a></li>
</ul>
|
python|google-api|gmail-api
| 1 |
1,202 | 30,185,834 |
Pytest - Error No Module Named Sqlalchemy
|
<p>I tried to import sqlalchemy in my pytest file, but when I try to run it, it shows this error, even though I have already installed sqlalchemy.</p>
<pre><code> new.py:1: in <module>
import sqlalchemy
E ImportError: No module named sqlalchemy
</code></pre>
<p>my code :</p>
<pre><code>import pytest
import sqlalchemy
</code></pre>
<p>The code fails just on importing sqlalchemy.
How do I fix it? Thanks in advance.</p>
|
<p>I had the same issue, and fixed it by running the tests through the interpreter itself, which ensures pytest runs in the same environment where sqlalchemy is installed (and also puts the current directory on <code>sys.path</code>):</p>
<pre><code>python -m pytest .
</code></pre>
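<p>To see whether <code>pytest</code> and <code>python</code> point at different interpreters (a common cause of this error; this check is my addition, not part of the original answer), print the interpreter path and compare it with the output of <code>which pytest</code>:</p>

```python
import sys

# The interpreter executing this code; if `which pytest` resolves to a
# different installation, its site-packages won't contain sqlalchemy.
print(sys.executable)
```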
|
python|python-2.7|sqlalchemy|pytest
| 0 |
1,203 | 57,170,626 |
python script output to be saved in different folder
|
<p>I'm trying to build a keyword tool. For this, I built a python script that when you run it, it outputs a CSV file with the keyword, the ranking, the URL and the date.</p>
<p>I want to run more than one keyword and I want to save the output in different folders.</p>
<p>I created 5 different folders with my python script and I created a bash file that runs the script with different keywords and outputs different CSV files.</p>
<p>The bash file looks like this:</p>
<pre><code>#! /bin/bash
/usr/bin/python3 /kw1/rank.py [website] [keyword1]
sleep 30
/usr/bin/python3 /kw2/rank.py [website] [keyword2]
sleep 20
/usr/bin/python3 /kw3/rank.py [website] [keyword3]
sleep 30
/usr/bin/python3 /kw4/rank.py [website] [keyword4]
sleep 25
/usr/bin/python3 /kw5/rank.py [website] [keyword5]
</code></pre>
<p>The problem I am having is that when I run my bash file, all the CSV outputs are stored in the home folder, where the bash file is located, and not in the specific folder where the Python script is.</p>
<p>I tried to add <code>&gt;&gt;</code> and <code>location/output.csv</code> or <code>.txt</code>, but the output ends up in a .txt file, or if it's in CSV it's in one column. Also, this is not my Python output, it's only what the terminal outputs when running the Python script.</p>
<p>The python code that saves my output to CSV looks like this</p>
<pre><code>file = datetime.date.today().strftime("%d-%m-%Y") + '-' + keyword + '.csv'
with open(file, 'w+') as f:
    writer = csv.writer(f)
    writer.writerow(['Keyword', 'Rank', 'URL', 'Date'])
    writer.writerows(zip(d[0::4], d[1::4], d[2::4], d[3::4]))
</code></pre>
<p>I would like to run my bash file on one folder but I want to get my script outputs in the specific folder the python script is located.</p>
<p>Thanks.</p>
|
<p>Use an absolute path when opening file for writing...</p>
<pre><code>import os.path

# Pretend you have an 'example' folder under C:\
save_to_path = 'C://example//'
name_of_file = input("What is the name of the file: ")
complete_name = os.path.join(save_to_path, name_of_file + ".txt")

with open(complete_name, 'w+') as f:
    f.write('Hi')
</code></pre>
<p>Now you have Hi.txt file in C:\example\</p>
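<p>For the original setup (a script launched from a different working directory), one way is to build the path from <code>__file__</code>. This is a sketch, with hypothetical stand-ins for the question's <code>keyword</code> and <code>d</code> variables:</p>

```python
import csv
import datetime
import os

# Hypothetical stand-ins for the variables used in the question
keyword = "example"
d = ["kw", 1, "https://example.com", "01-01-2020"]

# Directory containing this script, independent of the shell's working directory
script_dir = os.path.dirname(os.path.abspath(__file__))
file = os.path.join(script_dir, datetime.date.today().strftime("%d-%m-%Y") + '-' + keyword + '.csv')

with open(file, 'w+') as f:
    writer = csv.writer(f)
    writer.writerow(['Keyword', 'Rank', 'URL', 'Date'])
    writer.writerows(zip(d[0::4], d[1::4], d[2::4], d[3::4]))
```

<p>With this change the bash file can stay in one folder, and each rank.py writes its CSV next to itself.</p>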
|
python|linux|python-3.x|bash|csv
| 2 |
1,204 | 57,030,977 |
Cython Basics: How to speed up common c functions like random and math functions?
|
<p>I am working on learning Cython (See <a href="https://stackoverflow.com/questions/57024070/how-to-cythonize-a-python-class-with-an-attribute-that-is-a-function?noredirect=1#comment100580388_57024070">How to Cythonize a Python Class with an attribute that is a function?</a>)</p>
<p>I have the following function that I want to speed up, but Cython says both lines are going through python:</p>
<pre><code>cdef selectindex(float fitnesspref):
    r = np.random.rand()
    return int(log(r) / log(fitnesspref))
</code></pre>
<p>So I need to get a random number (I originally just used the built-in Python <code>random()</code> call, but switched to <code>Numpy</code> hoping it was faster or would Cythonize better). Then I need to take the log of two floats, divide them, and return the result as an integer. Nothing too difficult, and it should be a great question that people can use later. I'm struggling to find simple solutions to all the info I need via the <code>Cython</code> docs or Google. I would have thought this was in every tutorial, but I'm not having a lot of luck.</p>
<p>Looking around though, I can't find an nice easy solution. For example: <a href="https://stackoverflow.com/questions/40976880/canonical-way-to-generate-random-numbers-in-cython?noredirect=1&lq=1">Canonical way to generate random numbers in Cython</a></p>
<p>Is there a best, but simpler, way to Cythonize a function like this?</p>
<p>As a side note, I had a similar problem with <code>abs()</code> but then after I changed everything to be cdefs it seemed to automatically switch to using the C version. </p>
<p>Here are the results of the HTML annotation generated by Cython:
<a href="https://i.stack.imgur.com/McTCi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/McTCi.png" alt="enter image description here"></a></p>
<p>Another common one I keep bumping into is:</p>
<pre><code>start = time.time()
</code></pre>
<p>i.e. is there a simple way to get start and end times using C code to speed that up? I have that line there in an inner loop, so it's slowing things down. But it's really important to what I'm trying to do.</p>
<p>Update 1:
I'm trying to follow the suggestions in the comments, and they don't seem to work out like I'd expect (which is why this all confuses me in the first place). For example, here is a C version of random that I wrote and compiled. Why is it still yellow?
<a href="https://i.stack.imgur.com/kHg88.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kHg88.png" alt="enter image description here"></a></p>
<p>Update 2:
Okay, I researched it further and here is why it won't compile all the way to C:</p>
<p><a href="https://i.stack.imgur.com/Tm7D2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tm7D2.png" alt="enter image description here"></a></p>
<p>It's doing a check to make sure I'm not dividing by zero, even though it's a constant and can't ever be zero. How do you get rid of that?</p>
|
<p>For anyone that follows me, here were the final answers:</p>
<p>Many common functions, including <code>len()</code>, are already built in. If you switch the code to use C arrays, it automatically compiles to C. <a href="https://cython.readthedocs.io/en/latest/src/userguide/language_basics.html" rel="nofollow noreferrer">See this link.</a></p>
<p>For the rest, the following imports were required:</p>
<pre><code>from libc.stdlib cimport rand, RAND_MAX
from libc.math cimport log
</code></pre>
<p>To replace calls to random.random():</p>
<pre><code>@cython.cdivision(True)
cdef float crandom() except -1:
    return <float>rand() / <float>RAND_MAX
</code></pre>
<p>To replace calls to random.randint():</p>
<pre><code>@cython.cdivision(True)
cdef int crandint(int lower, int upper) except -1:
    return (rand() % (upper - lower + 1)) + lower
</code></pre>
<p>My selectindex function got rewritten as:</p>
<pre><code>cdef int selectindex(float fitnesspref, int popsize) except -1:
    cdef float r
    cdef int val
    r = crandom() + 0.00000000001
    return <int>(log(r) / log(fitnesspref))
</code></pre>
<p>That 0.00000000001 was necessary because C and Python behave slightly differently here. The Python version apparently never returns a straight-up zero from the random calls, but crandom does every so often, and the log of zero is undefined. This might be because my C version is only working with a limited number of ints as its starting point.</p>
<p>I never did come up with a way to replace time.time(). </p>
<p>I hope this helps some newbie following after me. </p>
|
python|python-3.x|cython
| 2 |
1,205 | 57,143,822 |
How to extract text between two substrings from a Python file
|
<p>I want to read the text between two delimiters (<code>“#*”</code> and <code>“#@”</code>) from a file. My file contains thousands of records in the format shown below. I have tried using the code below, but it is not returning the required output.</p>
<pre><code>import re

start = '#*'
end = '#@'
myfile = open('lorem.txt')
for line in fhand:
    text = text.rstrip()
    print (line[line.find(start)+len(start):line.rfind(end)])
myfile.close()
</code></pre>
<p>My Input:</p>
<pre><code>#*OQL[C++]: Extending C++ with an Object Query Capability
#@José A. Blakeley
#t1995
#cModern Database Systems
#index0
#*Transaction Management in Multidatabase Systems
#@Yuri Breitbart,Hector Garcia-Molina,Abraham Silberschatz
#t1995
#cModern Database Systems
#index1
</code></pre>
<p>My Output:</p>
<pre><code>51103
OQL[C++]: Extending C++ with an Object Query Capability
t199
cModern Database System
index
...
</code></pre>
<p>Expected output:</p>
<pre><code>OQL[C++]: Extending C++ with an Object Query Capability
Transaction Management in Multidatabase Systems
</code></pre>
|
<p>Use the following regex:</p>
<p><code>#\*([\s\S]*?)#@ /g</code></p>
<p>This regex captures all whitespace and non-whitespace characters between <code>#*</code> and <code>#@</code>.</p>
<p><a href="https://regex101.com/r/i2GJ0M/1" rel="nofollow noreferrer">Demo</a></p>
|
regex|python-3.x|text-manipulation
| 2 |
1,206 | 72,311,511 |
Python random number loop with different outcomes each time
|
<p>I’ve been working on a school project and need to do a loop consisting of a couple random numbers but I need them to output a different number each time. The code I’ve been using for the random numbers is this.</p>
<p>import random</p>
<p>a=random.randint(1,9)</p>
<p>I’m new to coding and just started getting into Python for fun, and I have looked everywhere for how to complete this loop but I can’t find anything that works. I know this code does not include a loop; the loops I was using before were “while True” and “for i in range”. Thanks!</p>
|
<p>You have not created any loop yet. You're generating a random integer only once.
In order to generate more of them, you have to use something like a <code>for</code> loop.</p>
<p>If you're familiar with the concept of <code>range</code> then this is a simple example of generating x-number of random integers.</p>
<pre><code>import random

x = 10
for i in range(0, x):
    a = random.randint(1, 9)
    print(a)
</code></pre>
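<p>If “a different number each time” must mean <em>no repeats at all</em> (an assumption; the question does not say so explicitly), <code>random.sample</code> draws without replacement:</p>

```python
import random

# Five distinct values from 1..9; repeats are impossible within one sample
for a in random.sample(range(1, 10), k=5):
    print(a)
```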
|
python
| 1 |
1,207 | 4,646,089 |
python algorithm: how to efficiently find if two integer sets are intersected?
|
<p>Given a set [2004, 2008], what is the fastest way to find whether this set intersects other sets?</p>
<p>Actually I am dealing with a database problem: the table has 2 columns, one is the lower bound, the other is the upper bound. The task is to find all the rows that intersect the given 2-tuple (like [2004, 2008]).</p>
<p>I am using MongoDB; is this intrinsically supported (I mean, does it have keywords to do it)?
I have a large user base, so I want this task to be completed as fast as possible.</p>
<p>EDIT: To state it more clearly, a database table contains the following rows:</p>
<pre><code>20 30
10 50
60 90
...
</code></pre>
<p>Given the input <code>(25 40)</code> range, I want to return the rows which represent a range, have intersection with the given range.</p>
<p>so return is: <code>(20 30),(10 50)</code></p>
|
<p>I don't know MongoDB at all, but you're basically looking for </p>
<p><code>SELECT * from the_table where not (lower_bound > 2008 or upper_bound < 2004)</code>.</p>
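<p>The same condition in positive form: two ranges <code>[lower, upper]</code> and <code>[qlo, qhi]</code> intersect exactly when <code>lower &lt;= qhi</code> and <code>upper &gt;= qlo</code>. A small Python sketch using the rows from the question:</p>

```python
def overlaps(lower, upper, qlo, qhi):
    # The ranges intersect unless one lies entirely to one side of the other
    return not (lower > qhi or upper < qlo)

rows = [(20, 30), (10, 50), (60, 90)]
matches = [r for r in rows if overlaps(r[0], r[1], 25, 40)]
print(matches)  # [(20, 30), (10, 50)]
```

<p>In MongoDB the equivalent query shape would combine <code>$lte</code>/<code>$gte</code> conditions on the two bound fields; check the driver's documentation for the exact syntax.</p>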
|
python|algorithm|data-structures|mongodb
| 4 |
1,208 | 69,344,058 |
How do I create font style and font size in my notepad?
|
<p>I have created a notepad using Python. I want to create a feature which can change the font size and also the font style. I have tried various options, but they have failed. My notepad is fully made with Python's tkinter module. I have also tried methods like file handling, but it doesn't work. Please help me out.
Here is the code:</p>
<pre><code>from tkinter import *
import tkinter.messagebox as mb
from tkinter.filedialog import askopenfilename, asksaveasfilename
import os

def newFile():
    global file
    root.title("Untitled - Notepad")
    file = None
    textArea.delete(1.0, END)

def openFile():
    global file
    file = askopenfilename(defaultextension=".txt", filetypes=[("All Files", "*.*"), ("Text Documents", "*.txt")])
    if file == "":
        file = None
    else:
        root.title(os.path.basename(file) + " - Notepad")
        textArea.delete(1.0, END)
        f = open(file)
        textArea.insert(1.0, f.read())
        f.close()

def save():
    global file
    if file == None:
        file = asksaveasfilename(initialfile="Untitled.txt", defaultextension=".txt", filetypes=[("All Files", "*.*"), ("Text Documents", "*.txt")])
        if file == "":
            file = None
        else:
            f = open(file, "w")
            f.write(textArea.get(1.0, END))
            f.close()
            root.title(os.path.basename(file) + " - Notepad")
    else:
        f = open(file, "w")
        f.write(textArea.get(1.0, END))
        f.close()

def quitFile():
    root.destroy()

def cut():
    textArea.event_generate(("&lt;&lt;Cut&gt;&gt;"))

def copy():
    textArea.event_generate(("&lt;&lt;Copy&gt;&gt;"))

def paste():
    textArea.event_generate(("&lt;&lt;Paste&gt;&gt;"))

def changeStyle():
    pass

def info():
    mb.showinfo("About Notepad", '''Notepad
Version - 1.1.1
Developer - Sourabh Sontakke''')

def changeSize():
    f = open('size.txt', 'w')
    f.write(str(size.get()))
    f.close()
    print(size.get())

def changeSizeWindow():
    global size
    TextSize = Tk()
    TextSize.geometry("400x300")
    TextSize.title("Change Size")
    size = StringVar()
    Label(TextSize, text="Enter the font size you want:", font="lucida 15 bold").pack()
    Entry(TextSize, textvariable=size, font="lucida 15").pack(padx=40)
    Button(TextSize, text="Apply", command=changeSize).pack()
    TextSize.mainloop()

if __name__ == "__main__":
    root = Tk()
    root.title("Untitled - Notepad")
    root.geometry("600x400")
    ScrollBar = Scrollbar(root)
    ScrollBar.pack(fill=Y, side=RIGHT)
    a = 20
    textArea = Text(root, font=f"lucida {a}", yscrollcommand=ScrollBar.set)
    file = None
    textArea.pack(expand=True, fill="both")
    ScrollBar.config(command=textArea.yview)
    MenuBar = Menu(root)
    FileMenu = Menu(MenuBar, tearoff=0)
    FileMenu.add_command(label="New File", command=newFile)
    FileMenu.add_command(label="Open File", command=openFile)
    FileMenu.add_command(label="Save", command=save)
    FileMenu.add_separator()
    FileMenu.add_command(label="Quit", command=quitFile)
    MenuBar.add_cascade(label="File", menu=FileMenu)
    EditMenu = Menu(MenuBar, tearoff=0)
    EditMenu.add_command(label="Cut", command=cut)
    EditMenu.add_command(label="Copy", command=copy)
    EditMenu.add_command(label="Paste", command=paste)
    EditMenu.add_command(label="Font Size", command=changeSizeWindow)
    EditMenu.add_command(label="Font Style", command=paste)
    MenuBar.add_cascade(label="Edit", menu=EditMenu)
    HelpMenu = Menu(MenuBar, tearoff=0)
    HelpMenu.add_command(label="About", command=info)
    MenuBar.add_cascade(label="Help", menu=HelpMenu)
    root.config(menu=MenuBar)
    root.mainloop()
</code></pre>
</code></pre>
|
<p>Creating fonts, colors with various styles is achieved by creating tag names and defining tag attributes to them, then using those tag names when inserting text into <code>Text</code> object.</p>
<p>Here is an example.</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
master = tk.Tk()
text = tk.Text(master, undo = 1)
text.grid(row = 0, column = 0, sticky = tk.NSEW)
#
# make tag name with any letter or word
# Use tag_config to define all required attributes
# Use tag_add to create it
# Apply tag name with insert(pos, message, tag) format
#
text.tag_config( "A", font = "Consolas 12 italic", foreground = "red")
text.tag_add("A", "end")
text.insert( "end", "This is in Font consolas 12 italic. color = red\n", "A" )
text.tag_config( "B", font = "Consolas 12 overstrike", foreground = "Blue")
text.tag_add("B", "end")
text.insert( "end", "This is in Font consolas 12 overstrike. color = blue\n", "B" )
text.tag_config( "C", font = "Times 20 bold", foreground = "green")
text.tag_add("C", "end")
text.insert( "end", "This is in Font Times 20 bold. color = green\n", "C" )
text.tag_config( "D", font = "Times 20 normal", foreground = "black", background = "yellow")
text.tag_add("D", "end")
# Using some of the thousands of characters in python fonts
for a in ["♔", "♕", "♖", "♗", "♘", "♙", "♚", "♛", "♜", "♝", "♞", "♟"]:
    text.insert( "end", a, "D")
master.mainloop()
</code></pre>
|
python|tkinter|notepad
| 0 |
1,209 | 69,364,743 |
How to perform an operation with two columns in the same dataframe in Python Pandas?
|
<p>I'm trying to apply the operation <code>'x-y/y'</code>, where <code>x</code> is the column <code>'Faturamento'</code> and <code>y</code> is the column <code>'Custo'</code> from the dataframe called <code>'df'</code>, and store the results in a new column called <code>'Roi'</code>.</p>
<p>My attempt to use the apply function:</p>
<pre><code>df['Roi'] = df.apply(lambda x, y: x['Faturamento']-y['Custo']/y['Custo'], axis=1)
</code></pre>
<p>Is returning:</p>
<blockquote>
<p>TypeError: &lt;lambda&gt;() missing 1 required positional argument: 'y'</p>
</blockquote>
<p>How can I do this?</p>
|
<p>I think you mean:</p>
<pre><code>df['Roi'] = df.apply(lambda x: (x['Faturamento']-x['Custo'])/x['Custo'], axis=1)
</code></pre>
<p>Here <code>x</code> refers to one row of the dataframe at a time (because of <code>axis=1</code>), so both columns are accessed through the same argument.</p>
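<p>A vectorized version (generally faster than <code>apply</code>, since it operates on whole columns) would be, with hypothetical example data standing in for the question's dataframe:</p>

```python
import pandas as pd

# Hypothetical data in the shape the question describes
df = pd.DataFrame({'Faturamento': [120.0, 90.0], 'Custo': [100.0, 60.0]})

# (x - y) / y computed column-wise, no row-by-row apply needed
df['Roi'] = (df['Faturamento'] - df['Custo']) / df['Custo']
print(df['Roi'].tolist())  # [0.2, 0.5]
```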
|
python|pandas|dataframe|lambda|apply
| 1 |
1,210 | 48,392,514 |
Issues processing JSON API response
|
<p>I am trying to process the results of this API in Python:</p>
<p><a href="https://www.cryptopia.co.nz/api/GetMarkets/BTC/12" rel="nofollow noreferrer">https://www.cryptopia.co.nz/api/GetMarkets/BTC/12</a></p>
<p>When I assign the results of the API call to a variable I cannot seem to iterate over it or even call objects in it. When I do something like <code>response[0]</code> I'm only getting single characters. I need to be able to get to the "Data" object in there and iterate over it.</p>
<p>I'm stuck so any help would be greatly appreciated.</p>
<p>Thanks</p>
|
<p>You haven't provided much detail, however, you probably need to decode the JSON response. Use the <a href="https://docs.python.org/3/library/json.html#module-json" rel="nofollow noreferrer"><code>json</code></a> module for that. Something like this should help, but it depends on what the actual response is:</p>
<pre><code>import json
data = json.loads(response)
print(data['Data'][0])
</code></pre>
<p><strong>Output</strong></p>
<pre>
{'TradePairId': 1261, 'Label': '$$$/BTC', 'AskPrice': 7.5e-07, 'BidPrice': 7.2e-07, 'Low': 7.1e-07, 'High': 7.5e-07, 'Volume': 92820.5864058, 'LastPrice': 7.3e-07, 'BuyVolume': 52598028.48552139, 'SellVolume': 7797125.25042393, 'Change': 2.82, 'Open': 7.1e-07, 'Close': 7.3e-07, 'BaseVolume': 0.06782329, 'BuyBaseVolume': 1.87597261, 'SellBaseVolume': 173448022.5864104}
</pre>
<p>In this case <code>json.loads()</code> returns a dictionary, and the actual data is the value of the <code>Data</code> key which is a list of dictionaries. To iterate over the dictionaries:</p>
<pre><code>for d in data['Data']:
    print(d)
</code></pre>
<p>If you are using the <code>requests</code> module to retrieve the data you can simply use the <code>json</code> method of the response to access the decoded data.</p>
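<p>Putting it together with an inline sample (a trimmed, hypothetical body in the same shape as the API's response):</p>

```python
import json

# A cut-down stand-in for the raw response text
response = '{"Success": true, "Message": null, "Data": [{"Label": "$$$/BTC", "LastPrice": 7.3e-07}]}'

data = json.loads(response)
for d in data['Data']:
    print(d['Label'], d['LastPrice'])
```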
|
python|json|api
| 2 |
1,211 | 70,449,191 |
Error locating div aria label with xpath Selenium
|
<p>So I'm trying to find a way to make sure my bot doesn't get confused and click on the <strong>Following</strong> button again (it uses span to detect text) when it has already followed that particular user. I am trying to detect that if a user is already followed, the bot should skip him through another method, which is if the bot finds the following text on the user's profile <strong>"Turn on Tweet notifications"</strong> that means the user is already followed because there will be a <strong>bell</strong> icon/button present on his profile.</p>
<p>The issue is, I fail to detect that text/button. What am I doing wrong?</p>
<p>Here's the code:</p>
<pre><code>def to_follow_delay():
    def to_follow_delay_2():
        try:
            driver.get(next(urls))
            if wait.until(EC.visibility_of_element_located((By.XPATH, "//div[contains(@aria-label,'Turn on Tweet notifications')]"))):
                i = 1
                while True:
                    skipped_user = "skipped_following" + str(i) + ".png"
                    if not os.path.isfile(skipped_user):
                        break
                    i += 1
                already_following = datetime.datetime.now()
                print(already_following, "Already following... skipping")
                driver.get_screenshot_as_file(skipped_user)

                def skip_user():
                    driver.get(next(urls))

                skiptimer = threading.Timer(10, skip_user)
                skiptimer.start
                return to_follow_delay()
            else:
                def follow_user():
                    sleep(5)
                    followuser = wait.until(EC.element_to_be_clickable((By.XPATH, '//span[text()="Follow"]')))
                    followuser.click()
                    followsuccess = datetime.datetime.now()
                    print(followsuccess, "Follow successful")
                    print("--- %s seconds ---" % (time.time() - starting4))
                    i = 1
                    while True:
                        followed_user_success = "followed" + str(i) + ".png"
                        if not os.path.isfile(followed_user_success):
                            break
                        i += 1
                    driver.get_screenshot_as_file(followed_user_success)
                    followedper_runcounter.set(followedper_runcounter.get() + 1)
                    followedcounter.set(followedcounter.get() + 1)
                    return operation_follow()

                timer4 = threading.Timer(tofollowdelayvalue_txt.get(), follow_user)
                timer4.start()
                starting4 = time.time()
        except StopIteration:
            nourls = datetime.datetime.now()
            print(nourls, "No more urls.")
            return

    timer5 = threading.Timer(5, to_follow_delay_2)
    timer5.start()
</code></pre>
<p>Twitter code:</p>
<pre><code><div aria-label="Turn on Tweet notifications" role="button" tabindex="0" class="css-18t94o4 css-1dbjc4n r-1niwhzg r-1ets6dv r-sdzlij r-1phboty r-rs99b7 r-6gpygo r-1kb76zh r-2yi16 r-1qi8awa r-1ny4l3l r-o7ynqc r-6416eg r-lrvibr"><div dir="auto" class="css-901oao r-1awozwy r-18jsvk2 r-6koalj r-18u37iz r-16y2uox r-37j5jr r-a023e6 r-b88u0q r-1777fci r-rjixqe r-bcqeeo r-q4m81j r-qvutc0"><svg viewBox="0 0 24 24" aria-hidden="true" class="r-18jsvk2 r-4qtqp9 r-yyyyoo r-z80fyv r-dnmrzs r-bnwqim r-1plcrui r-lrvibr r-19wmn03"><g><path d="M23.24 3.26h-2.425V.832c0-.414-.336-.75-.75-.75s-.75.336-.75.75V3.26H16.89c-.414 0-.75.335-.75.75s.336.75.75.75h2.426v2.424c0 .414.336.75.75.75s.75-.336.75-.75V4.76h2.425c.415 0 .75-.337.75-.75s-.336-.75-.75-.75zm-6.23 7.606c.02-2.434-.782-4.597-2.258-6.09-1.324-1.342-3.116-2.084-5.046-2.093h-.013c-1.93.01-3.722.75-5.046 2.092C3.172 6.27 2.37 8.433 2.39 10.867 2.426 15 .467 16.56.39 16.62c-.26.193-.367.53-.266.838.102.308.39.515.712.515h4.716c.11 2.226 1.94 4.007 4.194 4.007s4.083-1.78 4.194-4.007h4.625c.32 0 .604-.206.707-.51s0-.643-.255-.838c-.082-.064-2.043-1.625-2.008-5.76zM9.745 20.48c-1.426 0-2.586-1.11-2.694-2.508h5.388c-.108 1.4-1.268 2.507-2.694 2.507zm-7.29-4.007c.702-1.095 1.457-2.904 1.434-5.618-.017-2.062.614-3.8 1.825-5.025C6.757 4.774 8.172 4.19 9.7 4.184c1.527.007 2.943.59 3.985 1.646 1.21 1.226 1.84 2.963 1.823 5.025-.022 2.714.732 4.523 1.437 5.618H2.455z"></path></g></svg><span class="css-901oao css-16my406 css-bfa6kz r-poiln3 r-a023e6 r-rjixqe r-bcqeeo r-qvutc0"></span></div></div>
</code></pre>
<p>The error stacktrace:</p>
<pre><code> if wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "button[aria-label='Turn on Tweet notifications'][role='button']"))):
File "C:\Users\Cassano\AppData\Roaming\Python\Python38\site-packages\selenium\webdriver\support\wait.py", line 89, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
Stacktrace:
Backtrace:
Ordinal0 [0x00AF6903+2517251]
Ordinal0 [0x00A8F8E1+2095329]
Ordinal0 [0x00992848+1058888]
Ordinal0 [0x009BD448+1233992]
Ordinal0 [0x009BD63B+1234491]
Ordinal0 [0x009E7812+1406994]
Ordinal0 [0x009D650A+1336586]
Ordinal0 [0x009E5BBF+1399743]
Ordinal0 [0x009D639B+1336219]
Ordinal0 [0x009B27A7+1189799]
Ordinal0 [0x009B3609+1193481]
GetHandleVerifier [0x00C85904+1577972]
GetHandleVerifier [0x00D30B97+2279047]
GetHandleVerifier [0x00B86D09+534521]
GetHandleVerifier [0x00B85DB9+530601]
Ordinal0 [0x00A94FF9+2117625]
Ordinal0 [0x00A998A8+2136232]
Ordinal0 [0x00A999E2+2136546]
Ordinal0 [0x00AA3541+2176321]
BaseThreadInitThunk [0x770EFA29+25]
RtlGetAppContainerNamedObjectPath [0x77B47A9E+286]
RtlGetAppContainerNamedObjectPath [0x77B47A6E+238]
</code></pre>
|
<p>To locate the <em>visible</em> element with text as <em><strong>Turn on Tweet notifications</strong></em> you need to induce <a href="https://stackoverflow.com/questions/59130200/selenium-wait-until-element-is-present-visible-and-interactable/59130336#59130336">WebDriverWait</a> for the <a href="https://stackoverflow.com/questions/50468629/python-selenium-wait-until-element-is-fully-loaded/50474905#50474905"><em>visibility_of_element_located()</em></a> and you can use either of the following <a href="https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890">Locator Strategies</a>:</p>
<ul>
<li><p>Using <code>CSS_SELECTOR</code>:</p>
<pre><code>element = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div[aria-label='Turn on Tweet notifications'][role='button']")))
</code></pre>
</li>
<li><p>Using <code>XPATH</code>:</p>
<pre><code>element = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//div[@aria-label='Turn on Tweet notifications' and @role='button']")))
</code></pre>
</li>
<li><p><strong>Note</strong> : You have to add the following imports :</p>
<pre><code>from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
</code></pre>
</li>
</ul>
|
python|selenium|xpath|css-selectors|webdriverwait
| 2 |
1,212 | 55,647,606 |
How to calculate rolling average over each product?
|
<p>I have the first three columns in a dataframe in pandas. I want to calculate the 3-day moving average with respect to each product, as shown in the 4th column.</p>
<p>Data</p>
<pre><code>print (df)
Date Product Demand mov Avg
0 1-Jan-19 Product-01 3 NaN
1 2-Jan-19 Product-01 4 NaN
2 3-Jan-19 Product-01 5 4.0
3 4-Jan-19 Product-01 6 5.0
4 5-Jan-19 Product-01 7 6.0
5 3-Jan-19 Product-02 2 NaN
6 4-Jan-19 Product-02 3 NaN
7 5-Jan-19 Product-02 4 3.0
8 6-Jan-19 Product-02 5 4.0
9 7-Jan-19 Product-02 8 5.7
</code></pre>
<p>I tried using groupby and rolling mean, but it doesn't seem to work.</p>
<pre><code>df['mov_avg'] =df.set_index('Date').groupby('Product').rolling('Demand',window=7).mean().reset_index(drop=True)
</code></pre>
|
<p>Use:</p>
<pre><code>df['Date'] = pd.to_datetime(df['Date'], format='%d-%b-%y')
</code></pre>
<p>Your solution should be changed by <code>rolling(3, freq='d')</code>:</p>
<pre><code>#sorting if not sorted DataFrame by both columns
df = df.sort_values(['Date','Product']).reset_index(drop=True)
df['mov_avg'] = (df.set_index('Date')
.groupby('Product')['Demand']
.rolling(3, freq='d')
.mean()
.reset_index(drop=True))
</code></pre>
<p>Another better solution is use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a>:</p>
<pre><code>s = df.set_index('Date').groupby('Product')['Demand'].rolling(3, freq='d').mean()
df = df.join(s.rename('mov_avg'), on=['Product','Date'])
</code></pre>
<hr>
<pre><code>print (df)
Date Product Demand mov Avg mov_avg
0 2019-01-01 Product-01 3 NaN NaN
1 2019-01-02 Product-01 4 NaN NaN
2 2019-01-03 Product-01 5 4.0 4.000000
3 2019-01-04 Product-01 6 5.0 5.000000
4 2019-01-05 Product-01 7 6.0 6.000000
5 2019-01-03 Product-02 2 NaN NaN
6 2019-01-04 Product-02 3 NaN NaN
7 2019-01-05 Product-02 4 3.0 3.000000
8 2019-01-06 Product-02 5 4.0 4.000000
9 2019-01-07 Product-02 8 5.7 5.666667
</code></pre>
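<p>Where the leading NaNs come from can also be sketched in plain Python with a per-product trailing window (an illustration only, not a pandas replacement; the sample values are taken from the question):</p>

```python
from collections import defaultdict, deque

demand = [("Product-01", 3), ("Product-01", 4), ("Product-01", 5),
          ("Product-02", 2), ("Product-02", 3), ("Product-02", 4)]

# one bounded window per product; deque(maxlen=3) drops the oldest value
windows = defaultdict(lambda: deque(maxlen=3))
mov_avg = []
for product, value in demand:
    w = windows[product]
    w.append(value)
    # None until the window holds 3 values, mirroring the NaNs in the output
    mov_avg.append(sum(w) / 3 if len(w) == 3 else None)

print(mov_avg)  # [None, None, 4.0, None, None, 3.0]
```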
|
python|pandas|pandas-groupby
| 1 |
1,213 | 73,262,943 |
How Can I get Sum Total of Django Model Column through a Queryset
|
<p>I am working on an Event App where I want to get the total amount of Pin Tickets that have been activated, and I don't know how to do it.</p>
<p>Below is what I have tried and I am getting this error: <strong>'QuerySet' object has no attribute 'ticket'</strong></p>
<p>Here are my Models</p>
<pre><code>class Ticket(models.Model):
event = models.ForeignKey(Event, on_delete=models.CASCADE)
price = models.PositiveIntegerField()
category = models.CharField(max_length=100, choices=PASS, default=None, blank=False, null=False)
added_date = models.DateField(auto_now_add=True)
def __str__(self):
return f"{self.event} "
#Prepare the url path for the Model
def get_absolute_url(self):
return reverse("ticket-detail", args=[str(self.id)])
def generate_pin():
return ''.join(str(randint(0, 9)) for _ in range(6))
class Pin(models.Model):
ticket = models.ForeignKey(Ticket, on_delete=models.CASCADE)
value = models.CharField(max_length=6, default=generate_pin, blank=True)
added = models.DateTimeField(auto_now_add=True, blank=False)
reference = models.UUIDField(primary_key = True, editable = False, default=uuid.uuid4)
status = models.CharField(max_length=30, default='Not Activated')
#Save Reference Number
def save(self, *args, **kwargs):
self.reference == str(uuid.uuid4())
super().save(*args, **kwargs)
def __unicode__(self):
return self.ticket
class Meta:
unique_together = ["ticket", "value"]
def __str__(self):
return f"{self.ticket}"
def get_absolute_url(self):
return reverse("pin-detail", args=[str(self.id)])
</code></pre>
<p>Here is my Views.py</p>
<pre><code>from django.db.models import Count, Sum
def profile(request):
amount_sold_pins = Pin.objects.filter(status='Activated')
total_sold_tickets = amount_sold_pins.ticket.price
total = total_sold_tickets.aggregate(total =
Sum('price')).get('total') or 0
context = {
'guest':reservations,
'total':total,
}
return render(request, 'user/profile.html', context)
</code></pre>
<p>Someone should please help with the best way of getting my Pin Tickets that are activated. Thanks</p>
|
<p>To get the total price of tickets with pin status activated, use this.</p>
<pre><code>Ticket.objects.filter(pin__status="Activated").aggregate(
total=Sum('price')
)['total']
</code></pre>
<p>If you just want to count how many tickets have an activated pin:</p>
<pre><code>Ticket.objects.filter(pin__status="Activated").count()
</code></pre>
<p>The error you are seeing occurs because <code>amount_sold_pins</code> is a queryset.</p>
<pre><code>amount_sold_pins = Pin.objects.filter(status='Activated')
# amount_sold_pins is a queryset of Pin
total_sold_tickets = amount_sold_pins.ticket.price # the queryset doesn't have a ticket attribute; the individual Pin objects do
</code></pre>
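<p>In plain-Python terms, the <code>aggregate</code>/<code>count</code> pair computes the equivalent of the following (the sample data here is hypothetical, purely to illustrate the semantics):</p>

```python
# each dict stands in for a Ticket row joined with its Pin status
tickets = [
    {"price": 500, "pin_status": "Activated"},
    {"price": 300, "pin_status": "Not Activated"},
    {"price": 500, "pin_status": "Activated"},
]

# equivalent of .aggregate(total=Sum('price'))['total'] over the filtered rows
total = sum(t["price"] for t in tickets if t["pin_status"] == "Activated")
# equivalent of .count() on the filtered queryset
count = sum(1 for t in tickets if t["pin_status"] == "Activated")

print(total, count)  # 1000 2
```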
|
python|django
| 0 |
1,214 | 73,198,441 |
How can I multiply each element of a list by the next one?
|
<p>So, I have this array:</p>
<pre><code>numbers = [5, 9, 3, 19, 70, 8, 100, 2, 35, 27]
</code></pre>
<p>What I want to do is to create another array from this one, but now each value of this new array must be equal to the corresponding value
in the <code>numbers</code> array multiplied by the following.</p>
<p>For example: the first value of the new array should be 45, as it is the multiplication
of 5 (first value) and 9 (next value). The second value of the new array should be 27, as it is the multiplication of 9 (second
value) and 3 (next value), and so on. If there is no next value, the multiplication must be done by 2.</p>
<p>So, this array <code>numbers</code> should result in this other array: <code>[45, 27, 57 ,1330, 560, 800, 200, 70, 945, 54]</code></p>
<p>I only managed to get to this code, but I'm having problems with index:</p>
<pre><code>numbers = [5,9,3,19,70,8,100,2,35,27]
new_array = []
x = 0
while x <= 8: # Only got it to work until 8 and not the entire index of the array
new_array.append(numbers[x] * numbers[x + 1])
x += 1
print(new_array)
</code></pre>
<p>How can I make it work no matter what the length of the array is, and, if there's no next number, multiply by 2? I've tried everything, but this was the closest I could get.</p>
|
<p>Try:</p>
<pre class="lang-py prettyprint-override"><code>numbers = [5, 9, 3, 19, 70, 8, 100, 2, 35, 27]
out = [a * b for a, b in zip(numbers, numbers[1:] + [2])]
print(out)
</code></pre>
<p>Prints:</p>
<pre class="lang-py prettyprint-override"><code>[45, 27, 57, 1330, 560, 800, 200, 70, 945, 54]
</code></pre>
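<p>If you'd rather keep the original index-based loop, here is a sketch that fixes the range and supplies the final multiplier of 2 when there is no next element:</p>

```python
numbers = [5, 9, 3, 19, 70, 8, 100, 2, 35, 27]

new_array = []
for x in range(len(numbers)):
    # multiply by the next element, or by 2 when there is no next element
    nxt = numbers[x + 1] if x + 1 < len(numbers) else 2
    new_array.append(numbers[x] * nxt)

print(new_array)  # [45, 27, 57, 1330, 560, 800, 200, 70, 945, 54]
```

This works for any list length, including an empty list.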
|
arrays|python-3.x
| 1 |
1,215 | 73,336,725 |
Why is "'function' object has no attribute 'user'" showing up?
|
<p>I tried to create 2 users in my project.</p>
<blockquote>
<p>models.py</p>
</blockquote>
<pre><code>class CustomUser(AbstractUser):
user_type_choices = ((1, "Admin"), (2, "NotesUser"))
user_type = models.CharField(max_length=10, choices=user_type_choices, default=1)
class Admin(models.Model):
user_id = models.OneToOneField(CustomUser, on_delete=models.CASCADE)
class NotesUser(models.Model):
user_id = models.OneToOneField(CustomUser, on_delete=models.CASCADE)
@receiver(post_save, sender=CustomUser)
def create_user_profile(sender, created, instance, **kwargs):
if created:
if instance.user_type == 1:
Admin.objects.create(user_id=instance)
if instance.user_type == 2:
NotesUser.objects.create(user_id=instance)
@receiver(post_save, sender=CustomUser)
def save_user_profile(sender, instance, **kwargs):
if instance.user_type == 1:
instance.admin.save()
if instance.user_type == 2:
instance.notesuser.save()
</code></pre>
<p>and i wrote this middleware for checking user access</p>
<blockquote>
<p>LoginCheckMiddleWare.py</p>
</blockquote>
<pre><code>from django.utils.deprecation import MiddlewareMixin
from django.urls import reverse
from django.http import HttpResponseRedirect, HttpResponse
class LoginCheckMiddleWare(MiddlewareMixin):
def process_view(self, view_func, request, view_args, view_kwargs):
modulename=view_func.__module__
user = request.user
if user.is_authenticated:
if user.user_type == "1":
if modulename == "todo_app.views" or modulename == "todo_app.AdminViews":
pass
else:
return HttpResponseRedirect(reverse("admin_home"))
elif user.user_type == "2":
if modulename == "todo_app.UserViews" or modulename == "todo_app.views":
pass
else:
return HttpResponseRedirect(reverse("user_home"))
else:
pass
#return HttpResponseRedirect(reverse("log_in"))
else:
if request.path == reverse("log_in") or request.path == reverse("login_save") or modulename == "django.contrib.auth.views" or modulename == "todo_app.views":
pass
else:
return HttpResponseRedirect(reverse("log_in"))
</code></pre>
<p>I registered the middleware I created in the settings.py file.</p>
<blockquote>
<p>settings.py</p>
</blockquote>
<pre><code>MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'todo_app.LoginCheckMiddleWare.LoginCheckMiddleWare',
]
</code></pre>
<p>But an error shows up when I try to access my login page.</p>
<pre><code>'function' object has no attribute 'user'
</code></pre>
<p><a href="https://i.stack.imgur.com/fHQ9z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fHQ9z.png" alt="this is the error what's showing " /></a></p>
<p>can someone please help me?</p>
|
<p>According to the <a href="https://docs.djangoproject.com/en/4.1/topics/http/middleware/#process-view" rel="nofollow noreferrer">documentation</a>, the first parameter of <code>process_view</code> is the request object. Please change the function signature as below:</p>
<pre class="lang-py prettyprint-override"><code>def process_view(self, request, view_func, view_args, view_kwargs)
</code></pre>
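<p>The cause can be sketched without Django: Django invokes the hook positionally as <code>process_view(request, view_func, view_args, view_kwargs)</code>, so with the parameters swapped, the view function ends up bound to the name <code>request</code>. A minimal illustration (class and function names below are made up for the demo):</p>

```python
class FakeRequest:
    user = "alice"

def my_view(request):
    pass

# Django supplies the arguments positionally in this order
call_args = (FakeRequest(), my_view, (), {})

def broken_process_view(view_func, request, view_args, view_kwargs):
    # wrong order: 'request' here actually receives the view function
    return request.user

def fixed_process_view(request, view_func, view_args, view_kwargs):
    return request.user

print(fixed_process_view(*call_args))   # alice
try:
    broken_process_view(*call_args)
except AttributeError as err:
    print(err)  # 'function' object has no attribute 'user'
```

That is exactly the error message from the question.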
|
python|django|django-settings|django-middleware
| 2 |
1,216 | 64,624,047 |
Updating a record with a value from another table - MySQL (Python)
|
<p>I am trying to update a value in one table (csms) in MySQL from another table (serviceppl). I have attached the query I wrote and both tables, but the query does not work. It does not produce any error, but the record doesn't get updated. Help!</p>
<pre><code>query = "select*from csms where feedback in('null', 'I expect a better one', 'Bad one! Need one again')"
mycursor.execute(query)
p = mycursor.fetchall()
for i in p:
query_upserv = """update csms inner join serviceppl on csms.product = serviceppl.speciality
set serviceppl.name = csms.serviceman where csms.product = '{}'""".format(i[4])
mycursor.execute(query_upserv)
mycon.commit()
</code></pre>
<p><a href="https://i.stack.imgur.com/Af6lv.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Af6lv.jpg" alt="See this for the table images.Please refer to this link or else you won't understand my query" /></a></p>
|
<p>If you want to update the column <code>serviceman</code> of the table <code>csms</code> then you must fix your <code>SET</code> clause:</p>
<pre><code>set csms.serviceman = serviceppl.name
</code></pre>
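<p>Putting that fix into the query from the question would look like this sketch (table and column names are taken from the question; the wrapper function is only for illustration):</p>

```python
def build_update_query(product: str) -> str:
    # corrected UPDATE: assign csms.serviceman FROM serviceppl.name,
    # not the other way around
    return (
        "update csms inner join serviceppl "
        "on csms.product = serviceppl.speciality "
        "set csms.serviceman = serviceppl.name "
        "where csms.product = '{}'".format(product)
    )

print(build_update_query("AC"))
```

As before, pass the resulting string to <code>mycursor.execute()</code> and call <code>mycon.commit()</code>.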
|
python|mysql|join
| 0 |
1,217 | 64,007,764 |
Change monthly data to daily data and spread out values over each day of that month
|
<p>I have a df with monthly data:</p>
<pre><code>date | type | value1 | value2
2020-04-01 | "a" | 30 | 60
2020-04-01 | "b" | 60 | 120
2020-04-01 | "c" | 45 | 180
... | ... | ... | ...
2021-02-01 | "a" | 28 | 56
2021-02-01 | "b" | 21 | 42
2021-02-01 | "c" | 5.6 | 16.8
</code></pre>
<p>I need to get daily data for each month.
Each value1 and value2 should be spread out evenly for every month.</p>
<p>If the month has 30 days = "value1 / 30" and "value2 / 30" for each day in that month.
If the month has 28 days = "value1 / 28" and "value2 / 28" for each day in that month.</p>
<p>Same for 31 days.</p>
<p>End dataframe should be:</p>
<pre><code> date | type | value1 | value2
2020-04-01 | "a" | 1 | 2 # 30 days in April 2020
2020-04-02 | "a" | 1 | 2
2020-04-03 | "a" | 1 | 2
... | ... | ..
2020-04-01 | "b" | 2 | 4 # 30 days in April 2020
2020-04-02 | "b" | 2 | 4
2020-04-03 | "b" | 2 | 4
... | ... | ..
2020-04-01 | "c" | 1.5 | 3 # 30 days in April 2020
2020-04-02 | "c" | 1.5 | 3
2020-04-03 | "c" | 1.5 | 3
... | ... | ..
2021-02-01 | "a" | 1 | 2 # 28 days in February 2021
2021-02-02 | "a" | 1 | 2
2021-02-03 | "a" | 1 | 2
... | ... | ..
2021-02-01 | "b" | 0.75 | 1.5 # 28 days in February 2021
2021-02-02 | "b" | 0.75 | 1.5
2021-02-03 | "b" | 0.75 | 1.5
... | ... | ..
2021-02-01 | "c" | 0.2 | 6 # 28 days in February 2021
2021-02-02 | "c" | 0.2 | 6
2021-02-03 | "c" | 0.2 | 6
</code></pre>
<p>How can I do this with pandas?</p>
|
<p>First add days by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>DataFrame.reindex</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html" rel="nofollow noreferrer"><code>date_range</code></a> and then divide by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.div.html" rel="nofollow noreferrer"><code>DataFrame.div</code></a> with number of days for each month by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.daysinmonth.html" rel="nofollow noreferrer"><code>daysinmonth</code></a>:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
rng = pd.date_range(df['date'].min(), df['date'].max() + pd.offsets.MonthEnd(), name='date')
df = df.set_index('date').reindex(rng, method='ffill')
df = df.div(df.index.daysinmonth, axis=0).reset_index()
print (df)
date value1 value2
0 2020-04-01 1.000000 2.000000
1 2020-04-02 1.000000 2.000000
2 2020-04-03 1.000000 2.000000
3 2020-04-04 1.000000 2.000000
4 2020-04-05 1.000000 2.000000
.. ... ... ...
329 2021-02-24 0.714286 1.071429
330 2021-02-25 0.714286 1.071429
331 2021-02-26 0.714286 1.071429
332 2021-02-27 0.714286 1.071429
333 2021-02-28 0.714286 1.071429
[334 rows x 3 columns]
</code></pre>
<p>EDIT: Solution for <code>reindex</code> per <code>type</code> column separately with custom lambda function:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
f = (lambda x: x.set_index('date')
.reindex(pd.date_range(x['date'].min(),
x['date'].max() + pd.offsets.MonthEnd(),
name='date'), method='ffill'))
df = (df.groupby('type').apply(f)
.reset_index(level=0, drop=True)
.set_index('type', append=True))
df = df.div(df.index.get_level_values(0).daysinmonth, axis=0, level=0).reset_index()
print (df)
date type value1 value2
0 2020-04-01 a 0.033333 0.066667
1 2020-04-02 a 0.033333 0.066667
2 2020-04-03 a 0.033333 0.066667
3 2020-04-04 a 0.033333 0.066667
4 2020-04-05 a 0.033333 0.066667
... ... ... ...
997 2021-02-24 c 0.007143 0.214286
998 2021-02-25 c 0.007143 0.214286
999 2021-02-26 c 0.007143 0.214286
1000 2021-02-27 c 0.007143 0.214286
1001 2021-02-28 c 0.007143 0.214286
</code></pre>
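<p>If you only need the per-month divisor itself, the standard library exposes it too. A minimal sketch independent of pandas, using <code>calendar.monthrange</code> (which handles leap years):</p>

```python
import calendar

def daily_share(year: int, month: int, monthly_value: float) -> float:
    # monthrange returns (weekday_of_first_day, number_of_days)
    days = calendar.monthrange(year, month)[1]
    return monthly_value / days

print(daily_share(2020, 4, 30))  # 1.0 (April has 30 days)
print(daily_share(2021, 2, 28))  # 1.0 (Feb 2021 has 28 days)
print(daily_share(2020, 2, 29))  # 1.0 (Feb 2020 has 29 days, leap year)
```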
|
python|pandas|datetime|converters
| 1 |
1,218 | 71,791,818 |
How to build a null/notnull matrix in pandas dataframe
|
<p>Here's my dataset</p>
<pre><code>Id Column_A Column_B Column_C
1 Null 7 Null
2 8 7 Null
3 Null 8 7
4 8 Null 8
</code></pre>
<p>Here's my expected output</p>
<pre><code> Column_A Column_B Column_C Total
Null 2 1 2 5
Notnull 2 3 2 7
</code></pre>
|
<p>Assuming <code>Null</code> is NaN, here's one option. Using <code>isna</code> + <code>sum</code> to count the NaNs, then find the difference between <code>df</code> length and number of NaNs for <code>Notnulls</code>. Then construct a DataFrame.</p>
<pre><code>nulls = df.drop(columns='Id').isna().sum()
notnulls = nulls.rsub(len(df))
out = pd.DataFrame.from_dict({'Null':nulls, 'Notnull':notnulls}, orient='index')
out['Total'] = out.sum(axis=1)
</code></pre>
<p>If you're into one liners, we could also do:</p>
<pre><code>out = (df.drop(columns='Id').isna().sum().to_frame(name='Nulls')
.assign(Notnull=df.drop(columns='Id').notna().sum()).T
.assign(Total=lambda x: x.sum(axis=1)))
</code></pre>
<p>Output:</p>
<pre><code> Column_A Column_B Column_C Total
Nulls 2 1 2 5
Notnull 2 3 2 7
</code></pre>
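<p>The counting logic itself can also be sketched without pandas, using <code>None</code> to stand in for NaN (data taken from the question):</p>

```python
rows = [
    {"Column_A": None, "Column_B": 7,    "Column_C": None},
    {"Column_A": 8,    "Column_B": 7,    "Column_C": None},
    {"Column_A": None, "Column_B": 8,    "Column_C": 7},
    {"Column_A": 8,    "Column_B": None, "Column_C": 8},
]

cols = ["Column_A", "Column_B", "Column_C"]
# per-column null count; not-null is just the row count minus the nulls
nulls = {c: sum(r[c] is None for r in rows) for c in cols}
notnulls = {c: len(rows) - nulls[c] for c in cols}

print(nulls)     # {'Column_A': 2, 'Column_B': 1, 'Column_C': 2}
print(sum(nulls.values()), sum(notnulls.values()))  # 5 7
```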
|
python-3.x|pandas|dataframe
| 2 |
1,219 | 62,694,581 |
WebDriverException: Message: 'chromedriver.exe' executable may have wrong permissions using Google Colaboratory through Selenium Python
|
<p>I am using Google Chrome version 83.0.4103.116 and ChromeDriver 83.0.4103.39. I am trying to use chromedriver in Google Colab, using the path of chromedriver after uploading it. Could you please point out where I am getting the error? This is the code:</p>
<pre><code>import selenium
from selenium import webdriver
wd = webdriver.Chrome(r'/content/chromedriver.exe')
</code></pre>
<p>This is the error</p>
<pre><code>---------------------------------------------------------------------------
PermissionError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/selenium/webdriver/common/service.py in start(self)
75 stderr=self.log_file,
---> 76 stdin=PIPE)
77 except TypeError:
4 frames
PermissionError: [Errno 13] Permission denied: '/content/chromedriver.exe'
During handling of the above exception, another exception occurred:
WebDriverException Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/selenium/webdriver/common/service.py in start(self)
86 raise WebDriverException(
87 "'%s' executable may have wrong permissions. %s" % (
---> 88 os.path.basename(self.path), self.start_error_message)
89 )
90 else:
WebDriverException: Message: 'chromedriver.exe' executable may have wrong permissions. Please see https://sites.google.com/a/chromium.org/chromedriver/home
</code></pre>
|
<h2>Google Colaboratory</h2>
<p><a href="https://colab.research.google.com/drive/16pBJQePbqkz3QFV54L4NIkOn1kwpuRrj#scrollTo=DO6Z47v1bO-w" rel="nofollow noreferrer">Colaboratory</a> is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud which enables us to write and execute code, save and share your analyses, and access powerful computing resources, all for free from your browser.</p>
<p>The entire colab runs in a cloud VM. If you investigate the VM you will find that the current colab notebook is running on top of Ubuntu 18.04.3 LTS.</p>
<p>So while using <a href="https://stackoverflow.com/questions/54459701/what-is-selenium-and-what-is-webdriver/54482491#54482491">Selenium</a> instead of mentioning the <a href="https://stackoverflow.com/questions/48079120/what-is-the-difference-between-chromedriver-and-webdriver-in-selenium/48080871#48080871">WebDriver</a> variant along with the extension i.e. <code>.exe</code> you need to drop the extension. So effectively your code block will be:</p>
<pre><code>import selenium
from selenium import webdriver
wd = webdriver.Chrome('/content/chromedriver')
</code></pre>
<hr />
<h2>Update</h2>
<p>Incase you aren't sure where the <a href="https://stackoverflow.com/questions/51091121/why-doesnt-chromedriver-require-chrome-or-chromium/51096327#51096327">ChromeDriver</a> is getting downloaded you can move it to a known location and use it as follows:</p>
<pre><code>!apt-get update
!apt install chromium-chromedriver
!cp /usr/lib/chromium-browser/chromedriver /usr/bin
!pip install selenium
from selenium import webdriver
wd = webdriver.Chrome('/usr/bin/chromedriver')
</code></pre>
<hr />
<h2>References</h2>
<p>You can find a couple of detailed relevant discussions in:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/47148872/webdrivers-executable-may-have-wrong-permissions-please-see-https-sites-goo/47150939#47150939">'Webdrivers' executable may have wrong permissions. Please see https://sites.google.com/a/chromium.org/chromedriver/home</a></li>
<li><a href="https://stackoverflow.com/questions/49787327/selenium-on-mac-message-chromedriver-executable-may-have-wrong-permissions/49788993#49788993">Selenium on MAC, Message: 'chromedriver' executable may have wrong permissions</a></li>
</ul>
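<p>If you do keep a manually uploaded driver binary, note that the "wrong permissions" part of the message is about the executable bit (and that a Windows <code>.exe</code> can never run on Colab's Linux VM regardless of permissions). The permission effect can be sketched on a placeholder file:</p>

```shell
# a file without the executable bit cannot be exec'd, which selenium
# surfaces as "executable may have wrong permissions"
touch /tmp/chromedriver_demo
chmod 644 /tmp/chromedriver_demo
test -x /tmp/chromedriver_demo && echo executable || echo not-executable
chmod 755 /tmp/chromedriver_demo
test -x /tmp/chromedriver_demo && echo executable || echo not-executable
```

The same <code>chmod 755</code> applied to a Linux chromedriver binary clears the permission error.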
|
python|selenium|google-chrome|selenium-chromedriver|google-colaboratory
| 2 |
1,220 | 61,776,714 |
Get specific value BeautifulSoup (parsing)
|
<p>I'm trying to extract information from a website.</p>
<p>Using Python (<strong>BeautifulSoup</strong>) </p>
<p>I want to extract the following data (<em>just the figures</em>)</p>
<p><strong>EPS (Basic)</strong> </p>
<p>from: <em><a href="https://www.marketwatch.com/investing/stock/aapl/financials/income/quarter" rel="nofollow noreferrer">https://www.marketwatch.com/investing/stock/aapl/financials/income/quarter</a></em>
<a href="https://i.stack.imgur.com/G7gyx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G7gyx.png" alt="EPS"></a></p>
<p>From the <em>xml</em>:</p>
<p><a href="https://i.stack.imgur.com/bvvAk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bvvAk.png" alt="enter image description here"></a></p>
<p>I built the code:</p>
<pre><code>import pandas as pd
from bs4 import BeautifulSoup
import urllib.request as ur
import request
url_is = 'https://www.marketwatch.com/investing/stock/aapl/financials/income/quarter'
read_data = ur.urlopen(url_is).read()
soup_is=BeautifulSoup(read_data, 'lxml')
cells = soup_is.findAll('tr', {'class': 'mainRow'} )
for cell in cells:
print(cell.text)
</code></pre>
<p>But I'm not able to extract the figures for <strong>EPS (Basic)</strong></p>
<p><a href="https://i.stack.imgur.com/BX6cJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BX6cJ.png" alt="EPS"></a></p>
<p>Is there a way to extract just the data and sorted by column?</p>
|
<p>Try the following <code>css</code> selector, which checks whether the td tag contains the <code>EPS (Basic)</code> text.</p>
<pre><code>import urllib.request as ur
url_is = 'https://www.marketwatch.com/investing/stock/aapl/financials/income/quarter'
read_data = ur.urlopen(url_is).read()
soup_is=BeautifulSoup(read_data, 'lxml')
row = soup_is.select_one('tr.mainRow>td.rowTitle:contains("EPS (Basic)")')
print([cell.text for cell in row.parent.select('td') if cell.text!=''])
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code>[' EPS (Basic)', '2.47', '2.20', '3.05', '5.04', '2.58']
</code></pre>
<hr>
<p>To print in DF</p>
<pre><code>import pandas as pd
from bs4 import BeautifulSoup
import urllib.request as ur
url_is = 'https://www.marketwatch.com/investing/stock/aapl/financials/income/quarter'
read_data = ur.urlopen(url_is).read()
soup_is=BeautifulSoup(read_data, 'lxml')
row = soup_is.select_one('tr.mainRow>td.rowTitle:contains("EPS (Basic)")')
data=[cell.text for cell in row.parent.select('td') if cell.text!='']
df=pd.DataFrame(data)
print(df.T)
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code> 0 1 2 3 4 5
0 EPS (Basic) 2.47 2.20 3.05 5.04 2.58
</code></pre>
|
python|python-3.x|parsing|beautifulsoup
| 1 |
1,221 | 61,809,209 |
I am having a hard time understanding BCEWithLogitsLoss
|
<p>Hello everyone. Recently I have been working on an assignment where I have to predict the mask of an image, given a foreground+background image and a background image. I used BCEWithLogitsLoss because I changed my target values into a combination of 1s and 0s, where 1 is foreground and 0 is background. My results are pretty good, but I am still not convinced why I used BCEWithLogitsLoss. It is a reconstruction problem where I am using a probability distribution (BCEWithLogitsLoss). So can anyone please help me understand why I can use BCEWithLogitsLoss in this example? Thank you.</p>
|
<p>The task you're describing is semantic segmentation, where the model predicts a mask for the image.</p>
<p><a href="https://i.stack.imgur.com/srDWS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/srDWS.jpg" alt="enter image description here"></a></p>
<p>In the mask, for every input pixel, there are 2 possible categories. Either 0 (black) for background, and 1 (white) for foreground.</p>
<p>So, technically, it's a classification problem with 2 classes.</p>
<p>We commonly use binary cross-entropy for binary classification problems <a href="https://gombru.github.io/2018/05/23/cross_entropy_loss/" rel="nofollow noreferrer">(why)</a>, hence BCE is a good fit here. Finally, the "with logits" part means the sigmoid activation is applied internally, so your model won't need a separate activation layer <a href="https://discuss.pytorch.org/t/bceloss-vs-bcewithlogitsloss/33586/2" rel="nofollow noreferrer">(details)</a>.</p>
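<p>The relationship between the two losses can be sketched numerically in plain Python, no PyTorch required: BCE-with-logits on a raw logit x equals plain BCE on sigmoid(x), but it is computed in a numerically safer log-sum-exp form instead of exponentiating first:</p>

```python
import math

def bce_with_logits(x: float, y: float) -> float:
    # numerically stable form: max(x, 0) - x*y + log(1 + exp(-|x|))
    return max(x, 0.0) - x * y + math.log1p(math.exp(-abs(x)))

def bce(p: float, y: float) -> float:
    # plain binary cross-entropy on a probability p in (0, 1)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

x, y = 2.0, 1.0                 # a raw logit and a {0, 1} mask target
p = 1 / (1 + math.exp(-x))      # sigmoid
print(abs(bce_with_logits(x, y) - bce(p, y)) < 1e-9)  # True
```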
|
pytorch
| 0 |
1,222 | 60,501,986 |
Why does the inpaint method not remove the text from IC image?
|
<p>I am trying to mask out the marking on an IC but the <code>inpaint</code> method from OpenCV does not work correctly.</p>
<p><a href="https://i.stack.imgur.com/RW4sT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RW4sT.png" alt="inpainting dies"></a></p>
<p>The left image is the original image (after cropping the ROI). The middle image is the mask I generated through threshholding. The right image is the result of the inpainting method.</p>
<p>This is what I did:</p>
<pre><code>mask = cv2.threshold(img, 120, 255, cv2.THRESH_BINARY)[1]
dst = cv2.inpaint(img, mask, 3, cv2.INPAINT_NS)
</code></pre>
<p>I played around with the third parameter of the inpainting method but it does nothing good.</p>
<p>I saw a question here where someone used exactly the same approach; he also had a dark image where the contrasts are not well distinguished. I also tried both inpainting algorithms, Telea and NS.</p>
<p>What is the issue here?</p>
|
<p>Mainly, dilate the <code>mask</code> used for the inpainting. Also, enlarging the inpaint radius will give slightly better results.</p>
<p>That'd be my suggestion:</p>
<pre class="lang-py prettyprint-override"><code>import cv2
from matplotlib import pyplot as plt
# Read image
img = cv2.imread('ic.png', cv2.IMREAD_GRAYSCALE)
# Binary threshold image
mask = cv2.threshold(img, 120, 255, cv2.THRESH_BINARY)[1]
# Remove small noise
inp_mask = cv2.morphologyEx(mask,
cv2.MORPH_OPEN,
cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
# Dilate mask
inp_mask = cv2.dilate(inp_mask,
cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15)))
# Inpaint
dst = cv2.inpaint(img, inp_mask, 15, cv2.INPAINT_NS)
# Show results
plt.figure(1, figsize=(10, 10))
plt.subplot(2, 2, 1), plt.imshow(img, cmap='gray'), plt.title('Original image')
plt.subplot(2, 2, 2), plt.imshow(mask, cmap='gray'), plt.title('Thresholded image')
plt.subplot(2, 2, 3), plt.imshow(inp_mask, cmap='gray'), plt.title('Inpaint mask')
plt.subplot(2, 2, 4), plt.imshow(dst, cmap='gray'), plt.title('Inpainted image')
plt.tight_layout()
plt.show()
</code></pre>
<p>And, that'd be the output(s):</p>
<p><a href="https://i.stack.imgur.com/k897d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k897d.png" alt="Outputs"></a></p>
<p>Hope that helps!</p>
<pre class="lang-none prettyprint-override"><code>----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.1
Matplotlib: 3.2.0rc3
OpenCV: 4.2.0
----------------------------------------
</code></pre>
|
python|opencv|image-processing|image-preprocessing
| 3 |
1,223 | 60,615,012 |
Most generic way to Substitute all the occurrences of a string with different values Python
|
<p>I have a text file (FILE) which is internally a combination of 4 files (file_f1, file_f2, file_f3, file_f4).</p>
<p>The delimiter separating the files from one another is the string "stackoverflow", which appears as many times as there are files (e.g. if FILE is a combination of 3 files, the string appears 3 times).
FILE currently has 4 occurrences of the string 'stackoverflow'.</p>
<p>vals=['f1','f2','f3','f4']</p>
<p>vals is a list that is extracted from the file names. ( if FILE is combination of 3 files then vals will have 3 strings)</p>
<p>I am trying to substitute each occurrence of "stackoverflow" with vals[i].</p>
<p>Below is my code; it substitutes every occurrence of "stackoverflow" with only vals[0].</p>
<pre><code>with open(os.path.join(file),'r') as fh:
data=fh.readlines()
for line in data:
for i in range(len(vals)):
if "stackoverflow" in line:
line= re.sub('stackoverflow',vals[i], line)
</code></pre>
<p>I have tried the below counter method </p>
<pre><code>count=1
for line in data:
if 'stackoverflow' in line:
if count==1:
line = re.sub('stackoverflow',vals[0], line)
elif count==2:
line = re.sub('stackoverflow','\n'+vals[1], line)
elif count==3:
line = re.sub('stackoverflow','\n'+vals[2], line)
elif count==4:
line = re.sub('stackoverflow','\n'+vals[3], line)
count=count+1
</code></pre>
<p>This does not serve my purpose, as it is static and can't be applied to more or fewer than 4 files.</p>
<p>Can someone please suggest a generic way to accomplish this?
Thanks in advance.</p>
|
<p>Use <code>iter</code></p>
<p><strong>Ex:</strong></p>
<pre><code>vals = iter(['f1','f2','f3','f4'])
with open(os.path.join(file),'r') as fh:
for line in fh:
if "stackoverflow" in line:
line= re.sub('stackoverflow',next(vals), line)
</code></pre>
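<p>A self-contained sketch of the same idea (the sample lines are made up for the demo). One addition worth knowing: passing a default to <code>next()</code> avoids a <code>StopIteration</code> if the string happens to occur more times than there are replacement values:</p>

```python
import re

lines = [
    "header stackoverflow\n",
    "body text\n",
    "header stackoverflow\n",
    "header stackoverflow\n",
]
vals = iter(['f1', 'f2', 'f3', 'f4'])

out = []
for line in lines:
    if "stackoverflow" in line:
        # next(vals, fallback) returns the fallback instead of raising
        line = re.sub('stackoverflow', next(vals, 'stackoverflow'), line)
    out.append(line)

print(out)  # ['header f1\n', 'body text\n', 'header f2\n', 'header f3\n']
```

Each occurrence consumes the next value from the iterator, so the substitutions advance through vals in order.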
|
python-3.x
| 1 |
1,224 | 63,450,441 |
Loading a Python-trained xgboost model in the C++ API, the predict result is empty
|
<p>I am training a 13-category classifier with Python xgboost; my feature dimension is 3207. The model was saved after Python xgboost training, and I checked some cases: the results are normal.
But when I load the model in the C++ xgboost API and run predict, the result is empty. Here is my C++ code demo:</p>
<pre><code> BoosterHandle booster_ = nullptr;
std::cout << "befor:" << booster_ << std::endl;
if (XGBoosterCreate(NULL, 0, &booster_) == 0 &&
XGBoosterLoadModel(booster_, model_path.c_str()) == 0) {
LOG(INFO) << "load xgboost model success !";
} else {
LOG(ERROR) << "load xgboost model error !";
booster_ = NULL;
}
// build DMatrix data, two fake examples; the expected predicted label is 1, prob = 0.927
float d_values[2][3207] = {0};
d_values[0][0] = 1.0;
d_values[0][8] = 1.0;
d_values[0][6] = 1.0;
d_values[0][14] = 1.0;
d_values[1][0] = 1.0;
d_values[1][8] = 1.0;
d_values[1][6] = 1.0;
d_values[1][14] = 1.0;
std::cout <<"predict:" << booster_ << std::endl;
int r = XGDMatrixCreateFromMat(&d_values[0][0], 2, 3207, 0.0, &data);
if (r != 0) {
LOG(ERROR) << "build DMatrix failed!!";
}
std::cout <<"r=" << r << std::endl;
const float* out;
uint64_t len;
auto ret = XGBoosterPredict(booster_, data, 0, 0, &len, &out);
if (ret < 0) {
LOG(ERROR) << "xgboost inference error !";
return -1;
}
std::cout << "len=" << len << std::endl;
XGDMatrixFree(data);
return 0;
</code></pre>
<p>the result is:</p>
<pre><code> r=0
len=0
</code></pre>
<p>Every step in this demo returns status code 0, but the final len is 0. What's the problem?</p>
|
<p>I found the reason: my Python model was trained with the Anaconda build of xgboost, whose version differs from the C++ xgboost source code I was linking against.</p>
|
python|c++|xgboost
| 1 |
1,225 | 68,364,605 |
Discord Bot Commands not working with on_message
|
<p>I have a few commands that worked perfectly, but when I add on_message they don't. I read that you need to add the await bot.process_commands(message) line, but it still doesn't work for me. Why?</p>
<pre class="lang-py prettyprint-override"><code>@bot.event
async def on_message(message):
if message.content.lower() == 'prefix':
prefix = guilds.find_one({"_id": message.guild.id})["prefix"]
await message.channel.send(f"> The prefix for this server is: {prefix}")
else:
return
await bot.process_commands(message)
</code></pre>
|
<p>A function ends as soon as it encounters a <code>return</code> statement. With your current logic it will only process the commands when the <code>if</code> condition is <code>True</code>. Simply remove that <code>else</code> branch.</p>
<pre class="lang-py prettyprint-override"><code>@bot.event
async def on_message(message):
if message.content.lower() == 'prefix':
prefix = guilds.find_one({"_id": message.guild.id})["prefix"]
await message.channel.send(f"> The prefix for this server is: {prefix}")
await bot.process_commands(message)
</code></pre>
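<p>The control flow can be seen in isolation with a plain function (a sketch, no discord involved): code after the <code>if</code>/<code>else</code> runs only on the branch that does not <code>return</code>, which is why <code>process_commands</code> was skipped for every non-"prefix" message.</p>

```python
log = []

def handle(message):
    """Mirror of the buggy on_message shape, without discord."""
    if message == 'prefix':
        log.append('replied with prefix')
    else:
        return  # the function ends here; nothing below runs
    log.append('processed commands')  # reached only when the if branch ran

handle('hello')   # takes the early return
handle('prefix')  # runs both appends
print(log)        # ['replied with prefix', 'processed commands']
```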
|
python|discord.py|command
| 3 |
1,226 | 59,139,646 |
Address generation for bitcoin with Python error
|
<p>I am trying to understand bitcoin with python and trying to create my own vanity address generator.</p>
<p>Below is a snippet of the while loop. I keep getting an error after the loop runs about 10 times. Any help would be highly appreciated. I have searched the forum and found answers, but they don't work. E.g. I changed</p>
<pre><code>intermed = hashlib.sha256(string).digest()
</code></pre>
<p>a few times, modifying the code, but I still get the same result.</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 38, in <module>
compressed_address_base58check = bitcoin.pubkey_to_address(hex_compressed_public_key)
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/bitcoin/mai
n.py", line 452, in pubkey_to_address
return bin_to_b58check(bin_hash160(pubkey), magicbyte) File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/bitcoin/mai
n.py", line 334, in bin_hash160
intermed = hashlib.sha256(string).digest()
TypeError: Unicode-objects must be encoded before hashing
</code></pre>
<pre><code>while True:
pk = binascii.hexlify(os.urandom(32)).decode('utf-8').upper()
privkey = f"{pk:0>64}"
pub = privtopub(privkey) # Get pub addr from priv key
addr = pubtoaddr(pub) # Get the 1Btc... address
decoded_private_key = bitcoin.decode_privkey(privkey, 'hex')
wif_encoded_private_key = bitcoin.encode_privkey(decoded_private_key, 'wif')
# Add suffix '01' to indicate a compressed private Key
compressed_private_key = privkey + '01'
# generate a WIF format from the compressed private key (WIF-compressed)
wif_compressed_private_key = bitcoin.encode_privkey(bitcoin.decode_privkey(compressed_private_key, 'hex'), 'wif')
# Multiply de EC generator G with the priveate key to get a public key point
pubkey = bitcoin.fast_multiply(bitcoin.G, decoded_private_key)
# Encode as hex, prefix 04
hex_encoded_public_key = bitcoin.encode_pubkey(pubkey, 'hex')
# Compress public key, adjust prefix depending on whether y is even or odd
(public_key_x, public_key_y) = pubkey
if public_key_y % 2 == 0:
compressed_prefix = '02'
else:
compressed_prefix = '03'
hex_compressed_public_key = compressed_prefix + bitcoin.encode(public_key_x, 16)
print ('Compressed Public Key: ' + hex_compressed_public_key)
# Generate compressedd bitcoin address from compressed public key
compressed_address_base58check = bitcoin.pubkey_to_address(hex_compressed_public_key)
print ("private key: " + privkey )
print ("uncompressed address: "+ addr )
print ('Compressed address: ' + bitcoin.pubkey_to_address(hex_compressed_public_key))
C_address = bitcoin.pubkey_to_address(hex_compressed_public_key)
U_address = addr
</code></pre>
|
<p>You have to encode your <code>hex_compressed_public_key</code> to generate the address. </p>
<pre><code>compressed_address_base58check = bitcoin.pubkey_to_address(hex_compressed_public_key.encode('utf-8'))
</code></pre>
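<p>The traceback is purely a str-versus-bytes issue: <code>hashlib</code> requires bytes input, which is what <code>.encode('utf-8')</code> supplies. A minimal sketch (the hex string here is made up):</p>

```python
import hashlib

hex_key = '02' + 'ab' * 32  # hypothetical compressed-public-key hex string

# hashlib works on bytes, not str; encoding first avoids the TypeError.
digest = hashlib.sha256(hex_key.encode('utf-8')).digest()
print(len(digest))  # 32

try:
    hashlib.sha256(hex_key)  # passing str raises TypeError
except TypeError:
    print('str input rejected')
```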
|
python-3.x|bitcoin
| -1 |
1,227 | 35,593,724 |
Group by one column and iterate the ranking from 1 to 5 for whole file
|
<p>Total recs: 50 records</p>
<p>Rec id is the unique id in the file.</p>
<p>Input file will be as follows:</p>
<pre><code>rec id1
rec id1
rec id1
rec id2
rec id3
rec id3
rec id4
rec id6
rec id6
rec id7
rec id7
</code></pre>
<p>Output file should have:</p>
<pre><code>A        RANKS
rec id1   1
rec id1   1
rec id1   1
rec id2   2
rec id3   3
rec id3   3
rec id4   4
rec id6   5
rec id6   5
rec id7   1
rec id7   1
rec id8   2
rec id8   2
</code></pre>
<p>and so on...</p>
<p>Once rank reaches 5 it should start again from 1 for next group by A column</p>
<p>What i did for now</p>
<p>Step 1: group by rec ids.<br>
Step 2: run a for loop to assign the ranks. I got the correct output, but it's not fast in Python.
Is there any way to make it fast, e.g. using an apply function or some other existing function?</p>
<p>Code I used:</p>
<pre><code>j = 1
curr_rec_id = df.ix[0, 'rec id']
for i in xrange(len(df.index)):
    if df.ix[i, 'rec id'] == curr_rec_id:
        df.ix[i, 'rank'] = j
    else:
        curr_rec_id = df.ix[i, 'rec id']
        j = j + 1
        if j > 5:  # restart the cycle after rank 5
            j = 1
        df.ix[i, 'rank'] = j
df.to_csv('c:\\out.csv')
</code></pre>
<p>This takes long time if recs are many in the input file.</p>
<p>Is there a more efficient and faster way of doing it, e.g. using an apply or group-by function?</p>
|
<p>You can try <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.factorize.html" rel="nofollow"><code>factorize</code></a> and take the result <code>modulo</code> <code>%</code> <code>5</code>. Lastly you need to add <code>1</code>:</p>
<pre><code>print df
A
0 rec id1
1 rec id1
2 rec id1
3 rec id2
4 rec id3
5 rec id3
6 rec id4
7 rec id6
8 rec id6
9 rec id7
10 rec id7
11 rec id8
12 rec id8
df['RANKS'] = ( pd.factorize(df['A'])[0] % 5 ) + 1
print df
A RANKS
0 rec id1 1
1 rec id1 1
2 rec id1 1
3 rec id2 2
4 rec id3 3
5 rec id3 3
6 rec id4 4
7 rec id6 5
8 rec id6 5
9 rec id7 1
10 rec id7 1
11 rec id8 2
12 rec id8 2
</code></pre>
<p>Or you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.rank.html" rel="nofollow"><code>rank</code></a>, but first you need to subtract <code>1</code> before taking the <code>modulo</code> <code>%</code>:</p>
<pre><code>df['RANKS'] = (( df['A'].rank(method='dense') - 1 ) % 5 ) + 1
</code></pre>
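<p>For reference, the factorize-modulo idea can be sketched without pandas, which makes the first-appearance ordering explicit (plain Python 3.7+, where dicts preserve insertion order):</p>

```python
def cycled_ranks(values, cycle=5):
    """Dense-rank values by first appearance, restarting at 1 after `cycle`."""
    order = {key: i for i, key in enumerate(dict.fromkeys(values))}
    return [order[v] % cycle + 1 for v in values]

vals = ['rec id1', 'rec id1', 'rec id1', 'rec id2', 'rec id3', 'rec id3',
        'rec id4', 'rec id6', 'rec id6', 'rec id7', 'rec id7',
        'rec id8', 'rec id8']
print(cycled_ranks(vals))  # [1, 1, 1, 2, 3, 3, 4, 5, 5, 1, 1, 2, 2]
```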
|
python-2.7|pandas|dataframe
| 1 |
1,228 | 58,706,952 |
Writing xls from python row wise using pandas
|
<p>I have successfully created .xlsx files using pandas</p>
<blockquote>
<p>df = pd.DataFrame([list of array])</p>
</blockquote>
<pre><code>'''
:param data: Data Rows
:param filename: name of the file
:return:
'''
df = pd.DataFrame(data)
# my "Excel" file, which is an in-memory output file (buffer)
# for the new workbook
excel_file = BytesIO()
writer = pd.ExcelWriter(excel_file, engine='xlsxwriter')
df.to_excel(writer, sheet_name='Sheet1_test')
writer.save()
writer.close()
# important step, rewind the buffer or when it is read() you'll get nothing
# but an error message when you try to open your zero length file in Excel
excel_file.seek(0)
# set the mime type so that the browser knows what to do with the file
response = HttpResponse(excel_file.read(),
content_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')
# set the file name in the Content-Disposition header
response['Content-Disposition'] = 'attachment; filename=' + filename + '.xlsx'
return response
</code></pre>
<p>But I have an issue here:
there is an unnecessary serial number (SNo.) column which I don't want. How do I remove that?</p>
<p><a href="https://i.stack.imgur.com/0BNwr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0BNwr.png" alt="There is SNo. as first row and column, How do i remove that?"></a></p>
<p>There is an SNo. as the first row and column. How do I remove that?</p>
|
<p>You can refer to this Medium post: <a href="https://medium.com/better-programming/using-python-pandas-with-excel-d5082102ca27" rel="nofollow noreferrer">https://medium.com/better-programming/using-python-pandas-with-excel-d5082102ca27</a></p>
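<p>For what it's worth, the extra serial-number column is the DataFrame's index; both <code>to_excel</code> and <code>to_csv</code> accept <code>index=False</code> to suppress it. A sketch (using <code>to_csv</code> so it runs without an Excel engine):</p>

```python
import io
import pandas as pd

df = pd.DataFrame({'name': ['a', 'b'], 'qty': [1, 2]})

buf = io.StringIO()
df.to_csv(buf, index=False)  # the same flag works for df.to_excel(...)
lines = buf.getvalue().splitlines()
print(lines[0])  # name,qty  (no leading index column)
```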
|
python|django|pandas|xls|xlsxwriter
| 1 |
1,229 | 60,254,855 |
Pexpect - Read from constantly outputting shell
|
<p>I'm attempting to have pexpect begin running a command which basically continually outputs some information every few milliseconds until cancelled with Ctrl + C.</p>
<p>I've attempted getting pexpect to log to a file, though these outputs are simply ignored and are never logged.</p>
<pre><code>child = pexpect.spawn(command)
child.logfile = open('mylogfile.txt', 'w')
</code></pre>
<p>This results in the command being logged with an empty output.</p>
<p>I have also attempted letting the process run for a few seconds, then sending an interrupt to see if that logs the data, but this again, results in an almost empty log.</p>
<pre><code>child = pexpect.spawn(command)
child.logfile = open('mylogfile.txt', 'w')
time.sleep(5)
child.send('\003')
child.expect('$')
</code></pre>
<p>This is the data in question:</p>
<p><a href="https://i.stack.imgur.com/XwRLG.png" rel="nofollow noreferrer">image showing the data constantly printing to the terminal</a></p>
<p>I've attempted the solution described here: <a href="https://stackoverflow.com/questions/20182827/parsing-pexpect-output">Parsing pexpect output</a> though it hasn't worked for me and results in a timeout.</p>
|
<p>I managed to get it working by using Python's subprocess module for this. I'm not sure of a way to do it with pexpect, but this got what I described.</p>
<pre><code>def echo(self, n_lines):
    output = []
    if self.running is False:
        # start the shell; `cmd` is the command whose output we stream
        self.current_shell = Popen(cmd, stdout=PIPE, shell=True)
        self.running = True
    i = 0
    # Read lines from stdout; break after reading the desired number of lines.
    # Note: stdout yields bytes, so the sentinel must be b'' (not '').
    for line in iter(self.current_shell.stdout.readline, b''):
        output.append(line.decode('utf-8').strip())
        if i == n_lines:
            break
        i += 1
    return output
</code></pre>
</code></pre>
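<p>A self-contained sketch of that pattern (note the sentinel must be <code>b''</code>, since a pipe opened without <code>text=True</code> yields bytes):</p>

```python
import subprocess
import sys

def read_lines(cmd, n_lines):
    """Read up to n_lines lines from a command's stdout, then stop."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    lines = []
    for raw in iter(proc.stdout.readline, b''):
        lines.append(raw.decode('utf-8').strip())
        if len(lines) >= n_lines:
            break
    proc.terminate()
    return lines

captured = read_lines([sys.executable, '-c', "print('tick'); print('tock')"], 2)
print(captured)  # ['tick', 'tock']
```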
|
python-3.x|pexpect
| 0 |
1,230 | 42,934,712 |
python matplotlib plot datetime index
|
<p>I am trying to create a simple line graph based on a datetime index. But I get an error message. </p>
<pre><code>#standard packages
import numpy as np
import pandas as pd
#visualization
%matplotlib inline
import matplotlib.pylab as plt
#create weekly datetime index
edf = pd.read_csv('C:\Users\j~\raw.csv', parse_dates=[6])
edf2 = edf[['DATESENT','Sales','Traffic']].copy()
edf2['DATESENT']=pd.to_datetime(edf2['DATESENT'],format='%m/%d/%Y')
edf2 = edf2.set_index(pd.DatetimeIndex(edf2['DATESENT']))
edf2.resample('w').sum()
edf2
#output
SALES
DATESENT
2014-01-05 476
2014-01-12 67876
</code></pre>
<p>Then I try to plot (the simplest line plot possible to see sales by week) </p>
<pre><code>#linegraph
edf3.plot(x='DATESENT',y='Sales')
</code></pre>
<p>But I get this error message </p>
<blockquote>
<p>KeyError: 'DATESENT'</p>
</blockquote>
|
<p>You're getting a <code>KeyError</code> because your <code>'DATESENT'</code> is the index and NOT a column in <code>edf3</code>. You can do this instead:</p>
<pre><code>#linegraph
edf3.plot(x=edf3.index,y='Sales')
</code></pre>
|
python|pandas|matplotlib
| 2 |
1,231 | 42,657,894 |
BeautifulSoup scrape itemprop="name" in Python
|
<p>I have some python 3.5 code that I want to scrape part of a web page with but instead of printing "Thick and Chewy Peanut Butter Chocolate Chip Bars" it prints "None". Do you know why? Thanks. </p>
<pre><code>import requests, bs4
import tkinter as tk
from tkinter import *
import pymysql
import pymysql.cursors
res = requests.get("http://www.foodnetwork.co.uk/article/traybake-recipes/thick-and-chewy-peanut-butter-chocolate-chip-bars/list-page-2.html")
res.raise_for_status()
recipeSoup = bs4.BeautifulSoup(res.text, "html.parser")
type(recipeSoup)
instructions = recipeSoup.find("div", itemprop="name")
try:
method = str.replace(instructions.get_text(strip=True),". ",".")
method = str.replace(method, ". ", ".")
method = (str.replace(method, ".",".\n"))
except AttributeError:
print(instructions)
</code></pre>
<p><a href="http://www.foodnetwork.co.uk/article/traybake-recipes/thick-and-chewy-peanut-butter-chocolate-chip-bars/list-page-2.html" rel="noreferrer">Link to scraped page</a></p>
|
<p>Change <code>instructions = recipeSoup.find("div", itemprop="name")</code> to <code>instructions = recipeSoup.find("span", itemprop="name")</code> to get the recipe title.</p>
<p>For the instructions you'll have to search for <code>li</code> tags with <code>itemprop=ingredients</code>.</p>
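<p>Separately from bs4, the itemprop lookup itself can be sketched with only the standard library, which shows what <code>find("span", itemprop="name")</code> is matching on:</p>

```python
from html.parser import HTMLParser

class ItempropText(HTMLParser):
    """Collect the text of any element whose itemprop attribute matches."""
    def __init__(self, itemprop):
        super().__init__()
        self.itemprop = itemprop
        self.depth = 0  # >0 while inside a matching element
        self.texts = []

    def handle_starttag(self, tag, attrs):
        if self.depth or dict(attrs).get('itemprop') == self.itemprop:
            self.depth += 1

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and data.strip():
            self.texts.append(data.strip())

parser = ItempropText('name')
parser.feed('<div><span itemprop="name">Thick and Chewy Bars</span></div>')
print(parser.texts)  # ['Thick and Chewy Bars']
```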
|
python|python-3.x|web-scraping|beautifulsoup
| 11 |
1,232 | 65,654,756 |
Blocking unwanted users from access another users profile data with function based view in Django
|
<p>I am working with some function-based views in <code>Django</code>. I have a custom-built decorator named <code>industry_required</code> which verifies that the user is authenticated from an <code>Industrial</code> account.</p>
<p>I have some functions in <code>views.py</code> and their particular URL in <code>urls.py</code>. like:</p>
<p>in <code>urls.py</code>:</p>
<pre><code>path('industryDetails/', views.industryDetails.as_view(), name='industryDetails'),
path('industry_create_report/<int:pk>/', views.industry_create_report, name='industry_create_report'),
</code></pre>
<p>in <code>views.py</code>:</p>
<pre><code>@method_decorator(industry_required, name='dispatch')
class industryDetails(DetailView):
model = Industry
template_name = 'app/industryDetails.html'
def get_queryset(self):
return Industry.objects.filter(user=self.request.user)
def get_object(self):
return get_object_or_404(Industry, user=self.request.user)
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context['inserted_text'] = "inserted text from EmployeeDetails of views.py"
context['employee_table'] = self.object.employee_set.all()
return context
def industry_create_report(request):
industry_create_report_dict = {
}
#bla bla....
return render(request, 'app/industry_create_report.html', industry_create_report_dict)
</code></pre>
<p>Now my problem is: when I use the <code>DetailView</code>, I can easily ensure that the logged-in user can see only his own profile, using the method_decorator named <code>industry_required</code> together with the <code>get_queryset</code> and <code>get_object</code> methods.</p>
<p>But when I use the <code>function-based</code> view, an <code>Industrial</code> account user with <code>pk=1</code> can also see the data of <code>pk=2</code> by using the URL pattern (industry_create_report/2/), which is not desirable.</p>
<p>How can I fix it?</p>
|
<p>In your function based view, you need to fetch the industry using a similar filter to what you specified in <code>industryDetails.get_queryset</code>. Note that in a function-based view it is <code>request</code>, not <code>self.request</code>:</p>
<pre><code>def industry_create_report(request, pk):
    industry = get_object_or_404(Industry.objects.filter(user=request.user), pk=pk)
</code></pre>
|
python|django
| 1 |
1,233 | 50,885,201 |
Error with **args and *kwargs
|
<p>I'm learning *args and **kwargs and had a question. What happens when we instead use ** on a list and * on a dictionary? I know it doesn't work but was wondering if it's a syntax issue or if there's something going on that has a more intuitive explanation.</p>
|
<p>Let's find out:</p>
<pre><code>>>> def f(arg):
... print(arg)
...
>>> f(**[])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: f() argument after ** must be a mapping, not list
>>> f(*{})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: f() missing 1 required positional argument: 'arg'
>>> f(*{'foo': 42})
foo
</code></pre>
<p>So the list fails because the type is completely wrong, and the dict passes the keys, as expected:</p>
<pre><code>>>> list({'foo': 42})
['foo']
</code></pre>
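<p>For contrast, the intended pairing works like this: <code>*</code> expects a sequence and feeds positional arguments, while <code>**</code> expects a mapping and feeds keyword arguments.</p>

```python
def describe(name, qty):
    return f"{name} x{qty}"

args = ["widget", 3]                   # * unpacks a sequence -> positional
kwargs = {"name": "gadget", "qty": 5}  # ** unpacks a mapping -> keyword

print(describe(*args))     # widget x3
print(describe(**kwargs))  # gadget x5
```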
|
python|arguments
| 2 |
1,234 | 45,212,378 |
Python - how to split a string using regex but preserving pattern that contains the split separator?
|
<p>Starting from <code>"param1=1-param2=1.e-01-param3=A"</code>, how to get
<br><code>["param1=1", "param2=1.e-01", "param3=A"]</code> ? The problem is that the separator "-" may be contained in the value of a parameter.</p>
<p>Franck</p>
<pre><code>>>> import re
>>> re.split("-", "param1=1-param2=1.e-01-param3=A")
['param1=1', 'param2=1.e', '01', 'param3=A']
>>> re.split("[^e]-[^0]", "param1=1-param2=1.e-01-param3=A")
['param1=', 'aram2=1.e-0', 'aram3=A']
>>> re.split("[?^e]-[?^0]", "param1=1-param2=1.e-01-param3=A")
['param1=1-param2=1.', '1-param3=A']
</code></pre>
<p><strong>EDIT</strong></p>
<p>OK, I forgot to mention param1, param2, param3 do actually not share the same <code>"param"</code> string. What about if we have to split <code>"p=1-q=1.e-01-r=A"</code>into the same kind of list <code>["p=1", "q=1.e-01", "r=A"]</code> ?</p>
<p><strong>EDIT</strong></p>
<pre><code>>>> re.split("(?:-)(?=[a-z]+)", "p=1-q=1.e-01-r=A")
['p=1', 'q=1.e-01', 'r=A']
</code></pre>
<p>Does the job for me, as I know parameter names cannot contain a <code>-</code>.</p>
<p>Thanks, guys !</p>
|
<p>By using a non-capturing group and a positive lookahead, split on <code>'-'</code> only when it is followed by <code>'param'</code>:</p>
<pre><code>import re
string = "param1=1-param2=1.e-01-param3=A"
print(re.split(r"(?:-)(?=param)", string))
# ['param1=1', 'param2=1.e-01', 'param3=A']
</code></pre>
<p><a href="https://regex101.com/r/znAIyg/1" rel="nofollow noreferrer">Live demo on regex101</a></p>
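<p>When the parameter names are arbitrary (as in the edited question), the lookahead can anchor on the <code>key=</code> shape instead of a fixed prefix, a sketch:</p>

```python
import re

# Generalized sketch for arbitrary parameter names: split on '-' only when
# it introduces a new key=value pair, so hyphens inside values survive.
parts = re.split(r'-(?=\w+=)', 'p=1-q=1.e-01-r=A')
print(parts)  # ['p=1', 'q=1.e-01', 'r=A']
```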
|
python|regex
| 2 |
1,235 | 45,115,219 |
how to print arduino accelerometer real time data to file
|
<p>The program below runs but doesn't print data to the file. I also tried while(1) but didn't get the right output (no data). I am still learning embedded Python and file programming. Can anybody take a look and point me in the right direction?</p>
<p>Code below:</p>
<pre><code>import logging
import serial
import serial.threaded
import threading
#import time
#from datetime import *
#import datetime
import time as t
from datetime import datetime
import sys

ser = serial.Serial('COM3', baudrate=9600, timeout=1)

def getvalues():
    arduionoData = ser.readline().decode('ascii')  #('UTF-8')#
    return arduionoData

def realtime():
    """Generate time string"""
    dt0 = datetime.now()
    dt1 = dt0.replace(minute=1*(int)(dt0.minute), second=(int)(dt0.second), microsecond=0)
    return dt1.time().strftime('%H:%M:%S')

extraction_file = open("C:/Users/gurbir/Desktop/Arduino /accelerometerXonly_jul09a/extraction.txt", "w")
#while(1):
extraction_file.write(getvalues())
#extraction_file.write(realtime())
t.sleep(3)  #try to collect data for 3 seconds
extraction_file.close()
sys.exit()
</code></pre>
|
<p>Try writing a CSV file instead, and keep the loop running so each reading is appended as its own row:</p>
<pre><code>import csv

while(1):
    with open(r'log.csv', 'a', newline='') as f:
        writer = csv.writer(f)
        # wrap the reading in a list so it becomes one field,
        # not one character per column
        writer.writerow([getvalues()])
</code></pre>
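<p>The list-versus-string distinction matters here: <code>writerow</code> treats its argument as an iterable of fields, so a bare string is split into one column per character. A self-contained sketch of the append pattern (with made-up readings):</p>

```python
import csv
import io

# Simulated readings appended one row at a time, as the logging loop would do.
buf = io.StringIO()
writer = csv.writer(buf)
for reading, stamp in [('0.12', '10:00:01'), ('0.15', '10:00:02')]:
    writer.writerow([reading, stamp])  # one list -> one row with two fields

rows = list(csv.reader(io.StringIO(buf.getvalue())))
print(rows)  # [['0.12', '10:00:01'], ['0.15', '10:00:02']]
```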
|
python
| 0 |
1,236 | 58,018,049 |
Counting most common combination of values in dataframe column
|
<p>I have DataFrame in the following form:</p>
<pre><code>ID Product
1 A
1 B
2 A
3 A
3 C
3 D
4 A
4 B
</code></pre>
<p>I would like to count the most common combinations of two values from the <code>Product</code> column, grouped by <code>ID</code>.
So for this example the expected result would be:</p>
<pre><code>Combination Count
A-B 2
A-C 1
A-D 1
C-D 1
</code></pre>
<p>Is this output possible with pandas?</p>
|
<p>We can <code>merge</code> within ID and filter out duplicate merges (I assume you have a default <code>RangeIndex</code>). Then we sort each pair so that the grouping is independent of order:</p>
<pre><code>import pandas as pd
import numpy as np
df1 = df.reset_index()
df1 = df1.merge(df1, on='ID').query('index_x > index_y')
df1 = pd.DataFrame(np.sort(df1[['Product_x', 'Product_y']].to_numpy(), axis=1))
df1.groupby([*df1]).size()
</code></pre>
<hr>
<pre><code>0 1
A B 2
C 1
D 1
C D 1
dtype: int64
</code></pre>
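<p>For reference, the same counts can be reproduced with just the standard library, which makes the pairing step explicit (a sketch independent of the merge trick above):</p>

```python
from collections import Counter
from itertools import combinations

rows = [(1, 'A'), (1, 'B'), (2, 'A'), (3, 'A'), (3, 'C'), (3, 'D'), (4, 'A'), (4, 'B')]

# Group products per ID, then count every unordered pair within each group.
groups = {}
for id_, product in rows:
    groups.setdefault(id_, []).append(product)

pair_counts = Counter(
    tuple(sorted(pair))
    for products in groups.values()
    for pair in combinations(products, 2)
)
print(pair_counts)  # ('A','B') twice; ('A','C'), ('A','D'), ('C','D') once each
```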
|
python|pandas
| 5 |
1,237 | 56,411,436 |
Why does my function return None for some inputs when inserting a value into a list in Python?
|
<p>I am using a function to insert a value into a list, as given below:
1. insert_value.py file:</p>
<pre><code>def insert_value(my_list, value, insert_position):
str_list3 = ['one','three','four', 'five', 'six']
str_list4 = ['i', 't']
if my_list == str_list3:
front = my_list[:insert_position]
back = my_list[insert_position:]
return front + [value] + back
elif my_list == str_list4:
front = my_list[:insert_position]
back = my_list[insert_position:]
a= front +[value]+ back
return a
</code></pre>
<ol start="2">
<li><p>insert_value_test_file.py as given below:</p>
<pre><code>import insert_value
print("\ninsert_value Test")
str_list3 = ['one','three','four', 'five', 'six']
new_list = insert_value.insert_value(str_list3, 'two', 1)
print(new_list)
str_list4 = ['i', 't']
str_list4 = insert_value.insert_value(str_list4, 'p', 0)
print(str_list4)
str_list4 = insert_value.insert_value(str_list4, 's', -1)
print(str_list4)
str_list4 = insert_value.insert_value(str_list4, 's', 7)
print(str_list4)
</code></pre></li>
</ol>
<p>when running this file my output comes as :</p>
<blockquote>
<p>insert_value Test ['one', 'two', 'three', 'four', 'five', 'six'] ['p',
'i', 't'] None None</p>
</blockquote>
<p>Why None for the other inputs in str_list4?</p>
|
<p>Neither of the conditionals in insert_value evaluates to True after the first two calls, so insert_value falls off the end without a return statement (and therefore returns None) for the last two calls.</p>
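<p>A sketch of a fix: drop the hard-coded list comparisons so that every call path returns a value:</p>

```python
def insert_value(my_list, value, insert_position):
    """Insert `value` at `insert_position`, whatever the list contains."""
    return my_list[:insert_position] + [value] + my_list[insert_position:]

lst = insert_value(['i', 't'], 'p', 0)
print(lst)                         # ['p', 'i', 't']
print(insert_value(lst, 's', -1))  # ['p', 'i', 's', 't']
print(insert_value(lst, 's', 7))   # ['p', 'i', 't', 's']
```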
|
python|python-3.x
| 0 |
1,238 | 55,558,483 |
Fetching data from a JSON file using Python 3. How to print specific parts of the data pool?
|
<p>So I've been trying to get data from this JSON file containing stock info. I'm completely new to Python, and JSON is almost foreign. But I do understand the concepts, and I could use a little push in the right direction. I've been doing beginners' guides on Python, and have spent lots of time trying to figure this out on my own; honestly I don't know what I'm doing, and I reckon I should study more, but I'm really hung up on this.
So forgive me for asking questions when I'm probably not even going to understand any answers you might suggest. But then I can at least do more research based on your suggestions, gain more knowledge, do some testing and get it down.</p>
<p>I'm on Arch Linux with Python 3; I have the code in some .py files which I run in the terminal. I've been googling and done lots of testing. I see there are several ways to do what I'm trying to do, and I also got some other samples to work, where the JSON file was structured a bit differently. But I could never get it working on the JSON file I've been trying to get data from. Below is what I've got.</p>
<pre><code> #!/usr/bin/env python
# -*- coding:utf-8 -*-
import urllib.request
import json
url = 'https://silverrive.com/pages/Issuers_sample.json'
req = urllib.request.Request(url)
r = urllib.request.urlopen(req).read()
cont = json.loads(r.decode('utf-8'))
print(cont)
</code></pre>
<p>So if you look at the JSON file there are different entries, and I would like to be able to extract, for example, only the strings/numbers under "ShortingPercent", and have them printed in my terminal. I've not been getting any errors with the code presented above; the problem is that the entire contents of the JSON file get dumped into my terminal. So I guess I'm on the right track by actually getting the data to the terminal, but I don't know how to sort it, exclude parts, or fetch just the parts I need.</p>
<p>All the testing I've done to print only parts of the data always results in an "undefined" string, or an error that list indices must be integers, not strings, or something like that. So I kind of know what the problem is, I just don't know what the correct syntax should look like. I believe it has something to do with format strings, but I've not been able to produce anything more than explained above.</p>
<p>Also, I know I should learn more basics before trying stuff like this, but it fascinated me, and yes, I have OCD, so this is really bugging me.</p>
<p>I've hosted the JSON file on my own website, so you can copy-paste the code and see all the JSON data coming in.</p>
<p>If anyone wants to help out, thanks a lot.</p>
|
<p><a href="https://i.stack.imgur.com/0LbQN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0LbQN.png" alt="enter image description here"></a> </p>
<p>The entire JSON file is loaded into cont. You just need to filter out the required values.</p>
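<p>A sketch of that filtering step, assuming <code>cont</code> parses to a list of dicts that each carry a <code>ShortingPercent</code> key (I can't verify the real file's layout, so adjust the key lookups accordingly):</p>

```python
import json

# Hypothetical sample mirroring a list-of-records layout; the real issuer
# file's structure may differ.
raw = '''
[{"Issuer": "AAA", "ShortingPercent": 2.1},
 {"Issuer": "BBB", "ShortingPercent": 0.7}]
'''
cont = json.loads(raw)
shorting = [rec['ShortingPercent'] for rec in cont if 'ShortingPercent' in rec]
print(shorting)  # [2.1, 0.7]
```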
|
json|python-3.x
| 0 |
1,239 | 28,493,771 |
Making pattern in Python using nested loop
|
<p>
I was trying to make a simple pattern that allows the user to decide the number of rows and columns, for example:</p>
<pre><code>How many rows: 5
How many columns: 5
>>>
*****
*****
*****
*****
*****
</code></pre>
<p>So my code is like this:</p>
<pre><code>row = int(input('How many rows: '))
col = int(input('How may columns: '))
for row in range(row):
for col in range(col):
print ('*', end='')
</code></pre>
<p>But the result is this:</p>
<pre><code>*****
****
***
**
*
</code></pre>
<p>I realized that I gave the for-loop variable the same name as the variable for my input. However, I don't understand why that produces this output. It would be great if you could explain it, maybe with something like a flowchart.</p>
|
<p>This loops <code>col</code> times and then results in <code>col</code> being set to <code>col - 1</code></p>
<pre><code>for col in range(col):
</code></pre>
<p>Due to <code>range(col)</code> looping over <code>0</code> to <code>col - 1</code> and due to the fact that after a loop finishes the loop variable is at that time set to the value from the iteration when the loop exited.</p>
<p>You should use different names for the loop indexes.</p>
<pre><code>row = int(input('How many rows: '))
col = int(input('How may columns: '))
for row_ in range(row):
for col_ in range(col):
print ('*', end='')
</code></pre>
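<p>An alternative sketch that sidesteps the shadowing entirely by keeping the counts in function parameters:</p>

```python
def star_grid(rows, cols):
    """Build the rows x cols star pattern as a single string."""
    return '\n'.join('*' * cols for _ in range(rows))

print(star_grid(3, 5))
```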
|
python|for-loop
| 2 |
1,240 | 56,916,876 |
How to set a user-selected background colour in tkinter?
|
<p>How do I take a user-selected colour and use it as the background colour of the tkinter frame?</p>
<pre><code>list2 = ["red", "red", "red", "red", "blue", "yellow"];
droplist = OptionMenu(root, c, *list2)
droplist.config(width=15)
c.set('select your colour')
droplist.place(x=240, y=320)
root.configure(bg=c)
</code></pre>
|
<p>Let's make this work by filling in some missing pieces:</p>
<pre><code>import tkinter as tk
COLORS = ["red", "blue", 'green', 'cyan', 'magenta', "yellow"]
def change_color(*args):
root.configure(bg=color.get())
root = tk.Tk()
root.minsize(width=200, height=200)
color = tk.StringVar(root)
color.trace('w', change_color)
color.set(COLORS[0])
om = tk.OptionMenu(root, color, *COLORS)
label = tk.Label(root, text='Select your color')
om.pack(side="top")
label.pack(side="top")
root.mainloop()
</code></pre>
<p>The primary missing piece was the <code>StringVar</code> associated with the <code>OptionMenu</code> which allows you to interogate it. To associate a callback function with the <code>OptionMenu</code>, we <em>trace</em> changes to the <code>StringVar</code>.</p>
|
python|python-3.x|tkinter
| 3 |
1,241 | 23,689,635 |
PyMongo sort with metadata
|
<p>I am wondering how to convert the following MongoDB query to PyMongo syntax:</p>
<pre><code>db.articles.find(
{ $text: { $search: "cake" } },
{ score: { $meta: "textScore" } }
).sort( { score: { $meta: "textScore" } } ).limit(3)
</code></pre>
<p>I tried this:</p>
<pre><code>results = \
mongo.db.products.find({ '$text': { '$search': 'cake' } }, { 'score': { '$meta': 'textScore' } }) \
.sort({ 'score': { '$meta': 'textScore' } }) \
.limit(3)
</code></pre>
<p>But I got the following error on sort:</p>
<pre><code>raise TypeError("second item in each key pair must be 1, -1, "
TypeError: second item in each key pair must be 1, -1, '2d', 'geoHaystack', or another valid MongoDB index specifier.
</code></pre>
<p>Can anyone help me?
Thanks in advance.</p>
|
<p>I think the solution is here: <a href="https://github.com/mongodb/mongo-python-driver/blob/master/pymongo/cursor.py#L658" rel="noreferrer">https://github.com/mongodb/mongo-python-driver/blob/master/pymongo/cursor.py#L658</a>. Pass <code>sort</code> a list of (key, direction) pairs, where the direction may be the <code>$meta</code> document when using the '$text' feature:</p>
<pre><code> Beginning with MongoDB version 2.6, text search results can be
sorted by relevance::
cursor = db.test.find(
{'$text': {'$search': 'some words'}},
{'score': {'$meta': 'textScore'}})
# Sort by 'score' field.
cursor.sort([('score', {'$meta': 'textScore'})]) #<<<< HERE
</code></pre>
|
python|mongodb|pymongo|mongodb-query
| 10 |
1,242 | 29,347,789 |
Install taiga on local server error
|
<p>I'm trying to install Taiga on my local server via this tutorial (<a href="http://taigaio.github.io/taiga-doc/dist/setup-production.html" rel="nofollow">http://taigaio.github.io/taiga-doc/dist/setup-production.html</a>)</p>
<p><strong>I get stuck at the part where I have to input these commands</strong></p>
<pre><code>python manage.py migrate --noinput
python manage.py loaddata initial_user
python manage.py loaddata initial_project_templates
python manage.py loaddata initial_role
python manage.py collectstatic --noinput
</code></pre>
<p>I get the following <strong>error</strong></p>
<pre><code>Could not import settings 'settings' (Is it on sys.path? Is there an import error in the settings file?): No module named 'kombu'
</code></pre>
<p>I've already googled it and couldn't find an answer.</p>
<p><strong>Please help me out!</strong></p>
<p><strong>PIP FREEZE</strong></p>
<pre><code>Django==1.7.6
Jinja2==2.7.2
Markdown==2.4.1
Pillow==2.5.3
Pygments==1.6
Unidecode==0.04.16
amqp==1.4.6
bleach==1.4
celery==3.1.17
diff-match-patch==20121119
django-ipware==0.1.0
django-jinja==1.0.4
django-pgjson==0.2.2
django-picklefield==0.3.1
django-sampledatahelper==0.2.2
django-sites==0.8
django-sr==0.0.4
django-transactional-cleanup==0.1.14
djangorestframework==2.3.13
djmail==0.9
djorm-pgarray==1.0.4
easy-thumbnails==2.1
fn==0.2.13
gunicorn==19.1.1
premailer==2.8.1
psycopg2==2.5.4
pytz==2014.4
raven==5.1.1
redis==2.10.3
requests==2.4.1
six==1.8.0
</code></pre>
|
<p>At which command exactly do you get stuck? Can you paste your "pip freeze" output here?</p>
<p>To fix that error you have to install the missing kombu dependency. Check again that everything inside requirements.txt has been fully installed.</p>
<p>This is my "pip freeze" and the commands are working for me:</p>
<pre><code>Django==1.7.6
Jinja2==2.7.2
Markdown==2.4.1
MarkupSafe==0.23
Pillow==2.5.3
Pygments==1.6
Unidecode==0.04.16
amqp==1.4.6
anyjson==0.3.3
billiard==3.3.0.19
bleach==1.4
celery==3.1.17
cssselect==0.9.1
cssutils==1.0
diff-match-patch==20121119
django-ipware==0.1.0
django-jinja==1.0.4
django-pgjson==0.2.2
django-pglocks==1.0.2
django-picklefield==0.3.1
django-sampledatahelper==0.2.2
django-sites==0.8
django-sr==0.0.4
django-transaction-hooks==0.2
django-transactional-cleanup==0.1.14
djangorestframework==2.3.13
djmail==0.10.0
djorm-pgarray==1.0.4
easy-thumbnails==2.1
enum34==1.0
fn==0.2.13
gunicorn==19.1.1
html5lib==0.999
kombu==3.0.24
lxml==3.4.1
premailer==2.8.1
psycopg2==2.5.4
pytz==2014.4
raven==5.1.1
redis==2.10.3
requests==2.4.1
six==1.8.0
</code></pre>
<p>Installing kombu fixes your problem. django-pglocks has to be installed referencing a specific git commit, as specified in its readme, so run the pip command like this:</p>
<pre><code>pip install git+https://github.com/Xof/django-pglocks.git@dbb8d7375066859f897604132bd437832d2014ea
</code></pre>
|
python|localhost|ubuntu-14.04
| 2 |
1,243 | 46,442,737 |
Restful security check for webpage menu items
|
<p>When I want to check that a user has no access to a specific menu item, I pass a value in my REST API like this:</p>
<pre><code>{
menu_item1: false,
menu_item2: true
}
</code></pre>
<p>Depending on these values, menu items will be visible or invisible.<br>
Is it a security issue that an unauthorized user can still see such an item in the page source, just marked invisible? Do I have to render pages and item visibility server-side?<br>
Even if the user can see the menu item, he cannot request the URL because of server-side security checks; I want to have a single-page web application.</p>
|
<p>Yes. Since the item is part of the page DOM, all that you are doing is making it invisible. There are numerous ways in which the client-side page rendering can be changed on the fly to make the item visible again. Ideally you should not send the menu item in the API response at all, and should not add those items to the page DOM during rendering.</p>
<pre><code>{
menu_item2: true
}
</code></pre>
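<p>To make that concrete, the menu can be assembled server side from the user's permissions, so unauthorized items never reach the client at all. A minimal sketch; the permission names and mapping are made up for illustration:</p>

```python
# Hypothetical permission mapping: which permission unlocks which menu item.
ALL_MENU_ITEMS = {
    'menu_item1': 'admin',
    'menu_item2': 'user',
}

def menu_for(user_permissions):
    # Only return the items the user is actually allowed to see;
    # everything else is simply absent from the API response.
    return {item: True for item, needed in ALL_MENU_ITEMS.items()
            if needed in user_permissions}

print(menu_for({'user'}))  # {'menu_item2': True}
```

<p>With this shape, the client simply renders whatever keys it receives; there is nothing to hide.</p>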
|
python|rest|security|frontend|web-deployment
| 0 |
1,244 | 49,731,747 |
How can I install a PyPI package while using Anaconda dependencies?
|
<p>When I install a package through <code>pip</code> (since it was not available on Anaconda), it also pulls all dependencies. It seems it will use the <code>pip</code> versions of the dependencies, even if <code>conda</code> versions (same name) are available.</p>
<p>How can I easily install a <code>pip</code> package, but use <code>conda</code> for the dependencies where such a package exists?</p>
|
<p>There is no easy way, I suspect. Create a virtual environment, install all anticipated dependencies using <code>conda</code>, and then install the main package using <code>pip</code> without <code>-U/--upgrade</code>. <code>pip</code>, seeing the dependencies already installed, will not install them again.</p>
|
python|pip|anaconda|conda
| 0 |
1,245 | 53,641,430 |
Matplotlib's get_ticklabels not working with custom string labels
|
<p>I would like to plot only every 50th item from "dates" as a tick on the x axis. When I comment out the line "plt.xticks(xticks, dates)", it works fine (top figure), but when I attempt to replace the numbers with the actual date strings, it puts ALL the dates instead of only every 50th ones (bottom figure). </p>
<p>Can someone tell me what I'm doing wrong? Thanks!</p>
<pre><code>import matplotlib.pyplot as plt
from datetime import datetime, date
import numpy as np
vals = [10, 11, 10, 6, 7, 4, 4, 6, 2, 3, 11, 11, 8, 8, 6, 9, 8, 8, 6, 6, 5, 2, 3, 5, 2, 3, 5, 6, 4, 6, 4, 1, 3, 4, 5, 5, 5, 7, 4, 4, 3, 5, 4, 5, 6, 4, 5, 3, 2, 4, 4, 4, 2, 2, 1, 7, 4, 8, 5, 3, 1, 3, 4, 6, 3, 7, 3, 3, 4, 5, 8, 6, 4, 2, 2, 4, 3, 2, 6, 4, 9, 6, 6, 5, 14, 17, 13, 11, 5, 7, 11, 7, 9, 6, 3, 7, 5, 4, 5, 7, 5, 4, 3, 4, 4, 2, 1, 2, 2, 2, 1, 1, 3, 4, 2, 2, 1, 2, 2, 2, 1, 3, 1, 1, 2, 1, 1, 2, 5, 1, 2, 1, 1, 1, 2, 2, 1, 1, 1, 2, 1, 2, 4, 5, 4, 3, 3, 2, 4, 3, 3, 3, 2, 1, 1, 3, 5, 4, 4, 4, 5, 6, 5, 3, 2, 3, 2, 2, 2, 2, 1, 1, 1, 2, 3, 2, 2, 3, 3, 2, 2, 2, 1, 1, 2, 2, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 3, 8, 1]
dates =['05/01/2018', '05/02/2018', '05/03/2018', '05/04/2018', '05/05/2018', '05/06/2018', '05/07/2018', '05/08/2018', '05/09/2018', '05/10/2018', '05/11/2018', '05/12/2018', '05/13/2018', '05/14/2018', '05/15/2018', '05/16/2018', '05/17/2018', '05/18/2018', '05/19/2018', '05/20/2018', '05/21/2018', '05/22/2018', '05/23/2018', '05/24/2018', '05/25/2018', '05/26/2018', '05/27/2018', '05/28/2018', '05/29/2018', '05/30/2018', '05/31/2018', '06/01/2018', '06/02/2018', '06/03/2018', '06/04/2018', '06/05/2018', '06/06/2018', '06/07/2018', '06/08/2018', '06/09/2018', '06/10/2018', '06/11/2018', '06/12/2018', '06/13/2018', '06/14/2018', '06/15/2018', '06/16/2018', '06/17/2018', '06/18/2018', '06/19/2018', '06/20/2018', '06/21/2018', '06/22/2018', '06/23/2018', '06/24/2018', '06/25/2018', '06/26/2018', '06/27/2018', '06/28/2018', '06/29/2018', '06/30/2018', '07/01/2018', '07/02/2018', '07/03/2018', '07/04/2018', '07/05/2018', '07/06/2018', '07/07/2018', '07/08/2018', '07/09/2018', '07/10/2018', '07/11/2018', '07/12/2018', '07/13/2018', '07/14/2018', '07/15/2018', '07/16/2018', '07/17/2018', '07/18/2018', '07/19/2018', '07/20/2018', '07/21/2018', '07/22/2018', '07/23/2018', '07/24/2018', '07/25/2018', '07/26/2018', '07/27/2018', '07/28/2018', '07/29/2018', '07/30/2018', '07/31/2018', '08/01/2018', '08/02/2018', '08/03/2018', '08/04/2018', '08/05/2018', '08/06/2018', '08/07/2018', '08/08/2018', '08/09/2018', '08/10/2018', '08/11/2018', '08/12/2018', '08/13/2018', '08/14/2018', '08/15/2018', '08/16/2018', '08/17/2018', '08/18/2018', '08/19/2018', '08/21/2018', '08/22/2018', '08/23/2018', '08/24/2018', '08/25/2018', '08/26/2018', '08/27/2018', '08/28/2018', '08/29/2018', '08/30/2018', '08/31/2018', '09/01/2018', '09/06/2018', '09/07/2018', '09/08/2018', '09/09/2018', '09/10/2018', '09/11/2018', '09/12/2018', '09/13/2018', '09/14/2018', '09/15/2018', '09/16/2018', '09/17/2018', '09/18/2018', '09/19/2018', '09/20/2018', '09/21/2018', '09/22/2018', '09/23/2018', '09/24/2018', 
'09/25/2018', '09/26/2018', '09/27/2018', '09/28/2018', '09/29/2018', '09/30/2018', '10/01/2018', '10/02/2018', '10/03/2018', '10/04/2018', '10/05/2018', '10/06/2018', '10/07/2018', '10/08/2018', '10/09/2018', '10/10/2018', '10/11/2018', '10/12/2018', '10/13/2018', '10/14/2018', '10/15/2018', '10/16/2018', '10/17/2018', '10/18/2018', '10/19/2018', '10/20/2018', '10/21/2018', '10/22/2018', '10/23/2018', '10/24/2018', '10/25/2018', '10/28/2018', '10/29/2018', '10/30/2018', '10/31/2018', '11/01/2018', '11/02/2018', '11/03/2018', '11/04/2018', '11/05/2018', '11/06/2018', '11/07/2018', '11/08/2018', '11/09/2018', '11/10/2018', '11/11/2018', '11/12/2018', '11/13/2018', '11/14/2018', '11/15/2018', '11/16/2018', '11/22/2018', '11/23/2018', '11/25/2018', '11/26/2018', '11/27/2018', '11/29/2018', '12/01/2018', '12/02/2018', '12/05/2018']
xticks = np.arange(len(vals))
ax = plt.axes()
ax.bar(xticks,vals)
plt.xticks(rotation=90)
plt.xticks(xticks, dates)
for label in ax.xaxis.get_ticklabels()[::50]:
label.set_visible(True)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/9gygH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9gygH.png" alt="Here, works fine when the ticks are just the numbers and line is commented out. "></a></p>
<p><a href="https://i.stack.imgur.com/MPabv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MPabv.png" alt="Here, the selection of only every 50th tick does not work and all dates are shown when we add back the conversion from number to date"></a></p>
|
<p>The problem is that <code>plt.xticks(xticks, dates)</code> creates a tick (with a visible label) at every position, so calling <code>set_visible(True)</code> on every 50th label is a no-op: all labels are already visible. Instead, place a tick only at every 50th position and label just those:</p>
<pre><code>ax = plt.axes()
pos = np.arange(len(vals))
ax.bar(pos, vals)
ax.set_xticks(pos[::50])
ax.set_xticklabels(np.array(dates)[::50], rotation=90)
</code></pre>
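<p>The key idea is ordinary slicing: tick positions and labels are subsampled with the same step, so they stay aligned. A stdlib sketch with made-up label strings:</p>

```python
dates = ['d%03d' % i for i in range(202)]  # illustrative label strings
pos = list(range(len(dates)))

# Take every 50th position and the matching every-50th label.
tick_pos = pos[::50]
tick_labels = dates[::50]

print(tick_pos)     # [0, 50, 100, 150, 200]
print(tick_labels)  # ['d000', 'd050', 'd100', 'd150', 'd200']
```

<p>These two slices are what get passed to <code>set_xticks</code> and <code>set_xticklabels</code>.</p>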
|
python|datetime|matplotlib
| 2 |
1,246 | 53,548,791 |
Checking a DataFrame string value contains words with certain prefixes
|
<p>First time working with Pandas, and I'm struggling to query the DataFrame for this spec.</p>
<p>Let's say I create a dataframe as follows:</p>
<pre><code>df = pd.read_csv(_file, names=['UID', 'Comment', 'Author', 'Relevancy'])
</code></pre>
<p>Which gives:</p>
<pre><code>UID . Comment . Author . Relevancy
1234 . motorcycles are cool . dave . 12
5678 . motorhomes are cooler . mike . 13
9101 . i love motorbikes . frank . 14
</code></pre>
<p>I need to return all of these rows when I query the word 'motor'.</p>
<p>I.e. a row should be returned if its "Comment" string contains a word that is prefixed by a given word. </p>
<p>I essentially want to do something like:</p>
<pre><code>df["Comment"][any(word in df["Comment"].str.split() if word.startswith("motor"))]
</code></pre>
<p>Any help and direction is much appreciated. </p>
|
<p>Pandas <code>str</code> operations aren't truly vectorised (they loop in Python under the hood), so a list comprehension is a perfectly reasonable choice here:</p>
<pre><code>df = pd.DataFrame({'Comment': ['motorcycles are cool', 'motorhomes are cooler',
'i love motorbikes', 'nomotor test string',
'some other test string']})
flag = [any(w.startswith('motor') for w in x.casefold().split()) for x in df['Comment']]
res = df.loc[flag]
print(res)
Comment
0 motorcycles are cool
1 motorhomes are cooler
2 i love motorbikes
</code></pre>
<p>A less efficient version is possible which uses Pandas <code>str</code> methods:</p>
<pre><code>def check_words(x):
    return any(w.startswith('motor') for w in x)

flag = df['Comment'].str.lower().str.split().map(check_words)
res = df.loc[flag]
</code></pre>
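<p>A word-boundary regex is another option; the same pattern can be handed to <code>df['Comment'].str.contains</code> (with <code>case=False</code>) for an equivalent check. A sketch using plain <code>re</code> on a list of strings:</p>

```python
import re

# \b ensures 'motor' starts a word, so 'nomotor' does not match.
pattern = re.compile(r'\bmotor', re.IGNORECASE)

comments = ['motorcycles are cool', 'nomotor test string', 'i love motorbikes']
matches = [bool(pattern.search(c)) for c in comments]
print(matches)  # [True, False, True]
```
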
|
python|string|pandas
| 0 |
1,247 | 52,446,339 |
(psycopg2.DataError) invalid input syntax for integer: importing from csv file?
|
<p>the data in my csv file is like this:</p>
<pre><code>081299289X,China Dolls,Lisa See,2014
0345498127,Starter for Ten,David Nicholls,2003
0061053716,Imajica,Clive Barker,1991
0553262149,Emily Climbs,L.M. Montgomery,1925
</code></pre>
<p>my import.py is like this:</p>
<pre><code>import csv
import os
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
engine = create_engine('postgres://pnrvnavoxvnlgw:....')
db = scoped_session(sessionmaker(bind=engine))
def main():
    f = open("books.csv")
    reader = csv.reader(f)
    for isbn, title, author, year in reader:
        db.execute(
            "INSERT INTO books (isbn, title, author, publication_year) "
            "VALUES (:isbn, :title, :author, :publication_year)",
            {"isbn": isbn, "title": title, "author": author, "publication_year": year}
        )
    db.commit()

if __name__ == "__main__":
    main()
</code></pre>
<p>For some reason I cannot seem to see what's wrong with this code. this is the error:</p>
<pre><code>sqlalchemy.exc.DataError: (psycopg2.DataError) invalid input syntax for integer: "year"
LINE 1: ...publication_year) VALUES ('isbn', 'title', 'author', 'year')
</code></pre>
<p>Help?</p>
|
<p>From the looks of it, your CSV contains a header as its first row. Skip it with, for example,</p>
<pre><code>next(reader, None)
</code></pre>
<p>before your for-loop.</p>
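<p>A self-contained sketch of the fix, using an in-memory file with a header row like the one your error message suggests:</p>

```python
import csv
import io

# Simulated file contents: a header row followed by one data row (illustrative).
data = "isbn,title,author,year\n081299289X,China Dolls,Lisa See,2014\n"

reader = csv.reader(io.StringIO(data))
next(reader, None)  # skip the header; the None default avoids StopIteration on an empty file

rows = [(isbn, title, author, year) for isbn, title, author, year in reader]
print(rows)  # [('081299289X', 'China Dolls', 'Lisa See', '2014')]
```
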
|
python|postgresql|sqlalchemy
| 5 |
1,248 | 52,845,915 |
Unable to print out a function
|
<p>Hi I'm new to programming, and ran into some problems while practicing using python.</p>
<p>So basically my task is to create a simple quiz with (t/f) as the answer. So here's my code:</p>
<pre><code>def quiz(question, ans):
    newpara = input(question)
    if newpara == ans:
        print("correct")
    else:
        print("incorrect")
    return(newpara)

quiz_eval = quiz("You should save your notebook: ", "t")
print("your answer is ", quiz_eval)
</code></pre>
<p>When the user input something, this code prints:</p>
<pre><code>your answer is hi
</code></pre>
<p>However, I want it to print "<code>your answer is correct/incorrect</code>". </p>
<p>I'm not sure what has gone wrong here. Any thoughts?</p>
|
<p>Then you should be returning either "correct" or "incorrect". Your function returns the value of the input, so it's normal to get that output. Try this:</p>
<pre><code>def quiz(question, ans):
    newpara = input(question)
    if newpara == ans:
        answer = "correct"
    else:
        answer = "incorrect"
    return answer

quiz_eval = quiz("You should save your notebook: ", "t")
print("your answer is ", quiz_eval)
</code></pre>
<p>Note that my answer is just to answer your question and give the output the way you want it. If your program requires to limit the input of the user to either "t" or "f", then you need to add more conditions in your code.</p>
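<p>One design note: keeping the comparison separate from the I/O makes the logic easy to test without patching <code>input()</code>. A sketch (the function name is illustrative):</p>

```python
def evaluate(given, correct):
    # Pure function: normalize the input a bit, then compare.
    return "correct" if given.strip().lower() == correct else "incorrect"

print(evaluate("t", "t"))   # correct
print(evaluate("F ", "t"))  # incorrect
```

<p>The interactive part then reduces to <code>print("your answer is", evaluate(input(question), "t"))</code>.</p>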
|
python
| 0 |
1,249 | 52,592,663 |
Unicode Characters in QT Designer for Python?
|
<p>I am writing a gui for an executable. My first one was written using Python and TKinter. It worked. Now I am working on version 2, and I want to do it in QT4. I found the Qt Designer which is really helpful for creating the layout etc.</p>
<p>I created a QLabel, and in there I want it to display a greek Gamma Symbol.</p>
<p>But how? I did not find any way to do this. There is a button I can press and there appears to be an option to write "source code". But what does it accept? I tried different methods:</p>
<ul>
<li>u"\u03B3"</li>
<li><code>&gamma;</code></li>
<li>and a couple more.</li>
</ul>
<p>If I write the &, the text changes its color to red, in the source code editor. But it will still just display the text I typed in.</p>
<p>Is this at all possible, or I just write some placeholder and then change it to Gamma in the python code itself?</p>
<p>Thx.</p>
|
<p><code>QLabel</code> supports HTML. For this you must right-click, select the option <code>Change rich text...</code>, and in the <code>Source</code> tab place HTML code:</p>
<pre><code><p>&gamma;</p>
</code></pre>
<p>That is, within the tags <code><p> </p></code> obtaining the following:</p>
<p><a href="https://i.stack.imgur.com/VSxVU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VSxVU.png" alt="enter image description here"></a></p>
|
python|pyqt|qt-designer
| 2 |
1,250 | 47,789,371 |
Use different configuration sources as input to waf's doxygen feature
|
<p>Based on the question I asked <a href="https://stackoverflow.com/q/47396160/8972161">here</a>, where I wanted to use different sources based on the build variant specified, it now appears that I have the same problem for building the doxygen documentation, as I need different configurations based on the build variant.</p>
<p>The example stays quite the same, but gets a little longer:</p>
<hr>
<p>The directory structure looks like this:</p>
<ul>
<li>doc
<ul>
<li>a.conf # config of doxygen sources for source a</li>
<li>b.conf # config of doxygen sources for source b</li>
<li>stylesheetfile # the same for both</li>
</ul></li>
<li>src
<ul>
<li>a.c</li>
<li>b.c</li>
</ul></li>
<li>doxygen.py</li>
<li>wscript</li>
</ul>
<p>The waf <code>doxygen.py</code> implementation is the same as on repository on GitHub of the waf project <a href="https://github.com/waf-project/waf/blob/master/waflib/extras/doxygen.py" rel="nofollow noreferrer">doxygen.py</a>.</p>
<hr>
<p>The waf script is extended to accept the <code>doxygen</code> option. But inside the definition of <code>doxygen</code> it is not possible to check against the build variant (here <code>bld.variant</code>), and this leads to an error, as there seems to be no build variant defined in the doxygen feature.</p>
<p>The part where I check for the build variant is marked in the example with arrows.</p>
<p><strong>What, where and how do I have to implement changes, that the doxygen feature works and uses the correct configuration based on the build variant.</strong></p>
<p>MWE:</p>
<pre><code>def options(opt):
    opt.load(['doxygen'], tooldir=os.path.join('doxygen.py'))

def configure(ctx):
    load('compiler_c')

def build(bld):
    if not bld.variant:
        bld.fatal("call 'waf build_a' or 'waf build_b', and try 'waf --help'")
    if bld.variant == 'a':
        bld.program(source='a.c', target='app', includes='.', features=['doxygen'])
    elif bld.variant == 'b':
        bld.program(source='b.c', target='app', includes='.', features=['doxygen'])
    else:
        bld.fatal('err')

# create build and clean commands for each build context
from waflib.Build import BuildContext, CleanContext

for x in 'a b'.split():
    for y in (BuildContext, CleanContext):
        name = y.__name__.replace('Context', '').lower()

        class tmp(y):
            __doc__ = '''executes the {} of {}'''.format(name, x)
            cmd = name + '_' + x
            variant = x

def doxygen(bld):
    import sys
    import logging
    from waflib import Logs

    if bld.env.DOXYGEN:
        if bld.variant == 'a':      # <=================
            _conf = 'src/a.conf'    # <=================
        elif bld.variant == 'b':    # <=================
            _conf = 'src/b.conf'    # <=================
        else:                       # <=================
            bld.fatal('err')        # <=================

        bld.logger = Logs.make_logger('doxy.log', out)
        hdlr = logging.StreamHandler(sys.stdout)
        formatter = logging.Formatter('%(message)s')
        hdlr.setFormatter(formatter)
        bld.logger.addHandler(hdlr)
        bld(features='doxygen', doxyfile=_conf)
</code></pre>
|
<p>Your problem is that you have not defined your variants for the doxygen command. You should add something like:</p>
<pre class="lang-py prettyprint-override"><code>variants = ['a', 'b']
for variant in variants:
    dox = "doxygen"

    class tmp(BuildContext):
        __doc__ = '''executes the {} of {}'''.format(dox, variant)
        cmd = dox + '_' + variant
        fun = dox
        variant = variant
</code></pre>
<p>A minimal working example:</p>
<pre class="lang-py prettyprint-override"><code>def configure(conf):
    pass

def doxygen(bld):
    print "variant =", bld.variant

# Create build commands with variants
from waflib.Build import BuildContext

variants = ['a', 'b']
for variant in variants:
    dox = "doxygen"

    class tmp(BuildContext):
        __doc__ = '''executes the {} of {}'''.format(dox, variant)
        cmd = dox + '_' + variant
        fun = dox
        variant = variant
</code></pre>
|
python|doxygen|waf
| 1 |
1,251 | 37,350,892 |
Python3.5 Error with urlib.request library
|
<ul>
<li>This script visits the url specified and outputs the contents into a local file. </li>
<li>When ans = 1 the script works as intended.</li>
<li>When ans = 2 the script always returns an error for some reason.</li>
<li><p>All help is appreciated. :)</p>
<pre><code>import urllib.request

ans = True
while ans:
    print("""
    - Menu Selection -
    1. Automatic
    2. Manual
    3. Add
    4. Exit
    """)
    ans = input('Select Option : ')
    if ans == "1":
        with urllib.request.urlopen('http://www.mywebsite.net/something.txt') as response:
            html = response.read()
        f = open('proxylist.txt', 'a')
        f.write(str(html))
        f.close()
        print('Data saved.')
        ans = True
    if ans == "2":
        input('Enter link : ')
        link = input()
        try:
            with urllib.request.urlopen(link) as response:
                html1 = response.read()
            f = open('proxylist.txt', 'a+')
            f.write(str(html1))
            f.close()
            print('Data saved.')
            ans = True
        except:
            print('User Input Error')
            ans = True
</code></pre></li>
</ul>
|
<p>You are trying to input data twice and ignoring the first result:</p>
<pre><code>input('Enter link : ')
link = input()
</code></pre>
<p>Change that to just:</p>
<pre><code>link = input('Enter link : ')
</code></pre>
|
python|python-3.x|urllib
| 1 |
1,252 | 32,412,348 |
Structuring Flask using Blueprints error
|
<p>I'm attempting to break a small app into units and use the Blueprint pattern in Flask. However I seem to have trouble in getting the app running.</p>
<p>Here is my structure:</p>
<pre><code>\myapp
    login.py
    \upload
        __init__.py
        views.py
</code></pre>
<p>Here is login.py:</p>
<pre><code>import sys, os
from flask import Flask, Blueprint, request
from flask import render_template, session
from .upload import views

app = Flask(__name__)
app.register_blueprint(views)

@app.route('/')
def index():
    return render_template('index.html')

if __name__ == '__main__':
    print app.url_map
    print app.blueprints
    app.run(debug=True)
</code></pre>
<p>for <code>__init__.py</code></p>
<pre><code> from flask import Flask
app.register_blueprint('upload', __name__)
</code></pre>
<p>and in views.py</p>
<pre><code>from flask import Flask, Blueprint, request, redirect, url_for, render_template
import views

upload = Blueprint('upload', __name__)

@upload.route('/uploaded', methods=['GET', 'POST'])
def upload_file():
    ...
</code></pre>
<p>Looking at the logs on heroku, here is the error:</p>
<pre><code>from .upload import views
2015-09-05T10:59:00.506513+00:00 app[web.1]: ValueError: Attempted relative import in non-package
</code></pre>
<p>Have I structured my package and blueprint correctly? I used the documentation, but I think I'm missing something here.</p>
|
<p>There are three problems with your code:</p>
<ul>
<li>You are using a relative import in <code>login.py</code> to include <code>views</code>, but given the folder structure and the fact that you use <code>login.py</code> as starting point, it cannot work here. Simply use <code>from upload import views</code> instead.</li>
<li>The <code>__init__.py</code> references an unknown variable, <code>app</code>. You actually don't need anything in this file, remove everything.</li>
<li>You try to register a module as a blueprint via <code>app.register_blueprint(views)</code> in <code>login.py</code>. This cannot work, a module is not a blueprint. Instead, import the <code>upload</code> variable defined in the <code>views</code> module, that is a blueprint: <code>app.register_blueprint(views.upload)</code>.</li>
</ul>
<p>Changing that should get you started. Two side notes:</p>
<ul>
<li>You have an import in <code>views.py</code> that should probably not be there: <code>import views</code>. This should be harmless however.</li>
<li>I answered a question about Flask blueprints some days ago, and gave an example about how to use them. Maybe would you find <a href="https://stackoverflow.com/questions/32310580/flask-blueprints-how-to-use-them/32311791#32311791">the answer</a> useful in your case as well (in particular the part where the blueprint definition is moved to the <code>__init__.py</code> file defining the blueprint).</li>
</ul>
|
python|heroku|flask
| 2 |
1,253 | 28,141,279 |
What happens to my scipy.sparse.linalg.eigs?
|
<p>I use python 2.7.8 with the Anaconda distribution and I have problems with scipy.
Let A be a sparse matrix; I want to calculate its eigenvalues but if I write: </p>
<pre><code>import scipy
scipy.sparse.linalg.eigs(A)
</code></pre>
<p>I get the error</p>
<pre><code> Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'module' object is not callable
</code></pre>
<p>What is the problem? (The version of scipy is 0.15.1)</p>
|
<p>Does this work for you?</p>
<pre><code>import numpy as np
from scipy import sparse
import scipy.sparse.linalg as sp_linalg

B = np.random.rand(10,10)
A_dense = np.dot(B.T, B)
A_sparse = sparse.lil_matrix(A_dense)
sp_linalg.eigs(A_sparse, 3)
</code></pre>
<p>It seems that you have to explicitly import the submodules. <code>scipy</code> does not load them by default.</p>
|
python|scipy|sparse-matrix|anaconda
| 5 |
1,254 | 23,091,450 |
AWS Elastic Beanstalk - Using Mongodb instead of RDS using Python and Django environment
|
<p>I've been following the official Amazon documentation on deplaying to the Elastic Bean Stalk.</p>
<p><a href="http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python.html" rel="nofollow">http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python.html</a></p>
<p>and the customization environment</p>
<p><a href="http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html#customize-containers-format" rel="nofollow">http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html#customize-containers-format</a></p>
<p>However, I am stuck. I do not want to use the built-in RDS database; I want to use MongoDB, but have my Django/Python application scale as a RESTful frontend, or rather an API endpoint, for my users. </p>
<p>Currently I am running one EC2 instance to test out my django application. </p>
<p>Some problems that I have with the Elastic Bean:
1. I cannot figure out how to run commands such as </p>
<pre><code>pip install git+https://github.com/django-nonrel/[email protected]
</code></pre>
<ol>
<li>Since I cannot install the device mongo driver for use by django I cannot run my mongodb commands. </li>
</ol>
<p>I was wondering if I am just skipping over some concepts or just not understanding how deploying on the beanstalk works. I can see that beanstalk just launches EC2 instances and possibly need to write custom scripts or something I don't know. </p>
<p>I've searched around but I don't exactly know what to ask in regards to this. Top results of google are always Amazon documents which are less than helpful in customization outside of their RDS environment. I know that Django traditionally uses RDS environments but again I don't want to use those as they are not flexible enough for the web application I am writing. </p>
|
<p>You can create a customized AMI for your specific needs; the steps are outlined in the AWS documentation below. Basically, you would create a custom AMI with the packages needed to host your application and then update the Beanstalk config to use your customized AMI.</p>
<p><a href="http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.customenv.html" rel="nofollow">Using Custom AMIs</a></p>
|
django|mongodb|python-2.7|amazon-web-services|amazon-elastic-beanstalk
| 2 |
1,255 | 46,692,763 |
readable columns names in my CSV file that is exported from elastic search?
|
<p>Below is the code that grabs some data from Elasticsearch and exports it to a csv file called ‘mycsvfile’.
I want to change the column names so that they are readable by a human.
Here is the code:</p>
<pre><code>from elasticsearch import Elasticsearch
import csv
es = Elasticsearch(["9200"])
# Replace the following Query with your own Elastic Search Query
res = es.search(index="search", body=
{
"_source": ["DTDT", "TRDT", "SPLE", "RPLE"],
"query": {
"bool": {
"should": [
{"wildcard": {"CN": "TEST1"}}
]
}
}
}, size=10)
with open('mycsvfile.csv', 'w') as f: # Just use 'w' mode in 3.x
header_present = False
for doc in res['hits']['hits']:
my_dict = doc['_source']
if not header_present:
w = csv.DictWriter(f, my_dict.keys())
w.writeheader()
header_present = True
w.writerow(my_dict)
</code></pre>
<p>When I run the above query, the CSV file data looks like this:</p>
<pre><code>DTDT TRDT SPLE SACL RPLE
20170512 12/05/2017 15:39 1001 0 0
20170512 12/05/2017 15:39 1001 0 0
20170908 08/09/2017 02:42 1001 0 0
20170908 08/09/2017 06:30 1001 0 0
</code></pre>
<p>As you can see, the column names are the same as in the query, and I want to give them readable names when the file is generated. For example, instead of DTDT I would like to have DATE, and TRDT would be TIME, etc.</p>
<p>Could someone show and fix my code up for me to enter column names to the CSV file?</p>
<p>Thank you in advance</p>
|
<p>Edit: Sorry, my first version was wrong. The corrected version is as follows.</p>
<pre><code>with open('mycsvfile.csv', 'w') as f:  # Just use 'w' mode in 3.x
    # Map the raw field names to human-readable column labels
    labels = {'DTDT': 'DATE', 'TRDT': 'TIME', 'SPLE': 'SPLE', 'RPLE': 'RPLE'}
    header_present = False
    for doc in res['hits']['hits']:
        my_dict = doc['_source']
        if not header_present:
            w = csv.DictWriter(f, fieldnames=my_dict.keys())
            w.writerow(labels)  # write the readable labels instead of writeheader()
            header_present = True
        w.writerow(my_dict)
</code></pre>
<p>What made your script write out the raw field names was <code>my_dict.keys()</code> being passed to <code>writeheader()</code>. Keep those keys as the <code>fieldnames</code> (so the data rows still map correctly), but write a row of readable labels as the first line instead. Note that passing an unrelated list as <code>fieldnames</code> would make <code>writerow(my_dict)</code> raise a <code>ValueError</code>, because the dict's keys would no longer match.</p>
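<p>A runnable sketch of the idea with an in-memory file, keeping the original keys as <code>fieldnames</code> and writing a row of readable labels in place of <code>writeheader()</code> (the field names and values are taken from the question):</p>

```python
import csv
import io

labels = {'DTDT': 'DATE', 'TRDT': 'TIME', 'SPLE': 'SPLE', 'RPLE': 'RPLE'}
row = {'DTDT': '20170512', 'TRDT': '12/05/2017 15:39', 'SPLE': '1001', 'RPLE': '0'}

out = io.StringIO()
w = csv.DictWriter(out, fieldnames=row.keys())
w.writerow(labels)  # readable header row instead of w.writeheader()
w.writerow(row)

print(out.getvalue())
```
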
|
python|python-3.x|csv|elasticsearch|python-3.6
| 1 |
1,256 | 57,076,754 |
How to get the posts of a user (who is currently logged in) in django?
|
<p>When a user logs in and goes to his/her account, they must see their old posts, i.e. what they have uploaded in the past. I tried an if statement in the template, comparing the currently logged-in user (request.user) with the users available in the database. If the condition is true, then all the posts related to that user should be visible on the account page. But this if condition is not working. When I remove this condition, it shows all of the posts, whether they were uploaded by the user or not. And when I apply this condition, it shows nothing: no error, just my navbar, which means the rest of the code is fine and the problem is with the if statement. </p>
<p>Template :</p>
<pre><code>{% for i in userstweet %}
{% if request.user==i.user %}
Username : {{i.user}}
<br />
{{i.datetime}}
<br />
<img src="{{i.image.url}}" width=400 height=400 />
<br />
<br />
{{i.tweet}}
{% endif %}
{% endfor %}
</code></pre>
<p>View :</p>
<pre><code>def myaccount(request):
    return render(request, 'tweets/myaccount.html', {'userstweet': Tweets.objects.all})
</code></pre>
<p>url :</p>
<pre><code>path('myaccount', views.myaccount,name='account')
</code></pre>
<p>i expect the posts uploaded by the current logged in user will be shown on my account page but it gives nothing.</p>
|
<p>Isn't it better to just get the current user and query all Tweets by that user? What you can do is:</p>
<pre><code>def myaccount(request):
    current_user = request.user
    # Where user is whatever foreign key you specified that relates to user.
    user_tweets = Tweets.objects.filter(user=current_user)
    return render(request, 'tweets/myaccount.html', {'userstweet': user_tweets})
</code></pre>
|
python|django
| 3 |
1,257 | 57,236,475 |
Why "0" and '0' are evaluated to False?
|
<p>I was under the impression that checking the truthiness of a string or character meant checking whether it was empty. But the following code gave me unexpected output.</p>
<pre><code>print('0'==True)
print("0"==True)
</code></pre>
<p>Output:</p>
<pre><code>False
False
</code></pre>
<p>What is happening? What we were really checking?</p>
|
<p>They are truthy (in a boolean context):</p>
<pre><code>if '0':
    print("you will see this")

if '':  # for comparison
    print("you will not see this")
# Alternately:
bool('0') # True
bool('') # False
</code></pre>
<p>But they are not <em>equal to</em> the special value <code>True</code>.</p>
<p>There is no contradiction.</p>
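<p>The distinction can be demonstrated directly: <code>==</code> compares values (and <code>True</code> equals the integer <code>1</code>), while an <code>if</code> test effectively calls <code>bool()</code>:</p>

```python
# Truth-testing: any non-empty string is truthy.
print(bool('0'))    # True
print(bool(''))     # False

# Equality: True is the integer 1, and '0' is not equal to 1.
print('0' == True)  # False
print(1 == True)    # True
```
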
|
python
| 5 |
1,258 | 57,242,268 |
How to change object once in a loop
|
<p>I am trying to make it so that 'X' is the background and 'A' is an object to be moved up or down. But once the loop changes the 'A' to 'X', it finds the 'A' again at the next index and moves it once more. How do I make it move only once and stop?</p>
<pre><code>newlist = ['X','X','A','X','X','X','X']
for i in range(len(newlist)):
    print(newlist[i])
    if newlist[i] == 'A':
        newlist[i] = 'X'
        newlist[i+1] = 'A'
</code></pre>
<p>It should show</p>
<pre><code>X
X
X
A
X
X
X
</code></pre>
<p>from</p>
<pre><code>X
X
A
X
X
X
X
</code></pre>
<p>but right now it shows</p>
<pre><code>X
X
X
A
A
A
A
</code></pre>
|
<p>While doing the downshift as well (just for myself) I found an even better approach than in my previous answer:</p>
<p><a href="https://docs.python.org/3.3/library/collections.html#collections.deque" rel="nofollow noreferrer">deque</a> adds rotation functionality to iterables.</p>
<pre><code>from collections import deque
newlist = ['X', 'X', 'A', 'X', 'A', 'X', 'A']
print(f'Original:\n{newlist}') # debug
a = deque(newlist)
b = deque(newlist)
a.rotate(1)
newlist = list(a)
print(f'Shift up:\n{newlist}')
b.rotate(-1)
newlist = list(b)
print(f'Shift down:\n{newlist}')
</code></pre>
<p><strong>Result:</strong></p>
<pre><code>Original:
['X', 'X', 'A', 'X', 'A', 'X', 'A']
Shift up:
['A', 'X', 'X', 'A', 'X', 'A', 'X']
Shift down:
['X', 'A', 'X', 'A', 'X', 'A', 'X']
</code></pre>
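<p>For completeness, the original loop can also be fixed directly: stop after the first move so the shifted <code>'A'</code> is not picked up again on the next iteration. A minimal sketch:</p>

```python
newlist = ['X', 'X', 'A', 'X', 'X', 'X', 'X']

for i in range(len(newlist) - 1):  # -1 so i + 1 stays in range
    if newlist[i] == 'A':
        newlist[i] = 'X'
        newlist[i + 1] = 'A'
        break  # move only once

print(newlist)  # ['X', 'X', 'X', 'A', 'X', 'X', 'X']
```
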
|
python|loops|if-statement
| 1 |
1,259 | 25,831,240 |
Create custom field at delivery order (stock.picking.out) which gets its value from sales order
|
<p>I have a sales order form which contains a custom delivery date field.
Now I want to pass the value from delivery date field in sales order to the commitment date field in delivery order (stock.picking.out). </p>
<p>Do I need to add two columns, one in stock.picking and one in stock.picking.out?
And also, how can I take the delivery date field value from the sales order during the automatic creation of the delivery order (at the click of the confirm order button)?</p>
<p>I am using v7. Thanks in advance</p>
|
<p>I got the answer from the following link. Thanks very much to @Sudhir Arya for helping me.</p>
<p><a href="https://www.odoo.com/forum/help-1/question/how-to-create-a-custom-field-at-delivery-order-stock-picking-out-which-gets-its-value-from-sales-order-in-openerp-62669" rel="nofollow">link</a></p>
<p>Here is the full code</p>
<pre><code>class StockPicking(orm.Model):
    _inherit = 'stock.picking'
    _columns = {
        'x_commitment_date': fields.date('Commitment Date', help="Committed date for delivery."),
    }

StockPicking()


class StockPickingOut(orm.Model):
    _inherit = 'stock.picking.out'

    def __init__(self, pool, cr):
        super(StockPickingOut, self).__init__(pool, cr)
        self._columns['x_commitment_date'] = self.pool['stock.picking']._columns['x_commitment_date']

StockPickingOut()


class Sale_Order(osv.Model):
    _inherit = 'sale.order'

    def _prepare_order_picking(self, cr, uid, order, context=None):
        # super() must reference the class name Sale_Order as defined above
        vals = super(Sale_Order, self)._prepare_order_picking(cr, uid, order, context=context)
        vals.update({'x_commitment_date': order.commitment_date})
        return vals

Sale_Order()
</code></pre>
|
python|openerp|openerp-7|odoo
| 0 |
1,260 | 25,507,534 |
Getting Keys Within Range/Finding Nearest Neighbor From Dictionary Keys Stored As Tuples
|
<p>I have a dictionary which has coordinates as keys. They are by default in 3 dimensions, like <code>dictionary[(x,y,z)]=values</code>, but may be in any dimension, so the code can't be hard coded for 3.</p>
<p>I need to find if there are other values within a certain radius of a new coordinate, and I ideally need to do it without having to import any plugins such as numpy.</p>
<p>My initial thought was to split the input into a cube and check no points match, but obviously that is limited to integer coordinates, and would grow exponentially slower (radius of 5 would require 729x the processing), and with my initial code taking at least a minute for relatively small values, I can't really afford this.</p>
<p>I heard finding the nearest neighbor may be the best way, and ideally, cutting down the keys used to a range of +- a certain amount would be good, but I don't know how you'd do that when there's more than one point being used.<br>Here's how I'd do it with my current knowledge:</p>
<pre><code>dimensions = 3
minimumDistance = 0.9
#example dictionary + input
dictionary[(0,0,0)]=[]
dictionary[(0,0,1)]=[]
keyToAdd = [0,1,1]
closestMatch = 2**1000
tooClose = False
for keys in dictionary:
#calculate distance to new point
originalCoordinates = str(split( dictionary[keys], "," ) ).replace("(","").replace(")","")
for i in range(dimensions):
distanceToPoint = #do pythagors with originalCoordinates and keyToAdd
#if you want the overall closest match
if distanceToPoint < closestMatch:
closestMatch = distanceToPoint
#if you want to just check it's not within that radius
if distanceToPoint < minimumDistance:
tooClose = True
break
</code></pre>
<p>However, performing calculations this way may still run very slow (it must do this to millions of values). I've searched the problem, but most people seem to have simpler sets of data to do this to. If anyone can offer any tips I'd be grateful.</p>
|
<p>You say you need to determine IF there are any keys within a given radius of a particular point. Thus, you only need to scan the keys, computing the distance of each to the point until you find one within the specified radius. (And if you do comparisons to the <em>square</em> of the radius, you can avoid the square roots needed for the actual distance.)</p>
<p>One optimization would be to sort the keys by their "Manhattan distance" from the point (that is, the sum of the absolute component offsets): the Euclidean distance can never exceed the Manhattan distance, so any key whose Manhattan distance is already within the radius must be within it, and scanning keys in order of increasing Manhattan distance tends to find a close key early. This would avoid some of the more expensive calculations (though I don't think you need any trigonometry).</p>
<p>If, as you suggest later in the question, you need to handle multiple points, you can obviously process each individually, or you could find the center of those points and sort based on that.</p>
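<p>A minimal sketch of the squared-distance early-exit check described above (the function and variable names are mine, not from the question):</p>

```python
def too_close(points, new_point, min_distance):
    """True if any existing point lies within min_distance of new_point.

    Works in any dimension; compares squared distances so no sqrt is needed.
    """
    min_dist_sq = min_distance ** 2
    for p in points:                      # iterating a dict yields its keys
        dist_sq = sum((a - b) ** 2 for a, b in zip(p, new_point))
        if dist_sq <= min_dist_sq:
            return True                   # early exit: no need to scan the rest
    return False

dictionary = {(0, 0, 0): [], (0, 0, 1): []}
print(too_close(dictionary, (0, 1, 1), 0.9))   # False: nearest point is 1.0 away
```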
|
python|dictionary
| 1 |
1,261 | 44,576,248 |
Django - Regroup By Date
|
<p>I have a model with the following fields: "Date", "Employee", and "Planned Hours". Each employee has various planned hours for various dates.</p>
<p>I'm attempting to structure my template where employees are listed in rows and their planned hours are listed in columns under the correct corresponding date.</p>
<p>I'm currently filtering by two dates. One employee has time for the second date, but not the most current date. I would like his time to be listed correctly under the 2nd date, but currently it's displaying under the current date. See 1 hour for Chase: <a href="https://i.imgur.com/O4sake8.png" rel="nofollow noreferrer">http://i.imgur.com/O4sake8.png</a></p>
<p>My view:</p>
<pre><code>def DesignHubR(request):
emp3_list = Projectsummaryplannedhours.objects.values_list('displayval', 'employeename').\
filter(businessunit='a').filter(billinggroup__startswith='PLS - Project').filter(Q(displayval=sunday2)|Q(displayval=sunday)).\
annotate(plannedhours__sum=Sum('plannedhours'))
emp3 = map(lambda x: {'date': x[0], 'employee_name': x[1], 'planned_hours': x[2]}, emp3_list)
context = {'sunday': sunday, 'emp2': emp2, 'sunday2': sunday2, 'emp3': emp3}
return render(request,'department_hub_ple.html', context)
</code></pre>
<p>My Template: </p>
<pre><code>{% regroup emp3 by employee_name as emp9 %}
{% for employee_name in emp9 %}
<!--Job-->
<div class="table-row table-job-column employee-row">{{employee_name.grouper}}</div>
{% regroup employee_name.list by date|date as date_list %}
{% for y in date_list %}
<div class="table-row table-fr-column">{{y.list.0.planned_hours|default:"0"}}</div>
<div class="table-row table-fr-column">{{y.list.1.planned_hours|default:"0"}}</div>
{% endfor %}{% endfor %}
</code></pre>
<p>emp3_list Data:</p>
<pre><code>[{'date': 'W/E 6/18/17', 'planned_hours': Decimal('45.00000'), 'employee_name': 'Waylan'}, {'date': 'W/E 6/25/17', 'planned_hours': Decimal('45.00000'), 'employee_name': 'Waylan'}, {'date': 'W/E 6/18/17', 'planned_hours': Decimal('17.00000'), 'employee_name': 'Michael'}, {'date': 'W/E 6/25/17', 'planned_hours': Decimal('13.00000'), 'employee_name': 'Michael'}, {'date': 'W/E 6/25/17', 'planned_hours': Decimal('1.00000'), 'employee_name': 'Chase'}, {'date': 'W/E 6/18/17', 'planned_hours': Decimal('27.00000'), 'employee_name': 'Bert'}, {'date': 'W/E 6/25/17', 'planned_hours': Decimal('29.00000'), 'employee_name': 'Bert'}]
</code></pre>
<p>Model:</p>
<pre><code>class Projectsummaryplannedhours(models.Model):
number = models.CharField(db_column='Number', max_length=50, blank=True, null=True) # Field name made lowercase.
description = models.CharField(db_column='Description', max_length=100, blank=True, null=True) # Field name made lowercase.
clientname = models.CharField(db_column='ClientName', max_length=100, blank=True, null=True) # Field name made lowercase.
department = models.CharField(db_column='Department', max_length=50, blank=True, null=True) # Field name made lowercase.
billinggroup = models.CharField(db_column='BillingGroup', max_length=50, blank=True, null=True) # Field name made lowercase.
businessunit = models.CharField(db_column='BusinessUnit', max_length=50, blank=True, null=True) # Field name made lowercase.
employeename = models.CharField(db_column='EmployeeName', max_length=50, blank=True, null=True) # Field name made lowercase.
displayval = models.CharField(db_column='DisplayVal', max_length=50, blank=True, null=True) # Field name made lowercase.
startofweek = models.DateTimeField(db_column='StartOfWeek', blank=True, null=True) # Field name made lowercase.
endofweek = models.DateTimeField(db_column='EndOfWeek', blank=True, null=True) # Field name made lowercase.
plannedhours = models.DecimalField(db_column='PlannedHours', max_digits=10, decimal_places=5, blank=True, null=True) # Field name made lowercase.
rateschedule = models.CharField(db_column='RateSchedule', max_length=50, blank=True, null=True) # Field name made lowercase.
classification = models.CharField(db_column='Classification', max_length=50, blank=True, null=True) # Field name made lowercase.
dollarsforecast = models.DecimalField(db_column='DollarsForecast', max_digits=10, decimal_places=5, blank=True, null=True) # Field name made lowercase.
deleted = models.NullBooleanField(db_column='Deleted') # Field name made lowercase.
datelastmodified = models.DateTimeField(db_column='DateLastModified', blank=True, null=True) # Field name made lowercase.
datecreated = models.DateTimeField(db_column='DateCreated', blank=True, null=True) # Field name made lowercase.
</code></pre>
|
<p>This can be a solution (pseudocode: wrap the template output in a date check):</p>
<pre><code>{% for y in date_list %}
    if (y.list.0.date == current_date) {
        {{y.list.0.planned_hours|default:"0"}}
        {{y.list.1.planned_hours|default:"0"}}
        {{y.list.2.planned_hours|default:"0"}}
    } else {
        -----
        {{y.list.0.planned_hours|default:"0"}}
        {{y.list.1.planned_hours|default:"0"}}
    }
{% endfor %}{% endfor %}
</code></pre>
|
python|django
| 0 |
1,262 | 44,755,690 |
Increment function name in Python
|
<p>I apologise if there is already an answer to my question, I've searched stack overflow for a while but found nothing that I could use.</p>
<p>I'm learning how to create classes at the moment and I've constructed classes for the explicit Runge-Kutta methods 1-4. The names of the classes are 'RK_1', 'RK_2', 'RK_3' and 'RK_4'. In order to test my code, I decided to solve the Legendre differential equation, which I also created a class for called 'Legendre'.</p>
<p>Now I wanted to solve the problem, so I wrote a function that uses a particular RK scheme and solves the Legendre problem. I wanted to do this for each one of my RK schemes, so I wrote the same function 4 times i.e</p>
<pre><code>def solve_Legendre_1(p,Tmax,init,dt=0.001):
f = Legendre(p)
solver = RK_1(init,f)
while solver.now() < Tmax:
solver(dt)
return solver.state()
def solve_Legendre_2(p,Tmax,init,dt=0.001):
f = Legendre(p)
solver = RK_2(init,f)
while solver.now() < Tmax:
solver(dt)
return solver.state()
def solve_Legendre_3(p,Tmax,init,dt=0.001):
f = Legendre(p)
solver = RK_3(init,f)
while solver.now() < Tmax:
solver(dt)
return solver.state()
def solve_Legendre_4(p,Tmax,init,dt=0.001):
f = Legendre(p)
solver = RK_4(init,f)
while solver.now() < Tmax:
solver(dt)
return solver.state()
</code></pre>
<p>However, I realised there must be an easier way to do this. So I thought I might be able to use a loop and str.format() to change the name of the function and get it to take in its corresponding RK scheme, something like</p>
<pre><code>for j in range(4):
def solve_Legendre_%s(p,Tmax,init,dt=0.001) % (j+1):
f = Legendre(p)
solver = RK_%s(init,f) % (j+1)
while solver.now() < Tmax:
solver(dt)
return solver.state()
</code></pre>
<p>but obviously this won't work. Does anyone know how I should approach this?</p>
<p>Thanks for your help.</p>
|
<p>You can simply pass the <code>RK_n()</code> function in as a parameter to avoid duplicating the other function:</p>
<pre><code>def solve_Legendre(p,Tmax,init,dt=0.001, RK=RK_1):
f = Legendre(p)
solver = RK(init,f)
while solver.now() < Tmax:
solver(dt)
return solver.state()
</code></pre>
<p>and if you want you can bind that last parameter in advance:</p>
<pre><code>import functools
solve_Legendre_1 = functools.partial(solve_Legendre, RK=RK_1)
solve_Legendre_2 = functools.partial(solve_Legendre, RK=RK_2)
...
</code></pre>
|
python|string
| 7 |
1,263 | 24,424,909 |
Equation roots: parameter doesn't get simplified
|
<p>I am using Python with Sympy.</p>
<p>I need to solve the following equation, finding the 4 roots (omega is my unknown):</p>
<pre><code>deter= 0.6*omega**4*cos(omega*t)**2 - 229.0*omega**2*cos(omega*t)**2 + 5880.0*cos(omega*t)**2
</code></pre>
<p>I tried to use solve:</p>
<pre><code>eqcarr=solve(deter,omega,exclude=[t])
</code></pre>
<p>I get this output:</p>
<pre><code>[-18.8143990830350, -5.26165884593044, 5.26165884593044, 18.8143990830350, 1.5707963267949/t, 4.71238898038469/t]
</code></pre>
<p>I only need the first 4 values, and not the values with the t coefficient. I expect the cos(omega*t)**2 to be simplified in solve, but this doesn't happen.</p>
|
<p>According to documentation <code>solve</code> will not solve for any of the free symbols passed in the <code>exclude</code>.</p>
<blockquote>
<p>'exclude=[] (default)'
don't try to solve for any of the free symbols in exclude;
if expressions are given, the free symbols in them will
be extracted automatically.</p>
</blockquote>
<p>It is not meant to filter solution.</p>
<p>You can solve your problem by doing this:</p>
<pre><code>In [10]: from sympy import *
In [11]: from sympy.abc import omega, t
In [12]: deter= 0.6*omega**4*cos(omega*t)**2 - 229.0*omega**2*cos(omega*t)**2 + 5880.0*cos(omega*t)**2
In [13]: eqcarr=solve(deter,omega,exclude=[t])
In [14]: filtered = [i for i in eqcarr if not i.has(t)]
In [15]: filtered
Out[15]: [-18.8143990830350, -5.26165884593044, 5.26165884593044, 18.8143990830350]
</code></pre>
|
python|parameters|equation|sympy
| 2 |
1,264 | 24,114,801 |
Dict into Dict Python
|
<p>I am new to Python. Currently I am working with data dictionaries.
I want to create a dict inside a dict, like this:</p>
<pre><code>dates = {201101:{perf:10, reli:20, qos:300}, 201102:{perf:40, reli:0, qos:30}}
</code></pre>
<p>I already have the keys, and I have to create default values for initialization, i.e.:</p>
<p><code>{201101:{perf:0, reli:0}, 201102:{perf:0, reli:0}}</code></p>
<p>How do I do the initialization and then update a specific inner dict, i.e. dates[201101]['perf'] = 10?
Thanks!</p>
|
<p>First, you need to separate keys from their values with a <code>:</code>. Secondly, your keys need to be strings or numbers. Example.</p>
<pre><code>dates = { 201101: {'perf': 10, 'reli':20, 'qos': 300} }
</code></pre>
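<p>To initialize every key with zeroed defaults and then update a single inner value (the key list here is an assumption), a dict comprehension works:</p>

```python
keys = [201101, 201102]
dates = {k: {'perf': 0, 'reli': 0} for k in keys}

dates[201101]['perf'] = 10   # update one inner value
print(dates)  # {201101: {'perf': 10, 'reli': 0}, 201102: {'perf': 0, 'reli': 0}}
```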
|
python|dictionary
| 1 |
1,265 | 24,103,795 |
pandas: Calculated column based on values in one column
|
<p>I have columns like this in a csv file (I load it using <code>read_csv('fileA.csv', parse_dates=['ProcessA_Timestamp'])</code>)</p>
<pre><code>Item ProcessA_Timestamp
'A' 2014-06-08 03:32:20
'B' 2014-06-08 03:32:20
'A' 2014-06-08 03:33:19
'C' 2014-06-08 03:33:20
'B' 2014-06-08 03:33:40
'D' 2014-06-08 03:38:20</code></pre>
<p>How would I go about creating a column called <code>ProcessA_ProcessingTime</code>, which would be the time difference between <strong>last</strong> time an item occurs in the table <code>-</code> <strong>first</strong> time it occurs in the table.</p>
<p>Similarly, I have other data frames (which I'm not sure if they should be merged into one dataframe).. that have their own <code>Process*_Timestamp</code>s. </p>
<p>Finally, I need to create a table, where the data is like this:</p>
<pre><code>Item ProcessA_ProcessingTime ProcessB_ProcessingTime ... ProcessX_ProcessingTime
'A' 00:00:59 ...
'B' 00:01:21
'C' NOT FINISHED YET
'D' NOT FINISHED YET
</code></pre>
|
<p>You can use the pandas groupby-apply combo. Group the dataframe by "Item" and apply a function that calculates the process time. Something like:</p>
<pre><code>import pandas as pd
def calc_process_time(row):
ts = row["ProcessA_Timestamp].values
if len(ts) == 1:
return pd.NaT
else:
return ts[-1] - ts[0] #last time - first time
df.groupby("Item").apply(calc_process_time)
</code></pre>
|
python|pandas|data-analysis
| 1 |
1,266 | 20,651,317 |
NetworkX multi-directed graph possible?
|
<p>I have a network for which I'm trying to figure out the best possible graph representation. I'm no graph theorist, but a biologist, so please pardon my lack of technicality here.</p>
<p>Currently, the network can be thought of as follows: "n" layers of networks, each layer holding a different set of edges between the nodes. Each edge is directed, and has a probability associated with it, but that probability property isn't used until later. Each layer is stored as a separate graph, as a CSV file, in an adjacency list representation.</p>
<p>Using an adjacency list representation, I have a "summary" layer, in which I compress all "n" layers, with each layer contributing a value of "+1" to the weights between each node. This is currently stored as a separate graph, as a CSV file, in an adjacency list representation.</p>
<p>If there were "n" edges between a pair of nodes, then in the summary layer, there the edge would have a weight of "n"; there can only be "n" or fewer edges between any pair of nodes.</p>
<p>I also have a "full-only" layer, which is only comprised of the edges that have weight "n". Similarly, currently stored as a CSV file, in an adjacency list representation.</p>
<p>Finally, I have a "most probable full-only" layer. In this layer, the probabilities kick in. For each of the "full-only" layer edges, I multiply all of the probabilities associated with each of the n edges (recall: the "full" layer is the sum of "n" edges, each edge with a probability).</p>
<p>In my analysis of this network, sometimes it's convenient to be able to switch between any of the "n" layers and the "summary" layers. However, the most convenient minimal storage format (i.e. without pre-computing anything) is to store the individual edges as a table (illustrated below):</p>
<pre><code>|Node 1 | Node 2 | Layer 1 Weight | Layer 2 Weight | ... | Layer n Weight |
|-------|--------|----------------|----------------|-----|----------------|
| x | y | 0.99 | 1.00 | ... | 1.00 |
| x | z | 0.98 | 1.00 | ... | 0.97 |
| y | z | 0 (no edge) | 1.00 | ... | 1.00 |
</code></pre>
<p>I say that this format is convenient, because I am able to generate such a table very easily.</p>
<p>So here's my question: is it possible in NetworkX to store such a graph (multi-layered, directed on each layer)? If it were possible, then I'd imagine being able to write functions to compute, on-the-fly, the "summary" graph, the "full-only" graph, and the "most probable full-only" graph, since they are subsets of one another. I can also imagine writing functions that compute other graphs, such as the graph that also incorporates complementary sets of multiple edges into the nodes that don't have full edges going into each node. </p>
<p>However, checking the NetworkX documentation, I can't find anything like what I'm looking for. The best I could find is a "multigraph", which allows multiple edges between nodes, but each edge has to be undirected. Am I missing something here?</p>
<p>Also, is there a better representation for what I'm trying to achieve? Again, I'm lacking experience with graph theory here, so I might be missing something. Many thanks (in advance) to everyone who takes time to respond!</p>
|
<p>There is a MultiDiGraph() object in NetworkX that might work in your case.
You can store multiple directed edges, each with arbitrary attributes.
The nodes can also have arbitrary attributes. </p>
<pre><code>In [1]: import networkx as nx
In [2]: G = nx.MultiDiGraph()
In [3]: G.add_edge(1,2,color='green')
In [4]: G.add_edge(1,2,color='red')
In [5]: G.edges(data=True)
Out[5]: [(1, 2, {'color': 'green'}), (1, 2, {'color': 'red'})]
In [6]: G.node[1]['layer1']=17
In [7]: G.node[1]['layer2']=42
In [8]: G.nodes(data=True)
Out[8]: [(1, {'layer1': 17, 'layer2': 42}), (2, {})]
</code></pre>
<p>There are some helper functions to get and set node attributes that might be useful, e.g.</p>
<pre><code>In [9]: nx.get_node_attributes(G,'layer1')
Out[9]: {1: 17}
</code></pre>
|
python|algorithm|csv|graph|networkx
| 6 |
1,267 | 71,790,788 |
Why pandas fillna function turns non empty values to empty values?
|
<p>I'm trying to fill empty values with the element with max count after grouping the dataframe. Here is my code.</p>
<pre><code>def fill_with_maxcount(x):
try:
return x.value_counts().index.tolist()[0]
except Exception as e:
return np.NaN
df_all["Surname"] = df_all.groupby(['HomePlanet','CryoSleep','Destination']).Surname.apply(lambda x : x.fillna(fill_with_maxcount(x)))
</code></pre>
<p>If an error occurs in the try/except, the function returns np.NaN. I also tried logging the error inside <code>fill_with_maxcount</code>, but no exception occurred during the try/except.</p>
<p>Before executing these lines there are 294 NaN values. After executing them the count has increased to 857 NaN values, which means non-empty values were turned into NaN values. I can't figure out why. I did some experiments using print statements: the function returns a non-empty value (a string). So the problem should be with the pandas DataFrame's apply or fillna function. But I have used this same method in other places without any problem.</p>
<p>Can someone give me a suggestion. Thank you</p>
|
<p>Finally found it after some testing with the code.</p>
<pre><code> df_all.groupby(['HomePlanet','CryoSleep','Destination']).Surname.apply(lambda x : x.fillna(fill_with_maxcount(x)))
</code></pre>
<p>The above part returns a series with filled values. However, rows where any of the fields used for grouping are empty are not passed to the function at all, so those indexes come back as null. That series is then assigned directly to the Surname column, so those values become null too.</p>
<p>As the solution I changed the code as the following.</p>
<pre><code>def fill_with_maxcount(x):
try:
return x.value_counts().index.tolist()[0]
except Exception as e:
return np.NaN
def replace_only_null(x, z):
    for i in range(len(x)):
        # note: x[i] == np.NaN is always False (NaN never equals itself),
        # so test for missing values with pd.isna instead
        if pd.isna(x[i]):
            yield z[i]
        else:
            yield x[i]
result_1 = df_all.groupby(['HomePlanet','CryoSleep','Destination']).Surname.apply(lambda x : x.fillna(fill_with_maxcount(x)))
replaced = pd.Series(np.array(list(replace_only_null(df_all.Surname,result_1))))
df_all.Surname = replaced
</code></pre>
<p>The replace_only_null function compares the result with the current Surname column and replaces only the null values with the result of applying the fill_with_maxcount function.</p>
|
pandas|dataframe|apply|nan|fillna
| 0 |
1,268 | 15,216,972 |
Python generator yields same value each call
|
<p>I want this generator to yield the cosine of each successive value from a list, but am getting the same value each time.</p>
<pre><code>import math
angles = range(0,361,3)
# calculate x coords:
def calc_x(angle_list):
for a in angle_list:
yield round(radius * cos(radians(a)), 3)
</code></pre>
<p>Yields the same value with each call: Why is this and how do I fix it? </p>
<pre><code>>>>calc_x(angles).next()
5.0
>>>calc_x(angles).next()
5.0
>>>calc_x(angles).next()
5.0
</code></pre>
|
<p>Every time you call <code>calc_x</code> you create a <em>new</em> generator. What you need to do is create one and then keep using it:</p>
<pre><code>calc = calc_x(angles)
next(calc)
next(calc)
# etc.
</code></pre>
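<p>Put together, a self-contained version (<code>radius</code> was undefined in the question, so a value of 5 is assumed):</p>

```python
from math import cos, radians

radius = 5
angles = range(0, 361, 3)

def calc_x(angle_list):
    for a in angle_list:
        yield round(radius * cos(radians(a)), 3)

calc = calc_x(angles)   # create the generator once...
print(next(calc))       # 5.0   (cos 0 degrees)
print(next(calc))       # 4.993 (cos 3 degrees)
print(next(calc))       # 4.973 (cos 6 degrees)
```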
|
python|generator
| 8 |
1,269 | 15,301,190 |
How to use the "__str__" method?
|
<p>I'm diving into some Object Oriented Programming. Unfortunately, I can't even get up to the first step: Converting classes to strings using <code>__str__</code>.</p>
<p>Here's my code:</p>
<pre><code>class Time:
def __init__(self, hours = 0, minutes = 0, seconds = 0):
self.hours = hours
self.minutes = minutes
self.seconds = seconds
def __str__(self):
return "basdfadsg"
time1 = Time
time1.hours = 3
time1.minutes = 23
time1.seconds = 13
print(time1)
</code></pre>
<p>Whenever I try:</p>
<pre><code>print(time1)
</code></pre>
<p>it returns:</p>
<pre><code><class '__main__.Time'>
</code></pre>
<p>What am I doing wrong?</p>
|
<p>you need an <em>instance</em> of <code>Time</code>. e.g. <code>time1 = Time()</code> (Notice the parenthesis). </p>
<p>As it is, you are modifying the <em>class</em>, not an <em>instance</em> of the class -- And <code>__str__</code> only tells python how to create strings from instances of the class, not the classes themselves...</p>
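<p>For example, with the <code>Time</code> class from the question (the <code>__str__</code> here returns a formatted time instead of the placeholder string):</p>

```python
class Time:
    def __init__(self, hours=0, minutes=0, seconds=0):
        self.hours = hours
        self.minutes = minutes
        self.seconds = seconds

    def __str__(self):
        # called by print() and str() on *instances* of Time
        return "{:02d}:{:02d}:{:02d}".format(self.hours, self.minutes, self.seconds)

time1 = Time(3, 23, 13)   # note the parentheses: this creates an instance
print(time1)              # 03:23:13
```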
|
python
| 9 |
1,270 | 29,589,941 |
Detect changes to a file in Git
|
<p>I've looked <a href="https://stackoverflow.com/questions/3882838/whats-an-easy-way-to-detect-modified-files-in-a-git-workspace">here</a> but my question wasn't answered.
I have a script that keeps a number of files up-to-date. The repository consists of a number of bash and python scripts. One of the scripts runs on an hourly cronjob and looks like this:</p>
<pre><code>#! /bin/bash
git fetch origin && git reset --hard origin/dev && git clean -f -d
python -m compileall .
# Set permissions
chmod -R 744 *
</code></pre>
<p>Essentially it updates all the scripts to the current content of GitHub. One of the scripts is code for a daemon. When <strong>that</strong> changes I want to restart the daemon. There's no clue in the output from the <code>git</code> commands regarding which files were changed. So, how can I do that?</p>
<p>To complicate matters, I think the <code>python -m compileall</code> makes <code>git</code> think all files have changed. But I found <a href="https://stackoverflow.com/questions/1580596/how-do-i-make-git-ignore-file-mode-chmod-changes">this question</a> that seems to work.</p>
<p>[EDIT] Additional bonus question added:
Based on the answer given below by @behzad.nouri I have modified the code thus:</p>
<pre><code>#! /bin/bash
branch=$(cat ~/bin/gitbin.branch)
git fetch origin && \
DIFFdmn=$(git --no-pager diff --name-only $branch..origin/$branch -- ./testdaemon/daemon.py) && \
DIFFlib=$(git --no-pager diff --name-only $branch..origin/$branch -- ./testdaemon/libdaemon.py) && \
git reset --hard origin/dev && git clean -f -d
python -m compileall .
# Set permissions
chmod -R 744 *
if [[ -n "$DIFFdmn" ]]; then
logger -t 02-update-scripts "daemon has changed"
./testdaemon/daemon.py restart
fi
if [[ -n "$DIFFlib" ]]; then
logger -t 02-update-scripts "daemonlib has changed"
./testdaemon/daemon.py restart
fi
</code></pre>
<p>where <code>~/bin/gitbin.branch</code> should contain the name of the branch to sync with. This works for the branch called <code>dev</code> but fails for the <code>master</code>-branch (when trying to define the <code>DIFFdmn</code> variable) with the message:</p>
<pre><code>fatal: bad revision 'master..origin/master'
</code></pre>
<p>Any suggestions are very welcome.</p>
|
<pre><code>git diff --name-only
</code></pre>
<p>gives the name of files which have changed. To avoid <code>python -m compileall</code> issue, you need to compare against <em>local branch</em>, as opposed to <em>working directory</em>, as in:</p>
<pre><code>git diff --name-only dev..origin/dev
</code></pre>
<p>if you only care about one file, pass it to the <code>diff</code> command:</p>
<pre><code>git diff --name-only dev..origin/dev -- path/to/daemon.file
</code></pre>
<p>and on the bash side you can check the out-put by <code>-n</code>:</p>
<pre><code>DIFF=$(git --no-pager diff --name-only dev..origin/dev -- path/to/daemon.file)
if [[ -n "$DIFF" ]]
then
echo "daemon has changed"
fi
</code></pre>
|
python|git
| 3 |
1,271 | 29,340,765 |
API Test Case - 'module' object has no attribute
|
<pre><code>$ python manage.py test homepage.shortlistedlist
</code></pre>
<p>When I run the above Django REST framework test case, I get the error below. Kindly help me solve this problem:</p>
<pre><code>(py3.4)testuser@testuser-To:~/projects/testfile/testfile$ python manage.py test homepage.compareproperties --settings=testfile.settings.test
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/testuser/envs/testfile/lib/python3.4/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/home/testuser/envs/testfile/lib/python3.4/site-packages/django/core/management/__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/testuser/envs/testfile/lib/python3.4/site-packages/django/core/management/commands/test.py", line 50, in run_from_argv
super(Command, self).run_from_argv(argv)
File "/home/testuser/envs/testfile/lib/python3.4/site-packages/django/core/management/base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File "/home/testuser/envs/testfile/lib/python3.4/site-packages/django/core/management/commands/test.py", line 71, in execute
super(Command, self).execute(*args, **options)
File "/home/testuser/envs/testfile/lib/python3.4/site-packages/django/core/management/base.py", line 338, in execute
output = self.handle(*args, **options)
File "/home/testuser/envs/testfile/lib/python3.4/site-packages/django/core/management/commands/test.py", line 88, in handle
failures = test_runner.run_tests(test_labels)
File "/home/testuser/envs/testfile/lib/python3.4/site-packages/django/test/runner.py", line 146, in run_tests
suite = self.build_suite(test_labels, extra_tests)
File "/home/testuser/envs/testfile/lib/python3.4/site-packages/django/test/runner.py", line 95, in build_suite
tests = self.test_loader.discover(start_dir=label, **kwargs)
File "/usr/lib/python3.4/unittest/loader.py", line 255, in discover
self._get_directory_containing_module(top_part)
File "/usr/lib/python3.4/unittest/loader.py", line 269, in _get_directory_containing_module
full_path = os.path.abspath(module.__file__)
AttributeError: 'module' object has no attribute '__file__'
</code></pre>
<p>Thanks In Advance,</p>
|
<p>I was able to solve this by adding an empty <code>__init__.py</code> to the root of my project.</p>
<p>I think the clue is the <code>get_directory_containing_module()</code> call - without <code>__init__.py</code>, it doesn't see the directory as a module.</p>
|
django|python-3.x|django-rest-framework
| 2 |
1,272 | 46,307,880 |
Find the top n values per row in one data frame, and use these indices to obtain values in another data frame to perform a pair-wise operation
|
<p>In this case, I have two data frames A and B.</p>
<pre><code> c1 c2 c3 c1 c2 c3
r0 7 6 4 r0 0 0 1
r1 6 2 5 r1 1 1 0
r2 3 5 9 r2 1 0 1
</code></pre>
<p>A is the data frame on the left, and B on the right.</p>
<p>Basically my goal is to find the top 2 values in each row of A, and the corresponding row values in B, and then take the sum of the products of these pairs.</p>
<p>So for example in the first row, the top values in A are 7, and 6, which correspond to 0, 0 in the first row of B. I then want to return 7 * 0 + 6 * 0 = 0. I'd like to do this over every row and return something like:</p>
<pre><code>d1 0
d2 6
d3 9
</code></pre>
<p>I'm currently using an implementation with using numpy argsort to find the index of the top n values in each row of A, and then using a map and a self-defined function to go over rows and find the product-sum.</p>
<p>This method has ended up being really slow for me, so I was wondering if there are any faster alternatives. Thank you.</p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rank.html" rel="nofollow noreferrer"><code>rank</code></a> to get top 2 values and use that as mask for <code>B</code>.</p>
<pre><code>In [1311]: (A*B.where(A.rank(axis=1) >= 2)).sum(axis=1)
Out[1311]:
r0 0.0
r1 6.0
r2 9.0
dtype: float64
</code></pre>
<hr>
<p>Details</p>
<pre><code>In [1314]: A.rank(axis=1)
Out[1314]:
c1 c2 c3
r0 3.0 2.0 1.0
r1 3.0 1.0 2.0
r2 1.0 2.0 3.0
In [1315]: A.rank(axis=1) >=2
Out[1315]:
c1 c2 c3
r0 True True False
r1 True False True
r2 False True True
In [1317]: B.where(A.rank(axis=1) >= 2)
Out[1317]:
c1 c2 c3
r0 0.0 0.0 NaN
r1 1.0 NaN 0.0
r2 NaN 0.0 1.0
In [1318]: (A*B.where(A.rank(axis=1) >= 2))
Out[1318]:
c1 c2 c3
r0 0.0 0.0 NaN
r1 6.0 NaN 0.0
r2 NaN 0.0 9.0
</code></pre>
|
python|pandas|numpy|dataframe
| 2 |
1,273 | 49,441,372 |
How to iterate rows of dataframe in Content-Type: application/x-www-form-urlencoded format into API POST request?
|
<p>I have a dataframe that looks like this:</p>
<pre><code>email p[1]:
[email protected] 1
[email protected] 2
</code></pre>
<p>the <code>p[1]</code> field is the list ID. </p>
<p>How do I pass rows of this dataframe one at a time into the a API post request in the <code>Content-Type: application/x-www-form-urlencoded</code> format? </p>
<p>Without a dataframe when I try this code it works: </p>
<pre><code>headers = {
'content-type': 'application/x-www-form-urlencoded',
}
params = {
'email': '[email protected]',
' p[1]': '1',
}
url = 'https://URL/admin/api.php?api_action=contact_add&api_output=json&api_key=123ABC'
resp = requests.post(url, data=params, headers=headers)
</code></pre>
<p>How do I pass each row of the dataframe and how do I convert the dataframe format into the <code>params</code> equalivent format? </p>
<p>This api does not take bulk uploads. More information can be found here about the API. <a href="https://www.activecampaign.com/api/example.php?call=contact_add" rel="nofollow noreferrer">https://www.activecampaign.com/api/example.php?call=contact_add</a></p>
<p>Thank you in advance. </p>
|
<p>If you want to do this one at a time, you want <code>DataFrame.iterrows</code></p>
<pre><code>import pandas as pd
df = pd.DataFrame({'email': ['[email protected]', '[email protected]'], 'p[1]': [1,2]})
for index, row in df.iterrows():
params = {'email': row.email, 'p[1]': row['p[1]']}
print(params)
{'email': '[email protected]', 'p[1]': 1}
{'email': '[email protected]', 'p[1]': 2}
</code></pre>
<p>You can then pass the <code>params</code> to whatever you want one at a time inside of the loop.</p>
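<p>For instance, collecting each row's params and posting them one at a time (the URL and headers are the question's placeholders, so the request itself is left commented out):</p>

```python
import pandas as pd

df = pd.DataFrame({'email': ['a@gmail.com', 'b@gmail.com'], 'p[1]': [1, 2]})

# int() converts the numpy integer back to a plain Python int
all_params = [{'email': row['email'], 'p[1]': int(row['p[1]'])}
              for _, row in df.iterrows()]
print(all_params)
# [{'email': 'a@gmail.com', 'p[1]': 1}, {'email': 'b@gmail.com', 'p[1]': 2}]

# the actual request, one row at a time (requests is third-party):
# import requests
# for params in all_params:
#     requests.post(url, data=params, headers=headers)
```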
|
python|pandas|python-requests|api-design
| 1 |
1,274 | 49,601,259 |
Setup.py---nosetests: command not found
|
<p>I successfully installed nose on my laptop and tried to run nosetests. However, the terminal tells me:
"bash: nosetests: command not found."</p>
<p>What's strange is that, when I open up the Python interpreter in Terminal, and do something like:</p>
<pre><code>import nose
nose.main()
</code></pre>
<p>I get the expected result.</p>
<p>I tried to use </p>
<pre><code>find / -type f -name 'nosetests*' -perm +111 -print -quit
</code></pre>
<p>from the answer <a href="https://stackoverflow.com/questions/32546228/installed-nose-but-cannot-use-on-command-line/32546401">Installed Nose but cannot use on command line</a>. But the result pops out as:</p>
<pre><code>find: /.DocumentRevisions-V100: Permission denied
find: /.fseventsd: Permission denied
find: /.Spotlight-V100: Permission denied
find: /.Trashes: Permission denied
</code></pre>
<p>FYI I'm following the steps of Learn Python the Hard Way.</p>
<p>THX</p>
|
<p>On Ubuntu (and Unix-like systems generally), <code>pip install nose</code> installs the <code>nosetests</code> script into <code>/usr/local/bin</code>.</p>
<p>To check whether that folder is on your path, run <code>echo $PATH</code>.</p>
<p>It should print a ":"-separated list of folders. The folder containing the <code>nosetests</code> executable must appear in this list for you to run <code>nosetests</code> from the command line.</p>
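<p>For example, to see the folders on your PATH one per line — and to run nose through the interpreter without relying on the script being on PATH at all (assuming the nose package itself is installed):</p>

```shell
# list PATH entries one per line
echo "$PATH" | tr ':' '\n'

# run the test collector via the interpreter instead of the script
# python -m nose
```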
|
python|nose|setup.py
| 0 |
1,275 | 21,078,836 |
Is there an easy way to store/view changes made to a Django Database?
|
<p>I've been looking around and solutions (that I can find) seem to deal mostly with general events (such as logging that a user logged into the system), or alike. Maybe my search skills have failed me, but I tried to research this and failed before asking.</p>
<p><strong>Looking for:</strong> Due to the requirements of the software I'm creating, I need a log of changes made to the database and by which user.</p>
<p>So for example, if user <code>JjR2211</code> is the logged in user that adds a new entry to table <code>Item</code>, then I need to view something at a later point which would look like:</p>
<pre><code>Entry Table
----
User Change Made
JjR2211 Added entry to Item
...
</code></pre>
<p>Is there any built in functionality or available 3rd party libraries to achieve this, or is the only option to do it manually?</p>
|
<p>Check out <a href="http://django-reversion.readthedocs.org/" rel="nofollow">Django-reversion</a>. I have the feeling it's exactly what you're looking for.</p>
|
python|database|django
| 0 |
1,276 | 20,981,936 |
python Dictread of CSV file with NUL bytes in data
|
<p>I have a CSV file which has NUL byte embedded within some data.</p>
<p>That is given columns A B C D one of the fields in column C would have data like</p>
<p>, quote character"Some Data" NUL "More Data" NUL "End of data" quote character,</p>
<p>When I open it with LIBRE Office Calc, the NUL characters do not appear in the display and if I save it by hand, they go away. I can see the NUL characters in vi and could remove or replace them with tr or by hand in vi, but I want to be able to handle it with the python program automatically.</p>
<p>The DictReader process is </p>
<p>for row in infile: which throws the exception and the except is therefore outside the loop and would not go back to get the next line (or allow me to change the NUL character to a space or embedded comma and process that line).</p>
<p>Luckily, the data appears to have other invalidations so I would probably skip it in any event. However, the question would be how do I tell Python to go to the next line.</p>
|
<p>So this is a bit ugly, but it seems to work. You can read a line like normal, clean the offending bytes, then use a StringIO object to pass it to DictReader. Here's the code, assuming your csv has a header record (it should be more simple if you don't):</p>
<pre><code>#!/usr/bin/env python
import StringIO
import csv
fin = open('somefilewithnulls', 'rb')
fout = StringIO.StringIO()
reader = csv.DictReader(fout)
while True:
# for the first record prep StringIO with the first
# two lines so DictReader can create header
line = fin.readline() if fin.tell() else fin.readline() + fin.readline()
if not len(line):
break
# clean the line before passing it to DictReader
line = line.replace('\x00', '')
fout.write(line)
fout.seek(-len(line), 1)
rec = reader.next()
print rec
</code></pre>
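<p>If you are on Python 3 (or the file fits in memory), a simpler variant — sketched here under the assumption that the file is UTF-8 — is to strip the NUL bytes from the whole file up front and feed the cleaned text to <code>DictReader</code> through <code>io.StringIO</code>:</p>

```python
import csv
import io

# Stand-in for the file contents; note the embedded NUL bytes
raw = b'colA,colB\x00\nfoo,"Some\x00 Data"\nbar,baz\x00\n'
# In practice: raw = open('somefilewithnulls', 'rb').read()

cleaned = raw.replace(b'\x00', b'').decode('utf-8')
records = list(csv.DictReader(io.StringIO(cleaned)))
for rec in records:
    print(rec)
```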
|
python|design-patterns|csv
| 1 |
1,277 | 53,585,041 |
How to split a text file in python using the amount of words per line without using modules
|
<p>so I'm writing this script where a text file is to be split into lists based on the amount of words per line, I need to generate a dictionary but no need to worry about it; I'm having trouble trying to split this text:</p>
<p>So let's say that I have:</p>
<pre><code>word1:
word word
more words
word2:
another word
word3:
word4:
</code></pre>
<p>and I want:</p>
<pre><code>[[[word1:], [word word], [more words]],[[word2:], [another word]],
[[word3:]], [[word4:]]]
</code></pre>
<p>This is the code:</p>
<pre><code>from typing import List, Dict, TextIO, Tuple
def read_file(TextIO) -> Dict[str, List[tuple]]:
text = open('text_file.txt', 'r')
data = []
indexes = []
for line in text.readlines():
l = line.strip().split(',')
data.append(l)
for lists in data:
if lists == ['']:
data.remove(lists)
for elements in data:
if len(elements) == 1:
if ':' in elements[0][-1]:
indexes.append(data.index(elements))
</code></pre>
<p>How can I use the indexes to cut the data in the parts that I need? or how else could I cut the text file in the parts that I need without using modules?</p>
|
<p>You are doing a series of operations that do not make sense – possibly they were leftovers from earlier attempts. You don't have any data with comma's in them, so <code>.split(',')</code> is obsolete. I also do not see what appending to <code>indexes</code> ought to be doing.</p>
<p>Instead, take the following approach: append words that end with <code>:</code> as a new list; append all other phrases <em>to</em> that last list. The only deviation from this is that blank line; it seems this should be discarded, else it will add a <code>''</code> to one of the lists.</p>
<p>Thus, all that is necessary is this short code:</p>
<pre><code>data = []
with open('text.txt', 'r') as text:
for line in text:
line = line.strip()
if line:
if line.endswith(':'):
data.append([line])
else:
data[-1].append(line)
print (data)
</code></pre>
<p>Output as per requirement:</p>
<pre><code>[['word1:', 'word word', 'more words'], ['word2:', 'another word'], ['word3:'], ['word4:']]
</code></pre>
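<p>Since the original goal mentioned generating a dictionary, the grouped lists can be turned into one afterwards — a small sketch assuming the <code>data</code> structure produced above:</p>

```python
data = [['word1:', 'word word', 'more words'], ['word2:', 'another word'],
        ['word3:'], ['word4:']]

# Map each header (with the trailing colon stripped) to its list of phrases
result = {group[0].rstrip(':'): group[1:] for group in data}
print(result)
```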
|
python|python-3.x
| 1 |
1,278 | 46,166,205 |
Display coordinates in pyqtgraph?
|
<p>I'm attempting to develop a GUI that allows for tracking mouse coordinates in the PlotWidget, and displaying in a label elsewhere in the main window. I have attempted several times to emulate the crosshair example in the pyqtgraph documentation, but have not been able to get it to agree to do this. A part of the difficulty is that I am unable to understand how to access the mouse tracking I can enable in the QtDesigner. </p>
<p>I attempted to use the:</p>
<pre><code>proxy = pg.SignalProxy(Plotted.scene().sigMouseMoved, rateLimit=60, slot=mouseMoved)
Plotted.scene().sigMouseMoved.connect(mouseMoved)
</code></pre>
<p>However, I do not quite understand what allows for this to "update" in real time, nor at what level I should have this statement. Should the </p>
<pre><code>def mouseMoved(evt):
pos = evt[0]
if Plotted.sceneBoundingRect().contains(pos):
mousePoint = vb.mapSceneToView(pos)
index = int(mousePoint.x())
if index > 0 and index < len(x):
mousecoordinatesdisplay.setText("<span style='font-size: 12pt'>x=%0.1f, <span style='color: red'>y1=%0.1f</span>" % (mousePoint.x(), y[index], data2[index]))
vLine.setPos(mousePoint.x())
hLine.setPos(mousePoint.y())
</code></pre>
<p>part of the code be in the Ui_MainWindow class, or outside of it?</p>
|
<p>I was able to get the updates to work by doing the following: </p>
<p>IN THE setupUi function: </p>
<pre><code>Plotted = self.plot
vLine = pg.InfiniteLine(angle=90, movable=False)
hLine = pg.InfiniteLine(angle=0, movable=False)
Plotted.addItem(vLine, ignoreBounds=True)
Plotted.addItem(hLine, ignoreBounds=True)
Plotted.setMouseTracking(True)
Plotted.scene().sigMouseMoved.connect(self.mouseMoved)
def mouseMoved(self,evt):
pos = evt
if self.plot.sceneBoundingRect().contains(pos):
mousePoint = self.plot.plotItem.vb.mapSceneToView(pos)
self.mousecoordinatesdisplay.setText("<span style='font-size: 15pt'>X=%0.1f, <span style='color: black'>Y=%0.1f</span>" % (mousePoint.x(),mousePoint.y()))
self.plot.plotItem.vLine.setPos(mousePoint.x())
        self.plot.plotItem.hLine.setPos(mousePoint.y())
</code></pre>
<p>Where <code>self.mousecoordinatesdisplay</code> is a Label. It took me forever to figure out how to do this for use in a GUI built with Designer. It seems that between PyQt4 and PyQt5 there was a change to QPointF: the new QPointF doesn't allow indexing. By just passing the <code>evt</code> variable, it is possible to map it without calling <code>evt[0]</code>. </p>
|
python-3.x|pyqt5|pyqtgraph
| 3 |
1,279 | 45,840,789 |
where does the python start the code execution from?
|
<p>I am trying to understand, when we execute a .py file, from which part
of the code Python starts execution.
E.g. when we execute a Java program, "public static void main(String[] args)" is the location where Java starts executing the code. So, when we talk about Python, how does it work? I know there is a Python main function </p>
<pre><code>(__name__ = "__main__")
</code></pre>
<p>, I have gone through some article in and out of the Stackoverflow, they all say that it loads the python module, and then the python UDFs etc. So, as per my understanding, it is the location which is executed first thing. Please correct me, or guide me to some web links for my query. </p>
|
<p>If the Python code is in a method, no code will be executed unless you explicitly call the method (e.g. after checking <code>__name__ == '__main__'</code>). It is conventional to call this method <code>main</code>, but you can use any method as the starting point of execution.</p>
<p>If the Python code is <em>not</em> in a method, the code will be executed anytime you run or import the file.</p>
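<p>For illustration, a minimal script showing both cases — the top-level statement runs whenever the file is executed or imported, while <code>main()</code> only runs when the file is executed directly:</p>

```python
executed = []
executed.append("top-level")  # runs on import *and* on direct execution

def main():
    executed.append("main")

if __name__ == "__main__":  # true only when run as a script, not on import
    main()

print(executed)
```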
|
python
| 1 |
1,280 | 33,244,497 |
How can I create a SQLAlchemy query containing only instances related to a specific instance of a model?
|
<p>I am given an instance of a SQLAlchemy model <code>instance</code> and the name of a relationship <code>relation</code> on that instance. I can access all related instances by doing <code>getattr(instance, relation)</code>. How can I construct a <em>query</em> that contains all instances of that relationship, or in other words, a query that evaluates to <code>getattr(instance, relation)</code>? I am given no other information about the database tables, etc.</p>
<p>This is similar to the question <a href="https://stackoverflow.com/q/31839150/108197">Construct a query to filter many-to-many for all B instances related to a given A instance</a>, but I don't have the table definitions.</p>
|
<p>My solution was as follows. Assume <code>instance</code> is an instance of a model, <code>relation</code> is a string representing the relationship attribute of that model, <code>'id'</code> is the primary key for the related model, and <code>related_model</code> is the related model. This solution filters the related model by only those instances whose primary key is in the list of known primary keys from the given instance's to-many relationship attribute:</p>
<pre><code>relationship = getattr(instance, relation)
primary_keys = {getattr(inst, 'id') for inst in relationship}
query = session.query(related_model)
query = query.filter(getattr(related_model, 'id').in_(primary_keys))
</code></pre>
|
python|sqlalchemy|introspection
| 0 |
1,281 | 33,234,089 |
What is the idiomatic way to compute values based on neighboring values, in NumPy?
|
<p>I am asked to experiment with numpy calculating values in a two-dimensinal array/matrix (rows, columns) where these values depend on neighboring values. This is not just multiplying the matrix with a scalar or anything like that, even though it may be reduced to a series of such steps, I admit.</p>
<p>Even though this is homework, my question is broader in scope than just asking for a solution handed to me.</p>
<p>I have read up on broadcasting, i.e. vectorization, in numpy, and I could imagine one way would be implementing this as a new <em>ufunc</em> and running it on the matrix. However, I am a bit wary of limitations I may face - can a numpy <em>ufunc</em> access a neighboring element, versus the one it computes during current iteration? Conceptually:</p>
<pre><code>for x in columns:
for y in rows:
a[x, y] = a[x, y - 1] + a[x, y + 1] + a[x - 1, y] + a[x + 1, y] + A + B + b[x, y] # '+' is just an example of a binary op here.
</code></pre>
<p>meaning that the value in each cell depends on neighboring cells and also some constants and even values in another matrix.</p>
<p>Reading numpy documentation hasn't helped me all that much. What would be the preferred/idiomatic way to do this in numpy?</p>
|
<p>What you're describing is very common in image processing -- there it's called <em>applying a kernel to do two-dimensional filtering</em> (just to give you something to google). From the <a href="http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html#filter-functions" rel="nofollow noreferrer">Numpy ndimage documentation</a>:</p>
<blockquote>
<p>The functions described in this section all perform some type of
spatial filtering of the input array: the elements in the output are
some function of the values in the neighborhood of the corresponding
input element. We refer to this neighborhood of elements as the filter
kernel, which is often rectangular in shape but may also have an
arbitrary footprint. Many of the functions described below allow you
to define the footprint of the kernel, by passing a mask through the
footprint parameter. For example a cross shaped kernel can be defined
as follows:</p>
<pre><code>footprint = array([[0,1,0],[1,1,1],[0,1,0]])
footprint array([[0, 1, 0],
[1, 1, 1],
[0, 1, 0]])
</code></pre>
</blockquote>
<p>What you'd then do is use the <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.filters.convolve.html" rel="nofollow noreferrer"><code>convolve</code> function</a>:</p>
<pre><code>from scipy import ndimage
output = ndimage.convolve(matrix, footprint)
</code></pre>
<p>If you want "wrapping" behaviour like <a href="https://stackoverflow.com/a/33234569/4433386">xnx's answer</a> has, use the <code>mode="wrap"</code> argument to <code>convolve</code>:</p>
<pre><code>output = ndimage.convolve(matrix, footprint, mode="wrap")
</code></pre>
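<p>If you'd rather avoid the scipy dependency, the wrap-mode cross-shaped sum of neighbours can also be written with plain numpy using <code>np.roll</code> — a sketch that, like the loop in the question, sums the four direct neighbours of each cell (the cell's own value is excluded here):</p>

```python
import numpy as np

a = np.arange(9).reshape(3, 3)

# Sum of the four direct neighbours of every cell, with wrap-around edges
neighbours = (np.roll(a, 1, axis=0) + np.roll(a, -1, axis=0)
              + np.roll(a, 1, axis=1) + np.roll(a, -1, axis=1))
print(neighbours)
```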
|
python|arrays|numpy|vectorization
| 2 |
1,282 | 24,468,667 |
What's the difference between tp_clear, tp_dealloc and tp_free?
|
<p>I have a custom python module for fuzzy string search, implementing Levenshtein distance calculation, it contains a python type, called levtree which has two members a pointer to a wlevtree C type (called tree) which does all the calculations and a PyObject* pointing to a python-list of python-strings, called wordlist. Here is what I need:</p>
<p>-when I create a new instance of levtree I use a constructor which takes a tuple of strings as its only input (and it is the dictionary in which the instance will perform all the searches), this constructor will have to create a new instance of wordlist into the new instance of levtree and copy the content of the input tuple into the new instance of wordlist. Here is my first code snippet and my first question:</p>
<pre><code>static int
wlevtree_python_init(wlevtree_wlevtree_obj *self, PyObject *args, PyObject *kwds)
{
int numLines; /* how many lines we passed for parsing */
wchar_t** carg; /* argument to pass to the C function*/
unsigned i;
PyObject * strObj; /* one string in the list */
PyObject* intuple;
/* the O! parses for a Python object (listObj) checked
to be of type PyList_Type */
if (!(PyArg_ParseTuple(args, "O!", &PyTuple_Type, &intuple)))
{
return -1;
}
/* get the number of lines passed to us */
numLines = PyTuple_Size(intuple);
carg = malloc(sizeof(char*)*numLines);
/* should raise an error here. */
if (numLines < 0)
{
return -1; /* Not a list */
}
self->wordlist = PyList_New(numLines);
Py_IncRef(self->wordlist);
for(i=0; i<numLines; i++)
{
strObj = PyTuple_GetItem(intuple, i);
//PyList_Append(self->wordlist, string);
PyList_SetItem(self->wordlist, i, strObj);
Py_IncRef(strObj);
}
/* iterate over items of the list, grabbing strings, and parsing
for numbers */
for (i=0; i<numLines; i++)
{
/* grab the string object from the next element of the list */
strObj = PyList_GetItem(self->wordlist, i); /* Can't fail */
/* make it a string */
if(PyUnicode_Check(strObj))
{
carg[i] = PyUnicode_AsUnicode( strObj );
if(PyErr_Occurred())
{
return -1;
}
}
else
{
strObj = PyUnicode_FromEncodedObject(strObj,NULL,NULL);
if(PyErr_Occurred())
{
return -1;
}
carg[i] = PyUnicode_AsUnicode( strObj );
}
}
self->tree = (wlevtree*) malloc(sizeof(wlevtree));
wlevtree_init(self->tree,carg,numLines);
free(carg);
return 0;
}
</code></pre>
<p>Do I have to call Py_IncRef(self->wordlist); after self->wordlist = PyList_New(numLines); or it is redundant because references are already incremented in PyList_new?
Then I have the same doubt on PyList_SetItem(self->wordlist, i, strObj); and Py_IncRef(strObj);..</p>
<p>-when I destroy an instance of levtree i want to call the C function that frees the space occupied by tree, destroy wordlist and decrement all reference count on all the strings contained into wordlist.. Here is my tp_dealloc:</p>
<pre><code>static void
wlevtree_dealloc(wlevtree_wlevtree_obj* self)
{
//wlevtree_clear(self);
if(self->tree!=NULL)
{
wlevtree_free(self->tree);
}
free(self->tree);
PyObject *tmp, *strObj;
unsigned i;
int size = PyList_Size(self->wordlist);
for(i=0; i<size; i++)
{
strObj = PyList_GetItem(self->wordlist, i);
Py_CLEAR(strObj);
}
Py_CLEAR(self->wordlist);
Py_TYPE(self)->tp_free((PyObject *)self);
}
</code></pre>
<p>Is it correct to make all the deallocation work here?
At the moment I don't have a tp_clear and a tp_free, do I need them?
My code at the moment works on allocation but not on deallocation because even though I can call <strong>init</strong> on the same python variable more than once, at the end of every python script (which works correctly) I get a "Segmentation Fault" which makes me think that something in the deallocation process goes wrong..</p>
|
<p><code>tp_clear</code> is only needed if you implement <a href="https://docs.python.org/2/c-api/gcsupport.html" rel="noreferrer">cyclic garbage collection</a>. It appears that this is not needed because you only maintain references to Python unicode objects.</p>
<p><code>tp_dealloc</code> is called when the reference count of the object goes down to zero. This is where you destroy the object and its members. It should then free the memory occupied by the object by calling <code>tp_free</code>.</p>
<p><code>tp_free</code> is where the memory for the object is freed. Implement this only if you implement <code>tp_alloc</code> yourself. </p>
<p>The reason for the separation between <code>tp_dealloc</code> and <code>tp_free</code> is that if your type is subclassed, then only the subclass knows how the memory was allocated and how to properly free the memory.</p>
<p>If your type is a subclass of an exisiting type, your <code>tp_dealloc</code> may need to call the <code>tp_dealloc</code> of the derived class, but that depends on the details of the case.</p>
<p>To summarize, it seems that you are handling object destruction correctly (except that you leak <code>carg</code> when exiting the function with an error).</p>
|
python|c|python-extensions
| 6 |
1,283 | 24,563,303 |
Binary file (Labview .DAT file) conversion using Python
|
<p>I work in a lab where we acquire electrophysiological recordings (across 4 recording channels) using custom Labview VIs, which save the acquired data as a .DAT (binary) file. The analysis of these files can then be continued in more Labview VIs, however I would like to analyse all my recordings in Python. First, I need walk through all of my files and convert them out of binary! </p>
<p>I have tried numpy.fromfile (filename), but the numbers I get out make no sense to me:</p>
<blockquote>
<p>array([ 3.44316221e-282, 1.58456331e+029, 1.73060724e-077, ...,
4.15038967e+262, -1.56447362e-090, 1.80454329e+070])</p>
</blockquote>
<p>To try and get further I looked up the .DAT header format to understand how to grab the bytes and translate them - how many bytes the data is saved in etc:
<a href="http://zone.ni.com/reference/en-XX/help/370859J-01/header/header/headerallghd_allgemein/" rel="nofollow">http://zone.ni.com/reference/en-XX/help/370859J-01/header/header/headerallghd_allgemein/</a></p>
<p>But I can't work out what to do. When I type "head filename" into terminal, below is what I see.</p>
<blockquote>
<p>e.g. >> head 2014_04_10c1slice2_rest.DAT</p>
<p>DTL?
0???? @@????
empty array
PF?c ƀ????l?N?"1'.+?K13:13:27;0.00010000-08??t???DY
??N?t?x???D?
?uv?tgD?~I??
??N?t?x>?n??
????t?x>?n??
????t???D?
????t?x???D?
????t?x?~I??
????tgD>?n??
??N?tgD???D?
??N?t?x???D?
????t????DY
??N?t?x>?n??
??N?t????DY
?Kn$?t?????DY
??N?t??>?n??
??N?tgD>?n??
????t?x?~I??
????tgD>?n??
??N?tgD>?n??
??N?tgD???DY
????t?x???D?
????t???~I??
??N?tgD???DY
??N?tgD???D?
??N?t?~I??
??N?t?x???DY
??N?tF>?n??
??N?t?x??%Y</p>
</blockquote>
<p>Any help or suggestions on what to do next would be really appreciated.</p>
<p>Thanks. </p>
<p>P.s. There is an old (broken) matlab file that seems to have been intended to convert these files. I think this could probably be helpful, but having spent a couple of days trying to understand it I am still stuck. <a href="http://www.mathworks.co.uk/matlabcentral/fileexchange/27195-load-labview-binary-data" rel="nofollow">http://www.mathworks.co.uk/matlabcentral/fileexchange/27195-load-labview-binary-data</a> </p>
|
<p>Based on <a href="http://www.shocksolution.com/2008/06/reading-labview-binary-files-with-python/" rel="nofollow">this link</a> it looks like the following should do the trick:</p>
<pre><code>import struct

binaryFile = open('Measurement_4.bin', mode='rb')
(value,) = struct.unpack('>d', binaryFile.read(8))
</code></pre>
<p>Note that <code>mode</code> is set to <code>'rb'</code> for binary.</p>
<p>With <code>numpy</code> you can directly do this as</p>
<pre><code>data = numpy.fromfile('Measurement_4.bin', dtype='>d')
</code></pre>
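<p>To see the <code>'>d'</code> format in action without needing the actual file, here is a round-trip through <code>struct</code> on an in-memory buffer (the <code>></code> means big-endian, which is what LabVIEW writes by default):</p>

```python
import struct

buf = struct.pack('>d', 1.25)          # 8 bytes, big-endian double
(value,) = struct.unpack('>d', buf)    # read it back
print(value)
```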
<p>Please note that if you are just using Python as an intermediate and want to go back to LabVIEW with the data, you should instead use the function <a href="http://zone.ni.com/reference/en-XX/help/371361J-01/lvhowto/reading_from_binary_files/" rel="nofollow">Read from Binary file.vi</a> to read the binary file using native LabVIEW.</p>
|
python|numpy|binary|labview|file-conversion
| 1 |
1,284 | 41,009,119 |
TensorFlow on Windows: "pip install tensorflow" fails
|
<p>I am using Visual Studio 2015, Python 3.5.2, Windows 10, and have recently upgraded pip to 9.0.1. I am trying to install Tensorflow 0.12 on my system. I tried to use VS's built in "Install Python Package" function, as well as command prompting</p>
<pre><code>pip install tensorflow
</code></pre>
<p>Both ways I get the same error:</p>
<pre><code>Installing 'tensorflow'
Collecting tensorflow
Could not find a version that satisfies the requirement tensorflow (from versions: )
No matching distribution found for tensorflow
'tensorflow' failed to install. Exit code: 1`
</code></pre>
<p>I don't know what to do from here. Supposedly the pip command was all I had to do so nowhere else has any troubleshooting.</p>
<pre><code>Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2013, 22:01:18) [MSC v.1900 32 bit (Intel)] on win32
</code></pre>
|
<p>According to your Python version string, you are running the 32-bit version of the Python interpreter. We have only made PIP packages for the <strong>64-bit</strong> version of the Python 3.5, which can be downloaded and installed separately from Python.org or Anaconda.</p>
|
visual-studio-2015|pip|tensorflow|windows-10|python-3.5
| 2 |
1,285 | 38,193,959 |
Python: Can't turn string into JSON
|
<p>For the past few hours, I've been fighting to get a string into a JSON dict. I've tried everything from json.loads(... which throws an error:</p>
<pre><code>requestInformation = json.loads(entry["request"]["postData"]["text"])
//throws this error
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes:
</code></pre>
<p>to stripping out the slashes using a medley of re.sub('\\','',mystring) ,mystring.sub(... to no effect. My problem string looks like so</p>
<pre><code>'{items:[{n:\\'PackageChannel.GetUnitsInConfigurationForUnitType\\',ps:[{n:\\'unitType\\',v:"ActionTemplate"}]}]}'
</code></pre>
<p>The origin of this string is that it's a HAR dump from Google Chrome. I think those backslashes are from it being escaped somewhere along the way because the bulk of the HAR file doesn't contain them, but they do appear commonly in any field labeled "text".</p>
<pre><code>"postData": {
"mimeType": "application/json",
"text": "{items:[{n:'PackageChannel.GetUnitsInConfigurationForUnitType',ps:[{n:'unitType',v:\"Analysis\"}]}]}"
}
</code></pre>
<p><strong>EDIT</strong> I eventually gave up on turning the text above into JSON and instead opted for regex. Sometimes the slashes showed up, sometimes they didn't based on what I was viewing the text in and that made it difficult to work with.</p>
|
<p>the <code>json</code> module wants a string where the keys are also wrapped in double quotes</p>
<p>so the string below would work:</p>
<pre><code>mystring = '{"items":[{"n":"PackageChannel.GetUnitsInConfigurationForUnitType", "ps":[{"n":"unitType","v":"ActionTemplate"}]}]}'
myjson = json.loads(mystring)
</code></pre>
<p>This function should remove the double backslashes and put double quotes around your keys. </p>
<pre><code>import json, re
def make_jsonable(mystring):
# we'll use this regex to find any key that doesn't contain any of: {}[]'",
key_regex = "([\,\[\{](\s+)?[^\"\{\}\,\[\]]+(\s+)?:)"
mystring = re.sub("[\\\]", "", mystring) # remove any backslashes
mystring = re.sub("\'", "\"", mystring) # replace single quotes with doubles
match = re.search(key_regex, mystring)
while match:
start_index = match.start(0)
end_index = match.end(0)
mystring = '%s"%s"%s'%(mystring[:start_index+1], mystring[start_index+1:end_index-1].strip(), mystring[end_index-1:])
match = re.search(key_regex, mystring)
return mystring
</code></pre>
<p>I couldn't directly test it on the first string you wrote, the double/single quotes don't match up, but on the one in the last code sample it works.</p>
|
python|json|decode
| 0 |
1,286 | 31,149,781 |
Grouping data by value in first column
|
<p>I'm trying to group data from a 2 column object based on the value of a first column. I need this data in a list so I can sort them afterwards. I am fetching interface data with snmp on large number of machines. In the example I have 2 interfaces. I need data grouped by interface preferably in a list.</p>
<p>Data I get is in the object <code>item</code>:</p>
<pre><code>for i in item:
print i.oid, i.val
ifDescr lo
ifDescr eth0
ifAdminStatus 1
ifAdminStatus 1
ifOperStatus 1
ifOperStatus 0
</code></pre>
<p><strike>i would like to get this data sorted in a list by value in the first column, like this:</strike></p>
<p>I would like to get this data in a list, so it looks like this:</p>
<p>list=[[lo,1,1], [eth0,1,0]]</p>
<p><strike>Solution I have is oh so dirty and long and I'm embarrassed to post it here, so any help is appreciated.</strike></p>
<p>Here is my solution so you get better picture what I'm talking about. What I did is put each interface data in separate list based on item.oid, and then iterated trough cpu list and compared it to memory and name based on item.iid. In the end I have all data in cpu list where each interface is an element of the list. This solution works, but is too slow for my needs. </p>
<pre><code>cpu=[]
memory=[]
name=[]
for item in process:
if item.oid=='ifDescr':
cpu.append([item.iid, int(item.val)])
if item.oid=='ifAdminStatus':
memory.append([item.iid, int(item.val)])
if item.oid=='ifOperStatus':
name.append([item.iid, item.val])
for c in cpu:
for m in memory:
if m[0]==c[0]:
c.append(m[1])
for n in name:
if n[0]==c[0]:
c.append(n[1])
cpu=sorted(cpu,key=itemgetter(1),reverse=True) #sorting is easy
</code></pre>
<p>Is there a pythonic, short and faster way of doing this? Limiting factor is that I get data in a 2 column object with key=data values.</p>
|
<p>Not sure I follow your sorting as I don't see any order but to group you can use a dict grouping by <code>oid</code> using a <a href="https://docs.python.org/2/library/collections.html#collections.defaultdict" rel="nofollow">defaultdict</a> for the repeating keys:</p>
<pre><code>data = """ifDescr lo
ifDescr eth0
ifAdminStatus 1
ifAdminStatus 1
ifOperStatus 1
ifOperStatus 0"""
from collections import defaultdict
d = defaultdict(list)
for line in data.splitlines():
a, b = line.split()
d[a].append(b)
print((d.items()))
[('ifOperStatus', ['1', '0']), ('ifAdminStatus', ['1', '1']), ('ifDescr', ['lo', 'eth0'])]
</code></pre>
<p>using your code just use the attributes:</p>
<pre><code>for i in item:
d[i.oid].append(i.val)
</code></pre>
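<p>To go all the way to the per-interface lists you asked for, you can zip the grouped columns back together — a sketch assuming the same three oids and that values arrive in matching interface order:</p>

```python
from collections import defaultdict

# Stand-in for the (i.oid, i.val) pairs from the snmp result
rows = [("ifDescr", "lo"), ("ifDescr", "eth0"),
        ("ifAdminStatus", "1"), ("ifAdminStatus", "1"),
        ("ifOperStatus", "1"), ("ifOperStatus", "0")]

d = defaultdict(list)
for oid, val in rows:
    d[oid].append(val)

# Re-assemble one [descr, admin, oper] row per interface
result = [list(t) for t in zip(d["ifDescr"], d["ifAdminStatus"], d["ifOperStatus"])]
print(result)  # [['lo', '1', '1'], ['eth0', '1', '0']]
```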
|
python|sorting|grouping
| 2 |
1,287 | 30,821,808 |
Python list comprehension - need elements skipped combinations
|
<p>For this input list </p>
<pre><code>[0, 1, 2, 3, 4, 5]
</code></pre>
<p>I need this output </p>
<pre><code>[[0, 2],
[0, 3],
[0, 4],
[0, 5],
[1, 3],
[1, 4],
[1, 5],
[2, 4],
[2, 5],
[3, 5],
[0, 2, 3],
[0, 3, 4],
[0, 4, 5],
[1, 3, 4],
[1, 4, 5],
[2, 4, 5],
[0, 2, 3, 4],
[0, 3, 4, 5],
[1, 3, 4, 5]]
</code></pre>
<p>I have tried this code,</p>
<pre><code>for k in range( 0, 5 ):
for i in range( len( inputlist ) - ( 2 + k ) ):
print [inputlist[k], inputlist[i + ( 2 + k )]]
for i in range( len( inputlist ) - ( 3 + k ) ):
print [inputlist[k], inputlist[i + ( 2 + k )], inputlist[i + ( 3 + k )]]
for i in range( len( inputlist ) - ( 4 + k ) ):
print [inputlist[k], inputlist[i + ( 2 + k )], inputlist[i + ( 3 + k )], inputlist[i + ( 4 + k )]]
</code></pre>
<p>I need skipped patterns,
1,2,3 --> 1,3
1,2,3,4 --> [1,3],[1,4],[2,4]</p>
<p>ie, first element, third element and so on. </p>
<p>How to generalize this? Help is appreciated </p>
|
<p>Try to describe with words your problem.</p>
<p>From what I understand from your example:</p>
<pre><code>def good(x): return x[0]+1!=x[1] and all(i+1==j for i,j in zip(x[1:],x[2:]))
from itertools import combinations
[i for j in range(2,5) for i in filter(good, combinations(l,j))]
</code></pre>
<blockquote>
<p>[(0, 2), (0, 3), (0, 4), (0, 5), (1, 3), (1, 4), (1, 5), (2, 4), (2, 5), (3, 5), (0, 2, 3), (0, 3, 4), (0, 4, 5), (1, 3, 4), (1, 4, 5), (2, 4, 5), (0, 2, 3, 4), (0, 3, 4, 5), (1, 3, 4, 5)]</p>
</blockquote>
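<p>Put together as a runnable snippet, with <code>l</code> as the input list from the question:</p>

```python
from itertools import combinations

def good(x):
    # first gap must be a skip; the remaining elements must be consecutive
    return x[0] + 1 != x[1] and all(i + 1 == j for i, j in zip(x[1:], x[2:]))

l = [0, 1, 2, 3, 4, 5]
result = [i for j in range(2, 5) for i in filter(good, combinations(l, j))]
print(result)
```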
|
python|list|list-comprehension
| 5 |
1,288 | 30,856,133 |
irregular slicing/copying in numpy array
|
<p>Suppose I have an array with 10 elements, e.g. <code>a=np.arange(10)</code>. If I want to create another array with the 1st, 3rd, 5th, 7th, 9th, 10th elements of the original array, i.e. <code>b=np.array([0,2,4,6,8,9])</code>, how can I do it efficiently?</p>
<p>thanks</p>
|
<pre><code>a[[0, 2, 4, 6, 8, 9]]
</code></pre>
<p>Index <code>a</code> with a list or array representing the desired indices. (Not <code>1, 3, 5, 7, 9, 10</code>, because indexing starts from 0.) It's a bit confusing that the indices and the values are the same here, so have a different example:</p>
<pre><code>>>> a = np.array([5, 4, 6, 3, 7, 2, 8, 1, 9, 0])
>>> a[[0, 2, 4, 6, 8, 9]]
array([5, 6, 7, 8, 9, 0])
</code></pre>
<p>Note that this creates a copy, not a view. <a href="https://stackoverflow.com/questions/30609734/numpy-ndarray-advanced-indexing">Also, note that this might not generalize to multiple axes the way you expect.</a></p>
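<p>Equivalently, <code>np.take</code> performs the same fancy-indexing copy, which can be handy when the axis is a variable — a small sketch:</p>

```python
import numpy as np

a = np.array([5, 4, 6, 3, 7, 2, 8, 1, 9, 0])
idx = [0, 2, 4, 6, 8, 9]

b = a[idx]           # fancy indexing: returns a copy
c = np.take(a, idx)  # same result via np.take
print(b, c)
```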
|
arrays|numpy|copy|scipy|slice
| 1 |
1,289 | 40,247,825 |
Using Python ARMA model fit
|
<p>I have a time series data and I am trying to fit ARMA(p,q) model to it but I am not sure what 'p' and 'q' to use. I came across this link <a href="http://statsmodels.sourceforge.net/devel/generated/statsmodels.tsa.arima_model.ARMA.fit.html" rel="nofollow">enter link description here</a></p>
<p>The usage for this model is <a href="http://statsmodels.sourceforge.net/devel/examples/notebooks/generated/tsa_arma.html" rel="nofollow">enter link description here</a></p>
<p>But I don't think it automatically decides what 'p' and 'q' to use. It seems like I need to know what 'p' and 'q' is appropriate.</p>
|
<p>You'll have to do a bit of reading outside of the statsmodel package documentation. </p>
<p>See some of the content in this answer:
<a href="https://stackoverflow.com/a/12361198/6923545">https://stackoverflow.com/a/12361198/6923545</a></p>
<p>There's a guy named Rob Hyndman who wrote a great book on forecasting, and it would be a fine idea to start there. Chapters 8.3 and 8.4 are the bulk of what you're looking for.</p>
<h3>From Rob's book <a href="https://www.otexts.org/fpp/8/3" rel="nofollow noreferrer">chapter 8.3</a>,</h3>
<blockquote>
<p>In an autoregression model, we forecast the variable of interest using a linear combination of past values of the variable.</p>
</blockquote>
<p>This is describing <em>p</em> -- the number of past values used to forecast a value</p>
<h3>From Rob's book <a href="https://www.otexts.org/fpp/8/4" rel="nofollow noreferrer">chapter 8.4</a>,</h3>
<blockquote>
<p>a moving average model uses past forecast errors in a regression-like model.</p>
</blockquote>
<p>This is describing <em>q</em>, the number of previous forecast errors in the model. </p>
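<p>Neither chapter hands you <em>p</em> and <em>q</em> automatically; in practice you read them off ACF/PACF plots. As a plain-numpy illustration (a hedged sketch, not the statsmodels API), the sample autocorrelation of a simulated AR(1) process decays geometrically with the lag, which is exactly the pattern those plots reveal:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, phi = 5000, 0.8
x = np.zeros(n)
for t in range(1, n):              # simulate AR(1): x_t = 0.8 * x_{t-1} + noise
    x[t] = phi * x[t - 1] + rng.standard_normal()

def acf(series, lag):
    """Sample autocorrelation at a given lag."""
    s = series - series.mean()
    return np.dot(s[:-lag], s[lag:]) / np.dot(s, s)

print(abs(acf(x, 1) - phi) < 0.05)   # lag-1 autocorrelation is close to phi: True
print(acf(x, 1) > acf(x, 5))         # and it decays as the lag grows: True
```

With real data you would look at where such correlations cut off or tail away to pick candidate orders, then compare fitted models by AIC/BIC.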
|
python|statsmodels
| 1 |
1,290 | 29,102,415 |
Artifactory PyPi repo layout with build promotion
|
<p><strong>Q1:</strong>
I have an Artifactory PyPi enabled repo <em>my-pypi-repo</em> where I can publish my packages. When uploading via <code>python setup.py sdist upload</code>, I get a structure like this:</p>
<pre><code>my-pypi-repo|
|my_package|
|x.y.z|
|my_package-x.y.z.tar.gz
</code></pre>
<p>The problem is this structure will not match any "allowed" repo layout in Artifactory, since <em>[org]</em> or <em>[orgPath]</em> are mandatory:</p>
<blockquote>
<p>Pattern '[module]/[baseRev]/[module]-[baseRev].[ext]' must at-least
contain the tokens 'module', 'baseRev' and 'org' or 'orgPath'.</p>
</blockquote>
<p>I managed to publish to a path by 'hacking' the package name to <code>myorg/my_package</code>, but then pip cannot find it, so it's pretty useless.</p>
<p><strong>Q2:</strong>
Has anyone tried the "ci-repo" and "releases-repo" with promotion for Python using Artifactory? </p>
<p>What I would like to achieve:</p>
<p><strong>CI repo:</strong>
<code>my_package-1.2.3+build90.tar.gz</code> When this artifact gets promoted build metadata gets dropped </p>
<p><strong>Releases repo:</strong>
<code>my_package-1.2.3.tar.gz</code> </p>
<p>I can achieve this via repo layouts (providing I resolve Q1). The problem is how to deal with the "embedded" version inside my Python script, hardcoded in setup.py. </p>
<p>I'd rather not rebuild the package again, for best practices.</p>
|
<p>I am running into the same issue in regards to your first question/problem. When configuring my system to publish to artifactory using pip, it uses the format you described.</p>
<p>As you mentioned, the <code>[org]</code> or <code>[orgPath]</code> is mandatory and this basically breaks all the REST API functionality, like searching for latest version, etc. I'm currently using this as my Artifact Path Pattern:</p>
<pre><code>[org]/[module]/[baseRev].([fileItegRev])/[module]-[baseRev].([fileItegRev]).[ext]
</code></pre>
<p>The problem is that pip doesn't understand the concept of <code>[org]</code> in this case. I'm temporarily using a python script to publish my packages to Artifactory to get around this. Hopefully this is something that can be addressed by the jFrog team.</p>
<p>The python script simply uses Artifactory's REST API to publish to my local pypi repository, tacking on a few properties so that some of the REST API functions work properly, like <a href="https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API#ArtifactoryRESTAPI-ArtifactLatestVersionSearchBasedonProperties" rel="nofollow">Artifact Latest Version Search Based on Properties</a>.</p>
<p>I need to be able to use that call because we're using Chef in-house and we use that method to get the latest version. The <code>pypi.version</code> property that gets added when publishing via <code>python setup.py sdist upload -r local</code> doesn't work with the REST API so I have to manually add the <code>version</code> property. Painful to be honest since we can't add properties when using the upload option with setup.py. Ideally I'd like to be able to do everything using pip, but at the moment this isn't possible.</p>
<p>I'm using the <a href="http://docs.python-requests.org/en/latest/" rel="nofollow">requests</a> package and the upload method in the Artifactory documentation <a href="https://www.jfrog.com/confluence/display/RTF/Using+Properties+in+Deployment+and+Resolution" rel="nofollow">here</a>. Here is the function I'm using to publish adding a few properties (feel free to add more if you need):</p>
<pre><code>import logging

import requests  # third-party: pip install requests

logger = logging.getLogger(__name__)

def _publish_artifact(name, version, path, summary):
base_url = 'http://server:8081/artifactory/{0}'.format(PYPI_REPOSITORY)
properties = ';version={0};pypi.name={1};pypi.version={0};pypi.summary={2}'\
.format(version, name, summary)
url_path = '/Company/{0}/{1}/{0}-{1}.zip'.format(name, version)
url = '{0}{1}{2}'.format(base_url, properties, url_path)
dist_file = r'{0}\dist\{1}-{2}.zip'.format(path, name, version)
files = {'upload_file': open(dist_file, 'rb')}
s = requests.Session()
s.auth = ('username', 'password')
reply = s.put(url, files=files)
logger.info('HTTP reply: {0}'.format(reply))
</code></pre>
|
python|artifactory
| 3 |
1,291 | 52,304,492 |
Error: Could not find or load main class net.minecraft.launchwrapper.Launch when launching Minecraft 1.12.2 with Forge
|
<p>I've written a launcher for Minecraft 1.12.2 in Python, which just prepares a command and runs it using subprocess.</p>
<p>This is the command formed on Linux Ubuntu:</p>
<pre><code>#!/usr/bin/env bash
java -Xmx4G -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -XX:G1NewSizePercent=20 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=50 -XX:G1HeapRegionSize=32M -Djava.library.path=/tmp/tmp2dl6pidh -cp /home/kuba/Desktop/launch/package/libraries/org/apache/maven/maven-artifact/3.5.3/maven-artifact-3.5.3.jar:/home/kuba/Desktop/launch/package/libraries/org/apache/httpcomponents/httpcore/4.3.2/httpcore-4.3.2.jar:/home/kuba/Desktop/launch/package/libraries/org/apache/httpcomponents/httpclient/4.3.3/httpclient-4.3.3.jar:/home/kuba/Desktop/launch/package/libraries/org/apache/logging/log4j/log4j-core/2.8.1/log4j-core-2.8.1.jar:/home/kuba/Desktop/launch/package/libraries/org/apache/logging/log4j/log4j-api/2.8.1/log4j-api-2.8.1.jar:/home/kuba/Desktop/launch/package/libraries/org/apache/commons/commons-compress/1.8.1/commons-compress-1.8.1.jar:/home/kuba/Desktop/launch/package/libraries/org/apache/commons/commons-lang3/3.5/commons-lang3-3.5.jar:/home/kuba/Desktop/launch/package/libraries/org/lwjgl/lwjgl/lwjgl_util/2.9.4-nightly-20150209/lwjgl_util-2.9.4-nightly-20150209.jar:/home/kuba/Desktop/launch/package/libraries/org/lwjgl/lwjgl/lwjgl-platform/2.9.4-nightly-20150209/lwjgl-platform-2.9.4-nightly-20150209-natives-linux.jar:/home/kuba/Desktop/launch/package/libraries/org/lwjgl/lwjgl/lwjgl/2.9.4-nightly-20150209/lwjgl-2.9.4-nightly-20150209.jar:/home/kuba/Desktop/launch/package/libraries/org/scala-lang/scala-xml_2.11/1.0.2/scala-xml_2.11-1.0.2.jar:/home/kuba/Desktop/launch/package/libraries/org/scala-lang/scala-compiler/2.11.1/scala-compiler-2.11.1.jar:/home/kuba/Desktop/launch/package/libraries/org/scala-lang/scala-reflect/2.11.1/scala-reflect-2.11.1.jar:/home/kuba/Desktop/launch/package/libraries/org/scala-lang/scala-swing_2.11/1.0.1/scala-swing_2.11-1.0.1.jar:/home/kuba/Desktop/launch/package/libraries/org/scala-lang/scala-parser-combinators_2.11/1.0.1/scala-parser-combinators_2.11-1.0.1.jar:/home/kuba/Desktop/launch/package/libraries/org/scala-lang/scala-actors-migration_2.11/1
.1.0/scala-actors-migration_2.11-1.1.0.jar:/home/kuba/Desktop/launch/package/libraries/org/scala-lang/scala-library/2.11.1/scala-library-2.11.1.jar:/home/kuba/Desktop/launch/package/libraries/org/scala-lang/plugins/scala-continuations-plugin_2.11.1/1.0.2/scala-continuations-plugin_2.11.1-1.0.2.jar:/home/kuba/Desktop/launch/package/libraries/org/scala-lang/plugins/scala-continuations-library_2.11/1.0.2/scala-continuations-library_2.11-1.0.2.jar:/home/kuba/Desktop/launch/package/libraries/org/ow2/asm/asm-all/5.2/asm-all-5.2.jar:/home/kuba/Desktop/launch/package/libraries/it/unimi/dsi/fastutil/7.1.0/fastutil-7.1.0.jar:/home/kuba/Desktop/launch/package/libraries/commons-codec/commons-codec/1.10/commons-codec-1.10.jar:/home/kuba/Desktop/launch/package/libraries/oshi-project/oshi-core/1.1/oshi-core-1.1.jar:/home/kuba/Desktop/launch/package/libraries/commons-logging/commons-logging/1.1.3/commons-logging-1.1.3.jar:/home/kuba/Desktop/launch/package/libraries/commons-io/commons-io/2.5/commons-io-2.5.jar:/home/kuba/Desktop/launch/package/libraries/net/sf/jopt-simple/jopt-simple/5.0.3/jopt-simple-5.0.3.jar:/home/kuba/Desktop/launch/package/libraries/net/sf/trove4j/trove4j/3.0.3/trove4j-3.0.3.jar:/home/kuba/Desktop/launch/package/libraries/net/java/jinput/jinput/2.0.5/jinput-2.0.5.jar:/home/kuba/Desktop/launch/package/libraries/net/java/jinput/jinput-platform/2.0.5/jinput-platform-2.0.5-natives-linux.jar:/home/kuba/Desktop/launch/package/libraries/net/java/dev/jna/platform/3.4.0/platform-3.4.0.jar:/home/kuba/Desktop/launch/package/libraries/net/java/dev/jna/jna/4.4.0/jna-4.4.0.jar:/home/kuba/Desktop/launch/package/libraries/net/java/jutils/jutils/1.0.0/jutils-1.0.0.jar:/home/kuba/Desktop/launch/package/libraries/net/minecraftforge/forge/1.12.2-14.23.4.2705/forge-1.12.2-14.23.4.2705.jar:/home/kuba/Desktop/launch/package/libraries/net/minecraft/launchwrapper/1.12/launchwrapper-1.12.jar:/home/kuba/Desktop/launch/package/libraries/lzma/lzma/0.0.1/lzma-0.0.1.jar:/home/kuba/Desktop/la
unch/package/libraries/io/netty/netty-all/4.1.9.Final/netty-all-4.1.9.Final.jar:/home/kuba/Desktop/launch/package/libraries/java3d/vecmath/1.5.2/vecmath-1.5.2.jar:/home/kuba/Desktop/launch/package/libraries/com/google/guava/guava/21.0/guava-21.0.jar:/home/kuba/Desktop/launch/package/libraries/com/google/code/gson/gson/2.8.0/gson-2.8.0.jar:/home/kuba/Desktop/launch/package/libraries/com/paulscode/codecwav/20101023/codecwav-20101023.jar:/home/kuba/Desktop/launch/package/libraries/com/paulscode/soundsystem/20120107/soundsystem-20120107.jar:/home/kuba/Desktop/launch/package/libraries/com/paulscode/librarylwjglopenal/20100824/librarylwjglopenal-20100824.jar:/home/kuba/Desktop/launch/package/libraries/com/paulscode/libraryjavasound/20101123/libraryjavasound-20101123.jar:/home/kuba/Desktop/launch/package/libraries/com/paulscode/codecjorbis/20101023/codecjorbis-20101023.jar:/home/kuba/Desktop/launch/package/libraries/com/typesafe/config/1.2.1/config-1.2.1.jar:/home/kuba/Desktop/launch/package/libraries/com/typesafe/akka/akka-actor_2.11/2.3.3/akka-actor_2.11-2.3.3.jar:/home/kuba/Desktop/launch/package/libraries/com/ibm/icu/icu4j-core-mojang/51.2/icu4j-core-mojang-51.2.jar:/home/kuba/Desktop/launch/package/libraries/com/mojang/authlib/1.5.25/authlib-1.5.25.jar:/home/kuba/Desktop/launch/package/libraries/com/mojang/text2speech/1.10.3/text2speech-1.10.3-natives-linux.jar:/home/kuba/Desktop/launch/package/libraries/com/mojang/text2speech/1.10.3/text2speech-1.10.3.jar:/home/kuba/Desktop/launch/package/libraries/com/mojang/patchy/1.1/patchy-1.1.jar:/home/kuba/Desktop/launch/package/libraries/com/mojang/realms/1.10.22/realms-1.10.22.jar:/home/kuba/Desktop/launch/package/libraries/jline/jline/2.13/jline-2.13.jar:/home/kuba/Desktop/launch/package/versions/1.12.2/1.12.2.jar net.minecraft.launchwrapper.Launch --username kuba --version 1.12.2 --gameDir /home/kuba/Desktop/launch/package --assetsDir /home/kuba/Desktop/launch/package/assets --assetIndex 1.12 --uuid 
c8db861e4ea74ebba869459e68de5e15 --accessToken 6cf19a22fac84ebfa62ba697b899bd68 --userType offline --tweakClass net.minecraftforge.fml.common.launcher.FMLTweaker --versionType Forge
</code></pre>
<p>This is the command formed on Windows 10 Home:</p>
<pre><code>@echo off
java -Xmx4G -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -XX:G1NewSizePercent=20 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=50 -XX:G1HeapRegionSize=32M -Djava.library.path=C:\Users\kubac\AppData\Local\Temp\tmpsmseu5hx -cp C:\Users\kubac\Desktop\launch\package\libraries\com\google\code\gson\gson\2.8.0\gson-2.8.0.jar:C:\Users\kubac\Desktop\launch\package\libraries\com\google\guava\guava\21.0\guava-21.0.jar:C:\Users\kubac\Desktop\launch\package\libraries\com\ibm\icu\icu4j-core-mojang\51.2\icu4j-core-mojang-51.2.jar:C:\Users\kubac\Desktop\launch\package\libraries\com\mojang\authlib\1.5.25\authlib-1.5.25.jar:C:\Users\kubac\Desktop\launch\package\libraries\com\mojang\patchy\1.1\patchy-1.1.jar:C:\Users\kubac\Desktop\launch\package\libraries\com\mojang\realms\1.10.22\realms-1.10.22.jar:C:\Users\kubac\Desktop\launch\package\libraries\com\mojang\text2speech\1.10.3\text2speech-1.10.3-natives-windows.jar:C:\Users\kubac\Desktop\launch\package\libraries\com\mojang\text2speech\1.10.3\text2speech-1.10.3.jar:C:\Users\kubac\Desktop\launch\package\libraries\com\paulscode\codecjorbis\20101023\codecjorbis-20101023.jar:C:\Users\kubac\Desktop\launch\package\libraries\com\paulscode\codecwav\20101023\codecwav-20101023.jar:C:\Users\kubac\Desktop\launch\package\libraries\com\paulscode\libraryjavasound\20101123\libraryjavasound-20101123.jar:C:\Users\kubac\Desktop\launch\package\libraries\com\paulscode\librarylwjglopenal\20100824\librarylwjglopenal-20100824.jar:C:\Users\kubac\Desktop\launch\package\libraries\com\paulscode\soundsystem\20120107\soundsystem-20120107.jar:C:\Users\kubac\Desktop\launch\package\libraries\com\typesafe\akka\akka-actor_2.11\2.3.3\akka-actor_2.11-2.3.3.jar:C:\Users\kubac\Desktop\launch\package\libraries\com\typesafe\config\1.2.1\config-1.2.1.jar:C:\Users\kubac\Desktop\launch\package\libraries\commons-codec\commons-codec\1.10\commons-codec-1.10.jar:C:\Users\kubac\Desktop\launch\package\libraries\commons-io\commons-io\2.5\commons-io-2.5.jar:C:\Users\kubac\Desktop\launch\
package\libraries\commons-logging\commons-logging\1.1.3\commons-logging-1.1.3.jar:C:\Users\kubac\Desktop\launch\package\libraries\io\netty\netty-all\4.1.9.Final\netty-all-4.1.9.Final.jar:C:\Users\kubac\Desktop\launch\package\libraries\it\unimi\dsi\fastutil\7.1.0\fastutil-7.1.0.jar:C:\Users\kubac\Desktop\launch\package\libraries\java3d\vecmath\1.5.2\vecmath-1.5.2.jar:C:\Users\kubac\Desktop\launch\package\libraries\jline\jline\2.13\jline-2.13.jar:C:\Users\kubac\Desktop\launch\package\libraries\lzma\lzma\0.0.1\lzma-0.0.1.jar:C:\Users\kubac\Desktop\launch\package\libraries\net\java\dev\jna\jna\4.4.0\jna-4.4.0.jar:C:\Users\kubac\Desktop\launch\package\libraries\net\java\dev\jna\platform\3.4.0\platform-3.4.0.jar:C:\Users\kubac\Desktop\launch\package\libraries\net\java\jinput\jinput\2.0.5\jinput-2.0.5.jar:C:\Users\kubac\Desktop\launch\package\libraries\net\java\jinput\jinput-platform\2.0.5\jinput-platform-2.0.5-natives-windows.jar:C:\Users\kubac\Desktop\launch\package\libraries\net\java\jutils\jutils\1.0.0\jutils-1.0.0.jar:C:\Users\kubac\Desktop\launch\package\libraries\net\minecraft\launchwrapper\1.12\launchwrapper-1.12.jar:C:\Users\kubac\Desktop\launch\package\libraries\net\minecraftforge\forge\1.12.2-14.23.4.2705\forge-1.12.2-14.23.4.2705.jar:C:\Users\kubac\Desktop\launch\package\libraries\net\sf\jopt-simple\jopt-simple\5.0.3\jopt-simple-5.0.3.jar:C:\Users\kubac\Desktop\launch\package\libraries\net\sf\trove4j\trove4j\3.0.3\trove4j-3.0.3.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\apache\commons\commons-compress\1.8.1\commons-compress-1.8.1.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\apache\commons\commons-lang3\3.5\commons-lang3-3.5.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\apache\httpcomponents\httpclient\4.3.3\httpclient-4.3.3.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\apache\httpcomponents\httpcore\4.3.2\httpcore-4.3.2.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\apache\logging\log4j\log4j-api\2.8.1\log4j-
api-2.8.1.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\apache\logging\log4j\log4j-core\2.8.1\log4j-core-2.8.1.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\apache\maven\maven-artifact\3.5.3\maven-artifact-3.5.3.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\lwjgl\lwjgl\lwjgl\2.9.4-nightly-20150209\lwjgl-2.9.4-nightly-20150209.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\lwjgl\lwjgl\lwjgl-platform\2.9.4-nightly-20150209\lwjgl-platform-2.9.4-nightly-20150209-natives-windows.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\lwjgl\lwjgl\lwjgl_util\2.9.4-nightly-20150209\lwjgl_util-2.9.4-nightly-20150209.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\ow2\asm\asm-all\5.2\asm-all-5.2.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\scala-lang\plugins\scala-continuations-library_2.11\1.0.2\scala-continuations-library_2.11-1.0.2.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\scala-lang\plugins\scala-continuations-plugin_2.11.1\1.0.2\scala-continuations-plugin_2.11.1-1.0.2.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\scala-lang\scala-actors-migration_2.11\1.1.0\scala-actors-migration_2.11-1.1.0.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\scala-lang\scala-compiler\2.11.1\scala-compiler-2.11.1.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\scala-lang\scala-library\2.11.1\scala-library-2.11.1.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\scala-lang\scala-parser-combinators_2.11\1.0.1\scala-parser-combinators_2.11-1.0.1.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\scala-lang\scala-reflect\2.11.1\scala-reflect-2.11.1.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\scala-lang\scala-swing_2.11\1.0.1\scala-swing_2.11-1.0.1.jar:C:\Users\kubac\Desktop\launch\package\libraries\org\scala-lang\scala-xml_2.11\1.0.2\scala-xml_2.11-1.0.2.jar:C:\Users\kubac\Desktop\launch\package\libraries\oshi-project\oshi-core\1.1\oshi-core-1.1.jar:C:\Users\kubac\Desktop\launc
h\package\versions\1.12.2\1.12.2.jar net.minecraft.launchwrapper.Launch --username kuba --version 1.12.2 --gameDir C:\Users\kubac\Desktop\launch\package --assetsDir C:\Users\kubac\Desktop\launch\package\assets --assetIndex 1.12 --uuid 1e12c9641d68419c8e0ef3c43adc9678 --accessToken c270917a4aa44267859663b52ebdf07f --userType offline --tweakClass net.minecraftforge.fml.common.launcher.FMLTweaker --versionType Forge
</code></pre>
<p>PROBLEM:
Ubuntu version executes perfectly, without any complain, but
Windows version says:</p>
<pre><code>Error: Could not find or load main class net.minecraft.launchwrapper.Launch
</code></pre>
<p>and it does not start at all.</p>
<p>As you can see, the Ubuntu and Windows commands are almost identical.</p>
<p>The minecraft installation (used on both Ubuntu and Windows) that I'm trying to start, comes from Windows (generated by official launcher, forge is installed, I only replace libraries marked as natives because they are platform dependent), and it can be started using official launcher on both platforms without any problems, so I assume that there is no issue with missing or corrupted minecraft files. Moreover I was able to start it using my custom launcher but only on Ubuntu, as I wrote above.</p>
<p>Do you have any suggestions on what I am doing wrong?</p>
<p>PS: I've tried to start vanilla Minecraft using my custom launcher and the issue is exactly the same: it works on Ubuntu but fails on Windows with the same error (of course, it mentions the vanilla main class instead).</p>
<p>EDIT I do not provide the Python code as it is long, and it just extracts natives to a temporary folder and forms the command to launch minecraft, which could be just written by hand and pasted into a script and executed.</p>
|
<p>On Windows the classpath separator is <code>;</code>, not <code>:</code> (which Linux uses). Replacing the separator when building the <code>-cp</code> argument fixes this issue.</p>
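<p>Python already exposes the platform's separator as <code>os.pathsep</code>, so a launcher can build the <code>-cp</code> value portably; a minimal hedged sketch with made-up jar paths:</p>

```python
import os

jars = ['/libs/a.jar', '/libs/b.jar', '/libs/c.jar']   # hypothetical entries
classpath = os.pathsep.join(jars)   # ';' on Windows, ':' on POSIX
cmd = ['java', '-cp', classpath, 'net.minecraft.launchwrapper.Launch']
print(classpath.count(os.pathsep))  # 2 separators joining 3 entries
```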
|
java|python|windows|ubuntu|minecraft
| 2 |
1,292 | 62,407,058 |
Merge rows from multiple CSV files into one CSV file and keep same number of columns
|
<p>I have 3 CSV files (separated by ',') with no headers and need to concat them into one file:</p>
<p>file1.csv</p>
<pre><code>United Kingdom John
</code></pre>
<p>file2.csv </p>
<pre><code>France Pierre
</code></pre>
<p>file3.csv </p>
<pre><code>Italy Marco
</code></pre>
<p>expected result:</p>
<pre><code>United Kingdom John
France Pierre
Italy Marco
</code></pre>
<p>my code:</p>
<pre><code>import pandas as pd
df = pd.read_csv('path/to/file1.csv', sep=',')
df1 = pd.read_csv('path/to/file2.csv', sep=',')
df2 = pd.read_csv('path/to/file3.csv', sep=',')
df_combined = pd.concat([df,df1,df2])
df_combined.to_csv('path/to/output.csv')
</code></pre>
<p>the above gives me data merged but it added rows from my CSV files as new columns and rows, instead to add only new rows to existing two columns:</p>
<pre><code>United Kingdom John
France Pierre
Italy Marco
</code></pre>
<p>Could someone please help with this? Thank you in advance!</p>
|
<p>Pandas usually infers the column names from the first row when reading a CSV file. One thing you can do here is check each data frame's header; you should see that the first data row has been treated as the header.</p>
<p>In order to override this default behaviour, you can use the <code>names</code> parameter to explicitly specify column names, like <code>df1 = pd.read_csv("file1.csv", names=['country', 'name'])</code>, or pass <code>header=None</code> to get default integer column labels. Then pandas would be able to merge the columns accordingly.</p>
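<p>A minimal sketch of that fix (in-memory strings stand in for the three files, and the column names are made up):</p>

```python
import io
import pandas as pd

file1 = io.StringIO("United Kingdom,John\n")
file2 = io.StringIO("France,Pierre\n")
file3 = io.StringIO("Italy,Marco\n")

# names= (or header=None) stops pandas treating the first data row as a header
frames = [pd.read_csv(f, names=['country', 'name']) for f in (file1, file2, file3)]
combined = pd.concat(frames, ignore_index=True)
print(combined.shape)   # (3, 2): three rows, two columns
```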
|
python|pandas|csv
| 1 |
1,293 | 67,522,690 |
Reshaping NumPy Array with Index Locations in Columns
|
<p>I have a 3 column numpy array, as follows</p>
<pre><code>[0 0 'a'
0 1 'b'
1 0 'c'
1 1 'd']
</code></pre>
<p>The first column contains the row index for the 3rd columns value, and the second column the column index. That is, the final output should be</p>
<pre><code>['a' 'b'
'c' 'd']
</code></pre>
<p>How can I use numpy to place the third columns values into a numpy array at the indices specified in the first two columns? In reality, my dataset is much larger (this is for HD imagery, a 13000 x 11000 pixel photograph, so efficiency is important (avoiding loops)).</p>
<p>Thanks!</p>
|
<p>You can use <a href="https://numpy.org/doc/stable/reference/arrays.indexing.html#advanced-indexing" rel="nofollow noreferrer">numpy advanced indexing</a> to place the values in an empty instantiated array. Here's an example assuming the third column is actually numeric, with the values <code>[1, 3, 5, 7]</code>, and that your matrix is square (if it isn't you can change the shape of the empty matrix, as long as you can get the dimensions).</p>
<pre class="lang-py prettyprint-override"><code>ar = np.array([[0, 0, 1, 1], [0, 1, 0, 1], [1, 3, 5, 7]]).T
n = int(np.sqrt(ar.shape[0])) # Assuming you have a square matrix
out = np.empty([n, n]) # Create a new array of the expected shape
row_idx = ar[:, 0]
col_idx = ar[:, 1]
out[row_idx, col_idx] = ar[:, 2] # Put the data into the indexed locations
</code></pre>
<p>which returns:</p>
<pre class="lang-py prettyprint-override"><code>>> out
array([[1., 3.],
[5., 7.]])
</code></pre>
<p>If your third column is actually a string, then it's a bit more tricky, as you have to specify the size of the string you want to handle when you instantiate the numpy array and also convert the index slices from <code>object</code> back to <code>int</code>.</p>
<hr />
<p>EDIT: Additional info for string data.
If you are actually trying to put strings into a numpy array, you'll need to declare the size of expected string using the unicode string <code>dtype='U1'</code> kwarg to <code>np.empty</code>, where the number after <code>U</code> is the length of the character array.</p>
<pre class="lang-py prettyprint-override"><code>ar = np.array([[0, 0, 1, 1], [0, 1, 0, 1], ['a', 'b', 'c', 'd']]).T
row_idx = ar[:, 0].astype(int)
col_idx = ar[:, 1].astype(int)
out = np.empty([2, 2], dtype='U1') # Unicode string
out[row_idx, col_idx] = ar[:, 2]
</code></pre>
<p>which returns:</p>
<pre class="lang-py prettyprint-override"><code>>> out
array([['a', 'b'],
['c', 'd']], dtype='<U1')
</code></pre>
|
python|arrays|numpy
| 0 |
1,294 | 19,631,993 |
What is an efficient way to check if a name exists in sqlalchemy, python
|
<p>currently I'm using a for loop to check if a record exists,</p>
<pre><code>def IsUserPrivileged(name):
namequery = Ops.query.all()
for names in namequery:
        if name == names.name:
return True
else:
return False
</code></pre>
<p>So theres a database of ops with an Id field and a name field. I'm looking for something if this (I know this is wrong syntax):</p>
<pre><code>def IsUserPrivileged(name)
namequery = Ops.query.filter_by(name =name).first()
if namequery:
return True
else:
return False
</code></pre>
<p>But this throws an error.</p>
<p>Thanks :)</p>
|
<p>You were almost there, but filter on the mapped column, not the local variable: <code>filter(name==name)</code> would compare the Python variable with itself and always be true.</p>
<pre><code>def is_user_privileged(name):
    namequery = Ops.query.filter(Ops.name == name)
    if namequery.count():
        return True
    else:
        return False
</code></pre>
<p>But you can optimize your function further; <code>first()</code> stops at the first matching row instead of counting all of them:</p>
<pre><code>def is_user_privileged(name):
    return Ops.query.filter(Ops.name == name).first() is not None
</code></pre>
<p>In Python, the general practice is to use InitialCaps for classes, and for functions lower case with underscores. For more on this, see the <a href="http://www.python.org/dev/peps/pep-0008/#function-names" rel="nofollow">python style guide</a>, also called by its Python Enhancement Proposal (PEP) number, PEP8.</p>
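<p>If the table can be large, an <code>EXISTS</code> query lets the database stop at the first match. A self-contained hedged sketch with plain SQLAlchemy (the <code>Ops</code> model, its column, and the data here are made up for the demo):</p>

```python
from sqlalchemy import create_engine, Column, Integer, String, exists
from sqlalchemy.orm import sessionmaker
try:
    from sqlalchemy.orm import declarative_base       # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Ops(Base):
    __tablename__ = 'ops'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')                   # in-memory demo database
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Ops(name='alice'))
session.commit()

def is_user_privileged(session, name):
    # EXISTS short-circuits on the first matching row
    return session.query(exists().where(Ops.name == name)).scalar()

print(bool(is_user_privileged(session, 'alice')))     # True
print(bool(is_user_privileged(session, 'bob')))       # False
```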
|
python|sqlalchemy
| 2 |
1,295 | 19,489,132 |
Python: Passing value of a string variable as argument in Popen
|
<p>What is the proper way to pass values of string variables to Popen function in Python? I tried the below piece of code</p>
<pre><code>var1 = 'hello'
var2 = 'universe'
p=Popen('/usr/bin/python /apps/sample.py ' + '"' + str(eval('var1')) + ' ' + '"' + str(eval('var2')), shell=True)
</code></pre>
<p>and in sample.py, below is the code</p>
<pre><code>print sys.argv[1]
print '\n'
print sys.argv[2]
</code></pre>
<p>but it prints the below output</p>
<pre><code>hello universe
none
</code></pre>
<p>so its considering both var1 and var2 as one argument. Is this the right approach of passing values of string variables as arguments?</p>
|
<p>This should work:</p>
<pre><code>p = Popen('/usr/bin/python /apps/sample.py {} {}'.format(var1, var2), shell=True)
</code></pre>
<p>Learn about <a href="http://docs.python.org/2/library/string.html#format-examples" rel="nofollow">string formatting </a>.</p>
<p>Second, passing arguments to scripts has its quirks: <a href="http://docs.python.org/2/library/subprocess.html#subprocess.Popen" rel="nofollow">http://docs.python.org/2/library/subprocess.html#subprocess.Popen</a></p>
<p>This works for me:</p>
<p>test.py:</p>
<pre><code>#!/usr/bin/env python
import sys
print sys.argv[1]
print '\n'
print sys.argv[2]
</code></pre>
<p>Then:</p>
<pre><code>chmod +x test.py
</code></pre>
<p>Then:</p>
<pre><code>Python 2.7.4 (default, Jul 5 2013, 08:21:57)
>>> from subprocess import Popen
>>> p = Popen('./test.py {} {}'.format('hello', 'universe'), shell=True)
>>> hello
universe
</code></pre>
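<p>A hedged alternative worth noting: pass the arguments as a list and skip <code>shell=True</code> entirely, since each list element becomes exactly one <code>argv</code> entry and no quoting is needed. The sketch below uses <code>sys.executable</code> and an inline script in place of the real <code>/apps/sample.py</code>:</p>

```python
import sys
from subprocess import Popen, PIPE

var1, var2 = 'hello', 'universe'
# Each list element is passed to the child as one argv entry, verbatim
p = Popen([sys.executable, '-c',
           'import sys; print(sys.argv[1]); print(sys.argv[2])',
           var1, var2], stdout=PIPE)
out, _ = p.communicate()
print(out.decode())
```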
|
python
| 1 |
1,296 | 13,289,291 |
Python: subprocess.communicate(): ValueError with print() function but not "print" builtin
|
<p>I'm trying to run a C program from Python with the subprocess module, capturing its output in a variable. the code looks like this:</p>
<pre><code>from subprocess import Popen, PIPE
from time import time

process = Popen(["myprog", str(length), filename], stdout=PIPE, stderr=PIPE)
#wait for the process
result = process.communicate()
end=time()
print result
</code></pre>
<p>The above code works - <code>result</code> is shown as a 2-tuple of <code>myprog</code>'s stdout output and stderr output (as strings). </p>
<p>... <em>however</em> if I change <code>print result</code> to <code>print(result)</code>...</p>
<pre><code>Traceback (most recent call last):
File "tests.py", line 26, in <module>
print(result)
ValueError: I/O operation on closed file
</code></pre>
<p>I'm completely stumped here, I don't even know where to start trying to explain this! Of course, my program works anyway, but I'd really like to know why that's happened, and hopefully it will be an interesting question.</p>
|
<p>This is <em>not</em> a Python problem. You have a problem with <code>myprog</code>, not Python.</p>
<p>In Python 2, the difference between <code>print something</code> and <code>print(something)</code> is null and void. There is <em>no</em> difference at all because the Python compiler sees the parenthesis as a no-op and the resulting byte-code is exactly the same:</p>
<pre><code>>>> import dis
>>> def foo(): print 'bar'
...
>>> dis.dis(foo)
1 0 LOAD_CONST 1 ('bar')
3 PRINT_ITEM
4 PRINT_NEWLINE
5 LOAD_CONST 0 (None)
8 RETURN_VALUE
>>> def foo(): print('bar')
...
>>> dis.dis(foo)
1 0 LOAD_CONST 1 ('bar')
3 PRINT_ITEM
4 PRINT_NEWLINE
5 LOAD_CONST 0 (None)
8 RETURN_VALUE
</code></pre>
|
python|printing|subprocess|popen
| 4 |
1,297 | 13,244,212 |
Prevent duplicate of multiple constraint sqlite3 python
|
<p>Suppose I have a table :</p>
<pre><code>Table A:
Id Name Movie Comment
1 Foo Bar anything
2 Foo Bar anything
</code></pre>
<p>Here I want to make sure that the user cannot insert Foo and Bar twice, but he is allowed to insert Foo, foobar.
One solution to prevent this duplication was to add the statement <code>ALTER TABLE table ADD UNIQUE KEY(Name, Movie)</code>, but that is probably MySQL syntax, not SQLite. I wrote the same command in Python for sqlite3 as <code>cur.execute("ALTER TABLE ADD UNIQUE KEY(Name,Movie)")</code>, which gave me the error <code>sqlite3.OperationalError: near "unique": syntax error</code>. Is there anything that can be done while creating table A, or is there any mistake in my SQLite query?</p>
<p>Help is appreciated.</p>
|
<pre><code>CREATE UNIQUE INDEX IDX_Movies ON A(Name, Movie)
</code></pre>
<p>This is standard SQL, and documented at <a href="http://sqlite.org/lang_createindex.html" rel="nofollow">http://sqlite.org/lang_createindex.html</a>. (SQLite's on-line documentation is pretty comprehensive)</p>
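<p>A quick end-to-end check using Python's built-in <code>sqlite3</code> module (table name and columns taken from the question):</p>

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE A (Id INTEGER PRIMARY KEY, Name TEXT, Movie TEXT, Comment TEXT)")
cur.execute("CREATE UNIQUE INDEX IDX_Movies ON A(Name, Movie)")

cur.execute("INSERT INTO A (Name, Movie, Comment) VALUES (?, ?, ?)",
            ('Foo', 'Bar', 'anything'))
duplicate_rejected = False
try:
    # same (Name, Movie) pair again: the unique index rejects it
    cur.execute("INSERT INTO A (Name, Movie, Comment) VALUES (?, ?, ?)",
                ('Foo', 'Bar', 'anything else'))
except sqlite3.IntegrityError:
    duplicate_rejected = True
# a different pair is still allowed
cur.execute("INSERT INTO A (Name, Movie, Comment) VALUES (?, ?, ?)",
            ('Foo', 'foobar', 'anything'))
print(duplicate_rejected)                                    # True
print(cur.execute("SELECT COUNT(*) FROM A").fetchone()[0])   # 2
```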
|
python|sqlite
| 1 |
1,298 | 22,247,904 |
How to style (rich text) in QListWidgetItem and QCombobox items? (PyQt/PySide)
|
<p>I have found similar questions being asked, but without answers or where the answer is an alternative solution.</p>
<p>I need to create a breadcrumb trail in both QComboBoxes and QListWidgets (in PySide), and I'm thinking making these items' text bold. However, I have a hard time finding information on how to achieve this.</p>
<p>This is what I have:</p>
<pre><code># QComboBox
for server in servers:
if optionValue == 'top secret':
optionValue = server
else:
optionValue = '<b>' + server + '</b>'
self.comboBox_servers.addItem( optionValue, 'data to store for this QCombobox item' )
# QListWidgetItem
for folder in folders:
item = QtGui.QListWidgetItem()
if folder == 'top secret':
item.setText( '<b>' + folder + '</b>' )
else:
item.setText( folder )
iconSequenceFilepath = os.path.join( os.path.dirname(__file__), 'folder.png' )
item.setIcon( QtGui.QIcon(r'' + iconSequenceFilepath + ''))
item.setData( QtCore.Qt.UserRole, 'data to store for this QListWidgetItem' )
self.listWidget_folders.addItem( item )
</code></pre>
|
<p>You could use html/css-like styles, i.e. just wrap your text inside tags:</p>
<pre><code>item.setData( QtCore.Qt.UserRole, "<b>{0}</b>".format('data to store for this QListWidgetItem'))
</code></pre>
<p>Another option is setting a font-role:</p>
<pre><code># QListWidgetItem.setData takes (role, value)
item.setData(Qt.FontRole, QFont("myFontFamily", italic=True))
</code></pre>
<p>Maybe you'd have to use <code>QFont.setBold()</code> in your case. However, using html-formatting might be more flexible overall.</p>
<p>In the case of a combo-box use setItemData():</p>
<pre><code># use addItem or insertItem (both work)
# the number ("0" in this case) refers to the item index
combo.insertItem(0, "yourtext")
# set a tooltip
combo.setItemData(0, "a tooltip", Qt.ToolTipRole)
# set the background color
combo.setItemData(0, QColor("#FF333D"), Qt.BackgroundColorRole)
# set the font
combo.setItemData(0, QtGui.QFont('Verdana', bold=True), Qt.FontRole)
</code></pre>
<p>Using style-sheet formating will not work for the Item-Text itself, afaik.</p>
|
python|pyqt|pyside
| 6 |
1,299 | 54,671,706 |
specify data type of columns in a dataframe returned from SQL server
|
<p>I am retrieving data from a SQL Server database using pandas with the line below.</p>
<pre><code>df = pd.read_sql_query(query, cnxn)
</code></pre>
<p>So a dataframe is returned which is want I want. However I have noticed that the columns are not always the correct data type, for example sometimes a number will be a string.</p>
<p>I was wondering what is the best way to get around this?</p>
<p>1) should I initialise an empty dataframe with the correct dtypes for the columns and then populate the dataframe by looping through the cursor result</p>
<p>2) use the dataframe (df in the example above) that is returned and use astype() & other convertors on the columns that require conversion</p>
<p>3) or is there a way to specify in <code>read_sql_query</code> what data type you are expecting for each column from your query</p>
|
<p>By default you have <code>coerce_float=True</code>, and you can feed a list of date columns into <code>parse_dates</code>. You don't have explicit <code>dtype</code> support as in <code>read_csv</code> and other IO methods. There's a discussion about it <a href="https://github.com/pandas-dev/pandas/issues/6798" rel="nofollow noreferrer">here</a>.</p>
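<p>Option 2 from the question is usually the least effort: keep the returned frame and coerce the offending columns with <code>astype</code>. A hedged sketch with made-up column names standing in for the query result:</p>

```python
import pandas as pd

# Stand-in for pd.read_sql_query output where numeric columns came back as strings
df = pd.DataFrame({'id': ['1', '2', '3'],
                   'price': ['9.5', '12.0', '7.25']})

df = df.astype({'id': 'int64', 'price': 'float64'})
print(df.dtypes['id'], df.dtypes['price'])   # int64 float64
```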
|
python|pandas|dataframe
| 1 |