Perfect Square function that doesn't return | 40,089,750 | <p>I'm a beginner working with Python and I was given this task: <em>write a function which returns the highest perfect square which is less or equal to its parameter (a positive integer).</em></p>
<pre><code>def perfsq(n):
    x = 0
    xy = x * x
    if n >= 0:
        while xy < n:
            x += 1
        if xy != n:
            print (("%s is not a perfect square.") % (n))
            x -= 1
            print (("%s is the next highest perfect square.") % (xy))
        else:
            return(print(("%s is a perfect square of %s.") % (n, x)))
</code></pre>
<p>When I run the code to execute the function it doesn't output anything. I'll admit, I'm struggling and if you could give me some advice on how to fix this, I would be grateful.</p>
| 0 | 2016-10-17T15:00:53Z | 40,090,009 | <pre><code>def perfsq(n):
    x = 0
    xy = x * x
    if n >= 0:
        while xy < n:
            x += 1
            xy = x*x  # <-- Here
        if xy != n:
            print (("%s is not a perfect square.") % (n))
            x -= 1
            xy = x*x  # <-- Here
            print (("%s is the next highest perfect square.") % (xy))
        else:
            print(("%s is a perfect square of %s.") % (n, x))  # <-- And here
</code></pre>
<p>Think about it: <code>xy</code> is computed once, before the loop, and never updated afterwards — so it has to be recomputed every time <code>x</code> changes, which is exactly what the three marked lines do.</p>
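<p>With those three lines added (and the arrow annotations removed so it runs), the whole function looks like this — here it also returns the square it finds, since the task asks for a return value:</p>

```python
def perfsq(n):
    x = 0
    xy = x * x
    if n >= 0:
        while xy < n:
            x += 1
            xy = x * x  # recompute the square on every iteration
        if xy != n:
            print("%s is not a perfect square." % n)
            x -= 1
            xy = x * x  # step back down to the square just below n
            print("%s is the next highest perfect square." % xy)
        else:
            print("%s is a perfect square of %s." % (n, x))
        return xy
```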
| -1 | 2016-10-17T15:13:47Z | [
"python",
"loops",
"if-statement"
] |
Perfect Square function that doesn't return | 40,089,750 | <p>I'm a beginner working with Python and I was given this task: <em>write a function which returns the highest perfect square which is less or equal to its parameter (a positive integer).</em></p>
<pre><code>def perfsq(n):
    x = 0
    xy = x * x
    if n >= 0:
        while xy < n:
            x += 1
        if xy != n:
            print (("%s is not a perfect square.") % (n))
            x -= 1
            print (("%s is the next highest perfect square.") % (xy))
        else:
            return(print(("%s is a perfect square of %s.") % (n, x)))
</code></pre>
<p>When I run the code to execute the function it doesn't output anything. I'll admit, I'm struggling and if you could give me some advice on how to fix this, I would be grateful.</p>
| 0 | 2016-10-17T15:00:53Z | 40,090,355 | <p>your loop condition which is</p>
<pre><code>while xy < n:
</code></pre>
<p>this will be true always as xy is always 0 and n is always greater then zero if you will call the function with n = 0, it will print <code>0 is a perfect square of 0.</code> and will return None.</p>
<pre><code>for n > 0
</code></pre>
<p>why it is always <code>true</code> in the case of <code>xy < n</code> because you have assigned xy <code>0</code> and never modified it to anyother value when a loop runs it check the condition and it will always get <code>True</code> </p>
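<p>A minimal sketch of that fix, written as a small helper so the idea is easy to test — the one line that matters is the recomputation of <code>xy</code> inside the loop body:</p>

```python
def smallest_square_at_least(n):
    """Smallest x with x*x >= n; updating xy inside the loop is the whole fix."""
    x = 0
    xy = x * x
    while xy < n:
        x += 1
        xy = x * x  # without this line, xy stays 0 and the loop never ends for n > 0
    return x
```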
| 0 | 2016-10-17T15:31:47Z | [
"python",
"loops",
"if-statement"
] |
Perfect Square function that doesn't return | 40,089,750 | <p>I'm a beginner working with Python and I was given this task: <em>write a function which returns the highest perfect square which is less or equal to its parameter (a positive integer).</em></p>
<pre><code>def perfsq(n):
    x = 0
    xy = x * x
    if n >= 0:
        while xy < n:
            x += 1
        if xy != n:
            print (("%s is not a perfect square.") % (n))
            x -= 1
            print (("%s is the next highest perfect square.") % (xy))
        else:
            return(print(("%s is a perfect square of %s.") % (n, x)))
</code></pre>
<p>When I run the code to execute the function it doesn't output anything. I'll admit, I'm struggling and if you could give me some advice on how to fix this, I would be grateful.</p>
| 0 | 2016-10-17T15:00:53Z | 40,090,455 | <p>A simpler way of doing what you desire is below:</p>
<pre><code>def perfsq(n):
    x = n
    while True:
        if x**2 <= n:
            # display the result on the console using the print function
            print ("The highest perfect square of %s or less is %s" %(n,x**2))
            return x**2 # return the value so that it can be used outside of the function
        else:
            x -= 1
highest_sqr = perfsq(37000)
</code></pre>
<p>This prints the highest perfect square as well as returning the value to be used later in your code:</p>
<blockquote>
<p>The highest perfect square of 37000 or less is 36864</p>
</blockquote>
<p><strong>UPDATE</strong></p>
<p>As said in the comments by @JohnColeman, the code above is very inefficient as it calculates a lot more squares than it needs to before coming to the actual answer. A much better answer to the question was found <a href="https://www.quora.com/Is-there-an-algorithm-to-find-the-nearest-square-of-a-given-number" rel="nofollow">here</a>. I have modified the code from that answer to be applicable to this question:</p>
<pre><code>import math

def find_perfsqr(n):
    root_of_n = math.sqrt(n) # square root the initial number
    # Find the closest integer smaller than the root of n
    floor_int = int(root_of_n) # closest integer below the root
    floor_int_square = floor_int * floor_int # square the integer calculated above
    return floor_int_square
test_number = 123456
perfect_square = find_perfsqr(test_number)
print ("The highest perfect square of %s or less is %s" %(test_number, perfect_square))
</code></pre>
<p>This produces the output </p>
<blockquote>
<p>The highest perfect square of 123456 or less is 123201</p>
</blockquote>
<p>If you wanted to print out the results with print statements inside the function you can add the following line just before the <code>return</code> statement.</p>
<pre><code>("The highest perfect square of %s or less is %s" %(n, floor_int_square))
</code></pre>
<p>To compare the speed of this code compared to my original answer I have used the timeit module. This calls the function for a user defined number of repetitions and records the time it takes:</p>
<pre><code>import timeit
print(timeit.timeit("find_perfsqr(123456)", "from __main__ import find_perfsqr",number= 1000))
print(timeit.timeit("perfsq(123456)", "from __main__ import perfsq",number= 1000))
</code></pre>
<p>Which gives the following results for 1000 repetitions:</p>
<pre><code>0.000784402654164445 # The new function, find_perfsqr (result is in seconds)
67.38138927728903 # Original function, perfsq (result is in seconds)
</code></pre>
<p>This shows that for large numbers the <code>find_perfsqr</code> function is considerably faster.</p>
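<p>One caveat worth adding: for very large inputs, <code>int(math.sqrt(n))</code> can be off by one because <code>math.sqrt</code> works in floating point. A defensive variant (a sketch, not part of the timed code above) nudges the candidate root afterwards:</p>

```python
import math

def find_perfsqr_safe(n):
    """Largest perfect square <= n, guarding against float rounding in sqrt."""
    root = int(math.sqrt(n))
    while root * root > n:         # sqrt rounded up: step the root down
        root -= 1
    while (root + 1) ** 2 <= n:    # sqrt rounded down too far: step it up
        root += 1
    return root * root
```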
| 0 | 2016-10-17T15:37:21Z | [
"python",
"loops",
"if-statement"
] |
Python: Where to put a one time initialization in a test case of unittest framework? | 40,089,787 | <p>My test is simple. I want to send two requests to two different servers then compare whether the results match.</p>
<p>I want to test the following things.</p>
<ol>
<li>send each request and see whether the return code is valid.</li>
<li>compare different portion of the outputs in each test method</li>
</ol>
<p>I do not want to send the requests in the setUp method because they will be sent over and over for each new test. I would rather send the requests once, at initialization (maybe in the <code>__init__</code> method). But I found a lot of people were against that idea because they believe I should not override the <code>__init__</code> method of a TestCase for some reason (I do not know exactly why). If that's the case, where should I send the requests?
I am kind of against doing them in the class body (as shared variables).</p>
| 0 | 2016-10-17T15:02:54Z | 40,089,899 | <p>A class method called before tests in an individual class run. setUpClass is called with the class as the only argument and must be decorated as a classmethod():</p>
<pre><code>@classmethod
def setUpClass(cls):
    ...
</code></pre>
<p>See: <a href="https://docs.python.org/3/library/unittest.html#unittest.TestCase.setUpClass" rel="nofollow">https://docs.python.org/3/library/unittest.html#unittest.TestCase.setUpClass</a></p>
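<p>For the use case in the question — send the requests once, then compare pieces of the responses across several test methods — a sketch might look like the following; the URLs and the <code>fetch</code> helper are hypothetical placeholders for the real request logic:</p>

```python
import unittest

def fetch(url):
    # Hypothetical placeholder: substitute the real request logic,
    # e.g. requests.get(url), when wiring this up.
    return {"status": 200, "body": "payload from %s" % url}

class CompareServersTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once per class, not once per test method.
        cls.resp_a = fetch("http://server-a.example/api")
        cls.resp_b = fetch("http://server-b.example/api")

    def test_return_codes(self):
        self.assertEqual(self.resp_a["status"], 200)
        self.assertEqual(self.resp_b["status"], 200)

    def test_bodies_match(self):
        # compare only the portion of the output that should agree
        self.assertEqual(self.resp_a["body"].split()[0],
                         self.resp_b["body"].split()[0])
```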
| 1 | 2016-10-17T15:09:28Z | [
"python",
"python-2.7",
"unit-testing",
"python-unittest"
] |
optimization for processing big data in pyspark | 40,089,822 | <p>Not a question->need a suggestion</p>
<p>I am operating on 20gb+6gb=26Gb csv file with 1+3 (1-master, 3-slave (each of 16 gb RAM).</p>
<p>This is how I am doing my ops</p>
<pre><code>df = spark.read.csv() #20gb
df1 = spark.read.csv() #6gb
df_merged= df.join(df1,'name','left') ###merging
df_merged.persists(StorageLevel.MEMORY_AND_DISK) ##if i do MEMORY_ONLY will I gain more performance?????
print('No. of records found: ',df_merged.count()) ##just ensure persist by calling an action
df_merged.registerTempTable('table_satya')
query_list= [query1,query2,query3] ###sql query string to be fired
city_list = [city1, city2,city3...total 8 cities]
file_index=0 ###will create files based on increasing index
for query_str in query_list:
    result = spark.sql(query_str) #ex: select * from table_satya where date >= '2016-01-01'
    #result.persist() ###will it increase performance
    for city in city_list:
        df_city = result.where(result.city_name==city)
        #store as csv file(pandas style single file)
        df_city.collect().toPandas().to_csv('file_'+str(file_index)+'.csv',index=False)
        file_index += 1
df_merged.unpersist() ###do I even need to do it or Spark can handle it internally
</code></pre>
<p>Currently it is taking a huge time. </p>
<pre><code>#persist(On count())-34 mins.
#each result(on firing each sql query)-around (2*8=16min toPandas() Op)
# #for each toPandas().to_csv() - around 2 min each
#for 3 query 16*3= 48min
#total 34+48 = 82 min ###Need optimization seriously
</code></pre>
<p>So can anybody suggest how can i optimize the above process for a better performance(Time and Memory both.)</p>
<p>Why I am worried is : I was doing the above on Python-Pandas platform (64Gb single machine with serialized pickle data) and I was able to do that in 8- 12mins. As my data-volume seems growing, so need to adopt a technology like spark.</p>
<p>Thanks in Advance. :)</p>
| 1 | 2016-10-17T15:05:03Z | 40,092,736 | <p>I think your best bet is cutting the source data down to size. You mention that your source data has 90 cities, but you are only interested in 8 of them. Filter out the cities you don't want and keep the ones you do want in separate csv files:</p>
<pre><code>import itertools
import csv
city_list = [city1, city2,city3...total 8 cities]
with open('f1.csv', 'rb') as f1, open('f2.csv', 'rb') as f2:
    r1, r2 = csv.reader(f1), csv.reader(f2)
    header = next(r1)
    next(r2)  # discard headers in second file
    city_col = header.index('city_name')
    city_files = []
    city_writers = {}
    try:
        for city in city_list:
            f = open(city+'.csv', 'wb')
            city_files.append(f)
            writer = csv.writer(f)
            writer.writerow(header)
            city_writers[city] = writer
        for row in itertools.chain(r1, r2):
            city_name = row[city_col]
            if city_name in city_writers:
                city_writers[city_name].writerow(row)
    finally:
        for f in city_files:
            f.close()
</code></pre>
<p>After this iterate over each city, creating a DataFrame for the city, then in a nested loop run your three queries. Each DataFrame should have no problem fitting in memory and the queries should run quickly since they are running over a much smaller data set.</p>
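<p>If you stay outside Spark for this step, the per-city queries can even run over the small csvs directly. A sketch (the column names here are assumptions, following the question's <code>city_name</code> and date conventions):</p>

```python
import csv

def filter_rows(path, predicate):
    """Yield the rows of a per-city csv (as dicts) that satisfy predicate."""
    with open(path) as f:
        for row in csv.DictReader(f):
            if predicate(row):
                yield row

# e.g. rows dated in 2016 or later, mirroring the question's sample query:
# recent = list(filter_rows('Pune.csv', lambda r: r['date'] >= '2016-01-01'))
```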
| 1 | 2016-10-17T17:55:22Z | [
"python",
"python-2.7",
"apache-spark",
"pyspark",
"pyspark-sql"
] |
Error installing pyopenssl using pip | 40,089,841 | <p>I am trying to install different packages using python pip, but i get this error:</p>
<blockquote>
<p>File
"/usr/local/lib/python2.7/site-packages/pip-8.1.2-py2.7.egg/pip/_vendor/
cnx.set_tlsext_host_name(server_hostname) AttributeError: '_socketobject' object has no attribute 'set_tlsext_host_name'</p>
</blockquote>
<p>In some workarounds I found, it is suggested that I need to install <a href="http://stackoverflow.com/questions/31576258/attributeerror-socketobject-object-has-no-attribute-set-tlsext-host-name">pyopenssl</a>.</p>
<p>When I try to manually install pyopenssl I get the error:</p>
<blockquote>
<p>gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall
-Wstrict-prototypes -fPIC -I/usr/local/include/python2.7 -c OpenSSL/ssl/connection.c -o
build/temp.linux-x86_64-2.7/OpenSSL/ssl/connection.o
OpenSSL/ssl/connection.c: In function 'ssl_Connection_set_context':
OpenSSL/ssl/connection.c:289: warning: implicit declaration of
function 'SSL_set_SSL_CTX' OpenSSL/ssl/connection.c: In function
'ssl_Connection_get_servername': OpenSSL/ssl/connection.c:313: error:
'TLSEXT_NAMETYPE_host_name' undeclared (first use in this function)
OpenSSL/ssl/connection.c:313: error: (Each undeclared identifier is
reported only once OpenSSL/ssl/connection.c:313: error: for each
function it appears in.) OpenSSL/ssl/connection.c:320: warning:
implicit declaration of function 'SSL_get_servername'
OpenSSL/ssl/connection.c:320: warning: assignment makes pointer from
integer without a cast OpenSSL/ssl/connection.c: In function
'ssl_Connection_set_tlsext_host_name': OpenSSL/ssl/connection.c:346:
warning: implicit declaration of function 'SSL_set_tlsext_host_name'
error: command 'gcc' failed with exit status 1</p>
</blockquote>
<p>I already have installed all the libraries gcc, python-devel, gcc-c++, libffi-devel, openssl-devel. I am using Red Hat 4</p>
<p>I don't know if I am missing some library.</p>
<p>Any advice.</p>
<p>Thanks in advance</p>
| 0 | 2016-10-17T15:06:13Z | 40,090,080 | <pre><code>For Ubuntu:
sudo apt-get purge python-openssl
sudo apt-get install libffi-dev
sudo pip install pyopenssl
For RedHat:
sudo yum remove pyOpenSSL
sudo pip install pyopenssl
</code></pre>
<p>Method 2:</p>
<pre><code>easy_install http://pypi.python.org/packages/source/p/pyOpenSSL/pyOpenSSL-0.12.tar.gz
</code></pre>
<p>Executing the above line may fix your problem.</p>
| 0 | 2016-10-17T15:17:51Z | [
"python",
"linux",
"pip",
"redhat"
] |
How to JSON dump to a rotating file object | 40,090,057 | <p>I'm writing a program which periodically dumps old data from a RethinkDB database into a file and removes it from the database. Currently, the data is dumped into a single file which grows without limit. I'd like to change this so that the maximum file size is, say, 250 Mb, and the program starts to write to a new output file just before this size is exceeded.</p>
<p>It seems like Python's RotatingFileHandler class for <a href="https://docs.python.org/2/library/logging.handlers.html" rel="nofollow">loggers</a> does approximately what I want; however, I'm not sure whether logging can be applied to any JSON-dumpable object or just to strings.</p>
<p>Another possible approach would be to use (a variant of) Mike Pennington's
RotatingFile class (see <a href="http://stackoverflow.com/questions/10894692/python-outfile-to-another-text-file-if-exceed-certain-file-size">python: outfile to another text file if exceed certain file size</a>).</p>
<p>Which of these approaches is likely to be the most fruitful?</p>
<p>For reference, my current program is as follows:</p>
<pre><code>import os
import sys
import json
import rethinkdb as r
import pytz
from datetime import datetime, timedelta
import schedule
import time
import functools
from iclib import RethinkDB
import msgpack
''' The purpose of the Controller is to periodically archive data from the "sensor_data" table so that it does not grow without limit.'''
class Controller(RethinkDB):
    def __init__(self, db_address=(os.environ['DB_ADDR'], int(os.environ['DB_PORT'])), db_name=os.environ['DB_NAME']):
        super(Controller, self).__init__(db_address=db_address, db_name=db_name) # Initialize the IperCronComponent with the default logger name (in this case, "Controller")
        self.db_table = RethinkDB.SENSOR_DATA_TABLE # The table name is "sensor_data" and is stored as a class variable in RethinkDBMixIn

    def generate_archiving_query(self, retention_period=timedelta(days=3)):
        expiry_time = r.now() - retention_period.total_seconds() # Timestamp before which data is to be archived
        if "timestamp" in r.table(self.db_table).index_list().run(self.db): # If "timestamp" is a secondary index
            beginning_of_time = r.time(1400, 1, 1, 'Z') # The minimum time of a ReQL time object (i.e., the year 1400 in the UTC timezone)
            data_to_archive = r.table(self.db_table).between(beginning_of_time, expiry_time, index="timestamp") # Generate query using "between" (faster)
        else:
            data_to_archive = r.table(self.db_table).filter(r.row['timestamp'] < expiry_time) # Generate the same query using "filter" (slower, but does not require "timestamp" to be a secondary index)
        return data_to_archive

    def archiving_job(self, data_to_archive=None, output_file="archived_sensor_data.json"):
        if data_to_archive is None:
            data_to_archive = self.generate_archiving_query() # By default, call the "generate_archiving_query" function to generate the query
        old_data = data_to_archive.run(self.db, time_format="raw") # Without time_format="raw" the output does not dump to JSON
        with open(output_file, 'a') as f:
            ids_to_delete = []
            for item in old_data:
                print item
                # msgpack.dump(item, f)
                json.dump(item, f)
                f.write('\n') # Separate each document by a new line
                ids_to_delete.append(item['id'])
        r.table(self.db_table).get_all(r.args(ids_to_delete)).delete().run(self.db) # Delete based on ID. It is preferred to delete the entire batch in a single operation rather than to delete them one by one in the for loop.

def test_job_1():
    db_name = "ipercron"
    table_name = "sensor_data"
    port_offset = 1 # To avoid interference of this testing program with the main program, all ports are initialized at an offset of 1 from the default ports using "rethinkdb --port_offset 1" at the command line.
    conn = r.connect("localhost", 28015 + port_offset)
    r.db(db_name).table(table_name).delete().run(conn)
    import rethinkdb_add_data
    controller = Controller(db_address=("localhost", 28015+port_offset))
    archiving_job = functools.partial(controller.archiving_job, data_to_archive=controller.generate_archiving_query())
    return archiving_job

if __name__ == "__main__":
    archiving_job = test_job_1()
    schedule.every(0.1).minutes.do(archiving_job)
    while True:
        schedule.run_pending()
</code></pre>
<p>It is not completely 'runnable' from the part shown, but the key point is that I would like to replace the line</p>
<pre><code>json.dump(item, f)
</code></pre>
<p>with a similar line in which <code>f</code> is a rotating, and not fixed, file object.</p>
| 1 | 2016-10-17T15:16:04Z | 40,132,614 | <p>Following <a href="http://stackoverflow.com/users/4265407/stanislav-ivanov">Stanislav Ivanov</a>, I used <code>json.dumps</code> to convert each RethinkDB document to a string and wrote this to a <code>RotatingFileHandler</code>:</p>
<pre><code>import os
import sys
import json
import rethinkdb as r
import pytz
from datetime import datetime, timedelta
import schedule
import time
import functools
from iclib import RethinkDB
import msgpack
import logging
from logging.handlers import RotatingFileHandler
from random_data_generator import RandomDataGenerator
''' The purpose of the Controller is to periodically archive data from the "sensor_data" table so that it does not grow without limit.'''
os.environ['DB_ADDR'] = 'localhost'
os.environ['DB_PORT'] = '28015'
os.environ['DB_NAME'] = 'ipercron'
class Controller(RethinkDB):
    def __init__(self, db_address=None, db_name=None):
        if db_address is None:
            db_address = (os.environ['DB_ADDR'], int(os.environ['DB_PORT'])) # The default host ("rethinkdb") and port (28015) are stored as environment variables
        if db_name is None:
            db_name = os.environ['DB_NAME'] # The default database is "ipercron" and is stored as an environment variable
        super(Controller, self).__init__(db_address=db_address, db_name=db_name) # Initialize the instance of the RethinkDB class. IperCronComponent will be initialized with its default logger name (in this case, "Controller")
        self.db_name = db_name
        self.db_table = RethinkDB.SENSOR_DATA_TABLE # The table name is "sensor_data" and is stored as a class variable of RethinkDBMixIn
        self.table = r.db(self.db_name).table(self.db_table)
        self.archiving_logger = logging.getLogger("archiving_logger")
        self.archiving_logger.setLevel(logging.DEBUG)
        self.archiving_handler = RotatingFileHandler("archived_sensor_data.log", maxBytes=2000, backupCount=10)
        self.archiving_logger.addHandler(self.archiving_handler)

    def generate_archiving_query(self, retention_period=timedelta(days=3)):
        expiry_time = r.now() - retention_period.total_seconds() # Timestamp before which data is to be archived
        if "timestamp" in self.table.index_list().run(self.db):
            beginning_of_time = r.time(1400, 1, 1, 'Z') # The minimum time of a ReQL time object (namely, the year 1400 in UTC)
            data_to_archive = self.table.between(beginning_of_time, expiry_time, index="timestamp") # Generate query using "between" (faster, requires "timestamp" to be a secondary index)
        else:
            data_to_archive = self.table.filter(r.row['timestamp'] < expiry_time) # Generate query using "filter" (slower, but does not require "timestamp" to be a secondary index)
        return data_to_archive

    def archiving_job(self, data_to_archive=None):
        if data_to_archive is None:
            data_to_archive = self.generate_archiving_query() # By default, call the "generate_archiving_query" function to generate the query
        old_data = data_to_archive.run(self.db, time_format="raw") # Without time_format="raw" the output does not dump to JSON or msgpack
        ids_to_delete = []
        for item in old_data:
            print item
            self.dump(item)
            ids_to_delete.append(item['id'])
        self.table.get_all(r.args(ids_to_delete)).delete().run(self.db) # Delete based on ID. It is preferred to delete the entire batch in a single operation rather than to delete them one by one in the for-loop.

    def dump(self, item, mode='json'):
        if mode == 'json':
            dump_string = json.dumps(item)
        elif mode == 'msgpack':
            dump_string = msgpack.packb(item)
        self.archiving_logger.debug(dump_string)

def populate_database(db_name, table_name, conn):
    if db_name not in r.db_list().run(conn):
        r.db_create(db_name).run(conn) # Create the database if it does not yet exist
    if table_name not in r.db(db_name).table_list().run(conn):
        r.db(db_name).table_create(table_name).run(conn) # Create the table if it does not yet exist
    r.db(db_name).table(table_name).delete().run(conn) # Empty the table to start with a clean slate
    # Generate random data with timestamps uniformly distributed over the past 6 days
    random_data_time_interval = timedelta(days=6)
    start_random_data = datetime.utcnow().replace(tzinfo=pytz.utc) - random_data_time_interval
    random_generator = RandomDataGenerator(seed=0)
    packets = random_generator.packets(N=100, start=start_random_data)
    # print packets
    print "Adding data to the database..."
    r.db(db_name).table(table_name).insert(packets).run(conn)

if __name__ == "__main__":
    db_name = "ipercron"
    table_name = "sensor_data"
    port_offset = 1 # To avoid interference of this testing program with the main program, all ports are initialized at an offset of 1 from the default ports using "rethinkdb --port_offset 1" at the command line.
    host = "localhost"
    port = 28015 + port_offset
    conn = r.connect(host, port) # RethinkDB connection object

    populate_database(db_name, table_name, conn)
    # import rethinkdb_add_data
    controller = Controller(db_address=(host, port))
    archiving_job = functools.partial(controller.archiving_job, data_to_archive=controller.generate_archiving_query()) # This ensures that the query is only generated once. (This is sufficient since r.now() is re-evaluated every time a connection is made).
    schedule.every(0.1).minutes.do(archiving_job)
    while True:
        schedule.run_pending()
</code></pre>
<p>In this context the <code>RethinkDB</code> class does little other than define the class variable <code>SENSOR_DATA_TABLE</code> and the RethinkDB connection, <code>self.db = r.connect(self.address[0], self.address[1])</code>. This is run together with a module for generating fake data, <code>random_data_generator.py</code>:</p>
<pre><code>import random
import faker
from datetime import datetime, timedelta
import pytz
import rethinkdb as r
class RandomDataGenerator(object):
    def __init__(self, seed=None):
        self._seed = seed
        self._random = random.Random()
        self._random.seed(seed)
        self.fake = faker.Faker()
        self.fake.random.seed(seed)

    def __getattr__(self, x):
        return getattr(self._random, x)

    def name(self):
        return self.fake.name()

    def datetime(self, start=None, end=None):
        if start is None:
            start = datetime(2000, 1, 1, tzinfo=pytz.utc) # Jan 1st 2000
        if end is None:
            end = datetime.utcnow().replace(tzinfo=pytz.utc)
        if isinstance(end, datetime):
            dt = end - start
        elif isinstance(end, timedelta):
            dt = end
        assert isinstance(dt, timedelta)
        random_dt = timedelta(microseconds=self._random.randrange(int(dt.total_seconds() * (10 ** 6))))
        return start + random_dt

    def packets(self, N=1, start=None, end=None):
        return [{'name': self.name(), 'timestamp': self.datetime(start=start, end=end)} for _ in range(N)]
</code></pre>
<p>When I run <code>controller</code> it produces several rolled-over output logs, each at most 2 kB in size, as expected:</p>
<p><a href="https://i.stack.imgur.com/eEBTi.png" rel="nofollow"><img src="https://i.stack.imgur.com/eEBTi.png" alt="enter image description here"></a></p>
| 0 | 2016-10-19T13:20:37Z | [
"python",
"json"
] |
How to make make run the subtasks for periodic task with Celery? | 40,090,075 | <p>I would like to create <em>periodic</em> task which makes query to database, get's data from data provider, makes some API requests, formats documents and sends them using another API. </p>
<p>Result of the previous task should be chained to the next task. I've got from the documentation that I have to use <strong>chain</strong>, <strong>group</strong> and <strong>chord</strong> in order to organise chaining. </p>
<p>But, what else I've got from the documentation: don't run subtask from the task, because it might be the reason of deadlocks. </p>
<p>So, the question is: how to run subtasks inside the periodic task?</p>
<pre><code>@app.task(name='envoy_emit_subscription', ignore_result=False)
def emit_subscriptions(frequency):
    # resulting queryset is the source for other tasks
    return Subscription.objects.filter(definition__frequency=1).values_list('pk', flat=True)

@app.task(name='envoy_query_data_provider', ignore_result=False)
def query_data_provider(pk):
    # gets the key from the chain and returns received data
    return "data"

@app.task(name='envoy_format_subscription', ignore_result=False)
def format_subscription(data):
    # formats documents
    return "formatted text"

@app.task(name='envoy_send_subscription', ignore_result=False)
def send_subscription(text):
    return send_text_somehow(text)
</code></pre>
<p>Sorry for the noob question, but I'm a noob in Celery, indeed.</p>
| 0 | 2016-10-17T15:17:40Z | 40,090,214 | <p>Something like this maybe?</p>
<pre><code>import time
while True:
    my_celery_chord()
    time.sleep(...)
</code></pre>
| 0 | 2016-10-17T15:24:39Z | [
"python",
"celery"
] |
Matplotlib: Gridspec not displaying bar subplot | 40,090,082 | <p>I have a 4x3 grid. I have 1 broken horizontal bar plot in the first row followed by 9 scatter plots. The height of the bar plot needs to be 2x height of the scatter plots. I am using gridspec to achieve this. However, it doesn't plot the bar plot completely. See picture below:</p>
<p><a href="https://i.stack.imgur.com/62A5a.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/62A5a.jpg" alt="enter image description here"></a></p>
<p>The complete bar plot looks like this</p>
<p><a href="https://i.stack.imgur.com/ljH2L.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/ljH2L.jpg" alt="enter image description here"></a></p>
<p>I am not sure why is this happening. Any suggestions?</p>
<p>Here's my code:</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
from matplotlib import gridspec
#####Importing Data from csv file#####
dataset1 = np.genfromtxt('dataSet1.csv', dtype = float, delimiter = ',', skip_header = 1, names = ['a', 'b', 'c', 'x0'])
dataset2 = np.genfromtxt('dataSet2.csv', dtype = float, delimiter = ',', skip_header = 1, names = ['a', 'b', 'c', 'x0'])
dataset3 = np.genfromtxt('dataSet3.csv', dtype = float, delimiter = ',', skip_header = 1, names = ['a', 'b', 'c', 'x0'])
corr1 = np.corrcoef(dataset1['a'],dataset1['x0'])
corr2 = np.corrcoef(dataset1['b'],dataset1['x0'])
corr3 = np.corrcoef(dataset1['c'],dataset1['x0'])
corr4 = np.corrcoef(dataset2['a'],dataset2['x0'])
corr5 = np.corrcoef(dataset2['b'],dataset2['x0'])
corr6 = np.corrcoef(dataset2['c'],dataset2['x0'])
corr7 = np.corrcoef(dataset3['a'],dataset3['x0'])
corr8 = np.corrcoef(dataset3['b'],dataset3['x0'])
corr9 = np.corrcoef(dataset3['c'],dataset3['x0'])
fig = plt.figure(figsize = (8,8))
gs = gridspec.GridSpec(4, 3, height_ratios=[2,1,1,1])
def tornado1():
    np.set_printoptions(precision=4)
    variables = ['a1','b1','c1','a2','b2','c2','a3','b3','c3']
    base = 0
    values = np.array([corr1[0,1],corr2[0,1],corr3[0,1],
                       corr4[0,1],corr5[0,1],corr6[0,1],
                       corr7[0,1],corr8[0,1],corr9[0,1]])
    variables = zip(*sorted(zip(variables, values), reverse=True, key=lambda x: abs(x[1])))[0]
    values = sorted(values, key=abs, reverse=True)

    # The y position for each variable
    ys = range(len(values))[::-1]  # top to bottom

    # Plot the bars, one by one
    for y, value in zip(ys, values):
        high_width = base + value
        # Each bar is a "broken" horizontal bar chart
        ax1 = plt.subplot(gs[1]).broken_barh(
            [(base, high_width)],
            (y - 0.4, 0.8),
            facecolors=['red', 'red'],  # Try different colors if you like
            edgecolors=['black', 'black'],
            linewidth=1,
        )
    # Draw a vertical line down the middle
    plt.axvline(base, color='black')

    # Position the x-axis on the top/bottom, hide all the other spines (=axis lines)
    axes = plt.gca()  # (gca = get current axes)
    axes.spines['left'].set_visible(False)
    axes.spines['right'].set_visible(False)
    axes.spines['top'].set_visible(False)
    axes.xaxis.set_ticks_position('bottom')

    # Make the y-axis display the variables
    plt.yticks(ys, variables)
    plt.ylim(-2, len(variables))
    plt.draw()
    return
def correlation1():
    corr1 = np.corrcoef(dataset1['a'],dataset1['x0'])
    print corr1[0,1]
    corr2 = np.corrcoef(dataset1['b'],dataset1['x0'])
    print corr2[0,1]
    corr3 = np.corrcoef(dataset1['c'],dataset1['x0'])
    print corr3[0,1]

    ax2 = plt.subplot(gs[3])
    ax2.scatter(dataset1['a'],dataset1['x0'],marker = '.')
    ax2.set_xlabel('a1')
    ax2.set_ylabel('x01')

    ax3 = plt.subplot(gs[4])
    ax3.scatter(dataset1['b'],dataset1['x0'],marker = '.')
    ax3.set_xlabel('b1')
    #ax3.set_ylabel('x01')

    ax4 = plt.subplot(gs[5])
    ax4.scatter(dataset1['c'],dataset1['x0'],marker = '.')
    ax4.set_xlabel('c1')
    #ax4.set_ylabel('x01')

    ax5 = fig.add_subplot(gs[6])
    ax5.scatter(dataset2['a'],dataset2['x0'],marker = '.')
    ax5.set_xlabel('a2')
    ax5.set_ylabel('x02')

    ax6 = fig.add_subplot(gs[7])
    ax6.scatter(dataset2['b'],dataset2['x0'],marker = '.')
    ax6.set_xlabel('b2')
    #ax6.set_ylabel('x02')

    ax7 = fig.add_subplot(gs[8])
    ax7.scatter(dataset2['c'],dataset2['x0'],marker = '.')
    ax7.set_xlabel('c2')
    #ax7.set_ylabel('x02')

    ax8 = plt.subplot(gs[9])
    ax8.scatter(dataset3['a'],dataset3['x0'],marker = '.')
    ax8.set_xlabel('a3')
    ax8.set_ylabel('x03')

    ax9 = plt.subplot(gs[10])
    ax9.scatter(dataset3['b'],dataset3['x0'],marker = '.')
    ax9.set_xlabel('b3')
    #ax9.set_ylabel('x03')

    ax10 = plt.subplot(gs[11])
    ax10.scatter(dataset3['c'],dataset3['x0'],marker = '.')
    ax10.set_xlabel('c3')
    #ax10.set_ylabel('x03')

    plt.show()
    return
tornado1()
correlation1()
plt.tight_layout()
plt.show()
</code></pre>
<p>Any help would be highly appreciated :-)</p>
| 0 | 2016-10-17T15:17:56Z | 40,090,337 | <p>In the block of code:</p>
<pre><code># Plot the bars, one by one
for y, value in zip(ys, values):
    high_width = base + value
    # Each bar is a "broken" horizontal bar chart
    ax1 = plt.subplot(gs[1]).broken_barh(
        [(base, high_width)],
        (y - 0.4, 0.8),
        facecolors=['red', 'red'],  # Try different colors if you like
        edgecolors=['black', 'black'],
        linewidth=1,
    )
</code></pre>
<p>You're reinitializing gs[1] on each loop so in the end, your plot only contains the last bar. You should try something like this instead:</p>
<pre><code># Plot the bars, one by one
ax1 = plt.subplot(gs[1])
for y, value in zip(ys, values):
    high_width = base + value
    # Each bar is a "broken" horizontal bar chart
    ax1.broken_barh(
        [(base, high_width)],
        (y - 0.4, 0.8),
        facecolors=['red', 'red'],  # Try different colors if you like
        edgecolors=['black', 'black'],
        linewidth=1,
    )
</code></pre>
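<p>For reference, a minimal self-contained sketch of the one-Axes-many-bars pattern (the data here is made up; <code>ys</code>, <code>values</code> and <code>base</code> stand in for the question's variables):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, so the sketch runs without a display
import matplotlib.pyplot as plt
from matplotlib import gridspec

# Hypothetical stand-ins for the question's data
ys = [1, 2, 3]
values = [10, 20, 15]
base = 0

fig = plt.figure()
gs = gridspec.GridSpec(1, 2)
ax1 = plt.subplot(gs[1])  # create the Axes once, outside the loop
for y, value in zip(ys, values):
    ax1.broken_barh([(base, base + value)], (y - 0.4, 0.8),
                    facecolors='red', edgecolors='black', linewidth=1)

# Each broken_barh call adds one collection to the same Axes
print(len(ax1.collections))  # 3
```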
<p>Hope that helps.</p>
| 1 | 2016-10-17T15:30:49Z | [
"python",
"python-2.7",
"matplotlib"
] |
Args and dictionary in render_to_response | 40,090,107 | <p>How do I pass both arguments and a dictionary in the same call?</p>
<p>My example does not work</p>
<pre><code>return render_to_response('bookmarks_add.html', {'categories': categories, 'args':args})
</code></pre>
<p>Separately, these work:</p>
<pre><code>return render_to_response('bookmarks_add.html', args)
</code></pre>
<p>and</p>
<pre><code>return render_to_response('bookmarks_add.html', {'categories': categories})
</code></pre>
| 0 | 2016-10-17T15:19:13Z | 40,090,216 | <p>try this:</p>
<pre><code>args.update({
'categories': categories,
})
return render_to_response('bookmarks_add.html', args)
</code></pre>
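<p><code>dict.update</code> merges the extra keys into the existing <code>args</code> dict in place — a quick illustration with stand-in values:</p>

```python
# Stand-ins for the view's context: 'args' already holds some template context
args = {'form': 'bookmark_form'}
categories = ['python', 'django']

args.update({'categories': categories})
print(args == {'form': 'bookmark_form', 'categories': ['python', 'django']})  # True
```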
| 1 | 2016-10-17T15:24:53Z | [
"python",
"django"
] |
Django Python Script | 40,090,167 | <p>I've been sifting through the documentation but I don't feel like I'm getting a clear answer. Is it possible to run something python-like</p>
<pre><code>if company_name.startswith(('A')):
enter code here
</code></pre>
<p>from within a Django site or app? How would I go about it?</p>
<p>Currently the code I use is </p>
<pre><code> {% for TblCompanies in object_list %}
<tr class="customer-table">
<td>{{ TblCompanies.company_id }}</td>
<td>{{ TblCompanies.company_name }}</td>
<td>{{ TblCompanies.contact_phone }}</td>
<td>{{ TblCompanies.billing_address }}</td>
<td>{{ TblCompanies.contact_e_mail }}</td>
</tr>
{% endfor %}
</code></pre>
<p>but our customer database is too large and it's a burden to have to go through the list to find a customer. I want to instead sort it alphabetically using urls like <a href="http://path/to/customer_list/A" rel="nofollow">http://path/to/customer_list/A</a></p>
| 1 | 2016-10-17T15:22:05Z | 40,090,333 | <p>Using <a href="https://docs.djangoproject.com/en/1.10/ref/templates/builtins/#slice" rel="nofollow"><code>slice</code> filter</a>, you can get a substring; then compare the substring with the <code>'A'</code>:</p>
<pre><code>{% for TblCompanies in object_list %}
{% if TblCompanies.company_name|slice:':1' == 'A' %}
<tr class="customer-table">
<td>{{ TblCompanies.company_id }}</td>
<td>{{ TblCompanies.company_name }}</td>
<td>{{ TblCompanies.contact_phone }}</td>
<td>{{ TblCompanies.billing_address }}</td>
<td>{{ TblCompanies.contact_e_mail }}</td>
</tr>
{% endif %}
{% endfor %}
</code></pre>
<p>As @Matthias commented, it would be better to pass filtered <code>object_list</code> in view. Assuming <code>object_list</code> is a queryset object:</p>
<pre><code>object_list = object_list.filter(company_name__startswith='A')
</code></pre>
<hr>
<h2>Sorting</h2>
<p>Sort the <code>object_list</code> before passing it to the template:</p>
<pre><code>page = request.GET.get('page', 'A')  # or get from view parameter
# depending on url conf.
object_list = (object_list.filter(company_name__startswith=page)
.order_by('company_name'))
</code></pre>
<p><strong>UPDATE</strong></p>
<p>NOTE: Change <code>app</code> with actual app name.</p>
<p>urls.py:</p>
<pre><code>url(r'^/path/to/site/customers/(?P<page>[A-Z])$', 'app.views.list_customers')
</code></pre>
<p>app/views.py:</p>
<pre><code>from django.shortcuts import render
def list_customers(request, page):
object_list = (Customer.objects.filter(company_name__startswith=page)
.order_by('company_name'))
return render(request, 'customers/list.html', {
'object_list': object_list,
})
</code></pre>
<p>customers/list.html</p>
<pre><code>{# Link to A .. Z customer pages #}
{% for page in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' %}
<a href="/path/to/site/customers/{{ page }}">{{ page }}</a>
{# Use {% url ... %} once you learn the url tag if possible to reduce duplicated hard-coded url #}
{% endfor %}
{% for TblCompanies in object_list %}
<tr class="customer-table">
<td>{{ TblCompanies.company_id }}</td>
<td>{{ TblCompanies.company_name }}</td>
<td>{{ TblCompanies.contact_phone }}</td>
<td>{{ TblCompanies.billing_address }}</td>
<td>{{ TblCompanies.contact_e_mail }}</td>
</tr>
{% endfor %}
</code></pre>
| 2 | 2016-10-17T15:30:36Z | [
"python",
"django",
"django-templates",
"django-views",
"django-urls"
] |
How to retrieve data requested by a user from django models? | 40,090,229 | <p>I have a <em>model</em> and it has the <strong>user</strong> table as a "foreign key" in it. I need a query by which I can get the values requested by the user.</p>
<p>Here is my models.py:</p>
<pre><code>from django.db import models
#from django.contrib.auth.models import User
from datetime import datetime
from django.conf import settings

class schedulesdb(models.Model):
    f_name = models.CharField(max_length=100)
    dateAndTime = models.DateTimeField(['%Y-%m-%d %H:%M:%S'],null=True)
    user = models.ForeignKey(settings.AUTH_USER_MODEL, default=1)

    def __unicode__(self):              # on Python 2
        return self.f_name
</code></pre>
<p>I was trying it this way; I don't know what is wrong here:</p>
<p>Here is my views.py:</p>
<pre><code>def dashboard(request):
    container=[]
    DIR = os.path.realpath("/home/user/Desktop/Demo")
    WAY = os.listdir(DIR)
    for file in WAY:
        if file.endswith('.mp4'):
            file_name = file
            FDIR=os.path.join(DIR, file)
            container.append(FDIR)
    return render(request, 'dashboard.html', {'container': container})

def new_scheduler(request):
    if request.method =='POST':
        f_name = request.POST.get('file')
        dateAndTime = request.POST.get('dateAndTime')
        Scheduled_data = schedulesdb.objects.create(
            f_name = f_name,
            dateAndTime = dateAndTime,
        )
        Scheduled_data.save()
        return HttpResponse ('done')

def new_job(request):
    user = request.user
    print user.username
    print user.id
    schedule_entries = schedulesdb.objects.filter(user=request.user)
    print schedule_entries
    return HttpResponse(schedule_entries)
</code></pre>
<p>Here is my Traceback:</p>
<pre><code>Performing system checks...

System check identified no issues (0 silenced).
October 17, 2016 - 16:10:43
Django version 1.8, using settings 'pro1.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
goku
2
[]
[17/Oct/2016 16:10:51]"GET /job/ HTTP/1.1" 200 0
</code></pre>
<p>Thanks in advance #peace</p>
| 0 | 2016-10-17T15:25:08Z | 40,090,690 | <p>In order to select all the records from <code>schedulesdb</code> model that relate to the current user, do the following.</p>
<pre><code>def new_job(request):
schedule_entries = schedulesdb.objects.filter(user=request.user)
return HttpResponse(schedule_entries)
</code></pre>
<p>P.S. It is always better to follow the Python convention, so that other programmers can read your code faster. Start by reading the <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow">PEP-8 Style Guide</a></p>
| 1 | 2016-10-17T15:49:55Z | [
"python",
"sql",
"django"
] |
Django multiple forms on one view | 40,090,233 | <p>I have a Django template that has data from a few different model types combining to make it. A dashboard if you will. And each of those has an edit form. </p>
<p>Is it best to process all those forms in one view as they are posted back to the same place and differentiating between them by a unique field like below? </p>
<p>Or is having lots of different dedicated views the way forward? Thanks for any guidance.</p>
<pre><code>class ProjectDetail(DetailView):
template_name = 'project/view.html'
def get_object(self):
try:
return Project.objects.filter(brief__slug=self.kwargs['slug']).filter(team=get_user_team(self.request)).first()
# add loop to allow multiple teams to work on the same brief (project)
except Exception as e:
project_error = '%s (%s)' % (e.message, type(e))
messages.error(self.request, 'OH NO! %s' % project_error)
return redirect('home')
def get_context_data(self, **kwargs):
project = self.get_object()
context = dict()
context['project'] = project
context['milestone_form'] = MilestoneForm(initial={'project': project})
context['view'] = self
return context
def post(self, request, *args, **kwargs):
# get the context for the page
context = self.get_context_data()
try:
# switch for each of the form types on the team profile page (shown if member)
if 'milestone_form_submit' in request.POST:
project=self.get_object()
# set date arbitrarily to half way to brief deadline
brief = Brief.objects.get(project=project)
last_milestone = self.milestones().last()
milestone_del_date = last_milestone.del_date + timedelta(days=7)
new_milestone = Milestone(
project=project,
title_text=request.POST.get('title_text'),
del_date=milestone_del_date,
)
try:
new_milestone.save()
messages.success(self.request, "Excellent! New delivery popped on the bottom of the list")
except Exception as e:
# pass the erroring form back in the context if not
form = MilestoneForm(request.POST)
context['milestone_form'] = form
messages.error(self.request, "OH NO! Deadline didn't save. Be a sport and check what you entered")
elif 'milestone-edit-date-form-submit' in request.POST:
# get object from db
milestone = Milestone.objects.get(pk=request.POST['id'])
# update del_date field sent
milestone.del_date = request.POST['del_date']
# save back to db
milestone.save()
messages.success(self.request, "Updated that delivery right there!")
elif ...
except Exception as e:
messages.error(self.request, "OH NO! Deadline didn't save. Be a sport and check what you entered")
return render(request, self.template_name, context)
</code></pre>
| 0 | 2016-10-17T15:25:19Z | 40,090,964 | <p>You can use mixins in order to solve your problem.</p>
<p>Example from the gist <a href="https://gist.github.com/michelts/1029336" rel="nofollow">https://gist.github.com/michelts/1029336</a></p>
<pre><code>class MultipleFormsMixin(FormMixin):
"""
A mixin that provides a way to show and handle several forms in a
request.
"""
form_classes = {} # set the form classes as a mapping
def get_form_classes(self):
return self.form_classes
def get_forms(self, form_classes):
return dict([(key, klass(**self.get_form_kwargs())) \
for key, klass in form_classes.items()])
def forms_valid(self, forms):
return super(MultipleFormsMixin, self).form_valid(forms)
def forms_invalid(self, forms):
return self.render_to_response(self.get_context_data(forms=forms))
</code></pre>
<p>As you can see, when you inherit from this class, you can handle multiple forms simultaneously. Look at the gist's code and adapt it to your problem.</p>
<p>Look at this <a href="http://stackoverflow.com/a/24011448/2886270">answer</a></p>
| 1 | 2016-10-17T16:04:59Z | [
"python",
"django",
"forms"
] |
Joining strings of a column over several indexes while keeping other columns | 40,090,235 | <p>Here is an example data set:</p>
<pre><code>>>> df1 = pandas.DataFrame({
"Name": ["Alice", "Marie", "Smith", "Mallory", "Bob", "Doe"],
"City": ["Seattle", None, None, "Portland", None, None],
"Age": [24, None, None, 26, None, None],
"Group": [1, 1, 1, 2, 2, 2]})
>>> df1
Age City Group Name
0 24.0 Seattle 1 Alice
1 NaN None 1 Marie
2 NaN None 1 Smith
3 26.0 Portland 2 Mallory
4 NaN None 2 Bob
5 NaN None 2 Doe
</code></pre>
<p>I would like to merge the Name column for all indexes of the same group while keeping the City and the Age, getting something like:</p>
<pre><code>>>> df1_summarised
Age City Group Name
0 24.0 Seattle 1 Alice Marie Smith
1 26.0 Portland 2 Mallory Bob Doe
</code></pre>
<p>I know those 2 columns (Age, City) will be NaN/None after the first index of a given group from the structure of my starting data. </p>
<p>I have tried the following:</p>
<pre><code>>>> print(df1.groupby('Group')['Name'].apply(' '.join))
Group
1 Alice Marie Smith
2 Mallory Bob Doe
Name: Name, dtype: object
</code></pre>
<p>But I would like to keep the Age and City columns...</p>
| 3 | 2016-10-17T15:25:25Z | 40,090,310 | <p>try this:</p>
<pre><code>In [29]: df1.groupby('Group').ffill().groupby(['Group','Age','City']).Name.apply(' '.join)
Out[29]:
Group Age City
1 24.0 Seattle Alice Marie Smith
2 26.0 Portland Mallory Bob Doe
Name: Name, dtype: object
</code></pre>
| 3 | 2016-10-17T15:29:24Z | [
"python",
"pandas"
] |
Joining strings of a column over several indexes while keeping other columns | 40,090,235 | <p>Here is an example data set:</p>
<pre><code>>>> df1 = pandas.DataFrame({
"Name": ["Alice", "Marie", "Smith", "Mallory", "Bob", "Doe"],
"City": ["Seattle", None, None, "Portland", None, None],
"Age": [24, None, None, 26, None, None],
"Group": [1, 1, 1, 2, 2, 2]})
>>> df1
Age City Group Name
0 24.0 Seattle 1 Alice
1 NaN None 1 Marie
2 NaN None 1 Smith
3 26.0 Portland 2 Mallory
4 NaN None 2 Bob
5 NaN None 2 Doe
</code></pre>
<p>I would like to merge the Name column for all indexes of the same group while keeping the City and the Age, getting something like:</p>
<pre><code>>>> df1_summarised
Age City Group Name
0 24.0 Seattle 1 Alice Marie Smith
1 26.0 Portland 2 Mallory Bob Doe
</code></pre>
<p>I know those 2 columns (Age, City) will be NaN/None after the first index of a given group from the structure of my starting data. </p>
<p>I have tried the following:</p>
<pre><code>>>> print(df1.groupby('Group')['Name'].apply(' '.join))
Group
1 Alice Marie Smith
2 Mallory Bob Doe
Name: Name, dtype: object
</code></pre>
<p>But I would like to keep the Age and City columns...</p>
| 3 | 2016-10-17T15:25:25Z | 40,090,538 | <p>using <code>dropna</code> and <code>assign</code> with <code>groupby</code></p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.assign.html" rel="nofollow">docs to assign</a></p>
<pre><code>df1.dropna(subset=['Age', 'City']) \
.assign(Name=df1.groupby('Group').Name.apply(' '.join).values)
</code></pre>
<p><a href="https://i.stack.imgur.com/Vh8Go.png" rel="nofollow"><img src="https://i.stack.imgur.com/Vh8Go.png" alt="enter image description here"></a></p>
<hr>
<p><strong><em>timing</em></strong><br>
per request</p>
<p><a href="https://i.stack.imgur.com/iYTBw.png" rel="nofollow"><img src="https://i.stack.imgur.com/iYTBw.png" alt="enter image description here"></a></p>
<hr>
<p><strong><em>update</em></strong><br>
use <code>groupby</code> and <code>agg</code><br>
I thought of this and it feels far more satisfying</p>
<pre><code>df1.groupby('Group').agg(dict(Age='first', City='first', Name=' '.join))
</code></pre>
<p>to get the exact output</p>
<pre><code>df1.groupby('Group').agg(dict(Age='first', City='first', Name=' '.join)) \
.reset_index().reindex_axis(df1.columns, 1)
</code></pre>
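<p>As a runnable recap of the <code>agg</code> route (note that <code>reindex_axis</code> was removed in later pandas, so a plain column selection is used here to reorder):</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    "Name": ["Alice", "Marie", "Smith", "Mallory", "Bob", "Doe"],
    "City": ["Seattle", None, None, "Portland", None, None],
    "Age": [24, None, None, 26, None, None],
    "Group": [1, 1, 1, 2, 2, 2]})

# 'first' skips the nulls; ' '.join glues each group's names together
out = (df1.groupby('Group')
          .agg({'Age': 'first', 'City': 'first', 'Name': ' '.join})
          .reset_index()[['Age', 'City', 'Group', 'Name']])
print(out)
```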
| 3 | 2016-10-17T15:41:07Z | [
"python",
"pandas"
] |
Convert a string into list of list | 40,090,375 | <p>I have a string, say:</p>
<p><code>s = "abcdefghi"</code></p>
<p>I need to create a list of lists of this such that the list formed will :-</p>
<p><code>list = [['a','b','c'],['d','e','f'],['g','h','i']]</code></p>
<p>Now the string can be anything but will always have length n * n</p>
<p>So if <code>n = 4.</code></p>
<p>The <code>len(s) = 16</code></p>
<p>And the list will be:</p>
<pre><code>[['1','2','3','4'],['5','6','7','8'],...,['13','14','15','16']]
</code></pre>
<p>So each list in the list of lists will have length 4.</p>
<p>So I want to write a function which takes string and n as input and gives
a list of list as output where the length of each list within the list is n.</p>
<p>How can one do this ?</p>
<p>UPDATE :</p>
<p>How can you convert the above list of list back to a string ?</p>
<p>So if l = <code>list = [['a','b','c'],['d','e','f'],['g','h','i']]</code></p>
<p>How to convert it to a string :-</p>
<p><code>s = "abcdefghi"</code></p>
<p>I found a solution for this in stackoverflow :-</p>
<pre><code>s = "".join("".join(map(str,l)) for l in list)
</code></pre>
<p>Thanks everyone !</p>
| 0 | 2016-10-17T15:32:44Z | 40,090,427 | <p>Here's a list comprehension that does the job:</p>
<pre><code>[list(s[i:i+n]) for i in range(0, len(s), n)]
</code></pre>
<p>If <code>n * n == len(s)</code> is always true, then just put</p>
<pre><code>n = int(len(s) ** 0.5) # Or use math.sqrt
</code></pre>
<p>For the edit part here's another list comprehension:</p>
<pre><code>''.join(e for sub in l for e in sub)
</code></pre>
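<p>For a quick sanity check, both directions can be exercised together (the nested generator's clauses read left to right, outer loop first):</p>

```python
s = "abcdefghi"
n = int(len(s) ** 0.5)  # assumes len(s) == n * n, as the question states

grid = [list(s[i:i + n]) for i in range(0, len(s), n)]
print(grid)  # [['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']]

flat = ''.join(e for sub in grid for e in sub)
print(flat == s)  # True
```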
| 3 | 2016-10-17T15:35:31Z | [
"python",
"string",
"list"
] |
Convert a string into list of list | 40,090,375 | <p>I have a string, say:</p>
<p><code>s = "abcdefghi"</code></p>
<p>I need to create a list of lists of this such that the list formed will :-</p>
<p><code>list = [['a','b','c'],['d','e','f'],['g','h','i']]</code></p>
<p>Now the string can be anything but will always have length n * n</p>
<p>So if <code>n = 4.</code></p>
<p>The <code>len(s) = 16</code></p>
<p>And the list will be:</p>
<pre><code>[['1','2','3','4'],['5','6','7','8'],...,['13','14','15','16']]
</code></pre>
<p>So each list in the list of lists will have length 4.</p>
<p>So I want to write a function which takes string and n as input and gives
a list of list as output where the length of each list within the list is n.</p>
<p>How can one do this ?</p>
<p>UPDATE :</p>
<p>How can you convert the above list of list back to a string ?</p>
<p>So if l = <code>list = [['a','b','c'],['d','e','f'],['g','h','i']]</code></p>
<p>How to convert it to a string :-</p>
<p><code>s = "abcdefghi"</code></p>
<p>I found a solution for this in stackoverflow :-</p>
<pre><code>s = "".join("".join(map(str,l)) for l in list)
</code></pre>
<p>Thanks everyone !</p>
| 0 | 2016-10-17T15:32:44Z | 40,090,491 | <p>I'm a personal fan of numpy...</p>
<pre><code>import numpy as np
s = "abcdefghi"
n = int(np.sqrt(len(s)))
a = np.array([x for x in s], dtype=str).reshape([n,n])
</code></pre>
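<p><code>a.tolist()</code> then yields the plain nested lists the question asks for — a quick check:</p>

```python
import numpy as np

s = "abcdefghi"
n = int(np.sqrt(len(s)))
a = np.array(list(s)).reshape(n, n)  # list(s) is equivalent to [x for x in s]
print(a.tolist())  # [['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']]
```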
| 0 | 2016-10-17T15:38:47Z | [
"python",
"string",
"list"
] |
Convert a string into list of list | 40,090,375 | <p>I have a string, say:</p>
<p><code>s = "abcdefghi"</code></p>
<p>I need to create a list of lists of this such that the list formed will :-</p>
<p><code>list = [['a','b','c'],['d','e','f'],['g','h','i']]</code></p>
<p>Now the string can be anything but will always have length n * n</p>
<p>So if <code>n = 4.</code></p>
<p>The <code>len(s) = 16</code></p>
<p>And the list will be:</p>
<pre><code>[['1','2','3','4'],['5','6','7','8'],...,['13','14','15','16']]
</code></pre>
<p>So each list in the list of lists will have length 4.</p>
<p>So I want to write a function which takes string and n as input and gives
a list of list as output where the length of each list within the list is n.</p>
<p>How can one do this ?</p>
<p>UPDATE :</p>
<p>How can you convert the above list of list back to a string ?</p>
<p>So if l = <code>list = [['a','b','c'],['d','e','f'],['g','h','i']]</code></p>
<p>How to convert it to a string :-</p>
<p><code>s = "abcdefghi"</code></p>
<p>I found a solution for this in stackoverflow :-</p>
<pre><code>s = "".join("".join(map(str,l)) for l in list)
</code></pre>
<p>Thanks everyone !</p>
| 0 | 2016-10-17T15:32:44Z | 40,090,542 | <pre><code>from math import sqrt
def listify(s):
    n = int(sqrt(len(s)))  # sqrt returns a float; range() needs an int
g = iter(s)
return [[next(g) for i in range(n)] for j in range(n)]
</code></pre>
<p>I like using generators for problems like this</p>
| 0 | 2016-10-17T15:41:25Z | [
"python",
"string",
"list"
] |
How to get a graph to start a certain point? (matplotlib) | 40,090,383 | <p>I was just wondering if anyone could tell me how to start a graph at a certain height on the y axis? <a href="https://i.stack.imgur.com/qD217.png" rel="nofollow"><img src="https://i.stack.imgur.com/qD217.png" alt="enter image description here"></a></p>
<p>Like if I wanted to start my curve at 50? I am plotting the trajectories of projectiles and I need to be able to plot it when they enter a starting height that isn't 0. Thanks in advance! I am very new to python and matplotlib.</p>
<p>My Code for firing at an angle:</p>
<p>def angle_calculation():</p>
<pre><code>print("\nTo calculate the components of a projectile fired at an angle")
print("We need the angle(degrees), the initial velocity(ms-1), and")
print("The gravity constant (9.81 on earth)\n")
angle_0 = float(input("Please enter the angle you want it to be fired at: "))
angle_0 = (angle_0)/180.0*np.pi
speed = float(input("\nPlease enter the initial velocity you want it to be fire at\n In metres per second: "))
g=9.81
plt.figure()
time_max = ((2 * speed) * np.sin(angle_0)) / g
time = time_max*np.linspace(0,1,100)[:,None]
x_values = ((speed * time) * np.cos(angle_0))
y_values = ((speed * time) * np.sin(angle_0)) - ((0.5 * g) * (time ** 2))
distance_x = (speed * np.cos(angle_0))*time_max
plt.plot(x_values,y_values, linewidth = 1.5)
plt.ylim([0,200])
plt.xlim([0,distance_x])
plt.show()
print("")
print ("The Range of this projectile will be", distance_x)
print ("")
for x, y in zip(y_values, x_values):
if x == 0 or y == 0:
print(x, y)
return
</code></pre>
| 0 | 2016-10-17T15:33:16Z | 40,111,716 | <p>Is it what you're looking for? </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
print("\nTo calculate the components of a projectile fired at an angle")
print("We need the angle(degrees), the initial velocity(ms-1), and")
print("The gravity constant (9.81 on earth)\n")
angle_0 = float(input("Please enter the angle you want it to be fired at: "))
angle_0 = (angle_0)/180.0*np.pi
speed = float(input("\nPlease enter the initial velocity you want it to be fire at\n In metres per second: "))
start = float(input("\nPlease enter the starting point\n In metres: "))
g=9.81
plt.figure()
time_max = ((2 * speed) * np.sin(angle_0)) / g
time = time_max*np.linspace(0,1,100)[:,None]
x_values = ((speed * time) * np.cos(angle_0))
y_values = ((speed * time) * np.sin(angle_0)) - ((0.5 * g) * (time ** 2))
x_values_trimmed = x_values[x_values>start]
start_idx = np.where(x_values == x_values_trimmed[0])[0]
y_values_trimmed = y_values[int(start_idx):]
distance_x = (speed * np.cos(angle_0))*time_max
plt.plot(x_values_trimmed,y_values_trimmed, linewidth = 1.5)
plt.ylim([0,200])
plt.xlim([start,distance_x])
plt.show()
print("")
print ("The Range of this projectile will be", distance_x)
print ("")
for x, y in zip(y_values, x_values):
if x == 0 or y == 0:
print(x, y)
plt.show()
</code></pre>
<p>result:</p>
<pre><code>To calculate the components of a projectile fired at an angle
We need the angle(degrees), the initial velocity(ms-1), and
The gravity constant (9.81 on earth)
Please enter the angle you want it to be fired at: 50
Please enter the initial velocity you want it to be fire at
In metres per second: 30
Please enter the starting point
In metres: 20
</code></pre>
<p><a href="https://i.stack.imgur.com/BIRxd.png" rel="nofollow"><img src="https://i.stack.imgur.com/BIRxd.png" alt="enter image description here"></a></p>
| 0 | 2016-10-18T15:01:00Z | [
"python",
"matplotlib"
] |
paramiko.exec_command() not executing and returns "Extra params found in CLI" | 40,090,445 | <p>I am trying to ssh a server using Paramiko and execute a command. But the paramiko.exec_command() returns with an error.Why is this happening?</p>
<p>This is my Python script:</p>
<pre><code>import paramiko
ssh = paramiko.SSHClient()
ssh.load_system_host_keys()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('10.126.141.132', username='usrm', password='passwd')
stdin, stdout, stderr = ssh.exec_command("show chassis")
print(stdout.readlines())
ssh.close()
</code></pre>
<p>When executed it returns with this message:</p>
<blockquote>
<p>['Extra params found in CLI, this is not supported, exiting the CLI session:\n']</p>
</blockquote>
<p>I am using Python 3.5.2 :: Anaconda 4.1.1 (64-bit) with Paramiko on Windows.</p>
<p>I have tried the commands manually and it is working.</p>
| 0 | 2016-10-17T15:36:40Z | 40,103,207 | <p>Based on your latest comment:</p>
<blockquote>
<p>I installed a Cygwin Terminal and SSH'd the server with the command...it came up with the <code>Extra params</code> error. Command I executed: <code>ssh [email protected] "show chassis"</code>, Output: <code>No entry for terminal type "dumb"; using dumb terminal settings. Extra params found in CLI, this is not supported, exiting the CLI session:</code></p>
</blockquote>
<p>it sounds like the <code>usrm</code> account's <em>login shell</em> on the SSH server is not allowed to run commands in the non-interactive way. To solve the problem you have to use <code>invoke_shell()</code> like this:</p>
<pre><code>chan = ssh.invoke_shell()
chan.sendall('show chassis\r')
s = chan.recv(4096)
print(s)
</code></pre>
| 0 | 2016-10-18T08:27:04Z | [
"python",
"python-3.x",
"network-programming",
"paramiko"
] |
font-color of FileChooser | 40,090,453 | <p>I would like to know how to change the font-color (text-color) of both the FileChooserListView and the FileChooserIconView.</p>
<p>I could change the background color (to white), and I would like to change the font color to black.</p>
<p>How could I do that?</p>
| 0 | 2016-10-17T15:37:08Z | 40,093,899 | <p>The default style of every Kivy widget is defined in the <a href="https://github.com/kivy/kivy/blob/master/kivy/data/style.kv" rel="nofollow">kivy/data/style.kv</a> file. You can copy its entries and change them to your liking. For example:</p>
<pre><code>from kivy.uix.button import Button
from kivy.uix.boxlayout import BoxLayout
from kivy.app import App
from kivy.lang import Builder
Builder.load_string('''
<FileChooserListView>:
# --------------------
# ADD BACKGROUND COLOR
# --------------------
canvas.before:
Color:
rgb: 1, 1, 1
Rectangle:
pos: self.pos
size: self.size
layout: layout
FileChooserListLayout:
id: layout
controller: root
[FileListEntry@FloatLayout+TreeViewNode]:
locked: False
entries: []
path: ctx.path
# FIXME: is_selected is actually a read_only treeview property. In this
# case, however, we're doing this because treeview only has single-selection
# hardcoded in it. The fix to this would be to update treeview to allow
# multiple selection.
is_selected: self.path in ctx.controller().selection
orientation: 'horizontal'
size_hint_y: None
height: '48dp' if dp(1) > 1 else '24dp'
# Don't allow expansion of the ../ node
is_leaf: not ctx.isdir or ctx.name.endswith('..' + ctx.sep) or self.locked
on_touch_down: self.collide_point(*args[1].pos) and ctx.controller().entry_touched(self, args[1])
on_touch_up: self.collide_point(*args[1].pos) and ctx.controller().entry_released(self, args[1])
BoxLayout:
pos: root.pos
size_hint_x: None
width: root.width - dp(10)
Label:
# --------------
# CHANGE FONT COLOR
# --------------
color: 0, 0, 0, 1
id: filename
text_size: self.width, None
halign: 'left'
shorten: True
text: ctx.name
Label:
# --------------
# CHANGE FONT COLOR
# --------------
color: 0, 0, 0, 1
text_size: self.width, None
size_hint_x: None
halign: 'right'
text: '{}'.format(ctx.get_nice_size())
<MyWidget>:
FileChooserListView
''')
class MyWidget(BoxLayout):
pass
class TestApp(App):
def build(self):
return MyWidget()
if __name__ == '__main__':
TestApp().run()
</code></pre>
| 2 | 2016-10-17T19:10:05Z | [
"python",
"kivy"
] |
Removing parts of a string after certain chars in Python | 40,090,472 | <p>New to Python.</p>
<p>I'd like to remove the substrings between the word AND and the comma character in the following string:</p>
<pre><code>MyString = ' x.ABC AND XYZ, \ny.DEF AND Type, \nSome Long String AND Qwerty, \nz.GHI AND Tree \n'
</code></pre>
<p>The result should be:</p>
<pre><code>MyString = ' x.ABC,\ny.DEF,\nSome Long String,\nz.GHI\n'
</code></pre>
<p>I'd like to do it without using regex.</p>
<p>I have tried various methods with splits and joins and indexes to no avail.</p>
<p>Any direction appreciated.</p>
<p>Thanks.</p>
| -2 | 2016-10-17T15:38:03Z | 40,090,631 | <p>You can split the string into lines, and further split the lines into words and use <a href="https://docs.python.org/2/library/itertools.html#itertools.takewhile" rel="nofollow"><code>itertools.takewhile</code></a> to drop all words after <code>AND</code> (itself included):</p>
<pre><code>from itertools import takewhile
''.join(' '.join(takewhile(lambda x: x != 'AND', line.split())) + ',\n'
for line in MyString.splitlines())
</code></pre>
<p>Notice that the newline character and a comma are manually added after each line is reconstructed with <a href="https://docs.python.org/2/library/stdtypes.html#str.join" rel="nofollow"><code>str.join</code></a>.</p>
<p>All the lines are then finally joined using <code>str.join</code>.</p>
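<p>Run against the question's string this gives (each reconstructed line keeps a trailing comma, including the last one — strip it if that matters):</p>

```python
from itertools import takewhile

MyString = ' x.ABC AND XYZ, \ny.DEF AND Type, \nSome Long String AND Qwerty, \nz.GHI AND Tree \n'

result = ''.join(' '.join(takewhile(lambda x: x != 'AND', line.split())) + ',\n'
                 for line in MyString.splitlines())
print(repr(result))  # 'x.ABC,\ny.DEF,\nSome Long String,\nz.GHI,\n'
```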
| 1 | 2016-10-17T15:46:37Z | [
"python",
"string"
] |
Removing parts of a string after certain chars in Python | 40,090,472 | <p>New to Python.</p>
<p>I'd like to remove the substrings between the word AND and the comma character in the following string:</p>
<pre><code>MyString = ' x.ABC AND XYZ, \ny.DEF AND Type, \nSome Long String AND Qwerty, \nz.GHI AND Tree \n'
</code></pre>
<p>The result should be:</p>
<pre><code>MyString = ' x.ABC,\ny.DEF,\nSome Long String,\nz.GHI\n'
</code></pre>
<p>I'd like to do it without using regex.</p>
<p>I have tried various methods with splits and joins and indexes to no avail.</p>
<p>Any direction appreciated.</p>
<p>Thanks.</p>
| -2 | 2016-10-17T15:38:03Z | 40,090,657 | <p>While Moses's answer is really good, I have a funny feeling this is a homework question and meant for you not to use any imports. Anyways here's an answer with no imports, it's not as efficient as other answers like Moses' or Regex but it works just not as well as others.</p>
<pre><code>MyString = 'x.ABC AND XYZ, \ny.DEF AND Type, \nSome Long String AND Qwerty, \nz.GHI AND Tree \n'
new_string = ''
for each in [[y for y in x.split(' AND ')][0] for x in MyString.split('\n')]:
new_string+=each
new_string+='\n'
print(new_string)
</code></pre>
| 1 | 2016-10-17T15:48:19Z | [
"python",
"string"
] |
Removing parts of a string after certain chars in Python | 40,090,472 | <p>New to Python.</p>
<p>I'd like to remove the substrings between the word AND and the comma character in the following string:</p>
<pre><code>MyString = ' x.ABC AND XYZ, \ny.DEF AND Type, \nSome Long String AND Qwerty, \nz.GHI AND Tree \n'
</code></pre>
<p>The result should be:</p>
<pre><code>MyString = ' x.ABC,\ny.DEF,\nSome Long String,\nz.GHI\n'
</code></pre>
<p>I'd like to do it without using regex.</p>
<p>I have tried various methods with splits and joins and indexes to no avail.</p>
<p>Any direction appreciated.</p>
<p>Thanks.</p>
| -2 | 2016-10-17T15:38:03Z | 40,090,829 | <p>Now it is working.. and probably avoiding the 'append' keyword makes it really fast... </p>
<pre><code>In [19]: ',\n'.join([x.split('AND')[0].strip() for x in MyString.split('\n')])
Out[19]: 'x.ABC,\ny.DEF,\nSome Long String,\nz.GHI,\n'
</code></pre>
<p>You can check this answer to understand why...</p>
<p><a href="http://stackoverflow.com/questions/39518899/comparing-list-comprehensions-and-explicit-loops-3-array-generators-faster-than/39519661#39519661">Comparing list comprehensions and explicit loops (3 array generators faster than 1 for loop)</a></p>
| 0 | 2016-10-17T15:58:01Z | [
"python",
"string"
] |
Why does the Pool class in multiprocessing lack the __exit__() method in Python 2? | 40,090,519 | <p>It is not clear to me why the <a href="http://fossies.org/dox/Python-2.7.12/pool_8py_source.html" rel="nofollow">Python 2.7 implementation</a> of <code>Pool</code> does not have the <code>__exit__()</code> method that is present in the <a href="http://fossies.org/dox/Python-3.5.2/pool_8py_source.html" rel="nofollow">Python 3 version</a> of the same class. Is it safe to add the <code>__exit__()</code> method (together with <code>__enter__()</code>, of course) (I just want to use <code>with Pool(n) as p:</code> ) or is there a special reason to avoid it? </p>
| 0 | 2016-10-17T15:40:02Z | 40,091,484 | <p>Doesn't seem like there's any reason to avoid it. Looking at it and testing it real quick didn't bring up any odd behavior. This was implemented in <a href="http://bugs.python.org/issue15064" rel="nofollow"><em>Issue 15064</em></a>, it just seems it wasn't added in <code>2.7</code> (probably because only bug-fixes were considered).</p>
<p>Returning <code>self</code> from <code>__enter__</code> and calling <code>terminate</code> from <code>__exit__</code> as implemented in Python 3.3 should be the way to go. Instead of altering the source though (<em>if</em> that was your intention), just create a custom subclass:</p>
<pre><code>from multiprocessing.pool import Pool as PoolCls

class CMPool(PoolCls):
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        return self.terminate()
</code></pre>
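<p>Usage is then the plain <code>with</code> statement. The sketch below applies the identical two methods to <code>ThreadPool</code> (which shares <code>Pool</code>'s base class and has the same gap on Python 2.7), simply to keep the demo free of process-spawning and pickling concerns; <code>CMPool</code> is used the same way.</p>

```python
from multiprocessing.pool import ThreadPool

class CMThreadPool(ThreadPool):
    # The same two methods as CMPool above; ThreadPool shares Pool's
    # base class, so it has the same gap on Python 2.7.
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_val, exc_tb):
        self.terminate()

with CMThreadPool(2) as pool:
    squares = pool.map(lambda x: x * x, range(5))

print(squares)  # [0, 1, 4, 9, 16]
```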
| 2 | 2016-10-17T16:36:10Z | [
"python",
"python-2.7",
"python-3.x",
"threadpool",
"python-multiprocessing"
] |
Pandas: How to conditionally assign multiple columns? | 40,090,522 | <p>I want to replace negative values with <code>nan</code> for only certain columns. The simplest way could be:</p>
<pre><code>for col in ['a', 'b', 'c']:
df.loc[df[col ] < 0, col] = np.nan
</code></pre>
<p><code>df</code> could have many columns and I only want to do this to specific columns. </p>
<p>Is there a way to do this in one line? Seems like this should be easy but I have not been able to figure out.</p>
| 5 | 2016-10-17T15:40:08Z | 40,090,688 | <p>use <code>loc</code> and <code>where</code></p>
<pre><code>cols = ['a', 'b', 'c']
df.loc[:, cols] = df[cols].where(df[cols].ge(0), np.nan)
</code></pre>
<p><strong><em>demonstration</em></strong></p>
<pre><code>df = pd.DataFrame(np.random.randn(10, 5), columns=list('abcde'))
df
</code></pre>
<p><a href="https://i.stack.imgur.com/LT639.png" rel="nofollow"><img src="https://i.stack.imgur.com/LT639.png" alt="enter image description here"></a></p>
<pre><code>cols = list('abc')
df.loc[:, cols] = df[cols].where(df[cols].ge(0), np.nan)
df
</code></pre>
<p><a href="https://i.stack.imgur.com/62dsI.png" rel="nofollow"><img src="https://i.stack.imgur.com/62dsI.png" alt="enter image description here"></a></p>
<hr>
<p>You could speed it up with numpy</p>
<pre><code>df[cols] = np.where(df[cols] < 0, np.nan, df[cols])
</code></pre>
<p>to do the same thing.</p>
<hr>
<p><strong><em>timing</em></strong> </p>
<pre><code>def gen_df(n):
return pd.DataFrame(np.random.randn(n, 5), columns=list('abcde'))
</code></pre>
<p>since assignment is an important part of this, I create the <code>df</code> from scratch each loop. I also added the timing for <code>df</code> creation.</p>
<p>for <code>n = 10000</code></p>
<p><a href="https://i.stack.imgur.com/3jaVi.png" rel="nofollow"><img src="https://i.stack.imgur.com/3jaVi.png" alt="enter image description here"></a></p>
<p>for <code>n = 100000</code></p>
<p><a href="https://i.stack.imgur.com/nGnNQ.png" rel="nofollow"><img src="https://i.stack.imgur.com/nGnNQ.png" alt="enter image description here"></a></p>
| 4 | 2016-10-17T15:49:49Z | [
"python",
"pandas"
] |
Pandas: How to conditionally assign multiple columns? | 40,090,522 | <p>I want to replace negative values with <code>nan</code> for only certain columns. The simplest way could be:</p>
<pre><code>for col in ['a', 'b', 'c']:
df.loc[df[col ] < 0, col] = np.nan
</code></pre>
<p><code>df</code> could have many columns and I only want to do this to specific columns. </p>
<p>Is there a way to do this in one line? Seems like this should be easy but I have not been able to figure out.</p>
| 5 | 2016-10-17T15:40:08Z | 40,090,694 | <p>Here's a way:</p>
<pre><code>df[df.columns.isin(['a', 'b', 'c']) & (df < 0)] = np.nan
</code></pre>
| 5 | 2016-10-17T15:50:03Z | [
"python",
"pandas"
] |
Pandas: How to conditionally assign multiple columns? | 40,090,522 | <p>I want to replace negative values with <code>nan</code> for only certain columns. The simplest way could be:</p>
<pre><code>for col in ['a', 'b', 'c']:
df.loc[df[col ] < 0, col] = np.nan
</code></pre>
<p><code>df</code> could have many columns and I only want to do this to specific columns. </p>
<p>Is there a way to do this in one line? Seems like this should be easy but I have not been able to figure out.</p>
| 5 | 2016-10-17T15:40:08Z | 40,090,735 | <p>You can use <code>np.where</code> to achieve this:</p>
<pre><code>In [47]:
df = pd.DataFrame(np.random.randn(5,5), columns=list('abcde'))
df
Out[47]:
a b c d e
0 0.616829 -0.933365 -0.735308 0.665297 -1.333547
1 0.069158 2.266290 -0.068686 -0.787980 -0.082090
2 1.203311 1.661110 -1.227530 -1.625526 0.045932
3 -0.247134 -1.134400 0.355436 0.787232 -0.474243
4 0.131774 0.349103 -0.632660 -1.549563 1.196455
In [48]:
df[['a','b','c']] = np.where(df[['a','b','c']] < 0, np.NaN, df[['a','b','c']])
df
Out[48]:
a b c d e
0 0.616829 NaN NaN 0.665297 -1.333547
1 0.069158 2.266290 NaN -0.787980 -0.082090
2 1.203311 1.661110 NaN -1.625526 0.045932
3 NaN NaN 0.355436 0.787232 -0.474243
4 0.131774 0.349103 NaN -1.549563 1.196455
</code></pre>
| 4 | 2016-10-17T15:51:51Z | [
"python",
"pandas"
] |
Pandas: How to conditionally assign multiple columns? | 40,090,522 | <p>I want to replace negative values with <code>nan</code> for only certain columns. The simplest way could be:</p>
<pre><code>for col in ['a', 'b', 'c']:
df.loc[df[col ] < 0, col] = np.nan
</code></pre>
<p><code>df</code> could have many columns and I only want to do this to specific columns. </p>
<p>Is there a way to do this in one line? Seems like this should be easy but I have not been able to figure out.</p>
| 5 | 2016-10-17T15:40:08Z | 40,090,749 | <p>If it has to be a one-liner:</p>
<pre><code>df[['a', 'b', 'c']] = df[['a', 'b', 'c']].apply(lambda c: [x if x >= 0 else np.nan for x in c])
</code></pre>
| 1 | 2016-10-17T15:53:08Z | [
"python",
"pandas"
] |
Pandas: How to conditionally assign multiple columns? | 40,090,522 | <p>I want to replace negative values with <code>nan</code> for only certain columns. The simplest way could be:</p>
<pre><code>for col in ['a', 'b', 'c']:
df.loc[df[col ] < 0, col] = np.nan
</code></pre>
<p><code>df</code> could have many columns and I only want to do this to specific columns. </p>
<p>Is there a way to do this in one line? Seems like this should be easy but I have not been able to figure out.</p>
| 5 | 2016-10-17T15:40:08Z | 40,090,781 | <p>I don't think you'll get much simpler than this:</p>
<pre><code>>>> df = pd.DataFrame({'a': np.arange(-5, 2), 'b': np.arange(-5, 2), 'c': np.arange(-5, 2), 'd': np.arange(-5, 2), 'e': np.arange(-5, 2)})
>>> df
a b c d e
0 -5 -5 -5 -5 -5
1 -4 -4 -4 -4 -4
2 -3 -3 -3 -3 -3
3 -2 -2 -2 -2 -2
4 -1 -1 -1 -1 -1
5 0 0 0 0 0
6 1 1 1 1 1
>>> df[df[cols] < 0] = np.nan
>>> df
a b c d e
0 NaN NaN NaN -5 -5
1 NaN NaN NaN -4 -4
2 NaN NaN NaN -3 -3
3 NaN NaN NaN -2 -2
4 NaN NaN NaN -1 -1
5 0.0 0.0 0.0 0 0
6 1.0 1.0 1.0 1 1
</code></pre>
| 9 | 2016-10-17T15:55:19Z | [
"python",
"pandas"
] |
Pandas: How to conditionally assign multiple columns? | 40,090,522 | <p>I want to replace negative values with <code>nan</code> for only certain columns. The simplest way could be:</p>
<pre><code>for col in ['a', 'b', 'c']:
df.loc[df[col ] < 0, col] = np.nan
</code></pre>
<p><code>df</code> could have many columns and I only want to do this to specific columns. </p>
<p>Is there a way to do this in one line? Seems like this should be easy but I have not been able to figure out.</p>
| 5 | 2016-10-17T15:40:08Z | 40,090,782 | <p>Sure, just pick your desired columns out of the mask:</p>
<pre><code>(df < 0)[['a', 'b', 'c']]
</code></pre>
<p>You can use this mask in <code>df[(df < 0)[['a', 'b', 'c']]] = np.nan</code>.</p>
| 3 | 2016-10-17T15:55:24Z | [
"python",
"pandas"
] |
Python type checking not working as expected | 40,090,600 | <p>I'm sure I'm missing something obvious here, but why does the following script actually work?</p>
<pre><code>import enum
import typing
class States(enum.Enum):
a = 1
b = 2
states = typing.NewType('states', States)
def f(x: states) -> states:
return x
print(
f(States.b),
f(3)
)
</code></pre>
<p>As far as I understand it, it should fail on the call <code>f(3)</code>, however it doesn't. Can someone shed some light on this behaviour?</p>
 | 2 | 2016-10-17T15:44:41Z | 40,090,625 | <p>No checking is performed by Python itself. This is specified in the <a href="https://www.python.org/dev/peps/pep-0484/#non-goals" rel="nofollow">"Non-Goals" section</a> of PEP 484. When executed (i.e. during run-time), Python completely ignores the annotations you provided and evaluates your statements as it usually does, dynamically.</p>
<p>If you need type checking, you should perform it yourself. This can currently be performed by static type checking tools like <a href="http://mypy.readthedocs.io/en/latest/" rel="nofollow"><code>mypy</code></a>.</p>
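<p>To see at run time why the question's <code>f(3)</code> call succeeds: <code>typing.NewType</code> returns a plain identity callable, and the interpreter never enforces the annotations. A minimal demonstration:</p>

```python
import enum
import typing

class States(enum.Enum):
    a = 1
    b = 2

states = typing.NewType('states', States)

def f(x: states) -> states:
    return x

# At run time, states(...) is just an identity function,
# and the annotations on f are never checked:
print(f(States.b))
print(f(3))            # no error is raised
print(states(3) == 3)  # True
```

A static checker such as mypy, by contrast, would flag <code>f(3)</code> as a type error.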
| 3 | 2016-10-17T15:46:21Z | [
"python",
"python-3.x",
"types",
"type-hinting"
] |
pandas: merge conditional on time range | 40,090,619 | <p>I'd like to merge one data frame with another, where the merge is conditional on the date/time falling in a particular range. </p>
<p>For example, let's say I have the following two data frames.</p>
<pre><code>import pandas as pd
import datetime
# Create main data frame.
data = pd.DataFrame()
time_seq1 = pd.DataFrame(pd.date_range('1/1/2016', periods=3, freq='H'))
time_seq2 = pd.DataFrame(pd.date_range('1/2/2016', periods=3, freq='H'))
data = data.append(time_seq1, ignore_index=True)
data = data.append(time_seq1, ignore_index=True)
data = data.append(time_seq1, ignore_index=True)
data = data.append(time_seq2, ignore_index=True)
data['myID'] = ['001','001','001','002','002','002','003','003','003','004','004','004']
data.columns = ['Timestamp', 'myID']
# Create second data frame.
data2 = pd.DataFrame()
data2['time'] = [pd.to_datetime('1/1/2016 12:06 AM'), pd.to_datetime('1/1/2016 1:34 AM'), pd.to_datetime('1/2/2016 12:25 AM')]
data2['myID'] = ['002', '003', '004']
data2['specialID'] = ['foo_0', 'foo_1', 'foo_2']
# Show data frames.
data
Timestamp myID
0 2016-01-01 00:00:00 001
1 2016-01-01 01:00:00 001
2 2016-01-01 02:00:00 001
3 2016-01-01 00:00:00 002
4 2016-01-01 01:00:00 002
5 2016-01-01 02:00:00 002
6 2016-01-01 00:00:00 003
7 2016-01-01 01:00:00 003
8 2016-01-01 02:00:00 003
9 2016-01-02 00:00:00 004
10 2016-01-02 01:00:00 004
11 2016-01-02 02:00:00 004
data2
time myID specialID
0 2016-01-01 00:06:00 002 foo_0
1 2016-01-01 01:34:00 003 foo_1
2 2016-01-02 00:25:00 004 foo_2
</code></pre>
<p>I would like to construct the following output. </p>
<pre><code># Desired output.
Timestamp myID special_ID
0 2016-01-01 00:00:00 001 NaN
1 2016-01-01 01:00:00 001 NaN
2 2016-01-01 02:00:00 001 NaN
3 2016-01-01 00:00:00 002 NaN
4 2016-01-01 01:00:00 002 foo_0
5 2016-01-01 02:00:00 002 NaN
6 2016-01-01 00:00:00 003 NaN
7 2016-01-01 01:00:00 003 NaN
8 2016-01-01 02:00:00 003 foo_1
9 2016-01-02 00:00:00 004 NaN
10 2016-01-02 01:00:00 004 foo_2
11 2016-01-02 02:00:00 004 NaN
</code></pre>
<p>In particular, I want to merge <code>special_ID</code> into <code>data</code> such that <code>Timestamp</code> is the first time occurring after the value of <code>time</code>. For example, <code>foo_0</code> would be in the row corresponding to <code>2016-01-01 01:00:00</code> with <code>myID = 002</code> since that is the next time in <code>data</code> immediately following <code>2016-01-01 00:06:00</code> (the <code>time</code> of <code>special_ID = foo_0</code>) among the rows containing <code>myID = 002</code>. </p>
<p>Note, <code>Timestamp</code> is not the index of <code>data</code> and <code>time</code> is not the index of <code>data2</code>. Most other related posts seem to rely on using the datetime object as the index of the data frame.</p>
 | 2 | 2016-10-17T15:46:15Z | 40,091,177 | <p>Not very beautiful, but I think it works.</p>
<pre><code>data['specialID'] = None
foolist = list(data2['myID'])
for i in data.index:
    if data.myID[i] in foolist:
        if data.Timestamp[i] > list(data2[data2['myID'] == data.myID[i]].time)[0]:
            # .loc avoids chained assignment, which can fail silently
            data.loc[i, 'specialID'] = list(data2[data2['myID'] == data.myID[i]].specialID)[0]
            foolist.remove(list(data2[data2['myID'] == data.myID[i]].myID)[0])

In [95]: data
Out[95]:
Timestamp myID specialID
0 2016-01-01 00:00:00 001 None
1 2016-01-01 01:00:00 001 None
2 2016-01-01 02:00:00 001 None
3 2016-01-01 00:00:00 002 None
4 2016-01-01 01:00:00 002 foo_0
5 2016-01-01 02:00:00 002 None
6 2016-01-01 00:00:00 003 None
7 2016-01-01 01:00:00 003 None
8 2016-01-01 02:00:00 003 foo_1
9 2016-01-02 00:00:00 004 None
10 2016-01-02 01:00:00 004 foo_2
11 2016-01-02 02:00:00 004 None
</code></pre>
| 1 | 2016-10-17T16:17:19Z | [
"python",
"datetime",
"pandas"
] |
pandas: merge conditional on time range | 40,090,619 | <p>I'd like to merge one data frame with another, where the merge is conditional on the date/time falling in a particular range. </p>
<p>For example, let's say I have the following two data frames.</p>
<pre><code>import pandas as pd
import datetime
# Create main data frame.
data = pd.DataFrame()
time_seq1 = pd.DataFrame(pd.date_range('1/1/2016', periods=3, freq='H'))
time_seq2 = pd.DataFrame(pd.date_range('1/2/2016', periods=3, freq='H'))
data = data.append(time_seq1, ignore_index=True)
data = data.append(time_seq1, ignore_index=True)
data = data.append(time_seq1, ignore_index=True)
data = data.append(time_seq2, ignore_index=True)
data['myID'] = ['001','001','001','002','002','002','003','003','003','004','004','004']
data.columns = ['Timestamp', 'myID']
# Create second data frame.
data2 = pd.DataFrame()
data2['time'] = [pd.to_datetime('1/1/2016 12:06 AM'), pd.to_datetime('1/1/2016 1:34 AM'), pd.to_datetime('1/2/2016 12:25 AM')]
data2['myID'] = ['002', '003', '004']
data2['specialID'] = ['foo_0', 'foo_1', 'foo_2']
# Show data frames.
data
Timestamp myID
0 2016-01-01 00:00:00 001
1 2016-01-01 01:00:00 001
2 2016-01-01 02:00:00 001
3 2016-01-01 00:00:00 002
4 2016-01-01 01:00:00 002
5 2016-01-01 02:00:00 002
6 2016-01-01 00:00:00 003
7 2016-01-01 01:00:00 003
8 2016-01-01 02:00:00 003
9 2016-01-02 00:00:00 004
10 2016-01-02 01:00:00 004
11 2016-01-02 02:00:00 004
data2
time myID specialID
0 2016-01-01 00:06:00 002 foo_0
1 2016-01-01 01:34:00 003 foo_1
2 2016-01-02 00:25:00 004 foo_2
</code></pre>
<p>I would like to construct the following output. </p>
<pre><code># Desired output.
Timestamp myID special_ID
0 2016-01-01 00:00:00 001 NaN
1 2016-01-01 01:00:00 001 NaN
2 2016-01-01 02:00:00 001 NaN
3 2016-01-01 00:00:00 002 NaN
4 2016-01-01 01:00:00 002 foo_0
5 2016-01-01 02:00:00 002 NaN
6 2016-01-01 00:00:00 003 NaN
7 2016-01-01 01:00:00 003 NaN
8 2016-01-01 02:00:00 003 foo_1
9 2016-01-02 00:00:00 004 NaN
10 2016-01-02 01:00:00 004 foo_2
11 2016-01-02 02:00:00 004 NaN
</code></pre>
<p>In particular, I want to merge <code>special_ID</code> into <code>data</code> such that <code>Timestamp</code> is the first time occurring after the value of <code>time</code>. For example, <code>foo_0</code> would be in the row corresponding to <code>2016-01-01 01:00:00</code> with <code>myID = 002</code> since that is the next time in <code>data</code> immediately following <code>2016-01-01 00:06:00</code> (the <code>time</code> of <code>special_ID = foo_0</code>) among the rows containing <code>myID = 002</code>. </p>
<p>Note, <code>Timestamp</code> is not the index of <code>data</code> and <code>time</code> is not the index of <code>data2</code>. Most other related posts seem to rely on using the datetime object as the index of the data frame.</p>
| 2 | 2016-10-17T15:46:15Z | 40,091,640 | <p>You can use <a href="https://pandas-docs.github.io/pandas-docs-travis/generated/pandas.merge_asof.html" rel="nofollow"><code>merge_asof</code></a>, which is new in Pandas 0.19, to do most of the work. Then, combine <code>loc</code> and <code>duplicated</code> to remove secondary matches:</p>
<pre><code># Data needs to be sorted for merge_asof.
data = data.sort_values(by='Timestamp')
# Perform the merge_asof.
df = pd.merge_asof(data, data2, left_on='Timestamp', right_on='time', by='myID').drop('time', axis=1)
# Make the additional matches null.
df.loc[df['specialID'].duplicated(), 'specialID'] = np.nan
# Get the original ordering.
df = df.set_index(data.index).sort_index()
</code></pre>
<p>The resulting output:</p>
<pre><code> Timestamp myID specialID
0 2016-01-01 00:00:00 001 NaN
1 2016-01-01 01:00:00 001 NaN
2 2016-01-01 02:00:00 001 NaN
3 2016-01-01 00:00:00 002 NaN
4 2016-01-01 01:00:00 002 foo_0
5 2016-01-01 02:00:00 002 NaN
6 2016-01-01 00:00:00 003 NaN
7 2016-01-01 01:00:00 003 NaN
8 2016-01-01 02:00:00 003 foo_1
9 2016-01-02 00:00:00 004 NaN
10 2016-01-02 01:00:00 004 foo_2
11 2016-01-02 02:00:00 004 NaN
</code></pre>
| 5 | 2016-10-17T16:46:00Z | [
"python",
"datetime",
"pandas"
] |
Program to add together a series of number | 40,090,659 | <blockquote>
<p>I am writing a program that will add together a series of
numbers a user inputs until the user enters a rogue value of 0. The
program will then display the total.</p>
</blockquote>
<pre><code>user_input = None
total_sum = 0
while user_input != 0:
user_input = input("Enter a number:")
total_sum = total_sum + user_input
# Sample output:
# Enter a number: 5
# Enter a number: 50
# Enter a number: 10
# Enter a number: 0
# The total sum of the numbers are 65
print("The total sum of the numbers are {}".format(total_sum))
</code></pre>
<p>The error that keeps coming up is :</p>
<pre><code>total_sum = total_sum + user_input
TypeError: unsupported operand type(s) for +: 'int' and 'str'
</code></pre>
| 1 | 2016-10-17T15:48:25Z | 40,090,716 | <p>you need to cast the input to an int</p>
<pre><code>user_input = int(user_input)
</code></pre>
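<p>Putting it together, a sketch of the whole loop. The input function is passed in as a parameter (an illustration, not part of the original code) so the loop can be exercised without typing at a prompt:</p>

```python
def sum_until_zero(read=input):
    """Keep reading numbers until 0 is entered; return their total."""
    total = 0
    while True:
        n = int(read("Enter a number: "))
        if n == 0:
            return total
        total += n

# Simulate the sample session ("5", "50", "10", "0") without typing:
fake_answers = iter(["5", "50", "10", "0"])
total_sum = sum_until_zero(lambda prompt: next(fake_answers))
print("The total sum of the numbers are {}".format(total_sum))
```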
| 0 | 2016-10-17T15:50:41Z | [
"python",
"python-3.x",
"loops"
] |
Program to add together a series of number | 40,090,659 | <blockquote>
<p>I am writing a program that will add together a series of
numbers a user inputs until the user enters a rogue value of 0. The
program will then display the total.</p>
</blockquote>
<pre><code>user_input = None
total_sum = 0
while user_input != 0:
user_input = input("Enter a number:")
total_sum = total_sum + user_input
# Sample output:
# Enter a number: 5
# Enter a number: 50
# Enter a number: 10
# Enter a number: 0
# The total sum of the numbers are 65
print("The total sum of the numbers are {}".format(total_sum))
</code></pre>
<p>The error that keeps coming up is :</p>
<pre><code>total_sum = total_sum + user_input
TypeError: unsupported operand type(s) for +: 'int' and 'str'
</code></pre>
| 1 | 2016-10-17T15:48:25Z | 40,090,722 | <p><code>input</code> returns a string, even if it's a string that contains a number (e.g., "5"). You need to explicitly convert it to an <code>int</code>:</p>
<pre><code>user_input = int(input("Enter a number: "))
</code></pre>
| 0 | 2016-10-17T15:51:18Z | [
"python",
"python-3.x",
"loops"
] |
Program to add together a series of number | 40,090,659 | <blockquote>
<p>I am writing a program that will add together a series of
numbers a user inputs until the user enters a rogue value of 0. The
program will then display the total.</p>
</blockquote>
<pre><code>user_input = None
total_sum = 0
while user_input != 0:
user_input = input("Enter a number:")
total_sum = total_sum + user_input
# Sample output:
# Enter a number: 5
# Enter a number: 50
# Enter a number: 10
# Enter a number: 0
# The total sum of the numbers are 65
print("The total sum of the numbers are {}".format(total_sum))
</code></pre>
<p>The error that keeps coming up is :</p>
<pre><code>total_sum = total_sum + user_input
TypeError: unsupported operand type(s) for +: 'int' and 'str'
</code></pre>
| 1 | 2016-10-17T15:48:25Z | 40,090,772 | <p>You need to parse String value to Integer value.</p>
<p>In your case, <code>user_input</code> is String Value.</p>
<p>To parse String value to Integer you need to use <code>int(user_input)</code> instead of <code>user_input</code>.</p>
<p>total_sum = total_sum + int(user_input) //Like this</p>
| 0 | 2016-10-17T15:54:24Z | [
"python",
"python-3.x",
"loops"
] |
Program to add together a series of number | 40,090,659 | <blockquote>
<p>I am writing a program that will add together a series of
numbers a user inputs until the user enters a rogue value of 0. The
program will then display the total.</p>
</blockquote>
<pre><code>user_input = None
total_sum = 0
while user_input != 0:
user_input = input("Enter a number:")
total_sum = total_sum + user_input
# Sample output:
# Enter a number: 5
# Enter a number: 50
# Enter a number: 10
# Enter a number: 0
# The total sum of the numbers are 65
print("The total sum of the numbers are {}".format(total_sum))
</code></pre>
<p>The error that keeps coming up is :</p>
<pre><code>total_sum = total_sum + user_input
TypeError: unsupported operand type(s) for +: 'int' and 'str'
</code></pre>
| 1 | 2016-10-17T15:48:25Z | 40,090,783 | <p>input("Enter a number") returns a string which means user_input is of type string.
So you need to cast <code>user_input</code> to <code>int</code> in order to add, i.e. <code>total_sum = total_sum + int(user_input)</code></p>
| 0 | 2016-10-17T15:55:24Z | [
"python",
"python-3.x",
"loops"
] |
How I can use external stylesheet for PyQT4 project? | 40,090,669 | <p>I am just creating a remote app with pyqt4. So, there is lots of css on the UI. I was wondering how to use external stylessheets like in web apps.</p>
<p>For Example: </p>
<pre><code>button_one.setStyleSheet("QPushButton { background-color:#444444; color:#ffffff; border: 2px solid #3d3d3d; width: 15px; height: 25px; border-radius: 15px;}"
"QPushButton:pressed { background-color:#ccc;}")
Instead of the above code
button_one.setStyleSheet("QPushButton.styleclass or #styleid")
</code></pre>
| 0 | 2016-10-17T15:48:56Z | 40,092,488 | <p>There's no need to set a stylesheet on each widget. Just set one stylesheet for the whole application:</p>
<pre><code>stylesheet = """
QPushButton#styleid {
background-color: yellow;
}
QPushButton.styleclass {
background-color: magenta;
}
QPushButton {
color: blue;
}
"""
QtGui.qApp.setStyleSheet(stylesheet)
</code></pre>
<p>The qss id of a widget can be specified with <code>setObjectName</code>:</p>
<pre><code>button_one.setObjectName('styleid')
</code></pre>
<p>The qss class can be specified with <code>setProperty</code>:</p>
<pre><code>button_two.setProperty('class', 'styleclass')
</code></pre>
<p>For other qss selectors, see <a href="https://doc.qt.io/qt-4.8/stylesheet-syntax.html#selector-types" rel="nofollow">Selector Types</a> in the Qt docs.</p>
| 1 | 2016-10-17T17:38:45Z | [
"python",
"qt",
"user-interface",
"pyqt4"
] |
How I can use external stylesheet for PyQT4 project? | 40,090,669 | <p>I am just creating a remote app with pyqt4. So, there is lots of css on the UI. I was wondering how to use external stylessheets like in web apps.</p>
<p>For Example: </p>
<pre><code>button_one.setStyleSheet("QPushButton { background-color:#444444; color:#ffffff; border: 2px solid #3d3d3d; width: 15px; height: 25px; border-radius: 15px;}"
"QPushButton:pressed { background-color:#ccc;}")
Instead of the above code
button_one.setStyleSheet("QPushButton.styleclass or #styleid")
</code></pre>
| 0 | 2016-10-17T15:48:56Z | 40,103,604 | <p>Cool. Thank you so much for your answer. That helped me to separate my styles finally :) </p>
<p>Just defined all my app's styles in an external file and loaded it on the required page:</p>
<pre><code>styleFile = "styles/remote.stylesheet"
with open(styleFile, "r") as fh:
    self.setStyleSheet(fh.read())
</code></pre>
| 0 | 2016-10-18T08:46:49Z | [
"python",
"qt",
"user-interface",
"pyqt4"
] |
How to recursively remove the first set of parantheses in string of nested parantheses? (in Python) | 40,090,673 | <p>EDIT:
Say I have a string of nested parentheses as follows: ((AB)CD(E(FG)HI((J(K))L))) (assume the parentheses are balanced and enclosed properly).
How do I recursively remove the first set of fully ENCLOSED parentheses of every subset of fully enclosed parentheses?</p>
<p>So in this case would be (ABCD(E(FG)HI(JK)). (AB) would become AB because (AB) is the first set of closed parentheses in a set of closed parentheses (from (AB) to K)), E is also the first element of a set of parentheses but since it doesn't have parentheses nothing is changed, and (J) is the first element in the set ((J)K) and therefore the parentheses would be removed.</p>
<p>This is similar to building an expression tree and so far I have parsed it into a nested list and thought I can recursively check if the first element of every nested list isinstance(type(list)) but I don't know how?</p>
<p>The nested list is as follows:</p>
<pre><code>arr = [['A', 'B'], 'C', 'D', ['E', ['F', 'G'], 'H', 'I', [['J'], 'K']]]
</code></pre>
<p>Perhaps convert it into:</p>
<pre><code>arr = [A, B, C, D, [E, [F, G], H, I, [J, K]]
</code></pre>
<p>Is there a better way?</p>
| -2 | 2016-10-17T15:49:04Z | 40,091,099 | <p>You need to reduce your logic to something clear enough to program. What I'm getting from your explanations would look like the code below. Note that I haven't dealt with edge cases: you'll need to check for None elements, hitting the end of the list, etc.</p>
<pre><code>def simplfy(parse_list):
    # Find the first included list;
    # build a new list with that set of brackets removed.
    reduce_list = list(parse_list)
    for pos in range(len(parse_list)):
        elem = parse_list[pos]
        if isinstance(elem, list):
            # remove brackets; construct new list
            reduce_list = parse_list[:pos]
            reduce_list.extend(elem)
            reduce_list.extend(parse_list[pos + 1:])
            break  # only the first included list is flattened
    # Recur on each list element
    return_list = []
    for elem in reduce_list:
        if isinstance(elem, list):
            return_list.append(simplfy(elem))
        else:
            return_list.append(elem)
    return return_list
</code></pre>
| 0 | 2016-10-17T16:12:15Z | [
"python",
"string",
"algorithm",
"recursion"
] |
How to recursively remove the first set of parantheses in string of nested parantheses? (in Python) | 40,090,673 | <p>EDIT:
Say I have a string of nested parentheses as follows: ((AB)CD(E(FG)HI((J(K))L))) (assume the parentheses are balanced and enclosed properly).
How do I recursively remove the first set of fully ENCLOSED parentheses of every subset of fully enclosed parentheses?</p>
<p>So in this case would be (ABCD(E(FG)HI(JK)). (AB) would become AB because (AB) is the first set of closed parentheses in a set of closed parentheses (from (AB) to K)), E is also the first element of a set of parentheses but since it doesn't have parentheses nothing is changed, and (J) is the first element in the set ((J)K) and therefore the parentheses would be removed.</p>
<p>This is similar to building an expression tree and so far I have parsed it into a nested list and thought I can recursively check if the first element of every nested list isinstance(type(list)) but I don't know how?</p>
<p>The nested list is as follows:</p>
<pre><code>arr = [['A', 'B'], 'C', 'D', ['E', ['F', 'G'], 'H', 'I', [['J'], 'K']]]
</code></pre>
<p>Perhaps convert it into:</p>
<pre><code>arr = [A, B, C, D, [E, [F, G], H, I, [J, K]]
</code></pre>
<p>Is there a better way?</p>
| -2 | 2016-10-17T15:49:04Z | 40,091,154 | <p>If I understood the question correctly, this ugly function should do the trick:</p>
<pre><code>def rm_parens(s):
    s2 = []
    consec_parens = 0
    inside_nested = False
    for c in s:
        if c == ')' and inside_nested:
            inside_nested = False
            consec_parens = 0
            continue
        if c == '(':
            consec_parens += 1
        else:
            consec_parens = 0
        if consec_parens == 2:
            inside_nested = True
        else:
            s2.append(c)
    s2 = ''.join(s2)
    if s2 == s:
        return s2
    return rm_parens(s2)

s = '((AB)CD(E(FG)HI((J)K))'
s = rm_parens(s)
print(s)
</code></pre>
<p>Note that this function will call itself recursively until no consecutive parentheses exist. However, in your example, ((AB)CD(E(FG)HI((J)K)), a single call is enough to produce (ABCD(E(FG)HI(JK)).</p>
| 0 | 2016-10-17T16:15:35Z | [
"python",
"string",
"algorithm",
"recursion"
] |
Implementing __eq__ using numpy isclose | 40,090,734 | <p>I fear this might be closed as being a soft question,
but perhaps there is an obvious idiomatic way.</p>
<p>I have a class that contains a lot of information stored in floating point numbers.
I am thinking about implementing the <code>__eq__</code> method using not exact but numerical equivalence similar to <code>np.isclose</code>. At the moment I have three options:</p>
<ol>
<li><code>__eq__</code> means exact comparison. But there is a function similar to <code>np.isclose</code></li>
<li><code>__eq__</code> means exact comparison. But there is a function similar to <code>np.isclose</code>and <code>__eq__</code> prints a warning everytime it is called and refers to using the function.</li>
<li><code>__eq__</code> means numerical comparison.</li>
</ol>
<p>I can not think of any use case where someone would like to do exact floating point comparison with this class. Hence option three is my favourite.
But I don't want to surprise the user.</p>
| 0 | 2016-10-17T15:51:51Z | 40,095,109 | <p>One option would be to add a context manager to switch modes:</p>
<pre><code>from contextlib import contextmanager
class MyObject(object):
_use_loose_equality = False
@contextmanager
@classmethod
def loose_equality(cls, enabled=True):
old_mode = cls._use_loose_equality
cls._use_loose_equality = enabled
try:
yield
finally:
cls._use_loose_equality = old_mode
def __eq__(self):
if self._use_loose_equality:
...
else:
...
</code></pre>
<p>Which you would use as:</p>
<pre><code>x = MyObject(1.1)
y = MyObject(1.100001)
assert x != y
with MyObject.loose_equality():
assert x == y
</code></pre>
<p>Of course, this is still as dangerous as 3, but at least you now have control of when to enable the dangerous behaviour</p>
| 1 | 2016-10-17T20:28:13Z | [
"python",
"python-3.x",
"numpy",
"floating-point",
"precision"
] |
Python convert json to python object and visit attribute using comma | 40,090,741 | <p>As title, A json object or convert to a python object like:</p>
<pre><code>u = {
"name": "john",
"coat": {
"color": "red",
"sex": "man",
},
"groups": [
{"name": "git"},
{"name": "flask"}
]
}
</code></pre>
<p>I want visit as:</p>
<pre><code>u.name
</code></pre>
<p>It's easy to do with inherit from dict, but </p>
<pre><code>u.groups.0.name
</code></pre>
<p>We also set it as </p>
<pre><code>u.name = "flask"
u.groups.0.name = "svn"
</code></pre>
<p>Thanks</p>
| -3 | 2016-10-17T15:52:04Z | 40,091,938 | <p>Python is not JavaScript. You need to refer to <code>u["groups"][0]["name"]</code>.</p>
| 1 | 2016-10-17T17:03:25Z | [
"python"
] |
Python convert json to python object and visit attribute using comma | 40,090,741 | <p>As title, A json object or convert to a python object like:</p>
<pre><code>u = {
"name": "john",
"coat": {
"color": "red",
"sex": "man",
},
"groups": [
{"name": "git"},
{"name": "flask"}
]
}
</code></pre>
<p>I want visit as:</p>
<pre><code>u.name
</code></pre>
<p>It's easy to do with inherit from dict, but </p>
<pre><code>u.groups.0.name
</code></pre>
<p>We also set it as </p>
<pre><code>u.name = "flask"
u.groups.0.name = "svn"
</code></pre>
<p>Thanks</p>
 | -3 | 2016-10-17T15:52:04Z | 40,099,026 | <p>Maybe it's difficult, because an integer is not a valid attribute name.</p>
<p>I do it like this:</p>
<pre><code>from collections import OrderedDict

class Dict(dict):
    def __init__(self, value):
        if isinstance(value, (list, tuple)):
            value = dict(zip(['_%s' % i for i in range(len(value))], value))
        super(Dict, self).__init__(value)

    def __getattr__(self, key):
        try:
            if type(self.__getitem__(key)) in (dict, OrderedDict, tuple, list):
                return Dict(self.__getitem__(key))
            return self.__getitem__(key)
        except KeyError:
            raise AttributeError(key)

    def __setattr__(self, key, value):
        try:
            return self.__setitem__(key, value)
        except KeyError:
            raise AttributeError(key)
</code></pre>
<pre><code>u = {
"name": "john",
"coat": {
"color": "red",
"sex": "man",
},
"groups": [
{"name": "git"},
{"name": "flask"}
]
}
>>> u = Dict(u)
>>> u.name
output: john
>>> u.groups._0
output: {"name": "git"}
</code></pre>
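<p>A compacted, runnable sketch of the same idea (with <code>OrderedDict</code> imported and the attribute lookup slightly simplified). One caveat worth noting: each attribute access builds a fresh <code>Dict</code> copy, so assigning through a nested path such as <code>u.groups._0.name = 'svn'</code> does not write back into <code>u</code>.</p>

```python
from collections import OrderedDict

class Dict(dict):
    """dict subclass exposing keys as attributes; list items become _0, _1, ..."""
    def __init__(self, value):
        if isinstance(value, (list, tuple)):
            value = dict(zip(['_%s' % i for i in range(len(value))], value))
        super(Dict, self).__init__(value)

    def __getattr__(self, key):
        try:
            item = self[key]
        except KeyError:
            raise AttributeError(key)
        if isinstance(item, (dict, OrderedDict, tuple, list)):
            return Dict(item)  # note: a copy, not a view into self
        return item

    def __setattr__(self, key, value):
        self[key] = value

u = Dict({"name": "john", "groups": [{"name": "git"}, {"name": "flask"}]})
print(u.name)            # john
print(u.groups._0.name)  # git
u.name = "flask"
print(u.name)            # flask
```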
| 0 | 2016-10-18T03:29:40Z | [
"python"
] |
How to select by WEEK and count it in postgresql and Django | 40,090,744 | <p>I'm trying to select by week and count how many tickets were sold in that week.</p>
<p>I select the tickets using EVENT ID.</p>
<pre><code>WHERE EVENT ID 148
SAMPLE DATA: TICKETS TABLE
-------------------------------------------------
"General";0;"2016-09-02 17:50:45.644381+00"
"General";0;"2016-09-03 21:05:54.830366+00"
"General";0;"2016-09-02 18:21:33.976451+00"
"Early Bird";500;"2016-09-09 19:15:33.721279+00"
"Youth";0;"2016-09-06 14:46:53.903704+00"
"Post Secondary Student";1000;"2016-09-06 14:46:53.90927+00"
"Youth";0;"2016-09-06 14:46:53.903704+00"
"Youth";0;"2016-09-06 14:46:53.903704+00"
"General";0;"2016-09-01 23:50:05.034436+00"
"Youth";0;"2016-09-06 14:46:53.903704+00"
"Youth";0;"2016-09-06 14:46:53.903704+00"
"Youth";0;"2016-09-06 14:46:53.903704+00"
"Youth";0;"2016-09-06 14:46:53.903704+00"
"Youth";0;"2016-09-06 14:46:53.903704+00"
"Post Secondary Student";1000;"2016-09-06 14:46:53.90927+00"
"Post Secondary Student";1000;"2016-09-06 14:46:53.90927+00"
"General";0;"2016-09-03 18:39:15.571188+00"
"General";0;"2016-09-07 20:14:35.959517+00"
"General";0;"2016-09-03 21:33:04.349198+00"
"General";0;"2016-09-07 18:21:22.220223+00"
"General";0;"2016-09-01 23:34:55.773516+00"
"General";0;"2016-09-01 23:42:15.498778+00"
"Early Bird";500;"2016-09-09 19:15:33.721279+00"
"Youth";0;"2016-09-06 14:46:53.903704+00"
"RSVP";0;"2016-09-27 21:27:33.378934+00"
"RSVP";0;"2016-09-14 22:23:04.922607+00"
"RSVP";0;"2016-09-14 22:23:04.922607+00"
"General Admission";0;"2016-09-23 15:35:54.972803+00"
"General Admission";0;"2016-09-23 15:35:54.972803+00"
"RSVP";0;"2016-09-14 22:23:04.922607+00"
"RSVP";0;"2016-09-14 22:23:04.922607+00"
"General";1000;"2016-09-09 19:15:33.72771+00"
"General Admission";0;"2016-09-23 15:35:54.972803+00"
"General Admission";0;"2016-09-23 15:35:54.972803+00"
"Youth";0;"2016-09-06 14:46:53.903704+00"
"Youth";0;"2016-09-06 14:46:53.903704+00"
"RSVP";0;"2016-09-14 22:23:04.922607+00"
"RSVP";0;"2016-09-14 22:23:04.922607+00"
"General Admission";0;"2016-09-23 15:35:54.972803+00"
"Youth";0;"2016-09-06 14:46:53.903704+00"
"Youth";0;"2016-09-06 14:46:53.903704+00"
"General Admission";0;"2016-09-23 15:35:54.972803+00"
"General Admission";0;"2016-09-23 15:35:54.972803+00"
"Free Admission";0;"2016-10-03 19:12:12.965369+00"
"Free Admission";0;"2016-10-06 19:00:25.926406+00"
"Free Admission";0;"2016-10-06 19:00:25.926406+00"
</code></pre>
<p>Any suggestions how i would achieve that?</p>
<p>I DID THIS TO FIND AND COUNT BY DAY:</p>
<pre><code>Ticket.objects.filter(event_id=event_id, event_ticket_id=ticket_type.id, refunded=False).extra(where=('created',),
select={'date_sold':'date(created)'}).values('date_sold').annotate(sold_count=Count('id'))
</code></pre>
<p>but I could not do it by week.</p>
<p>Thank you</p>
| -2 | 2016-10-17T15:52:26Z | 40,093,032 | <p>Here is a solution that I used in a similar situation.
I used a Raw SQL query however.</p>
<pre><code>Ticket.objects.raw('''SELECT COUNT(app_ticket.id) as id, app_ticket.name, EXTRACT(WEEK FROM app_ticket.created) as week, EXTRACT(YEAR FROM app_ticket.created) as YEAR
FROM app_ticket
WHERE app_ticket.event_id = %s
GROUP BY app_ticket.name, EXTRACT(WEEK FROM app_ticket.created), EXTRACT(YEAR FROM app_ticket.created), app_ticket.id
ORDER BY EXTRACT(WEEK FROM app_ticket.created), EXTRACT(YEAR FROM app_ticket.created)''', [1])
</code></pre>
<p>Hope it helps.</p>
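As a plain-Python sanity check for the grouping (an illustration of my own, independent of Django/Postgres): ISO week numbers can be computed from the <code>created</code> timestamps with <code>datetime.isocalendar()</code>, which is handy for verifying what the SQL <code>EXTRACT(WEEK ...)</code> buckets should contain, since PostgreSQL's week field also uses ISO 8601 numbering:

```python
from collections import Counter
from datetime import datetime

# A few "created" timestamps from the sample data (timezone part dropped).
created = [
    "2016-09-02 17:50:45",
    "2016-09-03 21:05:54",
    "2016-09-06 14:46:53",
    "2016-09-09 19:15:33",
    "2016-09-14 22:23:04",
]

# isocalendar() returns (ISO year, ISO week, ISO weekday); PostgreSQL's
# EXTRACT(WEEK FROM ...) uses the same ISO week numbering.
sold_per_week = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").isocalendar()[1]
    for ts in created
)

print(sorted(sold_per_week.items()))  # [(35, 2), (36, 2), (37, 1)]
```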
| 1 | 2016-10-17T18:14:40Z | [
"python",
"django",
"postgresql",
"date"
] |
Check failed: error == cudaSuccess (2 vs. 0) out of memory | 40,090,892 | <p>I am trying to run a neural network with pycaffe on gpu.</p>
<p>This works when I call the script for the first time.
When I run the same script for the second time, CUDA throws the error in the title.</p>
<p>Batch size is 1, image size at this moment is 243x322, the gpu has 8gb RAM.</p>
<p>I guess I am missing a command that resets the memory?</p>
<p>Thank you very much!</p>
<p>EDIT: </p>
<p>Maybe I should clarify a few things: I am running caffe on windows. </p>
<p>When i call the script with python script.py, the process terminates and the gpu memory gets freed, so this works.</p>
<p>With ipython, which I use for debugging, the GPU memory indeed does not get freed (after one pass, 6 of the 8 GB are in use; thanks for the nvidia-smi suggestion!)</p>
<p>So, what I am looking for is a command I can call from within Python, along the lines of:</p>
<p>run network</p>
<p>process image output</p>
<p><strong>free gpu memory</strong></p>
| 2 | 2016-10-17T16:01:11Z | 40,091,765 | <p>This happens when you run out of memory in the GPU. Are you sure you stopped the first script properly? Check the running processes on your system (<code>ps -A</code> in ubuntu) and see if the python script is still running. Kill it if it is. You should also check the memory being used in your GPU (<code>nvidia-smi</code>).</p>
| 2 | 2016-10-17T16:53:48Z | [
"python",
"gpu",
"caffe"
] |
Check failed: error == cudaSuccess (2 vs. 0) out of memory | 40,090,892 | <p>I am trying to run a neural network with pycaffe on gpu.</p>
<p>This works when I call the script for the first time.
When I run the same script for the second time, CUDA throws the error in the title.</p>
<p>Batch size is 1, image size at this moment is 243x322, the gpu has 8gb RAM.</p>
<p>I guess I am missing a command that resets the memory?</p>
<p>Thank you very much!</p>
<p>EDIT: </p>
<p>Maybe I should clarify a few things: I am running caffe on windows. </p>
<p>When i call the script with python script.py, the process terminates and the gpu memory gets freed, so this works.</p>
<p>With ipython, which I use for debugging, the GPU memory indeed does not get freed (after one pass, 6 of the 8 bg are in use, thanks for the nvidia-smi suggestion!)</p>
<p>So, what I am looking for is a command I can call from within pyhton, along the lines of:</p>
<p>run network</p>
<p>process image output</p>
<p><strong>free gpu memory</strong></p>
| 2 | 2016-10-17T16:01:11Z | 40,092,858 | <p>Your GPU memory is not getting freed. This happens when the previous process is stopped but not terminated. See my answer <a href="http://stackoverflow.com/a/35748621/3579977">here</a>.</p>
| 3 | 2016-10-17T18:02:44Z | [
"python",
"gpu",
"caffe"
] |
Throw a NotImplementedError in Python | 40,090,907 | <p>When I try to run my code I face this issue. I have defined a real-time request for this scraping but it still does not work. Does anyone know how to deal with this issue in Python?
How important is the sitemap in this case?
Thanks in advance </p>
<pre><code>import logging
import re
from urllib.parse import urljoin, urlparse
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy import Request
from scrapy.spiders import SitemapSpider
from scrapy.selector import Selector
from scrapy.linkextractors import LinkExtractor
from scrapy.shell import inspect_response
from sqlalchemy.orm import sessionmaker
from content.spiders.templates.sitemap_template import ModSitemapSpider
from content.models import db_connect, create_db_table, Articles
from content.items import ContentItems
from content.item_functions import (process_item,
process_singular_item,
process_date_item,
process_array_item,
process_plural_texts,
process_external_links,
process_article_text)
HEADER_XPATH = ['//h1[@class="article-title"]//text()']
AUTHOR_XPATH = ['//span[@class="cnnbyline"]//text()',
'//span[@class="byline"]//text()']
PUBDATE_XPATH = ['//span[@class="cnnDateStamp"]//text()']
TAGS_XPATH = ['']
CATEGORY_XPATH = ['']
TEXT = ['//div[@id="storytext"]//text()',
'//div[@id="storycontent"]//p//text()']
INTERLINKS = ['//span[@class="inStoryHeading"]//a/@href']
DATE_FORMAT_STRING = '%Y-%m-%d'
class CNNnewsSpider(ModSitemapSpider):
name = 'cnn'
allowed_domains = ["cnn.com"]
sitemap_urls = ["http://edition.cnn.com/sitemaps/sitemap-news.xml"]
def parse(self, response):
items = []
item = ContentItems()
item['title'] = process_singular_item(self, response, HEADER_XPATH, single=True)
item['resource'] = urlparse(response.url).hostname
item['author'] = process_array_item(self, response, AUTHOR_XPATH, single=False)
item['pubdate'] = process_date_item(self, response, PUBDATE_XPATH, DATE_FORMAT_STRING, single=True)
item['tags'] = process_plural_texts(self, response, TAGS_XPATH, single=False)
item['category'] = process_array_item(self, response, CATEGORY_XPATH, single=False)
item['article_text'] = process_article_text(self, response, TEXT)
item['external_links'] = process_external_links(self, response, INTERLINKS, single=False)
item['link'] = response.url
items.append(item)
return items
</code></pre>
<p>This is the traceback I get:</p>
<pre><code>File "/home/nik/project/lib/python3.5/site-packages/scrapy/spiders/__init__.py", line 76, in parse
raise NotImplementedError
NotImplementedError
2016-10-17 18:48:04 [scrapy] DEBUG: Redirecting (302) to <GET http://edition.cnn.com/2016/10/15/opinions/the-black-panthers-heirs-after-50-years-joseph/index.html> from <GET http://www.cnn.com/2016/10/15/opinions/the-black-panthers-heirs-after-50-years-joseph/index.html>
2016-10-17 18:48:04 [scrapy] DEBUG: Redirecting (302) to <GET http://edition.cnn.com/2016/10/15/africa/montreal-climate-change-hfc-kigali/index.html> from <GET http://www.cnn.com/2016/10/15/africa/montreal-climate-change-hfc-kigali/index.html>
2016-10-17 18:48:04 [scrapy] DEBUG: Redirecting (302) to <GET http://edition.cnn.com/2016/10/14/middleeast/battle-for-mosul-hawija-iraq/index.html> from <GET http://www.cnn.com/2016/10/14/middleeast/battle-for-mosul-hawija-iraq/index.html>
2016-10-17 18:48:04 [scrapy] ERROR: Spider error processing <GET http://edition.cnn.com/2016/10/15/politics/donald-trump-hillary-clinton-drug-test/index.html> (referer: http://edition.cnn.com/sitemaps/sitemap-news.xml)
Traceback (most recent call last):
  File "/home/nik/project/lib/python3.5/site-packages/twisted/internet/defer.py", line 587, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/home/nik/project/lib/python3.5/site-packages/scrapy/spiders/__init__.py", line 76, in parse
    raise NotImplementedError
| -2 | 2016-10-17T16:01:49Z | 40,094,130 | <p>The exception is being thrown because your class <code>CNNnewsSpider</code> does not override the method <code>parse()</code> from <code>scrapy.BaseSpider</code>. Although you are defining a <code>parse()</code> method in the code you pasted, it is not being included in <code>CNNnewsSpider</code> because of indentation: instead, it is being defined as a standalone function. You need to fix your indentation as follows:</p>
<pre><code>class CNNnewsSpider(ModSitemapSpider):
name = 'cnn'
allowed_domains = ["cnn.com"]
sitemap_urls = ["http://edition.cnn.com/sitemaps/sitemap-news.xml"]
def parse(self, response):
items = []
item = ContentItems()
item['title'] = process_singular_item(self, response, HEADER_XPATH, single=True)
item['resource'] = urlparse(response.url).hostname
item['author'] = process_array_item(self, response, AUTHOR_XPATH, single=False)
item['pubdate'] = process_date_item(self, response, PUBDATE_XPATH, DATE_FORMAT_STRING, single=True)
item['tags'] = process_plural_texts(self, response, TAGS_XPATH, single=False)
item['category'] = process_array_item(self, response, CATEGORY_XPATH, single=False)
item['article_text'] = process_article_text(self, response, TEXT)
item['external_links'] = process_external_links(self, response, INTERLINKS, single=False)
item['link'] = response.url
items.append(item)
return items
</code></pre>
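The effect of the indentation can be seen in a tiny self-contained example (class names here are hypothetical, just to illustrate the mechanism): a <code>def</code> at module level does not become an attribute of the preceding class, so Scrapy never finds a <code>parse</code> method on the spider and falls back to the base implementation that raises <code>NotImplementedError</code>:

```python
class BrokenSpider:
    name = "broken"

# Note: this def is OUTSIDE the class body, so it is a module-level
# function, not a method of BrokenSpider.
def parse(self, response):
    return []

class FixedSpider:
    name = "fixed"

    def parse(self, response):  # indented inside the class body
        return []

print(hasattr(BrokenSpider, "parse"))  # False
print(hasattr(FixedSpider, "parse"))   # True
```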
| 0 | 2016-10-17T19:24:51Z | [
"python",
"python-3.x",
"scrapy",
"web-crawler",
"webscarab"
] |
Gmail authentication error when sending email via django celery | 40,090,925 | <p><strong>TL;DR</strong>: I get a SMTPAuthenticationError from Gmail when trying to send emails using Celery/RabbitMQ tasks from my Django app, despite the credentials passed in are correct. This does not happen when the emails are sent normally, without Celery.</p>
<p>Hi,</p>
<p>I am trying to send email to a user asynchronously using Celery/RabbitMQ in my Django application. I have the following code to send the email:</p>
<pre><code>from grad.celery import app
from django.core.mail import EmailMultiAlternatives
from django.template.loader import get_template
from django.template import Context
from grad import gradbase
from time import sleep
from grad import settings
@app.task
def send_transaction_email(student_data, student_email):
html_email = get_template('grad/student_email.html')
context = Context({
'name': student_data['name'],
'university': student_data['university'],
'data': student_data,
'shorturl': gradbase.encode_graduation_data(student_data)
})
msg = EmailMultiAlternatives('Subject Here',
'Message Here',
'[email protected]',
[student_email])
msg.attach_alternative(html_email.render(context), "text/html")
msg.send()
</code></pre>
<p>When I call the method normally:</p>
<pre><code>tasks.send_transaction_email(entry, stud_email)
</code></pre>
<p>The emails are sent fine, but if I delegate the call to a Celery task like so:</p>
<pre><code>tasks.send_transaction_email.delay(entry, stud_email)
</code></pre>
<p>I get the following trace:</p>
<pre><code>Traceback (most recent call last):
File "/Users/albydeca/Gradcoin/venv/lib/python2.7/site-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/Users/albydeca/Gradcoin/venv/lib/python2.7/site-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/Users/albydeca/Gradcoin/grad/tasks.py", line 27, in send_transaction_email
msg.send()
File "/Users/albydeca/Gradcoin/venv/lib/python2.7/site-packages/django/core/mail/message.py", line 303, in send
return self.get_connection(fail_silently).send_messages([self])
File "/Users/albydeca/Gradcoin/venv/lib/python2.7/site-packages/django/core/mail/backends/smtp.py", line 102, in send_messages
new_conn_created = self.open()
File "/Users/albydeca/Gradcoin/venv/lib/python2.7/site-packages/django/core/mail/backends/smtp.py", line 69, in open
self.connection.login(self.username, self.password)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/smtplib.py", line 622, in login
raise SMTPAuthenticationError(code, resp)
SMTPAuthenticationError: (535, '5.7.8 Username and Password not accepted. Learn more at\n5.7.8 https://support.google.com/mail/?p=BadCredentials q8sm53748398wjj.7 - gsmtp')
</code></pre>
<p>Basically Gmail does not recognise the credentials, even though when I print them on the two lines before the one that causes the crash, they are correct.</p>
<p>Can someone help me?</p>
| 0 | 2016-10-17T16:02:43Z | 40,090,979 | <p>Try to make an app password here <a href="https://security.google.com/settings/security/apppasswords" rel="nofollow">https://security.google.com/settings/security/apppasswords</a> and use it as your password in the SMTP settings. </p>
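For reference, a sketch of the corresponding <code>settings.py</code> entries (assuming Django's default SMTP backend; the user and password values are placeholders you must replace):

```python
# settings.py (fragment) -- Gmail over TLS with Django's SMTP backend.
EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
EMAIL_HOST = "smtp.gmail.com"
EMAIL_PORT = 587
EMAIL_USE_TLS = True
EMAIL_HOST_USER = "[email protected]"
# Use the generated app password here, not the normal account password.
EMAIL_HOST_PASSWORD = "your-app-password"
```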
| 0 | 2016-10-17T16:06:01Z | [
"python",
"django",
"smtp",
"gmail",
"celery"
] |
Extracting articles from The New York Times by using Python and the New York Times API | 40,090,962 | <p>I am trying to create a corpus of text documents via the New York Times API (articles concerning terrorist attacks) in Python.</p>
<p>I am aware that the NYP API do not provide the full body text, but provides the URL from which I can scrape the article. So the idea is to extract the "web_url" parameters from the API and consequently scrape the full body article.</p>
<p>I am trying to use the NYT API library on Python with these lines:</p>
<pre><code>from nytimesarticle import articleAPI
api = articleAPI("*Your Key*")
articles = api.search( q = 'terrorist attack')
print(articles['response'],['docs'],['web_url'])
</code></pre>
<p>But I cannot extract the "web_url" or the articles. All I get is this output:</p>
<pre><code>{'meta': {'time': 19, 'offset': 10, 'hits': 0}, 'docs': []} ['docs'] ['web_url']
</code></pre>
| 0 | 2016-10-17T16:04:58Z | 40,091,026 | <p>The comma in the print statement separates what is printed. </p>
<p>You'll want something like this </p>
<pre><code>articles['response']['docs']['web_url']
</code></pre>
<p>But <code>'docs': []</code> is both an array and empty, so the above line won't work; you could try </p>
<pre><code>articles = articles['response']['docs']
for article in articles:
print(article['web_url'])
</code></pre>
| 0 | 2016-10-17T16:08:40Z | [
"python"
] |
Extracting articles from The New York Times by using Python and the New York Times API | 40,090,962 | <p>I am trying to create a corpus of text documents via the New York Times API (articles concerning terrorist attacks) in Python.</p>
<p>I am aware that the NYP API do not provide the full body text, but provides the URL from which I can scrape the article. So the idea is to extract the "web_url" parameters from the API and consequently scrape the full body article.</p>
<p>I am trying to use the NYT API library on Python with these lines:</p>
<pre><code>from nytimesarticle import articleAPI
api = articleAPI("*Your Key*")
articles = api.search( q = 'terrorist attack')
print(articles['response'],['docs'],['web_url'])
</code></pre>
<p>But I cannot extract the "web_url" or the articles. All I get is this output:</p>
<pre><code>{'meta': {'time': 19, 'offset': 10, 'hits': 0}, 'docs': []} ['docs'] ['web_url']
</code></pre>
| 0 | 2016-10-17T16:04:58Z | 40,092,566 | <p>There seems to be an issue with the <code>nytimesarticle</code> module itself. For example, see the following:</p>
<pre><code>>>> articles = api.search(q="trump+women+accuse", begin_date=20161001)
>>> print(articles)
{'response': {'docs': [], 'meta': {'offset': 0, 'hits': 0, 'time': 21}}, 'status': 'OK', 'copyright': 'Copyright (c) 2013 The New York Times Company. All Rights Reserved.'}
</code></pre>
<p>But if I use <a href="http://docs.python-requests.org" rel="nofollow"><code>requests</code></a> (as is used in the module) to access the API directly, I get the results I'm looking for:</p>
<pre><code>>>> import requests
>>> r = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=trump+women+accuse&begin_date=20161001&api-key=XXXXX")
>>> data = r.json()
>>> len(data["response"]["docs"])
10
</code></pre>
<p>meaning that 10 articles were returned (the full value of <code>data</code> is 16kb, so I won't include it all here). Contrast that to the response from <code>api.search()</code>, where <code>articles["response"]["docs"]</code> is an empty list.</p>
<p><code>nytimesarticle.py</code> is only 115 lines long, so it's pretty straightforward to debug. Printing the value of the URL sent to the API reveals this:</p>
<pre><code>>>> articles = api.search(q="trump+women+accuse", begin_date=20161001)
https://api.nytimes.com/svc/search/v2/articlesearch.json?q=b'trump+women+accuse'&begin_date=20161001&api-key=XXXXX
# ^^ THIS
</code></pre>
<p>The <a href="https://github.com/evansherlock/nytimesarticle/blob/89f551699ffb11f71b47271246d350a1043e9326/nytimesarticle.py#L29-L42" rel="nofollow">offending code</a> encodes every string parameter to UTF-8, which makes it a <code>bytes</code> object. This is not necessary, and wrecks the constructed URL as shown above. Fortunately, there is a <a href="https://github.com/evansherlock/nytimesarticle/pull/2" rel="nofollow">pull request</a> that fixes this:</p>
<pre><code>>>> articles = api.search(q="trump+women+accuse", begin_date=20161001)
http://api.nytimes.com/svc/search/v2/articlesearch.json?begin_date=20161001&q=trump+women+accuse&api-key=XXXXX
>>> len(articles["response"]["docs"])
10
</code></pre>
<p>This also allows for other string parameters such as <code>sort="newest"</code> to be used, as the bytes formatting was causing an error previously.</p>
| 0 | 2016-10-17T17:43:55Z | [
"python"
] |
Dynamic create questions with inquirer | 40,091,057 | <p>i have the following object:</p>
<pre><code>questions = [
{ 'type': 'Text'
, 'name': 'input_file'
, 'message': 'Input file'
}
]
</code></pre>
<p>And I want to transform it into this:</p>
<pre><code>inquirer.Text(name = 'input_file', message = 'Input file')
</code></pre>
<p>So, I'm using the following code:</p>
<pre><code>for question in questions:
q.append(inquirer[question['type']](question['name'], question['message']))
</code></pre>
<p>But this won't work:</p>
<pre><code>Traceback (most recent call last):
File "Funds/__main__.py", line 4, in <module>
main()
File "/Users/MarceloAlves/DEV/LACLAW/Funds/funds/funds.py", line 30, in main
ask_questions_to_user()
File "/Users/MarceloAlves/DEV/LACLAW/Funds/funds/funds.py", line 11, in ask_questions_to_user
q.append(inquirer[question['type']]('name', message="What's your name"))
TypeError: 'module' object has no attribute '__getitem__'
</code></pre>
<p>Any ideas? (I have a background in js)</p>
| 0 | 2016-10-17T16:09:35Z | 40,091,102 | <p>You need to use <code>getattr</code>:</p>
<pre><code>for question in questions:
q.append(getattr(inquirer, question['type'])(question['name'], question['message']))
</code></pre>
<p>You can read more about it <a href="http://stackoverflow.com/questions/7119235/in-python-what-is-the-difference-between-an-object-and-a-dictionary">here</a></p>
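Since you mention a JavaScript background: <code>getattr(obj, name)</code> is Python's equivalent of JavaScript's <code>obj[name]</code> for looking up an attribute by a string. A small standalone illustration (the <code>Box</code> class below is made up, not part of inquirer):

```python
import math

# Attribute lookup by string name, like obj[name] in JavaScript.
fn = getattr(math, "sqrt")
print(fn(9.0))  # 3.0

# Works on any object, including instances of your own classes.
class Box:
    def Text(self, name, message):
        return (name, message)

box = Box()
result = getattr(box, "Text")("input_file", "Input file")
print(result)  # ('input_file', 'Input file')
```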
| 0 | 2016-10-17T16:12:16Z | [
"python"
] |
How do I add several books to the bookstore in the admin form | 40,091,067 | <p>In Django, imagine I have a model BookStore and a model Book which has a foreign key to the BookStore model. On the Django admin, I added like 10 books, and on the bookstore I want to assign multiple books to this one bookstore. How do I do this? Because even with the ForeignKey, while editing the bookstore, I can only choose one book...</p>
<pre><code>class BookStore(models.Model):
name = models.CharField(max_length=100)
class Book(models.Model):
name = models.CharField(max_length=100)
store = models.ForeignKey(BookStore, null=True)
</code></pre>
| 0 | 2016-10-17T16:10:00Z | 40,091,135 | <p>Your relationship is the wrong way around. If your bookstore has a fk to a book, you are saying that "each bookstore can only store one single book". Instead, you should have a fk from the book to the book store. This is saying "a book belongs to a bookstore (and only one bookstore)"</p>
<pre><code>class Book:
bookstore = models.ForeignKey("Bookstore")
class Bookstore:
...
</code></pre>
<p>You need to use an <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/admin/#inlinemodeladmin-objects" rel="nofollow">inline form</a> if you want to add multiple books while editing your bookstore object: </p>
<pre><code>class BookInline(admin.StackedInline):
model = Book
class BookstoreAdmin(admin.ModelAdmin):
model = Bookstore
inlines = [BookInline,]
</code></pre>
| 4 | 2016-10-17T16:14:36Z | [
"python",
"django"
] |
Maximize Sharpe's ratio using scipy.minimize | 40,091,188 | <p>I'm trying to maximize Sharpe's ratio using scipy.minimize.</p>
<p><img src="http://latex.codecogs.com/gif.latex?%5Cfrac%7BE_R%20-%20E_F%7D%7B%5Csigma_R%7D" alt="1"></p>
<p>I do this for finding CAPM's Security Market Line</p>
<p><img src="http://latex.codecogs.com/gif.latex?E%28R%29%20%3D%20E_F%20+%20%5Cfrac%7BE_M%20-%20E_F%7D%7B%5Csigma_M%7D%20%5Csigma" alt="2"></p>
<p>So I have an equation:</p>
<p><img src="http://latex.codecogs.com/gif.latex?%5Cfrac%7BE_R%20-%20E_F%7D%7B%5Csigma_R%7D%20%5Crightarrow%20%5Cmax" alt="3"></p>
<p><img src="http://latex.codecogs.com/gif.latex?%5Csum%20x_i%20%3D%201" alt="4"></p>
<p>Optional (if short positions is not allowed):</p>
<p><img src="http://latex.codecogs.com/gif.latex?x_i%20%3E%200" alt="5"></p>
<p>So I'm trying to solve this:</p>
<pre><code>def target_func(x, cov_matix, mean_vector, virtual_mean):
f = float(-(x.dot(mean_vector) - virtual_mean) / np.sqrt(x.dot(cov_matix).dot(x.T)))
return f
def optimal_portfolio_with_virtual_mean(profits, virtual_mean, allow_short=False):
x = np.zeros(len(profits))
mean_vector = np.mean(profits, axis=1)
cov_matrix = np.cov(profits)
cons = ({'type': 'eq',
'fun': lambda x: np.sum(x) - 1})
if not allow_short:
bounds = [(0, None,) for i in range(len(x))]
else:
bounds = None
minimize = optimize.minimize(target_func, x, args=(cov_matrix, mean_vector, virtual_mean,), bounds=bounds,
constraints=cons)
return minimize
</code></pre>
<p>But I always get Success: False (iteration limit exceeded). I tried to set the maxiter = 10000 option, but it didn't help.</p>
<p>I will be grateful for any help.</p>
<p>P.S. I use python 2.7</p>
| 1 | 2016-10-17T16:18:22Z | 40,091,485 | <p>I don't know why, but it works perfectly when I replace </p>
<pre><code>x = np.zeros(len(profits))
</code></pre>
<p>With</p>
<pre><code>x = np.ones(len(profits))
</code></pre>
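A likely reason (my own explanation, not part of the original answer): with x = 0 the portfolio variance x·C·x is 0, so the objective divides by sqrt(0) at the very first evaluation and the optimizer starts from a non-finite value. A small numeric check:

```python
import numpy as np

# Toy inputs standing in for the question's mean vector and covariance.
mean_vector = np.array([0.01, 0.02])
cov_matrix = np.array([[0.04, 0.01],
                       [0.01, 0.09]])
virtual_mean = 0.005

def target_func(x):
    # Same shape as the objective in the question.
    return float(-(x.dot(mean_vector) - virtual_mean)
                 / np.sqrt(x.dot(cov_matrix).dot(x)))

with np.errstate(divide="ignore", invalid="ignore"):
    bad = target_func(np.zeros(2))   # denominator is sqrt(0) -> division by zero
    good = target_func(np.ones(2))   # nonzero variance -> finite value

print(np.isfinite(bad), np.isfinite(good))  # False True
```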
| 0 | 2016-10-17T16:36:13Z | [
"python",
"scipy",
"mathematical-optimization",
"nonlinear-optimization"
] |
Troubleshooting item loaders between callbacks | 40,091,228 | <p>In order to understand the "Naive approach" example in <a href="http://oliverguenther.de/2014/08/almost-asynchronous-requests-for-single-item-processing-in-scrapy/" rel="nofollow">http://oliverguenther.de/2014/08/almost-asynchronous-requests-for-single-item-processing-in-scrapy/</a> </p>
<p>I am trying to replicate that code. The idea is to have a single item populated where each of the fields is sourced from a different website. </p>
<p>I am trying to understand why I get the following behavior from the below code when I run it and I export the result into a csv file using <code>scrapy crawl compSpider -o prices.csv</code>.</p>
<p>The code actually populates the nic_price with the relevant price but it doesn't do the same with the tester_price. </p>
<p>I believe it should do so as the item loader object is passed in the request meta field from the first callback [firstRequest], where the item loader object is created, to the second call back [parseDescription1], where the item loader object is finally loaded into an item.</p>
<p>I have tested that the css selectors work. Can someone please help me understand why I am getting this behaviour?</p>
<h2>ITEM DECLARATION</h2>
<pre><code>import scrapy
class ProductItem(scrapy.Item):
nic_price = scrapy.Field()
tester_price = scrapy.Field()
</code></pre>
<h2>SPIDER CODE</h2>
<pre><code>import scrapy
from scrapy.http import Request
from scrapy.loader import ItemLoader
from comparator.items import ProductItem
class Compspider(scrapy.Spider):
name = "compSpider"
#start_urls = ( 'https://www.shop.niceic.com/', )
def start_requests(self):
yield Request(
'https://www.shop.niceic.com/6559-megger-mft1711-multifunction-tester-1008-121', callback=self.firstRequest)
def firstRequest(self, response):
l = ItemLoader(item=ProductItem(), response=response)
l.add_css('nic_price', 'div.product-info p.product-price span[itemprop="price"]::text')
yield Request('https://www.tester.co.uk/test-safe-pen-co-meter', meta={'loader' : l}, callback= self.parseDescription1)
def parseDescription1(self, response):
# Recover item(loader)
l = response.meta['loader']
# Use just as before
l.add_css('tester_price', 'div.price-breakdown div.price-excluding-tax span.price::text')
yield l.load_item()
</code></pre>
| 1 | 2016-10-17T16:20:52Z | 40,093,685 | <p>The css selectors definitely do work. The problem resides in the itemloader variable <strong>l</strong> that you are saving in the meta dict: it is bound to the response from the <strong>firstRequest</strong> callback, not to the <strong>parseDescription1</strong> response in which this css selector can work:</p>
<p>'div.price-breakdown div.price-excluding-tax span.price::text'</p>
<p>To solve this, just create a new itemloader in <strong>parseDescription1</strong>, or better, load the item itself rather than its loader into the meta dict, as follows...</p>
<pre><code> def firstRequest(self, response):
l = ItemLoader(item=ProductItem(), response=response)
l.add_css('nic_price', 'div.product-info p.product-price span[itemprop="price"]::text')
yield Request('https://www.tester.co.uk/test-safe-pen-co-meter',
meta={'item': l.load_item()},
callback=self.parseDescription1)
def parseDescription1(self, response):
# Recover item
item = response.meta['item']
l = ItemLoader(item=ProductItem(), response=response)
# Use just as before
l.add_css('tester_price', 'div.price-breakdown div.price-excluding-tax span.price::text')
l.add_value('nic_price', item['nic_price'])
yield l.load_item()
</code></pre>
| 0 | 2016-10-17T18:56:49Z | [
"python",
"csv",
"scrapy",
"web-crawler"
] |
Extract all characters between _ and .csv | 40,091,243 | <p>I am trying to extract the date from a series of files of the form:</p>
<p><code>costs_per_day_100516.csv</code></p>
<p>I have gotten to the point of extracting the <code>6</code>, but I don't understand why I can't extract more. What is wrong with the following:</p>
<pre><code>test_string = 'search_adwords_cost_by_state_100516.csv'
m = re.search("_([^_])*\.csv", test_string)
m.group(1)
</code></pre>
<p>This yields <code>6</code> rather than <code>100516</code>. What am I doing wrong?</p>
| 0 | 2016-10-17T16:21:36Z | 40,091,273 | <pre><code>m = re.search("_([^_]*)\.csv", test_string)
</code></pre>
<p>The repetition qualifier has to be inside the capture group</p>
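With the <code>*</code> moved inside the group, the whole date is captured (a quick check using the question's string):

```python
import re

test_string = 'search_adwords_cost_by_state_100516.csv'

# The '*' now sits inside the parentheses, so the entire run of
# non-underscore characters before '.csv' is captured.
m = re.search(r"_([^_]*)\.csv", test_string)
print(m.group(1))  # 100516
```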
| 3 | 2016-10-17T16:23:27Z | [
"python",
"regex"
] |
Extract all characters between _ and .csv | 40,091,243 | <p>I am trying to extract the date from a series of files of the form:</p>
<p><code>costs_per_day_100516.csv</code></p>
<p>I have gotten to the point of extracting the <code>6</code>, but I don't understand why I can't extract more. What is wrong with the following:</p>
<pre><code>test_string = 'search_adwords_cost_by_state_100516.csv'
m = re.search("_([^_])*\.csv", test_string)
m.group(1)
</code></pre>
<p>This yields <code>6</code> rather than <code>100516</code>. What am I doing wrong?</p>
| 0 | 2016-10-17T16:21:36Z | 40,092,030 | <pre><code>Data_Frame_Name.join(filter(lambda x: x.isdigit(), Data_Frame_Name['Column_Name']))
</code></pre>
<p>This will extract just digits. This may not be applicable for your case but would work well for extracting digits from multiple rows in a column.</p>
| 0 | 2016-10-17T17:09:37Z | [
"python",
"regex"
] |
Cannot debug tensorflow using gdb on macOS | 40,091,310 | <p>I am trying to debug TensorFlow using GDB on macOS Sierra. I followed the instructions in the <a href="https://gist.github.com/Mistobaan/738e76c3a5bb1f9bcc52e2809a23a7a1#run-python-test" rel="nofollow">post</a>. After I installed the developer version of tensorflow for debugging, I tried to use gdb to attach to a python session.</p>
<p>When I run <code>gdb -p <pid></code>:</p>
<pre><code>GNU gdb (GDB) 7.11.1
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-apple-darwin16.0.0".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 18822
Reading symbols from /Users/dtong/.virtualenvs/tensorflow/bin/python3.5...(no debugging symbols found)...done.
warning: unhandled dyld version (15)
0x00007fffc4d83f4e in ?? ()
(gdb)
</code></pre>
<p>When I set the breakpoint:</p>
<pre><code>(gdb) break TF_NewSession
Function "TF_NewSession" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (TF_NewSession) pending.
</code></pre>
<p>Then even when I run <code>sess = tf.Session()</code> as the post says, GDB never enters the breakpoint.</p>
| 0 | 2016-10-17T16:25:37Z | 40,120,761 | <p>Well, after I changed to using <code>lldb</code> on macOS, everything works fine. So I guess the solution to this problem is "USING LLDB INSTEAD OF GDB".</p>
| 0 | 2016-10-19T01:49:18Z | [
"python",
"debugging",
"gdb",
"tensorflow"
] |
Getting 403 Error: Apache2 mod_wsgi | 40,091,341 | <p>A friend of mine and I are facing a problem getting a raw instance of <a href="https://github.com/PiJoules/atchannel/blob/master/PRODUCTION.md" rel="nofollow" title="Guide to Setting it up">PI Joule's @channel (Link to "Production" Readme)</a> to work, currently supposed to be running at the domain atchannel.cf. We followed the author's guide to set it up, and also did what it said in the main README file <a href="https://github.com/PiJoules/atchannel/blob/master/README.md" rel="nofollow">here</a> </p>
<pre><code><VirtualHost *:80>
ServerName atchannel.cf
WSGIScriptAlias / /var/www-atchannel/atchannel.wsgi
<Directory /var/www-atchannel>
Require all granted
</Directory>
Alias /static /var/www-atchannel/atchannel/static
<Directory /var/www-atchannel/atchannel/static/>
Require all granted
</Directory>
ErrorLog ${APACHE_LOG_DIR}/errorAtChannel.log
LogLevel warn
CustomLog ${APACHE_LOG_DIR}/accessAtChannel.log combined
</VirtualHost>
</code></pre>
<p>This is the Apache Virtual Host for the file, which matches up with the path to the site, but still returns a 403 forbidden error. Can anyone tell us what we're doing wrong?</p>
| 0 | 2016-10-17T16:27:41Z | 40,094,843 | <p>Instead of:</p>
<pre><code><Directory /var/www-atchannel/atchannel/>
Order allow,deny
Allow from all
</Directory>
</code></pre>
<p>you should have:</p>
<pre><code><Directory /var/www-atchannel>
Order allow,deny
Allow from all
</Directory>
</code></pre>
<p>The directory didn't match the leading path for where the WSGI script file is located.</p>
| 0 | 2016-10-17T20:09:56Z | [
"python",
"mongodb",
"apache",
"virtualhost",
"wsgi"
] |
Call another click command from a click command | 40,091,347 | <p>I want to use some useful functions as commands. For that I am testing the <code>click</code> library. I defined my three original functions then decorated as <code>click.command</code>:</p>
<pre><code>import click
import os, sys
@click.command()
@click.argument('content', required=False)
@click.option('--to_stdout', default=True)
def add_name(content, to_stdout=False):
if not content:
content = ''.join(sys.stdin.readlines())
result = content + "\n\tadded name"
if to_stdout is True:
sys.stdout.writelines(result)
return result
@click.command()
@click.argument('content', required=False)
@click.option('--to_stdout', default=True)
def add_surname(content, to_stdout=False):
if not content:
content = ''.join(sys.stdin.readlines())
result = content + "\n\tadded surname"
if to_stdout is True:
sys.stdout.writelines(result)
return result
@click.command()
@click.argument('content', required=False)
@click.option('--to_stdout', default=False)
def add_name_and_surname(content, to_stdout=False):
result = add_surname(add_name(content))
if to_stdout is True:
sys.stdout.writelines(result)
return result
</code></pre>
<p>This way I am able to generate the three commands <code>add_name</code>, <code>add_surname</code> and <code>add_name_and_surname</code> using a <code>setup.py</code> file and <code>pip install --editable .</code> Then I am able to pipe:</p>
<pre><code>$ echo "original content" | add_name | add_surname
original content
added name
added surname
</code></pre>
<p>However there is one slight problem I need to solve, when composing with different click commands as functions:</p>
<pre><code>$echo "original content" | add_name_and_surname
Usage: add_name_and_surname [OPTIONS] [CONTENT]
Error: Got unexpected extra arguments (r i g i n a l c o n t e n t
)
</code></pre>
<p>I have no clue why it does not work. I need this <code>add_name_and_surname</code> command to call <code>add_name</code> and <code>add_surname</code> not as commands but as functions, or else it defeats my original purpose of using the functions as both library functions and commands. </p>
| 0 | 2016-10-17T16:28:08Z | 40,094,408 | <p>When you call <code>add_name()</code> and <code>add_surname()</code> directly from another function, you actually call the decorated versions of them so the arguments expected may not be as you defined them (see the answers to <a href="http://stackoverflow.com/questions/1166118/how-to-strip-decorators-from-a-function-in-python">How to strip decorators from a function in python</a> for some details on why). </p>
<p>I would suggest modifying your implementation so that you keep the original functions undecorated and create thin click-specific wrappers for them, for example: </p>
<pre><code>def add_name(content, to_stdout=False):
if not content:
content = ''.join(sys.stdin.readlines())
result = content + "\n\tadded name"
if to_stdout is True:
sys.stdout.writelines(result)
return result
@click.command()
@click.argument('content', required=False)
@click.option('--to_stdout', default=True)
def add_name_command(content, to_stdout=False):
return add_name(content, to_stdout)
</code></pre>
<p>You can then either call these functions directly or invoke them via a CLI wrapper script created by setup.py. </p>
<p>This might seem redundant but in fact is probably the right way to do it: one function represents your business logic, the other (the click command) is a "controller" exposing this logic via command line (there could be, for the sake of example, also a function exposing the same logic via a Web service for example). </p>
<p>In fact, I would even advise to put them in separate Python modules - Your "core" logic and a click-specific implementation which could be replaced for any other interface if needed.</p>
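<p>As a quick illustration of this pattern — keeping the business logic in plain functions so they compose — here is a minimal sketch based on the question's functions (the click wrappers are omitted here; this is not part of the original answer):</p>

```python
import sys

def add_name(content, to_stdout=False):
    if not content:
        content = ''.join(sys.stdin.readlines())
    result = content + "\n\tadded name"
    if to_stdout:
        sys.stdout.writelines(result)
    return result

def add_surname(content, to_stdout=False):
    if not content:
        content = ''.join(sys.stdin.readlines())
    result = content + "\n\tadded surname"
    if to_stdout:
        sys.stdout.writelines(result)
    return result

def add_name_and_surname(content, to_stdout=False):
    # plain composition -- no click argument parsing gets in the way
    result = add_surname(add_name(content))
    if to_stdout:
        sys.stdout.writelines(result)
    return result

print(add_name_and_surname("original content"))
```

<p>Each plain function can then be wrapped by a thin <code>@click.command</code> controller as shown above, without losing direct composability.</p>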
| 2 | 2016-10-17T19:43:05Z | [
"python",
"command-line-arguments",
"stdout",
"piping",
"python-click"
] |
Call another click command from a click command | 40,091,347 | <p>I want to use some useful functions as commands. For that I am testing the <code>click</code> library. I defined my three original functions then decorated as <code>click.command</code>:</p>
<pre><code>import click
import os, sys
@click.command()
@click.argument('content', required=False)
@click.option('--to_stdout', default=True)
def add_name(content, to_stdout=False):
if not content:
content = ''.join(sys.stdin.readlines())
result = content + "\n\tadded name"
if to_stdout is True:
sys.stdout.writelines(result)
return result
@click.command()
@click.argument('content', required=False)
@click.option('--to_stdout', default=True)
def add_surname(content, to_stdout=False):
if not content:
content = ''.join(sys.stdin.readlines())
result = content + "\n\tadded surname"
if to_stdout is True:
sys.stdout.writelines(result)
return result
@click.command()
@click.argument('content', required=False)
@click.option('--to_stdout', default=False)
def add_name_and_surname(content, to_stdout=False):
result = add_surname(add_name(content))
if to_stdout is True:
sys.stdout.writelines(result)
return result
</code></pre>
<p>This way I am able to generate the three commands <code>add_name</code>, <code>add_surname</code> and <code>add_name_and_surname</code> using a <code>setup.py</code> file and <code>pip install --editable .</code> Then I am able to pipe:</p>
<pre><code>$ echo "original content" | add_name | add_surname
original content
added name
added surname
</code></pre>
<p>However there is one slight problem I need to solve, when composing with different click commands as functions:</p>
<pre><code>$echo "original content" | add_name_and_surname
Usage: add_name_and_surname [OPTIONS] [CONTENT]
Error: Got unexpected extra arguments (r i g i n a l c o n t e n t
)
</code></pre>
<p>I have no clue why it does not work. I need this <code>add_name_and_surname</code> command to call <code>add_name</code> and <code>add_surname</code> not as commands but as functions, or else it defeats my original purpose of using the functions as both library functions and commands. </p>
| 0 | 2016-10-17T16:28:08Z | 40,096,967 | <p>Due to the click decorators the functions can no longer be called just by specifying the arguments.
The <a href="http://click.pocoo.org/6/api/#context" rel="nofollow">Context</a> class is your friend here, specifically:</p>
<ol>
<li>Context.invoke() - invokes another command with the arguments you supply</li>
<li>Context.forward() - fills in the arguments from the current command</li>
</ol>
<p>So your code for add_name_and_surname should look like:</p>
<pre><code>@click.command()
@click.argument('content', required=False)
@click.option('--to_stdout', default=False)
@click.pass_context
def add_name_and_surname(ctx, content, to_stdout=False):
result = ctx.invoke(add_surname, content=ctx.forward(add_name))
if to_stdout is True:
sys.stdout.writelines(result)
return result
</code></pre>
<p>Reference:
<a href="http://click.pocoo.org/6/advanced/#invoking-other-commands" rel="nofollow">http://click.pocoo.org/6/advanced/#invoking-other-commands</a></p>
| 1 | 2016-10-17T23:00:32Z | [
"python",
"command-line-arguments",
"stdout",
"piping",
"python-click"
] |
Given a binary tree, print all root-to-leaf paths using scipy | 40,091,369 | <p>I'm using the <code>hierarchy.to_tree</code> from scipy, and I'm interested in getting a print out of all root-to-leaf paths:</p>
<p><a href="https://i.stack.imgur.com/Nuzo8.gif" rel="nofollow"><img src="https://i.stack.imgur.com/Nuzo8.gif" alt="enter image description here"></a></p>
<p><code>
10.8.3
10.8.5
10.2.2
</code></p>
<pre><code>from scipy.cluster import hierarchy
a = hierarchy.to_tree(linkage_matrix)
</code></pre>
<p>I've given it a try</p>
<pre><code>linkage_matrix
[[2, 3, 0.06571365, 2], [0, 10, 0.07951425, 2], [5, 6, 0.09405724, 2], [11, 13, 0.10182075, 3], [1, 12, 0.12900146, 3], [14, 15, 0.13498948, 5], [8, 9, 0.16806049, 2], [7, 16, 0.1887918, 4], [17, 19, 0.2236683, 9], [18, 20, 0.29471335, 11], [4, 21, 0.45878, 12]]
from scipy.cluster import hierarchy
a = hierarchy.to_tree(linkage_matrix)
def parse_tree(tree, path):
path = path
if path ==[]:
path.append(str(tree.get_id()))
if tree.is_leaf() is False:
left = tree.get_left()
left_id = str(left.get_id())
if left.is_leaf() is False:
path.append(left_id)
parse_tree(left, path)
path.pop()
else:
parse_tree(left, path)
right = tree.get_right()
right_id = str(right.get_id())
if right.is_leaf() is False:
path.append(right_id)
parse_tree(right, path)
else:
path.append(str(tree.get_id()))
print(('.').join(path))
path.pop()
parse_tree(a, [])
</code></pre>
<p>But obviously my logic is completely wrong; specifically, it breaks down when the left node is not a leaf (22.21.20.17.15.19.7 should be 22.21.20.19.7). I'm looking for new approaches I have not considered. </p>
<p>For the below example tree, all root-to-leaf paths are: </p>
| 0 | 2016-10-17T16:29:32Z | 40,091,495 | <p>Without looking at your code, you should be doing something like:</p>
<pre><code>def print_paths(tree, seen):
    seen = seen[:]
    seen.append(tree.value)
    if not tree.children:
        print(seen)
    else:
        for child in tree.children:
            print_paths(child, seen)
</code></pre>
<p>Having now seen your code, try something like:</p>
<pre><code>def parse(tree, p):
path = p[:]
path.append(str(tree.get_id()))
if tree.is_leaf():
print('.'.join(path))
else:
#Here I assume get_left() returns some falsey value for no left child
left = tree.get_left()
if left:
parse(left, path)
right = tree.get_right()
if right:
parse(right, path)
</code></pre>
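<p>To see this logic in action without scipy, here is a self-contained sketch: the <code>Node</code> class below is a hypothetical stand-in for scipy's <code>ClusterNode</code> interface (<code>get_id</code>/<code>is_leaf</code>/<code>get_left</code>/<code>get_right</code>), and the paths are collected into a list instead of printed so they can be inspected:</p>

```python
class Node:
    # minimal stand-in for scipy.cluster.hierarchy.ClusterNode
    def __init__(self, node_id, left=None, right=None):
        self.node_id, self.left, self.right = node_id, left, right
    def get_id(self): return self.node_id
    def is_leaf(self): return self.left is None and self.right is None
    def get_left(self): return self.left
    def get_right(self): return self.right

def parse(tree, p, out):
    # extend a copy of the path, so sibling branches don't interfere
    path = p + [str(tree.get_id())]
    if tree.is_leaf():
        out.append('.'.join(path))
    else:
        for child in (tree.get_left(), tree.get_right()):
            if child:
                parse(child, path, out)

# a tree matching the question's example paths 10.8.3, 10.8.5, 10.2.2
root = Node(10, Node(8, Node(3), Node(5)), Node(2, Node(2)))
paths = []
parse(root, [], paths)
print(paths)  # ['10.8.3', '10.8.5', '10.2.2']
```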
| 1 | 2016-10-17T16:36:37Z | [
"python",
"tree",
"scipy"
] |
Remove only double letters sequences in word with best speed with python | 40,091,435 | <p>Finding/replacing double letters in a string is a very popular task. But there is a variant where removing double letters has to happen over several passes. For example, given the string <code>"skalallapennndraaa"</code>, after removing double letters the output should be <code>"skalpendra"</code>. I tried this solution: </p>
<pre><code>re.sub(r'([a-z])\1+', r'\1', "skalallapennndraaa")
</code></pre>
<p>, but this doesn't remove all double letters in the string (result: <code>"skalalapendra"</code>). If I use <code>r''</code> as the second parameter, I get a closely related result, <code>"skalaapendr"</code>, but I still can't find the right regular expression and replacement. Any ideas?</p>
| 0 | 2016-10-17T16:33:23Z | 40,091,593 | <p>You can use this double replacement:</p>
<pre><code>>>> s = 'skalallapennndraaa'
>>> print re.sub(r'([a-z])\1', '', re.sub(r'([a-z])([a-z])\2\1', '', s))
skalpendra
</code></pre>
<p><code>([a-z])([a-z])\2\1</code> will remove <code>alla</code> type of cases and <code>([a-z])\1</code> will remove remaining double letters.</p>
<hr>
<p><strong>Update:</strong> Based on comments below I realize a loop based approach is best. Here it is:</p>
<pre><code>>>> s = 'nballabnz'
>>> while re.search(r'([a-z])\1', s):
... s = re.sub(r'([a-z])\1', '', s)
...
>>> print s
z
</code></pre>
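<p>The same loop, wrapped into a Python 3 function (a sketch added for convenience; note the interactive snippet above uses Python 2's <code>print</code>):</p>

```python
import re

def collapse_pairs(s):
    # keep removing adjacent duplicate letters until a fixed point is reached
    while re.search(r'([a-z])\1', s):
        s = re.sub(r'([a-z])\1', '', s)
    return s

print(collapse_pairs('skalallapennndraaa'))  # skalpendra
print(collapse_pairs('nballabnz'))           # z
```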
| 2 | 2016-10-17T16:43:33Z | [
"python",
"regex",
"string"
] |
How is base Exception getting initialized? | 40,091,472 | <p>I'm confused by how the following exception in Python 3 gets initialized.</p>
<pre><code>class MyException( Exception ):
def __init__(self,msg,foo):
self.foo = foo
raise MyException('this is msg',123)
</code></pre>
<p>In Python 3, this outputs:</p>
<pre><code>Traceback (most recent call last):
File "exceptionTest.py", line 7, in <module>
raise MyException('this is msg',123)
__main__.MyException: ('this is msg', 123)
</code></pre>
<p>How are the arguments getting initialized? I'm surprised that the traceback shows the args since I'm not calling the superclass initializer. </p>
<p>In Python 2, I get the following output, which makes more sense to me since the args aren't included in the traceback.</p>
<pre><code>Traceback (most recent call last):
File "exceptionTest.py", line 7, in <module>
raise MyException('this is msg',123)
__main__.MyException
</code></pre>
| 0 | 2016-10-17T16:35:39Z | 40,091,499 | <p>The Python <code>BaseException</code> class is special in that it has a <code>__new__</code> method that is put there specifically to handle this common case of errors.</p>
<p>So no, <code>BaseException.__init__</code> is not being called, but <code>BaseException.__new__</code> <em>is</em>. You can override <code>__new__</code> and ignore the arguments passed in to suppress the setting of <code>self.args</code>:</p>
<pre><code>>>> class MyException(Exception):
... def __new__(cls, *args, **kw):
... return super().__new__(cls) # ignoring arguments
... def __init__(self,msg,foo):
... self.foo = foo
...
>>> MyException('this is msg', 123) # no need to raise to see the result
MyException()
</code></pre>
<p>This addition is specific to Python 3 and is not present in Python 2. See <a href="https://bugs.python.org/issue1692335" rel="nofollow">issue #1692335</a> for the motivations and details.</p>
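<p>A small script (not from the original answer) that makes the behaviour visible — <code>args</code> is populated even though the overridden <code>__init__</code> never calls the superclass initializer:</p>

```python
class MyException(Exception):
    def __init__(self, msg, foo):
        self.foo = foo  # deliberately no super().__init__() call

e = MyException('this is msg', 123)
print(e.args)  # ('this is msg', 123) -- captured by BaseException.__new__
print(e.foo)   # 123
```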
| 2 | 2016-10-17T16:37:08Z | [
"python",
"python-3.x",
"exception",
"python-2.x"
] |
Python HTML source code | 40,091,490 | <p>I would like to write a script that picks a special point from the source code and returns it. (print it)</p>
<pre><code>import urllib.request
Webseite = "http://myip.is/"
html_code = urllib.request.urlopen(Webseite)
print(html_code.read().decode('ISO-8859-1'))
</code></pre>
<p>This is my current code.
I would like to print only the IP address that the website shows
(the element with title="copy ip address").</p>
| 0 | 2016-10-17T16:36:27Z | 40,091,626 | <p>You could use <a href="http://jsonip.com" rel="nofollow">jsonip</a> which returns a JSON object that you can easily parse using standard Python library</p>
<pre><code>import json
from urllib.request import urlopen
my_ip = json.load(urlopen('http://jsonip.com'))['ip']
</code></pre>
| 0 | 2016-10-17T16:44:54Z | [
"python",
"python-3.x"
] |
Python HTML source code | 40,091,490 | <p>I would like to write a script that picks a special point from the source code and returns it. (print it)</p>
<pre><code>import urllib.request
Webseite = "http://myip.is/"
html_code = urllib.request.urlopen(Webseite)
print(html_code.read().decode('ISO-8859-1'))
</code></pre>
<p>This is my current code.
I would like to print only the IP address that the website shows
(the element with title="copy ip address").</p>
| 0 | 2016-10-17T16:36:27Z | 40,091,730 | <pre><code>import requests
from bs4 import BeautifulSoup
s = requests.Session()
r = s.get('http://myip.is/')
soup = BeautifulSoup(r.text, "html5lib")
myIP = soup.find('a', {'title': 'copy ip address'}).text
print(myIP)
</code></pre>
<p>This uses the requests library (which you should always use for HTTP requests) to pull the page, feeds the content to BeautifulSoup, a very nice HTML parser, and asks BeautifulSoup to find a single <code><a></code> tag, with the atrtibuet <code>title</code> set to 'copy ip address', and then save the text component of that tag as <code>myIP</code>. </p>
| 0 | 2016-10-17T16:51:40Z | [
"python",
"python-3.x"
] |
Python HTML source code | 40,091,490 | <p>I would like to write a script that picks a special point from the source code and returns it. (print it)</p>
<pre><code>import urllib.request
Webseite = "http://myip.is/"
html_code = urllib.request.urlopen(Webseite)
print(html_code.read().decode('ISO-8859-1'))
</code></pre>
<p>This is my current code.
I would like to print only the IP address that the website shows
(the element with title="copy ip address").</p>
| 0 | 2016-10-17T16:36:27Z | 40,092,082 | <p>You can use a regular expression to find the IP addresses:</p>
<pre><code>import urllib.request
import re
Webseite = "http://myip.is/"
html_code = urllib.request.urlopen(Webseite)
content = html_code.read().decode('ISO-8859-1')
ip_regex = r'(?:[0-9]{1,3}\.){3}[0-9]{1,3}'
ips_found = re.findall(ip_regex, content)
print(ips_found[0])
</code></pre>
| 0 | 2016-10-17T17:12:35Z | [
"python",
"python-3.x"
] |
In pytesting, how to provide the link/pointer to the file/image for uploading the image in flask | 40,091,539 | <p>I am using Flask with Python 3.5 and am trying to test my image-upload code with pytest. I have gone through various answers but sadly couldn't find one that solves my problem. This GitHub gist explains how to use filename and file_field:
<a href="https://gist.github.com/DazWorrall/1779861" rel="nofollow">https://gist.github.com/DazWorrall/1779861</a>
I tried it that way too, but I wasn't going in the right direction. Please help me solve my problem. Here, a POST request is used to upload the image; the content type should be multipart, but regarding the data, how do I send the image data and its path?</p>
<pre><code>test_client.post(
'/uploadimage',
content_type='multipart/form-data',
buffered=True,
data=dict(
file='user4.jpg', file_field=io.BytesIO(b"~/Downloads/images/")),
follow_redirects=True)
</code></pre>
<p>Here, while running under pytest, it doesn't recognise the file. I don't know why. I hope to get an answer soon. Thanks.</p>
| 0 | 2016-10-17T16:39:54Z | 40,100,812 | <p>Change slightly the way the data is sent:</p>
<pre><code>res = test_client.post(
    products_url,
    content_type='multipart/form-data',
    buffered=True,
    data={
        'file': (io.BytesIO(b'~/Downloads/images'), 'user4.jpg'),
    })
</code></pre>
<p>It is working fine for me.</p>
| 0 | 2016-10-18T06:12:48Z | [
"python",
"flask",
"py.test"
] |
Flask+Gunicorn+Nginx occur AttributeError "NoneType" has no attribute | 40,091,571 | <p>I'm reading the Flask Web Development book, and I've now deployed my app on a VPS.
I can visit my index page by IP.
But when I click the Login button (the page where you fill in your information or register an account),
it raises the error below.
I totally don't get it... why does it say the response of the Flask login view is NoneType,
with no attribute "set_cookie" or "delete_cookie"?</p>
<p><a href="https://i.stack.imgur.com/ktvP4.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/ktvP4.jpg" alt="enter image description here"></a></p>
<blockquote>
<p>Alarm information please check here</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/vP1gV.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/vP1gV.jpg" alt="enter image description here"></a></p>
<blockquote>
<p>My views.py code for the route "login" as below</p>
</blockquote>
<pre><code>from flask import render_template,redirect,request,url_for,flash
from flask.ext.login import login_user,current_user
from . import auth
from ..models import User
from .forms import LoginForm,RegistrationForm,ChangePasswordForm,PasswordResetRequestForm,PasswordResetForm,ChangeEmailForm
from flask.ext.login import logout_user,login_required
from app import db
from ..email import send_email
@auth.route('/login',methods=['GET','POST'])
def login():
form=LoginForm()
if form.validate_on_submit():
user=User.query.filter_by(email=form.email.data).first()
if user is not None and user.verify_password(form.password.data):
login_user(user,form.remember_me.data)
return redirect(request.args.get('next')or url_for('main.index'))
flash('Invalid username or password.')
return render_template('auth/login.html',form=form)
@auth.before_app_request
def before_request():
if current_user.is_authenticated:
current_user.ping()
if not current_user.confirmed and request.endpoint[:5] !='auth.':
return redirect(url_for('auth.unconfirmed'))
</code></pre>
<p>Please give me some suggestions; if you need any more information, please tell me.
Thanks a lot in advance!</p>
| -1 | 2016-10-17T16:42:00Z | 40,112,362 | <p>Finally I found the problem.
It was caused by another file, /main/views.py.
At the end of that file there is a function used to test the response time, and it returns the response object, but I had added one extra level of indentation to that line.
After I removed the extra indentation, it works normally.</p>
<pre><code>@main.after_app_request
def after_request(response):
for query in get_debug_queries():
if query.duration >= current_app.config['FLAKSY_SLOW_DB_QUERY_TIME']:
current_app.logger.warning('Slow query: %s\nParameters: %s\nDuration: %fs\nContext: %s\n' %
(query.statement, query.parameters, query.duration,
query.context))
    return response
</code></pre>
| 0 | 2016-10-18T15:32:26Z | [
"python",
"html",
"nginx",
"flask"
] |
Test for consecutive numbers in list | 40,091,617 | <p>I have a list that contains only integers, and I want to check if all the numbers in the list are consecutive (the order of the numbers does not matter).</p>
<p>If there are repeated elements, the function should return False.</p>
<p>Here is my attempt to solve this:</p>
<pre><code>def isconsecutive(lst):
"""
Returns True if all numbers in lst can be ordered consecutively, and False otherwise
"""
if len(set(lst)) == len(lst) and max(lst) - min(lst) == len(lst) - 1:
return True
else:
return False
</code></pre>
<p>For example:</p>
<pre><code>l = [-2,-3,-1,0,1,3,2,5,4]
print(isconsecutive(l))
True
</code></pre>
<p>Is this the best way to do this?</p>
| 2 | 2016-10-17T16:44:30Z | 40,091,652 | <p>Here is another solution:</p>
<pre><code>def is_consecutive(l):
setl = set(l)
return len(l) == len(setl) and setl == set(range(min(l), max(l)+1))
</code></pre>
<p>However, your solution is probably better as you don't store the whole range in memory.</p>
<p>Note that you can always simplify</p>
<pre><code>if boolean_expression:
return True
else:
return False
</code></pre>
<p>by</p>
<pre><code>return boolean_expression
</code></pre>
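<p>A few illustrative calls (test values assumed, not from the question):</p>

```python
def is_consecutive(l):
    setl = set(l)
    return len(l) == len(setl) and setl == set(range(min(l), max(l) + 1))

print(is_consecutive([-2, -3, -1, 0, 1, 3, 2, 5, 4]))  # True
print(is_consecutive([1, 2, 2, 3]))  # False: repeated element
print(is_consecutive([1, 2, 4]))     # False: gap in the sequence
```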
| 4 | 2016-10-17T16:46:29Z | [
"python",
"list",
"python-3.x"
] |
Test for consecutive numbers in list | 40,091,617 | <p>I have a list that contains only integers, and I want to check if all the numbers in the list are consecutive (the order of the numbers does not matter).</p>
<p>If there are repeated elements, the function should return False.</p>
<p>Here is my attempt to solve this:</p>
<pre><code>def isconsecutive(lst):
"""
Returns True if all numbers in lst can be ordered consecutively, and False otherwise
"""
if len(set(lst)) == len(lst) and max(lst) - min(lst) == len(lst) - 1:
return True
else:
return False
</code></pre>
<p>For example:</p>
<pre><code>l = [-2,-3,-1,0,1,3,2,5,4]
print(isconsecutive(l))
True
</code></pre>
<p>Is this the best way to do this?</p>
| 2 | 2016-10-17T16:44:30Z | 40,092,012 | <p>A better approach in terms of how many times you look at the elements would be to incorporate finding the <em>min</em>, <em>max</em> and <em>short circuiting</em> on any dupe all in one pass, although would probably be beaten by the speed of the builtin functions depending on the inputs:</p>
<pre><code>def mn_mx(l):
mn, mx = float("inf"), float("-inf")
seen = set()
for ele in l:
# if we already saw the ele, end the function
if ele in seen:
return False, False
if ele < mn:
mn = ele
if ele > mx:
mx = ele
seen.add(ele)
return mn, mx
def isconsecutive(lst):
"""
Returns True if all numbers in lst can be ordered consecutively, and False otherwise
"""
mn, mx = mn_mx(lst)
# could check either, if mn is False we found a dupe
if mn is False:
return False
# if we get here there are no dupes
return mx - mn == len(lst) - 1
</code></pre>
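<p>Restating the two helpers so the sketch is self-contained, and checking them against the question's example (the extra test inputs are assumptions):</p>

```python
def mn_mx(l):
    # single pass: track min and max, short-circuit on any duplicate
    mn, mx = float("inf"), float("-inf")
    seen = set()
    for ele in l:
        if ele in seen:
            return False, False
        if ele < mn:
            mn = ele
        if ele > mx:
            mx = ele
        seen.add(ele)
    return mn, mx

def isconsecutive(lst):
    mn, mx = mn_mx(lst)
    if mn is False:  # a duplicate was found
        return False
    return mx - mn == len(lst) - 1

print(isconsecutive([-2, -3, -1, 0, 1, 3, 2, 5, 4]))  # True
print(isconsecutive([1, 2, 2, 3]))  # False (duplicate short-circuits)
print(isconsecutive([1, 5]))        # False (range too wide)
```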
| 1 | 2016-10-17T17:08:19Z | [
"python",
"list",
"python-3.x"
] |
What does this ImportError mean when importing my c++ module? | 40,091,698 | <p>I've been working on writing a Python module in C++. I have a C++ <a href="https://github.com/justinrixx/Asteroids2.0/blob/master/gameNNRunner.cpp" rel="nofollow">program</a> that can run on its own. It works great, but I thought it would be better if I could actually call it like a function from Python. So I took my best go at it, and it builds and installs. Here's the code for my module (called nnrunner.cpp):</p>
<pre><code>#include <Python.h>
#include <vector>
#include "game.h"
#include "neuralnetai.h"
using namespace std;
/**************************************************
* This is the actual function that will be called
*************************************************/
static int run(string filename)
{
srand(clock());
Game * pGame = new Game();
vector<int> topology;
topology.push_back(20);
Network net(31, 4, topology);
net.fromFile(filename);
NNAI ai(pGame, net);
pGame->setAI(&ai);
while (!pGame->isGameOver())
pGame->update(NULL);
return pGame->getScore();
}
static PyObject *
nnrunner_run(PyObject * self, PyObject * args)
{
string filename;
int score;
if (!PyArg_ParseTuple(args, "s", &filename))
return NULL;
score = run(filename);
return PyLong_FromLong(score);
}
static PyMethodDef NnrunnerMethods[] = {
{"run", nnrunner_run, METH_VARARGS,
"Run the game and return the score"},
{NULL, NULL, 0, NULL} /* Sentinel */
};
static struct PyModuleDef nnrunnermodule = {
PyModuleDef_HEAD_INIT,
"nnrunner", /* name of module */
NULL, /* module documentation, may be NULL */
-1, /* size of per-interpreter state of the module,
or -1 if the module keeps state in global variables. */
NnrunnerMethods
};
PyMODINIT_FUNC
PyInit_nnrunner(void)
{
PyObject *m;
m = PyModule_Create(&nnrunnermodule);
if (m == NULL)
return NULL;
return m;
}
</code></pre>
<p>And my build script (called setup.py):</p>
<pre><code>from distutils.core import setup, Extension
module1 = Extension('nnrunner',
sources = ['nnrunner.cpp', 'game.cpp', 'uiDraw.cpp',
'uiInteract.cpp', 'player.cpp', 'ship.cpp', 'network.cpp'],
libraries = ['glut', 'GL', 'GLU'])
setup (name = 'NNRunner',
version = '1.0',
description = 'This is my first package',
ext_modules = [module1])
</code></pre>
<p>It has to compile with <code>-lglut -lGL -lGLU</code> due to a dependency, but it doesn't actually have any UI.
I can compile it and install it (<code>python setup.py build</code>, <code>python setup.py install</code>) but when I try to import it, I get errors:</p>
<pre><code>Python 3.5.2 |Anaconda 4.2.0 (64-bit)| (default, Jul 2 2016, 17:53:06)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import nnrunner
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: /home/justin/anaconda3/lib/python3.5/site-packages/nnrunner.cpython-35m-x86_64-linux-gnu.so: undefined symbol: _ZTVNSt7__cxx1115basic_stringbufIcSt11char_traitsIcESaIcEEE
>>>
</code></pre>
<p>Could somebody point me in the direction of documentation about this? This is the first time I've tried to make a Python module in C++.</p>
| 1 | 2016-10-17T16:49:32Z | 40,091,708 | <p>Most likely it means that you're importing a shared library that has a binary interface not compatible with your Python distribution. </p>
<p>So in your case: You have a 64-bit Python, and you're importing a 32-bit library, or vice-versa. (Or as suggested in a comment, a different compiler is used).</p>
| 1 | 2016-10-17T16:50:29Z | [
"python",
"c++"
] |
How can I create a mesh that can be changed without invalidating native sets in Abaqus? | 40,091,703 | <p>The nodesets I create "by feature edge" in Abaqus are invalidated if I change the mesh.
What are my options to prevent this from happening?</p>
<p>I am asking because I am trying to write a Python file in which to change the mesh as a parameter. That will not be possible if changing the mesh invalidates the nodesets.</p>
| -1 | 2016-10-17T16:49:49Z | 40,138,337 | <p>As usual, there is more than one way. </p>
<p>One technique, if you know the coordinates of some point on or near the edge(s) of interest, is to use the EdgeArray.findAt() method, followed with the Edge.getNodes() method to return the Node objects, and then defining a new set from them. You can use the following code for inspiration for other more complex methods you might dream up:</p>
<pre><code># Tested on Abaqus/CAE 6.12
# Assumes access from the Part-level (Assembly-level is similar):
p = mdb.models['Model-1'].parts['Part-1'] # the Part object
e = p.edges # an EdgeArray of Edge objects in the Part
# Look for your edge at the specified coords, then get the nodes:
e1 = e.findAt( coordinates = (0.5, 0.0, 0.0) ) # the Edge object of interest
e1_nodes = e1.getNodes() # A list of Node objects
# Specify the new node set:
e1_nset = p.SetFromNodeLabels( name='Nset-1', nodeLabels=[node.label for node in e1_nodes] )
</code></pre>
| 0 | 2016-10-19T17:50:44Z | [
"python",
"mesh",
"abaqus"
] |
GET variables with Jade in Django templates | 40,091,704 | <p>I use Jade (pyjade) with my Django project. For now I need to use <code>static</code> template tag with GET variable specified - something like following: <code>link(rel="shortcut icon", href="{% static 'images/favicon.ico?v=1' %}")</code>. But I get <code>/static/images/favicon.ico%3Fv%3D1</code> instead of <code>/static/images/favicon.ico?v=1</code>
Why does this happen and how can I fix it?
Thanks in advance!</p>
| 0 | 2016-10-17T16:50:03Z | 40,091,767 | <p>You could try <code>href="{% static 'images/favicon.ico' %}?v=1"</code></p>
| 0 | 2016-10-17T16:53:53Z | [
"python",
"django",
"jade"
] |
How to accept FormData sent via ajax in Flask? | 40,091,718 | <p>I'm trying to send an image file in a FormData using an Ajax POST request.
I am faced with 2 problems:</p>
<ol>
<li>I do not know how to extract the FormData on the flask part</li>
<li>I get a 500 internal server error when making an ajax POST request (not sure if this is because of 1) </li>
</ol>
<p>Thank you</p>
<p>Flask python code:</p>
<pre><code>@app.route('/', methods=['GET','POST'])
def upload_file():
if request.method == 'POST':
file = request.files['file']
if file: # and allowed_file(file.filename):
filename = secure_filename(file.filename)
file.save(os.path.join(os.getcwd()+"/static", "current_image.jpg"))
return jsonify({'tasks': tasks})
</code></pre>
<p>HTML and Javascript code:</p>
<pre><code><input id="pictureInput" type=file name=file>
<input type=submit value=Upload id="button">
<script type="text/javascript">
var pictureInput = document.getElementById("pictureInput");
var myFormData = new FormData();
myFormData.append('pictureFile', pictureInput.files[0]);
$("#button").click(function(){
console.log(pictureInput);
console.log(pictureInput.files[0]);
console.log(myFormData);
$.ajax({
url: "http://localhost:8000/",
type: 'POST',
processData: false, // important
contentType: false, // important
dataType : 'json',
data: myFormData,
success : function(data){
console.log(data);
},
});
});
</script>
</code></pre>
<p>Error:
<a href="https://i.stack.imgur.com/yiAgd.png" rel="nofollow"><img src="https://i.stack.imgur.com/yiAgd.png" alt="enter image description here"></a></p>
| 0 | 2016-10-17T16:50:56Z | 40,094,189 | <p>The following code should work for you. <strong>You need to have the <code>static</code> folder at the same level as your <code>app.py</code> file.</strong></p>
<p><strong>app.py</strong></p>
<pre><code>import os
from flask import Flask, request, jsonify
from werkzeug.utils import secure_filename
app = Flask(__name__)
@app.route('/', methods=['GET','POST'])
def upload_file():
if request.method == 'POST':
file = request.files['file']
if file:
filename = secure_filename(file.filename)
file.save(os.path.join(os.getcwd()+"/static", "current_image.jpg"))
tasks = []
return jsonify({'tasks': tasks})
if __name__ == "__main__":
app.run(host='0.0.0.0', debug=True)
</code></pre>
<p>tasks is not defined above in your code, so I just initialized it to an empty list. You need also to make sure that jQuery is loaded in your template.</p>
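<p>A quick way to check the server side is Flask's built-in test client. The sketch below (it assumes Flask is installed, and uses a minimal stand-in route rather than the full app) also shows the one thing to watch: the key passed to <code>FormData.append()</code> in the JavaScript must match the key read from <code>request.files</code> — in the question the client appends <code>'pictureFile'</code> while the server reads <code>'file'</code>.</p>

```python
# Sketch (assumes Flask is installed): exercising an upload route with
# Flask's test client. The multipart key ('file' here) must match the key
# the JavaScript uses in FormData.append().
import io
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/', methods=['POST'])
def upload_file():
    f = request.files['file']
    return jsonify({'filename': f.filename, 'size': len(f.read())})

client = app.test_client()
resp = client.post('/', data={'file': (io.BytesIO(b'abc'), 'pic.jpg')},
                   content_type='multipart/form-data')
print(resp.get_json())
```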
| 0 | 2016-10-17T19:28:37Z | [
"python",
"ajax",
"flask"
] |
Can you write numpy slicing generically? | 40,091,727 | <p>I want to do something like</p>
<pre><code>x[i, :, :] = (rhs[i, :, :]-diag[i] * x[i+1, :, :])/diag[i]
</code></pre>
<p>where x and rhs are 3D numpy arrays of size (T,L,S). diag is a 1D array of size T.</p>
<p>This will broadcast properly. </p>
<p>But now I'd like to write a similar function to work on 2D arrays or some other number of dimensions. How can I write this generically so that it will work on any array that has first dimension of size T. I don't want to duplicate code with just a different number of colons since there are a lot of these kinds of lines in the function. </p>
| 1 | 2016-10-17T16:51:33Z | 40,091,799 | <pre><code>x[i] = (rhs[i] - diag[i] * x[i+1])/diag[i]
</code></pre>
<p>Those colons are completely unnecessary.</p>
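<p>A small sketch (with hypothetical shapes, just to illustrate the broadcasting) showing the same update step working unchanged for 2D and 3D arrays:</p>

```python
import numpy as np

def step(x, rhs, diag, i):
    # no trailing colons needed: x[i] is the whole (L, S) or (L,) slice
    x[i] = (rhs[i] - diag[i] * x[i + 1]) / diag[i]

for shape in [(4, 3), (4, 3, 2)]:           # any number of trailing dims
    x = np.ones(shape)
    rhs = np.full(shape, 2.0)
    diag = np.array([1.0, 2.0, 2.0, 2.0])
    step(x, rhs, diag, 1)                   # (2 - 2*1) / 2 == 0
    assert np.all(x[1] == 0.0)
```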
| 2 | 2016-10-17T16:55:30Z | [
"python",
"numpy"
] |
python os.walk and unicode error | 40,091,750 | <p>two questions:
1. why does </p>
<pre><code>In [21]:
....: for root, dir, file in os.walk(spath):
....: print(root)
</code></pre>
<p>print the whole tree but</p>
<pre><code>In [6]: for dirs in os.walk(spath):
...: print(dirs)
</code></pre>
<p>chokes on this unicode error?</p>
<pre><code>UnicodeEncodeError: 'charmap' codec can't encode character '\u2122' in position 1477: character maps to <undefined>
</code></pre>
<p>[NOTE: this is the TM symbol]</p>
<ol start="2">
<li>I looked at these answers</li>
</ol>
<p><a href="http://stackoverflow.com/questions/22184178/scraping-works-well-until-i-get-this-error-ascii-codec-cant-encode-character">Scraping works well until I get this error: 'ascii' codec can't encode character u'\u2122' in position</a></p>
<p><a href="http://stackoverflow.com/questions/30539882/whats-the-deal-with-python-3-4-unicode-different-languages-and-windows/30551552#30551552">What's the deal with Python 3.4, Unicode, different languages and Windows?</a></p>
<p><a href="http://stackoverflow.com/questions/16346914/python-3-2-unicodeencodeerror-charmap-codec-cant-encode-character-u2013-i?noredirect=1&lq=1">python 3.2 UnicodeEncodeError: 'charmap' codec can't encode character '\u2013' in position 9629: character maps to <undefined></a></p>
<p><a href="https://github.com/Drekin/win-unicode-console" rel="nofollow">https://github.com/Drekin/win-unicode-console</a></p>
<p><a href="https://docs.python.org/3/search.html?q=IncrementalDecoder&check_keywords=yes&area=default" rel="nofollow">https://docs.python.org/3/search.html?q=IncrementalDecoder&check_keywords=yes&area=default</a></p>
<p>and tried these variations</p>
<pre><code>----> 1 print(dirs, encoding='utf-8')
TypeError: 'encoding' is an invalid keyword argument for this function
In [11]: >>> u'\u2122'.encode('ascii', 'ignore')
Out[11]: b''
print(dirs).encode(“utf=8”)
</code></pre>
<p>all to no effect.</p>
<p>This was done with python 3.4.3 and visual studio code 1.6.1 on Windows 10. The default settings in Visual Studio Code include:</p>
<blockquote>
<p>// The default character set encoding to use when reading and writing
files.
"files.encoding": "utf8",</p>
</blockquote>
<p>python 3.4.3
visual studio code 1.6.1
ipython 3.0.0</p>
<p><strong>UPDATE EDIT</strong>
I tried this again in the Sublime Text REPL, running a script. Here's what I got:</p>
<pre><code># -*- coding: utf-8 -*-
import os
spath = 'C:/Users/Semantic/Documents/Align'
with open('os_walk4_align.txt', 'w') as f:
for path, dirs, filenames in os.walk(spath):
print(path, dirs, filenames, file=f)
Traceback (most recent call last):
File "listdir_test1.py", line 8, in <module>
print(path, dirs, filenames, file=f)
File "C:\Python34\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u2605' in position 300: character maps to <undefined>
</code></pre>
<p>This code is only 217 characters long, so where does “position 300” come from?</p>
| 0 | 2016-10-17T16:52:43Z | 40,097,714 | <p>The console you're outputting to doesn't support non-ASCII by default. You need to use <code>str.encode('utf-8')</code>.</p>
<p>That works on strings, not on lists. So <code>print(dirs).encode(“utf=8”)</code> won't work, and it's <code>utf-8</code>, not <code>utf=8</code>.</p>
<p>Print your lists with list comprehension like:</p>
<pre><code>>>> print([s.encode('utf-8') for s in ['a', 'b']])
[b'a', b'b']
>>> print([d.encode('utf-8') for d in dirs]) # to print `dirs`
</code></pre>
| -1 | 2016-10-18T00:32:16Z | [
"python",
"unicode",
"encoding",
"utf-8",
"vscode"
] |
python os.walk and unicode error | 40,091,750 | <p>two questions:
1. why does </p>
<pre><code>In [21]:
....: for root, dir, file in os.walk(spath):
....: print(root)
</code></pre>
<p>print the whole tree but</p>
<pre><code>In [6]: for dirs in os.walk(spath):
...: print(dirs)
</code></pre>
<p>chokes on this unicode error?</p>
<pre><code>UnicodeEncodeError: 'charmap' codec can't encode character '\u2122' in position 1477: character maps to <undefined>
</code></pre>
<p>[NOTE: this is the TM symbol]</p>
<ol start="2">
<li>I looked at these answers</li>
</ol>
<p><a href="http://stackoverflow.com/questions/22184178/scraping-works-well-until-i-get-this-error-ascii-codec-cant-encode-character">Scraping works well until I get this error: 'ascii' codec can't encode character u'\u2122' in position</a></p>
<p><a href="http://stackoverflow.com/questions/30539882/whats-the-deal-with-python-3-4-unicode-different-languages-and-windows/30551552#30551552">What's the deal with Python 3.4, Unicode, different languages and Windows?</a></p>
<p><a href="http://stackoverflow.com/questions/16346914/python-3-2-unicodeencodeerror-charmap-codec-cant-encode-character-u2013-i?noredirect=1&lq=1">python 3.2 UnicodeEncodeError: 'charmap' codec can't encode character '\u2013' in position 9629: character maps to <undefined></a></p>
<p><a href="https://github.com/Drekin/win-unicode-console" rel="nofollow">https://github.com/Drekin/win-unicode-console</a></p>
<p><a href="https://docs.python.org/3/search.html?q=IncrementalDecoder&check_keywords=yes&area=default" rel="nofollow">https://docs.python.org/3/search.html?q=IncrementalDecoder&check_keywords=yes&area=default</a></p>
<p>and tried these variations</p>
<pre><code>----> 1 print(dirs, encoding='utf-8')
TypeError: 'encoding' is an invalid keyword argument for this function
In [11]: >>> u'\u2122'.encode('ascii', 'ignore')
Out[11]: b''
print(dirs).encode(“utf=8”)
</code></pre>
<p>all to no effect.</p>
<p>This was done with python 3.4.3 and visual studio code 1.6.1 on Windows 10. The default settings in Visual Studio Code include:</p>
<blockquote>
<p>// The default character set encoding to use when reading and writing
files.
"files.encoding": "utf8",</p>
</blockquote>
<p>python 3.4.3
visual studio code 1.6.1
ipython 3.0.0</p>
<p><strong>UPDATE EDIT</strong>
I tried this again in the Sublime Text REPL, running a script. Here's what I got:</p>
<pre><code># -*- coding: utf-8 -*-
import os
spath = 'C:/Users/Semantic/Documents/Align'
with open('os_walk4_align.txt', 'w') as f:
for path, dirs, filenames in os.walk(spath):
print(path, dirs, filenames, file=f)
Traceback (most recent call last):
File "listdir_test1.py", line 8, in <module>
print(path, dirs, filenames, file=f)
File "C:\Python34\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u2605' in position 300: character maps to <undefined>
</code></pre>
<p>This code is only 217 characters long, so where does “position 300” come from?</p>
| 0 | 2016-10-17T16:52:43Z | 40,098,261 | <p>Here's a test case:</p>
<pre><code>C:\TEST
├───dir1
│       file1™
│
└───dir2
        file2
</code></pre>
<p>Here's a script (Python 3.x):</p>
<pre><code>import os
spath = r'c:\test'
for root,dirs,files in os.walk(spath):
print(root)
for dirs in os.walk(spath):
print(dirs)
</code></pre>
<p>Here's the output, on an IDE that supports UTF-8 (PythonWin, in this case):</p>
<pre><code>c:\test
c:\test\dir1
c:\test\dir2
('c:\\test', ['dir1', 'dir2'], [])
('c:\\test\\dir1', [], ['file1™'])
('c:\\test\\dir2', [], ['file2'])
</code></pre>
<p>Here's the output, on my Windows console, which defaults to <code>cp437</code>:</p>
<pre><code>c:\test
c:\test\dir1
c:\test\dir2
('c:\\test', ['dir1', 'dir2'], [])
Traceback (most recent call last):
File "C:\test.py", line 9, in <module>
print(dirs)
File "C:\Python33\lib\encodings\cp437.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_map)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u2122' in position 47: character maps to <undefined>
</code></pre>
<p>For Question 1, the reason <code>print(root)</code> works is that no directory had a character that wasn't supported by the output encoding, but <code>print(dirs)</code> is now printing a tuple containing <code>(root,dirs,files)</code> and one of the files has an unsupported character in the Windows console.</p>
<p>For Question 2, the first example misspelled <code>utf-8</code> as <code>utf=8</code>, and the second example didn't declare an encoding for the file the output was written to, so it used a default that didn't support the character.</p>
<p>Try this:</p>
<pre><code>import os
spath = r'c:\test'
with open('os_walk4_align.txt', 'w', encoding='utf8') as f:
for path, dirs, filenames in os.walk(spath):
print(path, dirs, filenames, file=f)
</code></pre>
<p>Content of <code>os_walk4_align.txt</code>, encoded in UTF-8:</p>
<pre><code>c:\test ['dir1', 'dir2'] []
c:\test\dir1 [] ['file1™']
c:\test\dir2 [] ['file2']
</code></pre>
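<p>The same failure can be reproduced without any console at all — it is the codec, not the terminal, that matters. A short sketch:</p>

```python
# '\u2605' (★) is representable in utf-8 but not in cp1252,
# the codec named in the traceback above
name = 'file1\u2605'

try:
    name.encode('cp1252')
    failed = False
except UnicodeEncodeError:
    failed = True

assert failed
assert name.encode('utf-8') == b'file1\xe2\x98\x85'
```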
| 2 | 2016-10-18T01:46:41Z | [
"python",
"unicode",
"encoding",
"utf-8",
"vscode"
] |
CSV prints blank column between each data column | 40,091,753 | <p>So, the title explains my question - please see the image for clarification. </p>
<p><a href="https://i.stack.imgur.com/gPJGV.png" rel="nofollow"><img src="https://i.stack.imgur.com/gPJGV.png" alt="enter image description here"></a></p>
<p>I have tried the fixes from a couple of questions on here to no avail, so here is my code: </p>
<pre><code>def csv_writer(file_a, file_b):
with open('Comparison.csv', 'wb') as csv_file:
writer = csv.writer(csv_file, delimiter=',', escapechar=' ', lineterminator='\n', quoting=csv.QUOTE_NONE)
line = map(compare, file_a.master_list, file_b.master_list)
for line in line:
if line != None:
writer.writerow(line)
else:
del line
</code></pre>
<p>The data for <code>line</code> on line 6 is a list in the following format:
<code>['20,', 'start,', '1000,', '1002']</code>.</p>
<p>I assume that my issue is to do with <code>escapechar=' '</code>, but having tried a variety of different options with no success I'm at a loss. </p>
| -1 | 2016-10-17T16:53:04Z | 40,091,911 | <p>I believe what is happening is that you are getting two <code>,</code> characters between each data item.</p>
<p>For example, in <code>['20,', 'start,', '1000,', '1002']</code>, each element is joined with a <code>,</code> because of the <code>delimiter=','</code> option. This means that the <code>,</code> is being repeated and you'll end up with:</p>
<pre><code>20,,start,,1000,,1002
</code></pre>
<p>Notice how there is an empty item between each of the items you want.</p>
<p>You can fix this by stripping the trailing <code>,</code> from each item before it is written, since the items already contain the separator. (Note that the <code>csv</code> module requires a one-character delimiter, so an empty <code>delimiter=''</code> is not allowed.)</p>
<p>Hope this helps!</p>
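<p>A sketch of that fix with the sample row from the question — strip the commas already embedded in the items and let <code>csv.writer</code> supply the delimiters itself:</p>

```python
import csv
import io

line = ['20,', 'start,', '1000,', '1002']     # items already contain commas
out = io.StringIO()
writer = csv.writer(out, lineterminator='\n')
writer.writerow([item.rstrip(',') for item in line])
print(out.getvalue(), end='')  # 20,start,1000,1002
```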
| 0 | 2016-10-17T17:01:30Z | [
"python",
"python-2.7",
"csv"
] |
Pandas: How to find the first valid column among a series of columns | 40,091,796 | <p>I have a dataset of different sections of a race in a pandas dataframe from which I need to calculate certain features. It looks something like this:</p>
<pre><code>id distance timeto1000m timeto800m timeto600m timeto400m timeto200m timetoFinish
1 1400m 10 21 30 39 50 60
2 1200m 0 19 31 42 49 57
3 1800m 0 0 0 38 49 62
4 1000m 0 0 29 40 48 61
</code></pre>
<p>So, what I need to do is for each row find the first <code>timetoXXm</code> column that is non-zero and the correspoding distance <code>XX</code>. For instance, for <code>id=1</code> that would be 1000m, for <code>id=3</code> that would be 400m etc. </p>
<p>I can do this with a series of <code>if..elif..else</code> conditions but was wondering if there is a better way of doing this kind of lookup in pandas/numpy?</p>
| 2 | 2016-10-17T16:55:23Z | 40,091,913 | <p>You can do it like this, first filter the cols of interest and take a slice, then call <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmin.html" rel="nofollow"><code>idxmin</code></a> on the cols of interest to return the columns where the boolean condition is met:</p>
<pre><code>In [11]:
df_slice = df.ix[:,df.columns.str.startswith('time')]
df_slice[df_slice!=0].idxmin(axis=1)
Out[11]:
0 timeto1000m
1 timeto800m
2 timeto400m
3 timeto600m
dtype: object
In [15]:
df['first_valid'] = df_slice[df_slice!=0].idxmin(axis=1)
df[['id','first_valid']]
Out[15]:
id first_valid
0 1 timeto1000m
1 2 timeto800m
2 3 timeto400m
3 4 timeto600m
</code></pre>
| 2 | 2016-10-17T17:01:42Z | [
"python",
"pandas",
"numpy",
"feature-extraction"
] |
Pandas: How to find the first valid column among a series of columns | 40,091,796 | <p>I have a dataset of different sections of a race in a pandas dataframe from which I need to calculate certain features. It looks something like this:</p>
<pre><code>id distance timeto1000m timeto800m timeto600m timeto400m timeto200m timetoFinish
1 1400m 10 21 30 39 50 60
2 1200m 0 19 31 42 49 57
3 1800m 0 0 0 38 49 62
4 1000m 0 0 29 40 48 61
</code></pre>
<p>So, what I need to do is for each row find the first <code>timetoXXm</code> column that is non-zero and the correspoding distance <code>XX</code>. For instance, for <code>id=1</code> that would be 1000m, for <code>id=3</code> that would be 400m etc. </p>
<p>I can do this with a series of <code>if..elif..else</code> conditions but was wondering if there is a better way of doing this kind of lookup in pandas/numpy?</p>
| 2 | 2016-10-17T16:55:23Z | 40,091,914 | <p>use <code>idxmax(1)</code></p>
<pre><code>df.set_index(['id', 'distance']).ne(0).idxmax(1)
id distance
1 1400m timeto1000m
2 1200m timeto800m
3 1800m timeto400m
4 1000m timeto600m
dtype: object
</code></pre>
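<p>If the numeric distance itself is needed rather than the column name, it can be pulled out of the winning label afterwards. A small sketch:</p>

```python
import re

# labels as returned by idxmax/idxmin above
winners = ['timeto1000m', 'timeto800m', 'timeto400m', 'timeto600m']
dists = [int(re.match(r'timeto(\d+)m', w).group(1)) for w in winners]
print(dists)  # [1000, 800, 400, 600]
```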
| 1 | 2016-10-17T17:01:56Z | [
"python",
"pandas",
"numpy",
"feature-extraction"
] |
Applying a matrix decomposition for classification using a saved W matrix | 40,091,833 | <p>I'm performing an NMF decomposition on a tf-idf input in order to perform topic analysis. </p>
<pre><code>def decomp(tfidfm, topic_count):
model = decomposition.NMF(init="nndsvd", n_components=topic_count, max_iter=500)
H = model.fit_transform(tfidfm)
W = model.components_
return W, H
</code></pre>
<p>This returns <em>W</em>, a model definition consisting of topics to term assignments, and <em>H</em>, a document to topic assignment matrix</p>
<p>So far so good, I can use H to classify documents based on their association via term frequency to a list of topics which in turn are also based on their association to term frequency. </p>
<p>I'd like to save the topic-term-associations to disk so I can reapply them later - and have adopted the method described here [<a href="http://stackoverflow.com/questions/8955448]">http://stackoverflow.com/questions/8955448]</a> to store the sparse-matrix reperesentation of W.</p>
<p>So what I'd like to do now, is perform the same process, only fixing the topic-definition matrix W.</p>
<p>In the documentation, it appears that I can set W in the calling parameters something along the lines of:</p>
<pre><code>def applyModel(tfidfm,W,topic_count):
model = decomposition.NMF(init="nndsvd", n_components=topic_count, max_iter=500)
H = model.fit_transform(X=tfidfm, W=W)
W = model.components_
return W, H
</code></pre>
<p>And I've tried this, but it doesn't appear to work. </p>
<p>I've tested by compiling a W matrix using a differently sized vocabulary, then feeding that into the <code>applyModel</code> function, the shape of the resulting matrices should be defined (or I should say, that is what I'm intending) by the W model, but this isn't the case. </p>
<p>The short version of this question is: How can I save the topic-model generated from a matrix decomposition, such that I can use it to classify a different document set than the one used to originally generate it?</p>
<p>In other terms, if <strong>V</strong>=<strong>WH</strong>, then how can I return <strong>H</strong>, given <strong>V</strong> and <strong>W</strong>?</p>
 | 0 | 2016-10-17T16:57:07Z | 40,092,629 | <p>The initial equation is: <img src="https://i.stack.imgur.com/Vg05x.gif" alt="initial equation"> and we solve it for <img src="https://i.stack.imgur.com/yuuvq.gif" alt="H"> like this: <img src="https://i.stack.imgur.com/I4jLY.gif" alt="How to solve it for H">.</p>
<p>Here <img src="https://i.stack.imgur.com/ClqEe.gif" alt="inverse of W"> denotes the inverse of the matrix <img src="https://i.stack.imgur.com/ki72B.gif" alt="W">, which exists only if <img src="https://i.stack.imgur.com/ki72B.gif" alt="W"> is nonsingular.</p>
<p>The multiplication order is, as always, important. If you had <img src="https://i.stack.imgur.com/icKMv.gif" alt="if the order is changed">, you'd need to multiply <img src="https://i.stack.imgur.com/lvvsw.gif" alt="V"> by the inverse of <img src="https://i.stack.imgur.com/ki72B.gif" alt="W"> the other way round: <img src="https://i.stack.imgur.com/Z5Mtd.gif" alt="no description">.</p>
| 1 | 2016-10-17T17:47:35Z | [
"python",
"scikit-learn",
"tf-idf",
"matrix-decomposition",
"nmf"
] |
Applying a matrix decomposition for classification using a saved W matrix | 40,091,833 | <p>I'm performing an NMF decomposition on a tf-idf input in order to perform topic analysis. </p>
<pre><code>def decomp(tfidfm, topic_count):
model = decomposition.NMF(init="nndsvd", n_components=topic_count, max_iter=500)
H = model.fit_transform(tfidfm)
W = model.components_
return W, H
</code></pre>
<p>This returns <em>W</em>, a model definition consisting of topics to term assignments, and <em>H</em>, a document to topic assignment matrix</p>
<p>So far so good, I can use H to classify documents based on their association via term frequency to a list of topics which in turn are also based on their association to term frequency. </p>
<p>I'd like to save the topic-term-associations to disk so I can reapply them later - and have adopted the method described here [<a href="http://stackoverflow.com/questions/8955448]">http://stackoverflow.com/questions/8955448]</a> to store the sparse-matrix reperesentation of W.</p>
<p>So what I'd like to do now, is perform the same process, only fixing the topic-definition matrix W.</p>
<p>In the documentation, it appears that I can set W in the calling parameters something along the lines of:</p>
<pre><code>def applyModel(tfidfm,W,topic_count):
model = decomposition.NMF(init="nndsvd", n_components=topic_count, max_iter=500)
H = model.fit_transform(X=tfidfm, W=W)
W = model.components_
return W, H
</code></pre>
<p>And I've tried this, but it doesn't appear to work. </p>
<p>I've tested by compiling a W matrix using a differently sized vocabulary, then feeding that into the <code>applyModel</code> function, the shape of the resulting matrices should be defined (or I should say, that is what I'm intending) by the W model, but this isn't the case. </p>
<p>The short version of this question is: How can I save the topic-model generated from a matrix decomposition, such that I can use it to classify a different document set than the one used to originally generate it?</p>
<p>In other terms, if <strong>V</strong>=<strong>WH</strong>, then how can I return <strong>H</strong>, given <strong>V</strong> and <strong>W</strong>?</p>
| 0 | 2016-10-17T16:57:07Z | 40,134,433 | <p>For completeness, here's the rewritten <code>applyModel</code> function that takes into account the answer from ForceBru (uses an import of <code>scipy.sparse.linalg</code>)</p>
<pre><code>def applyModel(tfidfm,W):
H = tfidfm * linalg.inv(W)
return H
</code></pre>
<p>This returns (assuming an aligned vocabulary) a mapping of documents to topics <strong>H</strong> based on a pregenerated topic-model <strong>W</strong> and document feature matrix <strong>V</strong> generated by tfidf.</p>
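<p>One caveat: <strong>W</strong> (topics × terms) is generally not square, so a plain inverse does not exist and a pseudo-inverse is the safer tool. A small numeric sketch with made-up shapes, using the question's convention <strong>V</strong> = <strong>H</strong><strong>W</strong>:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((3, 5))            # 3 topics over a 5-term vocabulary
H_true = rng.random((4, 3))       # 4 documents over 3 topics
V = H_true @ W                    # docs x terms, i.e. V = H W
H = V @ np.linalg.pinv(W)         # recover H from V and a fixed W
assert np.allclose(H, H_true)
```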
| 0 | 2016-10-19T14:33:47Z | [
"python",
"scikit-learn",
"tf-idf",
"matrix-decomposition",
"nmf"
] |
Find the row number using the first element of the row | 40,091,864 | <p>I have a <code>numpy</code> matrix where I store some kind of key in the first element of the each row (or in another way all keys are in the first column). </p>
<pre><code>[[123,0,1,1,2],
[12,1,2,3,4],
[1,0,2,5,4],
[90,1,1,4,3]]
</code></pre>
<p>I want to get the row number searching by the key. I found that we can use <code>numpy.where</code> for this but not clear how to employ it to get the row number. I want something like</p>
<pre><code>>>numpy.func(myMatrix,90)
3
</code></pre>
<p>Any help would be appreciated.</p>
| 0 | 2016-10-17T16:59:03Z | 40,092,149 | <p>Compare the first column with <code>90</code> within <code>np.where</code>. It will return an array of the indices of items which are equal with <code>90</code>:</p>
<pre><code>In [3]: A = np.array([[123,0,1,1,2],
[12,1,2,3,4],
[1,0,2,5,4],
[90,1,1,4,3]])
In [6]: np.where(A[:,0]==90)[0]
Out[6]: array([3])
</code></pre>
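<p>If many key lookups are needed, building a plain dict from the key column once makes each query O(1). A quick sketch:</p>

```python
import numpy as np

A = np.array([[123, 0, 1, 1, 2],
              [12, 1, 2, 3, 4],
              [1, 0, 2, 5, 4],
              [90, 1, 1, 4, 3]])

# map key (first column) -> row number, built in one pass
row_of = {key: i for i, key in enumerate(A[:, 0])}
print(row_of[90])  # 3
```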
| 0 | 2016-10-17T17:16:36Z | [
"python",
"numpy"
] |
Find the row number using the first element of the row | 40,091,864 | <p>I have a <code>numpy</code> matrix where I store some kind of key in the first element of the each row (or in another way all keys are in the first column). </p>
<pre><code>[[123,0,1,1,2],
[12,1,2,3,4],
[1,0,2,5,4],
[90,1,1,4,3]]
</code></pre>
<p>I want to get the row number searching by the key. I found that we can use <code>numpy.where</code> for this but not clear how to employ it to get the row number. I want something like</p>
<pre><code>>>numpy.func(myMatrix,90)
3
</code></pre>
<p>Any help would be appreciated.</p>
| 0 | 2016-10-17T16:59:03Z | 40,092,227 | <p>According to the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html#numpy.where" rel="nofollow">online doc</a>, <code>numpy.where</code> if you give it only a boolean array will return lists of coordinates (one list per dimension) for the elements that are <code>True</code>.
So we can get the information you want by grabbing the first column of your array, comparing it with the element you want to find, and calling np.where on that boolean array. That would look like this:</p>
<pre><code>row, = np.where(myMatrix[:,0]==90)
# np.where on a 1-D condition returns a one-element tuple,
# so there is only a single array of row indices to unpack
</code></pre>
| 1 | 2016-10-17T17:21:24Z | [
"python",
"numpy"
] |
How to find a sentence containing a phrase in text using python re? | 40,091,894 | <p>I have some text which is sentences, some of which are questions. I'm trying to create a regular expression which will extract only the questions which contain a specific phrase, namely 'NSF' :</p>
<pre><code>import re
s = "This is a string. Is this a question? This isn't a question about NSF. Is this one about NSF? This one is a question about NSF but is it longer?"
</code></pre>
<p>Ideally, the re.findall would return:</p>
<pre><code>['Is this one about NSF?','This one is a question about NSF but is it longer?']
</code></pre>
<p>but my current best attempt is:</p>
<pre><code>re.findall('([\.\?].*?NSF.*\?)+?',s)
[". Is this a question? This isn't a question about NSF. Is this one about NSF? This one is a question about NSF but is it longer?"]
</code></pre>
<p>I know I need to do something with non-greedy-ness, but I'm not sure where I'm messing up.</p>
 | 3 | 2016-10-17T17:00:47Z | 40,094,792 | <p><em>DISCLAIMER</em>: The answer is not aiming at a generic interrogative sentence splitting solution, but rather shows how the strings supplied by the OP can be matched with regular expressions. The best solution is to tokenize the text into sentences with <a href="http://www.nltk.org/" rel="nofollow"><code>nltk</code></a> and parse sentences (see <a href="http://stackoverflow.com/questions/17879551/nltk-find-if-a-sentence-is-in-a-questioning-form">this thread</a>).</p>
<p>The regex you might want to use for strings like the one you posted is based on matching all chars that are not final punctuation, then matching the substring you want to appear inside the sentence, and then matching those chars other than final punctuation again. To negate a single character, use negated character classes.</p>
<pre><code>\s*([^!.?]*?NSF[^!.?]*?[?])
</code></pre>
<p>See the <a href="https://regex101.com/r/LXr6x8/3" rel="nofollow">regex demo</a>.</p>
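<p>The same check in code, a sketch using the sample string from the question:</p>

```python
import re

s = ("This is a string. Is this a question? This isn't a question about NSF. "
     "Is this one about NSF? This one is a question about NSF but is it longer?")
matches = re.findall(r'\s*([^!.?]*?NSF[^!.?]*?[?])', s)
print(matches)
# ['Is this one about NSF?', 'This one is a question about NSF but is it longer?']
```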
<p><strong>Details</strong>:</p>
<ul>
<li><code>\s*</code> - 0+ whitespaces</li>
<li><code>([^!.?]*?NSF[^.?]*?[?])</code> - Group 1 capturing
<ul>
<li><code>[^!.?]*?</code> - 0+ chars other than <code>.</code>, <code>!</code> and <code>?</code>, as few as possible</li>
<li><code>NSF</code> - the value you need to be present, a sequence of chars <code>NSF</code></li>
<li><code>[^.?]*?</code> - ibid.</li>
<li><code>[?]</code> - a literal <code>?</code> (can be replaced with <code>\?</code>)</li>
</ul></li>
</ul>
| 1 | 2016-10-17T20:06:53Z | [
"python",
"regex",
"non-greedy"
] |
Deploying Django With AWS | 40,091,919 | <p>So I am trying to <strong>deploy</strong> my <strong>django</strong> app(which mostly has REST Apis) but when I use Amazon CLI, I end up having <strong>Fedora instance</strong>, while I <strong>want</strong> to use <strong>Ubuntu instance</strong>. </p>
<p>So I tried to do this, I made an ubuntu instance, made a repository of my code, installed git on ubuntu and cloned the code from git to ubuntu. Next thing, I installed all the requirements.txt dependencies and everything is in virtualenv and working fine. </p>
<p>But here's the <strong>catch</strong>, <code>python manage.py runserver</code> runs it on <code>localhost</code>(not really surprising). So the question is, how to serve those apis(not on localhost)?</p>
| 0 | 2016-10-17T17:02:09Z | 40,091,976 | <p>Don't use the <code>runserver</code> command on production. It's meant for local development only. </p>
<p>On production, you need to setup an application server (uwsgi / gunicorn) and then use nginx as a reverse proxy. </p>
<p>Digital Ocean articles are pretty good - <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-14-04" rel="nofollow">https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-14-04</a> </p>
<p>(The same stuff applies for AWS as well)</p>
| 3 | 2016-10-17T17:05:43Z | [
"python",
"django",
"git",
"amazon-web-services",
"ubuntu"
] |
Deploying Django With AWS | 40,091,919 | <p>So I am trying to <strong>deploy</strong> my <strong>django</strong> app(which mostly has REST Apis) but when I use Amazon CLI, I end up having <strong>Fedora instance</strong>, while I <strong>want</strong> to use <strong>Ubuntu instance</strong>. </p>
<p>So I tried to do this, I made an ubuntu instance, made a repository of my code, installed git on ubuntu and cloned the code from git to ubuntu. Next thing, I installed all the requirements.txt dependencies and everything is in virtualenv and working fine. </p>
<p>But here's the <strong>catch</strong>, <code>python manage.py runserver</code> runs it on <code>localhost</code>(not really surprising). So the question is, how to serve those apis(not on localhost)?</p>
| 0 | 2016-10-17T17:02:09Z | 40,092,275 | <p>As mentioned in the other answer, <code>runserver</code> command is only meant for local development. You can, in fact, make it listen on external interfaces by running it as <code>python manage.py runserver 0.0.0.0:8000</code>, but it is a bad idea. Configuring nginx+uwsgi to run a Django app is very easy. There are multiple tutorials and guides available for this. Here is the official uWSGI guide
<a href="http://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html" rel="nofollow">http://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html</a></p>
| 0 | 2016-10-17T17:25:15Z | [
"python",
"django",
"git",
"amazon-web-services",
"ubuntu"
] |
z3py: simplify nested Stores with concrete values | 40,091,927 | <p>I'm using the python API of Z3 [version 4.4.2 - 64 bit] and I'm trying to understand why z3 simplifies the expression in this case:</p>
<pre><code>>>> a = Array('a', IntSort(), IntSort())
>>> a = Store(a, 0, 1)
>>> a = Store(a, 0, 3)
>>> simplify(a)
Store(a, 0, 3)
</code></pre>
<p>but it doesn't in this case:</p>
<pre><code>>>> a = Array('a', IntSort(), IntSort())
>>> a = Store(a, 0, 1)
>>> a = Store(a, 1, 2)
>>> a = Store(a, 0, 3)
>>> simplify(a)
Store(Store(Store(a, 0, 1), 1, 2), 0, 3)
</code></pre>
| 0 | 2016-10-17T17:02:40Z | 40,093,544 | <p>The simplifier only applies the cheapest rewrites on arrays.
So if it finds two adjacent stores for the same key, it knows to squash the innermost store.
There is generally a high cost to finding overlapping keys, and furthermore keys can be variables, in which case it is not possible to determine whether they overlap.</p>
| 0 | 2016-10-17T18:47:49Z | [
"python",
"z3",
"z3py"
] |
Pandas/Python adding row based on condition | 40,091,963 | <p>I am looking to insert a row into a dataframe between two existing rows based on certain criteria.</p>
<p>For example, my data frame:</p>
<pre><code> import pandas as pd
df = pd.DataFrame({'Col1':['A','B','D','E'],'Col2':['B', 'C', 'E', 'F'], 'Col3':['1', '1', '1', '1']})
</code></pre>
<p>Which looks like:</p>
<pre><code> Col1 Col2 Col3
0 A B 1
1 B C 1
2 D E 1
3 E F 1
</code></pre>
<p>I want to be able to insert a new row between Index 1 and Index 2 given the condition:</p>
<pre><code>n = 0
while n < len(df):
(df.ix[n]['Col2'] == df.ix[n+1]['Col1']) == False
Something, Something, insert row
n+=1
</code></pre>
<p>My desired output table would look like:</p>
<pre><code> Col1 Col2 Col3
0 A B 1
1 B C 1
2 C D 1
3 D E 1
4 E F 1
</code></pre>
<p>I am struggling with conditional inserting of rows based on values in the previous and proceeding records. I ultimately want to preform the above exercise on my real world example which would include multiple conditions, and preserving the values of more than one column (in this example it was Col3, but in my real world it would be multiple columns)</p>
| 1 | 2016-10-17T17:05:32Z | 40,093,920 | <p>UPDATE: memory saving method - first set a new index with a gap for a new row:</p>
<pre><code>In [30]: df
Out[30]:
Col1 Col2 Col3
0 A B 1
1 B C 1
2 D E 1
3 E F 1
</code></pre>
<p>if we want to insert a new row between indexes <code>1</code> and <code>2</code>, we split the index at position <code>2</code>:</p>
<pre><code>In [31]: idxs = np.split(df.index, 2)
</code></pre>
<p>set a new index (with gap at position <code>2</code>):</p>
<pre><code>In [32]: df.set_index(idxs[0].union(idxs[1] + 1), inplace=True)
In [33]: df
Out[33]:
Col1 Col2 Col3
0 A B 1
1 B C 1
3 D E 1
4 E F 1
</code></pre>
<p>insert new row with index <code>2</code>:</p>
<pre><code>In [34]: df.loc[2] = ['X','X',2]
In [35]: df
Out[35]:
Col1 Col2 Col3
0 A B 1
1 B C 1
3 D E 1
4 E F 1
2 X X 2
</code></pre>
<p>sort index:</p>
<pre><code>In [38]: df.sort_index(inplace=True)
In [39]: df
Out[39]:
Col1 Col2 Col3
0 A B 1
1 B C 1
2 X X 2
3 D E 1
4 E F 1
</code></pre>
<p>PS you also can insert DataFrame instead of single row using <code>df.append(new_df)</code>:</p>
<pre><code>In [42]: df
Out[42]:
Col1 Col2 Col3
0 A B 1
1 B C 1
2 D E 1
3 E F 1
In [43]: idxs = np.split(df.index, 2)
In [45]: new_df = pd.DataFrame([['X', 'X', 10], ['Y','Y',11]], columns=df.columns)
In [49]: new_df.index += idxs[1].min()
In [51]: new_df
Out[51]:
Col1 Col2 Col3
2 X X 10
3 Y Y 11
In [52]: df = df.set_index(idxs[0].union(idxs[1]+len(new_df)))
In [53]: df
Out[53]:
Col1 Col2 Col3
0 A B 1
1 B C 1
4 D E 1
5 E F 1
In [54]: df = df.append(new_df)
In [55]: df
Out[55]:
Col1 Col2 Col3
0 A B 1
1 B C 1
4 D E 1
5 E F 1
2 X X 10
3 Y Y 11
In [56]: df.sort_index(inplace=True)
In [57]: df
Out[57]:
Col1 Col2 Col3
0 A B 1
1 B C 1
2 X X 10
3 Y Y 11
4 D E 1
5 E F 1
</code></pre>
<p><strong>OLD answer:</strong></p>
<p>One (among many) way to achieve that would be to split your DF and concatenate it together with needed DF in desired order:</p>
<p>Original DF:</p>
<pre><code>In [12]: df
Out[12]:
Col1 Col2 Col3
0 A B 1
1 B C 1
2 D E 1
3 E F 1
</code></pre>
<p>let's split it into two parts ([0:1], [2:end]):</p>
<pre><code>In [13]: dfs = np.split(df, [2])
In [14]: dfs
Out[14]:
[ Col1 Col2 Col3
0 A B 1
1 B C 1, Col1 Col2 Col3
2 D E 1
3 E F 1]
</code></pre>
<p>now we can concatenate it together with a new DF in desired order:</p>
<pre><code>In [15]: pd.concat([dfs[0], pd.DataFrame([['C','D', 1]], columns=df.columns), dfs[1]], ignore_index=True)
Out[15]:
Col1 Col2 Col3
0 A B 1
1 B C 1
2 C D 1
3 D E 1
4 E F 1
</code></pre>
| 0 | 2016-10-17T19:11:31Z | [
"python",
"pandas"
] |
Inspect a TFRecordReader entry without launching a Session | 40,091,967 | <p>Let's assume I write a TFRecords file with MNIST examples (<a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/how_tos/reading_data/convert_to_records.py" rel="nofollow">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/how_tos/reading_data/convert_to_records.py</a>)
This is done like so:</p>
<pre><code>writer = tf.python_io.TFRecordWriter(filename)
for index in range(num_examples):
image_raw = images[index].tostring()
example = tf.train.Example(features=tf.train.Features(feature={
'height': _int64_feature(rows),
'width': _int64_feature(cols),
'depth': _int64_feature(depth),
'label': _int64_feature(int(labels[index])),
'image_raw': _bytes_feature(image_raw)}))
writer.write(example.SerializeToString())
writer.close()
</code></pre>
<p>Then in some other script I load it in. But the only way I have found is to run it as a tensor and extract the data, in which <code>r</code> is one record from an iterator <code>record_iter = tf.python_io.tf_record_iterator(db_path)</code></p>
<pre><code> with tf.Session() as sess_tmp:
single_ex = (sess_tmp.run(tf.parse_single_example(r,features={
'height': tf.FixedLenFeature([], tf.int64),
'width': tf.FixedLenFeature([], tf.int64),
'depth': tf.FixedLenFeature([], tf.int64),
'image_raw': tf.FixedLenFeature([], tf.string),
'label': tf.FixedLenFeature([], tf.int64),
})))
</code></pre>
<p>The data can then be retrieved with <code>single_ex['height']</code> for example.
However, it seems to me that there must be an easier way. I cannot seem to find a corresponding .proto to retrieve the data. And the data is definitely there. Here is a dump of <code>r</code>:</p>
<pre><code>?
?
image_raw?
?
?&00>a?????????????(??Y??aC\?z??;??\????\e?\i???
??)L???^
?y????~??a???4??G??<????.M???n???t????VBÑ?<???اZ???\?????,I?Å
depth
label
width
height
</code></pre>
| 0 | 2016-10-17T17:05:35Z | 40,092,811 | <p>The <code>tf.train.Example.ParseFromString()</code> can be used to transform the string into a protobuf object:</p>
<pre><code>r = ... # String object from `tf.python_io.tf_record_iterator()`.
example_proto = tf.train.Example()
example_proto.ParseFromString(r)
</code></pre>
<p>The schema for this protocol buffer can be found in <a href="https://github.com/tensorflow/tensorflow/blob/28166c086204c5ca4134086f250469a3802546ea/tensorflow/core/example/example.proto" rel="nofollow"><code>tensorflow/core/example/example.proto</code></a>.</p>
| 2 | 2016-10-17T18:00:08Z | [
"python",
"tensorflow"
] |
pandas transpose numeric column by group | 40,091,996 | <p>code to make test data</p>
<pre><code>import numpy as np
import pandas as pd
dftest = pd.DataFrame({'Amt': {0: 60, 1: 35.0, 2: 30.0, 3: np.nan, 4: 25},
'Year': {0: 2012.0, 1: 2012.0, 2: 2012.0, 3: 2013.0, 4: 2013.0},
'Name': {0: 'A', 1: 'A', 2: 'C', 3: 'A', 4: 'B'}})
</code></pre>
<p>gives</p>
<pre><code> Amt Name Year
0 60 A 2012.0
1 35.0 A 2012.0
2 30.0 C 2012.0
3 NaN A 2013.0
4 25 B 2013.0
</code></pre>
<p>column <code>Amt</code> has max 2 values corresponding to group <code>['Name', 'Year']</code>. I would like to pivot/transpose such that output is of the form</p>
<pre><code> Name Year Amt1 Amt2
0 A 2012 35 60
2 C 2012 30 NaN
3 A 2013 NaN NaN
4 B 2013 25 NaN
</code></pre>
<p>I have tried playing with <code>pivot</code>, <code>unstack</code>, and <code>pivot_table</code>.</p>
<p>What I really want to do is ensure there are two values of <code>Amt</code> per <code>['Name', 'Year']</code> group (<code>NA</code>s are OK), which I can achieve by stacking the desired output.</p>
| 4 | 2016-10-17T17:06:58Z | 40,092,170 | <p>use <code>groupby</code> and <code>apply</code></p>
<pre><code>f = lambda x: x.sort_values(ascending=True).reset_index(drop=True)
dftest.groupby(['Name', 'Year']).Amt.apply(f).unstack()
</code></pre>
<p><a href="https://i.stack.imgur.com/IsAFv.png" rel="nofollow"><img src="https://i.stack.imgur.com/IsAFv.png" alt="enter image description here"></a></p>
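<p>A variant of the same idea (my own sketch, not the answer author's): number the values within each <code>(Name, Year)</code> group with <code>cumcount</code> after sorting, then <code>unstack</code>. The helper column name <code>n</code> is arbitrary:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Amt': [60, 35.0, 30.0, np.nan, 25],
                   'Year': [2012, 2012, 2012, 2013, 2013],
                   'Name': ['A', 'A', 'C', 'A', 'B']})

# rank each Amt within its (Name, Year) group, smallest first
df['n'] = df.sort_values('Amt').groupby(['Name', 'Year']).cumcount() + 1
wide = df.set_index(['Name', 'Year', 'n'])['Amt'].unstack()
print(wide)
```

<p>Unlike <code>pivot_table</code>, this keeps groups whose only value is <code>NaN</code> (here <code>A/2013</code>) as an all-<code>NaN</code> row.</p>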
| 4 | 2016-10-17T17:17:28Z | [
"python",
"pandas",
"pivot",
"pivot-table",
"transpose"
] |
Python - Count the duplicate seq element in array | 40,092,102 | <p>How to solve this</p>
<pre><code>Input: 2
Array = [2,1,3,2,2,2,1,2,2]
Result : 3 (Max of count of seq of 2)
</code></pre>
<p>I simply used a for loop, and it works fine. But is there a more efficient way?</p>
<pre><code>for i in array:
if i == input:
Cnt = Cnt + 1
if Cnt > Result:
Result = Cnt;
else:
Cnt = 0
</code></pre>
| 0 | 2016-10-17T17:13:30Z | 40,092,264 | <p>You can use <code>itertools.groupby</code> for this:</p>
<pre><code>from itertools import groupby
max(sum(1 for i in g) for k, g in groupby(array) if k == input)
</code></pre>
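<p>Wrapped up as a function (my own sketch), with <code>max(..., default=0)</code> guarding against a target value that never occurs, which would otherwise raise <code>ValueError</code> on an empty sequence:</p>

```python
from itertools import groupby

def longest_run(array, target):
    # longest consecutive run of `target`; 0 if it never occurs
    runs = (sum(1 for _ in g) for k, g in groupby(array) if k == target)
    return max(runs, default=0)

print(longest_run([2, 1, 3, 2, 2, 2, 1, 2, 2], 2))  # -> 3
```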
| 1 | 2016-10-17T17:24:15Z | [
"python",
"list"
] |
Python - Count the duplicate seq element in array | 40,092,102 | <p>How to solve this</p>
<pre><code>Input: 2
Array = [2,1,3,2,2,2,1,2,2]
Result : 3 (Max of count of seq of 2)
</code></pre>
<p>I simply used a for loop, and it works fine. But is there a more efficient way?</p>
<pre><code>for i in array:
if i == input:
Cnt = Cnt + 1
if Cnt > Result:
Result = Cnt;
else:
Cnt = 0
</code></pre>
| 0 | 2016-10-17T17:13:30Z | 40,092,319 | <p>What I think your're describing can be solved with <a href="https://en.wikipedia.org/wiki/Run-length_encoding" rel="nofollow">run length encoding.</a></p>
<p>Essentially, you take a sequence of numbers (most often used with characters or simply unsigned bytes) and condense it down into a list of tuples where one value represents the value, and the other represents how many times in a row it occurs.</p>
<pre><code>array = [2,1,3,2,2,2,1,2,2]
runs = []
count = 0
current = array[0] #setup first iteration
for i in array:
if i == current: #continue sequence
count += 1
else:
runs.append([count, current]) #start new sequence
current = i
count = 1
runs.append([count, current]) #complete last iteration
longest_Sequence = max(runs)
</code></pre>
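<p>One caveat: <code>max(runs)</code> returns the longest run of <em>any</em> value, not necessarily of the requested one. A sketch of my own that filters the runs for the target value first:</p>

```python
def run_lengths(seq):
    """Run-length encode seq into [count, value] pairs."""
    runs = []
    for x in seq:
        if runs and runs[-1][1] == x:
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, x])       # start a new run
    return runs

array = [2, 1, 3, 2, 2, 2, 1, 2, 2]
runs = run_lengths(array)
print(runs)                               # [[1, 2], [1, 1], [1, 3], [3, 2], [1, 1], [2, 2]]
print(max(c for c, v in runs if v == 2))  # -> 3
```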
| 0 | 2016-10-17T17:28:13Z | [
"python",
"list"
] |
Python - Count the duplicate seq element in array | 40,092,102 | <p>How to solve this</p>
<pre><code>Input: 2
Array = [2,1,3,2,2,2,1,2,2]
Result : 3 (Max of count of seq of 2)
</code></pre>
<p>I simply used a for loop, and it works fine. But is there a more efficient way?</p>
<pre><code>for i in array:
if i == input:
Cnt = Cnt + 1
if Cnt > Result:
Result = Cnt;
else:
Cnt = 0
</code></pre>
| 0 | 2016-10-17T17:13:30Z | 40,093,495 | <p>You could seriously abuse side effects in a comprehension :-) :</p>
<pre><code>Input = 2
Array = [2,1,3,2,2,2,1,2,2]
r = []
Result = max([(r.append(1),len(r))[1] if x==Input else (r.clear(),0)[1] for x in Array])
</code></pre>
<p>.</p>
<p>That kind of rigamarole wouldn't be necessary if Python allowed assignments in expressions:</p>
<pre><code>r = 0
Result = max([++r if x==Input else r=0 for x in Array]) # What we want, but NOT valid Python!
</code></pre>
<p>Note that a generator expression could be used instead of the list comprehension if you don't want to look at the intermediate result. For the toy Array it doesn't matter, but for an Array of hundreds of thousands of elements the generator saves memory.</p>
| 1 | 2016-10-17T18:45:08Z | [
"python",
"list"
] |
Ranking Daily Price Changes | 40,092,105 | <p>I am writing an algo to rank the changes in price of a given list of stocks each day. Right now I'm working with two stocks: Apple and Amex. I would like to rank them from greatest change in price to least change in price, daily.</p>
<p>For example the data I have looks like this:</p>
<p>Day 1:</p>
<p>AAPL = 60.40</p>
<p>AMX = 15.50</p>
<p>Day 2:</p>
<p>AAPL = 61.00</p>
<p>AMX = 15.60</p>
<p>And I would like the result to look like this, with the highest positive change ranked first. Is this possible?</p>
<p>Change:</p>
<p>AAPLchg = .60 Rank 1</p>
<p>AMXchg = .10 Rank 2</p>
<p>This is what I have so far:</p>
<pre><code>def handle_data(context, data):
price_history = data.history(context.security_list, "price", bar_count=2, frequency="1d")
for s in context.security_list:
PrevAMX = price_history[sid(679)][-2]
CurrAMX = price_history[sid(679)][1]
PrevAAPL = price_history[sid(24)][-2]
CurrAAPL = price_history[sid(24)][1]
AMXchg = CurrAMX - PrevAMX
AAPLchg = CurrAAPL - PrevAAPL
if AMXchg < AAPLchg:
order(sid(679), 20)
if AAPLchg < AMXchg:
order(sid(24), 20)
price_history = data.history(context.security_list, "price", 20, "1d")
print price_history.mean()
print AMXchg
print AAPLchg
print log.info(str(context.chg_list.sort('adv_rank', descending=True).head()))
</code></pre>
| 0 | 2016-10-17T17:13:50Z | 40,092,508 | <p>Say you have the prices in dictionaries, then you could do this:</p>
<pre><code>yesterday = {
'AAPL': 60.40,
'AMX': 15.50,
}
today = {
'AAPL': 61.00,
'AMX': 15.60,
}
change = {k: today[k] - yesterday[k] for k in today}
sorted_stocks = sorted(change, key=change.get, reverse=True)
for ss in sorted_stocks:
print('%s: %.2f' % (ss, change[ss]))
</code></pre>
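<p>To get the explicit <code>Rank 1</code>, <code>Rank 2</code> labels from the question, you can <code>enumerate</code> over the sorted items (my own sketch on top of the answer above):</p>

```python
yesterday = {'AAPL': 60.40, 'AMX': 15.50}
today = {'AAPL': 61.00, 'AMX': 15.60}

# round to cents to avoid float noise such as 0.6000000000000014
change = {k: round(today[k] - yesterday[k], 2) for k in today}
ranked = sorted(change.items(), key=lambda kv: kv[1], reverse=True)

for rank, (symbol, chg) in enumerate(ranked, start=1):
    print('%schg = %.2f Rank %d' % (symbol, chg, rank))
# AAPLchg = 0.60 Rank 1
# AMXchg = 0.10 Rank 2
```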
| 0 | 2016-10-17T17:40:02Z | [
"python",
"sorting",
"order",
"ranking",
"rank"
] |
How to retain position values of original list after the elements of the list have been sorted into pairs (Python)? | 40,092,254 | <pre><code>sample = ['AAAA','CGCG','TTTT','AT$T','ACAC','ATGC','AATA']
Position = [0, 1, 2, 3, 4, 5, 6]
</code></pre>
<p>I have the above sample with positions associated with each element. I do several steps of filtering, the code of which is given <a href="https://eval.in/662091" rel="nofollow">here.</a></p>
<p>The steps in the elimination are:</p>
<pre><code>#If each base is identical to itself eliminate those elements eg. AAAA, TTTT
#If there are more than 2 types of bases (i.e.' conversions'== 1 ) then eliminate those elements eg. ATGC
#Make pairs of all remaining combinations
#If a $ in the pair, then the corresponding base from the other pair is eliminated eg. (CGCG,AT$T) ==> (CGG, ATT) and (ATT, AAA)
#Remove all pairs where one of the elements has all identical bases eg. (ATT,AAA)
</code></pre>
<p>In the end, I have an output with different combinations of the above as shown below.</p>
<pre><code>Final Output [['CGG','ATT'],['CGCG','ACAC'],['CGCG','AATA'],['ATT','ACC']]
</code></pre>
<p>I need to find a way such that I get the positions of these pairs with respect to the original sample as below. </p>
<pre><code>Position = [[1,3],[1,4],[1,6],[3,4]]
</code></pre>
| 0 | 2016-10-17T17:23:29Z | 40,092,314 | <p>You could convert the list to a list of tuples first</p>
<pre><code>xs = ['AAAA', 'CGCG', 'TTTT', 'AT$T', 'ACAC', 'ATGC', 'AATA']
ys = [(i, x) for i,x in enumerate(xs)]
print(ys)
=> [(0, 'AAAA'), (1, 'CGCG'), (2, 'TTTT'), (3, 'AT$T'), (4, 'ACAC'), (5, 'ATGC'), (6, 'AATA')]
</code></pre>
<p>Then work with that as your input list instead</p>
| 0 | 2016-10-17T17:28:02Z | [
"python",
"list",
"indexing",
"position"
] |
Python - Creating a text file converting Fahrenheit to Degrees | 40,092,290 | <p>I'm new to Python and I've been given the task of creating a program which reads a text file containing figures in Fahrenheit and turns it into a text file giving the figures in degrees Celsius... The only problem is, I have no idea where to start.
Any advice?</p>
| 0 | 2016-10-17T17:26:25Z | 40,092,355 | <p>Hope this helps!</p>
<pre><code>#!/usr/bin/env python
Fahrenheit = int(raw_input("Enter a temperature in Fahrenheit: "))
Celsius = (Fahrenheit - 32) * 5.0/9.0
print "Temperature:", Fahrenheit, "Fahrenheit = ", Celsius, " C"
</code></pre>
<p>The program asks for a temperature in Fahrenheit and converts it to Celsius.</p>
| 0 | 2016-10-17T17:31:14Z | [
"python"
] |
Python - Creating a text file converting Fahrenheit to Degrees | 40,092,290 | <p>I'm new to Python and I've been given the task of creating a program which reads a text file containing figures in Fahrenheit and turns it into a text file giving the figures in degrees Celsius... The only problem is, I have no idea where to start.
Any advice?</p>
| 0 | 2016-10-17T17:26:25Z | 40,092,381 | <p>First you need to create a Python function to read from a text file.</p>
<p>Second, create a method to convert the degrees.
Third, you will create a method to write to the file the results.</p>
<p>This is a very broad question, and you can't expect to get the full working code.
So start your way from the first mission, and we'll be happy to help with more problem-specific problem.</p>
| 3 | 2016-10-17T17:32:31Z | [
"python"
] |