title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
Python 3.5: How to read a db of JSON objects | 39,755,424 | <p>so I'm new to working with JSON and I'm trying to work with the <a href="https://github.com/fictivekin/openrecipes" rel="nofollow">openrecipe database from here.</a> The db dump you get looks like this...</p>
<pre><code>{ "_id" : { "$oid" : "5160756d96cc62079cc2db16" }, "name" : "Hot Roast Beef Sandwiches", "ingredients" : "12 whole Dinner Rolls Or Small Sandwich Buns (I Used Whole Wheat)\n1 pound Thinly Shaved Roast Beef Or Ham (or Both!)\n1 pound Cheese (Provolone, Swiss, Mozzarella, Even Cheez Whiz!)\n1/4 cup Mayonnaise\n3 Tablespoons Grated Onion (or 1 Tbsp Dried Onion Flakes))\n1 Tablespoon Poppy Seeds\n1 Tablespoon Spicy Mustard\n1 Tablespoon Horseradish Mayo Or Straight Prepared Horseradish\n Dash Of Worcestershire\n Optional Dressing Ingredients: Sriracha, Hot Sauce, Dried Onion Flakes Instead Of Fresh, Garlic Powder, Pepper, Etc.)", "url" : "http://thepioneerwoman.com/cooking/2013/03/hot-roast-beef-sandwiches/", "image" : "http://static.thepioneerwoman.com/cooking/files/2013/03/sandwiches.jpg", "ts" : { "$date" : 1365276013902 }, "cookTime" : "PT20M", "source" : "thepioneerwoman", "recipeYield" : "12", "datePublished" : "2013-03-13", "prepTime" : "PT20M", "description" : "When I was growing up, I participated in my Episcopal church's youth group, and I have lots of memories of weekly meetings wh..." }
{ "_id" : { "$oid" : "5160756f96cc6207a37ff777" }, "name" : "Morrocan Carrot and Chickpea Salad", "ingredients" : "Dressing:\n1 tablespoon cumin seeds\n1/3 cup / 80 ml extra virgin olive oil\n2 tablespoons fresh lemon juice\n1 tablespoon honey\n1/2 teaspoon fine sea salt, plus more to taste\n1/8 teaspoon cayenne pepper\n10 ounces carrots, shredded on a box grater or sliced whisper thin on a mandolin\n2 cups cooked chickpeas (or one 15- ounce can, drained and rinsed)\n2/3 cup / 100 g dried pluots, plums, or dates cut into chickpea-sized pieces\n1/3 cup / 30 g fresh mint, torn\nFor serving: lots of toasted almond slices, dried or fresh rose petals - all optional (but great additions!)", "url" : "http://www.101cookbooks.com/archives/moroccan-carrot-and-chickpea-salad-recipe.html", "image" : "http://www.101cookbooks.com/mt-static/images/food/moroccan_carrot_salad_recipe.jpg", "ts" : { "$date" : 1365276015332 }, "datePublished" : "2013-01-07", "source" : "101cookbooks", "prepTime" : "PT15M", "description" : "A beauty of a carrot salad - tricked out with chickpeas, chunks of dried pluots, sliced almonds, and a toasted cumin dressing. Thank you Diane Morgan." }
{ "_id" : { "$oid" : "5160757096cc62079cc2db17" }, "name" : "Mixed Berry Shortcake", "ingredients" : "Biscuits\n3 cups All-purpose Flour\n2 Tablespoons Baking Powder\n3 Tablespoons Sugar\n1/2 teaspoon Salt\n1-1/2 stick (3/4 Cup) Cold Butter, Cut Into Pieces\n1-1/4 cup Buttermilk\n1/2 teaspoon Almond Extract (optional)\n Berries\n2 pints Mixed Berries And/or Sliced Strawberries\n1/3 cup Sugar\n Zest And Juice Of 1 Small Orange\n SWEET YOGURT CREAM\n1 package (7 Ounces) Plain Greek Yogurt\n1 cup Cold Heavy Cream\n1/2 cup Sugar\n2 Tablespoons Brown Sugar", "url" : "http://thepioneerwoman.com/cooking/2013/03/mixed-berry-shortcake/", "image" : "http://static.thepioneerwoman.com/cooking/files/2013/03/shortcake.jpg", "ts" : { "$date" : 1365276016700 }, "cookTime" : "PT15M", "source" : "thepioneerwoman", "recipeYield" : "8", "datePublished" : "2013-03-18", "prepTime" : "PT15M", "description" : "It's Monday! It's a brand new week! The birds are chirping! The coffee's brewing! Everything has such hope and promise! A..." }
</code></pre>
<p>I tried the following code to read in the database</p>
<pre><code>import json
f = r'<file_path>\recipeitems-latest.json'
with open(f) as dfile:
    data = json.load(dfile)
print(data)
</code></pre>
<p>With this I received the following Traceback</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/<redacted>/Documents/<redacted>/project/test_json.py", line 7, in <module>
data = json.load(dfile)
File "C:\Users\<redacted>\AppData\Local\Continuum\Anaconda3\Lib\json\__init__.py", line 265, in load
return loads(fp.read(),
File "C:\Users\<redacted>\AppData\Local\Continuum\Anaconda3\Lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 101915: character maps to <undefined>
</code></pre>
<p>The only way I could find around this error was to only have one entry in the json file. Is the db formatted incorrectly or am I reading in the data wrong?</p>
<p>Thanks for any help!</p>
| 1 | 2016-09-28T18:47:45Z | 39,755,523 | <p>The file is not a <code>json</code> array. <em>Each line of the file</em> is a <code>json</code> document, but the whole file is not in <code>json</code> format.</p>
<p>Read the file by lines, and use <code>json.loads</code>:</p>
<pre><code>with open('some_file') as f:
    for line in f:
        doc = json.loads(line)
</code></pre>
<p>You may also need to pass the <code>encoding</code> parameter to <code>open()</code>. <a href="https://docs.python.org/3/library/functions.html#open" rel="nofollow">See here</a>.</p>
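<p>Putting both fixes together, a minimal sketch (the path placeholder is kept from the question):</p>
<pre><code>import json

docs = []
# The dump is JSON Lines: one document per line, so parse line by line,
# and read as UTF-8 to avoid the cp1252 decode error on Windows.
with open(r'<file_path>\recipeitems-latest.json', encoding='utf-8') as dfile:
    for line in dfile:
        docs.append(json.loads(line))
print(len(docs))
</code></pre>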
| 1 | 2016-09-28T18:53:11Z | [
"python",
"json"
]
|
Python: Unique items of list in the order that it appears | 39,755,464 | <p>In Python, we can get the unique items of a list using <code>set(list)</code>. However, doing this breaks the order in which the values appear in the original list. Is there an elegant way to get the unique items in the order in which they appear in the list?</p>
| 1 | 2016-09-28T18:50:04Z | 39,755,540 | <pre><code>l = []
for item in list_:
    if item not in l:
        l.append(item)
</code></pre>
<p>This gets slow for really big, diverse <code>list_</code>. In those cases, it would be worth it to also keep track of a set of seen values.</p>
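<p>A sketch of that variant, assuming the items are hashable:</p>
<pre><code>seen = set()
result = []
for item in list_:
    if item not in seen:  # O(1) set lookup instead of an O(n) list scan
        seen.add(item)
        result.append(item)
</code></pre>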
| 1 | 2016-09-28T18:54:14Z | [
"python",
"list",
"order",
"set",
"unique"
]
|
Python: Unique items of list in the order that it appears | 39,755,464 | <p>In Python, we can get the unique items of a list using <code>set(list)</code>. However, doing this breaks the order in which the values appear in the original list. Is there an elegant way to get the unique items in the order in which they appear in the list?</p>
| 1 | 2016-09-28T18:50:04Z | 39,755,585 | <p>This is an elegant way:</p>
<pre><code>from collections import OrderedDict
list(OrderedDict.fromkeys(list))
</code></pre>
<p>It works if the list items are all hashable (you already know they are if converting the list to a set did not raise an exception).</p>
<p>If any items are not hashable, there's an alternative which is more robust at the price of poorer performance: I refer you to the <a href="http://stackoverflow.com/a/39755540/674039">answer</a> from Patrick Haugh.</p>
| 4 | 2016-09-28T18:57:07Z | [
"python",
"list",
"order",
"set",
"unique"
]
|
JSON printed to console shows wrong encoding | 39,755,662 | <p>I am trying to read Cyrillic characters from some JSON file and then output it to console using <strong>Python 3.4.3 on Windows</strong>. Normal print('Russian smth буквы') works as intended.</p>
<p>But when I print JSON contents it seems to print in Windows-1251 - "СЂСѓСЃСЃРєРёРµ Р±СѓРєРІС‹" (though my console, my JSON file and my .py (with coding comment) are in UTF-8).</p>
<p>I've tried re-encoding it to Win-1251 and setting console to Win-1251, but still no luck.</p>
<p><em>My JSON (Encoded in UTF-8):</em></p>
<pre><code>{
"ÑÑÑÑкие бÑквÑ": "ÑÑо-Ñо еÑÑ Ð½Ð° ÑÑÑÑком",
"english letters": "и ÑÑо-Ñо на великом"
}
</code></pre>
<p><em>My code to load dictionary:</em></p>
<pre><code>def load_dictionary():
    global Dictionary, isFatal
    try:
        with open(DictionaryName) as f:
            Dictionary = json.load(f)
    except Exception as e:
        logging.critical('Error loading dictionary: ' + str(e))
        isFatal = True
        return
    logging.info('Dictionary was loaded successfully')
</code></pre>
<p>I am trying to output it in 2 ways (both show the same gibberish):</p>
<pre><code>print(helper.Dictionary.get('rly'))
print(helper.Dictionary)
</code></pre>
<hr>
<p>An interesting add-on: I've added the whole Russian alphabet to my JSON file and it seems to <strong>get stuck at the "с" letter</strong>. <em>(Error loading dictionary: 'charmap' codec can't decode byte 0x81 in position X: character maps to &lt;undefined&gt;)</em>. If I remove this one letter it shows no exception, but the problem above remains.</p>
 | 1 | 2016-09-28T19:01:31Z | 39,779,780 | <p>"<em>But when I print JSON contents …</em>"</p>
<p>If you print it using the <code>type</code> command, then you get <a href="https://en.wikipedia.org/wiki/Mojibake" rel="nofollow">mojibake</a> <code>СЂСѓСЃСЃРєРёРµ …</code> instead of <code>русские …</code> under <code>CHCP 1251</code> scope.</p>
<p>Try <code>type</code> under <a href="http://ss64.com/nt/chcp.html" rel="nofollow"><code>CHCP 65001</code> (i.e. <code>UTF-8</code>)</a> scope.</p>
<p>Follow <a href="http://stackoverflow.com/questions/39755662/json-printed-to-console-shows-wrong-encoding?noredirect=1#comment66807809_39755662">nauer's advice</a>, use <code>open(DictionaryName, encoding="utf8")</code>.</p>
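<p>Applied to the loading code from the question, that would be (a sketch):</p>
<pre><code>with open(DictionaryName, encoding='utf-8') as f:
    Dictionary = json.load(f)
</code></pre>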
<p><strong>Example</strong> (<code>39755662.json</code> is saved with <code>UTF-8</code> encoding):</p>
<pre><code>==> chcp 866
Active code page: 866
==> type 39755662.json
{
  "╤А╤Г╤Б╤Б╨║╨╕╨╡ ╨▒╤Г╨║╨▓╤Л": "╤З╤В╨╛-╤В╨╛ ╨╡╤Й╤С ╨╜╨░ ╤А╤Г╤Б╤Б╨║╨╛╨╝",
  "rly": "╤А╤Г╤Б╤Б╨║╨╕╨╣"
}
==> chcp 1251
Active code page: 1251
==> type 39755662.json
{
  "СЂСѓСЃСЃРєРёРµ Р±СѓРєРІС‹": "С‡С‚Рѕ-С‚Рѕ РµС‰С‘ РЅР° СЂСѓСЃСЃРєРѕРј",
  "rly": "СЂСѓСЃСЃРєРёР№"
}
==> chcp 65001
Active code page: 65001
==> type 39755662.json
{
  "русские буквы": "что-то ещё на русском",
  "rly": "русский"
}
==>
</code></pre>
| 0 | 2016-09-29T20:50:43Z | [
"python",
"json",
"python-3.x",
"utf-8",
"cyrillic"
]
|
Overriding update() Django rest framework | 39,755,669 | <p>I have a model which contains a foreign key, so I created my update function. The problem is that when I want to update my model, all fields are updated except the foreign key. I don't know why. I hope I can get an answer.</p>
<p>My models:</p>
<pre><code>class Produit(models.Model):
    titre = models.CharField(max_length=100)
    description = models.TextField()
    photo_principal = models.ImageField(upload_to='produits/', default='image.jpg')
    photo_1 = models.ImageField(upload_to='produits/', default='image.jpg')
    photo_2 = models.ImageField(upload_to='produits/', default='image.jpg')
    photo_3 = models.ImageField(upload_to='produits/', default='image.jpg')
    prix = models.FloatField()
    new_prix = models.FloatField()
    categorie = models.ForeignKey(Categorie, related_name='produit', on_delete=models.CASCADE)
</code></pre>
<p>serializers.py</p>
<pre><code>class ProduitUpdateSerializer(serializers.ModelSerializer):
    categorie_id = serializers.PrimaryKeyRelatedField(queryset=Categorie.objects.all(), source='categorie.id')

    class Meta:
        model = Produit
        fields = ['titre', 'description', 'photo_principal', 'photo_1', 'photo_2', 'photo_3', 'prix', 'new_prix',
                  'categorie_id', ]

    def update(self, instance, validated_data):
        print(validated_data)
        instance.categorie_id = validated_data.get('categorie_id', instance.categorie_id)
        instance.titre = validated_data.get('titre', instance.titre)
        instance.description = validated_data.get('description', instance.description)
        instance.photo_principal = validated_data.get('photo_principal', instance.photo_principal)
        instance.photo_1 = validated_data.get('photo_1', instance.photo_1)
        instance.photo_2 = validated_data.get('photo_2', instance.photo_2)
        instance.photo_3 = validated_data.get('photo_3', instance.photo_3)
        instance.prix = validated_data.get('prix', instance.prix)
        instance.new_prix = validated_data.get('new_prix', instance.new_prix)
        instance.save()
        return instance
</code></pre>
| 0 | 2016-09-28T19:02:18Z | 39,788,667 | <p>You shouldn't play with the id directly in that case since the serializer will return an object:</p>
<pre><code>class ProduitUpdateSerializer(serializers.ModelSerializer):
    class Meta:
        model = Produit
        fields = ['titre', 'description', 'photo_principal', 'photo_1', 'photo_2', 'photo_3', 'prix', 'new_prix',
                  'categorie', ]

    def update(self, instance, validated_data):
        print(validated_data)
        instance.categorie = validated_data.get('categorie', instance.categorie)
        instance.titre = validated_data.get('titre', instance.titre)
        instance.description = validated_data.get('description', instance.description)
        instance.photo_principal = validated_data.get('photo_principal', instance.photo_principal)
        instance.photo_1 = validated_data.get('photo_1', instance.photo_1)
        instance.photo_2 = validated_data.get('photo_2', instance.photo_2)
        instance.photo_3 = validated_data.get('photo_3', instance.photo_3)
        instance.prix = validated_data.get('prix', instance.prix)
        instance.new_prix = validated_data.get('new_prix', instance.new_prix)
        instance.save()
        return instance
</code></pre>
| 0 | 2016-09-30T09:53:38Z | [
"python",
"django",
"django-rest-framework"
]
|
pandas histogram with by: possible to make axes uniform? | 39,755,742 | <p>I am using the option to generate a separate histogram of a value for each group in a data frame like so (example code from documentation)</p>
<pre><code>data = pd.Series(np.random.randn(1000))
data.hist(by=np.random.randint(0, 4, 1000), figsize=(6, 4))
</code></pre>
<p>This is great, but what I am not seeing is a way to set and standardize the axes. Is this possible?</p>
<p>To be specific, I would like to specify the x and y axes of the plots so that the y axis in particular has the same range for all plots. Otherwise it can be hard to compare distributions to one another.</p>
 | 2 | 2016-09-28T19:07:08Z | 39,756,668 | <p>You can pass <code>kwds</code> to <code>hist</code> and it will pass them along to the underlying plotting calls. The relevant ones here are <code>sharex</code> and <code>sharey</code>.</p>
<pre><code>data = pd.Series(np.random.randn(1000))
data.hist(by=np.random.randint(0, 4, 1000), figsize=(6, 4),
          sharex=True, sharey=True)
</code></pre>
<p><a href="http://i.stack.imgur.com/q86d4.png" rel="nofollow"><img src="http://i.stack.imgur.com/q86d4.png" alt="enter image description here"></a></p>
| 3 | 2016-09-28T20:02:17Z | [
"python",
"pandas"
]
|
Automatic variable creation when adding a new instance of a class | 39,755,813 | <p>I am teaching myself Python, and I am getting my head around OOP classes. I keep seeing examples like this.</p>
<pre><code>import ClassImade
var1 = ClassImade()
var2 = ClassImade()
</code></pre>
<p>The program I am trying to make will have several thousand instances of the class. My question is: how has this issue been overcome? I have seen in other posts that this is part of bad construction of the program. If that is so, why do I keep seeing it over and over again in examples?</p>
| -1 | 2016-09-28T19:10:50Z | 39,755,912 | <pre><code>var = []
for i in range(1000):
    var.append(ClassImade())
</code></pre>
<p>This makes a list of class instances. Is that what you want?</p>
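<p>If the instances need to be looked up by a meaningful name rather than a position, a dict works the same way (a sketch; the key format is arbitrary):</p>
<pre><code>instances = {}
for i in range(1000):
    instances['item%d' % i] = ClassImade()

print(instances['item42'])
</code></pre>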
| 0 | 2016-09-28T19:17:12Z | [
"python"
]
|
Simple Hello World, AJAX FOR DJANGO | 39,755,826 | <p>So I have this function in views.py</p>
<pre><code>def home(request):
    return render_to_response('proj1/index.html', RequestContext(request, {'variable': 'world'}))
</code></pre>
<p>Which I want to use for AJAX to display "Hello World".</p>
<p>The ajax function is like so:</p>
<pre><code>$.ajax({
    url: "/proj1",
    type: "GET",
    dataType: "html",
    success: function(data){
        // want to render: <h1>Hello {{data.variable}}, welcome to my AJAX WebSite</h1>
    }
});
</code></pre>
<p>How do I achieve it? Thanks!</p>
| 0 | 2016-09-28T19:11:42Z | 39,757,291 | <p>Make sure the 'url: "/proj1" ' is pointing to the url that calls that view.</p>
<p>This tutorial will solve your doubts:</p>
<p><a href="https://godjango.com/18-basic-ajax/" rel="nofollow">https://godjango.com/18-basic-ajax/</a></p>
| 0 | 2016-09-28T20:43:06Z | [
"jquery",
"python",
"ajax",
"django"
]
|
Calling a subset of a dataset | 39,755,877 | <pre><code>x_train = train['date_x','activity_category','char_1_x','char_2_x','char_3_x','char_4_x','char_5_x','char_6_x',
'char_7_x','char_8_x','char_9_x','char_10_x',.........,'char_27','char_29','char_30','char_31','char_32','char_33',
'char_34','char_35','char_36','char_37','char_38']
y = y_train
x_test = test['date_x','activity_category','char_1_x','char_2_x','char_3_x','char_4_x','char_5_x','char_6_x',
'char_7_x','char_8_x','char_9_x','char_10_x','char_1_y','group_1','char_1_y','char_2_y','char_3_y', 'char_4_y','char_5_y','char_6_y','char_7_y',
'char_8_y','char_9-y','char_10_y', ...........,'char_29','char_30','char_31','char_32','char_33',
'char_34','char_35','char_36','char_37','char_38']
train.iloc([0:17,19:38])
</code></pre>
<p>After trying to slice columns with <code>train.iloc([0:17,19:38])</code>, I resorted to data entry of all column names. A pretty cumbersome way of doing this, but I am only getting what I call with <code>19:38</code>. I am getting a Key error message for doing it the first way, by calling the column names.</p>
| 1 | 2016-09-28T19:14:40Z | 39,756,774 | <p>As suggested by @AndrasDeak<br>
Consider the <code>pd.DataFrame</code> <code>train</code></p>
<pre><code>import numpy as np
import pandas as pd

train = pd.DataFrame(np.arange(1000).reshape(-1, 20))
</code></pre>
<p>Then use the suggestion like this</p>
<pre><code>train.iloc[np.r_[0:17, 19:38]]
</code></pre>
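<p>Since the goal here is selecting columns rather than rows, presumably the same index goes on the second axis:</p>
<pre><code>train.iloc[:, np.r_[0:17, 19:38]]
</code></pre>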
| 2 | 2016-09-28T20:08:51Z | [
"python",
"pandas"
]
|
regex to validate date in DD.MM format | 39,755,893 | <p>I've googled up a lot of regexes to validate dates in DD.MM.YYYY format. Like this one:</p>
<p><code>(0[1-9]|1[0-9]|2[0-8]|(?:29|30)(?!.02)|29(?=.02.\d\d(?:[02468][048]|[13579][26]))|31(?=.0[13578]|.1[02]))(?:\.(?=\d\d\.)|-(?=\d\d-)|\/(?=\d\d\/))(0[1-9]|1[0-2])[.\/\-]([1-9][0-9]{3})</code></p>
<p>and it works fine. </p>
<p>As far as I understand the <code>([1-9][0-9]{3})</code> part refers to year. I tried removing it and it started validating dates ending with dots, like <code>01.05.</code>, <code>10.07.</code> etc.</p>
<pre><code>>>> regex = '^(0[1-9]|1[0-9]|2[0-8]|(?:29|30)(?!.02)|29(?=.02.\d\d(?:[02468][048]|[13579][26]))|31(?=.0[13578]|.1[02]))(?:\.(?=\d\d\.)|-(?=\d\d-)|\/(?=\d\d\/))(0[1-9]|1[0-2])[.\/\-]$'
>>> aaa = '12.02.'
>>> bbb = '32.02.'
>>> print(re.match(regex, aaa))
<_sre.SRE_Match object; span=(0, 6), match='12.02.'>
>>> print(re.match(regex, bbb))
None
</code></pre>
<p>But when I remove the part that takes care of the dot/dash divider</p>
<pre><code>[.\/\-]
</code></pre>
<p>it doesn't validate dates without the trailing dots:</p>
<pre><code>>>> regex = '^(0[1-9]|1[0-9]|2[0-8]|(?:29|30)(?!.02)|29(?=.02.\d\d(?:[02468][048]|[13579][26]))|31(?=.0[13578]|.1[02]))(?:\.(?=\d\d\.)|-(?=\d\d-)|\/(?=\d\d\/))(0[1-9]|1[0-2])$'
>>> aaa = '12.02'
>>> bbb = '32.02'
>>> print(re.match(regex, aaa))
None
>>> print(re.match(regex, bbb))
None
</code></pre>
<p>How do I make this work?</p>
<p><strong>UPDATE ABOUT FEB 28 / FEB 29:</strong></p>
<p>It's okay if it won't validate 28/29 Feb, this is acceptable in my case.</p>
<p><strong>UPDATE ABOUT PYTHON:</strong></p>
<p>I cannot use Python validation for this; sadly, a regex field in a web form is all I can use.</p>
| 0 | 2016-09-28T19:15:56Z | 39,766,116 | <h2>Solution</h2>
<pre><code> ^(0[1-9]|[12][0-9]|30(?!\.02)|31(?!\.(0[2469]|11)))\.(0[1-9]|1[0-2])$
</code></pre>
<h2>Example in python</h2>
<pre><code>>>> daymonth_match = r"^(0[1-9]|[12][0-9]|30(?!\.02)|31(?!\.(0[2469]|11)))\.(0[1-9]|1[0-2])$"
>>> print re.match(daymonth_match, "12.04")
<_sre.SRE_Match object at 0x7f3728125880>
>>> print re.match(daymonth_match, "29.02")
<_sre.SRE_Match object at 0x7f3728125880>
>>> print re.match(daymonth_match, "30.02")
None
>>> print re.match(daymonth_match, "30.04")
<_sre.SRE_Match object at 0x7f3728125880>
>>> print re.match(daymonth_match, "31.04")
None
>>> print re.match(daymonth_match, "31.05")
<_sre.SRE_Match object at 0x7f3728125880>
</code></pre>
<p>It assumes <code>29.02</code> is always valid.</p>
<h2>Some details on how it works</h2>
<p>This regexp relies on the negative lookahead assertion <code>(?!...)</code>. For example, the expression <code>30(?!\.02)</code> means that <code>30</code> matches only if it is not followed by <code>\.02</code>; and since it is a lookahead, <code>\.02</code> is not consumed as part of the match itself (see the <a href="https://docs.python.org/2/library/re.html#regular-expression-syntax" rel="nofollow">python documentation</a> for details).</p>
| 0 | 2016-09-29T09:11:17Z | [
"python",
"regex"
]
|
regex to validate date in DD.MM format | 39,755,893 | <p>I've googled up a lot of regexes to validate dates in DD.MM.YYYY format. Like this one:</p>
<p><code>(0[1-9]|1[0-9]|2[0-8]|(?:29|30)(?!.02)|29(?=.02.\d\d(?:[02468][048]|[13579][26]))|31(?=.0[13578]|.1[02]))(?:\.(?=\d\d\.)|-(?=\d\d-)|\/(?=\d\d\/))(0[1-9]|1[0-2])[.\/\-]([1-9][0-9]{3})</code></p>
<p>and it works fine. </p>
<p>As far as I understand the <code>([1-9][0-9]{3})</code> part refers to year. I tried removing it and it started validating dates ending with dots, like <code>01.05.</code>, <code>10.07.</code> etc.</p>
<pre><code>>>> regex = '^(0[1-9]|1[0-9]|2[0-8]|(?:29|30)(?!.02)|29(?=.02.\d\d(?:[02468][048]|[13579][26]))|31(?=.0[13578]|.1[02]))(?:\.(?=\d\d\.)|-(?=\d\d-)|\/(?=\d\d\/))(0[1-9]|1[0-2])[.\/\-]$'
>>> aaa = '12.02.'
>>> bbb = '32.02.'
>>> print(re.match(regex, aaa))
<_sre.SRE_Match object; span=(0, 6), match='12.02.'>
>>> print(re.match(regex, bbb))
None
</code></pre>
<p>But when I remove the part that takes care of the dot/dash divider</p>
<pre><code>[.\/\-]
</code></pre>
<p>it doesn't validate dates without the trailing dots:</p>
<pre><code>>>> regex = '^(0[1-9]|1[0-9]|2[0-8]|(?:29|30)(?!.02)|29(?=.02.\d\d(?:[02468][048]|[13579][26]))|31(?=.0[13578]|.1[02]))(?:\.(?=\d\d\.)|-(?=\d\d-)|\/(?=\d\d\/))(0[1-9]|1[0-2])$'
>>> aaa = '12.02'
>>> bbb = '32.02'
>>> print(re.match(regex, aaa))
None
>>> print(re.match(regex, bbb))
None
</code></pre>
<p>How do I make this work?</p>
<p><strong>UPDATE ABOUT FEB 28 / FEB 29:</strong></p>
<p>It's okay if it won't validate 28/29 Feb, this is acceptable in my case.</p>
<p><strong>UPDATE ABOUT PYTHON:</strong></p>
<p>I cannot use Python validation for this; sadly, a regex field in a web form is all I can use.</p>
| 0 | 2016-09-28T19:15:56Z | 39,767,720 | <p>Just make the dot and year optional:</p>
<pre><code>(?:[.\/\-]([1-9][0-9]{3}))?
</code></pre>
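<p>A sketch of the idea on a simplified day-month core (stricter month-length rules can be layered back in):</p>
<pre><code>import re

# DD.MM with an optional .YYYY / -YYYY / /YYYY suffix
pattern = r'^(0[1-9]|[12][0-9]|3[01])\.(0[1-9]|1[0-2])(?:[.\/\-]([1-9][0-9]{3}))?$'
for s in ('12.02', '12.02.2016', '32.02'):
    print(s, bool(re.match(pattern, s)))
# 12.02 True, 12.02.2016 True, 32.02 False
</code></pre>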
| 0 | 2016-09-29T10:25:09Z | [
"python",
"regex"
]
|
Dump All XPaths | 39,755,923 | <p>Does lxml or exml have a function to export all xpaths in an XML?</p>
<p>XML Example:</p>
<pre><code><note>
    <to>Tove</to>
    <from>Jani</from>
    <heading>Reminder</heading>
    <body>
        <content>Don't forget me this weekend!</content>
    </body>
</note>
</code></pre>
<p>XPath Results:</p>
<pre><code>/note
/note/to
/note/from
/note/heading
/note/body
/note/body/content
</code></pre>
| 0 | 2016-09-28T19:17:52Z | 39,755,979 | <p>You have to iterate over the tree and call <a href="http://lxml.de/api/lxml.etree._ElementTree-class.html#getpath" rel="nofollow">getpath</a> on each node:</p>
<pre><code>x = """<note>
    <to>Tove</to>
    <from>Jani</from>
    <heading>Reminder</heading>
    <body>
        <content>Don't forget me this weekend!</content>
    </body>
</note>"""
from lxml import etree
from StringIO import StringIO
tree = etree.parse(StringIO(x))
paths = "\n".join(tree.getpath(n) for n in tree.iter())
print(paths)
</code></pre>
<p>Output:</p>
<pre><code>/note
/note/to
/note/from
/note/heading
/note/body
/note/body/content
</code></pre>
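<p>On Python 3, presumably only the import changes, since <code>StringIO</code> moved into the <code>io</code> module:</p>
<pre><code>from io import StringIO
</code></pre>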
| 1 | 2016-09-28T19:21:56Z | [
"python",
"xml",
"xpath"
]
|
How do I use `setrlimit` to limit memory usage? RLIMIT_AS kills too soon; RLIMIT_DATA, RLIMIT_RSS, RLIMIT_STACK kill not at all | 39,755,928 | <p>I'm trying to use <code>setrlimit</code> to limit my memory usage on a Linux system, in order to stop my process from crashing the machine (my code was crashing nodes on a high performance cluster, because a bug led to memory consumption in excess of 100 GiB). I can't seem to find the correct resource to pass to <code>setrlimit</code>; I think it should be resident, which <a href="http://stackoverflow.com/questions/3043709/resident-set-size-rss-limit-has-no-effect">cannot be limited with setrlimit</a>, but I am confused by resident, heap, stack. In the code below; if I uncomment only <code>RLIMIT_AS</code>, the code fails with <code>MemoryError</code> at <code>numpy.ones(shape=(1000, 1000, 10), dtype="f8")</code> even though that array should be only 80 MB. If I uncomment only <code>RLIMIT_DATA</code>, <code>RLIMIT_RSS</code>, or <code>RLIMIT_STACK</code> both arrays get allocated successfully, even though the total memory usage is 2 GB, or twice the desired maximum.</p>
<p>I would like to make my program fail (no matter how) as soon as it tries to allocate too much RAM. Why do none of <code>RLIMIT_DATA</code>, <code>RLIMIT_RSS</code>, <code>RLIMIT_STACK</code> and <code>RLIMIT_AS</code> do what I mean, and what is the correct resource to pass to <code>setrlimit</code>?</p>
<pre><code>$ cat mwe.py
#!/usr/bin/env python3.5
import resource
import numpy
#rsrc = resource.RLIMIT_AS
#rsrc = resource.RLIMIT_DATA
#rsrc = resource.RLIMIT_RSS
#rsrc = resource.RLIMIT_STACK
soft, hard = resource.getrlimit(rsrc)
print("Limit starts as:", soft, hard)
resource.setrlimit(rsrc, (1e9, 1e9))
soft, hard = resource.getrlimit(rsrc)
print("Limit is now:", soft, hard)
print("Allocating 80 KB, should certainly work")
M1 = numpy.arange(100*100, dtype="u8")
print("Allocating 80 MB, should work")
M2 = numpy.arange(1000*1000*10, dtype="u8")
print("Allocating 2 GB, should fail")
M3 = numpy.arange(1000*1000*250, dtype="u8")
input("Still hereâ¦")
</code></pre>
<p>Output with the <code>RLIMIT_AS</code> line uncommented:</p>
<pre><code>$ ./mwe.py
Limit starts as: -1 -1
Limit is now: 1000000000 -1
Allocating 80 KB, should certainly work
Allocating 80 MB, should work
Traceback (most recent call last):
File "./mwe.py", line 22, in <module>
M2 = numpy.arange(1000*1000*10, dtype="u8")
MemoryError
</code></pre>
<p>Output when running with any of the other ones uncommented:</p>
<pre><code>$ ./mwe.py
Limit starts as: -1 -1
Limit is now: 1000000000 -1
Allocating 80 KB, should certainly work
Allocating 80 MB, should work
Allocating 2 GB, should fail
Still here…
</code></pre>
<p>At the final line, <code>top</code> reports that my process is using 379 GB VIRT, 2.0 GB RES.</p>
<hr>
<p>System details:</p>
<pre><code>$ uname -a
Linux host.somewhere.ac.uk 2.6.32-573.3.1.el6.x86_64 #1 SMP Mon Aug 10 09:44:54 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.7 (Santiago)
$ free -h
             total       used       free     shared    buffers     cached
Mem:          2.0T       1.9T        37G       1.6G       3.4G       1.8T
-/+ buffers/cache:        88G       1.9T
Swap:         464G       4.8M       464G
$ python3.5 --version
Python 3.5.0
$ python3.5 -c "import numpy; print(numpy.__version__)"
1.11.1
</code></pre>
| 1 | 2016-09-28T19:18:05Z | 39,765,583 | <p>Alas I have no answer for your question. But I hope the following might help:</p>
<ul>
<li>Your script works as expected on my system. Please share the exact spec for yours; there might be a known problem with the Linux distro, kernel or even numpy...</li>
<li>You should be OK with <code>RLIMIT_AS</code>. As explained <a href="http://stackoverflow.com/a/33525161/404099">here</a> this should limit the entire virtual memory used by the process. And virtual memory includes all: swap memory, shared libraries, code and data. More details <a href="http://stackoverflow.com/a/21049737/404099">here</a>. A sketch of this is shown after the list.</li>
<li><p>You may add the following function (adapted from <a href="http://stackoverflow.com/a/938800/404099">this answer</a>) to your script to check actual virtual memory usage at any point:</p>

<pre><code>def peak_virtual_memory_mb():
    # Returns the raw "VmPeak:" line from /proc/self/status,
    # i.e. this process's peak virtual memory size.
    with open('/proc/self/status') as f:
        status = f.readlines()
    vmpeak = next(s for s in status if s.startswith("VmPeak:"))
    return vmpeak
</code></pre></li>
<li>A general piece of advice: disable swap memory. In my experience with high performance servers it does more harm than good.</li>
</ul>
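<p>For reference, a minimal sketch of capping the address space with <code>RLIMIT_AS</code> and watching an oversized allocation fail (the 1 GB cap is illustrative):</p>
<pre><code>import resource

soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (10**9, hard))  # cap at ~1 GB

try:
    blob = bytearray(2 * 10**9)  # 2 GB; should now raise MemoryError
except MemoryError:
    print("allocation refused, as intended")
</code></pre>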
| 1 | 2016-09-29T08:46:58Z | [
"python",
"numpy",
"memory",
"setrlimit"
]
|
regex match proc name without slash | 39,755,930 | <p>I have a list of proc names on Linux. Some have slash, some don't. For example,</p>
<p><strong>kworker</strong>/23:1</p>
<strong>migration</strong>/39</p>
<strong>qmgr</strong></p>
<p>I need to extract just the proc name without the slash and the rest. I tried a few different ways but still can't get it completely correct. What's wrong with my regex? Any help would be much appreciated.</p>
<pre><code>>>> str='kworker/23:1'
>>> match=re.search(r'^(.+)\/*',str)
>>> match.group(1)
'kworker/23:1'
</code></pre>
| 0 | 2016-09-28T19:18:13Z | 39,755,958 | <p>An alternative to regex is to <code>split</code> on <em>slash</em> and take the first item:</p>
<pre><code>>>> s ='kworker/23:1'
>>> s.split('/')[0]
'kworker'
</code></pre>
<p>This also works when the string does not contain a slash:</p>
<pre><code>>>> s = 'qmgr'
>>> s.split('/')[0]
'qmgr'
</code></pre>
<p>But if you're going to stick to <code>re</code>, I think <code>re.sub</code> is what you want, as you won't need to fetch the <em>matching group</em>:</p>
<pre><code>>>> import re
>>> s ='kworker/23:1'
>>> re.sub(r'/.*$', '', s)
'kworker'
</code></pre>
<p>On a side note, assigning the name <code>str</code> shadows the built-in string type, which you don't want.</p>
| 0 | 2016-09-28T19:20:22Z | [
"python",
"regex"
]
|
regex match proc name without slash | 39,755,930 | <p>I have a list of proc names on Linux. Some have slash, some don't. For example,</p>
<p><strong>kworker</strong>/23:1</p>
<strong>migration</strong>/39</p>
<strong>qmgr</strong></p>
<p>I need to extract just the proc name without the slash and the rest. I tried a few different ways but still won't get it completely correct. What's wrong with my regex? Any help would be much appreciated.</p>
<pre><code>>>> str='kworker/23:1'
>>> match=re.search(r'^(.+)\/*',str)
>>> match.group(1)
'kworker/23:1'
</code></pre>
| 0 | 2016-09-28T19:18:13Z | 39,756,041 | <p>The problem with the regex is, that the greedy <code>.+</code> is going until the end, because everything after it is optional, meaning it is kept as short as possible (essentially empty). To fix this replace the <code>.</code> with anything but a <code>/</code>.</p>
<pre><code>([^\/]+)\/?.*
</code></pre>
<p>works. You can test this regex <a href="https://regex101.com/r/Z8Cixr/1" rel="nofollow">here</a>. In case it is new to you, <code>[^\/]</code> matches anything but a slash, as the <code>^</code> at the beginning inverts which characters are matched.</p>
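<p>For example, a quick check against the sample names (a sketch):</p>
<pre><code>import re

for s in ('kworker/23:1', 'migration/39', 'qmgr'):
    print(re.match(r'([^\/]+)\/?.*', s).group(1))
# kworker
# migration
# qmgr
</code></pre>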
<p>Alternatively, you can also use <code>split</code> as suggested by Moses Koledoye. <code>split</code> is often better for simple string manipulation, while regex enables you to perform very complex tasks with rather little code.</p>
| 0 | 2016-09-28T19:25:56Z | [
"python",
"regex"
]
|
Using SortedDictionary for .net (imported from C# .dll) | 39,755,973 | <p>I'm currently working on a python (for .NET) project that interacts with a C# .dll. However, something is wrong with the SortedDictionary I'm importing.</p>
<p>This is what I'm doing:</p>
<pre><code>import clr
from System.Collections.Generic import SortedDictionary
sorted_dict = SortedDictionary<int, bool>(1, True)
</code></pre>
<p>I get the following error when calling Count on sorted_dict:</p>
<pre><code>AttributeError: 'tuple' object has no attribute 'Count'
</code></pre>
<p>sorted_dict doesn't allow me to call any of the public member functions I see in the interface (Add, Clear, ContainsKey, etc.). Am I doing this correctly?</p>
| 1 | 2016-09-28T19:21:20Z | 39,756,514 | <p>"In that case it's definitely a syntax issue. You're using C# syntax which the Python interpreter no comprende. I think you want something like SortedDictionary[int, bool] based on some coding examples I just found" @martineau</p>
| 0 | 2016-09-28T19:53:31Z | [
"python",
".net"
]
|
Using SortedDictionary for .net (imported from C# .dll) | 39,755,973 | <p>I'm currently working on a python (for .NET) project that interacts with a C# .dll. However, something is wrong with the SortedDictionary I'm importing.</p>
<p>This is what I'm doing:</p>
<pre><code>import clr
from System.Collections.Generic import SortedDictionary
sorted_dict = SortedDictionary<int, bool>(1, True)
</code></pre>
<p>I get the following error when calling Count on sorted_dict:</p>
<pre><code>AttributeError: 'tuple' object has no attribute 'Count'
</code></pre>
<p>sorted_dict doesn't allow me to call any of the public member functions I see in the interface (Add, Clear, ContainsKey, etc.). Am I doing this correctly?</p>
| 1 | 2016-09-28T19:21:20Z | 39,756,544 | <p>The problem is this:</p>
<pre><code>SortedDictionary<int, bool>(1, True)
</code></pre>
<p>The <code><</code> and <code>></code> symbols in this line are being taken as <em>comparison operators.</em> Python sees you asking for two things:</p>
<pre><code> SortedDictionary < int
bool > (1, True)
</code></pre>
<p>The comma between these expressions makes the results into a tuple, so you get <code>(True, True)</code> as a result. (Python 2.x lets you compare anything; the result may not have any reasonable meaning, as is the case here.)</p>
<p>Clearly, Python does not use the same <code><...></code> syntax as C# for generic types. Instead, you use <code>[...]</code>:</p>
<pre><code>sorted_dict = SortedDictionary[int, bool](1, True)
</code></pre>
<p>This still doesn't work: you get:</p>
<pre><code>TypeError: expected IDictionary[int, bool], got int
</code></pre>
<p>This is because you are trying to instantiate the class with two parameters, when it wants a single parameter that has a dictionary interface. So this will work:</p>
<pre><code>sorted_dict = SortedDictionary[int, bool]({1: True})
</code></pre>
<p>Edit: I originally assumed you were using IronPython. Looks like Python for .NET uses a similar approach, so I believe the above should still work.</p>
| 0 | 2016-09-28T19:55:11Z | [
"python",
".net"
]
|
Explain how pandas DataFrame join works | 39,755,981 | <p>Why does inner join work so strangely in pandas?</p>
<p><strong>For example:</strong></p>
<pre><code>import pandas as pd
import io
t1 = ('key,col1\n'
      '1,a\n'
      '2,b\n'
      '3,c\n'
      '4,d')
t2 = ('key,col2\n'
      '1,e\n'
      '2,f\n'
      '3,g\n'
      '4,h')
df1 = pd.read_csv(io.StringIO(t1), header=0)
df2 = pd.read_csv(io.StringIO(t2), header=0)
print(df1)
print()
print(df2)
print()
print(df2.join(df1, on='key', how='inner', lsuffix='_l'))
</code></pre>
<p><strong>Outputs:</strong></p>
<pre><code>   key col1
0    1    a
1    2    b
2    3    c
3    4    d

   key col2
0    1    e
1    2    f
2    3    g
3    4    h

   key_l col2  key col1
0      1    e    2    b
1      2    f    3    c
2      3    g    4    d
</code></pre>
<p>If I don't specify <code>lsuffix</code>, it says</p>
<pre><code>ValueError: columns overlap but no suffix specified: Index(['key'], dtype='object')
</code></pre>
<p>Does this function work differently from SQL's JOIN? Why does it want to create an extra 'key' column with a suffix? Why are there only 3 rows?
I expected it to output something like this:</p>
<pre><code>   key col1 col2
0    1    a    e
1    2    b    f
2    3    c    g
3    4    d    h
</code></pre>
| 2 | 2016-09-28T19:22:13Z | 39,756,074 | <p>First things first:<br>
What you wanted was merge</p>
<pre><code>df1.merge(df2)
</code></pre>
<p><a href="http://i.stack.imgur.com/HnoA8.png" rel="nofollow"><img src="http://i.stack.imgur.com/HnoA8.png" alt="enter image description here"></a></p>
<hr>
<p><code>join</code> defaults to merging on the <code>index</code>. You can specify the <code>on</code> parameter which only says which column from left side to match with the index of the right side. </p>
<p>These might help illustrate</p>
<pre><code>df1.set_index('key').join(df2.set_index('key'))
</code></pre>
<p><a href="http://i.stack.imgur.com/Mpw4c.png" rel="nofollow"><img src="http://i.stack.imgur.com/Mpw4c.png" alt="enter image description here"></a></p>
<pre><code>df1.join(df2.set_index('key'), on='key')
</code></pre>
<p><a href="http://i.stack.imgur.com/9QQgK.png" rel="nofollow"><img src="http://i.stack.imgur.com/9QQgK.png" alt="enter image description here"></a></p>
<hr>
<p>Your example is matching the <code>key</code> column of <code>df2</code>, which looks like <code>[1, 2, 3, 4]</code>, against the index of <code>df1</code>, which looks like <code>[0, 1, 2, 3]</code>, so the value <code>4</code> finds no partner.<br>
The same mismatch is why you get <code>NaN</code> in <code>col2</code> when <code>key_l</code> is <code>4</code> in the outer join below:</p>
<pre><code>df1.join(df2, on='key', lsuffix='_l', how='outer')
</code></pre>
<p><a href="http://i.stack.imgur.com/Wg93O.png" rel="nofollow"><img src="http://i.stack.imgur.com/Wg93O.png" alt="enter image description here"></a></p>
| 2 | 2016-09-28T19:27:47Z | [
"python",
"python-3.x",
"pandas",
"dataframe"
]
|
How to map hidden states to their corresponding categories after decoding in hmmlearn (Hidden Markov Model)? | 39,756,006 | <p>I would like to predict hidden states using a Hidden Markov Model (decoding problem). The data is categorical. The hidden states include Hungry, Rest, Exercise and Movie. The observation set includes Food, Home, Outdoor & Recreation and Arts & Entertainment. My program first trains the HMM on the observation sequence (Baum-Welch algorithm), and then does the decoding (Viterbi algorithm) to predict the hidden state sequence.</p>
<p>My question is how I can map the result (non-negative integers) to the corresponding categories, like Hungry or Rest. Because of the non-deterministic nature of the training algorithm, the parameters are different for every training run on the same data, so the hidden state sequence is different every time if I do the mapping as in the following code.</p>
<p>The code is as follows:</p>
<pre><code>from __future__ import division
import numpy as np
from hmmlearn import hmm
states = ["Hungry", "Rest", "Exercise", "Movie"]
n_states = len(states)
observations = ["Food", "Home", "Outdoor & Recreation", "Arts & Entertainment"]
# The number in this sequence is the index of observation
category_sequence = [1, 0, 1, 2, 1, 3, 1]
Location = np.array([category_sequence]).T
model = hmm.MultinomialHMM(n_components=n_states).fit(Location)
logprob, result = model.decode(Location)
print "Category:", ", ".join(map(lambda x: observations[x], Location.T[0]))
print "Intent:", ", ".join(map(lambda x: states[x], result))
</code></pre>
 | 0 | 2016-09-28T19:23:37Z | 39,767,561 | <p>This is known as the label-switching problem. The log-likelihood of the model sums over all of the states and is therefore independent of the particular ordering.</p>
<p>As far as I know, there is no general recipe for dealing with it. Among the things you might try are:</p>
<ul>
<li>Find a partially labelled dataset, run <code>predict</code> on it and use the predictions to map the state indices to the corresponding labels. </li>
<li>Come up with heuristics for possible parameter values in each state. This could be tricky to do for Multinomials, but is possible if you model e.g. accelerometer data.</li>
</ul>
<hr>
<p><strong>Update</strong>: An ad-hoc version of guessing the state-to-label mapping from labelled data.</p>
<pre><code>def guess_labels(hmm, X, labels):
    result = [None] * hmm.n_components
    for label, y_t in zip(labels, hmm.predict(X)):
        assigned = result[y_t]
        if assigned is not None:
            # XXX clearly for any real data there might be
            # (and there will be) conflicts. Here we just blindly
            # hope the ``assert`` never fires.
            assert assigned == label
        else:
            result[y_t] = label
    return result
</code></pre>
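<p>Usage might look like this, assuming a hand-labelled stretch of the data (the labels here are hypothetical):</p>
<pre><code># Hypothetical ground-truth labels for the 7 observations in Location
known_labels = ["Rest", "Hungry", "Rest", "Exercise", "Rest", "Movie", "Rest"]
state_names = guess_labels(model, Location, known_labels)
print(state_names)  # e.g. ['Rest', 'Hungry', 'Exercise', 'Movie']
</code></pre>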
| 0 | 2016-09-29T10:17:23Z | [
"python",
"hidden-markov-models",
"hmmlearn"
]
|
MissingSchema: Invalid URL '/': No schema supplied. Perhaps you meant http:///? | 39,756,016 | <pre><code>for l in l1:
    r = requests.get(l)
    html = r.content
    root = lxml.html.fromstring(html)
    urls = root.xpath('//div[@class="media-body"]//@href')
    l2.extend(urls)
</code></pre>
<p>While running the above code, this error comes up. Any solution?</p>
<p>MissingSchemaTraceback (most recent call last)</p>
<p>MissingSchema: Invalid URL '/': No schema supplied. Perhaps you meant <a href="http:///" rel="nofollow">http:///</a>?</p>
| -1 | 2016-09-28T19:24:08Z | 39,756,927 | <pre><code>urls = root.xpath('//div[1]/header/div[3]/nav/ul/li/a/@href')
</code></pre>
<p>These HREFs aren't full URLs; they're essentially just pathnames (i.e. <code>/foo/bar/thing.html</code>).</p>
<p>When you click on one of these links in a browser, the browser is smart enough to prepend the current page's scheme and hostname (i.e. <code>https://host.something.com</code>) to these paths, making a full URL.</p>
<p>But your code isn't doing that; you're trying to use the raw HREF value.</p>
<p>Later on in your program you use <code>urljoin()</code> which solves this issue, but you aren't doing that in the <code>for l in l1:</code> loop. <em>Why not?</em></p>
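<p>A sketch of doing that inside the loop (on Python 2 the import is <code>from urlparse import urljoin</code>):</p>
<pre><code>from urllib.parse import urljoin

import lxml.html
import requests

for l in l1:
    r = requests.get(l)
    root = lxml.html.fromstring(r.content)
    urls = root.xpath('//div[@class="media-body"]//@href')
    # Resolve each relative HREF against the page it was scraped from
    l2.extend(urljoin(l, u) for u in urls)
</code></pre>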
| 0 | 2016-09-28T20:19:46Z | [
"python"
]
|
How to separate a numpy array into separate columns in pandas | 39,756,150 | <p>I have a dataframe that looks like</p>
<pre><code>  ID_0 ID_1  ID_2
0    a    b  0.05
1    a    b  0.10
2    a    b  0.19
3    a    c  0.25
4    a    c  0.40
5    a    c  0.65
6    a    c  0.71
7    d    c  0.95
8    d    c  1.00
</code></pre>
<p>I want to groupby and make a normalized histogram of the ID_2 column for each group. So I do</p>
<pre><code>df.groupby(['ID_0', 'ID_1']).apply(lambda x: np.histogram(x['ID_2'], range = (0,1), density=True)[0]).reset_index(name='ID_2')
</code></pre>
<p>However what I would really like is for the 11 elements of the numpy arrays to be in separate columns of the dataframe. </p>
<p>How can I do this?</p>
| 1 | 2016-09-28T19:32:36Z | 39,756,435 | <p>You can construct a series object from each numpy array and the elements will be broadcasted as columns:</p>
<pre><code>import pandas as pd
import numpy as np
df.groupby(['ID_0', 'ID_1']).apply(lambda x: pd.Series(np.histogram(x['ID_2'], range = (0,1), density=True)[0])).reset_index()
</code></pre>
<p><a href="http://i.stack.imgur.com/fc63F.png" rel="nofollow"><img src="http://i.stack.imgur.com/fc63F.png" alt="enter image description here"></a></p>
| 3 | 2016-09-28T19:48:04Z | [
"python",
"pandas",
"numpy"
]
|
What is the cause of the error in this Python WMI script? (Windows 8.1) (Network adapter configuration) (Python 2.7) | 39,756,172 | <p>I am trying to make a Python script that will set my IP address to a static one instead of a dynamic one. I have searched up methods for this and the WMI implementation for Python seemed to be the best option. The stackoverflow question I got the information about is <a href="http://stackoverflow.com/a/7581831/5899439">here</a>. </p>
<p>I can get the IP address to be set to the static address but then I have to set the DNS server as well. <a href="https://community.spiceworks.com/topic/405339-replace-static-dns-settings-with-wmi-and-powershell" rel="nofollow">This site here</a> is where I got the basis for the DNS setting but it is causing problems. </p>
<p><strong>Traceback from IDLE</strong></p>
<pre><code>Traceback (most recent call last):
File "C:\Users\james_000\Desktop\SetIP.py", line 18, in <module>
c = nic.SetDNSServerSearchOrder(dns)
File "C:\Python27\lib\site-packages\wmi.py", line 431, in __call__
handle_com_error ()
File "C:\Python27\lib\site-packages\wmi.py", line 241, in handle_com_error
raise klass (com_error=err)
x_wmi: <x_wmi: Unexpected COM Error (-2147352567, 'Exception occurred.', (0,
u'SWbemProperty', u'Type mismatch ', None, 0, -2147217403), None)>
</code></pre>
<p><strong>SetIP.py</strong></p>
<pre><code>import wmi
nic_configs = wmi.WMI('').Win32_NetworkAdapterConfiguration(IPEnabled=True)
# First network adaptor
nic = nic_configs[0]
# IP address, subnetmask and gateway values should be unicode objects
ip = u'192.168.0.151'
subnetmask = u'255.255.255.0'
gateway = u'192.168.0.1'
dns = u'192.168.0.1'
# Set IP address, subnetmask and default gateway
# Note: EnableStatic() and SetGateways() methods require *lists* of values to be passed
a = nic.EnableStatic(IPAddress=[ip],SubnetMask=[subnetmask])
b = nic.SetGateways(DefaultIPGateway=[gateway])
c = nic.SetDNSServerSearchOrder(dns)
d = nic.SetDynamicDNSRegistration(True)
print(a)
print(b)
print(c)
print(d)
</code></pre>
<p>Please don't add solutions in comments as it makes it harder for other people to learn about how to fix the problem. </p>
| 0 | 2016-09-28T19:33:53Z | 39,756,598 | <p>SetDNSServerSearchOrder is looking for an array of Strings</p>
<pre><code>c = nic.SetDNSServerSearchOrder(dns)
</code></pre>
<p>should be</p>
<pre><code>c = nic.SetDNSServerSearchOrder([dns])
</code></pre>
| 1 | 2016-09-28T19:57:54Z | [
"python",
"windows",
"wmi"
]
|
Cost function erratically varying | 39,756,186 | <p><strong>Background</strong></p>
<p>I am designing a neural network solution for a multiclass classification problem using tensorflow. The input data consist of 16 features and 6000 training examples to be read from a csv file having 17 columns (16 features + 1 label) and 6000 rows (training examples). I have decided to take 16 neurons in the input layer, 16 neurons in the hidden layer and 16 neurons in the output layer (as it is a 16-class classification). Here is my code for the implementation:</p>
<pre><code>import tensorflow as tf
x=tf.placeholder(tf.float32,shape=[None,16])
y_=tf.placeholder(tf.float32,shape=[None,16])

def weight_variable(shape):
    initial=tf.truncated_normal(shape,stddev=0.1,dtype=tf.float32)
    return tf.Variable(initial)

def bias_variable(shape):
    initial=tf.constant(0.1,shape=shape)
    return tf.Variable(initial)

def read_from_csv(filename_queue):
    reader=tf.TextLineReader()
    key,value=reader.read(filename_queue)
    record_defaults=[[1.], [1.], [1.], [1.], [1.],[1.], [1.], [1.], [1.], [1.],[1.], [1.], [1.], [1.], [1.],[1.],[1.]]
    col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,col12,col13,col14,col15,col16,col17=tf.decode_csv(value,record_defaults=record_defaults)
    features = tf.pack([col1, col2, col3, col4,col5,col6,col7,col8,col9,col10,col11,col12,col13,col14,col15,col16])
    labels=tf.pack([col17])
    return features,labels

def input_pipeline(filenames,batch_size,num_epochs=None):
    filename_queue=tf.train.string_input_producer([filenames],num_epochs=num_epochs,shuffle=True)
    features,labels=read_from_csv(filename_queue)
    min_after_dequeue=100
    capacity=300
    feature_batch,label_batch=tf.train.shuffle_batch([features,labels],batch_size=batch_size,capacity=capacity,min_after_dequeue=min_after_dequeue)
    return feature_batch,label_batch

x,y_=input_pipeline('csvnew1.csv',20,300)

#input layer
W_1=weight_variable([16,16])
b_1=bias_variable([16])
y_1=tf.nn.relu(tf.matmul(x,W_1)+b_1)

#hidden layer
W_2=weight_variable([16,16])
b_2=bias_variable([16])
y_2=tf.nn.softmax(tf.matmul(y_1,W_2)+b_2)

cross_entropy=tf.reduce_mean(-tf.reduce_sum(y_*tf.log(y_2),reduction_indices=[1]))
train_step=tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
correct_prediction=tf.equal(tf.argmax(y_2,1),tf.argmax(y_,1))
accuracy=tf.reduce_mean(tf.cast(correct_prediction,tf.float32))

summary_cross=tf.scalar_summary('cost',cross_entropy)
summaries = tf.merge_all_summaries()

init_op = tf.initialize_all_variables()

# Create a session for running operations in the Graph.
sess = tf.Session()
summary_writer = tf.train.SummaryWriter('stats', sess.graph)

# Initialize the variables (like the epoch counter).
sess.run(init_op)
sess.run(tf.initialize_local_variables())

coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)

count=0
try:
    while not coord.should_stop():
        #print("training....")
        #summary_writer.add_summary(sess.run(summaries), count)
        sess.run(train_step)
        if count in range(300,90000,300):
            print(sess.run(cross_entropy))
        count=count+1
except tf.errors.OutOfRangeError:
    print('Done training -- epoch limit reached')
finally:
    # When done, ask the threads to stop.
    coord.request_stop()

# Wait for threads to finish.
coord.join(threads)
sess.close()
</code></pre>
<p><strong>Problem</strong></p>
<p>The problem here is that, as I print my cost function during training, instead of a generally decreasing trend it increases and decreases quite randomly and erratically. I am pasting the full code because it looks like an implementation problem that I am unable to find. (Varying the learning rate is in vain.)</p>
<p>Edit: Decreasing the learning rate to 10^-12 gives the following costs (still erratic):</p>
<p>201.928, 173.078, 144.212, 97.6255, 133.125, 164.19, 208.571, 208.599, 188.594, 244.078, 237.414, 224.085, 224.1, 206.36, 217.457, 244.083, 246.309, 268.496, 248.517, 272.924, 228.551, 239.637, 301.759, ...</p>
<p>I am printing the cost after every 300 counts because 1 batch = 20 examples, and 6000/20 = 300 counts make 1 epoch, after which weights are updated.</p>
| 1 | 2016-09-28T19:34:39Z | 39,779,564 | <p>Whenever you see your cost function increase when you use gradient descent, you should try <strong>reducing the learning rate parameter</strong>. Try repeatedly decreasing the learning rate by 1/10 until you see the loss decrease monotonically.</p>
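<p>Applied to the code above, that is a one-line change (a sketch; try 0.001, then 0.0001, and so on):</p>
<pre><code>learning_rate = 0.001  # was 0.01; keep dividing by 10 until the cost stops diverging
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
</code></pre>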
| 0 | 2016-09-29T20:37:44Z | [
"python",
"machine-learning",
"neural-network",
"tensorflow"
]
|
Specifying lease time in DHCP request packet | 39,756,217 | <p>I am trying to create a DHCP request packet using scapy. Is there any way in which I can specify the DHCP lease time in my request packet?</p>
| 1 | 2016-09-28T19:36:15Z | 39,756,801 | <p>Put option #51, with the desired lease time as its value, in the <code>options</code> section of the <code>DHCPREQUEST</code> or <code>DHCPDISCOVER</code> packet.</p>
<p>From <a href="http://www.freesoft.org/CIE/RFC/2131/16.htm" rel="nofollow">RFC 2131 Section 3.5</a>:</p>
<blockquote>
<p>In addition, the client may suggest values for the network address and lease time in the DHCPDISCOVER message. The client may include the 'requested IP address' option to suggest that a particular IP address be assigned, and may include the 'IP address lease time' option to suggest the lease time it would like.</p>
</blockquote>
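<p>With scapy, that might look like the following sketch (addresses and the 12-hour lease are illustrative; <code>lease_time</code> is scapy's name for option 51):</p>
<pre><code>from scapy.all import Ether, IP, UDP, BOOTP, DHCP, mac2str

request = (Ether(dst='ff:ff:ff:ff:ff:ff') /
           IP(src='0.0.0.0', dst='255.255.255.255') /
           UDP(sport=68, dport=67) /
           BOOTP(chaddr=mac2str('00:11:22:33:44:55')) /
           DHCP(options=[('message-type', 'request'),
                         ('lease_time', 43200),  # option 51: requested lease, in seconds
                         'end']))
</code></pre>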
| 0 | 2016-09-28T20:11:04Z | [
"python",
"networking",
"scapy",
"dhcp"
]
|
How to access the project root folder | 39,756,254 | <p>Below is a screenshot of my project. While I can access my <code>_files_inner</code> folder quite easily with <code>pd.read_csv("./_files_inner/games.csv")</code> I find it tricky to access my 'root' folder <code>_files</code></p>
<p>If you see my project explorer, I'm trying to access <code>_files</code> without specifying the absolute path (i.e., <code>C:\\Users\\adhg\\dev\\py\\_files\\games.csv</code>) because other developers have different paths.</p>
<p>Question: how do I access the root <code>_files</code> folder with something like this (which doesn't work): <code>pd.read_csv("./_files/games.csv")</code>?</p>
<pre><code>import pandas as pd
csv_result = pd.read_csv("./_files/games.csv") #will not work
csv_result = pd.read_csv("./_files_inner/games.csv") #works
print csv_result
</code></pre>
<p><a href="http://i.stack.imgur.com/wTz4c.png" rel="nofollow"><img src="http://i.stack.imgur.com/wTz4c.png" alt="enter image description here"></a></p>
| 1 | 2016-09-28T19:38:18Z | 39,756,504 | <p><code>try using ../_files_inner/games.csv</code></p>
| 1 | 2016-09-28T19:52:39Z | [
"python",
"eclipse"
]
|
tkinter ttk iterating through treeview | 39,756,379 | <p>I am using a tkinter ttk GUI to present data on files in a server. The information is stored in a ttk treeview and presented as a table. The goal is for the user to be able to filter these rows so that functions can be performed only on those visible in the treeview after the user is done filtering.</p>
<p>Problem is, I can't find a way to iterate through the treeview. I need to be able to do something like this:</p>
<pre><code>def filterTreeview(treeviewToFilter, tvColumn, stringVariable):
    for tvRow in treeviewToFilter:
        if tvRow.getValue(tvColumn) != stringVariable:
            tvRow.detach()
</code></pre>
<p>How can I achieve this?</p>
<p>As a secondary question, does anybody know of a better way to do this? Is there any reason to use a treeview rather than a simple array? What about making the filter on an array of data and then re-creating the treeview table from scratch?</p>
<p>I've spent a lot of time reading tutorials looking for information but I've not been successful in understanding the way to use data in a treeview so far:</p>
<p><a href="http://stackoverflow.com/questions/22032152/python-ttk-treeview-sort-numbers">python ttk treeview sort numbers</a>
<a href="http://www.tkdocs.com/tutorial/tree.html" rel="nofollow">http://www.tkdocs.com/tutorial/tree.html</a></p>
<p><a href="https://fossies.org/dox/Python-3.5.2/classtkinter_1_1ttk_1_1Treeview.html" rel="nofollow">https://fossies.org/dox/Python-3.5.2/classtkinter_1_1ttk_1_1Treeview.html</a> </p>
| 1 | 2016-09-28T19:44:12Z | 39,762,166 | <p>To iterate through a treeview's individual entries, get a list of treeview item 'id's and use that to iterate in a 'for' loop:</p>
<pre><code>#Column integer to match the column which was clicked in the table
col=int(treeview.identify_column(event.x).replace('#',''))-1
#Create list of 'id's
listOfEntriesInTreeView=treeview.get_children()
for each in listOfEntriesInTreeView:
    print(treeview.item(each)['values'][col]) #e.g. prints data in clicked cell
    treeview.detach(each) #e.g. detaches entry from treeview
</code></pre>
<p>This does what I need but if there is a better way, please let me know.</p>
| 0 | 2016-09-29T05:35:12Z | [
"python",
"tkinter",
"treeview",
"ttk"
]
|
How to remove whitespace in a list | 39,756,599 | <p>I can't remove my whitespace in my list. </p>
<pre><code>invoer = "5-9-7-1-7-8-3-2-4-8-7-9"
cijferlijst = []
for cijfer in invoer:
    cijferlijst.append(cijfer.strip('-'))
</code></pre>
<p>I tried the following but it doesn't work. I already made a list from my string and separated everything, but the <code>"-"</code> is now a <code>""</code>.</p>
<pre><code>filter(lambda x: x.strip(), cijferlijst)
filter(str.strip, cijferlijst)
filter(None, cijferlijst)
abc = [x.replace(' ', '') for x in cijferlijst]
</code></pre>
| 0 | 2016-09-28T19:57:54Z | 39,756,657 | <p>Try that:</p>
<pre><code>>>> ''.join(invoer.split('-'))
'597178324879'
</code></pre>
| 4 | 2016-09-28T20:01:40Z | [
"python",
"list"
]
|
How to remove whitespace in a list | 39,756,599 | <p>I can't remove my whitespace in my list. </p>
<pre><code>invoer = "5-9-7-1-7-8-3-2-4-8-7-9"
cijferlijst = []
for cijfer in invoer:
    cijferlijst.append(cijfer.strip('-'))
</code></pre>
<p>I tried the following but it doesn't work. I already made a list from my string and separated everything, but the <code>"-"</code> is now a <code>""</code>.</p>
<pre><code>filter(lambda x: x.strip(), cijferlijst)
filter(str.strip, cijferlijst)
filter(None, cijferlijst)
abc = [x.replace(' ', '') for x in cijferlijst]
</code></pre>
| 0 | 2016-09-28T19:57:54Z | 39,756,659 | <p>This looks a lot like the following question:
<a href="http://stackoverflow.com/questions/3232953/python-removing-spaces-from-list-objects">Python: Removing spaces from list objects</a></p>
<p>The answer there is to use <code>strip</code> instead of <code>replace</code>. Have you tried</p>
<pre><code>abc = [x.strip(' ') for x in cijferlijst]
</code></pre>
| 1 | 2016-09-28T20:01:54Z | [
"python",
"list"
]
|
How to remove whitespace in a list | 39,756,599 | <p>I can't remove my whitespace in my list. </p>
<pre><code>invoer = "5-9-7-1-7-8-3-2-4-8-7-9"
cijferlijst = []
for cijfer in invoer:
    cijferlijst.append(cijfer.strip('-'))
</code></pre>
<p>I tried the following but it doesn't work. I already made a list from my string and separated everything, but the <code>"-"</code> is now a <code>""</code>.</p>
<pre><code>filter(lambda x: x.strip(), cijferlijst)
filter(str.strip, cijferlijst)
filter(None, cijferlijst)
abc = [x.replace(' ', '') for x in cijferlijst]
</code></pre>
| 0 | 2016-09-28T19:57:54Z | 39,756,666 | <p>If you want the numbers in string without <code>-</code>, use <a href="https://docs.python.org/2/library/string.html#string.replace" rel="nofollow"><code>.replace()</code></a> as:</p>
<pre><code>>>> string_list = "5-9-7-1-7-8-3-2-4-8-7-9"
>>> string_list.replace('-', '')
'597178324879'
</code></pre>
<p>If you want the numbers as <code>list</code> of numbers, use <a href="https://docs.python.org/2/library/stdtypes.html#str.split" rel="nofollow"><code>.split()</code></a>: </p>
<pre><code>>>> string_list.split('-')
['5', '9', '7', '1', '7', '8', '3', '2', '4', '8', '7', '9']
</code></pre>
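<p>And if the digits should end up as integers, presumably:</p>
<pre><code>>>> [int(c) for c in string_list.split('-')]
[5, 9, 7, 1, 7, 8, 3, 2, 4, 8, 7, 9]
</code></pre>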
| 2 | 2016-09-28T20:02:09Z | [
"python",
"list"
]
|
How should I make a class that can be used as my main app window, but also can be used as a secondary window | 39,756,617 | <p>I know that I can subclass a tk.Frame (or ttk.Frame) and add that to a TopLevel to make secondary windows, but I'm not sure how I should use that as the main window. I know that creating an instance of a Frame class and calling .mainloop() on it seems to work for using it as the main window, but I feel like that's bad practice...</p>
<p>What do other people do when they are making GUI layouts that they want to have available to main windows and secondary windows?</p>
| -1 | 2016-09-28T19:58:59Z | 39,757,007 | <p>Create a subclass of a Frame, and then put it either in the root window or a toplevel. In either case, you still call <code>mainloop</code> only once on the root window.</p>
<p>The only thing you have to be careful about is letting the user close the root window, because it will cause all of the other windows to be destroyed. </p>
<p>If you're creating a program that can have multiple windows, you might want to consider hiding the root window and always putting your window in a <code>Toplevel</code>. Of course, when you do that you need to make sure you destroy the root window whenever the last toplevel window is destroyed, or your program will continue to run but the user will have no way to access it.</p>
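<p>A minimal sketch of that hide-the-root pattern (Python 2 <code>Tkinter</code> names to match the question; the frame class and window bookkeeping are placeholders):</p>
<pre><code>import Tkinter as tk

root = tk.Tk()
root.withdraw()  # hide the root window; it only drives mainloop

def open_window():
    window = tk.Toplevel(root)
    # MyFrame would be your reusable Frame subclass:
    # MyFrame(window).pack(fill="both", expand=True)
    window.protocol("WM_DELETE_WINDOW", lambda: close_window(window))

def close_window(window):
    window.destroy()
    # destroy the hidden root once the last Toplevel is gone
    if not any(isinstance(w, tk.Toplevel) for w in root.winfo_children()):
        root.destroy()

open_window()
root.mainloop()
</code></pre>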
| 1 | 2016-09-28T20:24:18Z | [
"python",
"tkinter",
"tk",
"toplevel"
]
|
How should I make a class that can be used as my main app window, but also can be used as a secondary window | 39,756,617 | <p>I know that I can subclass a tk.Frame (or ttk.Frame) and add that to a TopLevel to make secondary windows, but I'm not sure how I should use that as the main window. I know that creating an instance of a Frame class and calling .mainloop() on it seems to work for using it as the main window, but I feel like that's bad practice...</p>
<p>What do other people do when they are making GUI layouts that they want to have available to main windows and secondary windows?</p>
| -1 | 2016-09-28T19:58:59Z | 39,760,079 | <p>Do you mean having a home screen that you can flip back to? If so, you can try looking here: <a href="http://stackoverflow.com/questions/14817210/using-buttons-in-tkinter-to-navigate-to-different-pages-of-the-application">Using buttons in Tkinter to navigate to different pages of the application?</a></p>
| 1 | 2016-09-29T01:42:47Z | [
"python",
"tkinter",
"tk",
"toplevel"
]
|
Make a non-blocking request with requests when running Flask with Gunicorn and Gevent | 39,756,807 | <p>My Flask application will receive a request, do some processing, and then make a request to a slow external endpoint that takes 5 seconds to respond. It looks like running Gunicorn with Gevent will allow it to handle many of these slow requests at the same time. How can I modify the example below so that the view is non-blocking?</p>
<pre><code>import requests
@app.route('/do', methods = ['POST'])
def do():
result = requests.get('slow api')
return result.content
</code></pre>
<pre class="lang-none prettyprint-override"><code>gunicorn server:app -k gevent -w 4
</code></pre>
| 10 | 2016-09-28T20:11:22Z | 39,903,915 | <p>You can use <code>grequests</code>. It allows other greenlets to run while the request is made. It is compatible with the <code>requests</code> library and returns a <code>requests.Response</code> object. The usage is as follows:</p>
<pre><code>import grequests
@app.route('/do', methods = ['POST'])
def do():
result = grequests.map([grequests.get('slow api')])
return result[0].content
</code></pre>
<p>Edit: I've added a test and saw that the time didn't improve with grequests since gunicorn's gevent worker already performs monkey-patching when it is initialized: <a href="https://github.com/benoitc/gunicorn/blob/master/gunicorn/workers/ggevent.py#L65" rel="nofollow">https://github.com/benoitc/gunicorn/blob/master/gunicorn/workers/ggevent.py#L65</a></p>
| 0 | 2016-10-06T19:10:04Z | [
"python",
"flask",
"python-requests",
"gunicorn",
"gevent"
]
|
Make a non-blocking request with requests when running Flask with Gunicorn and Gevent | 39,756,807 | <p>My Flask application will receive a request, do some processing, and then make a request to a slow external endpoint that takes 5 seconds to respond. It looks like running Gunicorn with Gevent will allow it to handle many of these slow requests at the same time. How can I modify the example below so that the view is non-blocking?</p>
<pre><code>import requests
@app.route('/do', methods = ['POST'])
def do():
result = requests.get('slow api')
return result.content
</code></pre>
<pre class="lang-none prettyprint-override"><code>gunicorn server:app -k gevent -w 4
</code></pre>
| 10 | 2016-09-28T20:11:22Z | 39,911,201 | <p>First, a bit of background: a blocking socket is the default kind of socket; once you start reading, your app or thread does not regain control until data is actually read, or you are disconnected. This is how <code>python-requests</code> operates by default. There is a spin-off called <code>grequests</code> which provides non-blocking reads.</p>
<blockquote>
<p>The major mechanical difference is that send, recv, connect and accept
can return without having done anything. You have (of course) a number
of choices. You can check return code and error codes and generally
drive yourself crazy. If you don't believe me, try it sometime</p>
</blockquote>
<p>Source: <a href="https://docs.python.org/2/howto/sockets.html" rel="nofollow">https://docs.python.org/2/howto/sockets.html</a></p>
<p>It also goes on to say:</p>
<blockquote>
<p>There's no question that the fastest sockets code uses non-blocking
sockets and select to multiplex them. You can put together something
that will saturate a LAN connection without putting any strain on the
CPU. The trouble is that an app written this way can't do much of
anything else - it needs to be ready to shuffle bytes around at all
times.</p>
<p>Assuming that your app is actually supposed to do something more than
that, threading is the optimal solution</p>
</blockquote>
<p>But do you want to add a whole lot of complexity to your view by having it spawn its own threads? Particularly when gunicorn has <a href="http://docs.gunicorn.org/en/stable/design.html#async-workers" rel="nofollow">async workers</a>?</p>
<blockquote>
<p>The asynchronous workers available are based on Greenlets (via
Eventlet and Gevent). Greenlets are an implementation of cooperative
multi-threading for Python. In general, an application should be able
to make use of these worker classes with no changes.</p>
</blockquote>
<p>and</p>
<blockquote>
<p>Some examples of behavior requiring asynchronous workers: Applications
making long blocking calls (Ie, external web services)</p>
</blockquote>
<p>So to cut a long story short, don't change anything! Just let it be. If you are making any changes at all, let it be to introduce caching. Consider using <a href="https://cachecontrol.readthedocs.io/en/latest/" rel="nofollow">Cache-control</a> an extension recommended by python-requests developers.</p>
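<p>A minimal sketch of that caching setup, following the CacheControl documentation (the wrapped session is a drop-in replacement for a plain <code>requests.Session</code>):</p>
<pre><code>import requests
from cachecontrol import CacheControl

sess = CacheControl(requests.Session())
result = sess.get('slow api')  # repeat hits can be served from cache
</code></pre>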
| 1 | 2016-10-07T06:56:50Z | [
"python",
"flask",
"python-requests",
"gunicorn",
"gevent"
]
|
Make a non-blocking request with requests when running Flask with Gunicorn and Gevent | 39,756,807 | <p>My Flask application will receive a request, do some processing, and then make a request to a slow external endpoint that takes 5 seconds to respond. It looks like running Gunicorn with Gevent will allow it to handle many of these slow requests at the same time. How can I modify the example below so that the view is non-blocking?</p>
<pre><code>import requests
@app.route('/do', methods = ['POST'])
def do():
result = requests.get('slow api')
return result.content
</code></pre>
<pre class="lang-none prettyprint-override"><code>gunicorn server:app -k gevent -w 4
</code></pre>
| 10 | 2016-09-28T20:11:22Z | 39,941,098 | <p>If you're deploying your Flask application with gunicorn, it is already non-blocking. If a client is waiting on a response from one of your views, another client can make a request to the same view without a problem. There will be multiple workers to process multiple requests concurrently. No need to change your code for this to work. This also goes for pretty much every Flask deployment option.</p>
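<p>In other words, the concurrency comes from the deployment command rather than the view code. If you need more headroom, that is a configuration change, not a code change (worker count and, for gevent workers, connections per worker - both are standard gunicorn settings):</p>
<pre class="lang-none prettyprint-override"><code>gunicorn server:app -k gevent -w 4 --worker-connections 1000
</code></pre>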
| 1 | 2016-10-09T07:27:39Z | [
"python",
"flask",
"python-requests",
"gunicorn",
"gevent"
]
|
Why can't I change the icon on a tkMessagebox.askyesno() on OS X? | 39,756,822 | <p><code>tkMessageBox.askyesno('Title', 'Message', icon=tkMessageBox.WARNING)</code> on OS X just gives me the rocket icon.</p>
<p>I know there is some weirdness with OS X and tkMessageBox icons because <code>tkMessageBox.showerror()</code> just shows the rocket icon, but <code>tkMessageBox.showwarning</code> shows a yellow triangle (with a small rocket in the corner)</p>
<p>Is this is a bug?</p>
<p>Is there some workaround to get a warning triangle and Yes/No buttons without having to resort to making my own message box window from scratch?</p>
| 1 | 2016-09-28T20:12:49Z | 39,757,032 | <p>I found a solution:</p>
<pre><code>tkMessageBox.askretrycancel(title, message, type=tkMessageBox.YESNO)
</code></pre>
<p>seems to work, but <strong>both button presses return <code>False</code>, so it's not of any use.</strong></p>
<pre><code>tkMessageBox.showwarning(title, message, type=tkMessageBox.YESNO)
</code></pre>
<p><strong>also works</strong>, but be aware that it returns <code>'yes'</code> or <code>'no'</code>, not <code>True</code> or <code>False</code>. It's the only real option though.</p>
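<p>A minimal usage sketch of that workaround (<code>do_the_thing</code> is a hypothetical handler):</p>
<pre><code>answer = tkMessageBox.showwarning(title, message, type=tkMessageBox.YESNO)
if answer == 'yes':
    do_the_thing()  # note: string comparison, not a boolean check
</code></pre>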
<hr>
<p>I would still be interested if anyone can tell me whether it is a bug.</p>
| 0 | 2016-09-28T20:26:25Z | [
"python",
"osx",
"tkinter",
"tkmessagebox"
]
|
Using parameters in the XPath | 39,756,842 | <p>I am trying to use parameters in the xpath expression but no luck.</p>
<pre><code>field = "//*[@id=%s]/optgroup[@label=%s]/*[contains(@title, %s)]"%(FIELDTYPE, label, fieldtype)
</code></pre>
<p>What am I doing wrong?</p>
| 0 | 2016-09-28T20:14:37Z | 39,756,871 | <p>Don't forget about the quotes around the placeholders:</p>
<pre><code>"//*[@id='%s']/optgroup[@label='%s']/[contains(@title, '%s')]" % (FIELDTYPE, label, fieldtype)
</code></pre>
<p>Also note the <code>*</code> after the <code>//</code>.</p>
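<p>A usage sketch with Selenium, assuming a <code>driver</code> instance (the variable names come from the question):</p>
<pre><code>xpath = "//*[@id='%s']/optgroup[@label='%s']/*[contains(@title, '%s')]" % (FIELDTYPE, label, fieldtype)
field = driver.find_element_by_xpath(xpath)
</code></pre>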
| 3 | 2016-09-28T20:16:13Z | [
"python",
"selenium",
"xpath"
]
|
Python Schematics model conversion error rogue field | 39,756,878 | <p>Please help me on this, I have a model like this which I wanted to import,</p>
<pre><code>cricket:
players:
1 : Mike
2 : Mark
3 : Miller
scores:
12: 222
13: 255
15: 555
</code></pre>
<p>When using <code>DictType(ModelType(CricModel), default=None, deserialize_from=('cricket', 'params'))</code> it's throwing an error.</p>
<p>Am I doing it correctly?</p>
| 1 | 2016-09-28T20:16:29Z | 39,794,666 | <p>You need to use <code>ModelType(ModelName)</code> rather than <code>DictType</code>.</p>
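<p>A minimal sketch of what that could look like - the field types here are assumptions inferred from the sample data, not taken from your code:</p>
<pre><code>from schematics.models import Model
from schematics.types import StringType, IntType
from schematics.types.compound import ModelType, DictType

class CricModel(Model):
    players = DictType(StringType)  # {1: 'Mike', 2: 'Mark', ...}
    scores = DictType(IntType)      # {12: 222, 13: 255, ...}

class Payload(Model):
    cricket = ModelType(CricModel)  # ModelType, not DictType(ModelType(...))
</code></pre>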
| 0 | 2016-09-30T15:06:58Z | [
"python",
"python-2.7",
"python-3.x",
"pip",
"pycharm"
]
|
Stuck defining composition function in python | 39,756,890 | <p>input: two functions f and g, represented as dictionaries, such that g ∘ f exists. output: dictionary that represents the function g ∘ f.
example: given f = {0:'a', 1:'b'} and g = {'a':'apple', 'b':'banana'}, return {0:'apple', 1:'banana'}.</p>
<p>The closest to the correct answer I have is with {i:g[j] for i in f for j in g} which outputs {0: 'apple', 1: 'apple'}. What am I doing wrong?</p>
| 1 | 2016-09-28T20:17:06Z | 39,756,948 | <p>You need to just iterate over one dict <code>f</code>, and in the comprehension, replace the value of f with the value of <code>g</code> </p>
<pre><code>f = {0:'a', 1:'b'}
g = {'a':'apple', 'b': 'banana'}
new_dict = {k: g[v] for k, v in f.items()}
# Value of new_dict = {0: 'apple', 1: 'banana'}
</code></pre>
| 0 | 2016-09-28T20:21:31Z | [
"python",
"composition"
]
|
Stuck defining composition function in python | 39,756,890 | <p>input: two functions f and g, represented as dictionaries, such that g ∘ f exists. output: dictionary that represents the function g ∘ f.
example: given f = {0:'a', 1:'b'} and g = {'a':'apple', 'b':'banana'}, return {0:'apple', 1:'banana'}.</p>
<p>The closest to the correct answer I have is with {i:g[j] for i in f for j in g} which outputs {0: 'apple', 1: 'apple'}. What am I doing wrong?</p>
| 1 | 2016-09-28T20:17:06Z | 39,756,980 | <p>The correct dict comprehension would be:</p>
<pre><code>{i:g[f[i]] for i in f}
</code></pre>
<p>You did:</p>
<pre><code>{i:g[j] for i in f for j in g}
</code></pre>
<p>That mapped every <code>i</code> in <code>f</code> to every value in <code>g</code>, and then the dict removed duplicate keys.</p>
<p>To see what is going on, try generating a list instead:</p>
<pre><code>>>> [(i, g[j]) for i in f for j in g]
[(0, 'apple'), (0, 'banana'), (1, 'apple'), (1, 'banana')]
</code></pre>
<p>In the correct case:</p>
<pre><code>>>> [(i, g[f[i]]) for i in f]
[(0, 'apple'), (1, 'banana')]
</code></pre>
| 0 | 2016-09-28T20:22:54Z | [
"python",
"composition"
]
|
How to check HTTP errors for more than two URLs? | 39,756,947 | <p><strong><em>Question: I've 3 URLs - testurl1, testurl2 and testurl3. I'd like to try testurl1 first; if I get a 404 error, then try testurl2; if that gets a 404 error, then try testurl3. How to achieve this? So far I've tried the below, but that works only for two URLs - how to add support for a third URL?</em></strong></p>
<pre><code>from urllib2 import Request, urlopen
from urllib2 import URLError, HTTPError
def checkfiles():
req = Request('http://testurl1')
try:
response = urlopen(req)
url1=('http://testurl1')
except HTTPError, URLError:
url1 = ('http://testurl2')
print url1
finalURL='wget '+url1+'/testfile.tgz'
print finalURL
checkfiles()
</code></pre>
| 2 | 2016-09-28T20:21:28Z | 39,757,025 | <p>Another job for plain old for loop:</p>
<pre><code>for url in testurl1, testurl2, testurl3:
req = Request(url)
try:
response = urlopen(req)
    except HTTPError as err:
if err.code == 404:
continue
raise
else:
# do what you want with successful response here (or outside the loop)
break
else:
# They ALL errored out with HTTPError code 404. Handle this?
raise err
</code></pre>
| 2 | 2016-09-28T20:25:52Z | [
"python",
"http-error"
]
|
How to check HTTP errors for more than two URLs? | 39,756,947 | <p><strong><em>Question: I've 3 URLs - testurl1, testurl2 and testurl3. I'd like to try testurl1 first; if I get a 404 error, then try testurl2; if that gets a 404 error, then try testurl3. How to achieve this? So far I've tried the below, but that works only for two URLs - how to add support for a third URL?</em></strong></p>
<pre><code>from urllib2 import Request, urlopen
from urllib2 import URLError, HTTPError
def checkfiles():
req = Request('http://testurl1')
try:
response = urlopen(req)
url1=('http://testurl1')
except HTTPError, URLError:
url1 = ('http://testurl2')
print url1
finalURL='wget '+url1+'/testfile.tgz'
print finalURL
checkfiles()
</code></pre>
| 2 | 2016-09-28T20:21:28Z | 39,757,114 | <p>Hmmm maybe something like this?</p>
<pre><code>from urllib2 import Request, urlopen
from urllib2 import URLError, HTTPError
def checkfiles():
    try:
        # try the first URL
        response = urlopen(Request('http://testurl1'))
        url1 = 'http://testurl1'
    except (HTTPError, URLError):
        try:
            # fall back to the second URL
            response = urlopen(Request('http://testurl2'))
            url1 = 'http://testurl2'
        except (HTTPError, URLError):
            # last resort: use the third URL
            url1 = 'http://testurl3'
    print url1
    finalURL = 'wget ' + url1 + '/testfile.tgz'
    print finalURL
checkfiles()
</code></pre>
| 0 | 2016-09-28T20:31:16Z | [
"python",
"http-error"
]
|
comparing strings in list to strings in list | 39,757,126 | <p>I see that the code below can check if a word is in some other list: </p>
<pre><code>list1 = 'this'
compSet = [ 'this','that','thing' ]
if any(list1 in s for s in compSet): print(list1)
</code></pre>
<p>Now I want to check if a word in a list is in some other list as below:</p>
<pre><code>list1 = ['this', 'and', 'that' ]
compSet = [ 'check','that','thing' ]
</code></pre>
<p>What's the best way to check if words in list1 are in compSet, and to do something with the non-existing elements, e.g., appending 'and' to compSet or deleting 'and' from list1?</p>
<p>__________________update___________________</p>
<p>I just found that doing the same thing is not working with sys.path. The code below sometimes works to add the path to sys.path, and sometimes doesn't.</p>
<pre><code>myPath = '/some/my path/is here'
if not any( myPath in s for s in sys.path):
sys.path.insert(0, myPath)
</code></pre>
<p>Why is this not working? Also, if I want to do the same operation on a set of my paths, </p>
<pre><code>myPaths = [ '/some/my path/is here', '/some/my path2/is here' ...]
</code></pre>
<p>How can I do it?</p>
| 7 | 2016-09-28T20:32:09Z | 39,757,180 | <p>Try this:</p>
<pre><code> >>> l = list(set(list1)-set(compSet))
>>> l
['this', 'and']
</code></pre>
| 1 | 2016-09-28T20:35:14Z | [
"python",
"list"
]
|
comparing strings in list to strings in list | 39,757,126 | <p>I see that the code below can check if a word is in some other list: </p>
<pre><code>list1 = 'this'
compSet = [ 'this','that','thing' ]
if any(list1 in s for s in compSet): print(list1)
</code></pre>
<p>Now I want to check if a word in a list is in some other list as below:</p>
<pre><code>list1 = ['this', 'and', 'that' ]
compSet = [ 'check','that','thing' ]
</code></pre>
<p>What's the best way to check if words in list1 are in compSet, and to do something with the non-existing elements, e.g., appending 'and' to compSet or deleting 'and' from list1?</p>
<p>__________________update___________________</p>
<p>I just found that doing the same thing is not working with sys.path. The code below sometimes works to add the path to sys.path, and sometimes doesn't.</p>
<pre><code>myPath = '/some/my path/is here'
if not any( myPath in s for s in sys.path):
sys.path.insert(0, myPath)
</code></pre>
<p>Why is this not working? Also, if I want to do the same operation on a set of my paths, </p>
<pre><code>myPaths = [ '/some/my path/is here', '/some/my path2/is here' ...]
</code></pre>
<p>How can I do it?</p>
| 7 | 2016-09-28T20:32:09Z | 39,757,182 | <p>There is a simple way to check for the intersection of two lists: convert them to a set and use <code>intersection</code>:</p>
<pre><code>>>> list1 = ['this', 'and', 'that' ]
>>> compSet = [ 'check','that','thing' ]
>>> set(list1).intersection(compSet)
{'that'}
</code></pre>
<p>You can also use bitwise operators:</p>
<p>Intersection:</p>
<pre><code>>>> set(list1) & set(compSet)
{'that'}
</code></pre>
<p>Union:</p>
<pre><code>>>> set(list1) | set(compSet)
{'this', 'and', 'check', 'thing', 'that'}
</code></pre>
<p>You can make any of these results a list using <code>list()</code>.</p>
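<p>For example:</p>
<pre><code>>>> list(set(list1) & set(compSet))
['that']
</code></pre>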
| 8 | 2016-09-28T20:35:20Z | [
"python",
"list"
]
|
Distinct rows from a list of dictionaries based on values | 39,757,176 | <p>I have this sample input below <strong>sampleInputDbData</strong></p>
<pre><code>def sampleInputDbData( self ):
return \
[
{'FundCode': 300, 'FundName': 'First Fund', 'ProdStartDate': dt(2016,7,3,4,5,6), 'ProdEndDate': dt(2016,8,3,4,5,6), 'FundFee': 100},
{'FundCode': 300, 'FundName': 'First Fund', 'ProdStartDate': dt(2016,8,3,4,5,6), 'ProdEndDate': dt(2016,8,3,6,5,6), 'FundFee': 101 },
{'FundCode': 300, 'FundName': 'First Fund', 'ProdStartDate': dt(2016,8,3,6,5,6), 'ProdEndDate': dt(2016,8,15,6,5,6), 'FundFee': 102 },
{'FundCode': 301, 'FundName': 'Second Fund', 'ProdStartDate': dt(2016,7,3,4,5,6), 'ProdEndDate': dt(2016,8,3,6,5,6), 'FundFee': 110},
{'FundCode': 301, 'FundName': 'Second Fund', 'ProdStartDate': dt(2016,8,3,6,5,6), 'ProdEndDate': dt(2016,8,15,6,5,6), 'FundFee': 111},
{'FundCode': 302, 'FundName': 'Third Fund', 'ProdStartDate': dt(2016,8,3,6,5,6), 'ProdEndDate': dt(2016,8,15,6,5,6), 'FundFee': 120},
]
</code></pre>
<p>What I want is this <strong>sampleOutputDbData</strong> as output</p>
<pre><code>def sampleOutputDbData( self ):
return \
[
{'FundCode': 300, 'FundName': 'First Fund', 'ProdStartDate': dt(2016,8,3,6,5,6), 'ProdEndDate': dt(2016,8,15,6,5,6), 'FundFee': 102 },
{'FundCode': 301, 'FundName': 'Second Fund', 'ProdStartDate': dt(2016,8,3,6,5,6), 'ProdEndDate': dt(2016,8,15,6,5,6), 'FundFee': 111},
{'FundCode': 302, 'FundName': 'Third Fund', 'ProdStartDate': dt(2016,8,3,6,5,6), 'ProdEndDate': dt(2016,8,15,6,5,6), 'FundFee': 120},
]
</code></pre>
<p>The decision factor is basically: get all unique <strong>FundCode</strong> values based on the max value of the key <strong>ProdEndDate</strong>. <code>dt</code> is the <code>datetime</code> type.</p>
| 1 | 2016-09-28T20:35:06Z | 39,757,344 | <p>This works:</p>
<pre><code>from collections import defaultdict
from operator import itemgetter
code_dict = defaultdict(list)
for d in sampleInputDbData:
code_dict[d["FundCode"]].append(d)
output_data = [max(d, key=itemgetter("ProdEndDate")) for d in code_dict.values()]
</code></pre>
<p>I first create a default dict for temporary grouping by <code>FundCode</code>. Each key will contain a list of all dicts with the same <code>FundCode</code>. Then, I take the dict with the max <code>ProdEndDate</code> from each list.</p>
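<p>As a quick check with the sample input from the question, the selected rows carry the expected fees:</p>
<pre><code>>>> sorted(d['FundFee'] for d in output_data)
[102, 111, 120]
</code></pre>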
| 0 | 2016-09-28T20:45:59Z | [
"python"
]
|
How can I make a python candlestick chart clickable in matplotlib | 39,757,188 | <p>I am trying to make a OHLC graph plotted with matplotlib interactive upon the user clicking on a valid point. The data is stored as a pandas dataframe of the form </p>
<pre><code>index PX_BID PX_ASK PX_LAST PX_OPEN PX_HIGH PX_LOW
2016-07-01 1.1136 1.1137 1.1136 1.1106 1.1169 1.1072
2016-07-04 1.1154 1.1155 1.1154 1.1143 1.1160 1.1098
2016-07-05 1.1076 1.1077 1.1076 1.1154 1.1186 1.1062
2016-07-06 1.1100 1.1101 1.1100 1.1076 1.1112 1.1029
2016-07-07 1.1062 1.1063 1.1063 1.1100 1.1107 1.1053
</code></pre>
<p>I am plotting it with matplotlib's candlestick function:</p>
<pre><code>candlestick2_ohlc(ax1, df['PX_OPEN'],df['PX_HIGH'],df['PX_LOW'],df['PX_LAST'],width=1)
</code></pre>
<p>When plotted it looks something like this:</p>
<p><a href="https://pythonprogramming.net/static/images/matplotlib/candlestick-ohlc-graphs-matplotlib-tutorial.png" rel="nofollow">https://pythonprogramming.net/static/images/matplotlib/candlestick-ohlc-graphs-matplotlib-tutorial.png</a></p>
<p>I want the console to print out the value of the point clicked, the date and whether it is an open, high low or close. So far I have something like:</p>
<pre><code>fig, ax1 = plt.subplots()
ax1.set_title('click on points', picker=True)
ax1.set_ylabel('ylabel', picker=True, bbox=dict(facecolor='red'))
line = candlestick2_ohlc(ax1, df['PX_OPEN'],df['PX_HIGH'],df['PX_LOW'],df['PX_LAST'],width=0.4)
def onpick1(event):
if isinstance(event.artist, (lineCollection, barCollection)):
thisline = event.artist
xdata = thisline.get_xdata()
ydata = thisline.get_ydata()
ind = event.ind
#points = tuple(zip(xdata[ind], ydata[ind]))
#print('onpick points:', points)
print( 'X='+str(np.take(xdata, ind)[0]) ) # Print X point
print( 'Y='+str(np.take(ydata, ind)[0]) ) # Print Y point
fig.canvas.mpl_connect('pick_event', onpick1)
plt.show()
</code></pre>
<p>This code however does not print anything when run and points are clicked. When I look at examples of interactive matplotlib graphs, they tend to have an argument in the plot function such as:</p>
<pre><code>line, = ax.plot(rand(100), 'o', picker=5)
</code></pre>
<p>However, candlestick2_ohlc does not take a 'picker' arg. Any tips on how I can get around this?</p>
<p>Thanks</p>
| 2 | 2016-09-28T20:35:49Z | 39,757,585 | <p>You need to set <code>set_picker(True)</code> to enable a pick event or give a tolerance in points as a float (see <a href="http://matplotlib.org/api/artist_api.html#matplotlib.artist.Artist.set_picker" rel="nofollow">http://matplotlib.org/api/artist_api.html#matplotlib.artist.Artist.set_picker</a>).</p>
<p>So in your case <code>ax1.set_picker(True)</code> if you want pick event to be fired whenever if mouseevent is over <code>ax1</code>.</p>
<p>You can enable pick events on the elements of the candlestick chart. I read the documentation and <a href="http://matplotlib.org/api/finance_api.html#matplotlib.finance.candlestick2_ohlc" rel="nofollow"><code>candlestick2_ohlc</code></a> returns a tuple of two objects: a <code>LineCollection</code> and a <code>PolyCollection</code>. So you can name these objects and set the picker to true on them</p>
<pre><code>(lines,polys) = candlestick2_ohlc(ax1, ...)
lines.set_picker(True) # collection of lines in the candlestick chart
polys.set_picker(True) # collection of polygons in the candlestick chart
</code></pre>
<p>The index of the event <code>ind = event.ind[0]</code> will tell you which element in the collection contained the mouse event (<code>event.ind</code> returns a list of indices since a mouse event might pertain to more than one item).</p>
<p>Once you trigger a pick event on a candlestick you can print the data from the original dataframe.</p>
<p>Here's some working code</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection, PolyCollection
from matplotlib.text import Text
from matplotlib.finance import candlestick2_ohlc
import numpy as np
import pandas as pd
np.random.seed(0)
dates = pd.date_range('20160101',periods=7)
df = pd.DataFrame(np.reshape(1+np.random.random_sample(42)*0.1,(7,6)),index=dates,columns=["PX_BID","PX_ASK","PX_LAST","PX_OPEN","PX_HIGH","PX_LOW"])
df['PX_HIGH']+=.1
df['PX_LOW']-=.1
fig, ax1 = plt.subplots()
ax1.set_title('click on points', picker=20)
ax1.set_ylabel('ylabel', picker=20, bbox=dict(facecolor='red'))
(lines,polys) = candlestick2_ohlc(ax1, df['PX_OPEN'],df['PX_HIGH'],df['PX_LOW'],df['PX_LAST'],width=0.4)
lines.set_picker(True)
polys.set_picker(True)
def onpick1(event):
if isinstance(event.artist, (Text)):
text = event.artist
print 'You clicked on the title ("%s")' % text.get_text()
elif isinstance(event.artist, (LineCollection, PolyCollection)):
thisline = event.artist
mouseevent = event.mouseevent
ind = event.ind[0]
print 'You clicked on item %d' % ind
print 'Day: ' + df.index[ind].normalize().to_datetime().strftime('%Y-%m-%d')
for p in ['PX_OPEN','PX_OPEN','PX_HIGH','PX_LOW']:
print p + ':' + str(df[p][ind])
print('x=%d, y=%d, xdata=%f, ydata=%f' %
( mouseevent.x, mouseevent.y, mouseevent.xdata, mouseevent.ydata))
fig.canvas.mpl_connect('pick_event', onpick1)
plt.show()
</code></pre>
| 1 | 2016-09-28T21:01:11Z | [
"python",
"pandas",
"matplotlib",
"graph",
"interactive"
]
|
using django-registration form instead of own | 39,757,258 | <p>So I am using <code>django-registration</code> for email activation and the like. I defined my form and am calling it properly; however, for some reason it is using the base form out of <code>django-registration</code> instead of my own. I have absolutely no clue as to why this is happening, any thoughts?</p>
<p>views.py</p>
<pre><code>from django.shortcuts import render, render_to_response
from django.http import HttpResponseRedirect
from django.template.context_processors import csrf
from django.contrib.auth.models import User
from .forms import UserForm
def register(request):
if request.method == 'POST':
form = UserForm(request.POST)
if form.is_valid():
form.save()
return HttpResponseRedirect('/accounts/register/complete')
else:
form = UserForm()
token = {}
token.update(csrf(request))
token['form'] = form
return render('registration/registration_form.html', token)
def registration_complete(request):
return render('registration/registration_complete.html')
</code></pre>
<p>urls.py</p>
<pre><code>from django.conf.urls import include, url
from django.contrib import admin
from main import views
urlpatterns = [
# Registration URLs
url(r'^accounts/', include('registration.backends.hmac.urls')),
url(r'^accounts/register/$', views.register, name='register'),
url(r'^accounts/register/complete/$', views.registration_complete, name='registration_complete'),
url(r'^admin/', admin.site.urls),
url(r'^$', views.index, name='index'),
url(r'^contact/$', views.contact, name='contact'),
url(r'^login/$', views.login, name='login'),
url(r'^register/$', views.register, name='register'),
url(r'^profile/$', views.profile, name='profile'),
url(r'^blog/$', views.blog, name='blog'),
</code></pre>
<p>forms.py</p>
<pre><code>from django import forms
from django.contrib.auth.models import User
from django.contrib.auth.forms import UserCreationForm
from django.utils.translation import gettext as _
from registration.forms import RegistrationFormUniqueEmail, RegistrationForm
class UserForm(UserCreationForm):
email = forms.EmailField(required=True)
first_name = forms.CharField(label=_("Firstname"), required=False)
last_name = forms.CharField(label=_("Lastname"), required=False)
class Meta:
model = User
fields = ('username', 'first_name', 'last_name', 'email', 'password1', 'password2')
def save(self, commit=True):
user = super(UserForm, self).save(commit=False)
user.first_name = self.cleaned_data['first_name']
user.last_name = self.cleaned_data['last_name']
user.email = self.cleaned_data['email']
if commit:
user.save()
return user
</code></pre>
<p>template</p>
<pre><code>{% extends "base.html" %}
{% block title %}Register{% endblock %}
{% block content %}
<h2>Registration</h2>
<form action="/accounts/register/" method="post">{% csrf_token %}
{{ form.as_p }}
<input type="submit" value="Register" />
</form>
{% endblock %}
</code></pre>
| 1 | 2016-09-28T20:41:20Z | 39,757,683 | <p>Since <code>urlpatterns</code> has that order:</p>
<pre><code>url(r'^accounts/', include('registration.backends.hmac.urls')),
url(r'^accounts/register/$', views.register, name='register'),
url(r'^accounts/register/complete/$', views.registration_complete, name='registration_complete'),
</code></pre>
<p>The URL pattern from <code>url(r'^accounts/', include('registration.backends.hmac.urls')),</code> matches your <code>/accounts/register/</code> URL first, so the request never reaches your <code>views.register</code> view. To change that, you need to reorder the patterns so that <code>urlpatterns</code> becomes</p>
<pre><code>urlpatterns = [
url(r'^accounts/register/$', views.register, name='register'),
url(r'^accounts/register/complete/$', views.registration_complete, name='registration_complete'),
url(r'^accounts/', include('registration.backends.hmac.urls')),
...
]
</code></pre>
| 1 | 2016-09-28T21:07:55Z | [
"python",
"django",
"forms",
"django-registration"
]
|
strftime function pandas - Python | 39,757,293 | <p>I am trying to strip the time out of the date when the dataframe writes to Excel but have been unsuccessful. The weird thing is, I am able to manipulate the dataframe by using the lambda and strftime function below and can see that the function is working from my 'print df6'. However, for some reason when I open the Excel file it shows in the format '%m%d%y: 12:00am'. I would like to just have the date show. Ideas?</p>
<p>Here is my code:</p>
<pre><code>df6 = df
workbook = writer.book
df6.to_excel(writer, sheet_name= 'Frogs', startrow= 4, index=False)
df6['ASOFDATE'] = df6['ASOFDATE'].apply(lambda x:x.date().strftime('%d%m%y'))
worksheet = writer.sheets['Frogs']
format5 = workbook.add_format({'num_format': '0.00%'})
#format6= workbook.add_format({'num_format': 'mmm d yyyy'})
#worksheet.set_column('A:A', 18, format6)
worksheet.set_column('B:B', 18, format5)
worksheet.set_column('C:C', 18, format5)
worksheet.set_column('D:D', 18, format5)
now = datetime.now()
date= dt.datetime.today().strftime("%m/%d/%Y")
link_format = workbook.add_format({'color': 'black', 'bold': 1})
nrows = df6.shape[0]
worksheet.write(0, 0, 'Chart4', link_format)
worksheet.write(1, 0, 'Frogs',link_format)
#Multiply dataframe by 100 to get out of decimals (this method is used when dates are in the picture)
df6[df6.columns[np.all(df6.gt(0) & df6.lt(1),axis=0)]] *= 100
print df6
writer.save()
</code></pre>
| 1 | 2016-09-28T20:43:10Z | 39,758,866 | <p><code>strftime</code> creates a textual representation of a datetime value; actual datetime values written to the sheet are formatted and displayed by Excel according to its formatting templates. We can change that formatting template using the <code>date_format</code> parameter in the ExcelWriter constructor, e.g.: <code>writer = pandas.ExcelWriter("test.xlsx", engine="xlsxwriter", date_format="ddmmyyyy")</code>. I've slightly modified your sample:</p>
<pre><code>import pandas as pd
import datetime
df = pd.DataFrame()
df['ASOFDATE'] = [datetime.datetime.now() for x in xrange(3)]
for i in xrange(3):
df[i] = [1,2,3]
df6 = df
writer = pd.ExcelWriter('test.xlsx', engine='xlsxwriter', date_format="ddmmyyyy")
workbook = writer.book
df6.to_excel(writer, sheet_name= 'Frogs', startrow= 4, index=False)
worksheet = writer.sheets['Frogs']
format5 = workbook.add_format({'num_format': '0.00%'})
worksheet.set_column('B:B', 18, format5)
worksheet.set_column('C:C', 18, format5)
worksheet.set_column('D:D', 18, format5)
link_format = workbook.add_format({'color': 'black', 'bold': 1})
worksheet.write(0, 0, 'Chart4', link_format)
worksheet.write(1, 0, 'Frogs',link_format)
writer.save()
</code></pre>
<p>This code produces a file which has date values formatted according to the specified template.</p>
| 1 | 2016-09-28T22:55:36Z | [
"python",
"pandas"
]
|
How to serialize a scandir.DirEntry in Python for sending through a network socket? | 39,757,325 | <p>I have server and client programs that communicate with each other through a network socket.</p>
<p>What I want is to send a directory entry (<code>scandir.DirEntry</code>) obtained from <code>scandir.scandir()</code> through the socket.</p>
<p>For now I am using <code>pickle</code> and <code>cPickle</code> modules and have come up with the following (excerpt only):</p>
<pre><code>import scandir, pickle
s = scandir.scandir("D:\\PYTHON")
entry = s.next()
data = pickle.dumps(entry)
</code></pre>
<p>However, I am getting the following error stack:</p>
<pre><code>File "untitled.py", line 5, in <module>
data = pickle.dumps(item)
File "C:\Python27\Lib\pickle.py", line 1374, in dumps
Pickler(file, protocol).dump(obj)
File "C:\Python27\Lib\pickle.py", line 224, in dump
self.save(obj)
File "C:\Python27\Lib\pickle.py", line 306, in save
rv = reduce(self.proto)
File "C:\Python27\Lib\copy_reg.py", line 70, in _reduce_ex
raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle DirEntry objects
</code></pre>
<p>How can I get rid of this error?</p>
<p>I have heard of using <code>marshal</code> or <code>JSON</code>.
<em>UPDATE</em>: <code>JSON</code> is not dumping all the data within the object.</p>
<p>Is there any completely different way to do so to send the object through the socket?</p>
<p>Thanks in advance for any help.</p>
| 0 | 2016-09-28T20:45:07Z | 39,767,389 | <p>Well I myself have figured out that for instances of non-standard classes like this <code>scandir.DirEntry</code>, the best way is to <strong>convert the class member data into a (possibly nested) combination of standard objects</strong> like (<code>list</code>, <code>dict</code>, etc.).</p>
<p>For example, in the particular case of <code>scandir.DirEntry</code>, it can be done as follows.</p>
<pre><code>import scandir, pickle
s = scandir.scandir("D:\\PYTHON")
entry = s.next()
# first convert the stat object to st_
st = entry.stat()
st_ = {'st_mode':st.st_mode, 'st_size':st.st_size,\
'st_atime':st.st_atime, 'st_mtime':st.st_mtime,\
'st_ctime':st.st_ctime}
# now convert the entry object to entry_
entry_ = {'name':entry.name, 'is_dir':entry.is_dir(), \
'path':entry.path, 'stat':st_}
# one may need some other class member data also as necessary
# now pickle the converted entry_
data = pickle.dumps(entry_)
</code></pre>
<p>Although for my purpose I only require the data, after unpickling on the other end one may need to <strong>reconstruct</strong> the unpickled <code>entry_</code> into a <code>scandir.DirEntry</code>-like object 'entry'. <strong>However, I am yet to figure out how to reconstruct the class instance and set the data for the behaviour of methods like</strong> <code>is_dir()</code>, <code>stat()</code>. </p>
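<p>For what it's worth, one way to get method-like behaviour back on the receiving side is a small stand-in class (a sketch only - <code>DirEntryLike</code> is a hypothetical wrapper, not scandir's own type, which can't be instantiated directly):</p>
<pre><code>class DirEntryLike(object):
    def __init__(self, d):
        self.name = d['name']
        self.path = d['path']
        self._is_dir = d['is_dir']
        self._stat = d['stat']  # the st_ dict built above
    def is_dir(self):
        return self._is_dir
    def stat(self):
        return self._stat

entry = DirEntryLike(pickle.loads(data))
</code></pre>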
| 0 | 2016-09-29T10:09:17Z | [
"python",
"sockets",
"serialization",
"scandir"
]
|
Numpy: Computing sum(array[i, a[i]:b[i]]) for all rows i | 39,757,355 | <p>I have a numpy array <code>arr</code> and a list of slice start points <code>start</code> and a list of slice endpoints <code>end</code>. For each row <code>i</code>, I want to determine the sum of the elements from <code>start[i]</code> to <code>end[i]</code>. That is, I want to determine</p>
<pre><code>[np.sum(arr[i, start[i]:end[i]]) for i in range(arr.shape[0])]
</code></pre>
<p>Is there a smarter/faster way to do this using numpy only?</p>
| 1 | 2016-09-28T20:47:02Z | 39,757,459 | <p>Here's a vectorized approach using <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy broadcasting</code></a> and <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow"><code>np.einsum</code></a> -</p>
<pre><code># Create range array corresponding to the length of the no. of cols
r = np.arange(arr.shape[1])
# Mask of ranges corresponding to the start and end indices using broadcasting
mask = (start[:,None] <= r) & (end[:,None] > r)
# Finally, we use the mask to select and sum rows using einsum
out = np.einsum('ij,ij->i',arr,mask)
</code></pre>
| 3 | 2016-09-28T20:53:33Z | [
"python",
"arrays",
"numpy",
"sum",
"slice"
]
|
How can I resample a DataFrame so that it is properly aligned with another DataFrame? | 39,757,437 | <p>I've got several Pandas DataFrames at different time intervals. One is at the daily level:</p>
<pre><code>DatetimeIndex(['2007-12-01', '2007-12-02', '2007-12-03', '2007-12-04',
'2007-12-05', '2007-12-06', '2007-12-07', '2007-12-08',
'2007-12-09', '2007-12-10',
...
'2016-08-22', '2016-08-23', '2016-08-24', '2016-08-25',
'2016-08-26', '2016-08-27', '2016-08-28', '2016-08-29',
'2016-08-30', '2016-08-31'],
dtype='datetime64[ns]', length=3197, freq=None)
</code></pre>
<p>The others are at some non-daily level (they will <em>always</em> be at a lower resolution than daily). For example, this one is weekly:</p>
<pre><code>DatetimeIndex(['2007-01-01', '2007-01-08', '2007-01-15', '2007-01-22',
'2007-01-29', '2007-02-05', '2007-02-12', '2007-02-19',
'2007-02-26', '2007-03-05',
...
'2010-03-08', '2010-03-15', '2010-03-22', '2010-03-29',
'2010-04-05', '2010-04-12', '2010-04-19', '2010-04-26',
'2010-05-03', 'NaT'],
dtype='datetime64[ns]', name='week', length=176, freq=None)
</code></pre>
<p>This one is monthly:</p>
<pre><code>DatetimeIndex(['2013-04-01', '2013-05-01', '2013-06-01', '2013-07-01',
'2013-08-01', '2013-09-01', '2013-10-01', '2013-11-01',
'2013-12-01', '2014-01-01', '2014-02-01', '2014-03-01',
'2014-04-01', '2014-05-01', '2014-06-01', '2014-07-01',
'2014-08-01', '2014-09-01', '2014-10-01', '2014-11-01',
'2014-12-01', '2015-01-01', '2015-02-01', '2015-03-01',
'2015-04-01', '2015-05-01', '2015-06-01', '2015-07-01',
'2015-08-01', '2015-09-01', '2015-10-01', '2015-11-01',
'2015-12-01', '2016-01-01', '2016-02-01', '2016-03-01',
'2016-04-01', '2016-05-01', '2016-06-01', '2016-07-01',
'2016-08-01'],
dtype='datetime64[ns]', name='month', freq=None)
</code></pre>
<p>This is just an oddball with an irregular interval:</p>
<pre><code>DatetimeIndex(['2014-02-14', '2014-05-08', '2014-09-19', '2014-09-24',
'2015-01-21', '2016-05-26', '2016-06-02', '2016-06-04'],
dtype='datetime64[ns]', name='date', freq=None)
</code></pre>
<p>What I need to do is resample (sum) the daily data to the intervals specified by the others. So if a DatetimeIndex is monthly, I need to resample the daily data to monthly. If it's weekly, it should be resampled weekly. If it's irregular, it needs to match. I need this because I'm building statistical models on these data, and I need the ground truth to line up with the observed values.</p>
<p>How can I get Pandas to resample a DataFrame, <code>df1</code>, to match the DatetimeIndex of another arbitrary DataFrame, <code>df2</code>? I've searched around, but I can't figure this out. It seems like it'd be a really common Pandas task, so I must just be missing something. Thanks!</p>
| 0 | 2016-09-28T20:51:51Z | 39,761,232 | <p>Consider using pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html" rel="nofollow">DataFrame.resample()</a>:</p>
<pre><code># EXAMPLE DATA OF SEQUENTIAL DATES AND RANDOM NUMBERS
index = pd.date_range('12/01/2007', periods=3197, freq='D', dtype='datetime64[ns]')
series = pd.Series(np.random.randint(0,100, 3197), index=index)
df = pd.DataFrame({'num':series})
# num
# 2007-12-01 73
# 2007-12-02 17
# 2007-12-03 63
# 2007-12-04 72
# 2007-12-05 4
# 2007-12-06 91
# 2007-12-07 20
# 2007-12-08 99
# 2007-12-09 97
# 2007-12-10 33
wdf = df.resample('W-SAT').sum() # SATURDAY WEEK START
# num
# 2007-12-01 73
# 2007-12-08 366
# 2007-12-15 354
# 2007-12-22 302
# 2007-12-29 310
# 2008-01-05 323
# 2008-01-12 424
mdf = df.resample('MS').sum() # MONTH START
# num
# 2007-12-01 1568
# 2008-01-01 1465
# 2008-02-01 1317
# 2008-03-01 1473
# 2008-04-01 1762
# 2008-05-01 1698
# 2008-06-01 1345
</code></pre>
<p>For the irregular interval, use a custom function in <code>DataFrame.apply()</code> to create an <em>enddt</em> column holding the end cut-off date of the interval the current row's date falls into (i.e., <em>2015-01-01</em>'s end date being <em>2015-01-21</em> in the DatetimeIndex series), calculated using a series filter. Then, run a <code>groupby()</code> on the new <em>enddt</em> column for sum aggregation:</p>
<pre><code>irrdt = pd.DatetimeIndex(['2014-02-14', '2014-05-08', '2014-09-19', '2014-09-24',
'2015-01-21', '2016-05-26', '2016-06-02', '2016-06-04'],
dtype='datetime64[ns]', name='date', freq=None)
def findrng(row):
ed = str(irrdt[irrdt > row['Date']].min())[0:10]
row['enddt'] = ed if ed !='NaT' else str(irrdt.max())[0:10]
return(row)
df['Date'] = df.index
df = df.apply(findrng, axis=1).groupby(['enddt']).sum()
# num
# enddt
# 2014-02-14 112143
# 2014-05-08 3704
# 2014-09-19 5958
# 2014-09-24 365
# 2015-01-21 5730
# 2016-05-26 24126
# 2016-06-02 305
# 2016-06-04 4142
</code></pre>
| 2 | 2016-09-29T04:07:05Z | [
"python",
"python-3.x",
"pandas"
]
|
How to send emails with names in headers from Python | 39,757,504 | <p>I need to send nice-looking emails from Python with address headers that contain names - somehow something that never pops up in tutorials.</p>
<p>I'm using email.mime.text.MIMEText() to create the email, but setting <code>msg['To'] = 'á <[email protected]>'</code> will utf8-encode the whole header value rather than only the name part, which of course fails miserably. How to do this correctly?</p>
<p>I have found a sort-of solution, <a href="http://stackoverflow.com/questions/10551933/python-email-module-form-header-from-with-some-unicode-name-email">Python email module: form header "From" with some unicode name + email</a>, but it feels hard to accept such a hack, since there does seem to be some support for handling this automatically in Python's email package (<code>email.headerregistry</code>), which should be used automatically as far as I can see, but it doesn't happen.</p>
| 1 | 2016-09-28T20:56:23Z | 39,758,812 | <p>You have to use the right policy from <code>email.policy</code> to get the correct behaviour.</p>
<h2>Wrong Policy</h2>
<p><a href="https://docs.python.org/3.5/library/email.message.html#email.message.Message" rel="nofollow"><code>email.message.Message</code></a> will use <code>email.policy.Compat32</code> by default. That one was designed for backward-compatibility wih older Python versions and does the wrong thing:</p>
<pre><code>>>> msg = email.message.Message(policy=email.policy.Compat32())
>>> msg['To'] = 'ššššš <[email protected]>'
>>> msg.as_bytes()
b'To: =?utf-8?b?xaHFocWhxaHFoSA8c3Nzc0BleGFtcGxlLmNvbT4=?=\n\n'
</code></pre>
<h2>Correct Policy</h2>
<p><a href="https://docs.python.org/3.5/library/email.policy.html#email.policy.EmailPolicy" rel="nofollow"><code>email.policy.EmailPolicy</code></a> will do what you want:</p>
<pre><code>>>> msg = email.message.Message(policy=email.policy.EmailPolicy())
>>> msg['To'] = 'ššššš <[email protected]>'
>>> msg.as_bytes()
b'To: =?utf-8?b?xaHFocWhxaHFoQ==?= <[email protected]>\n\n'
</code></pre>
<h2>Python 2.7</h2>
<p>With older Python versions (eg 2.7), you have to use the "hack" as you called it:</p>
<pre><code>>>> msg = email.message.Message()
>>> msg['To'] = email.header.Header(u'ššššš').encode() + ' <[email protected]>'
>>> msg.as_string()
'To: =?utf-8?b?xaHFocWhxaHFoQ==?= <[email protected]>\n\n'
</code></pre>
| 1 | 2016-09-28T22:50:06Z | [
"python",
"python-3.x",
"email"
]
|
Test Requests for Django Rest Framework aren't parsable by its own Request class | 39,757,530 | <p>I'm writing an endpoint to receive and parse <a href="https://developer.github.com/webhooks/#example-delivery" rel="nofollow">GitHub Webhook payloads</a> using Django Rest Framework 3. In order to match the payload specification, I'm writing a payload request factory and testing that it's generating valid requests.</p>
<p>However, the problem comes when trying to test the request generated with DRF's <code>Request</code> class. Here's the smallest failing test I could come up with - the problem is that a request generated with DRF's <code>APIRequestFactory</code> seems to not be parsable by DRF's <code>Request</code> class. Is that expected behaviour?</p>
<pre><code>from rest_framework.request import Request
from rest_framework.parsers import JSONParser
from rest_framework.test import APIRequestFactory, APITestCase
class TestRoundtrip(APITestCase):
def test_round_trip(self):
"""
A DRF Request can be loaded into a DRF Request object
"""
request_factory = APIRequestFactory()
request = request_factory.post(
'/',
data={'hello': 'world'},
format='json',
)
result = Request(request, parsers=(JSONParser,))
self.assertEqual(result.data['hello'], 'world')
</code></pre>
<p>And the stack trace is:</p>
<pre><code>E
======================================================================
ERROR: A DRF Request can be loaded into a DRF Request object
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/request.py", line 380, in __getattribute__
return getattr(self._request, attr)
AttributeError: 'WSGIRequest' object has no attribute 'data'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/james/active/prlint/prlint/github/tests/test_payload_factories/test_roundtrip.py", line 22, in test_round_trip
self.assertEqual(result.data['hello'], 'world')
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/request.py", line 382, in __getattribute__
six.reraise(info[0], info[1], info[2].tb_next)
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/request.py", line 186, in data
self._load_data_and_files()
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/request.py", line 246, in _load_data_and_files
self._data, self._files = self._parse()
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/request.py", line 312, in _parse
parsed = parser.parse(stream, media_type, self.parser_context)
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/parsers.py", line 64, in parse
data = stream.read().decode(encoding)
AttributeError: 'str' object has no attribute 'read'
----------------------------------------------------------------------
</code></pre>
<p>I'm obviously doing something stupid - I've messed around with encodings... realised that I needed to pass the parsers list to the <code>Request</code> to avoid the <code>UnsupportedMediaType</code> error, and now I'm stuck here.</p>
<p>Should I do something different? Maybe avoid using <code>APIRequestFactory</code>? Or test my built GitHub requests a different way?</p>
<hr>
<h2>More info</h2>
<p>GitHub sends a request out to registered webhooks that has an <code>X-GitHub-Event</code> header, and therefore, in order to test my webhook DRF code, I need to be able to emulate this header at test time.</p>
<p>My path to succeeding with this has been to build a custom Request and load a payload using a factory into it. This is my factory code:</p>
<pre><code>def PayloadRequestFactory():
"""
Build a Request, configure it to look like a webhook payload from GitHub.
"""
request_factory = APIRequestFactory()
request = request_factory.post(url, data=PingPayloadFactory())
request.META['HTTP_X_GITHUB_EVENT'] = 'ping'
return request
</code></pre>
<p>The issue has arisen because I want to assert that <code>PayloadRequestFactory</code> is generating valid requests for various passed arguments - so I'm trying to parse them and assert their validity but DRF's <code>Request</code> class doesn't seem to be able to achieve this - hence my question with a failing test.</p>
<p>So really my question is - how should I test this <code>PayloadRequestFactory</code> is generating the kind of request that I need?</p>
| 1 | 2016-09-28T20:58:10Z | 39,757,885 | <p>"Yo dawg, I heard you like Request, cos' you put a Request inside a Request" XD</p>
<p>I'd do it like this:</p>
<pre><code>from rest_framework.test import APIClient
client = APIClient()
response = client.post('/', {'github': 'payload'}, format='json')
self.assertEqual(response.data, {'github': 'payload'})
# ...or assert something was called, etc.
</code></pre>
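<p>And since the factory in the question also sets the <code>X-GitHub-Event</code> header, the test client can forward it the same way (Django's test client turns extra <code>HTTP_*</code> keyword arguments into <code>request.META</code> entries):</p>
<pre><code>response = client.post('/', {'github': 'payload'}, format='json',
                       HTTP_X_GITHUB_EVENT='ping')
</code></pre>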
<p>Hope this helps</p>
| 1 | 2016-09-28T21:22:02Z | [
"python",
"unit-testing",
"django-rest-framework"
]
|
Test Requests for Django Rest Framework aren't parsable by its own Request class | 39,757,530 | <p>I'm writing an endpoint to receive and parse <a href="https://developer.github.com/webhooks/#example-delivery" rel="nofollow">GitHub Webhook payloads</a> using Django Rest Framework 3. In order to match the payload specification, I'm writing a payload request factory and testing that it's generating valid requests.</p>
<p>However, the problem comes when trying to test the request generated with DRF's <code>Request</code> class. Here's the smallest failing test I could come up with - the problem is that a request generated with DRF's <code>APIRequestFactory</code> seems to not be parsable by DRF's <code>Request</code> class. Is that expected behaviour?</p>
<pre><code>from rest_framework.request import Request
from rest_framework.parsers import JSONParser
from rest_framework.test import APIRequestFactory, APITestCase
class TestRoundtrip(APITestCase):
def test_round_trip(self):
"""
A DRF Request can be loaded into a DRF Request object
"""
request_factory = APIRequestFactory()
request = request_factory.post(
'/',
data={'hello': 'world'},
format='json',
)
result = Request(request, parsers=(JSONParser,))
self.assertEqual(result.data['hello'], 'world')
</code></pre>
<p>And the stack trace is:</p>
<pre><code>E
======================================================================
ERROR: A DRF Request can be loaded into a DRF Request object
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/request.py", line 380, in __getattribute__
return getattr(self._request, attr)
AttributeError: 'WSGIRequest' object has no attribute 'data'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/james/active/prlint/prlint/github/tests/test_payload_factories/test_roundtrip.py", line 22, in test_round_trip
self.assertEqual(result.data['hello'], 'world')
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/request.py", line 382, in __getattribute__
six.reraise(info[0], info[1], info[2].tb_next)
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/request.py", line 186, in data
self._load_data_and_files()
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/request.py", line 246, in _load_data_and_files
self._data, self._files = self._parse()
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/request.py", line 312, in _parse
parsed = parser.parse(stream, media_type, self.parser_context)
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/parsers.py", line 64, in parse
data = stream.read().decode(encoding)
AttributeError: 'str' object has no attribute 'read'
----------------------------------------------------------------------
</code></pre>
<p>I'm obviously doing something stupid - I've messed around with encodings... realised that I needed to pass the parsers list to the <code>Request</code> to avoid the <code>UnsupportedMediaType</code> error, and now I'm stuck here.</p>
<p>Should I do something different? Maybe avoid using <code>APIRequestFactory</code>? Or test my built GitHub requests a different way?</p>
<hr>
<h2>More info</h2>
<p>GitHub sends a request out to registered webhooks that has an <code>X-GitHub-Event</code> header, and therefore, in order to test my webhook DRF code, I need to be able to emulate this header at test time.</p>
<p>My path to succeeding with this has been to build a custom Request and load a payload using a factory into it. This is my factory code:</p>
<pre><code>def PayloadRequestFactory():
"""
Build a Request, configure it to look like a webhook payload from GitHub.
"""
request_factory = APIRequestFactory()
request = request_factory.post(url, data=PingPayloadFactory())
request.META['HTTP_X_GITHUB_EVENT'] = 'ping'
return request
</code></pre>
<p>The issue has arisen because I want to assert that <code>PayloadRequestFactory</code> is generating valid requests for various passed arguments - so I'm trying to parse them and assert their validity but DRF's <code>Request</code> class doesn't seem to be able to achieve this - hence my question with a failing test.</p>
<p>So really my question is - how should I test this <code>PayloadRequestFactory</code> is generating the kind of request that I need?</p>
| 1 | 2016-09-28T20:58:10Z | 39,770,829 | <p>Looking at the <a href="https://github.com/tomchristie/django-rest-framework/blob/master/tests/test_testing.py" rel="nofollow">tests for <code>APIRequestFactory</code> in
DRF</a>, <a href="https://github.com/tomchristie/django-rest-framework/blob/master/tests/test_testing.py#L18-L23" rel="nofollow">stub
views</a>
are created and requests are then run through them - the output is inspected for expected results.
Therefore a reasonable, but slightly long, solution is to copy this strategy to
assert that the <code>PayloadRequestFactory</code> is building valid requests, before then
pointing that at a full view.</p>
<p>The test above becomes:</p>
<pre><code>from django.conf.urls import url
from django.test import TestCase, override_settings
from rest_framework.decorators import api_view
from rest_framework.response import Response
from rest_framework.test import APIRequestFactory
@api_view(['POST'])
def view(request):
"""
Testing stub view to return Request's data and GitHub event header.
"""
return Response({
'header_github_event': request.META.get('HTTP_X_GITHUB_EVENT', ''),
'request_data': request.data,
})
urlpatterns = [
url(r'^view/$', view),
]
@override_settings(ROOT_URLCONF='github.tests.test_payload_factories.test_roundtrip')
class TestRoundtrip(TestCase):
def test_round_trip(self):
"""
A DRF Request can be loaded via stub view
"""
request_factory = APIRequestFactory()
request = request_factory.post(
'/view/',
data={'hello': 'world'},
format='json',
)
result = view(request)
self.assertEqual(result.data['request_data'], {'hello': 'world'})
self.assertEqual(result.data['header_github_event'], '')
</code></pre>
<p>Which passes :D</p>
| 0 | 2016-09-29T12:50:30Z | [
"python",
"unit-testing",
"django-rest-framework"
]
|
Working with floating point NumPy arrays for comparison and related operations | 39,757,559 | <p>I have an array of random floats and I need to compare it to another one that has the same values in a different order. For that matter I use the sum, product (and other combinations depending on the dimension of the table hence the number of equations needed).</p>
<p>Nevertheless, I encountered a precision issue when I perform the sum (or product) on the array depending on the order of the values. </p>
<p>Here is a simple standalone example to illustrate this issue :</p>
<pre><code>import numpy as np
n = 10
m = 4
tag = np.random.rand(n, m)
s1 = np.sum(tag, axis=1)
s2 = np.sum(tag[:, ::-1], axis=1)
# print the number of times s1 is not equal to s2 (should be 0)
print np.nonzero(s1 != s2)[0].shape[0]
</code></pre>
<p>If you execute this code it sometimes tells you that <code>s1</code> and <code>s2</code> are not equal and the difference is of the magnitude of the computer precision.</p>
<p>The problem is I need to use those in functions like <code>np.in1d</code> where I can't really give a tolerance...</p>
<p>Is there a way to avoid this issue?</p>
| 1 | 2016-09-28T20:59:39Z | 39,758,154 | <p>For the listed code, you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.isclose.html" rel="nofollow"><code>np.isclose</code></a>, and with it tolerance values can be specified too.</p>
<p>Using the provided sample, let's see how it could be used -</p>
<pre><code>In [201]: n = 10
...: m = 4
...:
...: tag = np.random.rand(n, m)
...:
...: s1 = np.sum(tag, axis=1)
...: s2 = np.sum(tag[:, ::-1], axis=1)
...:
In [202]: np.nonzero(s1 != s2)[0].shape[0]
Out[202]: 4
In [203]: (~np.isclose(s1,s2)).sum() # So, all matches!
Out[203]: 0
</code></pre>
<p>To make use of tolerance values in other scenarios, we need to work on a case-by-case basis. So, let's say for an implementation that involves elementwise comparison like in <code>np.in1d</code>, we can bring in <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>broadcasting</code></a> to do those elementwise equality checks for all elements in the first input against all elements in the second one. Then, we use <code>np.abs</code> to get the "closeness factor" and finally compare against the input tolerance to decide the matches. As needed to simulate <code>np.in1d</code>, we do an ANY operation along one of the axes. Thus, <code>np.in1d</code> with tolerance using <code>broadcasting</code> could be implemented like so -</p>
<pre><code>def in1d_with_tolerance(A,B,tol=1e-05):
return (np.abs(A[:,None] - B) < tol).any(1)
</code></pre>
<p>As suggested in the comments by the OP, we can also round floating-point numbers after scaling them up; this should be memory efficient, which matters when working with large arrays. So, a modified version would be like so -</p>
<pre><code>def in1d_with_tolerance_v2(A,B,tol=1e-05):
S = round(1/tol)
return np.in1d(np.around(A*S).astype(int),np.around(B*S).astype(int))
</code></pre>
<p>Sample run -</p>
<pre><code>In [372]: A = np.random.rand(5)
...: B = np.random.rand(7)
...: B[3] = A[1] + 0.0000008
...: B[6] = A[4] - 0.0000007
...:
In [373]: np.in1d(A,B) # Not the result we want!
Out[373]: array([False, False, False, False, False], dtype=bool)
In [374]: in1d_with_tolerance(A,B)
Out[374]: array([False, True, False, False, True], dtype=bool)
In [375]: in1d_with_tolerance_v2(A,B)
Out[375]: array([False, True, False, False, True], dtype=bool)
</code></pre>
<p>Finally, on how to make it work for other implementations and use cases - It would depend on the implementation itself. But for most cases, <code>np.isclose</code> and <code>broadcasting</code> should help.</p>
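<p>For instance, for the original order-independent comparison, a quick sketch (assuming the goal is only to check that two arrays hold the same values in a different order) is to sort both arrays first and then compare with a tolerance:</p>
<pre><code>import numpy as np

a = np.random.rand(10)
b = a[::-1]  # same values, different order

# sorting makes the comparison order-independent; allclose adds the tolerance
print(np.allclose(np.sort(a), np.sort(b)))  # True
</code></pre>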
| 2 | 2016-09-28T21:43:27Z | [
"python",
"arrays",
"numpy",
"random",
"precision"
]
|
Use regex to match 5 num, dot, 1 num, dot, 5 num | 39,757,573 | <p>I'm trying to create regex to match the following pattern:</p>
<pre><code>00000.1.17372
</code></pre>
<p>i.e: <code>5 Numbers DOT 1 Number DOT 5 Numbers</code></p>
<p>I have tried the following re.match:</p>
<pre><code>find = re.match('d{5}.d{1}.d{5}', string)
</code></pre>
<p>In context:</p>
<pre><code>import re
string = "{u'blabla': u'asdf', u'dd': u'a', u'cotry': u'jjK', u'l': u'/q/iii:00000.1.17372', u'stfe': u'', u'fdfhdiufhi': u'GB', u'y_name': u'Unitm', u'mw': u'00000.1.17372'}"
find = re.match('d{5}.d{1}.d{5}', string)
print find
</code></pre>
<p>However, this doesn't seem to work, as the output is:</p>
<pre><code>None
</code></pre>
| 0 | 2016-09-28T21:00:18Z | 39,757,619 | <p>Use the following with <code>re.findall</code>:</p>
<pre><code>r'\b\d{5}\.\d\.\d{5}\b'
</code></pre>
<p>See the <a href="https://regex101.com/r/ExVBOp/1" rel="nofollow">regex demo</a></p>
<p>The point is:</p>
<ul>
<li>to match a digit, you need to use <code>\d</code></li>
<li>a dot must be escaped to match a literal dot</li>
<li>to match whole words, you need to use the <code>\b</code> word boundary, or you will match 5-digit chunks inside longer digit runs like <code>2234567654</code></li>
<li><code>re.findall</code> will return a list of all non-overlapping matches (since there are no capturing groups in this pattern) </li>
</ul>
<p>Sample Python code:</p>
<pre><code>import re
regex = r"\b\d{5}\.\d\.\d{5}\b"
test_str = "{u'blabla': u'asdf', u'dd': u'a', u'cotry': u'jjK', u'l': u'/q/iii:00000.1.17372', u'stfe': u'', u'fdfhdiufhi': u'GB', u'y_name': u'Unitm', u'mw': u'00000.1.17372'}"
matches = re.findall(regex, test_str)
print(matches)
</code></pre>
| 2 | 2016-09-28T21:03:21Z | [
"python",
"regex",
"python-2.7"
]
|
Use regex to match 5 num, dot, 1 num, dot, 5 num | 39,757,573 | <p>I'm trying to create regex to match the following pattern:</p>
<pre><code>00000.1.17372
</code></pre>
<p>i.e: <code>5 Numbers DOT 1 Number DOT 5 Numbers</code></p>
<p>I have tried the following re.match:</p>
<pre><code>find = re.match('d{5}.d{1}.d{5}', string)
</code></pre>
<p>In context:</p>
<pre><code>import re
string = "{u'blabla': u'asdf', u'dd': u'a', u'cotry': u'jjK', u'l': u'/q/iii:00000.1.17372', u'stfe': u'', u'fdfhdiufhi': u'GB', u'y_name': u'Unitm', u'mw': u'00000.1.17372'}"
find = re.match('d{5}.d{1}.d{5}', string)
print find
</code></pre>
<p>However, this doesn't seem to work, as the output is:</p>
<pre><code>None
</code></pre>
| 0 | 2016-09-28T21:00:18Z | 39,757,626 | <p>The pattern you want is:</p>
<pre><code>\d{5}\.\d\.\d{5}
</code></pre>
<p>You need to escape the dots and use the proper token for a number, which is <code>\d</code>.</p>
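<p>Also note that <code>re.match</code> only matches at the <em>beginning</em> of the string, so even with the corrected pattern it would return <code>None</code> on the sample above; use <code>re.search</code> (first match) or <code>re.findall</code> (all matches) instead. A short sketch:</p>
<pre><code>import re

string = "{u'l': u'/q/iii:00000.1.17372', u'mw': u'00000.1.17372'}"
match = re.search(r'\d{5}\.\d\.\d{5}', string)
if match:
    print(match.group())  # 00000.1.17372
</code></pre>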
| 0 | 2016-09-28T21:03:42Z | [
"python",
"regex",
"python-2.7"
]
|
What is causing 'unicode' object has no attribute 'toordinal' in pyspark? | 39,757,591 | <p>I got this error but I don't know what causes it. My Python code ran in pyspark. The stack trace is long and I just show some of it. None of the stack trace shows my code, so I don't know where to look. What could be the cause of this error?</p>
<pre><code>/usr/hdp/2.4.2.0-258/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
306 raise Py4JJavaError(
307 "An error occurred while calling {0}{1}{2}.\n".
--> 308 format(target_id, ".", name), value)
309 else:
310 raise Py4JError(
Py4JJavaError: An error occurred while calling o107.parquet.
...
File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/sql/types.py", line 435, in toInternal
return self.dataType.toInternal(obj)
File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/sql/types.py", line 172, in toInternal
return d.toordinal() - self.EPOCH_ORDINAL
AttributeError: 'unicode' object has no attribute 'toordinal'
</code></pre>
<p>Thanks,</p>
 | 0 | 2016-09-28T21:01:39Z | 39,757,681 | <p>The specific exception is caused by trying to store a <code>unicode</code> value in a <em>date</em> datatype that is part of a struct. The conversion of the Python type to Spark's internal representation expects to be able to call the <a href="https://docs.python.org/3/library/datetime.html#datetime.date.toordinal" rel="nofollow"><code>date.toordinal()</code></a> method.</p>
<p>Presumably you have a dataframe schema somewhere that consists of a struct type with a date field, and something tried to stuff a string into that.</p>
<p>You can trace this based on the traceback you <em>do</em> have. The <a href="https://github.com/apache/spark" rel="nofollow">Apache Spark source code</a> is hosted on GitHub, and your traceback points to the <a href="https://github.com/apache/spark/blob/master/python/pyspark/sql/types.py" rel="nofollow"><code>pyspark/sql/types.py</code> file</a>. The lines point to the <a href="https://github.com/apache/spark/blob/master/python/pyspark/sql/types.py#L435-L436" rel="nofollow"><code>StructField.toInternal()</code> method</a>, which delegates to the <code>self.dataType.toInternal()</code> method:</p>
<pre><code>class StructField(DataType):
# ...
def toInternal(self, obj):
return self.dataType.toInternal(obj)
</code></pre>
<p>which in your traceback ends up at the <a href="https://github.com/apache/spark/blob/master/python/pyspark/sql/types.py#L170-L172" rel="nofollow"><code>DateType.toInternal()</code> method</a>:</p>
<pre><code>class DateType(AtomicType):
# ...
def toInternal(self, d):
if d is not None:
return d.toordinal() - self.EPOCH_ORDINAL
</code></pre>
<p>So we know this is about a date field in a struct. The <code>DateType.fromInternal()</code> method shows you what Python type is produced in the opposite direction:</p>
<pre><code>def fromInternal(self, v):
if v is not None:
return datetime.date.fromordinal(v + self.EPOCH_ORDINAL)
</code></pre>
<p>It is safe to assume that <code>toInternal()</code> expects the same type when converting in the other direction.</p>
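<p>A minimal sketch of a fix, assuming the offending value arrives as a string like <code>u'2016-09-28'</code> (the schema, column names and date format here are made up, and <code>sqlContext</code> is assumed to exist as in the pyspark shell):</p>
<pre><code>from datetime import datetime
from pyspark.sql.types import StructType, StructField, StringType, DateType

schema = StructType([
    StructField('name', StringType()),
    StructField('when', DateType()),  # the field that triggered the error
])

def parse_date(s):
    # DateType.toInternal() needs a datetime.date, not a unicode string
    return datetime.strptime(s, '%Y-%m-%d').date() if s is not None else None

rows = [(u'foo', parse_date(u'2016-09-28'))]
df = sqlContext.createDataFrame(rows, schema)
df.write.parquet('/tmp/out.parquet')
</code></pre>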
| 1 | 2016-09-28T21:07:37Z | [
"python",
"pyspark"
]
|
Write multiple rows from dict using csv | 39,757,609 | <p><strong>Update</strong>: I do not want to use <code>pandas</code> because I have a list of dict's and want to write each one to disk as they come in (part of webscraping workflow).</p>
<p>I have a dict that I'd like to write to a csv file. I've come up with a solution, but I'd like to know if there's a more <code>pythonic</code> solution available. Here's what I envisioned (but doesn't work):</p>
<pre><code>import csv
test_dict = {"review_id": [1, 2, 3, 4],
"text": [5, 6, 7, 8]}
with open('test.csv', 'w') as csvfile:
fieldnames = ["review_id", "text"]
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(test_dict)
</code></pre>
<p>Which would ideally result in:</p>
<pre><code>review_id text
1 5
2 6
3 7
4 8
</code></pre>
<p>The code above doesn't seem to work the way I'd expect it to and throws a value error. So, I've turned to the following solution (which does work, but seems verbose).</p>
<pre><code>with open('test.csv', 'w') as csvfile:
fieldnames = ["review_id", "text"]
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
response = test_dict
cells = [{x: {key: val}} for key, vals in response.items()
for x, val in enumerate(vals)]
rows = {}
for d in cells:
for key, val in d.items():
if key in rows:
rows[key].update(d.get(key, None))
else:
rows[key] = d.get(key, None)
for row in [val for _, val in rows.items()]:
writer.writerow(row)
</code></pre>
<p>Again, to reiterate what I'm looking for: the block of code directly above works (i.e., produces the desired result mentioned early in the post), but seems verbose. So, is there a more <code>pythonic</code> solution?</p>
<p>Thanks!</p>
| 0 | 2016-09-28T21:02:44Z | 39,757,688 | <p>If you don't mind using a 3rd-party package, you could do it with <a href="http://pandas.pydata.org/" rel="nofollow"><code>pandas</code></a>.</p>
<pre><code>import pandas as pd
pd.DataFrame(test_dict).to_csv('test.csv', index=False)
</code></pre>
<p><strong>update</strong></p>
<p>So, you have several dictionaries and all of them seem to come from a scraping routine.</p>
<pre><code>import pandas as pd
test_dict = {"review_id": [1, 2, 3, 4],
"text": [5, 6, 7, 8]}
pd.DataFrame(test_dict).to_csv('test.csv', index=False)
list_of_dicts = [test_dict, test_dict]
for d in list_of_dicts:
pd.DataFrame(d).to_csv('test.csv', index=False, mode='a', header=False)
</code></pre>
<p>This time, you would be appending to the file and without the header.</p>
<p>The output is:</p>
<pre><code>review_id,text
1,5
2,6
3,7
4,8
1,5
2,6
3,7
4,8
1,5
2,6
3,7
4,8
</code></pre>
| 0 | 2016-09-28T21:08:30Z | [
"python",
"csv"
]
|
Write multiple rows from dict using csv | 39,757,609 | <p><strong>Update</strong>: I do not want to use <code>pandas</code> because I have a list of dict's and want to write each one to disk as they come in (part of webscraping workflow).</p>
<p>I have a dict that I'd like to write to a csv file. I've come up with a solution, but I'd like to know if there's a more <code>pythonic</code> solution available. Here's what I envisioned (but doesn't work):</p>
<pre><code>import csv
test_dict = {"review_id": [1, 2, 3, 4],
"text": [5, 6, 7, 8]}
with open('test.csv', 'w') as csvfile:
fieldnames = ["review_id", "text"]
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(test_dict)
</code></pre>
<p>Which would ideally result in:</p>
<pre><code>review_id text
1 5
2 6
3 7
4 8
</code></pre>
<p>The code above doesn't seem to work the way I'd expect it to and throws a value error. So, I've turned to the following solution (which does work, but seems verbose).</p>
<pre><code>with open('test.csv', 'w') as csvfile:
fieldnames = ["review_id", "text"]
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
response = test_dict
cells = [{x: {key: val}} for key, vals in response.items()
for x, val in enumerate(vals)]
rows = {}
for d in cells:
for key, val in d.items():
if key in rows:
rows[key].update(d.get(key, None))
else:
rows[key] = d.get(key, None)
for row in [val for _, val in rows.items()]:
writer.writerow(row)
</code></pre>
<p>Again, to reiterate what I'm looking for: the block of code directly above works (i.e., produces the desired result mentioned early in the post), but seems verbose. So, is there a more <code>pythonic</code> solution?</p>
<p>Thanks!</p>
 | 0 | 2016-09-28T21:02:44Z | 39,757,714 | <p>Try using Python's pandas library.</p>
<p>Here is a simple example</p>
<pre><code>import pandas as pd
test_dict = {"review_id": [1, 2, 3, 4],
"text": [5, 6, 7, 8]}
d1 = pd.DataFrame(test_dict)
d1.to_csv("output.csv")
</code></pre>
<p>Cheers</p>
| 0 | 2016-09-28T21:10:24Z | [
"python",
"csv"
]
|
Write multiple rows from dict using csv | 39,757,609 | <p><strong>Update</strong>: I do not want to use <code>pandas</code> because I have a list of dict's and want to write each one to disk as they come in (part of webscraping workflow).</p>
<p>I have a dict that I'd like to write to a csv file. I've come up with a solution, but I'd like to know if there's a more <code>pythonic</code> solution available. Here's what I envisioned (but doesn't work):</p>
<pre><code>import csv
test_dict = {"review_id": [1, 2, 3, 4],
"text": [5, 6, 7, 8]}
with open('test.csv', 'w') as csvfile:
fieldnames = ["review_id", "text"]
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(test_dict)
</code></pre>
<p>Which would ideally result in:</p>
<pre><code>review_id text
1 5
2 6
3 7
4 8
</code></pre>
<p>The code above doesn't seem to work the way I'd expect it to and throws a value error. So, I've turned to the following solution (which does work, but seems verbose).</p>
<pre><code>with open('test.csv', 'w') as csvfile:
fieldnames = ["review_id", "text"]
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
response = test_dict
cells = [{x: {key: val}} for key, vals in response.items()
for x, val in enumerate(vals)]
rows = {}
for d in cells:
for key, val in d.items():
if key in rows:
rows[key].update(d.get(key, None))
else:
rows[key] = d.get(key, None)
for row in [val for _, val in rows.items()]:
writer.writerow(row)
</code></pre>
<p>Again, to reiterate what I'm looking for: the block of code directly above works (i.e., produces the desired result mentioned early in the post), but seems verbose. So, is there a more <code>pythonic</code> solution?</p>
<p>Thanks!</p>
| 0 | 2016-09-28T21:02:44Z | 39,757,893 | <p>Your first example will work with minor edits. <code>DictWriter</code> expects a <code>list</code> of <code>dict</code>s rather than a <code>dict</code> of <code>list</code>s. Assuming you can't change the format of the <code>test_dict</code>:</p>
<pre><code>import csv
test_dict = {"review_id": [1, 2, 3, 4],
"text": [5, 6, 7, 8]}
def convert_dict(mydict, numentries):
data = []
for i in range(numentries):
row = {}
        for k, l in mydict.items():  # items() works on both Python 2 and 3
row[k] = l[i]
data.append(row)
return data
with open('test.csv', 'w') as csvfile:
fieldnames = ["review_id", "text"]
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(convert_dict(test_dict, 4))
</code></pre>
| 0 | 2016-09-28T21:22:52Z | [
"python",
"csv"
]
|
Write multiple rows from dict using csv | 39,757,609 | <p><strong>Update</strong>: I do not want to use <code>pandas</code> because I have a list of dict's and want to write each one to disk as they come in (part of webscraping workflow).</p>
<p>I have a dict that I'd like to write to a csv file. I've come up with a solution, but I'd like to know if there's a more <code>pythonic</code> solution available. Here's what I envisioned (but doesn't work):</p>
<pre><code>import csv
test_dict = {"review_id": [1, 2, 3, 4],
"text": [5, 6, 7, 8]}
with open('test.csv', 'w') as csvfile:
fieldnames = ["review_id", "text"]
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(test_dict)
</code></pre>
<p>Which would ideally result in:</p>
<pre><code>review_id text
1 5
2 6
3 7
4 8
</code></pre>
<p>The code above doesn't seem to work the way I'd expect it to and throws a value error. So, I've turned to the following solution (which does work, but seems verbose).</p>
<pre><code>with open('test.csv', 'w') as csvfile:
fieldnames = ["review_id", "text"]
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
response = test_dict
cells = [{x: {key: val}} for key, vals in response.items()
for x, val in enumerate(vals)]
rows = {}
for d in cells:
for key, val in d.items():
if key in rows:
rows[key].update(d.get(key, None))
else:
rows[key] = d.get(key, None)
for row in [val for _, val in rows.items()]:
writer.writerow(row)
</code></pre>
<p>Again, to reiterate what I'm looking for: the block of code directly above works (i.e., produces the desired result mentioned early in the post), but seems verbose. So, is there a more <code>pythonic</code> solution?</p>
<p>Thanks!</p>
| 0 | 2016-09-28T21:02:44Z | 39,757,896 | <p>The built-in <a href="https://docs.python.org/2/library/functions.html#zip" rel="nofollow"><code>zip</code> function</a> can join together different iterables into tuples which can be passed to <code>writerows</code>. Try this as the last line:</p>
<pre><code>writer.writerows(zip(test_dict["review_id"], test_dict["text"]))
</code></pre>
<p>You can see what it's doing by making a list:</p>
<pre><code>>>> list(zip(test_dict["review_id"], test_dict["text"]))
[(1, 5), (2, 6), (3, 7), (4, 8)]
</code></pre>
<p><strong>Edit</strong>: In this particular case, you probably want a regular <a href="https://docs.python.org/2/library/csv.html#writer-objects" rel="nofollow">csv.Writer</a>, since what you effectively have is now a list.</p>
| 0 | 2016-09-28T21:22:57Z | [
"python",
"csv"
]
|
Write multiple rows from dict using csv | 39,757,609 | <p><strong>Update</strong>: I do not want to use <code>pandas</code> because I have a list of dict's and want to write each one to disk as they come in (part of webscraping workflow).</p>
<p>I have a dict that I'd like to write to a csv file. I've come up with a solution, but I'd like to know if there's a more <code>pythonic</code> solution available. Here's what I envisioned (but doesn't work):</p>
<pre><code>import csv
test_dict = {"review_id": [1, 2, 3, 4],
"text": [5, 6, 7, 8]}
with open('test.csv', 'w') as csvfile:
fieldnames = ["review_id", "text"]
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(test_dict)
</code></pre>
<p>Which would ideally result in:</p>
<pre><code>review_id text
1 5
2 6
3 7
4 8
</code></pre>
<p>The code above doesn't seem to work the way I'd expect it to and throws a value error. So, I've turned to the following solution (which does work, but seems verbose).</p>
<pre><code>with open('test.csv', 'w') as csvfile:
fieldnames = ["review_id", "text"]
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
response = test_dict
cells = [{x: {key: val}} for key, vals in response.items()
for x, val in enumerate(vals)]
rows = {}
for d in cells:
for key, val in d.items():
if key in rows:
rows[key].update(d.get(key, None))
else:
rows[key] = d.get(key, None)
for row in [val for _, val in rows.items()]:
writer.writerow(row)
</code></pre>
<p>Again, to reiterate what I'm looking for: the block of code directly above works (i.e., produces the desired result mentioned early in the post), but seems verbose. So, is there a more <code>pythonic</code> solution?</p>
<p>Thanks!</p>
| 0 | 2016-09-28T21:02:44Z | 39,758,030 | <p>The problem is that with <code>DictWriter.writerows()</code> you are forced to have a dict for each row. Instead you can simply add the values changing your csv creation:</p>
<pre><code>with open('test.csv', 'w') as csvfile:
fieldnames = test_dict.keys()
fieldvalues = zip(*test_dict.values())
writer = csv.writer(csvfile)
writer.writerow(fieldnames)
writer.writerows(fieldvalues)
</code></pre>
| 0 | 2016-09-28T21:33:24Z | [
"python",
"csv"
]
|
Write multiple rows from dict using csv | 39,757,609 | <p><strong>Update</strong>: I do not want to use <code>pandas</code> because I have a list of dict's and want to write each one to disk as they come in (part of webscraping workflow).</p>
<p>I have a dict that I'd like to write to a csv file. I've come up with a solution, but I'd like to know if there's a more <code>pythonic</code> solution available. Here's what I envisioned (but doesn't work):</p>
<pre><code>import csv
test_dict = {"review_id": [1, 2, 3, 4],
"text": [5, 6, 7, 8]}
with open('test.csv', 'w') as csvfile:
fieldnames = ["review_id", "text"]
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(test_dict)
</code></pre>
<p>Which would ideally result in:</p>
<pre><code>review_id text
1 5
2 6
3 7
4 8
</code></pre>
<p>The code above doesn't seem to work the way I'd expect it to and throws a value error. So, I've turned to the following solution (which does work, but seems verbose).</p>
<pre><code>with open('test.csv', 'w') as csvfile:
fieldnames = ["review_id", "text"]
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
response = test_dict
cells = [{x: {key: val}} for key, vals in response.items()
for x, val in enumerate(vals)]
rows = {}
for d in cells:
for key, val in d.items():
if key in rows:
rows[key].update(d.get(key, None))
else:
rows[key] = d.get(key, None)
for row in [val for _, val in rows.items()]:
writer.writerow(row)
</code></pre>
<p>Again, to reiterate what I'm looking for: the block of code directly above works (i.e., produces the desired result mentioned early in the post), but seems verbose. So, is there a more <code>pythonic</code> solution?</p>
<p>Thanks!</p>
| 0 | 2016-09-28T21:02:44Z | 39,758,347 | <p>You have two different problems in your question:</p>
<ol>
<li>Create a csv file from a dictionary where the values are containers and not primitives.</li>
</ol>
<p>For the first problem, the solution is generally to transform the container type into a primitive type. The most common method is creating a json-string. So for example:</p>
<pre><code>>>> import json
>>> x = [2, 4, 6, 8, 10]
>>> json_string = json.dumps(x)
>>> json_string
'[2, 4, 6, 8, 10]'
</code></pre>
<p>So your data conversion might look like:</p>
<pre><code>import json
def convert(datadict):
'''Generator which converts a dictionary of containers into a dictionary of json-strings.
args:
datadict(dict): dictionary which needs conversion
yield:
tuple: key and string
'''
for key, value in datadict.items():
yield key, json.dumps(value)
def dump_to_csv_using_dict(datadict, fields=None, filepath=None, delimiter=None):
'''Dumps a datadict value into csv
args:
datadict(list): list of dictionaries to dump
fieldnames(list): field sequence to use from the dictionary [default: sorted(datadict.keys())]
filepath(str): filepath to save to [default: 'tmp.csv']
delimiter(str): delimiter to use in csv [default: '|']
'''
    fieldnames = sorted(datadict[0].keys()) if fields is None else fields  # datadict is a list of dicts
filepath = 'tmp.csv' if filepath is None else filepath
delimiter = '|' if not delimiter else delimiter
with open(filepath, 'w') as csvfile:
writer = csv.DictWriter(csvfile, fieldnames, restval='', extrasaction='ignore', delimiter=delimiter)
writer.writeheader()
for each_dict in datadict:
writer.writerow(each_dict)
</code></pre>
<p>So the naive conversion looks like this:</p>
<pre><code># Conversion code
test_data = {
"review_id": [1, 2, 3, 4],
"text": [5, 6, 7, 8]}
}
converted_data = dict(convert(test_data))
data_list = [converted_data]
dump_to_csv_using_dict(data_list)
</code></pre>
<ol start="2">
<li>Create a final value that is actually some sort of a merging of two disparate data sets.</li>
</ol>
<p>To do this, you need to find a way to combine data from different keys. This is not an easy problem to generically solve.</p>
<p>That said, it's easy to combine two lists with zip.</p>
<pre><code>>>> x = [2, 4, 6]
>>> y = [1, 3, 5]
>>> zip(y, x)
[(1, 2), (3, 4), (5, 6)]
</code></pre>
<p>In addition, in the event that your lists are not the same size, python's itertools package provides a method, izip_longest, which will yield back the full zip even if one list is shorter than another. Note izip_longest returns a generator.</p>
<pre><code>from itertools import izip_longest
>>> x = [2, 4]
>>> y = [1, 3, 5]
>>> z = izip_longest(y, x, fillvalue=None) # default fillvalue is None
>>> list(z) # z is a generator
[(1, 2), (3, 4), (5, None)]
</code></pre>
<p>So we could add another function here:</p>
<pre><code>from itertools import izip_longest
def combine(data, fields=None, default=None):
'''Combines fields within data
args:
data(dict): a dictionary with lists as values
fields(list): a list of keys to combine [default: all fields in random order]
default: default fill value [default: None]
yields:
tuple: columns combined into rows
'''
    fields = list(data.keys()) if fields is None else fields
columns = [data.get(field) for field in fields]
for values in izip_longest(*columns, fillvalue=default):
yield values
</code></pre>
<p>And now we can use this to update our original conversion.</p>
<pre><code>def dump_to_csv(data, filepath=None, delimiter=None):
'''Dumps list into csv
args:
data(list): list of values to dump
filepath(str): filepath to save to [default: 'tmp.csv']
delimiter(str): delimiter to use in csv [default: '|']
'''
filepath = 'tmp.csv' if filepath is None else filepath
delimiter = '|' if not delimiter else delimiter
with open(filepath, 'w') as csvfile:
writer = csv.writer(csvfile, delimiter=delimiter)
for each_row in data:
            writer.writerow(each_row)
# Conversion code
test_data = {
"review_id": [1, 2, 3, 4],
"text": [5, 6, 7, 8]}
}
combined_data = combine(test_data)
data_list = [combined_data]
dump_to_csv(data_list)
</code></pre>
| 0 | 2016-09-28T21:59:36Z | [
"python",
"csv"
]
|
tf.contrib.learn Quickstart: Fix float64 Warning | 39,757,639 | <p>I'm getting myself started with TensorFlow by working through the posted tutorials.</p>
<p>I have the Linux CPU python2.7 version 0.10.0 running on Fedora 23 (twenty three).</p>
<p>I am trying the tf.contrib.learn Quickstart tutorial as per the following code.</p>
<blockquote>
<p><a href="https://www.tensorflow.org/versions/r0.10/tutorials/tflearn/index.html#tf-contrib-learn-quickstart" rel="nofollow">https://www.tensorflow.org/versions/r0.10/tutorials/tflearn/index.html#tf-contrib-learn-quickstart</a></p>
</blockquote>
<pre><code>from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import numpy as np
# Data sets
IRIS_TRAINING = "IRIS_data/iris_training.csv"
IRIS_TEST = "IRIS_data/iris_test.csv"
# Load datasets.
training_set = tf.contrib.learn.datasets.base.load_csv(filename=IRIS_TRAINING,
target_dtype=np.int)
test_set = tf.contrib.learn.datasets.base.load_csv(filename=IRIS_TEST,
target_dtype=np.int)
# Specify that all features have real-value data
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=4)]
# Build 3 layer DNN with 10, 20, 10 units respectively.
classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
hidden_units=[10, 20, 10],
n_classes=3,
model_dir="/tmp/iris_model")
# Fit model.
classifier.fit(x=training_set.data,
y=training_set.target,
steps=2000)
# Evaluate accuracy.
accuracy_score = classifier.evaluate(x=test_set.data,
y=test_set.target)["accuracy"]
print('Accuracy: {0:f}'.format(accuracy_score))
# Classify two new flower samples.
new_samples = np.array(
[[6.4, 3.2, 4.5, 1.5], [5.8, 3.1, 5.0, 1.7]], dtype=float)
y = classifier.predict(new_samples)
print('Predictions: {}'.format(str(y)))
</code></pre>
<p>The Code Executes, but gives float64 warnings. As Such:</p>
<pre><code>$ python confErr.py
WARNING:tensorflow:load_csv (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed after 2016-09-15.
Instructions for updating:
Please use load_csv_{with|without}_header instead.
WARNING:tensorflow:load_csv (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed after 2016-09-15.
Instructions for updating:
Please use load_csv_{with|without}_header instead.
WARNING:tensorflow:Using default config.
WARNING:tensorflow:float64 is not supported by many models, consider casting to float32.
WARNING:tensorflow:Setting feature info to TensorSignature(dtype=tf.float64, shape=TensorShape([Dimension(None), Dimension(4)]), is_sparse=False)
WARNING:tensorflow:Setting targets info to TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(None)]), is_sparse=False)
WARNING:tensorflow:float64 is not supported by many models, consider casting to float32.
WARNING:tensorflow:Given features: Tensor("input:0", shape=(?, 4), dtype=float64), required signatures: TensorSignature(dtype=tf.float64, shape=TensorShape([Dimension(None), Dimension(4)]), is_sparse=False).
WARNING:tensorflow:Given targets: Tensor("output:0", shape=(?,), dtype=int64), required signatures: TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(None)]), is_sparse=False).
Accuracy: 0.966667
WARNING:tensorflow:float64 is not supported by many models, consider casting to float32.
Predictions: [1 1]
</code></pre>
<p>Note: replacing 'load_csv()' with 'load_csv_with_header()' produces the correct prediction, but the float64 warnings remain.</p>
<p>I have tried explicitly listing dtype (np.int32 ; np.float32; tf.int32; tf.float32) for training_set, test_set and new_samples.</p>
<p>I also tried 'casting' feature_columns as:</p>
<pre><code>feature_columns = tf.cast(feature_columns, tf.float32)
</code></pre>
<p>The problems with float64 are a known development issue, but I'm wondering if there is some workaround?</p>
| 0 | 2016-09-28T21:04:41Z | 39,859,570 | <p>I received this answer from the development team via Git-hub.</p>
<blockquote>
<p>Hi @qweelar, the float64 warning is due to a bug with the load_csv_with_header function that was fixed in commit b6813bd. This fix isn't in TensorFlow release 0.10, but should be in the next release.</p>
<p>In the meantime, for the purposes of the tf.contrib.learn quickstart, you can safely ignore the float64 warning.</p>
<p>(Side note: In terms of the other deprecation warning, I will be updating the tutorial code to use load_csv_with_header, and will update this issue when that's in place.)</p>
</blockquote>
| 0 | 2016-10-04T18:44:14Z | [
"python",
"python-2.7",
"tensorflow"
]
|
Python video system not initialized | 39,757,662 | <p>Here it is. I don't know what is wrong; I looked at other answers but I still can't figure it out.</p>
<pre><code>import pygame
pygame.init()
gameWindow = pygame.display.set_mode((1000,600));
pygame.display.set_caption("Practice")
#game starts
gameActive = True
while gameActive:
for event in pygame.event.get():
#print event
if event.type == pygame.QUIT:
gameActive = False
pygame.quit()
quit()
</code></pre>
 | 1 | 2016-09-29T01:23:54Z | 39,759,942 | <p>You have <code>pygame.quit()</code> in your main loop, so after one iteration through the loop you call <code>pygame.quit()</code>; pygame is then no longer initialized, which creates the error about not having a display surface.</p>
<p>Moving <code>pygame.quit()</code> out of the main while loop fixes the issue.</p>
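<p>For reference, a sketch of the corrected structure - the same code with the teardown moved after the loop:</p>
<pre><code>import pygame

pygame.init()
gameWindow = pygame.display.set_mode((1000, 600))
pygame.display.set_caption("Practice")

gameActive = True
while gameActive:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            gameActive = False

# only shut pygame down once the main loop has exited
pygame.quit()
</code></pre>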
| 0 | 2016-09-29T01:23:54Z | [
"python",
"pygame"
]
|
Selenium click trouble (Python) | 39,757,696 | <p>I am using Selenium (<code>ChromeDriver</code>) to automate a <a href="https://www.chess.com/play/computer" rel="nofollow">chess site</a> but I am having trouble clicking on a piece and moving it. I have tried <code>click()</code> and <code>ActionChains</code> but nothing is working. Here is my code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
T = r"C:\Users\HP\Downloads\chromedriver.exe"
options = webdriver.ChromeOptions()
options.add_argument("--start-maximized")
Driver = webdriver.Chrome(T, chrome_options=options)
Driver.get("https://www.chess.com/play/computer")
Driver.find_element_by_xpath('//*[@id="boardMessage"]/a').click()
Piece = WebDriverWait(Driver,10).until(EC.element_to_be_clickable((By.XPATH,'//*[@id="chessboard_boardarea"]/img[22]')))
Piece.click()
</code></pre>
<p>When I run the script nothing happens but the white pawn should be highlighted in yellow. Can someone explain why <code>.click()</code> or <code>ActionChains</code> is not working? How can I make it work?</p>
<p>P.S. If the solution requires JavaScript, please explain it in more detail because I don't know JavaScript at all.</p>
| 0 | 2016-09-28T21:08:59Z | 39,761,172 | <p>This is somewhat complicated. The chess pieces are <code>IMG</code>s that can be clicked but empty chess squares are not represented by an element. You will have to determine a coordinate system and use <a href="http://selenium-python.readthedocs.io/api.html#selenium.webdriver.common.action_chains.ActionChains.move_to_element_with_offset" rel="nofollow"><code>move_to_element_with_offset(to_element, xoffset, yoffset)</code></a> based off the board represented by <code><div id="chessboard_boardarea" ...></code> and the board labels A-H and 1-8. For <code>move_to_element_with_offset()</code>, the offsets are relative to the top-left corner of the element. So in this case, (0,0) is the top left corner of the chessboard.</p>
<p>The code below should click the white pawn at A2 and then click A3 which moves it. The board is 640px x 640px. Each square is 80px. The code is clicking in the middle of the square so:</p>
<ul>
<li>A8 would be 40,40</li>
<li>A1 is 40,600</li>
<li>H8 is 600,40</li>
<li>H1 is 600,600</li>
</ul>
<p></p>
<pre><code>board = Driver.find_element_by_id("chessboard_boardarea")
action_chains = ActionChains(Driver)
action_chains.move_to_element_with_offset(board, 40, 520).click().perform() # A2
action_chains.move_to_element_with_offset(board, 40, 440).click().perform() # A3
</code></pre>
<p>You can determine what piece is represented by an element (<code>IMG</code> tag) by looking at the filename in the <code>src</code> attribute. For example, <code>src="//images.chesscomfiles.com/chess-themes/pieces/neo/80/bn.png"</code> has the filename <code>bn.png</code> and is the black knight. Each image filename will be two letters. The first letter is the piece color which is either 'b' for black or 'w' for white. The second letter is the piece name, 'p' pawn, 'r' rook, 'n' knight, 'b' bishop, 'q' queen, and 'k' king. So, <code>bn.png</code> is 'b' for black and 'n' for knight... the black knight.</p>
<p>You can determine where pieces are by using the <code>transform: translate(160px, 160px);</code> portion of the style attribute of the <code>IMG</code> tags representing the different pieces. For example, <code>transform: translate(160px, 160px);</code> this element is at 160,160 which is C6 (the coords are the top-left of the square and each square is 80px).</p>
| 0 | 2016-09-29T04:00:04Z | [
"python",
"selenium",
"selenium-webdriver"
]
|
Selenium click trouble (Python) | 39,757,696 | <p>I am using Selenium (<code>ChromeDriver</code>) to automate a <a href="https://www.chess.com/play/computer" rel="nofollow">chess site</a> but I am having trouble clicking on a piece and moving it. I have tried <code>click()</code> and <code>ActionChains</code> but nothing is working. Here is my code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
T = r"C:\Users\HP\Downloads\chromedriver.exe"
options = webdriver.ChromeOptions()
options.add_argument("--start-maximized")
Driver = webdriver.Chrome(T, chrome_options=options)
Driver.get("https://www.chess.com/play/computer")
Driver.find_element_by_xpath('//*[@id="boardMessage"]/a').click()
Piece = WebDriverWait(Driver,10).until(EC.element_to_be_clickable((By.XPATH,'//*[@id="chessboard_boardarea"]/img[22]')))
Piece.click()
</code></pre>
<p>When I run the script nothing happens but the white pawn should be highlighted in yellow. Can someone explain why <code>.click()</code> or <code>ActionChains</code> is not working? How can I make it work?</p>
<p>P.S. If the solution requires JavaScript, please explain it in more detail because I don't know JavaScript at all.</p>
| 0 | 2016-09-28T21:08:59Z | 39,764,746 | <p>Selenium Webdriver is not the right tool for it.</p>
<p>You could try the <a href="https://sourceforge.net/adobe/genie/wiki/Home/" rel="nofollow">Genie automation tool</a> if you are looking for a free tool. I've tried my hand at Genie; it's a bit complex, but in the end it solves your problem.</p>
| 0 | 2016-09-29T08:07:46Z | [
"python",
"selenium",
"selenium-webdriver"
]
|
Trying to find smallest number | 39,757,754 | <p>In my program I'm trying to find the smallest number that python can give me. When I kept dividing a number by 2, I got 5 x 10^-324 (5e-324). I thought I could divide this by the biggest number I can use in python. I tried to get the biggest number in python by doing this:</p>
<pre><code>z = 1
while True:
try:
z = z+1
except OverflowError:
z = z-1
break
</code></pre>
<hr>
<p>Here is my full code:</p>
<pre><code>from os import system
x = 76556758478567587
while True:
x = x/2
if x/2 == 0.0:
break
print("Smallest number I can compute:", x)
print()
z = 1
while True:
try:
z = z+1
except OverflowError:
z = z-1
break
print(str(x) + " divided by " + str(z) + " is...")
z = x/z
print(z)
system("pause >nul")
</code></pre>
<p>Every time I run this it does nothing. Then I realize it's still trying to solve the problem, so I open Task Manager and find Python eating up my CPU like a pack of wolves eating a dead cow.</p>
<hr>
<p>I know the smallest number in python would be negative but I want to get the smallest number <strong>above zero</strong>.</p>
| 1 | 2016-09-28T21:13:13Z | 39,757,815 | <p>You may use <code>sys.float_info</code> to get the maximum/minimum representable finite float as:</p>
<pre><code>>>> import sys
>>> sys.float_info.min
2.2250738585072014e-308
>>> sys.float_info.max
1.7976931348623157e+308
</code></pre>
<p>Python uses double-precision floats, which can hold values from about 10 to the -308 to 10 to the 308 power. Below is the experiment from the python prompt:</p>
<pre><code># for maximum
>>> 1e308
1e+308
>>> 1e+309
inf <-- Infinite
</code></pre>
<p>You may even get numbers smaller than <code>1e-308</code> via <a href="https://en.wikipedia.org/wiki/Denormal_number" rel="nofollow"><code>denormals</code></a>, but there is a significant performance hit to this and such numbers are <em>represented with a loss of precision</em>. I found that Python is able to handle 1e-323 but underflows on 1e-324 and returns 0.0 as the value.</p>
<pre><code># for minimum
>>> 1e-323
1e-323
>>> 1e-324
0.0
</code></pre>
<p>You can get denormalized minimum as <code>sys.float_info.min * sys.float_info.epsilon</code>, which comes as <strong><code>5e-324</code></strong>. </p>
<p>As per the document:</p>
<blockquote>
<p><strong>sys.float_info.max</strong> is maximum representable finite float</p>
<p><strong>sys.float_info.min</strong> is minimum positive normalized float</p>
</blockquote>
<p>To get more information check: <a href="https://docs.python.org/3/library/sys.html#sys.float_info" rel="nofollow"><code>sys.float_info</code></a>:</p>
<blockquote>
<p><strong>sys.float_info</strong> is a struct sequence holding information about the float type. It contains low level information about the precision and internal representation. The values correspond to the various floating-point constants defined in the standard header file float.h for the "C" programming language; see section 5.2.4.2.2 of the 1999 ISO/IEC C standard [C99], "Characteristics of floating types", for details.</p>
</blockquote>
| 3 | 2016-09-28T21:17:06Z | [
"python",
"numbers"
]
|
Trying to find smallest number | 39,757,754 | <p>In my program I'm trying to find the smallest number that python can give me. When I kept dividing a number by 2, I got 5 x 10^-324 (5e-324). I thought I could divide this by the biggest number I can use in python. I tried to get the biggest number in python by doing this:</p>
<pre><code>z = 1
while True:
try:
z = z+1
except OverflowError:
z = z-1
break
</code></pre>
<hr>
<p>Here is my full code:</p>
<pre><code>from os import system
x = 76556758478567587
while True:
x = x/2
if x/2 == 0.0:
break
print("Smallest number I can compute:", x)
print()
z = 1
while True:
try:
z = z+1
except OverflowError:
z = z-1
break
print(str(x) + " divided by " + str(z) + " is...")
z = x/z
print(z)
system("pause >nul")
</code></pre>
<p>Every time I run this it does nothing. Then I realize it's still trying to solve the problem, so I open Task Manager and find Python eating up my CPU like a pack of wolves eating a dead cow.</p>
<hr>
<p>I know the smallest number in python would be negative but I want to get the smallest number <strong>above zero</strong>.</p>
 | 1 | 2016-09-28T21:13:13Z | 39,758,010 | <p>The key word that you're apparently missing is "<a href="https://en.wikipedia.org/wiki/Epsilon" rel="nofollow">epsilon</a>." If you search on <strong>Python epsilon value</strong> then it returns a link to <a href="http://stackoverflow.com/a/9528486/149076">StackOverflow: Value for epsilon in Python</a> near the top of the results in Google.</p>
<p>Obviously it helps if you understand how the term is associated with this concept.</p>
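<p>For example, the machine epsilon is exposed directly (the exact values assume IEEE-754 double precision):</p>
<pre><code>>>> import sys
>>> sys.float_info.epsilon
2.220446049250313e-16
>>> 1.0 + sys.float_info.epsilon > 1.0
True
</code></pre>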
| 0 | 2016-09-28T21:32:14Z | [
"python",
"numbers"
]
|
Trying to find smallest number | 39,757,754 | <p>In my program I'm trying to find the smallest number that python can give me. When I kept dividing a number by 2, I got 5 x 10^-324 (5e-324). I thought I could divide this by the biggest number I can use in python. I tried to get the biggest number in python by doing this:</p>
<pre><code>z = 1
while True:
try:
z = z+1
except OverflowError:
z = z-1
break
</code></pre>
<hr>
<p>Here is my full code:</p>
<pre><code>from os import system
x = 76556758478567587
while True:
x = x/2
if x/2 == 0.0:
break
print("Smallest number I can compute:", x)
print()
z = 1
while True:
try:
z = z+1
except OverflowError:
z = z-1
break
print(str(x) + " divided by " + str(z) + " is...")
z = x/z
print(z)
system("pause >nul")
</code></pre>
<p>Every time I run this it does nothing. Then I realize it's still trying to solve the problem, so I open Task Manager and find Python eating up my CPU like a pack of wolves eating a dead cow.</p>
<hr>
<p>I know the smallest number in python would be negative but I want to get the smallest number <strong>above zero</strong>.</p>
 | 1 | 2016-09-28T21:13:13Z | 39,758,456 | <p>The smallest floating point number representable in Python is <code>5e-324</code>, or <code>2.0**-1074</code>.</p>
<p>The other two answers have each given you half of what you need to find this number. It can be found by multiplying <code>sys.float_info.min</code> (<code>2.2250738585072014e-308</code>, the smallest normalized float) by <code>sys.float_info.epsilon</code> (<code>2.220446049250313e-16</code>, the smallest proportion you can modify a number by while still getting a distinct value).</p>
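<p>A quick check of that (assuming IEEE-754 doubles):</p>
<pre><code>>>> import sys
>>> sys.float_info.min * sys.float_info.epsilon
5e-324
>>> 2.0 ** -1074
5e-324
>>> (2.0 ** -1074) / 2  # anything smaller underflows to zero
0.0
</code></pre>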
| 1 | 2016-09-28T22:08:34Z | [
"python",
"numbers"
]
|
Using python requests and beautiful soup to pull text | 39,757,805 | <p>Thanks for taking a look at my problem. I would like to know if there is any way to pull the data-sitekey from this text... here is the url to the page <a href="https://e-com.secure.force.com/adidasUSContact/" rel="nofollow">https://e-com.secure.force.com/adidasUSContact/</a></p>
<pre><code><div class="g-recaptcha" data-sitekey="6LfI8hoTAAAAAMax5_MTl3N-5bDxVNdQ6Gx6BcKX" data-type="image" id="ncaptchaRecaptchaId"><div style="width: 304px; height: 78px;"><div><iframe src="https://www.google.com/recaptcha/api2/anchor?k=6LfI8hoTAAAAAMax5_MTl3N-5bDxVNdQ6Gx6BcKX&amp;co=aHR0cHM6Ly9lLWNvbS5zZWN1cmUuZm9yY2UuY29tOjQ0Mw..&amp;hl=en&amp;type=image&amp;v=r20160921114513&amp;size=normal&amp;cb=ei2ddcb6rl03" title="recaptcha widget" width="304" height="78" role="presentation" frameborder="0" scrolling="no" name="undefined"></iframe></div><textarea id="g-recaptcha-response" name="g-recaptcha-response" class="g-recaptcha-response" style="width: 250px; height: 40px; border: 1px solid #c1c1c1; margin: 10px 25px; padding: 0px; resize: none; display: none; "></t
</code></pre>
<p>here is my current code </p>
<pre><code> import requests
from bs4 import BeautifulSoup
headers = {
'Host' : 'e-com.secure.force.com',
'Connection' : 'keep-alive',
'Upgrade-Insecure-Requests' : '1',
'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; WOW64)',
'Accept' : 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
'Accept-Encoding' : 'gzip, deflate, sdch',
'Accept-Language' : 'en-US,en;q=0.8'
}
url = 'https://e-com.secure.force.com/adidasUSContact/'
r = requests.get(url, headers=headers)
soup = BeautifulSoup(r, 'html.parser')
c = soup.find_all('div', attrs={"class": "data-sitekey"})
print c
</code></pre>
| 2 | 2016-09-28T21:16:27Z | 39,757,879 | <p>Ok now we have code, it is as simple as:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
soup = BeautifulSoup(requests.get("https://e-com.secure.force.com/adidasUSContact/").content, "html.parser")
key = soup.select_one("#ncaptchaRecaptchaId")["data-sitekey"]
</code></pre>
<p><em>data-sitekey</em> is an <em>attribute</em>, <strong>not</strong> a <em>css</em> class, so you just need to extract it from the element; you can find the element by its <em>id</em> as above.</p>
<p>You could also use the class name:</p>
<pre><code># css selector
key = soup.select_one("div.g-recaptcha")["data-sitekey"]
# regular find using class name
key = soup.find("div",class_="g-recaptcha")["data-sitekey"]
</code></pre>
| 3 | 2016-09-28T21:21:41Z | [
"python",
"beautifulsoup",
"python-requests",
"bs4"
]
|
Celery redis backend not always returning result | 39,757,813 | <p>I'm running a celery worker such that:</p>
<pre><code> -------------- celery@ v3.1.23 (Cipater)
---- **** -----
--- * *** * -- Linux-4.4.0-31-generic-x86_64-with-debian-stretch-sid
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: __main__:0x7fe76cd42400
- ** ---------- .> transport: amqp://
- ** ---------- .> results: redis://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. tasks.mytask
</code></pre>
<p><strong>tasks.py</strong>:</p>
<pre><code>@celery_app.task(bind=True, ignore_result=False)
def mytask(task):
r = redis.StrictRedis()
r.rpush('/task_finished', task.request.id)
return {'result': 42}
</code></pre>
<p>When I try to run the following code, and run 2 task one after the other it works when getting the first result but fails to return the second one.</p>
<pre><code>import celery.result
import redis
r = redis.StrictRedis()
celery_app = Celery(name="my_long_task", backend="redis://")
while True:
_, resp = r.blpop('/task_finished')
task_id = resp.decode('utf-8')
task = celery.result.AsyncResult(task_id, app=celery_app)
print(task)
print(task.result)
</code></pre>
<p>Will return:</p>
<p><em>First loop</em>: </p>
<pre><code>[1] 990e2d04-5664-4d7c-8a5c-e9cb4ef45e24
[2] {'result': 42}
</code></pre>
<p><em>Second loop</em> (fails to return the result): </p>
<pre><code>[3] 8463cc46-0884-4bf7-b838-f0614f74b271
[4] {}
</code></pre>
<p>However if I instantiate <code>celery_app = Celery(name="my_long_task", backend="redis://")</code> in the <code>while</code> loop it will work each time.<br>
What is wrong with not reinstantiating <code>celery_app</code>? What am I missing?</p>
<p><strong>Edit:</strong></p>
<p>Waiting a bit for the result (in case of latency) won't work too</p>
<pre><code>while True:
_, resp = r.blpop('/task_finished')
task_id = resp.decode('utf-8')
for i in range(0, 20):
# Won't work because I need to re instantiate celery_app
task = celery.result.AsyncResult(task_id, app=celery_app)
print(task.result)
time.sleep(1)
</code></pre>
| 1 | 2016-09-28T21:16:55Z | 39,768,592 | <p><strong>You have a race condition.</strong> This is what happens:</p>
<ol>
<li><p>The loop arrives at <code>_, resp = r.blpop('/task_finished')</code> and blocks there.</p></li>
<li><p>The task executes <code>r.rpush('/task_finished', task.request.id)</code></p></li>
<li><p>The loop unblocks, executes <code>task = celery.result.AsyncResult(task_id, app=celery_app)</code> and gets an empty result because the task has not yet recorded its result to the database.</p></li>
</ol>
<p>There may be a way to do the <code>r.rpush</code> <em>after</em> celery has committed the results to the backend. Perhaps creating a custom class derived from <code>Task</code> would do it. But that's not something I've tried.</p>
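<p>For what it's worth, an untested sketch of that idea, assuming <code>on_success</code> fires only after the backend has stored the result (something I have not verified):</p>
<pre><code>import redis

class NotifyingTask(celery_app.Task):
    def on_success(self, retval, task_id, args, kwargs):
        # runs in the worker after the task has finished
        redis.StrictRedis().rpush('/task_finished', task_id)

@celery_app.task(base=NotifyingTask, bind=True, ignore_result=False)
def mytask(task):
    return {'result': 42}
</code></pre>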
<p>However, you could certainly modify your code to store the results <em>together</em> with the task id. Something like: </p>
<pre><code>r.rpush('/task_finished', json.dumps({ "task_id": task.request.id, "result": 42 }))
</code></pre>
<p>I've used a JSON serialization for the sake of illustration. You can use whatever scheme you want. On reading:</p>
<pre><code>_, resp = r.blpop('/task_finished')
resp = json.loads(resp)
</code></pre>
<p>With this, you might want to change <code>ignore_result=False</code> to <code>ignore_result=True</code>.</p>
| 0 | 2016-09-29T11:07:20Z | [
"python",
"rabbitmq",
"celery"
]
|
how to initialize multiple columns to existing pandas DataFrame | 39,757,901 | <p>how can I initialize multiple columns in a single statement on an existing pandas DataFrame object? I can initialize a single column at a time, this way:</p>
<pre><code>df = pd.DataFrame({'a':[1,2,3],'b':[4,5,6]}, dtype='int')
df['c'] = 0
</code></pre>
<p>but i cannot do something like:</p>
<pre><code>df[['c','d']] = 0 or
df[['c']['d']] = 0
</code></pre>
<p>Is there a way I can achieve this?</p>
 | 3 | 2016-09-28T21:23:07Z | 39,757,962 | <p>I got a solution <a href="http://stackoverflow.com/questions/30926670/pandas-add-multiple-empty-columns-to-dataframe">here</a>.</p>
<pre><code>df.reindex(columns=list('cd'))
</code></pre>
<p>will do the trick.</p>
<p>actually it will be (note that <code>reindex</code> returns a new frame and fills the added columns with <code>NaN</code> unless you pass <code>fill_value</code>):</p>
<pre><code>df = df.reindex(columns=list('abcd'), fill_value=0)
</code></pre>
| 1 | 2016-09-28T21:28:16Z | [
"python",
"pandas",
"dataframe"
]
|
how to initialize multiple columns to existing pandas DataFrame | 39,757,901 | <p>how can I initialize multiple columns in a single statement on an existing pandas DataFrame object? I can initialize a single column at a time, this way:</p>
<pre><code>df = pd.DataFrame({'a':[1,2,3],'b':[4,5,6]}, dtype='int')
df['c'] = 0
</code></pre>
<p>but i cannot do something like:</p>
<pre><code>df[['c','d']] = 0 or
df[['c']['d']] = 0
</code></pre>
<p>Is there a way I can achieve this?</p>
| 3 | 2016-09-28T21:23:07Z | 39,758,675 | <p><strong><em><code>pd.concat</code></em></strong></p>
<pre><code>pd.concat([df, pd.DataFrame(0, df.index, list('cd'))], axis=1)
</code></pre>
<p><a href="http://i.stack.imgur.com/4L9rZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/4L9rZ.png" alt="enter image description here"></a></p>
<hr>
<p><strong><em><code>join</code></em></strong></p>
<pre><code>df.join(pd.DataFrame(0, df.index, list('cd')))
</code></pre>
<p><a href="http://i.stack.imgur.com/4L9rZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/4L9rZ.png" alt="enter image description here"></a></p>
| 1 | 2016-09-28T22:33:32Z | [
"python",
"pandas",
"dataframe"
]
|
argparse: how to parse a single string argument OR a file listing many arguments? | 39,757,910 | <p>I have a use case where I'd like the user to be able to provide, as an argument to argparse, EITHER a single string OR a filename where each line has a string.</p>
<p>Assume the user launches <code>./myscript.py -i foobar</code></p>
<p>The logical flow I'm looking for is something like this:</p>
<p>The script determines whether the string foobar is a readable file.
IF it is indeed a readable file, we call some function from the script, passing each line in <code>foobar</code> as an argument to that function. If foobar is not a readable file, we call the same function but just use the string <code>foobar</code> as the argument and return. </p>
<p>I have no ability to guarantee that a filename argument will have a specific extension (or even an extension at all). </p>
<p>Is there a more pythonic way to do this OTHER than just coding up the logic exactly as I've described above? I looked through the <a href="https://docs.python.org/2/howto/argparse.html" rel="nofollow">argparse tutorial</a> and didn't see anything, but it also seems reasonable to think that there would be some specific hooks for filenames as arguments, so I figured I'd ask. </p>
| 0 | 2016-09-28T21:23:45Z | 39,758,144 | <p>A way would be:</p>
<p>Let's say that you have created a parser like this:</p>
<pre><code>parser.add_argument('-i',
help='...',
type=function)
</code></pre>
<p>Where <code>type</code> points to the <code>function</code> which will be an outer function that evaluates the input of the user and decides if it is a <code>string</code> or a <code>filename</code></p>
<p>More information about <code>type</code> you can find in the <a href="https://docs.python.org/3/library/argparse.html#type" rel="nofollow">documentation</a>.</p>
<p>Here is a minimal example that demonstrates this use of <code>type</code>:</p>
<pre><code>parser.add_argument('-d','--directory',
type=Val_dir,
help='...')
# ....
def Val_dir(dir):
if not os.path.isdir(dir):
raise argparse.ArgumentTypeError('The directory you specified does not seem to exist!')
else:
return dir
</code></pre>
<p>The above example shows that with <code>type</code> we can control the input at parsing time. Of course in your case the function would implement another logic - evaluate if the input is a string or a filename.</p>
| 0 | 2016-09-28T21:42:41Z | [
"python",
"argparse"
]
|
argparse: how to parse a single string argument OR a file listing many arguments? | 39,757,910 | <p>I have a use case where I'd like the user to be able to provide, as an argument to argparse, EITHER a single string OR a filename where each line has a string.</p>
<p>Assume the user launches <code>./myscript.py -i foobar</code></p>
<p>The logical flow I'm looking for is something like this:</p>
<p>The script determines whether the string foobar is a readable file.
IF it is indeed a readable file, we call some function from the script, passing each line in <code>foobar</code> as an argument to that function. If foobar is not a readable file, we call the same function but just use the string <code>foobar</code> as the argument and return. </p>
<p>I have no ability to guarantee that a filename argument will have a specific extension (or even an extension at all). </p>
<p>Is there a more pythonic way to do this OTHER than just coding up the logic exactly as I've described above? I looked through the <a href="https://docs.python.org/2/howto/argparse.html" rel="nofollow">argparse tutorial</a> and didn't see anything, but it also seems reasonable to think that there would be some specific hooks for filenames as arguments, so I figured I'd ask. </p>
| 0 | 2016-09-28T21:23:45Z | 39,758,720 | <p>This doesn't look like an <code>argparse</code> problem, since all you want from it is a string. That string can be a filename or a function argument. To a <code>parser</code> these will look the same. Also <code>argparse</code> isn't normally used to run functions. It is used to parse the commandline. Your code determines what to do with that information.</p>
<p>So here's a script (untested) that I think does your task:</p>
<pre><code>import argparse
def somefunction(*args):
print(args)
if __name__=='__main__':
parser=argparse.ArgumentParser()
parser.add_argument('-i','--input')
args = parser.parse_args()
try:
with open(args.input) as f:
lines = f.read()
somefunction(*lines)
# or
# for line in lines:
# somefuncion(line.strip())
except:
somefunction(arg.input)
</code></pre>
<p><code>argparse</code> just provides the <code>args.input</code> string. It's the try/except block that determines how it is used.</p>
<p>================</p>
<p>Here's a prefix char approach:</p>
<pre><code>parser=argparse.ArgumentParser(fromfile_prefix_chars='@',
description="use <prog -i @filename> to load values from file")
parser.add_argument('-i','--inputs', nargs='+')  # each line of the file becomes one argument
args=parser.parse_args()
for arg in args.inputs:
somefunction(arg)
</code></pre>
<p>this is supposed to work with a file like:</p>
<pre><code> one
two
three
</code></pre>
<p><a href="https://docs.python.org/3/library/argparse.html#fromfile-prefix-chars" rel="nofollow">https://docs.python.org/3/library/argparse.html#fromfile-prefix-chars</a></p>
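<p>A quick way to check that expansion without a shell (a sketch assuming <code>args.txt</code> holds one value per line): <code>parse_args</code> applies the <code>@</code>-file expansion to explicit argument lists too.</p>
<pre><code>args = parser.parse_args(['-i', '@args.txt'])
for arg in args.inputs:   # ['one', 'two', 'three'] for the file above
    somefunction(arg)
</code></pre>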
| 0 | 2016-09-28T22:38:11Z | [
"python",
"argparse"
]
|
How to go through a ManyToMany field in chunks in Django | 39,757,922 | <p>I am trying to display sets of three objects in an HTML template in a tiled interface. So, for example, an Album model has a ManyToMany field with specific photos. I want to iterate through the photos and show them in sets of three in the view. Currently, I can get all the photos using <code>{% for photo in album.images.all %}</code> in the template, but don't know how to get the results in sets of three. </p>
<p>How would I go about chunking the results into sets of three so that I can then iterate through the sets of three for the template? Or is there a way to get the total length and then index of specific elements using the Template tags?</p>
<p>Thanks</p>
| 0 | 2016-09-28T21:24:55Z | 39,758,064 | <pre><code>{% for photo in album.images.all %}
{{ photo }}
    {% if forloop.counter|divisibleby:3 %}
        <hr> {# or some other markup #}
    {% endif %}
{% endfor %}
</code></pre>
<p>You might also do the grouping in your view. This might result in cleaner markup. See: <a href="http://stackoverflow.com/questions/11964972/django-template-for-tag-add-li-every-4th-element#answer-11965885">Django template {%for%} tag add li every 4th element</a></p>
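<p>A minimal sketch of the view-side grouping (the view, model and template names are assumptions, not from the original post):</p>
<pre><code>from django.shortcuts import render

def album_detail(request, pk):
    album = Album.objects.get(pk=pk)
    photos = list(album.images.all())
    # chunk the photos into rows of three for the template
    rows = [photos[i:i + 3] for i in range(0, len(photos), 3)]
    return render(request, 'album_detail.html', {'rows': rows})
</code></pre>
<p>The template then nests two loops: one over <code>rows</code> and one over the three photos in each row.</p>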
| 1 | 2016-09-28T21:36:36Z | [
"python",
"django"
]
|
How to add more functionality with arguments in this code? | 39,757,924 | <p>I have found a nice little program for making quick notes in the terminal. It lacks a little functionality: I can't read the notes I have made with it, and I also cannot clear the notes from the file where they are stored. I would like to modify this program so I can run it with arguments to read what I have written there and also to clear it. I have some idea how to do that, but can't find the right place in the code to paste my lines. I want to run it with arguments such as: -r -c</p>
<p>original program looks like this:</p>
<pre><code>#!/usr/bin/env python3
import time
import os
import sys
# add the current local time to the entry header
lines = [ time.asctime() + '\n' + '--------------------\n' ]
if len( sys.argv ) > 1:
lines.append( ' '.join( sys.argv[ 1: ] ) )
lines[-1] += '\n'
else:
while 1:
try:
line = input()
except EOFError:
break
# get more user input until an empty line
if len( line ) == 0:
break
else:
lines.append( line + '\n' )
# only write the entry if the user entered something
if len( lines ) > 1:
memoir_path = os.path.expanduser( '~/.memoir' )
# prepend a separator only if the file exists ( there are entries already in there )
if os.path.exists( memoir_path ):
lines.insert( 0, '--------------------\n' )
with open( memoir_path, 'a' ) as f:
f.writelines( lines )
</code></pre>
<p>my code, which I don't know where to paste (if it is correct):</p>
<pre><code># read memoir file
if str(sys.argv) == ("r"):
os.system('cat ~/.memoir')
# clear memoir file
if str(sys.argv) == ("c"):
os.system('> ~/.memoir')
</code></pre>
<p>EDIT:</p>
<p>I have made a few changes based on the answer, and everything works fine, but I would like to make this code a little simpler. The author of this code added a feature, useless for me, that lets the program be run with a number of arbitrary arguments which are "transformed" into lines in the note. It seems not to work anyway after my update, so I want to get rid of this feature. I think it starts at line 37; look for the #here!!! comment</p>
<p>new code looks like this:</p>
<pre><code>#!/usr/bin/env python3
import time
import os
import sys
def help():
print ("memoir is a minimal cli diary")
print ("run script with -r argument to read notes")
print ("run script with -c argument to clear notes file")
print ("run script with -h argument for help")
# add the current local time to the entry header
lines = [ time.asctime() + '\n' + '------------------------\n' + '\n' ]
if len(sys.argv) >= 2:
if sys.argv[1] == '-r':
# read .memoir file
os.system('cat ~/.memoir')
print('\n')
exit(0)
if sys.argv[1] == '-h':
# print help
help()
exit(0)
if sys.argv[1] == '-c':
# clear .memoir file
os.system('> ~/.memoir')
exit(0)
else:
print("invalid argument, type m -h for help")
exit(0)
if len(sys.argv) > 1 and len(sys.argv) != 2: #here!!!
lines.append( ' '.join(sys.argv[ 1: ]))
lines[-1] += '\n'
else:
while 1:
try:
line = input()
except EOFError:
break
# get more user input until an empty line
if len( line ) == 0:
break
else:
lines.append( line + '\n' )
# only write the entry if the user entered something
if len( lines ) > 1:
memoir_path = os.path.expanduser( '~/.memoir' )
# prepend a separator only if the file exists ( there are entries already in there )
if os.path.exists( memoir_path ):
lines.insert(0, '\n------------------------\n')
with open( memoir_path, 'a' ) as f:
f.writelines( lines )
if len(sys.argv) >= 2:
# clear .memoir file
if sys.argv[1] == '-c':
os.system('> ~/.memoir')
</code></pre>
| 0 | 2016-09-28T21:24:59Z | 39,758,222 | <p>First, <code>sys.argv</code> is a list holding the command-line arguments. So you need to test its length:</p>
<pre><code>if len(sys.argv) >= 2:
    # sys.argv[0] is the name of your program
    if sys.argv[1] == '-c':
</code></pre>
<p>Then you can see where the latest lines are written to the file:</p>
<pre><code>if len( lines ) > 1:
memoir_path = os.path.expanduser( '~/.memoir' )
# prepend a separator only if the file exists ( there are entries already in there )
if os.path.exists( memoir_path ):
lines.insert( 0, '--------------------\n' )
#here the file is opened with 'a' = append mode.
#If you want to override the content of the file (for your command 'clear'), use 'w' = write mode.
with open( memoir_path, 'a' ) as f:
f.writelines( lines )
</code></pre>
<p>So you can include the 'clear' command at the end. The 'read' command will rather find its place at the beginning of the program.</p>
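<p>A portable alternative to shelling out with <code>os.system('cat ~/.memoir')</code>, as a minimal sketch assuming the same <code>~/.memoir</code> path, is to read and truncate the file directly in Python:</p>
<pre><code>import os
import sys

memoir_path = os.path.expanduser('~/.memoir')

if len(sys.argv) >= 2:
    if sys.argv[1] == '-r':            # read: print the whole file
        if os.path.exists(memoir_path):
            with open(memoir_path) as f:
                print(f.read())
        sys.exit(0)
    if sys.argv[1] == '-c':            # clear: truncate the file
        open(memoir_path, 'w').close()
        sys.exit(0)
</code></pre>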
| 0 | 2016-09-28T21:48:21Z | [
"python"
]
|
Trying to create a csv using python, and getting "list indices" error | 39,758,022 | <p>In my last question, I asked how to get python to assign a set of values to phrases in a csv file. I was told to create a list of tuples, and this worked great. </p>
<p>Currently my list, called clean_titles looks like this:</p>
<pre><code>[('a', 1),
('a', 1),
('a', 1),
('a', 1),
('a', 1),
('a', 1),
('a', 1),
('a', 1),
('b', 2),
('b', 2),
('b', 2),
('b', 2),
('b', 2),
('b', 2),
('c', 3),
('c', 3),
('c', 3),
('c', 3),
('c', 3)]
</code></pre>
<p>Now I want to take the list and export it as a CSV file. I want the names of the phrases in one column and the assigned numbers in another column.</p>
<pre><code>with open("fiancee_wordfreq.csv" , "wb") as f:
writer = csv.writer(f)
for val in clean_titles:
writer.writerow([val])
</code></pre>
<p>But I keep getting an error message that "list indices must be integers not unicode" </p>
<p>What am I doing wrong, or what am I missing? Thanks for your help. </p>
| 0 | 2016-09-28T21:32:59Z | 39,758,138 | <p>(My answer is for Python3)</p>
<p>You are opening the file in binary ("b") mode, which the Python 3 <code>csv</code> module does not accept. It worked for me when I removed the "b" argument (in Python 3 you would normally also pass <code>newline=''</code> to avoid blank rows on Windows):</p>
<pre><code>with open("testabc.csv" , "w") as f:
writer = csv.writer(f)
for val in clean_titles:
writer.writerow(val)
</code></pre>
<p>Also, you can use the writer's <code>writerows</code> method to be less verbose:</p>
<pre><code>with open("testabc.csv" , "w") as f:
writer = csv.writer(f)
writer.writerows(clean_titles)
</code></pre>
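<p>If the same script must also run on Python 2 (where <code>'wb'</code> is the correct mode for the csv module), a small sketch of a version-aware variant:</p>
<pre><code>import csv
import sys

# Python 3 wants text mode with newline=''; Python 2 wants binary mode
if sys.version_info[0] >= 3:
    f = open("testabc.csv", "w", newline="")
else:
    f = open("testabc.csv", "wb")

with f:
    csv.writer(f).writerows(clean_titles)
</code></pre>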
| 2 | 2016-09-28T21:42:15Z | [
"python",
"csv"
]
|
Cannot connect to remote MongoDB server using flask-mongoengine | 39,758,077 | <p>Trying to connect to a MongoDB cluster hosted on a remote server using flask-mongoengine but the following error is thrown:</p>
<pre><code>File "test.py", line 9, in <module>
inserted = Something(some='whatever').save()
File "/home/lokesh/Desktop/Work/Survaider_Apps/new_survaider/survaider-env/lib/python3.5/site-packages/mongoengine/document.py", line 323, in save
object_id = collection.save(doc, **write_concern)
File "/home/lokesh/Desktop/Work/Survaider_Apps/new_survaider/survaider-env/lib/python3.5/site-packages/pymongo/collection.py", line 2186, in save
with self._socket_for_writes() as sock_info:
File "/usr/lib/python3.5/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/home/lokesh/Desktop/Work/Survaider_Apps/new_survaider/survaider-env/lib/python3.5/site-packages/pymongo/mongo_client.py", line 762, in _get_socket
server = self._get_topology().select_server(selector)
File "/home/lokesh/Desktop/Work/Survaider_Apps/new_survaider/survaider-env/lib/python3.5/site-packages/pymongo/topology.py", line 210, in select_server
address))
File "/home/lokesh/Desktop/Work/Survaider_Apps/new_survaider/survaider-env/lib/python3.5/site-packages/pymongo/topology.py", line 186, in select_servers
self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: admin:27017: [Errno -2] Name or service not known
</code></pre>
<p>Below is the code I am using:</p>
<pre><code># test.py
from my_app_module import app
from flask_mongoengine import MongoEngine
db = MongoEngine(app)
class Something(db.Document):
some = db.StringField()
inserted = Something(some='whatever').save()
print(inserted)
for obj in Something.objects:
print(obj)
</code></pre>
<p>My <code>config.py</code> file contains:</p>
<pre><code># config.py
MONGODB_SETTINGS = {
'db': 'testdb',
'host': 'mongodb://<my_username>:<my_password>@<my_cluster_replica_1>.mongodb.net:27017,<my_cluster_replica_2>.mongodb.net:27017,<my_cluster_replica_3>.mongodb.net:27017/admin?ssl=true&replicaSet=<my_cluster>&authSource=admin',
}
</code></pre>
<p>But I can connect with <code>pymongo</code> using the following code.</p>
<pre><code>from pymongo import MongoClient
uri = 'mongodb://<my_username>:<my_password>@<my_cluster_replica_1>.mongodb.net:27017,<my_cluster_replica_2>.mongodb.net:27017,<my_cluster_replica_3>.mongodb.net:27017/admin?ssl=true&replicaSet=<my_cluster>&authSource=admin'
client = MongoClient(uri)
db = client['testdb']
db.test_collection.insert({'some_key': 'some_value'})
for col in db.test_collection.find():
print(col)
# Prints {'some_key': 'some_value', '_id': ObjectId('57ec35d9312f911329e54d5e')}
</code></pre>
<p>I tried to find a solution but nobody seems to have come across the problem before. I am using MongoDB's Atlas solution to host the MongoDB cluster.</p>
| 0 | 2016-09-28T21:37:09Z | 39,764,540 | <p>I figured out that it's a bug in <a href="https://github.com/MongoEngine/flask-mongoengine" rel="nofollow">flask-mongoengine</a> version <code>0.8</code> and has been reported <a href="https://github.com/MongoEngine/flask-mongoengine/issues/247" rel="nofollow">here</a>.</p>
| 0 | 2016-09-29T07:57:30Z | [
"python",
"mongodb",
"pymongo",
"mongoengine",
"flask-mongoengine"
]
|
Clearing Tensorflow GPU memory after model execution | 39,758,094 | <p>I've trained 3 models and am now running code that loads each of the 3 checkpoints in sequence and runs predictions using them. I'm using the GPU.</p>
<p>When the first model is loaded it pre-allocates the entire GPU memory (which I want for working through the first batch of data). But it doesn't unload memory when it's finished. When the second model is loaded, using both <code>tf.reset_default_graph()</code> and <code>with tf.Graph().as_default()</code> the GPU memory still is fully consumed from the first model, and the second model is then starved of memory.</p>
<p>Is there a way to resolve this, other than using Python subprocesses or multiprocessing to work around the problem (the only solution I've found via Google searches)?</p>
| 0 | 2016-09-28T21:38:34Z | 39,781,178 | <p>GPU memory allocated by tensors is released as soon as the tensor is no longer needed (before the <code>.run</code> call terminates). GPU memory allocated for variables is released when the variable containers are destroyed. In the case of DirectSession (i.e., <code>sess = tf.Session("")</code>) that happens when the session is closed or explicitly reset (added in <a href="https://github.com/tensorflow/tensorflow/commit/62c159ff" rel="nofollow">62c159ff</a>)</p>
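<p>A minimal sketch of that advice using the TF 1.x-era API (the <code>checkpoint_paths</code> list is an assumption standing in for your three checkpoints): give each model its own graph and session, so variable memory is freed when the session closes.</p>
<pre><code>import tensorflow as tf

for checkpoint_path in checkpoint_paths:  # assumed list of checkpoint prefixes
    with tf.Graph().as_default():
        with tf.Session() as sess:
            saver = tf.train.import_meta_graph(checkpoint_path + '.meta')
            saver.restore(sess, checkpoint_path)
            # ... run predictions with sess.run(...) ...
    # leaving the blocks closes the session and frees its variable containers
</code></pre>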
| 0 | 2016-09-29T22:47:09Z | [
"python",
"tensorflow",
"gpu"
]
|
python django app settings log level | 39,758,177 | <p>I am new to python and Django. My changes in app/settings.py are not being picked up. I have changed the log level
FROM:</p>
<pre><code>LOGGING = {
'version': 1,
...
'handlers': {
'mail_admins': {
'level': 'ERROR',
'filters': ['require_debug_false'],
'class': 'django.utils.log.AdminEmailHandler'
}
},
'loggers': {
'django.request': {
'handlers': ['mail_admins'],
'level': 'ERROR',
'propagate': True,
},
}
}
</code></pre>
<p>TO</p>
<pre><code>LOGGING = {
'version': 1,
...
'handlers': {
'file': {
'level': 'DEBUG',
'class': 'logging.FileHandler',
'filename': '/var/log/django/debug.log'
}
},
'loggers': {
'django.request': {
'handlers': ['file'],
'level': 'DEBUG',
'propagate': True,
},
}
}
</code></pre>
<p>but the changes are not being picked up: I don't see any activity in the log despite actions in the app. How can I tell Django that the settings have been updated and reload them?</p>
| 0 | 2016-09-28T21:44:21Z | 39,758,216 | <p>You need to restart the web server for it to pick up changes in your app. (The development server started with <code>manage.py runserver</code> reloads on code changes automatically; a production server such as Apache/mod_wsgi, uWSGI or gunicorn needs an explicit restart.)</p>
| 0 | 2016-09-28T21:48:05Z | [
"python",
"django"
]
|
Using xlwings with excel, which of these two approaches is the quickest/ preferred? | 39,758,226 | <p>I have just started to learn Python and am using xlwings to write to an excel spreadsheet. I am really new to coding (and this is my first question) so this may be a bit of a simple question but any comments would be really appreciated.</p>
<p>I am reading the page source of a website (using selenium and beautiful soup) to get a few pieces of information about a product, such as price and weight. I am then writing these values to cells in excel.</p>
<p>I have two ways of doing this - the first runs a function and then writes the values to excel before moving on to the next function:</p>
<p><em>(these are excerpts of the main script - both ways work ok)</em></p>
<pre><code> while rowNum < lastRow + 1:
urlCellRef = (rowNum, colNum)
url = wb.sheets[0].range(urlCellRef).value
# Parse HTML with beautiful soup
getPageSource()
# Find a product price field value within HTML
getProductPrice()
itemPriceRef = (rowNum, colNum + 1)
# Write price value back to Excel sheet
wb.sheets[0].range(itemPriceRef).value = productPrice
getProductWeight()
itemWeightRef = (rowNum, colNum + 2)
wb.sheets[0].range(itemWeightRef).value = productWeight
getProductQuantity()
itemQuantityRef = (rowNum, colNum + 4)
wb.sheets[0].range(itemQuantityRef).value = productQuantity
getProductCode()
prodCodeRef = (rowNum, colNum + 6)
wb.sheets[0].range(prodCodeRef).value = productCode
rowNum = rowNum + 1
</code></pre>
<p>The second runs all of the functions and then writes each of the stored values to excel in one go:</p>
<pre><code> while rowNum < lastRow + 1:
urlCellRef = (rowNum, colNum)
url = wb.sheets[0].range(urlCellRef).value
getPageSource()
getProductPrice()
getProductWeight()
getProductQuantity()
getProductCode()
itemPriceRef = (rowNum, colNum + 1)
wb.sheets[0].range(itemPriceRef).value = productPrice
itemWeightRef = (rowNum, colNum + 2)
wb.sheets[0].range(itemWeightRef).value = productWeight
itemQuantityRef = (rowNum, colNum + 4)
wb.sheets[0].range(itemQuantityRef).value = productQuantity
prodCodeRef = (rowNum, colNum + 6)
wb.sheets[0].range(prodCodeRef).value = productCode
rowNum = rowNum + 1
</code></pre>
<p>I was wondering, which is the preferred method for doing this? I haven't noticed much of a speed difference but my laptop is pretty slow so if one approach is considered best practice then I would prefer to go with that as I will be increasing the number of urls that will be used.</p>
<p>Many thanks for your help!</p>
| 1 | 2016-09-28T21:48:26Z | 39,758,533 | <p>The overhead of the Excel call reigns supreme. When using XLWings, write to your spreadsheet as infrequently as possible.</p>
<p>I've found rewriting the whole sheet (or the area of the sheet to be changed) using the Range object to be leaps and bounds faster than writing individual cells, rows, or columns. If I'm not doing any heavy data manipulation I just use nested lists. Whether it's better for you to treat the sublists as columns or rows (the transpose option is used for this) depends on how you're handling your data. If you're working with larger datasets or doing more intensive work you may want to use NumPy arrays or pandas.</p>
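<p>A minimal sketch of that block-write idea adapted to the question's loop (the <code>firstRow</code> name and the blank padding columns are assumptions made to match the original cell layout; the getProduct* functions set globals as in the question):</p>
<pre><code>results = []
rowNum = firstRow
while rowNum < lastRow + 1:
    url = wb.sheets[0].range((rowNum, colNum)).value
    getPageSource()
    getProductPrice()
    getProductWeight()
    getProductQuantity()
    getProductCode()
    # one row per URL; empty strings pad the unused columns in between
    results.append([productPrice, productWeight, '', productQuantity, '', productCode])
    rowNum = rowNum + 1

# a single Range assignment writes the whole nested list in one Excel call
wb.sheets[0].range((firstRow, colNum + 1)).value = results
</code></pre>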
| 1 | 2016-09-28T22:17:03Z | [
"python",
"xlwings"
]
|
Merging mutliple pandas dfs time series on DATE index and which are contained in a python dictionary | 39,758,268 | <p>I have a python dictionary that contains CLOSE prices for several stocks, stock indices, fixed income instruments and currencies (AAPL, AORD, etc.), using a DATE index. The different DFs in the dictionary have different lengths, i.e. some time series are longer than others. All the DFs have the same field, ie. 'CLOSE'.</p>
<p>The length of the dictionary is variable. How can I merge all the DFs into a single one, by DATE index, and also using lsuffix = partial name and feature of the file I am reading? (for example, the AAPL_CLOSE.csv file has a DATE & a CLOSE field, but to differentiate from the other 'CLOSE' in the merged DF, its name should be AAPL_CLOSE)</p>
<p>This is what I have:</p>
<pre><code>asset_name = []
files_to_test = glob.glob('*_CLOSE*')
for name in files_to_test:
asset_name.append(name.rsplit('_', 1)[0])
</code></pre>
<p>Which returns:</p>
<pre><code>asset_name = ['AAPL', 'AORD', 'EURODOLLAR1MONTH', 'NGETF', 'USDBRL']
files_to_test = ['AAPL_CLOSE.csv',
'AORD_CLOSE.csv',
'EURODOLLAR1MONTH_CLOSE.csv',
'NGETF_CLOSE.csv',
'USDBRL_CLOSE.csv']
</code></pre>
<p>Then:</p>
<pre><code>asset_dict = {}
for name, file in zip(asset_name, files_to_test):
asset_dict[name] = pd.read_csv(file, index_col = 'DATE', parse_dates = True)
</code></pre>
<p>This is the little function I would like to generalize, to create a big merge of all the DFs in the dictionary by DATE, using lsuffix = the elements in asset_name. </p>
<pre><code>merged = asset_dict['AAPL'].join(asset_dict['AORD'], how = 'right', lsuffix ='_AAPL')
</code></pre>
<p>The DFs will have a lot of N/A due to the mismatch of lengths, but I will deal with that later.</p>
| 0 | 2016-09-28T21:52:01Z | 39,781,422 | <p>After not getting any answers, I found a solution that works, although there might be better ones. This is what I did:</p>
<pre><code>asset_dict = {}
for name, file in zip(asset_name, files_to_test):
asset_dict[name] = pd.read_csv(file, index_col='DATE', parse_dates=True)
asset_dict[name].sort_index(ascending = True, inplace = True)
</code></pre>
<p>Pandas can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow">concatenate</a> multiple DataFrames (at once, not one by one) contained in dictionaries, 'straight out of the box' without much tweaking, by specifying the axis and other parameters.</p>
<pre><code>df = pd.concat(asset_dict, axis = 1)
</code></pre>
<p>The resulting df is a multi-index df, which is a problem for me. Also, the time series for stock prices are all of different lengths, which creates a lot of NaNs. I solved both problems with this:</p>
<pre><code>df.columns = df.columns.droplevel(1)
df.dropna(inplace = True)
</code></pre>
<p>Now, the columns of my df are these:</p>
<pre><code>['AAPL', 'AORD', 'EURODOLLAR1MONTH', 'NGETF', 'USDBRL']
</code></pre>
<p>But since I wanted them to contain the 'STOCK_CLOSE' format, I do this: </p>
<pre><code>old_columns = df.columns.tolist()
new_columns = []
for name in old_columns:
new_name = name + '_CLOSE_'
    new_columns.append(new_name)

df.columns = new_columns  # assign the renamed columns back to the DataFrame
</code></pre>
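<p>The same renaming fits in one line with a list comprehension (a sketch using the same suffix convention):</p>
<pre><code>df.columns = [name + '_CLOSE_' for name in df.columns]
</code></pre>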
| 0 | 2016-09-29T23:15:21Z | [
"python",
"pandas",
"dictionary"
]
|
Pandas Dataframe Data Type Conversion or Isomap Transformation | 39,758,315 | <p>I load images with scipy's misc.imread, which in my case returns a 2304x3 ndarray. Later, I append this array to a list and convert the list to a DataFrame. The purpose of doing so is to later apply an Isomap transform to the DataFrame. My data frame has 84 rows/samples (images in the folder) and 2304 features, where each feature is an array/list of 3 elements. When I try using the Isomap transform I get this error:</p>
<pre><code>ValueError: setting an array element with a sequence.
</code></pre>
<p>I think the error occurs because the elements of my data frame are of object type. First I tried applying to_numeric to each column, but got an error; then I wrote a loop to convert each element to numeric. The results I get are still of object type. Here is my code:</p>
<pre><code>import pandas as pd
from scipy import misc
from mpl_toolkits.mplot3d import Axes3D
import matplotlib
import matplotlib.pyplot as plt
import glob
from sklearn import manifold
samples = []
path = 'Datasets/ALOI/32/*.png'
files = glob.glob(path)
for name in files:
img = misc.imread(name)
img = img[::2, ::2]
x = (img/255.0).reshape(-1,3)
samples.append(x)
df = pd.DataFrame.from_records(samples, coerce_float = True)
for i in range(0,2304):
for j in range(0,84):
df[i][j] = pd.to_numeric(df[i][j], errors = 'coerce')
df[i] = pd.to_numeric(df[i], errors = 'coerce')
print df[2303][83]
print df[2303].dtype
print df[2303][83].dtype
#iso = manifold.Isomap(n_neighbors=6, n_components=3)
#iso.fit(df)
#manifold = iso.transform(df)
#print manifold.shape
</code></pre>
<p>Last four lines commented out because they give an error. The output I get is:</p>
<pre><code>[ 0.05098039 0.05098039 0.05098039]
object
float64
</code></pre>
<p>As you can see, each element of the DataFrame is of type <em>float64</em> but the whole column is an <em>object</em>.</p>
<p>Does anyone know how to convert whole data frame to numeric?</p>
<p>Is there another way of applying Isomap?</p>
| 0 | 2016-09-28T21:56:28Z | 40,058,421 | <p>Do you want to reshape your image to a new shape instead of the original one?</p>
<p>If that is not the case, then you should change the following line in your code:</p>
<pre><code>x = (img/255.0).reshape(-1,3)
</code></pre>
<p>with</p>
<pre><code>x = (img/255.0).reshape(-1)
</code></pre>
<p>Reshaping with <code>reshape(-1)</code> flattens each image into a 1-D vector of plain floats, so every cell of the resulting DataFrame is a scalar and the whole frame is numeric, which is what Isomap expects. Hope this will resolve your issue.</p>
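<p>A minimal sketch of the resulting pipeline (same folder layout as the question; <code>fit_transform</code> combines the commented-out fit and transform steps):</p>
<pre><code>samples = []
for name in files:
    img = misc.imread(name)[::2, ::2]
    samples.append((img / 255.0).reshape(-1))   # 1-D vector of scalars

df = pd.DataFrame(samples)                      # 84 x 6912, all float64
iso = manifold.Isomap(n_neighbors=6, n_components=3)
manifold_out = iso.fit_transform(df)
print manifold_out.shape                        # (84, 3)
</code></pre>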
| 0 | 2016-10-15T11:25:33Z | [
"python",
"pandas",
"dataframe"
]
|
Initial value in form's __init__ for the model with generic relation | 39,758,386 | <p>I have a model with a generic relation like this:</p>
<pre><code> content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE, blank=True, null=True)
object_id = models.PositiveIntegerField(blank=True, null=True)
content_object = GenericForeignKey('content_type', 'object_id')
</code></pre>
<p>To make life easier for the user I have modified the form. The idea was to have one field with choices instead of multiple. For that I have merged the fields into choices inside the form's <code>__init__()</code>.</p>
<pre><code>def __init__(self, *args, **kwargs):
super(AdminTaskForm, self).__init__(*args, **kwargs)
# combine object_type and object_id into a single 'generic_obj' field
# getall the objects that we want the user to be able to choose from
available_objects = list(Event.objects.all())
available_objects += list(Contest.objects.all())
# now create our list of choices for the <select> field
object_choices = []
for obj in available_objects:
type_id = ContentType.objects.get_for_model(obj.__class__).id
obj_id = obj.id
form_value = "type:%s-id:%s" % (type_id, obj_id) # e.g."type:12-id:3"
display_text = str(obj)
object_choices.append([form_value, display_text])
self.fields['content_object'].choices = object_choices
</code></pre>
<p>Till now everything was working fine, but now I have to provide an initial value for the content_object field.</p>
<p>I have added this code to <code>__init__()</code> but it is not working:</p>
<pre><code> initial = kwargs.get('initial')
if initial:
if initial['content_object']:
object = initial['content_object']
object_id = object.id
object_type = ContentType.objects.get_for_model(object).id
form_value = "type:%s-id:%s" % (object_type, object_id)
self.fields['content_object'].initial = form_value
</code></pre>
<p>Any suggestions as to why I cannot set the initial value inside <code>__init__()</code>? Thanks!</p>
<p>P.S. The debug output looks OK to me, but the initial value is not set at all.</p>
<pre><code>print(self.fields['content_object'].choices) --> [['type:32-id:10050', 'Value1'], ['type:32-id:10056', 'Value2']]
print(form_value) --> type:32-id:10056
</code></pre>
| 0 | 2016-09-28T22:03:06Z | 39,758,762 | <p>I have found a nice answer to my question <a href="http://stackoverflow.com/a/11400559/709897">here</a>: </p>
<blockquote>
<p>If you have already called super().__init__ in your Form class, you
should update the form.initial dictionary, not the field.initial
property. If you study form.initial (e.g. print self.initial after the
call to super().__init__()), it will contain values for all the fields.
Having a value of None in that dict will override the field.initial
value</p>
</blockquote>
<p>The solution to the problem was then just adding one additional line:</p>
<pre><code>self.initial['content_object'] = form_value
</code></pre>
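<p>In context, a sketch of where the line lands inside the <code>__init__</code> from the question (using <code>.get()</code> to avoid a <code>KeyError</code> when no initial value is passed):</p>
<pre><code>def __init__(self, *args, **kwargs):
    super(AdminTaskForm, self).__init__(*args, **kwargs)
    # ... build object_choices as shown in the question ...
    self.fields['content_object'].choices = object_choices

    initial = kwargs.get('initial')
    if initial and initial.get('content_object'):
        obj = initial['content_object']
        type_id = ContentType.objects.get_for_model(obj).id
        # update self.initial, not self.fields['content_object'].initial
        self.initial['content_object'] = "type:%s-id:%s" % (type_id, obj.id)
</code></pre>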
| 0 | 2016-09-28T22:43:06Z | [
"python",
"django"
]
|
Python/Psychopy: checking if a point is within a circle | 39,758,412 | <p>I want to know the most efficient way to check if a given point (an eye coordinate) is within a specific region (in this case a circle).</p>
<p>Code:</p>
<pre><code>win = visual.Window([600,600], allowGUI=False)
coordinate = [50,70] #example x and y coordinates
shape = visual.Circle(win, radius=120, units='pix') #shape to check if coordinates are within it
if coordinate in shape:
print "inside"
else:
print "outside"
>>TypeError: argument of type 'Circle' is not iterable
</code></pre>
<p>My x and y coordinates correspond to one point on the window, I need to check if this point falls within the circle whose radius is 120 pixels. </p>
<p>Thanks,
Steve</p>
| 1 | 2016-09-28T22:05:42Z | 39,758,535 | <p>I don't think it needs to be that complicated:</p>
<pre><code>center = (600, 600)   # the circle's centre, in the same coordinates as the point
rad = 120
coordinate = (50, 70)
if (coordinate[0] - center[0])**2 + (coordinate[1] - center[1])**2 < rad**2:
print "inside"
else:
print "outside"
</code></pre>
| 0 | 2016-09-28T22:17:13Z | [
"python",
"psychopy"
]
|
Python/Psychopy: checking if a point is within a circle | 39,758,412 | <p>I want to know the most efficient way to check if a given point (an eye coordinate) is within a specific region (in this case a circle).</p>
<p>Code:</p>
<pre><code>win = visual.Window([600,600], allowGUI=False)
coordinate = [50,70] #example x and y coordinates
shape = visual.Circle(win, radius=120, units='pix') #shape to check if coordinates are within it
if coordinate in shape:
print "inside"
else:
print "outside"
>>TypeError: argument of type 'Circle' is not iterable
</code></pre>
<p>My x and y coordinates correspond to one point on the window, I need to check if this point falls within the circle whose radius is 120 pixels. </p>
<p>Thanks,
Steve</p>
| 1 | 2016-09-28T22:05:42Z | 39,758,660 | <p>PsychoPy's <code>ShapeStim</code> classes have a <code>.contains()</code> method, as per the API:
<a href="http://psychopy.org/api/visual/shapestim.html#psychopy.visual.ShapeStim.contains">http://psychopy.org/api/visual/shapestim.html#psychopy.visual.ShapeStim.contains</a></p>
<p>So your code could simply be:</p>
<pre><code>if shape.contains(coordinate):
print 'inside'
else:
print 'outside'
</code></pre>
<p>Using this method has the advantage that it is a general solution (taking into account the shape of the stimulus vertices) and is not just a check on the pythagorean distance from the stimulus centre (which is a special case for circles only).</p>
| 6 | 2016-09-28T22:31:25Z | [
"python",
"psychopy"
]
|
Socket issue when connecting to IRC | 39,758,442 | <p>I am trying to create a very simple python bot that joins a specific channel and says a random phrase every time someone says anything. I am doing this primarily to learn how sockets and IRC work. I am getting the following error message: </p>
<pre><code> Traceback (most recent call last):
File "D:/Users/Administrator/Documents/Classwork/Dropbox/Bots/Rizon Bot/ViewBot.py", line 36, in <module>
irc.connect((irc_server, 6667))
File "D:\Users\Administrator\Documents\Classwork\Dropbox\Bots\Rizon Bot\socks.py", line 351, in connect
_orgsocket.connect(self,(self.__proxy[1],portnum))
File "C:\Python27\lib\socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
TypeError: an integer is required
Process finished with exit code 1
</code></pre>
<p>When running the following code: </p>
<pre><code>import random
from urlparse import urlparse
import socks
import socket
import time
#---- Definitions ----#
#Proxylist
proxy_list = []
with open("proxy_list.txt") as f:
proxy_list = f.readlines()
#IRC Name List
irc_names = []
with open("irc_names.txt") as f:
irc_names = f.readlines()
#Phrase List
phrase_list = []
with open("phrase_list.txt") as f:
phrase_list = f.readlines()
irc_server = "irc.rizon.net"
irc_port = 6667
irc_channel = '#test12' # Change this to what ever channel you would like
queue = 0
#---- Code ----#
proxy = random.choice(proxy_list)
hostname = proxy.split(":")[0]
port = proxy.split(":")[1]
#irc = socket.socket ( socket.AF_INET, socket.SOCK_STREAM )
irc = socks.socksocket()
irc.setproxy(socks.PROXY_TYPE_HTTP, hostname, port)
irc.connect((irc_server, 6667))
print irc.recv ( 4096 )
irc.send ( 'NICK ' + random.choice(irc_names) + '\r\n' )
irc.send ( 'USER botty botty botty :Sup\r\n' )
irc.send ( 'JOIN ' + irc_channel + '\r\n' )
time.sleep(4)
irc.send('PRIVMSG ' + irc_channel + ' :' + random.choice(phrase_list) + '\r\n')
while True:
data = irc.recv ( 4096 )
print data
if data.find('KICK') != -1:
irc.send('JOIN '+ irc_channel + '\r\n')
if data.find('') != -1: # !test command
irc.send('PRIVMSG ' + irc_channel + ' :' + (random.choice(phrase_list)) + '\r\n')
</code></pre>
<p>I believe the issue is with: </p>
<pre><code>irc.connect((irc_server, 6667))
</code></pre>
<p>My goal is to connect to the IRC server using a proxy loaded from the proxy list. Any ideas? </p>
| 0 | 2016-09-28T22:07:46Z | 39,758,589 | <p>You are determining the proxy port with</p>
<pre><code>port = proxy.split(":")[1]
</code></pre>
<p>You are then passing it to socks:</p>
<pre><code>irc.setproxy(socks.PROXY_TYPE_HTTP, hostname, port)
</code></pre>
<p>But it's still a string. I bet it will work if you change the latter to</p>
<pre><code>irc.setproxy(socks.PROXY_TYPE_HTTP, hostname, int(port))
</code></pre>
<p>The port isn't actually <em>used</em> until you call <code>irc.connect</code>, but the failing value is the proxy port stored by <code>setproxy</code>.</p>
| 1 | 2016-09-28T22:23:24Z | [
"python",
"python-2.7",
"sockets",
"irc"
]
|
Socket issue when connecting to IRC | 39,758,442 | <p>I am trying to create a very simple python bot that joins a specific channel and says a random phrase every time someone says anything. I am doing this primarily to learn how sockets and IRC work. I am getting the following error message: </p>
<pre><code> Traceback (most recent call last):
File "D:/Users/Administrator/Documents/Classwork/Dropbox/Bots/Rizon Bot/ViewBot.py", line 36, in <module>
irc.connect((irc_server, 6667))
File "D:\Users\Administrator\Documents\Classwork\Dropbox\Bots\Rizon Bot\socks.py", line 351, in connect
_orgsocket.connect(self,(self.__proxy[1],portnum))
File "C:\Python27\lib\socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
TypeError: an integer is required
Process finished with exit code 1
</code></pre>
<p>When running the following code: </p>
<pre><code>import random
from urlparse import urlparse
import socks
import socket
import time
#---- Definitions ----#
#Proxylist
proxy_list = []
with open("proxy_list.txt") as f:
proxy_list = f.readlines()
#IRC Name List
irc_names = []
with open("irc_names.txt") as f:
irc_names = f.readlines()
#Phrase List
phrase_list = []
with open("phrase_list.txt") as f:
phrase_list = f.readlines()
irc_server = "irc.rizon.net"
irc_port = 6667
irc_channel = '#test12' # Change this to what ever channel you would like
queue = 0
#---- Code ----#
proxy = random.choice(proxy_list)
hostname = proxy.split(":")[0]
port = proxy.split(":")[1]
#irc = socket.socket ( socket.AF_INET, socket.SOCK_STREAM )
irc = socks.socksocket()
irc.setproxy(socks.PROXY_TYPE_HTTP, hostname, port)
irc.connect((irc_server, 6667))
print irc.recv ( 4096 )
irc.send ( 'NICK ' + random.choice(irc_names) + '\r\n' )
irc.send ( 'USER botty botty botty :Sup\r\n' )
irc.send ( 'JOIN ' + irc_channel + '\r\n' )
time.sleep(4)
irc.send('PRIVMSG ' + irc_channel + ' :' + random.choice(phrase_list) + '\r\n')
while True:
data = irc.recv ( 4096 )
print data
if data.find('KICK') != -1:
irc.send('JOIN '+ irc_channel + '\r\n')
if data.find('') != -1: # !test command
irc.send('PRIVMSG ' + irc_channel + ' :' + (random.choice(phrase_list)) + '\r\n')
</code></pre>
<p>I believe the issue is with: </p>
<pre><code>irc.connect((irc_server, 6667))
</code></pre>
<p>My goal is to connect to the IRC server using a proxy loaded from the proxy list. Any ideas? </p>
| 0 | 2016-09-28T22:07:46Z | 39,758,593 | <p>These lines:</p>
<pre><code>port = proxy.split(":")[1]
irc.setproxy(socks.PROXY_TYPE_HTTP, hostname, port)
</code></pre>
<p>will result in <code>port</code> being a <em>string</em> containing the required port number. An <code>integer</code> is required for the port. Change your code to:</p>
<pre><code>port = int(proxy.split(":")[1])
irc.setproxy(socks.PROXY_TYPE_HTTP, hostname, port)
</code></pre>
<p>which converts the port from a string into an integer as required.</p>
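<p>One more detail worth guarding against (a sketch): <code>readlines()</code> keeps the trailing newline on each proxy entry, so stripping the line before splitting keeps the port value clean:</p>
<pre><code>proxy = random.choice(proxy_list).strip()   # drop the trailing '\n' from readlines()
hostname, port = proxy.split(":")
irc.setproxy(socks.PROXY_TYPE_HTTP, hostname, int(port))
</code></pre>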
| 1 | 2016-09-28T22:24:03Z | [
"python",
"python-2.7",
"sockets",
"irc"
]
|
Normalise between 0 and 1 ignoring NaN | 39,758,449 | <p>For a list of numbers ranging from <code>x</code> to <code>y</code> that may contain <code>NaN</code>, how can I normalise between 0 and 1, ignoring the <code>NaN</code> values (they stay as <code>NaN</code>).</p>
<p>Typically I would use <code>MinMaxScaler</code> (<a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html#sklearn.preprocessing.MinMaxScaler" rel="nofollow">ref page</a>) from <code>sklearn.preprocessing</code>, but this cannot handle <code>NaN</code> and recommends imputing the values based on mean or median etc. it doesn't offer the option to ignore all the <code>NaN</code> values.</p>
| 5 | 2016-09-28T22:08:13Z | 39,758,551 | <p>consider <code>pd.Series</code> <code>s</code></p>
<pre><code>s = pd.Series(np.random.choice([3, 4, 5, 6, np.nan], 100))
s.hist()
</code></pre>
<p><a href="http://i.stack.imgur.com/hzlrF.png" rel="nofollow"><img src="http://i.stack.imgur.com/hzlrF.png" alt="enter image description here"></a></p>
<hr>
<p><strong><em>Option 1</em></strong><br>
Min Max Scaling</p>
<pre><code>new = s.sub(s.min()).div((s.max() - s.min()))
new.hist()
</code></pre>
<p><a href="http://i.stack.imgur.com/ThfOU.png" rel="nofollow"><img src="http://i.stack.imgur.com/ThfOU.png" alt="enter image description here"></a></p>
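<p>Note that pandas <code>min()</code> and <code>max()</code> skip NaN by default, which is why this already leaves NaN values untouched. For a plain NumPy array, the same scaling can use the nan-aware reductions; a small sketch:</p>
<pre><code>a = np.array([3.0, 4.0, np.nan, 6.0])
scaled = (a - np.nanmin(a)) / (np.nanmax(a) - np.nanmin(a))
# NaN stays NaN; every other value lands in [0, 1]
</code></pre>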
<hr>
<p><strong>NOT WHAT OP ASKED FOR</strong><br>
I put these in because I wanted to</p>
<p><strong><em>Option 2</em></strong><br>
sigmoid</p>
<pre><code>sigmoid = lambda x: 1 / (1 + np.exp(-x))
new = sigmoid(s.sub(s.mean()))
new.hist()
</code></pre>
<p><a href="http://i.stack.imgur.com/W4nZv.png" rel="nofollow"><img src="http://i.stack.imgur.com/W4nZv.png" alt="enter image description here"></a></p>
<hr>
<p><strong><em>Option 3</em></strong><br>
tanh (hyperbolic tangent)</p>
<pre><code>new = np.tanh(s.sub(s.mean())).add(1).div(2)
new.hist()
</code></pre>
<p><a href="http://i.stack.imgur.com/7Q0WT.png" rel="nofollow"><img src="http://i.stack.imgur.com/7Q0WT.png" alt="enter image description here"></a></p>
| 3 | 2016-09-28T22:19:29Z | [
"python",
"pandas",
"numpy",
"scikit-learn"
]
|
How do I import a module from within a Pycharm project? | 39,758,606 | <p>In my project, the file structure is as follows:</p>
<p>root/folder1/folder2/script1.py</p>
<p>root/folder1/folder2/script2.py</p>
<p>I have a statement in script2.py that says "import script1", and Pycharm says no module is found. How do I fix this?</p>
| 0 | 2016-09-28T22:25:11Z | 39,758,651 | <p>To import as object : </p>
<p><code>from root.folder1.folder2 import script1</code></p>
<p>To import a function of your script:</p>
<p><code>from root.folder1.folder2.script1 import NameOfTheFunction</code></p>
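<p>If you want the plain <code>import script1</code> to keep working, you can mark <code>folder2</code> as a Sources Root in PyCharm (Mark Directory as > Sources Root), or put the scripts' own directory on <code>sys.path</code> at runtime; a minimal sketch of the latter:</p>
<pre><code># at the top of script2.py, before the import
import os
import sys

sys.path.append(os.path.dirname(os.path.abspath(__file__)))

import script1   # now resolvable regardless of the working directory
</code></pre>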
| 2 | 2016-09-28T22:30:33Z | [
"python",
"pycharm"
]
|
401 error using the Streaming features of Twitter API (through Tweepy) | 39,758,666 | <p>I've got an application which takes advantage of a number of features of the Twitter API. I've tested the application on one Windows 7 system, and all features worked well. </p>
<p>Testing the application on a second Windows 7 system, it seems that everything but the Public Stream and User Stream features is working (i.e. the app managed to authenticate, can follow/unfollow users, etc). On this system, the Stream features produce a <a href="https://dev.twitter.com/overview/api/response-codes" rel="nofollow">401 error</a>. As I understand it, 401 could indicate an authorization error (which isn't happening in this case, since non-streaming features are available), or <a href="https://twittercommunity.com/t/401-fixing-server-time-differences/803/2" rel="nofollow">a difference in time configuration</a> between Twitter's servers, and the client system.</p>
<p>I'd like the streaming features of my app to be available cross platform (Windows, Mac, Unix), and I can't expect end-users to tinker with their system's clock configurations. Can anyone recommend a system-agnostic Tweepy/python-based solution to the 401 error issue under the condition that it's caused by a time-configuration problem? Thanks.</p>
<p><strong>EDIT:</strong></p>
<p>On the system on which the Stream features were not working, after having manually tinkered with the system clock, with no success, I synchronized with time.windows.com. This didn't have any discernible effect on the time that was showing (I didn't have a view of the seconds), but it resolved the problem (i.e. the Twitter User and Public Stream features became available). The question remains - how does one prevent such an error from arising on end users' systems? It's unrealistic for me to warn users to adjust their clocks.</p>
| 0 | 2016-09-28T22:31:49Z | 39,759,792 | <p>Your system clock is probably more than 5 minutes off. </p>
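<p>To detect this from inside the app instead of asking users to fix their clocks, one option (a sketch using the third-party <code>ntplib</code> package) is to measure the local clock's offset against an NTP server before opening the stream:</p>
<pre><code>import ntplib

# offset is the local clock minus NTP time, in seconds
offset = ntplib.NTPClient().request('pool.ntp.org').offset
if abs(offset) > 300:   # OAuth timestamps more than ~5 minutes off are rejected
    print("System clock is off by %.0f seconds; streaming auth will fail" % offset)
</code></pre>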
| 0 | 2016-09-29T01:03:31Z | [
"python",
"twitter",
"tweepy"
]
|
401 error using the Streaming features of Twitter API (through Tweepy) | 39,758,666 | <p>I've got an application which takes advantage of a number of features of the Twitter API. I've tested the application on one Windows 7 system, and all features worked well. </p>
<p>Testing the application on a second Windows 7 system, it seems that everything but the Public Stream and User Stream features is working (i.e. the app managed to authenticate, can follow/unfollow users, etc). On this system, the Stream features produce a <a href="https://dev.twitter.com/overview/api/response-codes" rel="nofollow">401 error</a>. As I understand it, 401 could indicate an authorization error (which isn't happening in this case, since non-streaming features are available), or <a href="https://twittercommunity.com/t/401-fixing-server-time-differences/803/2" rel="nofollow">a difference in time configuration</a> between Twitter's servers, and the client system.</p>
<p>I'd like the streaming features of my app to be available cross platform (Windows, Mac, Unix), and I can't expect end-users to tinker with their system's clock configurations. Can anyone recommend a system-agnostic Tweepy/python-based solution to the 401 error issue under the condition that it's caused by a time-configuration problem? Thanks.</p>
<p><strong>EDIT:</strong></p>
<p>On the system on which the Stream features were not working, after having manually tinkered with the system clock, with no success, I synchronized with time.windows.com. This didn't have any discernible effect on the time that was showing (I didn't have a view of the seconds), but it resolved the problem (i.e. the Twitter User and Public Stream features became available). The question remains - how does one prevent such an error from arising on end users' systems? It's unrealistic for me to warn users to adjust their clocks.</p>
| 0 | 2016-09-28T22:31:49Z | 39,760,653 | <p>401 is an Authorization Error. Did you log out of Twitter accidentally? Try logging in again. Here are some links for reference:</p>
<p><a href="https://dev.twitter.com/overview/api/response-codes" rel="nofollow">Twitter Error Code (401)</a></p>
<p><a href="http://pcsupport.about.com/od/findbyerrormessage/a/401error.htm" rel="nofollow">General Error Code (401)</a></p>
<p>P.S. I'm not sure if Twitter supports the "Stream Key" thing but if it does it could also be that. Since you need to have the correct Stream Key to edit/broadcast the livestream. On YouTube you have to have a Stream Key in order to edit or end the broadcast.</p>
| -1 | 2016-09-29T02:55:05Z | [
"python",
"twitter",
"tweepy"
]
|
Python - Filter a dict JSON response to send back only two values or convert to a string? | 39,758,835 | <p>I am manipulating a URL in Python that calls an API and gets a valid result in JSON.</p>
<p>I only need the 'latitude' and 'longitude' provided by [result]</p>
<p>But, I cannot seem to find any good way to handle sending dict values back.</p>
<p>I've tried to convert the JSON to a string, but as my end goal is to append the latitude and longitude to another URL, this seems silly?</p>
<p>Have been searching around, and seems parsing DICT a common issue.</p>
<p>Can anyone point me in the right direction?</p>
<p>My code (that ran)</p>
<pre><code>import urllib2
import json
Postcode = raw_input("Hello hello hello, what's your postcode..? ")
PostcodeURL = 'https://api.postcodes.io/postcodes/' + Postcode
json_str = urllib2.urlopen(PostcodeURL).read()
d = json.loads(json_str)
print(d)
</code></pre>
<p>The output looks like this:</p>
<pre><code>{u'status': 200, u'result': {u'eastings': 531025, u'outcode': u'N8', u'admin_county': None, u'postcode'
</code></pre>
| 2 | 2016-09-28T22:51:52Z | 39,758,883 | <p>You got a nested dictionary back, so access its values using the right keys:</p>
<pre><code>d = json.loads(json_str)
lat, lon = d['result']['latitude'], d['result']['longitude']
newurl = url + '?lat=' + str(lat) + '&lon=' + str(lon)  # url is whatever base URL you are appending to
</code></pre>
| 0 | 2016-09-28T22:57:13Z | [
"python",
"json",
"string",
"dictionary",
"filter"
]
|
Python - Filter a dict JSON response to send back only two values or convert to a string? | 39,758,835 | <p>I am manipulating a URL in Python that calls an API and gets a valid result in JSON.</p>
<p>I only need the 'latitude' and 'longitude' provided by [result]</p>
<p>But, I cannot seem to find any good way to handle sending dict values back.</p>
<p>I've tried to convert the JSON to a string, but as my end goal is to append the latitude and longitude to another URL, this seems silly?</p>
<p>Have been searching around, and seems parsing DICT a common issue.</p>
<p>Can anyone point me in the right direction?</p>
<p>My code (that ran)</p>
<pre><code>import urllib2
import json
Postcode = raw_input("Hello hello hello, what's your postcode..? ")
PostcodeURL = 'https://api.postcodes.io/postcodes/' + Postcode
json_str = urllib2.urlopen(PostcodeURL).read()
d = json.loads(json_str)
print(d)
</code></pre>
<p>The output looks like this:</p>
<pre><code>{u'status': 200, u'result': {u'eastings': 531025, u'outcode': u'N8', u'admin_county': None, u'postcode'
</code></pre>
| 2 | 2016-09-28T22:51:52Z | 39,758,899 | <p>You can access your JSON object exactly like a dictionary. So to get the latitude and longitude from your result, you can use:</p>
<p><code>lat, lon = d['result']['latitude'], d['result']['longitude']</code></p>
<p>Convert these to strings, do your error checking, and then append them to your new URL.</p>
| -1 | 2016-09-28T22:59:02Z | [
"python",
"json",
"string",
"dictionary",
"filter"
]
|
Django Template Loader not reaching app template | 39,758,854 | <p>According to Django documentation</p>
<p>"Your projectâs TEMPLATES setting describes how Django will load and render templates. The default settings file configures a DjangoTemplates backend whose APP_DIRS option is set to True. By convention DjangoTemplates looks for a âtemplatesâ subdirectory in each of the INSTALLED_APPS."</p>
<p>My Mezzanine settings.py has this configuration</p>
<pre><code>TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
</code></pre>
<p>My directory structure is</p>
<pre><code>project
app_name
templates
app_name
index.html
</code></pre>
<p>But the template loader stops at </p>
<pre><code>project
app_name
templates
</code></pre>
<p>As such my app_name template index.html is not reached. What can be the problem?</p>
| 1 | 2016-09-28T22:54:31Z | 39,759,201 | <p>Then you could reach your template with 'app_name/index.html': the app loader searches each app's templates/ directory, so template names are given relative to it, and the extra app_name folder is the namespace that prevents collisions between apps.</p>
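<p>A minimal sketch of referencing the namespaced template from a view (the view and context names are illustrative):</p>
<pre><code>from django.shortcuts import render

def index(request):
    # the path is relative to app_name/templates/, hence the app_name/ prefix
    return render(request, 'app_name/index.html', {})
</code></pre>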
| 0 | 2016-09-28T23:37:09Z | [
"python",
"django",
"django-templates",
"mezzanine"
]
|
Django Template Loader not reaching app template | 39,758,854 | <p>According to Django documentation</p>
<p>"Your projectâs TEMPLATES setting describes how Django will load and render templates. The default settings file configures a DjangoTemplates backend whose APP_DIRS option is set to True. By convention DjangoTemplates looks for a âtemplatesâ subdirectory in each of the INSTALLED_APPS."</p>
<p>My Mezzanine settings.py has this configuration</p>
<pre><code>TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
</code></pre>
<p>My directory structure is</p>
<pre><code>project
app_name
templates
app_name
index.html
</code></pre>
<p>But the template loader stops at </p>
<pre><code>project
app_name
templates
</code></pre>
<p>As such my app_name template index.html is not reached. What can be the problem?</p>
| 1 | 2016-09-28T22:54:31Z | 39,760,450 | <p>Did you try changing your template DIRS in <code>settings.py</code>, like this:</p>
<pre><code>'DIRS': ["templates"],
</code></pre>
<p>And does your <code>views.py</code> pass the specific location of the template? E.g.</p>
<pre><code>return render(request, 'foldername/templatename.html', {})
</code></pre>
| 0 | 2016-09-29T02:31:57Z | [
"python",
"django",
"django-templates",
"mezzanine"
]
|
Why is my protected loop erroring? | 39,758,857 | <p>I have this script</p>
<pre><code>for line in file:
while line[i] == " ":
if i == len(line):
break
i += 1
if i == len(line):
pass
while not line[i] == " ":
if i == len(line):
break
obj += line[i]
i += 1
print obj
</code></pre>
<p>As of now, <code>file</code> equals <code>["clear", "exit"]</code> and <code>i</code> equals <code>0</code>.</p>
<p>When I run this script, it errors like this</p>
<pre><code>Traceback (most recent call last):
File "/home/ubuntu/workspace/lib/source.py", line 8, in <module>
while not line[i] == " ":
IndexError: string index out of range
</code></pre>
<p>I'm pretty sure my loop is protected right, and it should break before this happens. If this is the case, then why is it happening?</p>
| 2 | 2016-09-28T22:54:48Z | 39,758,890 | <p>Your loop is not "safe": <code>break</code> only exits the <strong>inner</strong> loop. So once <code>i == len(line)</code> you break out of the first <code>while</code>, stay inside the main <code>for</code>, and then evaluate <code>line[i]</code> in the <code>while not line[i] == " "</code> condition, which indexes past the end of <code>line</code> (the condition is checked before the loop body's guard can run).</p>
<pre><code>for line in file:
while line[i] == " ":
if i == len(line):
break # THIS executes, thus i == len(line)
i += 1
if i == len(line): # this is true, thus nothing happens, pass is no-op
pass
while not line[i] == " ": # this executes, and fails since i == len(line)
if i == len(line):
break
obj += line[i]
i += 1
print obj
</code></pre>
| 1 | 2016-09-28T22:57:54Z | [
"python",
"while-loop"
]
|
Why is my protected loop erroring? | 39,758,857 | <p>I have this script</p>
<pre><code>for line in file:
while line[i] == " ":
if i == len(line):
break
i += 1
if i == len(line):
pass
while not line[i] == " ":
if i == len(line):
break
obj += line[i]
i += 1
print obj
</code></pre>
<p>As of now, <code>file</code> equals <code>["clear", "exit"]</code> and <code>i</code> equals <code>0</code>.</p>
<p>When I run this script, it errors like this</p>
<pre><code>Traceback (most recent call last):
File "/home/ubuntu/workspace/lib/source.py", line 8, in <module>
while not line[i] == " ":
IndexError: string index out of range
</code></pre>
<p>I'm pretty sure my loop is protected right, and it should break before this happens. If this is the case, then why is it happening?</p>
| 2 | 2016-09-28T22:54:48Z | 39,758,893 | <blockquote>
<p>my loop is protected</p>
</blockquote>
<p>No it isn't. The condition on the <code>while</code> has to be evaluated first before reaching the body of the loop where the <em>protection</em> <code>if</code> condition is.</p>
<p>To be sure <code>i</code> does not exceed the length of the list, move the condition on the <code>if</code> and make it the first condition on the <code>while</code>:</p>
<pre><code>while i < len(line) and not line[i] == " ":
obj += line[i]
i += 1
</code></pre>
<hr>
<p>Otherwise you could move the <code>if</code> block after the update of <code>i</code> so the condition is evaluated before that of the next iteration of <code>while</code>:</p>
<pre><code>while not line[i] == " ":
obj += line[i]
i += 1
if i == len(line):
break
</code></pre>
| 1 | 2016-09-28T22:57:57Z | [
"python",
"while-loop"
]
|