title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
Is regex the best way to extract data from log
| 39,565,603 |
<p>I've got a file full of logs and I'm trying to extract some data from them. A log line looks like:</p>
<pre><code>IP_adress - - [Date_time] "method" response_nb time "page" "UA" "IP_adress"
</code></pre>
<p>I want to extract the IP_adress and UA fields.
Is using a regex a good idea for extracting data from these logs, or is there some other way to do it properly?</p>
| -1 |
2016-09-19T04:52:36Z
| 39,565,616 |
<p>Just split the string and get last two elements.</p>
<pre><code>>>> line = 'IP_adress - - [Date_time] "method" response_nb time "page" "UA" "IP_adress"'
>>> tmp_list = line.split()  # avoid naming the variable 'str', which shadows the built-in
>>> tmp_list
['IP_adress', '-', '-', '[Date_time]', '"method"', 'response_nb', 'time', '"page"', '"UA"', '"IP_adress"']
>>> tmp_list[-1]
'"IP_adress"'
>>> tmp_list[-2]
'"UA"'
</code></pre>
<p>If the first IP address is required:</p>
<pre><code>>>> tmp_list[0]
'IP_adress'
</code></pre>
<p>To strip the double quotes from the last IP address:</p>
<pre><code>>>> tmp_list[-1].replace('"', '')
'IP_adress'
</code></pre>
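<p>For comparison, the same extraction can be sketched with a regex. This is only a sketch assuming the format shown in the question; the sample line and group layout are illustrative:</p>

```python
import re

# A hypothetical line following the question's format:
# IP - - [date] "method" response_nb time "page" "UA" "IP"
line = ('93.184.216.34 - - [19/Sep/2016:04:52:36] "GET /index.html" '
        '200 0.05 "/page" "Mozilla/5.0" "10.0.0.1"')

# Capture the leading IP plus the last two quoted fields (UA and trailing IP).
pattern = re.compile(r'^(\S+) .* "([^"]*)" "([^"]*)"$')
match = pattern.match(line)
ip, ua, last_ip = match.groups()
```

<p>A regex is stricter than <code>split()</code> (it won't be fooled by extra spaces inside quoted fields), at the cost of being harder to read.</p>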
| 2 |
2016-09-19T04:55:03Z
|
[
"python",
"regex",
"python-2.7"
] |
Simple Python web crawler
| 39,565,730 |
<p>I'm following a Python tutorial on YouTube and got up to where we make a basic web crawler. I tried making my own to do a very simple task: go to my city's car section on Craigslist, print the title/link of every entry, then jump to the next page and repeat if needed. It works for the first page, but won't continue to change pages and get the data. Can someone help explain what's wrong?</p>
<pre><code>import requests
from bs4 import BeautifulSoup

def widow(max_pages):
    page = 0  # craigslist starts at page 0
    while page <= max_pages:
        url = 'http://orlando.craigslist.org/search/cto?s=' + str(page)  # craigslist search url + current page number
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, 'lxml')  # my computer yelled at me if 'lxml' wasn't included. your mileage may vary
        for link in soup.findAll('a', {'class': 'hdrlnk'}):
            href = 'http://orlando.craigslist.org' + link.get('href')  # href = /cto/'number'.html
            title = link.string
            print(title)
            print(href)
            page += 100  # craigslist pages go 0, 100, 200, etc

widow(0)  # 0 gets the first page, replace with multiples of 100 for extra pages
</code></pre>
| 0 |
2016-09-19T05:06:48Z
| 39,565,808 |
<p>It looks like you have a problem with your indentation: you need to do
<code>page += 100</code> in the main <code>while</code> block and <strong>not</strong> inside the <code>for</code> loop.</p>
<pre><code>def widow(max_pages):
    page = 0  # craigslist starts at page 0
    while page <= max_pages:
        url = 'http://orlando.craigslist.org/search/cto?s=' + str(page)  # craigslist search url + current page number
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, 'lxml')  # my computer yelled at me if 'lxml' wasn't included. your mileage may vary
        for link in soup.findAll('a', {'class': 'hdrlnk'}):
            href = 'http://orlando.craigslist.org' + link.get('href')  # href = /cto/'number'.html
            title = link.string
            print(title)
            print(href)
        page += 100  # craigslist pages go 0, 100, 200, etc
</code></pre>
| 1 |
2016-09-19T05:15:10Z
|
[
"python",
"web-crawler"
] |
Select specific fields in Django get_object_or_404
| 39,565,749 |
<p>I have a model in Django with too many fields.
Ex:</p>
<pre><code>class MyModel(models.Model):
    param_1 = models.CharField(max_length=100)
    ...
    param_25 = models.CharField(max_length=100)
</code></pre>
<p>Now I need to get the detail view based on an id. I have seen many methods, like:</p>
<pre><code>obj = MyModel.objects.get(pk=5)
obj = MyModel.objects.filter(pk=5)[0]
obj = get_object_or_404(MyModel, pk=1)
</code></pre>
<p>The last method suits best, as it provides a 404 error without any extra code. But I only need param_1 and param_2, hence I need a query similar to:</p>
<pre><code>SELECT "param_1" FROM mymodel WHERE pk=1
</code></pre>
<p>How can this be done using get_object_or_404?</p>
<p>Can someone help me find a solution for this?</p>
| 0 |
2016-09-19T05:09:04Z
| 39,565,807 |
<p>The first argument to <code>get_object_or_404</code> is the model class, and all other arguments are parameters that will be passed to <code>get</code>. So it cannot limit the selected fields directly, but it's quite simple to mimic its functionality.</p>
<pre><code>from django.http import Http404
from django.shortcuts import _get_queryset  # internal helper used by the stock shortcut

def get_object_or_404_custom(klass, fields=None, *args, **kwargs):
    queryset = _get_queryset(klass)
    try:
        if fields:
            queryset = queryset.only(*fields)
        return queryset.get(*args, **kwargs)
    except AttributeError:
        klass__name = klass.__name__ if isinstance(klass, type) else klass.__class__.__name__
        raise ValueError(
            "First argument to get_object_or_404() must be a Model, Manager, "
            "or QuerySet, not '%s'." % klass__name
        )
    except queryset.model.DoesNotExist:
        raise Http404('No %s matches the given query.' % queryset.model._meta.object_name)
</code></pre>
<p>Based on : <a href="https://docs.djangoproject.com/en/1.10/_modules/django/shortcuts/#get_object_or_404" rel="nofollow">https://docs.djangoproject.com/en/1.10/_modules/django/shortcuts/#get_object_or_404</a></p>
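<p>The shortcut above relies on a common Python pattern: the wrapper consumes its own keyword argument (<code>fields</code>) and forwards everything else untouched via <code>*args, **kwargs</code>. A standalone sketch of that pattern, with no Django required and all names invented for illustration:</p>

```python
def lookup_or_none(records, fields=None, **filters):
    """Consume `fields` locally, forward the rest as lookup filters."""
    for record in records:
        if all(record.get(key) == value for key, value in filters.items()):
            # Mimic QuerySet.only(): trim the result to the requested fields.
            if fields:
                return {key: record[key] for key in fields}
            return record
    return None

rows = [{'pk': 1, 'param_1': 'a', 'param_2': 'b'},
        {'pk': 2, 'param_1': 'c', 'param_2': 'd'}]
```

<p>Calling <code>lookup_or_none(rows, fields=['param_1'], pk=2)</code> mirrors the shape of <code>get_object_or_404_custom(MyModel, fields=['param_1'], pk=2)</code>.</p>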
| 1 |
2016-09-19T05:15:09Z
|
[
"python",
"sql",
"django",
"model"
] |
Select specific fields in Django get_object_or_404
| 39,565,749 |
<p>I have a model in Django with too many fields.
Ex:</p>
<pre><code>class MyModel(models.Model):
    param_1 = models.CharField(max_length=100)
    ...
    param_25 = models.CharField(max_length=100)
</code></pre>
<p>Now I need to get the detail view based on an id. I have seen many methods, like:</p>
<pre><code>obj = MyModel.objects.get(pk=5)
obj = MyModel.objects.filter(pk=5)[0]
obj = get_object_or_404(MyModel, pk=1)
</code></pre>
<p>The last method suits best, as it provides a 404 error without any extra code. But I only need param_1 and param_2, hence I need a query similar to:</p>
<pre><code>SELECT "param_1" FROM mymodel WHERE pk=1
</code></pre>
<p>How can this be done using get_object_or_404?</p>
<p>Can someone help me find a solution for this?</p>
| 0 |
2016-09-19T05:09:04Z
| 39,565,956 |
<p>The first argument to <code>get_object_or_404</code> can be <a href="https://docs.djangoproject.com/en/1.10/topics/http/shortcuts/#django.shortcuts.get_object_or_404" rel="nofollow">a Model, a Manager or a QuerySet</a>:</p>
<blockquote>
<p>Required arguments</p>
<p><strong>klass</strong></p>
<p>A Model class, a Manager, or a QuerySet instance from which to
get the object.</p>
</blockquote>
<p>So, all you have to do is pass in a pre-filtered QuerySet, such as the one returned by <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#only" rel="nofollow"><code>only</code></a>:</p>
<pre><code>obj = get_object_or_404(MyModel.objects.only('param_1', 'param_2'), pk=1)
</code></pre>
| 3 |
2016-09-19T05:29:15Z
|
[
"python",
"sql",
"django",
"model"
] |
zshell pip can't find package autobahn[serialization]
| 39,565,867 |
<p>I was trying to install the <code>serialization</code> extra of autobahn. However, when I do that in <code>zsh</code>, I get an error.</p>
<p><code>zsh: no matches found: autobahn[serialization]</code></p>
<p>However, as soon as I use <code>bash</code>, it works. Below is my command line log:</p>
<pre><code>kapil@kapil-linux ~ [23:59:19]
> $ pip install autobahn[serialization]
zsh: no matches found: autobahn[serialization]
kapil@kapil-linux ~ [23:59:25]
> $ bash
[kapil@kapil-linux ~]$ pip install autobahn[serialization]
Collecting autobahn[serialization]
Using cached autobahn-0.16.0-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): six>=1.10.0 in ./anaconda2/lib/python2.7/site-packages (from autobahn[serialization])
Requirement already satisfied (use --upgrade to upgrade): txaio>=2.5.1 in ./anaconda2/lib/python2.7/site-packages (from autobahn[serialization])
Requirement already satisfied (use --upgrade to upgrade): u-msgpack-python>=2.1; extra == "serialization" in ./anaconda2/lib/python2.7/site-packages (from autobahn[serialization])
Requirement already satisfied (use --upgrade to upgrade): py-ubjson>=0.8.4; extra == "serialization" in ./anaconda2/lib/python2.7/site-packages (from autobahn[serialization])
Requirement already satisfied (use --upgrade to upgrade): cbor>=1.0.0; extra == "serialization" in ./anaconda2/lib/python2.7/site-packages (from autobahn[serialization])
Installing collected packages: autobahn
Successfully installed autobahn-0.16.0
[kapil@kapil-linux ~]$ pip install --upgrade --force-reinstall autobahn[serialization]
Collecting autobahn[serialization]
Using cached autobahn-0.16.0-py2.py3-none-any.whl
Collecting six>=1.10.0 (from autobahn[serialization])
Using cached six-1.10.0-py2.py3-none-any.whl
Collecting txaio>=2.5.1 (from autobahn[serialization])
Using cached txaio-2.5.1-py2.py3-none-any.whl
Collecting u-msgpack-python>=2.1; extra == "serialization" (from autobahn[serialization])
Collecting py-ubjson>=0.8.4; extra == "serialization" (from autobahn[serialization])
Collecting cbor>=1.0.0; extra == "serialization" (from autobahn[serialization])
Installing collected packages: six, txaio, u-msgpack-python, py-ubjson, cbor, autobahn
Found existing installation: six 1.10.0
Uninstalling six-1.10.0:
Successfully uninstalled six-1.10.0
Found existing installation: txaio 2.5.1
Uninstalling txaio-2.5.1:
Successfully uninstalled txaio-2.5.1
Found existing installation: u-msgpack-python 2.1
Uninstalling u-msgpack-python-2.1:
Successfully uninstalled u-msgpack-python-2.1
Found existing installation: py-ubjson 0.8.5
Uninstalling py-ubjson-0.8.5:
Successfully uninstalled py-ubjson-0.8.5
Found existing installation: cbor 1.0.0
Uninstalling cbor-1.0.0:
Successfully uninstalled cbor-1.0.0
Found existing installation: autobahn 0.16.0
Uninstalling autobahn-0.16.0:
Successfully uninstalled autobahn-0.16.0
Successfully installed autobahn-0.16.0 cbor-1.0.0 py-ubjson-0.8.5 six-1.10.0 txaio-2.5.1 u-msgpack-python-2.1
[kapil@kapil-linux ~]$ exit
exit
kapil@kapil-linux ~ [0:00:27]
> $ pip install autobahn[serialization]
zsh: no matches found: autobahn[serialization]
</code></pre>
<p>I don't understand what might be going on with zsh.</p>
<p>Also, here is the output of my <code>which pip</code>:</p>
<pre><code>> $ which pip
~/anaconda2/bin/pip
kapil@kapil-linux ~ [0:18:24]
> $ bash
[kapil@kapil-linux ~]$ which pip
~/anaconda2/bin/pip
[kapil@kapil-linux ~]$
</code></pre>
| 0 |
2016-09-19T05:20:47Z
| 39,653,120 |
<p>Square brackets are globbing characters in zsh; you can escape them with a backslash:</p>
<pre><code>pip install autobahn\[serialization\]
</code></pre>
<p>Quoting the argument also works (<code>pip install 'autobahn[serialization]'</code>), or you can <a href="http://kinopyo.com/en/blog/escape-square-bracket-by-default-in-zsh" rel="nofollow">escape square brackets by default in zsh</a>.</p>
| 0 |
2016-09-23T05:21:20Z
|
[
"python",
"pip",
"zsh",
"autobahn",
"msgpack"
] |
How can I make a python thread count down and then perform an action?
| 39,565,933 |
<p>I have a script that accesses the api for Telegram bots, but it unfortunately can only process one message at a time.</p>
<p>My end goal is to have it start a timer thread when someone starts a game, and after a certain time (if they haven't already won) reset the game so another user in the group can play (I set it up to allow only one game at a time to avoid confusion).</p>
<p>So for example:</p>
<p>With a word unscramble game I tried:</p>
<pre><code>import time
import telepot

def handle(msg):
    message_text = msg['text']
    message_location_id = msg['chat']['id']
    global game_active
    global word_to_unscramble
    if message_text == '/newgame':
        game_active = True
        <game function, defines word_to_unscramble>
        time.sleep(30)
        if game_active:
            game_active = False
            bot.sendMessage(message_location_id, 'Sorry the answer was ' + word_to_unscramble)
            word_to_unscramble = None
    if message_text == word_to_unscramble:
        game_active = False
        bot.sendMessage(message_location_id, 'You win!')
    # I added this part in as an echo, just to see when it processed the message
    if 'text' in msg:
        bot.sendMessage(message_location_id, message_text)

game_active = False
word_to_unscramble = None
bot = telepot.Bot('<my api token>')
bot.message_loop(handle)
</code></pre>
<p>With this code, however, it would receive and process the first message, then wait 30 seconds, send the failure message, and only then process the second one.</p>
<p>I'm not too familiar with the process of threading, so is there a way I can set it up to start a new thread to handle the countdown timer so it can continue to process messages or is that all a lost cause?</p>
<p>If there's not a way to set up a timer using my current method of accessing the telegram api, what would the smarter way be?</p>
| 0 |
2016-09-19T05:27:37Z
| 39,566,098 |
<p>I haven't worked with python-telegram-bot yet, but for timers in Python the <code>sched</code> module gives you scheduling abilities in your code. Since <code>multithreading</code> makes more sense in your app, look at the <code>threading.Timer</code> class.</p>
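<p>A minimal runnable sketch of the <code>threading.Timer</code> idea mentioned above. The 0.1-second delay and the <code>end_game</code> name are illustrative; a real bot would use 30 seconds, and could call <code>timer.cancel()</code> when the player wins early:</p>

```python
import threading

game_active = True

def end_game():
    # Runs on the timer's own thread once the delay expires,
    # so the main thread stays free to keep handling messages.
    global game_active
    if game_active:
        game_active = False

timer = threading.Timer(0.1, end_game)  # would be 30.0 in the real bot
timer.start()

# ... the main thread keeps processing incoming messages here ...
timer.join()  # demo only: wait so we can observe the result
```

<p>Unlike <code>time.sleep(30)</code> inside the handler, the timer does not block the message loop.</p>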
| 0 |
2016-09-19T05:44:24Z
|
[
"python",
"multithreading",
"python-telegram-bot"
] |
Script to print names in Python modules
| 39,565,937 |
<p>The following script isn't printing the names contained within each Python module in the list. When I run it, each dir(mod) command returns the same list of names. It's like the 'mod' variable isn't being understood by dir. The for loop doesn't appear to be the problem. Any ideas on how to fix it?</p>
<pre><code>#!/usr/bin/python
# Print out names in modules
# https://docs.python.org/2/py-modindex.html
import os, sys, re, subprocess, platform, shutil, argparse, test, xml, time, urllib2, getopt

def print_modules(module_list):
    for mod in module_list:
        print '------------'
        print mod
        print '------------'
        print dir(mod)
        print

# Use split() on a string to create a list (the lazy way!)
module_list = 'os sys list __builtins__ re subprocess platform shutil argparse test xml time urllib2 getopt'.split()
print type(module_list)
print_modules(module_list)
</code></pre>
| 0 |
2016-09-19T05:27:45Z
| 39,566,153 |
<p>The problem is that you are passing the module name to the <code>dir</code> function as a string variable.
Since the module name is a string, <code>dir</code> gives output for the string object, not the module.</p>
<pre><code>>>> mod = 'os'
>>> dir(mod)
['__add__', '__class__', '__contains__', '__delattr__', '__doc__', '__eq__',
 '__format__', '__ge__', '__getattribute__', '__getitem__', '__getnewargs__',
 '__getslice__', '__gt__', '__hash__', '__init__', '__le__', '__len__',
 '__lt__', '__mod__', '__mul__', '__ne__', '__new__', '__reduce__',
 '__reduce_ex__', '__repr__', '__rmod__', '__rmul__', '__setattr__',
 '__sizeof__', '__str__', '__subclasshook__', '_formatter_field_name_split',
 '_formatter_parser', 'capitalize', 'center', 'count', 'decode', 'encode',
 'endswith', 'expandtabs', 'find', 'format', 'index', 'isalnum', 'isalpha',
 'isdigit', 'islower', 'isspace', 'istitle', 'isupper', 'join', 'ljust',
 'lower', 'lstrip', 'partition', 'replace', 'rfind', 'rindex', 'rjust',
 'rpartition', 'rsplit', 'rstrip', 'split', 'splitlines', 'startswith',
 'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill']
>>> type(mod)
<type 'str'>
>>> import os
>>> type(os)
<type 'module'>
>>> dir(os)
['F_OK', 'O_APPEND', 'O_BINARY', 'O_CREAT', 'O_EXCL', 'O_NOINHERIT',
 'O_RANDOM', 'O_RDONLY', 'O_RDWR', 'O_SEQUENTIAL', 'O_SHORT_LIVED',
 'O_TEMPORARY', 'O_TEXT', 'O_TRUNC', 'O_WRONLY', 'P_DETACH', 'P_NOWAIT',
 'P_NOWAITO', 'P_OVERLAY', 'P_WAIT', 'R_OK', 'SEEK_CUR', 'SEEK_END',
 'SEEK_SET', 'TMP_MAX', 'UserDict', 'W_OK', 'X_OK', '_Environ', '__all__',
 '__builtins__', '__doc__', '__file__', '__name__', '__package__',
 '_copy_reg', '_execvpe', '_exists', '_exit', '_get_exports_list',
 '_make_stat_result', '_make_statvfs_result', '_pickle_stat_result',
 '_pickle_statvfs_result', 'abort', 'access', 'altsep', 'chdir', 'chmod',
 'close', 'closerange', 'curdir', 'defpath', 'devnull', 'dup', 'dup2',
 'environ', 'errno', 'error', 'execl', 'execle', 'execlp', 'execlpe',
 'execv', 'execve', 'execvp', 'execvpe', 'extsep', 'fdopen', 'fstat',
 'fsync', 'getcwd', 'getcwdu', 'getenv', 'getpid', 'isatty', 'kill',
 'linesep', 'listdir', 'lseek', 'lstat', 'makedirs', 'mkdir', 'name',
 'open', 'pardir', 'path', 'pathsep', 'pipe', 'popen', 'popen2', 'popen3',
 'popen4', 'putenv', 'read', 'remove', 'removedirs', 'rename', 'renames',
 'rmdir', 'sep', 'spawnl', 'spawnle', 'spawnv', 'spawnve', 'startfile',
 'stat', 'stat_float_times', 'stat_result', 'statvfs_result', 'strerror',
 'sys', 'system', 'tempnam', 'times', 'tmpfile', 'tmpnam', 'umask',
 'unlink', 'unsetenv', 'urandom', 'utime', 'waitpid', 'walk', 'write']
</code></pre>
<p>After getting the module name as a string, we need to resolve it to an actual module object.
For that we can use the <code>importlib</code> module, whose <code>import_module</code> function converts a name into a module.</p>
<p><strong>Code :</strong></p>
<pre><code>#!/usr/bin/python
# Print out names in modules
# https://docs.python.org/2/py-modindex.html
import os, sys, re, subprocess, platform, shutil, argparse, test, xml, time, urllib2, getopt
import importlib

def print_modules(module_list):
    for mod in module_list:
        module = importlib.import_module(mod, package=None)
        print '------------'
        print mod
        print '------------'
        print dir(module)
        print

# Use split() on a string to create a list (the lazy way!)
module_list = 'os sys'.split()
print type(module_list)
print_modules(module_list)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>C:\Users\dinesh_pundkar\Desktop>python c.py
<type 'list'>
------------
os
------------
['F_OK', 'O_APPEND', 'O_BINARY', 'O_CREAT', 'O_EXCL', 'O_NOINHERIT',
 'O_RANDOM', 'O_RDONLY', 'O_RDWR', 'O_SEQUENTIAL', 'O_SHORT_LIVED',
 'O_TEMPORARY', 'O_TEXT', 'O_TRUNC', 'O_WRONLY', 'P_DETACH', 'P_NOWAIT',
 'P_NOWAITO', 'P_OVERLAY', 'P_WAIT', 'R_OK', 'SEEK_CUR', 'SEEK_END',
 'SEEK_SET', 'TMP_MAX', 'UserDict', 'W_OK', 'X_OK', '_Environ', '__all__',
 '__builtins__', '__doc__', '__file__', '__name__', '__package__',
 '_copy_reg', '_execvpe', '_exists', '_exit', '_get_exports_list',
 '_make_stat_result', '_make_statvfs_result', '_pickle_stat_result',
 '_pickle_statvfs_result', 'abort', 'access', 'altsep', 'chdir', 'chmod',
 'close', 'closerange', 'curdir', 'defpath', 'devnull', 'dup', 'dup2',
 'environ', 'errno', 'error', 'execl', 'execle', 'execlp', 'execlpe',
 'execv', 'execve', 'execvp', 'execvpe', 'extsep', 'fdopen', 'fstat',
 'fsync', 'getcwd', 'getcwdu', 'getenv', 'getpid', 'isatty', 'kill',
 'linesep', 'listdir', 'lseek', 'lstat', 'makedirs', 'mkdir', 'name',
 'open', 'pardir', 'path', 'pathsep', 'pipe', 'popen', 'popen2', 'popen3',
 'popen4', 'putenv', 'read', 'remove', 'removedirs', 'rename', 'renames',
 'rmdir', 'sep', 'spawnl', 'spawnle', 'spawnv', 'spawnve', 'startfile',
 'stat', 'stat_float_times', 'stat_result', 'statvfs_result', 'strerror',
 'sys', 'system', 'tempnam', 'times', 'tmpfile', 'tmpnam', 'umask',
 'unlink', 'unsetenv', 'urandom', 'utime', 'waitpid', 'walk', 'write']
------------
sys
------------
['__displayhook__', '__doc__', '__excepthook__', '__name__', '__package__',
 '__stderr__', '__stdin__', '__stdout__', '_clear_type_cache',
 '_current_frames', '_getframe', '_mercurial', 'api_version', 'argv',
 'builtin_module_names', 'byteorder', 'call_tracing', 'callstats',
 'copyright', 'displayhook', 'dllhandle', 'dont_write_bytecode',
 'exc_clear', 'exc_info', 'exc_type', 'excepthook', 'exec_prefix',
 'executable', 'exit', 'flags', 'float_info', 'float_repr_style',
 'getcheckinterval', 'getdefaultencoding', 'getfilesystemencoding',
 'getprofile', 'getrecursionlimit', 'getrefcount', 'getsizeof', 'gettrace',
 'getwindowsversion', 'hexversion', 'long_info', 'maxint', 'maxsize',
 'maxunicode', 'meta_path', 'modules', 'path', 'path_hooks',
 'path_importer_cache', 'platform', 'prefix', 'py3kwarning',
 'setcheckinterval', 'setprofile', 'setrecursionlimit', 'settrace',
 'stderr', 'stdin', 'stdout', 'subversion', 'version', 'version_info',
 'warnoptions', 'winver']

C:\Users\dinesh_pundkar\Desktop>
</code></pre>
| 1 |
2016-09-19T05:50:56Z
|
[
"python",
"module",
"directory",
"names"
] |
Django ModelAdmin custom method obj parameter
| 39,566,087 |
<p>In the Django documentation
(<a href="https://docs.djangoproject.com/en/1.10/ref/contrib/admin/" rel="nofollow">Django admin reference</a>)
there is the following code.</p>
<pre><code>from django.contrib import admin

class AuthorAdmin(admin.ModelAdmin):
    fields = ('name', 'title', 'view_birth_date')

    def view_birth_date(self, obj):
        return obj.birth_date

    view_birth_date.empty_value_display = '???'
</code></pre>
<p>I don't understand, in the custom method view_birth_date(self, <strong><em>obj</em></strong>), where this obj parameter comes from.</p>
<p>Note in the last line, it called this function, </p>
<pre><code>view_birth_date.empty_value_display = '???'
</code></pre>
<p>but did not pass any parameter for obj. I don't understand how obj got a value.</p>
<p>Thanks!</p>
| 0 |
2016-09-19T05:43:08Z
| 39,566,129 |
<blockquote>
<p>I don't understand in the custom method <code>view_birth_date(self, obj)</code> where this <code>obj</code> parameter came from?</p>
</blockquote>
<p>This method is called by Django's machinery and <code>obj</code> is passed also by it. It's just a predefined interface.</p>
<blockquote>
<p><code>view_birth_date.empty_value_display = '???'</code>
but did not pass any parameter for obj. I don't understand where how <code>obj</code> got a value.</p>
</blockquote>
<p>This line is not a call at all, so no <code>obj</code> is involved. Again, according to the interface, when Django looks at your method (a function object) it checks for an <code>empty_value_display</code> attribute to find out what value the developer expects to see when the current value is empty.</p>
<p>Yes, it looks weird, but it's just how Django works; its creators chose this interface, and you just have to use the docs to learn it.</p>
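<p>The last line of the admin snippet can be reproduced in plain Python: a function is an object and can carry attributes, so assigning <code>empty_value_display</code> attaches data to the function without calling it. Only an actual call supplies <code>obj</code> (the <code>FakeAuthor</code> class below is invented for the demo):</p>

```python
def view_birth_date(self, obj):
    return obj.birth_date

# Attribute assignment on the function object -- not a call.
view_birth_date.empty_value_display = '???'

# Only when Django (or anyone else) actually calls the method
# does obj receive a value:
class FakeAuthor(object):
    birth_date = '1821-11-11'

result = view_birth_date(None, FakeAuthor())
```

<p>Django's admin does the same thing: it calls your method once per row, passing each model instance as <code>obj</code>, and separately reads the attribute you set.</p>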
| 2 |
2016-09-19T05:48:18Z
|
[
"python",
"django"
] |
SoftLayer API: How to get the blockDevice information from a image template?
| 39,566,113 |
<p><a href="http://i.stack.imgur.com/dSjqz.png" rel="nofollow"><img src="http://i.stack.imgur.com/dSjqz.png" alt="enter image description here"></a></p>
<p>I have an image template as shown in the picture. I want to get the disk space and the virtual disks list as marked in the figure. I called <code>http://sldn.softlayer.com/zh/reference/services/SoftLayer_Virtual_Guest_Block_Device_Template_Group/getObject</code> to get the image template, with the mask set to <code>mask[id,accountId,name,globalIdentifier,blockDevices[device,diskImageId,diskSpace,groupId,id,units],parentId,createDate,datacenter,imageType,storageRepository,datacenters]</code>.</p>
<p>The <code>blockDevices</code> mask is set, but the result is as follows:</p>
<pre><code>{'accountId': xxxxxxx,
'blockDevices': [],
'createDate': '2016-09-18T07:16:57-05:00',
'datacenters': [{'id': xx4092,
'longName': 'Singapore 1',
'name': 'sng01',
'statusId': 2}],
'globalIdentifier': 'xxxxxxxx-b068-40b1-8377-9ab66df80131',
'id': 1331697,
'imageType': {'description': 'a disk that may be replaced on upgrade',
'keyName': 'SYSTEM',
'name': 'System'},
'name': 'xxx-test-all-disk',
'parentId': ''}
</code></pre>
<p>The <code>blockDevices</code> item is an empty array. Why?</p>
<p>Is there any API that can help me get the image disk space and virtual disk info?</p>
| 0 |
2016-09-19T05:46:46Z
| 39,566,776 |
<p>Please try the following <code>mask</code> (this is a REST example):</p>
<pre><code>https://[username]:[apikey]@api.softlayer.com/rest/v3/SoftLayer_Virtual_Guest_Block_Device_Template_Group/[imageId]/getObject?objectMask=mask[id,name,children[id,name,blockDevices[diskSpace,units,diskImage[localDiskFlag]]]]

Method: GET
</code></pre>
<p>The <code>children</code> items contain the information that you want. Currently, you are trying to get it from the <code>parent</code> item.</p>
<p>I hope it helps.</p>
| 0 |
2016-09-19T06:37:28Z
|
[
"python",
"api",
"softlayer"
] |
Keras shape of features for training
| 39,566,123 |
<p>I'm trying to train a neural network with Keras' <code>train_on_batch</code> function. I have 39 features and want a batch to contain 32 samples, so I have a list of 32 numpy arrays for every training iteration.</p>
<p>So here is my code (here every batch_x is a list of 32 numpy arrays, each containing 39 features):</p>
<pre><code>input_shape = (39,)
model = Sequential()
model.add(Dense(39, input_shape=input_shape))  # shown here is only the first layer
...

for batch_x, batch_y in train_gen:
    model.train_on_batch(batch_x, batch_y)
</code></pre>
<p>But suddenly I got an error:</p>
<pre><code>Exception: Error when checking model input: the list of Numpy arrays
that you are passing to your model is not the size the model expected.
Expected to see 1 arrays but instead got the following list of 32 arrays:
</code></pre>
<p>I'm not really sure what's wrong.</p>
<p>P.S. I also tried different <code>input_shape</code> such as (32, 39), (39, 32) etc.</p>
| 0 |
2016-09-19T05:47:46Z
| 39,568,178 |
<p>You don't want 32 arrays of size 39, you want one array of shape (32, 39).</p>
<p>So keep <code>input_shape=(39,)</code> (in Keras the batch dimension is implicit, which lets the batch size vary freely) and change batch_x to be a numpy array of shape (32, 39).</p>
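<p>A sketch of that conversion (the random data here just stands in for real features):</p>

```python
import numpy as np

# What the question builds: a list of 32 separate (39,) arrays.
batch_list = [np.random.randn(39).astype('float32') for _ in range(32)]

# What Keras expects: one (32, 39) array.
batch_x = np.asarray(batch_list)
```

<p><code>np.asarray</code> stacks the list along a new leading batch axis, so no other change to the training loop is needed.</p>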
| 3 |
2016-09-19T08:04:46Z
|
[
"python",
"python-2.7",
"machine-learning",
"keras",
"training-data"
] |
Keras shape of features for training
| 39,566,123 |
<p>I'm trying to train a neural network with Keras' <code>train_on_batch</code> function. I have 39 features and want a batch to contain 32 samples, so I have a list of 32 numpy arrays for every training iteration.</p>
<p>So here is my code (here every batch_x is a list of 32 numpy arrays, each containing 39 features):</p>
<pre><code>input_shape = (39,)
model = Sequential()
model.add(Dense(39, input_shape=input_shape))  # shown here is only the first layer
...

for batch_x, batch_y in train_gen:
    model.train_on_batch(batch_x, batch_y)
</code></pre>
<p>But suddenly I got an error:</p>
<pre><code>Exception: Error when checking model input: the list of Numpy arrays
that you are passing to your model is not the size the model expected.
Expected to see 1 arrays but instead got the following list of 32 arrays:
</code></pre>
<p>I'm not really sure what's wrong.</p>
<p>P.S. I also tried different <code>input_shape</code> such as (32, 39), (39, 32) etc.</p>
| 0 |
2016-09-19T05:47:46Z
| 39,602,126 |
<p>In Keras, the <strong>output</strong> not the <strong>input</strong> dimension is the first arg. The <a href="http://keras.io" rel="nofollow">Keras docs</a> front-page example is pretty clear:</p>
<pre><code>model.add(Dense(output_dim=64, input_dim=100))
</code></pre>
<p>Adjusting that example to match what I guess are your requirements:</p>
<pre><code>model.add(Dense(output_dim=39, input_dim=39))
</code></pre>
<p>In your code, the first positional arg in your <code>Dense</code> layer is <code>39</code> which sets the <strong>output</strong> to be 39-D, not the input, as you probably assumed. You said that you had 39 input features. That first layer (in my attempt to duplicate what you were intending) doesn't do any compression or feature extraction from your 39-dimension input feature vectors.</p>
<p>Why don't you just set the dimensions of your input and output arrays for each layer (as in the example) and leave <code>input_shape</code> alone? Just reshape your inputs (and labels) to match the default assumptions. Also, you might try running the basic <code>fit</code> method on your input data set (or some portion of it) before moving on to more complicated arrangements, like manually training in batches as you've done.</p>
<p>Here's an example for a toy problem with your feature dimension:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np

X = np.random.randn(1000, 39)
y = np.array([X[i, 7:13].sum() for i in range(X.shape[0])])

nn = Sequential()
nn.add(Dense(output_dim=1, input_dim=39))
nn.compile('sgd', 'mse')
nn.fit(X, y, nb_epoch=10)
</code></pre>
<p>Which gives:</p>
<pre><code>Epoch 1/10
1000/1000 [==============================] - 0s - loss: 4.6266
...
Epoch 10/10
1000/1000 [==============================] - 0s - loss: 1.4048e-04
</code></pre>
| 1 |
2016-09-20T19:15:56Z
|
[
"python",
"python-2.7",
"machine-learning",
"keras",
"training-data"
] |
Correct way to extend AbstractUser in Django?
| 39,566,144 |
<p>I'm trying to integrate two Django apps where each had its own auth working. To do that, I'm trying to subclass AbstractUser instead of User. I'm following the <a href="https://pybbm.readthedocs.io/en/latest/customuser.html" rel="nofollow">PyBB docs</a> and <a href="https://docs.djangoproject.com/en/1.9/topics/auth/customizing/#substituting-a-custom-user-model" rel="nofollow">Django#substituting_custom_model</a>. I've removed all migration files in all my apps apart from their individual <code>__init__.py</code> (including the migrations from the PyBB library sitting in my site-packages). I've also changed the MySQL database to a blank one to start afresh, and I'm trying to subclass AbstractUser as shown below.</p>
<p>My Models.py:</p>
<pre><code>from django.contrib.auth.models import User
from django.contrib.auth.models import AbstractUser
from django.db import models

class Student_User(models.Model):
    """
    Table to store accounts
    """
    su_student = models.OneToOneField(AbstractUser)
    USERNAME_FIELD = 'su_student'
    su_type = models.PositiveSmallIntegerField(db_column='su_type', default=0)
    su_access = models.TextField(db_column='su_access', default='')
    su_packs = models.TextField(db_column='su_packs', default='')
    REQUIRED_FIELDS = []

    def __unicode__(self):
        return str(self.su_student)
</code></pre>
<p>My settings.py:</p>
<pre><code>AUTH_USER_MODEL = "app.Student_User"
PYBB_PROFILE_RELATED_NAME = 'pybb_profile'
</code></pre>
<p>When running makemigrations for my primary app, I get this error:</p>
<pre><code>app.Student_User.su_student: (fields.E300) Field defines a relation with model 'AbstractUser', which is either not installed, or is abstract.
</code></pre>
<p>How do I achieve what I am trying to do here? </p>
<p>PS: The app was working fine with onetoone with User without username_field or required_field.</p>
<p>PPS: I just checked the AbstractUser model in my contrib.auth.models and it has <code>class Meta: abstract = True</code>. OK, so it's abstract; still, how do I resolve this? I just need one login. Currently, two parts of my site, although connected through urls, ask for separate logins and don't detect each other's logins. What do I need to do for this?</p>
| 0 |
2016-09-19T05:50:04Z
| 39,567,793 |
<p>You can't have a one-to-one relationship with an abstract model; by definition, an abstract model is never actually instantiated.</p>
<p>AbstractUser is supposed to be <em>inherited</em>. Your structure should be:</p>
<pre><code>class Student_User(AbstractUser):
    ...
</code></pre>
| 0 |
2016-09-19T07:41:11Z
|
[
"python",
"mysql",
"django",
"authorization",
"django-authentication"
] |
float precision in numpy arrays differ from their elements
| 39,566,194 |
<p>I have values of <code>numpy.float32</code> type, and a numpy array with <code>dtype="float32"</code>.</p>
<p>When outputting the representation of the values directly using the individual array element references, I get a difference in precision than when I output the representation of the array object itself.</p>
<p>The ensuing calculations are also a bit off, so the latter value seems to be the one used in subsequent vector/matrix arithmetic.</p>
<p>Why is there a difference, and must these precision "anomalies" be handled manually?</p>
<pre><code>>>> a = math.pi / 2
>>> a
1.5707963267948966
>>> elementx = numpy.float32(math.cos(a))
>>> arrayx = numpy.array([elementx], dtype="float32")
>>> elementx
6.1232343e-17
>>> arrayx
array([ 6.12323426e-17], dtype=float32)
>>> t = numpy.float32(3.0)
>>> t * elementx
1.8369703e-16
>>> t * arrayx
array([ 1.83697028e-16], dtype=float32)
</code></pre>
<p>(Python 3.5.2, GCC 5.4.0, linux 4.4.0-21-generic)</p>
| 0 |
2016-09-19T05:54:13Z
| 39,566,846 |
<p>These are just differences in the string representation, the values are the same and calculations are not off.</p>
<pre><code>>>> (t * elementx) == (t * arrayx)[0]
True
</code></pre>
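<p>To convince yourself that only the string formatting differs, you can compare the raw bytes of the scalar and the array element directly:</p>

```python
import numpy as np

x = np.float32(6.1232343e-17)
arr = np.array([x], dtype="float32")

# the stored bit patterns are identical; only the repr formatting differs
assert x.tobytes() == arr[0].tobytes()
assert x == arr[0]
```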
| 2 |
2016-09-19T06:42:02Z
|
[
"python",
"numpy",
"floating-point",
"precision"
] |
Is there a way to bind a session object to MetaData.create_all method in SQLAlchemy?
| 39,566,286 |
<p>I have an application that creates tables on demand using <strong>SQLAlchemy</strong>. More precisely <strong>Flask-SQLAlchemy</strong> and <strong>PostgreSQL</strong> as database. </p>
<p>To do that, <strong>(1)</strong> I create a PostgreSQL schema to hold the new tables:</p>
<pre><code># extra checks on the schema name before I execute it.
statement = 'CREATE SCHEMA IF NOT EXISTS {}'.format(schema_name)
database.session.execute(statement)
database.session.commit()
</code></pre>
<p>After that, <strong>(2)</strong> I change PostgreSQL <code>search_path</code> value to the schema that I have created.</p>
<p><strong>(3)</strong> and then I make a list of tables that I want to create in the database and pass it to <code>create_all</code> MetaData's method:</p>
<pre><code>metadata.create_all(database.engine, tables=list_of_tables)
</code></pre>
<p><strong>SQLAlchemy</strong> makes queries to check if those tables already exist (on the new schema) and then sends <code>CREATE TABLE</code> statements to the database.</p>
<p>The tables are created correctly, in the desired schema and everything works Ok.</p>
<p>My problem starts when I wrap all those tasks in a <strong>nested transaction</strong> (using PostgreSQL SAVEPOINTs) for <strong>testing purpose</strong>, in order to rollback everything in the current session in the end of the tests. I am using the example in <a href="http://docs.sqlalchemy.org/en/latest/orm/session_transaction.html#joining-a-session-into-an-external-transaction-such-as-for-test-suites" rel="nofollow">Supporting Tests with Rollbacks</a>, from the SQLAlchemy documentation.</p>
<p>The schema creation happens inside the nested transaction, but <code>MetaData.create_all</code> does its work in another transaction, and is unable to find the new schema in the database, because the new schema is only alive in the wrapped session and not physically created in the database.</p>
<p>This make the tests fail with <code>(psycopg2.ProgrammingError) no schema has been selected to create in ...</code></p>
<p>The solution I was thinking of is to create the tables one by one using the wrapped session, or to figure out how to bind <code>create_all</code> to the wrapped session.</p>
<h2>Update</h2>
<p>To clarify more my question, as I say in the top, the application must create tables in the database on <strong>demand</strong> inside a new schema on the go. This means that I can't set my declarative table with a fixed schema. Because we don't know what will be the schema name, and in consequence, to what schema the table will belong.</p>
| 0 |
2016-09-19T06:01:30Z
| 39,571,930 |
<p><a href="http://docs.sqlalchemy.org/en/latest/orm/session_transaction.html#joining-a-session-into-an-external-transaction-such-as-for-test-suites" rel="nofollow">"Joining a Session into an External Transaction (such as for test suites)"</a> from the SQLAlchemy docs is a good starting point in this case. I would refactor your approach a bit though: create the schema in a transaction of a connection acquired from your <code>engine</code>. Then join the test session to said connection, perform your tests and rollback. Here's a quick example:</p>
<pre><code>In [2]: engine
Out[2]: Engine(postgresql://baz@localhost/sopython)
In [3]: conn = engine.connect()
In [4]: trans = conn.begin()
In [5]: class Foo(Base):
...: __tablename__ = 'foo'
...: __table_args__ = {'schema': 'bar'}
...: id = Column(Integer, primary_key=True)
In [7]: from sqlalchemy.schema import CreateSchema
In [8]: conn.execute(CreateSchema('bar'))
Out[8]: <sqlalchemy.engine.result.ResultProxy at 0x7f8fd4084d68>
In [9]: Base.metadata.create_all(conn) # Explicitly pass `conn` as bind!
In [10]: session = Session(bind=conn) # This here joins the `session` to the
...: # external transaction.
...:
In [11]: session.query(Foo).all()
Out[11]: []
In [12]: trans.rollback() # Undo everything.
In [13]: session.query(Foo).all() # Table should not exist anymore.
---------------------------------------------------------------------------
ProgrammingError Traceback (most recent call last)
...
ProgrammingError: (psycopg2.ProgrammingError) relation "bar.foo" does not exist
LINE 2: FROM bar.foo
^
[SQL: 'SELECT bar.foo.id AS bar_foo_id \nFROM bar.foo']
In [14]: Base.metadata.create_all() # Uses metadata.engine implicitly,
...: # acquires a new connection etc, but
...: # the schema is now gone.
...:
---------------------------------------------------------------------------
ProgrammingError Traceback (most recent call last)
...
ProgrammingError: (psycopg2.ProgrammingError) schema "bar" does not exist
[SQL: '\nCREATE TABLE bar.foo (\n\tid SERIAL NOT NULL, \n\tPRIMARY KEY (id)\n)\n\n']
</code></pre>
<p>The session is clever enough to use nested transactions (savepoints) automatically when it is passed an existing connection with an open transaction as bind. If your session has to rollback in a test, see the topic "Supporting Tests with Rollbacks" at the bottom of the linked documentation.</p>
<p>Since you're using <a href="/questions/tagged/flask" class="post-tag" title="show questions tagged 'flask'" rel="tag">flask</a> and <a href="/questions/tagged/flask-sqlalchemy" class="post-tag" title="show questions tagged 'flask-sqlalchemy'" rel="tag">flask-sqlalchemy</a>, you may have to adapt this a bit to fit your testing environment. <a href="http://alexmic.net/flask-sqlalchemy-pytest/" rel="nofollow">This post</a> by Alex Michael has an example for flask-sqlalchemy and <a href="http://doc.pytest.org/en/latest/" rel="nofollow">pytest</a>. The gist of it is to create a new joined session during setup:</p>
<pre><code>connection = db.engine.connect()
transaction = connection.begin()
options = dict(bind=connection, binds={})
session = db.create_scoped_session(options=options)
db.session = session
</code></pre>
<p>and to do the required rollbacks, closing etc. during teardown:</p>
<pre><code>def teardown():
transaction.rollback()
connection.close()
session.remove()
</code></pre>
| 1 |
2016-09-19T11:18:55Z
|
[
"python",
"postgresql",
"sqlalchemy"
] |
Derive extra column from Pandas dataframe columns
| 39,566,311 |
<p>Given a dataframe, how to add an extra column which is derived from the columns in the dataframe i.e.</p>
<pre><code>data = {'date': ['2016-01-01', '2016-01-01', '2016-01-02'],
'number': [10, 21, 20],
'location': ['CA', 'NY', 'NJ']
}
print pd.DataFrame(data)
location number date
0 CA 10 2016-01-01
1 NY 21 2016-01-01
2 NJ 20 2016-01-02
</code></pre>
<p>I want to generate an extra column from <code>location</code> and <code>date</code>, i.e. get the date and then generate key/value pairs for <code>extra_column</code>, where the key is <code>date + i</code> and the value is some random string, with <code>i = random.randint(1,3)</code>.</p>
<pre><code> location number date extra_column
0 CA 10 2016-01-01 {{2016-01-01, CA}, {2016-01-02, something}, {2016-01-03, something else}}
1 NY 21 2016-01-01 {{2016-01-01, NY}, {2016-01-02, someplace}}
2 NJ 20 2016-01-02 {{2016-01-02, NJ}, {2016-01-03, anything}}
</code></pre>
| 0 |
2016-09-19T06:04:07Z
| 39,567,872 |
<p>You can write a function to do the manipulation with the current columns and just add the column to the <code>DataFrame</code>. See the code below:</p>
<pre><code>import pandas as pd
data = {'date': ['2016-01-01', '2016-01-01', '2016-01-02'],
'number': [10, 21, 20],
'location': ['CA', 'NY', 'NJ']
}
df = pd.DataFrame(data)
def somefunc(dates, locations):
    # example logic: pair each date with its location
    # (replace this with whatever derivation you need)
    return [{d: loc} for d, loc in zip(dates, locations)]
date_vals = df['date'].values
loc_vals = df['location'].values
new_col_vals = somefunc(date_vals, loc_vals)
# add the column by doing the following
df['new_col'] = new_col_vals
</code></pre>
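<p>If the goal is specifically the dict-per-row structure described in the question, a row-wise sketch also works. (The follow-up date arithmetic and the <code>'something'</code> placeholder values here are assumptions for illustration.)</p>

```python
import random
from datetime import datetime, timedelta

import pandas as pd

df = pd.DataFrame({'date': ['2016-01-01', '2016-01-01', '2016-01-02'],
                   'number': [10, 21, 20],
                   'location': ['CA', 'NY', 'NJ']})

def make_extra(row):
    # start with the row's own date/location pair
    base = datetime.strptime(row['date'], '%Y-%m-%d')
    extra = {row['date']: row['location']}
    # add i follow-up dates with placeholder values, i = random.randint(1, 3)
    for i in range(1, random.randint(1, 3) + 1):
        key = (base + timedelta(days=i)).strftime('%Y-%m-%d')
        extra[key] = 'something'
    return extra

df['extra_column'] = [make_extra(row) for _, row in df.iterrows()]
```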
<p>Hope it helps.</p>
| 1 |
2016-09-19T07:45:50Z
|
[
"python",
"pandas"
] |
Forbid Python from writing anything to disk
| 39,566,470 |
<p>Are there any command-line options or configurations that forbid Python from writing to disk?</p>
<p>I know I can hack <code>open</code> but it doesn't sound very safe.</p>
<hr>
<p>I've hosted some Python tutorials I wrote myself on my website for friends who want to learn Python, and I want them to have access to a Python console so they can try as they learn. This is done by creating a Python subprocess from the http server.</p>
<p>However, I do not want them to accidentally or intentionally damage my server, so I need to forbid the Python process from writing anything to disk.</p>
<p>Also I'm running the server on Ubuntu Linux so doing it Python-wise or system-wise are both OK.</p>
| 2 |
2016-09-19T06:15:13Z
| 39,566,721 |
<p>I doubt there's a way to do this in the interpreter itself: there are way too many things to patch (<code>open</code>, <code>subprocess</code>, <code>os.system</code>, <code>file</code>, and probably others). I'd suggest looking into a way of containerizing the python runtime via something like Docker. The containerization gives some guarantees restricting access, though not as much as virtualization. See <a href="http://security.stackexchange.com/questions/107850/docker-as-a-sandbox-for-untrusted-code">here</a> for more discussion about the security implications.</p>
<p>Running a jupyter/ipython notebook in the docker container would probably be the easiest way to expose a web-frontend. <code>jupyter</code> provides a collection of docker containers for this purpose: see <a href="https://github.com/jupyter/tmpnb">https://github.com/jupyter/tmpnb</a> and <a href="https://github.com/jupyter/docker-stacks">https://github.com/jupyter/docker-stacks</a></p>
| 5 |
2016-09-19T06:33:34Z
|
[
"python",
"linux"
] |
Fastest way to insert a 2D array into a (larger) 2D array
| 39,566,474 |
<p>Say there's two 2D arrays, <code>a</code> and <code>b</code></p>
<pre><code>import numpy as np
a = np.random.rand(3, 4)
b = np.zeros((8, 8))
</code></pre>
<p>and <code>b</code> is always larger than <code>a</code> over both axes.</p>
<p><em>(Edit: <code>b</code> is initialized as an array of zeros, to reflect the fact that all elements not occupied by <code>a</code> will remain zero.)</em></p>
<p><strong>Question</strong>. What's the <em>fastest</em> or <em>most Pythonic</em> way to "insert" <code>a</code> into <code>b</code>?</p>
<p>So far I've tried 2 things:</p>
<ol>
<li>Using <code>np.pad</code> to "turn" <code>a</code> into an array of shape <code>(8, 8)</code></li>
<li>Loop through every row in <code>a</code> and place it in the corresponding row in <code>b</code></li>
</ol>
<p>What I <strong>haven't</strong> tried is using a doubly-nested loop to iterate over every element of <code>a</code>, since I think that's not performance-friendly.</p>
<p><strong>Motivation</strong>. Each array <code>a</code> is a tiny character, and I'd like to feed each character to a neural net that accepts <em>flattened</em> arrays of shape <code>(8, 8)</code>, i.e. arrays of shape <code>(64,)</code>. (I <em>think</em> I can't simply flatten <code>a</code> to one dimension and pad it with zeros because then its two-dimensional structure gets warped, so, rather, I must first "reshape" it into <code>(8, 8)</code>, right?) There are a few million characters.</p>
| 0 |
2016-09-19T06:15:28Z
| 39,568,478 |
<p>What about:</p>
<pre><code>b[:a.shape[0],:a.shape[1]] = a
</code></pre>
<p>Note I assumed <code>a</code> is to be placed at the begining of <code>b</code> but you could refine it a bit to put <code>a</code> anywhere:</p>
<pre><code>a0,a1=1,1
b[a0:a0+a.shape[0],a1:a1+a.shape[1]] = a
</code></pre>
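<p>A quick self-contained check of the placement, using the array sizes from the question:</p>

```python
import numpy as np

a = np.random.rand(3, 4)
b = np.zeros((8, 8))

a0, a1 = 1, 1
b[a0:a0 + a.shape[0], a1:a1 + a.shape[1]] = a

# a occupies rows 1..3 and columns 1..4; everything else stays zero
assert np.array_equal(b[1:4, 1:5], a)
assert not b[0].any() and not b[:, 0].any()
```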
| 2 |
2016-09-19T08:23:26Z
|
[
"python",
"numpy"
] |
Fastest way to insert a 2D array into a (larger) 2D array
| 39,566,474 |
<p>Say there's two 2D arrays, <code>a</code> and <code>b</code></p>
<pre><code>import numpy as np
a = np.random.rand(3, 4)
b = np.zeros((8, 8))
</code></pre>
<p>and <code>b</code> is always larger than <code>a</code> over both axes.</p>
<p><em>(Edit: <code>b</code> is initialized as an array of zeros, to reflect the fact that all elements not occupied by <code>a</code> will remain zero.)</em></p>
<p><strong>Question</strong>. What's the <em>fastest</em> or <em>most Pythonic</em> way to "insert" <code>a</code> into <code>b</code>?</p>
<p>So far I've tried 2 things:</p>
<ol>
<li>Using <code>np.pad</code> to "turn" <code>a</code> into an array of shape <code>(8, 8)</code></li>
<li>Loop through every row in <code>a</code> and place it in the corresponding row in <code>b</code></li>
</ol>
<p>What I <strong>haven't</strong> tried is using a doubly-nested loop to iterate over every element of <code>a</code>, since I think that's not performance-friendly.</p>
<p><strong>Motivation</strong>. Each array <code>a</code> is a tiny character, and I'd like to feed each character to a neural net that accepts <em>flattened</em> arrays of shape <code>(8, 8)</code>, i.e. arrays of shape <code>(64,)</code>. (I <em>think</em> I can't simply flatten <code>a</code> to one dimension and pad it with zeros because then its two-dimensional structure gets warped, so, rather, I must first "reshape" it into <code>(8, 8)</code>, right?) There are a few million characters.</p>
| 0 |
2016-09-19T06:15:28Z
| 39,568,653 |
<p>More generally you can determine the position where you want to insert the array if you create a tuple of slices (this works for arbitrary dimensions):</p>
<pre><code>>>> a = np.random.random((5,5))
>>> b = np.ones((3,3))
>>> edge_coordinate = (0,0)
>>> slicer = tuple(slice(edge, edge+i) for edge, i in zip(edge_coordinate, b.shape))
>>> a[slicer] = b
>>> a
array([[ 1. , 1. , 1. , 0.14206495, 0.36385016],
[ 1. , 1. , 1. , 0.08861402, 0.7888898 ],
[ 1. , 1. , 1. , 0.1975496 , 0.13345192],
[ 0.550487 , 0.22471952, 0.47050879, 0.04669643, 0.13480528],
[ 0.25139511, 0.06499812, 0.42247189, 0.05840351, 0.74735495]])
</code></pre>
<p>by varying the <code>edge_coordinate</code> you can vary the position:</p>
<pre><code>>>> a = np.random.random((5,5))
>>> b = np.ones((3,3))
>>> edge_coordinates = (1,1) # changed
>>> slicer = tuple(slice(edge, edge+i) for edge, i in zip(edge_coordinates, b.shape))
>>> a[slicer] = b
>>> a
array([[ 0.21385714, 0.68789872, 0.3915475 , 0.67342566, 0.05642307],
[ 0.19778658, 1. , 1. , 1. , 0.70717406],
[ 0.73678924, 1. , 1. , 1. , 0.90285997],
[ 0.39709332, 1. , 1. , 1. , 0.96959814],
[ 0.89627195, 0.21295355, 0.72598992, 0.80749348, 0.76660287]])
</code></pre>
<p>Ideally one could make a function of it - if you regularly use it.</p>
| 2 |
2016-09-19T08:32:44Z
|
[
"python",
"numpy"
] |
Fastest way to insert a 2D array into a (larger) 2D array
| 39,566,474 |
<p>Say there's two 2D arrays, <code>a</code> and <code>b</code></p>
<pre><code>import numpy as np
a = np.random.rand(3, 4)
b = np.zeros((8, 8))
</code></pre>
<p>and <code>b</code> is always larger than <code>a</code> over both axes.</p>
<p><em>(Edit: <code>b</code> is initialized as an array of zeros, to reflect the fact that all elements not occupied by <code>a</code> will remain zero.)</em></p>
<p><strong>Question</strong>. What's the <em>fastest</em> or <em>most Pythonic</em> way to "insert" <code>a</code> into <code>b</code>?</p>
<p>So far I've tried 2 things:</p>
<ol>
<li>Using <code>np.pad</code> to "turn" <code>a</code> into an array of shape <code>(8, 8)</code></li>
<li>Loop through every row in <code>a</code> and place it in the corresponding row in <code>b</code></li>
</ol>
<p>What I <strong>haven't</strong> tried is using a doubly-nested loop to iterate over every element of <code>a</code>, since I think that's not performance-friendly.</p>
<p><strong>Motivation</strong>. Each array <code>a</code> is a tiny character, and I'd like to feed each character to a neural net that accepts <em>flattened</em> arrays of shape <code>(8, 8)</code>, i.e. arrays of shape <code>(64,)</code>. (I <em>think</em> I can't simply flatten <code>a</code> to one dimension and pad it with zeros because then its two-dimensional structure gets warped, so, rather, I must first "reshape" it into <code>(8, 8)</code>, right?) There are a few million characters.</p>
| 0 |
2016-09-19T06:15:28Z
| 39,583,342 |
<p>I think the best way is to use padding.</p>
<p>if you want to place a in the top left corner:</p>
<pre><code>np.pad(a,((0,5),(0,4)),'constant')
</code></pre>
<p>if you want to place a in centre:</p>
<pre><code>np.pad(a,((2,3),(2,2)),'constant')
</code></pre>
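<p>A quick check of both calls, assuming <code>a</code> has shape <code>(3, 4)</code> as in the question:</p>

```python
import numpy as np

a = np.random.rand(3, 4)

top_left = np.pad(a, ((0, 5), (0, 4)), 'constant')
centre = np.pad(a, ((2, 3), (2, 2)), 'constant')

# both calls pad a out to (8, 8); only the placement of a differs
assert top_left.shape == (8, 8)
assert centre.shape == (8, 8)
assert np.array_equal(top_left[:3, :4], a)
assert np.array_equal(centre[2:5, 2:6], a)
```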
| 0 |
2016-09-19T23:02:36Z
|
[
"python",
"numpy"
] |
find the index of values that satisfy A + B =C + D
| 39,566,527 |
<p>Working on the below problem, using Python 2.7. I've posted my code and am wondering if there are any further smart ideas to make it run faster? I thought there might be an approach that sorts the list first and leverages the sorting behavior, but I cannot figure one out so far. My code has <code>O(n^2)</code> time complexity.</p>
<p><strong>Problem</strong>,</p>
<p>Given an array A of integers, find the index of values that satisfy A + B = C + D, where A, B, C & D are integer values in the array. Find all combinations of quadruples.</p>
<p><strong>Code</strong>,</p>
<pre><code>from collections import defaultdict
sumIndex = defaultdict(list)
def buildIndex(numbers):
for i in range(len(numbers)):
for j in range(i+1,len(numbers)):
sumIndex[numbers[i]+numbers[j]].append((i,j))
def checkResult():
for k,v in sumIndex.items():
if len(v) > 1:
for i in v:
print k, i
if __name__ == "__main__":
buildIndex([1,2,3,4])
checkResult()
</code></pre>
<p><strong>Output</strong>, which is sum value, and indexes which sum could result in such value,</p>
<pre><code>5 (0,3)
5 (1,2)
</code></pre>
| 1 |
2016-09-19T06:19:19Z
| 39,566,702 |
<p>Consider the case where all the elements of the array are equal. Then we know the answer beforehand but merely printing the result will take <code>O(n^2)</code> time since there are <code>n*(n-1)/2</code> number of such pairs. So I think it is safe to say that there is no approach with a better complexity than <code>O(n^2)</code> for this problem.</p>
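<p>To see this concretely, in the all-equal case every index pair shares the same sum, so the output alone already contains <code>n*(n-1)/2</code> entries:</p>

```python
from itertools import combinations

n = 6
arr = [7] * n

# every index pair sums to 14, so every pair belongs in the output
pairs = list(combinations(range(n), 2))
assert len(pairs) == n * (n - 1) // 2  # 15 pairs for n = 6
assert all(arr[i] + arr[j] == 14 for i, j in pairs)
```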
| 2 |
2016-09-19T06:32:47Z
|
[
"python",
"algorithm",
"python-2.7"
] |
find the index of values that satisfy A + B =C + D
| 39,566,527 |
<p>Working on the below problem, using Python 2.7. I've posted my code and am wondering if there are any further smart ideas to make it run faster? I thought there might be an approach that sorts the list first and leverages the sorting behavior, but I cannot figure one out so far. My code has <code>O(n^2)</code> time complexity.</p>
<p><strong>Problem</strong>,</p>
<p>Given an array A of integers, find the index of values that satisfy A + B = C + D, where A, B, C & D are integer values in the array. Find all combinations of quadruples.</p>
<p><strong>Code</strong>,</p>
<pre><code>from collections import defaultdict
sumIndex = defaultdict(list)
def buildIndex(numbers):
for i in range(len(numbers)):
for j in range(i+1,len(numbers)):
sumIndex[numbers[i]+numbers[j]].append((i,j))
def checkResult():
for k,v in sumIndex.items():
if len(v) > 1:
for i in v:
print k, i
if __name__ == "__main__":
buildIndex([1,2,3,4])
checkResult()
</code></pre>
<p><strong>Output</strong>, which is sum value, and indexes which sum could result in such value,</p>
<pre><code>5 (0,3)
5 (1,2)
</code></pre>
| 1 |
2016-09-19T06:19:19Z
| 39,567,138 |
<p>Yes, it can be done in a way with complexity less than O(n^2). The algorithm is:</p>
<ol>
<li>Create a duplicate array, say indexArr[], storing the index of each element of the original array, say origArr[].</li>
<li>Sort origArr[] in ascending order using some algorithm with O(nLogn) complexity. Likewise, shuffle indexArr[] along with it while sorting origArr[], so the original indices are preserved.</li>
<li>Now find the pairs in the sorted array: run 2 nested loops over all the possible combinations. Suppose you select <code>origArr[i] + origArr[j] = sum</code>.</li>
<li>You only keep searching while <code>sum <= origArr[n]</code>, where n is the index of the last element of the array, which is the maximum element. If <code>sum > origArr[n]</code> then you can break the inner loop as well as the outer loop, as no other combinations are possible.</li>
<li>You can also break the inner loop early if <code>sum > origArr[j]</code>, as no other combinations are possible for that sum.</li>
</ol>
<p>PS - The worst case scenario will be O(n^2).</p>
| 1 |
2016-09-19T07:01:35Z
|
[
"python",
"algorithm",
"python-2.7"
] |
find the index of values that satisfy A + B =C + D
| 39,566,527 |
<p>Working on the below problem, using Python 2.7. I've posted my code and am wondering if there are any further smart ideas to make it run faster? I thought there might be an approach that sorts the list first and leverages the sorting behavior, but I cannot figure one out so far. My code has <code>O(n^2)</code> time complexity.</p>
<p><strong>Problem</strong>,</p>
<p>Given an array A of integers, find the index of values that satisfy A + B = C + D, where A, B, C & D are integer values in the array. Find all combinations of quadruples.</p>
<p><strong>Code</strong>,</p>
<pre><code>from collections import defaultdict
sumIndex = defaultdict(list)
def buildIndex(numbers):
for i in range(len(numbers)):
for j in range(i+1,len(numbers)):
sumIndex[numbers[i]+numbers[j]].append((i,j))
def checkResult():
for k,v in sumIndex.items():
if len(v) > 1:
for i in v:
print k, i
if __name__ == "__main__":
buildIndex([1,2,3,4])
checkResult()
</code></pre>
<p><strong>Output</strong>, which is sum value, and indexes which sum could result in such value,</p>
<pre><code>5 (0,3)
5 (1,2)
</code></pre>
| 1 |
2016-09-19T06:19:19Z
| 39,570,006 |
<p>A faster, more Pythonic approach using <code>itertools.combinations</code>:</p>
<pre><code>from collections import defaultdict
from itertools import combinations
def get_combos(l):
d = defaultdict(list)
for indices in combinations(range(len(l)),2):
d[(l[indices[0]] + l[indices[1]])].append(indices)
return {k:v for k,v in d.items() if len(v) > 1}
</code></pre>
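<p>For the <code>[1,2,3,4]</code> input from the question this produces the same pairs as the original code:</p>

```python
from collections import defaultdict
from itertools import combinations

def get_combos(l):
    d = defaultdict(list)
    for indices in combinations(range(len(l)), 2):
        d[l[indices[0]] + l[indices[1]]].append(indices)
    return {k: v for k, v in d.items() if len(v) > 1}

print(get_combos([1, 2, 3, 4]))  # {5: [(0, 3), (1, 2)]}
```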
<p><hr>
<em>timing results</em></p>
<pre><code> OP this
len(l)=4, min(repeat=100, number=10000) | 0.09334 | 0.08050
len(l)=50, min(repeat=10, number=100) | 0.08689 | 0.08996
len(l)=500, min(repeat=10, number=10) | 0.64974 | 0.59553
len(l)=1000, min(repeat=3, number=3) | 1.01559 | 0.83494
len(l)=5000, min(repeat=3, number=1) | 10.26168 | 8.92959
</code></pre>
<hr>
<p><em>timing code</em></p>
<pre><code>from collections import defaultdict
from itertools import combinations
from random import randint
from timeit import repeat
def lin_get_combos(l):
sumIndex = defaultdict(list)
for i in range(len(l)):
for j in range(i+1,len(l)):
sumIndex[l[i]+l[j]].append((i,j))
return {k:v for k,v in sumIndex.items() if len(v) > 1}
def craig_get_combos(l):
d = defaultdict(list)
for indices in combinations(range(len(l)),2):
d[(l[indices[0]] + l[indices[1]])].append(indices)
return {k:v for k,v in d.items() if len(v) > 1}
l = []
for _ in range(4):
l.append(randint(0,1000))
t1 = min(repeat(stmt='lin_get_combos(l)', setup='from __main__ import lin_get_combos, l', repeat=100, number=10000))
t2 = min(repeat(stmt='craig_get_combos(l)', setup='from __main__ import craig_get_combos, l', repeat= 100, number=10000))
print '%0.5f, %0.5f' % (t1, t2)
l = []
for _ in range(50):
l.append(randint(0,1000))
t1 = min(repeat(stmt='lin_get_combos(l)', setup='from __main__ import lin_get_combos, l', repeat=10, number=100))
t2 = min(repeat(stmt='craig_get_combos(l)', setup='from __main__ import craig_get_combos, l', repeat= 10, number=100))
print '%0.5f, %0.5f' % (t1, t2)
l = []
for _ in range(500):
l.append(randint(0,1000))
t1 = min(repeat(stmt='lin_get_combos(l)', setup='from __main__ import lin_get_combos, l', repeat=10, number=10))
t2 = min(repeat(stmt='craig_get_combos(l)', setup='from __main__ import craig_get_combos, l', repeat= 10, number=10))
print '%0.5f, %0.5f' % (t1, t2)
l = []
for _ in range(1000):
l.append(randint(0,1000))
t1 = min(repeat(stmt='lin_get_combos(l)', setup='from __main__ import lin_get_combos, l', repeat=3, number=3))
t2 = min(repeat(stmt='craig_get_combos(l)', setup='from __main__ import craig_get_combos, l', repeat= 3, number=3))
print '%0.5f, %0.5f' % (t1, t2)
l = []
for _ in range(5000):
l.append(randint(0,1000))
t1 = min(repeat(stmt='lin_get_combos(l)', setup='from __main__ import lin_get_combos, l', repeat=3, number=1))
t2 = min(repeat(stmt='craig_get_combos(l)', setup='from __main__ import craig_get_combos, l', repeat= 3, number=1))
print '%0.5f, %0.5f' % (t1, t2)
</code></pre>
| 1 |
2016-09-19T09:43:46Z
|
[
"python",
"algorithm",
"python-2.7"
] |
.pyc file gets created when it is not even imported
| 39,566,724 |
<p>I have 2 Python files called <code>numbers.py</code> and <code>numpyBasicOps.py</code>. <code>numbers.py</code> is a simple Python file, not importing any module. <code>numpyBasicOps.py</code> imports the <code>numpy</code> library.</p>
<p>Whenever I run <code>numpyBasicOps.py</code>, <code>numbers.py</code>'s output is displayed first followed by some error relating to <code>numpy</code> module:</p>
<pre><code>Traceback (most recent call last):
File "./numpyBasicOps.py", line 3, in <module>
import numpy as np
File "/Library/Python/2.7/site-packages/numpy-1.11.2rc1-py2.7-macosx-10.11-intel.egg/numpy/__init__.py", line 142, in <module>
from . import add_newdocs
File "/Library/Python/2.7/site-packages/numpy-1.11.2rc1-py2.7-macosx-10.11-intel.egg/numpy/add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "/Library/Python/2.7/site-packages/numpy-1.11.2rc1-py2.7-macosx-10.11-intel.egg/numpy/lib/__init__.py", line 8, in <module>
from .type_check import *
File "/Library/Python/2.7/site-packages/numpy-1.11.2rc1-py2.7-macosx-10.11-intel.egg/numpy/lib/type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "/Library/Python/2.7/site-packages/numpy-1.11.2rc1-py2.7-macosx-10.11-intel.egg/numpy/core/__init__.py", line 22, in <module>
from . import _internal # for freeze programs
File "/Library/Python/2.7/site-packages/numpy-1.11.2rc1-py2.7-macosx-10.11-intel.egg/numpy/core/_internal.py", line 15, in <module>
from .numerictypes import object_
File "/Library/Python/2.7/site-packages/numpy-1.11.2rc1-py2.7-macosx-10.11-intel.egg/numpy/core/numerictypes.py", line 962, in <module>
_register_types()
File "/Library/Python/2.7/site-packages/numpy-1.11.2rc1-py2.7-macosx-10.11-intel.egg/numpy/core/numerictypes.py", line 958, in _register_types
numbers.Integral.register(integer)
AttributeError: 'module' object has no attribute 'Integral'
</code></pre>
<p>Also, I see a <code>.pyc</code> file for <code>numbers.py</code> being generated.</p>
<p>How does the <code>numbers.pyc</code> file get generated even when it is not imported in <code>numpyBasicOps.py</code> and why is output for <code>numbers.py</code> displayed?</p>
| 2 |
2016-09-19T06:33:44Z
| 39,566,871 |
<p><code>numpy</code> registers their own integer-like objects as implementing the Abstract Base Class <a href="https://docs.python.org/2/library/numbers.html#numbers.Integral" rel="nofollow"><code>numbers.Integral</code></a>. To do this, it must use <code>import numbers</code> to get access to that object.</p>
<p>Or at least, it tried to and failed; as you named your module <code>numbers</code> <em>as well</em> it was imported instead. In other words, your <code>numbers.py</code> module masked the built-in <a href="https://docs.python.org/2/library/numbers.html" rel="nofollow">standard library module <code>numbers</code></a>.</p>
<p>Rename your module to something else, and make sure you delete the <code>numbers.pyc</code> file that was created.</p>
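<p>For reference, this is roughly what numpy's <code>_register_types()</code> relies on from the standard library module (run from a directory without a local <code>numbers.py</code>):</p>

```python
import numbers  # the standard library module, if nothing shadows it

# numpy's _register_types() does essentially this at import time
assert hasattr(numbers, 'Integral')
numbers.Integral.register(int)
assert issubclass(int, numbers.Integral)
```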
| 1 |
2016-09-19T06:43:29Z
|
[
"python",
"python-2.7",
"numpy"
] |
Install npm packages in Python virtualenv
| 39,566,769 |
<p>There are some npm packages which I would like to install in a Python virtualenv. For example: </p>
<ul>
<li><a href="https://www.npmjs.com/package/pdfjs-dist" rel="nofollow">https://www.npmjs.com/package/pdfjs-dist</a></li>
<li><a href="https://www.npmjs.com/package/jquery-ui" rel="nofollow">https://www.npmjs.com/package/jquery-ui</a></li>
</ul>
<p>Up to now I only found the complicated way to get these installable in a virtualenv: Create a python package for them.</p>
<p>Is there no simpler way to get npm packages installed in a Python virtualenv?</p>
| 0 |
2016-09-19T06:36:59Z
| 39,567,927 |
<p>NPM and pip have nothing to do with each other, so you won't be able to install NPM packages inside a virtualenv.</p>
<p>However: <a href="https://docs.npmjs.com/files/folders" rel="nofollow">NPM installs packages in <code>./node_modules</code></a>.</p>
<p>So if you created a virtualenv and installed npm modules inside it</p>
<pre><code>virtualenv myproj
cd myproj
source bin/activate
npm install pdfjs-dist jquery-ui
</code></pre>
<p>you will end up with the node packages in <code>myproj/node_modules</code>, which is as close as it gets to "installing NPM inside virtualenv".</p>
| 2 |
2016-09-19T07:49:11Z
|
[
"python",
"npm",
"installation",
"pip"
] |
Using two environment in Anaconda - issue with importing matplotlib
| 39,566,777 |
<p>I am using two environments in Anaconda (Mac OS X). When I use Python 2.7 environment I can't import matplotlib: </p>
<pre><code>%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
</code></pre>
<p>It tells me that there is no module name like that. </p>
<p>I installed matplotlib using <code>pip install matplotlib</code>. It installed successfully but then I opened Jupyter Notebook and I tried to import it but still it doesn't work because there is no module name like that.</p>
| 0 |
2016-09-19T06:37:29Z
| 39,567,018 |
<p>You have to make sure matplotlib is installed in the env you've activated. If you need to use it in both envs, you need to install matplotlib into both envs. </p>
| 0 |
2016-09-19T06:53:24Z
|
[
"python",
"matplotlib"
] |
Writing Dask partitions into single file
| 39,566,809 |
<p>New to <code>dask</code>, I have a <code>1GB</code> CSV file. When I read it into a <code>dask</code> dataframe it creates around 50 partitions, and after my changes to the file, when I write it, it creates as many files as there are partitions.<br>
<strong>Is there a way to write all partitions to a single CSV file, and is there a way to access the partitions?</strong><br>
Thank you.</p>
| 2 |
2016-09-19T06:39:10Z
| 39,573,112 |
<h3>Short answer</h3>
<p>No, Dask.dataframe.to_csv only writes CSV files to different files, one file per partition. However, there are ways around this.</p>
<h3>Concatenate Afterwards</h3>
<p>Perhaps just concatenate the files after dask.dataframe writes them? This is likely to be near-optimal in terms of performance.</p>
<pre><code>df.to_csv('/path/to/myfiles.*.csv')
from glob import glob
filenames = glob('/path/to/myfiles.*.csv')
with open('outfile.csv', 'w') as out:
for fn in filenames:
with open(fn) as f:
out.write(f.read()) # maybe add endline here as well?
</code></pre>
<h3>Or use Dask.delayed</h3>
<p>However, you can do this yourself using <a href="http://dask.pydata.org/en/latest/delayed.html" rel="nofollow">dask.delayed</a>, by <a href="http://dask.pydata.org/en/latest/delayed-collections.html" rel="nofollow">using dask.delayed alongside dataframes</a></p>
<p>This gives you a list of delayed values that you can use however you like:</p>
<pre><code>list_of_delayed_values = df.to_delayed()
</code></pre>
<p>It's then up to you to structure a computation to write these partitions sequentially to a single file. This isn't hard to do, but can cause a bit of backup on the scheduler.</p>
| 2 |
2016-09-19T12:24:15Z
|
[
"python",
"dask"
] |
Python script not deleting Git files in Windows
| 39,566,812 |
<p>I'm using the following code to delete a directory containing a git repo:</p>
<pre><code>import errno
import os
import stat
import shutil
def clear_dir(path):
shutil.rmtree(path, ignore_errors=False, onerror=handle_remove_readonly)
def handle_remove_readonly(func, path, exc):
excvalue = exc[1]
if func in (os.rmdir, os.remove) and excvalue.errno == errno.EACCES:
os.chmod(path, stat.S_IRWXU| stat.S_IRWXG| stat.S_IRWXO) # 0777
func(path)
else:
raise
</code></pre>
<p>This code should deal well with read-only files. I can delete the directory/folder from Windows Explorer, but when I run the following code:</p>
<pre><code>if __name__ == '__main__':
clear_dir(r'c:\path\to\ci-monitor')
</code></pre>
<p>I get the following error:</p>
<pre><code> File "C:\Users\m45914\code\ci-monitor\utils\filehandling.py", line 8, in clear_dir
shutil.rmtree(path, ignore_errors=False, onerror=handle_remove_readonly)
File "C:\Users\m45914\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 488, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\m45914\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 378, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
File "C:\Users\m45914\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 378, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
File "C:\Users\m45914\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 378, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
File "C:\Users\m45914\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 378, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
File "C:\Users\m45914\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 383, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "C:\Users\m45914\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 381, in _rmtree_unsafe
os.unlink(fullname)
PermissionError: [WinError 5] Access is denied: 'scratch\\repos\\ci-monitor\\.git\\objects\\pack\\pack-83e55c6964d
21e8be0afb2cbccd887eae3e32bf4.idx'
</code></pre>
<p>I've tried running the script as administrator (no change.)</p>
<p>The directory being deleted is a git repo, and I am periodically cloning, checking and deleting it. The checks are to make sure there are no unmerged release and hotfix branches in the repo.</p>
<p>Anyone got any ideas?</p>
| 1 |
2016-09-19T06:39:23Z
| 39,566,888 |
<p>If that file is being used by another process then it will not be possible to delete it. You can cross-check which process is holding it with a tool such as 'Unlocker' or other similar software.</p>
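<p>If the lock is transient (an antivirus scanner or search indexer briefly holding the <code>.git</code> pack file is a common culprit on Windows), a retry loop around the removal is a frequent workaround. A sketch (the attempt count and delay are arbitrary choices):</p>

```python
import os
import shutil
import stat
import time

def rmtree_with_retries(path, attempts=5, delay=0.5):
    """Retry shutil.rmtree a few times, clearing read-only bits on errors."""
    def onerror(func, p, exc):
        # Same idea as the question's handler: make the entry writable
        # and retry the failed operation once.
        os.chmod(p, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)
        func(p)

    for attempt in range(attempts):
        try:
            shutil.rmtree(path, onerror=onerror)
            return
        except OSError:
            if attempt == attempts - 1:
                raise  # still locked after all attempts
            time.sleep(delay)
```

<p>This won't help if the file stays locked for the whole run, but it papers over short-lived locks that vanish within a second or two.</p>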
| 1 |
2016-09-19T06:44:30Z
|
[
"python",
"windows",
"git",
"delete-file"
] |
How can I hide an id sequence and provide a "friendly" url using pyramid?
| 39,566,919 |
<p>I have a list of things in a database and would like to hide the sequence of the primary key from being shown in the url when being accessed. So I would like to turn something like this:</p>
<pre><code>example.com/post/9854
</code></pre>
<p>into this:</p>
<pre><code>example.com/post/one-two-three-four
</code></pre>
<p>While obviously still using the primary key in the query. What is the pyramid way of accomplishing this? </p>
| 1 |
2016-09-19T06:47:07Z
| 39,582,862 |
<p>This "user friendly URL fragment" is usually called a "slug", which I think comes from the times when newspapers were typeset in lead.</p>
<p>What you usually do is have an additional field in your model which stores the slug. The field should be unique and indexed (you may even consider having it as your model's primary key depending on where you stand in the great "natural vs surrogate primary keys" debate :) )</p>
<pre><code>class SurrogatePost(Base):
    id = sa.Column(sa.Integer, primary_key=True)
    slug = sa.Column(sa.String, unique=True)
    title = sa.Column(sa.String)

class NaturalPost(Base):
    slug = sa.Column(sa.String, primary_key=True)
    title = sa.Column(sa.String)
</code></pre>
<p>You generate the slug from the post's title ONCE when your post is first saved and never change it again, even if the title changes - this is important for SEO and linking.</p>
<pre><code>class Post(Base):
    ...
    def __init__(self, title, body):
        self.slug = generate_slug(title)
        self.title = title
        self.body = body
</code></pre>
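<p>The <code>generate_slug</code> helper above is left undefined; a minimal version (a simplified sketch — real projects often use a framework helper or a slugify library instead) could be:</p>

```python
import re

def generate_slug(title):
    # Lowercase, collapse runs of non-alphanumeric characters into a
    # single hyphen, and trim hyphens from both ends.
    slug = re.sub(r'[^a-z0-9]+', '-', title.lower())
    return slug.strip('-')

print(generate_slug('One, Two & Three!'))  # one-two-three
```

<p>For uniqueness you would additionally check the database and append a suffix (e.g. <code>-2</code>) on collisions.</p>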
<p>Then, in your views code, you use the slug to look up the post in the database, just like you would use your primary key.</p>
<pre><code> def my_view(request):
slug = request.matchdict['slug']
post = DBSession.query(Post).filter(Post.slug==slug).one()
...
</code></pre>
<p>The URL schema you're thinking about has a requirement that all slugs of all your posts have to be unique, which may be annoying. If you look at many news websites, you'll notice that they use a "combined" URL scheme, where both primary key and the slug are present in the URL:</p>
<pre><code>/posts/123/one-two-three-four
/posts/123-one-two-three-four
etc.
</code></pre>
<p>The primary key is used to look up the data in the database and the slug part is purely for SEO and readability. </p>
| 1 |
2016-09-19T22:09:43Z
|
[
"python",
"url",
"url-rewriting",
"url-routing",
"pyramid"
] |
convert translation output into a string
| 39,566,935 |
<p>The translated output text that I get using Google Translate API seems only to be made available in a browser and in html format. How can I get output as a string that can be analyzed using Python, for example. </p>
<h2>I would also like to understand how larger blocks of text can be translated in this way. Examples provided all seem to be short strings.</h2>
<p>I've experimented a bit more and share findings. Following the usage guide that accompanies the API registration, common practice is to include the text to be translated in a URL provided for this purpose. The place within the URL where the text is to be inserted is designated with the letter "q". This is preceded by a place to specify parameters for source and target language as well as registered API key.</p>
<p>The output from this appears in the browser (I've used Chrome) in the following format:</p>
<pre><code>200 OK
{
"data": {
"translations": [
{
"translatedText": "Hallo Welt"
}
</code></pre>
<p>This example makes use of a single string as input that is inserted following the "q" referenced above. The guide suggests that translating multiple strings is best accomplished by replicating the "q" entry method for each subsequent string. </p>
<p>Entering text to be translated in this way is cumbersome, to say the least. Doing anything with the output (parsing, tokenizing, etc.) is also not very convenient or straightforward. </p>
<p>Any advice on a more efficient and effective approach (perhaps one that does not require use of a browser and html would be appreciated. </p>
| 1 |
2016-09-19T06:48:12Z
| 39,646,771 |
<p>This is just the way the browser displays what the API has sent you, it's the browser's kind interpretation of it, not how it actually looks. It's json. You need to use the Python json module I expect, although I haven't attempted to use the API myself.</p>
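<p>For example, parsing the sample response shown in the question with the standard library (the leading <code>200 OK</code> line is the HTTP status, not part of the JSON body):</p>

```python
import json

# The JSON payload as shown in the question, completed with the
# closing brackets the browser excerpt cut off.
body = '''
{
    "data": {
        "translations": [
            {"translatedText": "Hallo Welt"}
        ]
    }
}
'''

parsed = json.loads(body)
text = parsed['data']['translations'][0]['translatedText']
print(text)  # Hallo Welt
```

<p>To avoid the browser entirely, the same URL can be fetched with <code>urllib</code> or the <code>requests</code> package and the response body passed to <code>json.loads()</code> (or <code>response.json()</code> with requests).</p>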
| 0 |
2016-09-22T18:49:23Z
|
[
"python",
"api",
"text",
"translate"
] |
Mapping data from other dataset. Python Pandas
| 39,566,992 |
<p>So I got 2 datasets: <code>df1</code> has the colour for all fruits and <code>df2</code> doesn't. How do I map the color values for <code>df2</code> based on the color data from <code>df1</code>, according to the fruit names? </p>
<pre><code> df1 df2
Name Color Name Color
Apple Red Orange Na
Orange Orange Coconut Na
Pear Pear Pear Na
Pear Pear Strawberries Na
Papaya Papaya Banana Na
Watermelon Watermelon Papaya Na
" " " "
</code></pre>
| 1 |
2016-09-19T06:51:48Z
| 39,567,024 |
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow"><code>map</code></a>, but first need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.drop_duplicates.html" rel="nofollow"><code>Series.drop_duplicates</code></a>:</p>
<pre><code>df2['Color'] = df2['Name'].map(df1.set_index('Name')['Color'].drop_duplicates())
print (df2)
Name Color
0 Orange Orange
1 Coconut NaN
2 Pear Pear
3 Strawberries NaN
4 Banana NaN
5 Papaya Papaya
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow"><code>DataFrame.drop_duplicates</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="nofollow"><code>DataFrame.drop</code></a>:</p>
<pre><code>df2 = pd.merge(df2.drop('Color', axis=1),df1.drop_duplicates(), how='left')
print (df2)
Name Color
0 Orange Orange
1 Coconut NaN
2 Pear Pear
3 Strawberries NaN
4 Banana NaN
5 Papaya Papaya
</code></pre>
| 1 |
2016-09-19T06:54:05Z
|
[
"python",
"pandas",
"merge",
"mapping",
"multiple-columns"
] |
Mapping data from other dataset. Python Pandas
| 39,566,992 |
<p>So I got 2 datasets: <code>df1</code> has the colour for all fruits and <code>df2</code> doesn't. How do I map the color values for <code>df2</code> based on the color data from <code>df1</code>, according to the fruit names? </p>
<pre><code> df1 df2
Name Color Name Color
Apple Red Orange Na
Orange Orange Coconut Na
Pear Pear Pear Na
Pear Pear Strawberries Na
Papaya Papaya Banana Na
Watermelon Watermelon Papaya Na
" " " "
</code></pre>
| 1 |
2016-09-19T06:51:48Z
| 39,567,063 |
<p>You can do this with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow">merge</a>:</p>
<pre><code>df2 = df2.merge(df1, on="Name", how="left", suffixes=('_1','_2'))
</code></pre>
<p>if name is your index column you can just do a <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html" rel="nofollow">join</a>:</p>
<pre><code>df2 = df2.join(df1[['color']])
</code></pre>
<p>For a more complete example you can look at the answer above/below that was kind enough to elaborate on my answer.</p>
| 1 |
2016-09-19T06:56:49Z
|
[
"python",
"pandas",
"merge",
"mapping",
"multiple-columns"
] |
display items in a database in a webpage and edit
| 39,567,072 |
<p>I am getting the below error when I try to run the edit part.</p>
<p>Displaying the contents of database in webpage is working fine. I have put up code for just the edit part.</p>
<blockquote>
<p>TypeError at /edit/ <br/>
<code>__init__()</code> takes exactly 1 argument (2 given)</p>
</blockquote>
<p>views.py</p>
<pre><code>class userUpdate(UpdateView):
model = user
fields = ['name','phone','dob','gender']
template_name_suffix = '_update_form'
</code></pre>
<p>urls.py</p>
<pre><code>from django.conf.urls import include, url
from newapp import views
urlpatterns = [url(r'^edit/',views.userUpdate, name = 'user_update_form'),]
</code></pre>
<p>user_update_form.html</p>
<pre><code><form action="" method="post">{% csrf_token %}
{{ form.as_p }}
<input type="submit" value="Update" />
</form>
</code></pre>
| 0 |
2016-09-19T06:57:30Z
| 39,567,902 |
<p>Class-based views need to be referenced in urls.py via their <code>as_view</code> method:</p>
<pre><code>url(r'^edit/', views.userUpdate.as_view(), name = 'user_update_form'),
</code></pre>
| 1 |
2016-09-19T07:47:41Z
|
[
"python",
"django"
] |
using mkdir and touch sub-processes sequentially doesn't work
| 39,567,154 |
<p>I have an error that I keep encountering repeatedly, sadly without being able to find a solution to it on the site.</p>
<pre><code>try:
#create working dir if it doens't exist already
if not os.path.isdir(WORKINGDIR):
print '>>>mdkir ',WORKINGDIR
subprocess.Popen(['mkdir',WORKINGDIR]).wait()
print os.path.isdir(WORKINGDIR)
#create output csv file
outputCSVFile = WORKINGDIR+ '/'+'results.csv'
if not os.path.isfile(outputCSVFile):
print '>>> touch',outputCSVFile
subprocess.check_output(['touch',outputCSVFile])
</code></pre>
<p>Although the line: <code>print os.path.isdir(WORKINGDIR)</code> always prints <code>True</code>, <code>subprocess</code> returns this error:</p>
<blockquote>
<p>touch: cannot touch
`/nfs/iil/proj/mpgarch/archive_06/CommandsProfiling/fastScriptsOutput190916/results.csv':
No such file or directory</p>
</blockquote>
<p>The same error doesn't appear when I use <code>subprocess.check_output</code> instead of <code>subprocess.Popen().wait()</code>.
I know that this issue can be solved in many ways (such as using <code>os</code> methods to create directories and files), but I am interested in why my way isn't working.</p>
<p>Thanks in advance. </p>
<p>EDIT: as some suggested, the problem probably lies with the fact that the program continues too quickly after the <code>subprocess.Popen</code>, and hence the issue is solved by using <code>subprocess.check_output</code>, which is probably slower (since it has to wait for output). But still - I don't understand exactly what is happening, since <code>os.path.isdir</code> shows that the dir was created before continuing to the line that performs <code>touch</code></p>
| 1 |
2016-09-19T07:02:36Z
| 39,567,672 |
<p>I suppose you have file permission problems. Your path suggests that you are using NFS. Did you already try it on the local file system?</p>
<p>In any case, you should avoid using subprocesses for simple file operations. </p>
<p>To create a directory:</p>
<pre><code>if not os.path.exists(WORKINGDIR):
os.makedirs(WORKINGDIR)
</code></pre>
<p>For touch:</p>
<pre><code>import os
def touch(fname, times=None):
with open(fname, 'a'):
os.utime(fname, times)
touch(WORKINGDIR+ '/'+'results.csv')
</code></pre>
| 0 |
2016-09-19T07:34:05Z
|
[
"python",
"python-2.7",
"unix"
] |
Buttons overlap each other
| 39,567,158 |
<p>I have the following code:</p>
<pre><code> self.btn1 = wx.Button(self, -1, _("a"))
self.btn2 = wx.Button(self, -1, _("b"))
btnSizer = wx.BoxSizer(wx.HORIZONTAL)
btnSizer.Add(self.btn1 , 0, wx.RIGHT, 10)
btnSizer.Add(self.btn2 , 0, wx.RIGHT, 10)
</code></pre>
<p>This works well.
However there is a case where I change the title of <code>btn2</code> :</p>
<pre><code>self.btn1.SetLabel('bbbbb')
</code></pre>
<p>When I do that <code>btn1</code> overlaps <code>btn2</code>....</p>
<p>first row is the original
second row is after set label.</p>
<p><a href="http://i.stack.imgur.com/hi1zP.png" rel="nofollow"><img src="http://i.stack.imgur.com/hi1zP.png" alt="enter image description here"></a></p>
<p>How do I make the screen refresh to the new size of buttons?</p>
| 0 |
2016-09-19T07:02:49Z
| 39,569,732 |
<p>You can use <code>self.Layout()</code> but in this case it really shouldn't be necessary. There must be some issue that you are having with your code.</p>
<pre><code>import wx
class ButtonFrame(wx.Frame):
def __init__(self, value):
wx.Frame.__init__(self,None)
self.btn1 = wx.Button(self, -1, ("a"))
self.btn2 = wx.Button(self, -1, ("b"))
self.btnSizer = wx.BoxSizer(wx.HORIZONTAL)
self.btnSizer.Add(self.btn1 , 0, wx.RIGHT, 10)
self.btnSizer.Add(self.btn2 , 0, wx.RIGHT, 10)
self.btn1.Bind(wx.EVT_BUTTON, self.OnPressA)
self.btn2.Bind(wx.EVT_BUTTON, self.OnPressB)
self.SetSizer(self.btnSizer)
self.Centre()
self.Show()
def OnPressA(self,evt):
self.btn1.SetLabel('bbbbbbbbbbbbbbbbbbbbbbbbbb')
# self.Layout()
def OnPressB(self,evt):
self.btn2.SetLabel('aaaaaaaaaaaaaaaaaaaaaaaaaa')
# self.Layout()
if __name__ == "__main__":
app = wx.App(False)
ButtonFrame(None)
app.MainLoop()
</code></pre>
| 1 |
2016-09-19T09:29:50Z
|
[
"python",
"wxpython"
] |
iterate through nltk dictionaries
| 39,567,162 |
<p>I'd like to know whether it's possible to iterate through some of the available nltk dictionaries, ie: Spanish dictionary. I'd like to find certain words matching some requirements.</p>
<p>Let's say I got this list <code>["tv", "tb", "tp", "dv", "db", "dp"]</code>, the algorithm would give me words like <code>["tapa", "tubo", "tuba", ...]</code>. As you can see, if you get rid of the vowels in those words they'll be in the initial list:</p>
<ul>
<li>tapa => tp</li>
<li>tubo => tb</li>
<li>tuba => tb</li>
</ul>
<p>Anyway, I just want to know whether it's possible to iterate through spanish words on nltk dictionaries and how; that's pretty much it.</p>
| 0 |
2016-09-19T07:02:59Z
| 39,569,950 |
<p>The nltk has plenty of Spanish language resources, but I'm not aware of a dictionary. So I'll leave the choice of wordlist up to you, and go on from there.</p>
<p>In general, the nltk represents wordlists as corpus readers with the usual method <code>words()</code> for the individual words. So here's how you could find words matching your template in the English wordlist:</p>
<pre><code>templates = set(["tv", "tb", "tp", "dv", "db", "dp"])
for w in nltk.corpus.words.words("en"):
    # Remove the vowels and check whether what remains is in `templates`
    consonants = "".join(c for c in w.lower() if c not in "aeiou")
    if consonants in templates:
        print(w)
</code></pre>
<p>I notice there's a Spanish stopwords list; here's how you would iterate over it:</p>
<pre><code>for w in nltk.corpus.stopwords.words("spanish"):
...
</code></pre>
<p>You could also create your own "wordlist" from a Spanish-language corpus. I used the scare quotes because the best data structure for this purpose is a set. In python, iterating over a <code>set</code> or <code>dict</code> will give you its keys:</p>
<pre><code>mywords = set(w.lower() for w in nltk.corpus.conll2002.words("esp.train"))
for w in mywords:
...
</code></pre>
| 1 |
2016-09-19T09:40:32Z
|
[
"python",
"nltk"
] |
re.findall() returns extra data when using optionals in between Regex expressions
| 39,567,168 |
<p>I seem to be getting additional variables that I do not want stored into this array. What I expected to return after running the following code is this </p>
<pre><code>[('999-999-9999'), ('999 999 9999'), ('999.999.9999')]
</code></pre>
<p>However what I end up with is the following</p>
<pre><code>[('999-999-9999', '-', '-'), ('999 999 9999', ' ', ' '), ('999.999.9999', '.', '.')]
</code></pre>
<p>The following is what I have</p>
<pre><code>teststr = '''
Phone: 999-999-9999,
999 999 9999,
999.999.9999
'''
phoneRegex = re.compile(r'(\d{3}(-|\s|\.)\d{3}(-|\s|\.)\d{4})')
regexMatches = phoneRegex.findall(teststr)
print(regexMatches)
</code></pre>
| 0 |
2016-09-19T07:03:11Z
| 39,567,200 |
<p>Turn the inner capturing groups to non-capturing groups.</p>
<pre><code>(?:-|\s|\.)
</code></pre>
<p>or</p>
<pre><code>[-\s.]
</code></pre>
<p>Example:</p>
<pre><code>>>> import re
>>> teststr = '''
Phone: 999-999-9999,
999 999 9999,
999.999.9999
'''
>>> re.findall(r'\b(\d{3}[-.\s]\d{3}[.\s-]\d{4})\b', teststr)
['999-999-9999', '999 999 9999', '999.999.9999']
>>>
</code></pre>
| 3 |
2016-09-19T07:05:20Z
|
[
"python"
] |
Export an excel file using a variable name in Python
| 39,567,236 |
<p>I am new to python, trying to figure out dynamic naming of exported files. Right now, I am exporting an xlsx file in a traditional way:</p>
<p><code>data_subset.to_csv('Destination/existing_process.csv')</code></p>
<p>I have a string called 'user_name' and the resultant tuple for every user is exported into an excel file called existing_process. Instead of the name existing_process, I would like to rename the file dynamically using the string user_name. </p>
<p>For instance, for a given user_name 'Matt' I would like the exported file to be named as Matt.csv. Thanks! </p>
| -5 |
2016-09-19T07:07:13Z
| 39,567,343 |
<p>You can use <code>string concatenation</code> for that.</p>
<pre><code>name = 'Matt'
with open(name + '.txt', 'w') as f:
f.write('test')
</code></pre>
<p>You can do the same with <code>xlsx</code>; the writing call is slightly different, but the concatenation works the same way. Kindly let me know if you want the exact procedure with <code>xlsx</code></p>
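<p>Applied to the pandas export from the question, the same idea looks like this (a sketch — <code>data_subset</code> is a stand-in dataframe here, and the file is written to the current directory rather than <code>Destination/</code>):</p>

```python
import pandas as pd

user_name = 'Matt'
data_subset = pd.DataFrame({'score': [1, 2, 3]})  # stand-in data

# Build the filename from the variable and export.
data_subset.to_csv(user_name + '.csv', index=False)
```

<p>Equivalently you can build the path with <code>'{}.csv'.format(user_name)</code>.</p>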
| 1 |
2016-09-19T07:13:19Z
|
[
"python",
"python-2.7",
"python-3.x"
] |
Check if variable is None or numpy.array
| 39,567,422 |
<p>I look up in a table if keys have associated arrays, or not. By design, my <code>table.__getitem__()</code> sometimes returns <code>None</code> rather than raising <code>KeyError</code>. I would like this value to be either <code>None</code>, or the numpy array associated with <code>w</code>.</p>
<pre><code>value = table[w] or table[w.lower()]
# value should be a numpy array, or None
if value is not None:
stack = np.vstack((stack, value))
</code></pre>
<p>Only if I go with the above code, and the first lookup is a match, I get :</p>
<pre><code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>and if I go with <code>value = table[w].any() or table[w.lower()].any()</code>, then if it's a mismatch, I expectedly bump into :</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'any'
</code></pre>
<p>I must be missing the correct way to do this, how to do ?</p>
| 0 |
2016-09-19T07:18:26Z
| 39,567,645 |
<p>Use <a href="https://docs.python.org/3.5/library/stdtypes.html#dict.get" rel="nofollow"><code>dict.get</code></a>.</p>
<blockquote>
<p>Return the value for key if key is in the dictionary, else default. If default is not given, it defaults to None, so that this method never raises a KeyError.</p>
</blockquote>
<pre><code>value = table.get(w, table.get(w.lower()))
</code></pre>
<p>So if <code>table[w]</code> is not there you will get <code>table[w.lower()]</code> and if that's not there you'll get <code>None</code>.</p>
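<p>A quick toy demonstration of the fallback (note that the inner <code>get</code> is evaluated eagerly even when the first key is present — harmless for a plain lookup, but worth knowing):</p>

```python
# Toy table standing in for the question's lookup table.
table = {'Apple': [1, 2, 3], 'pear': [4, 5]}

print(table.get('Apple', table.get('apple')))  # [1, 2, 3]  (exact key hit)
print(table.get('PEAR', table.get('pear')))    # [4, 5]     (lower-case fallback)
print(table.get('Plum', table.get('plum')))    # None       (neither present)
```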
| 1 |
2016-09-19T07:32:30Z
|
[
"python",
"numpy"
] |
Check if variable is None or numpy.array
| 39,567,422 |
<p>I look up in a table if keys have associated arrays, or not. By design, my <code>table.__getitem__()</code> sometimes returns <code>None</code> rather than raising <code>KeyError</code>. I would like this value to be either <code>None</code>, or the numpy array associated with <code>w</code>.</p>
<pre><code>value = table[w] or table[w.lower()]
# value should be a numpy array, or None
if value is not None:
stack = np.vstack((stack, value))
</code></pre>
<p>Only if I go with the above code, and the first lookup is a match, I get :</p>
<pre><code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>and if I go with <code>value = table[w].any() or table[w.lower()].any()</code>, then if it's a mismatch, I expectedly bump into :</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'any'
</code></pre>
<p>I must be missing the correct way to do this, how to do ?</p>
| 0 |
2016-09-19T07:18:26Z
| 39,567,684 |
<pre><code>if isinstance(value, numpy.ndarray):
    # do numpy things
    ...
else:
    # handle None
    ...
</code></pre>
<p>Though the above would work, I would suggest to keep signatures simple and consistent, ie table[w] should always return numpy array. In case of None, return empty array. </p>
| 0 |
2016-09-19T07:34:44Z
|
[
"python",
"numpy"
] |
Check if variable is None or numpy.array
| 39,567,422 |
<p>I look up in a table if keys have associated arrays, or not. By design, my <code>table.__getitem__()</code> sometimes returns <code>None</code> rather than raising <code>KeyError</code>. I would like this value to be either <code>None</code>, or the numpy array associated with <code>w</code>.</p>
<pre><code>value = table[w] or table[w.lower()]
# value should be a numpy array, or None
if value is not None:
stack = np.vstack((stack, value))
</code></pre>
<p>Only if I go with the above code, and the first lookup is a match, I get :</p>
<pre><code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>and if I go with <code>value = table[w].any() or table[w.lower()].any()</code>, then if it's a mismatch, I expectedly bump into :</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'any'
</code></pre>
<p>I must be missing the correct way to do this, how to do ?</p>
| 0 |
2016-09-19T07:18:26Z
| 39,567,970 |
<p>IIUC this should work:</p>
<pre><code>value = table[w]
if value is None:
value = table[w.lower()]
</code></pre>
| 2 |
2016-09-19T07:52:10Z
|
[
"python",
"numpy"
] |
python UI app hangs while updating stdout output to UI using subprocess.Popen in a for loop
| 39,567,467 |
<p>I'm running subprocess.Popen() to call a commandline tool which prints it's output to console like so:</p>
<pre><code>[app] : Initializing...
[app] : Starting process
[app] : .............
[app] : .............
[app] : Extracting information
[app] : Downloading information
[app] : 100% completed
[app] : Saving file to disk
[app] : Completed
</code></pre>
<p>Now i'm trying to capture the output so I can show this output to my python UI application in realtime. I'm capturing the output like so:</p>
<pre><code>import subprocess
from subprocess import PIPE, STDOUT

cmd = "..."
p = subprocess.Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
for line in p.stdout:
    self.status.showmsg(str(line.rstrip()))
    p.stdout.flush()
</code></pre>
<p><strong>The problem:</strong> The output is displayed on my UI correctly until it reaches the output line that shows the percentage of the process completed (100% in the example). This is where the UI hangs; neither the gradual increase of the percentage nor any output thereafter is displayed. The program runs correctly in the background though (does its thing), and when it's 100% complete, the UI unfreezes.</p>
<p>What am I doing wrong here? It looks to me as if the (%) output is produced by another loop and the reading is failing there, but I don't know why or how to go about handling this in python.</p>
<p>Thanks for the valuable suggestions!</p>
| 0 |
2016-09-19T07:21:17Z
| 39,643,978 |
<p>Run the worker process in a separate thread and send the data back to the gui using a custom signal. Here's a very basic demo script:</p>
<pre><code>import sys
from PyQt4 import QtCore, QtGui
class Thread(QtCore.QThread):
dataReceived = QtCore.pyqtSignal(str)
def run(self):
# subprocess stuff goes here
for step in range(6):
self.sleep(1)
self.dataReceived.emit('data received: %d' % step)
class Window(QtGui.QMainWindow):
def __init__(self):
super(Window, self).__init__()
self.button = QtGui.QPushButton('Start')
self.button.clicked.connect(self.handleButton)
self.setCentralWidget(self.button)
self.thread = Thread(self)
self.thread.dataReceived.connect(self.statusBar().showMessage)
def handleButton(self):
if not self.thread.isRunning():
self.statusBar().showMessage('starting...')
self.thread.start()
def closeEvent(self, event):
self.thread.wait()
if __name__ == "__main__":
app = QtGui.QApplication(sys.argv)
window = Window()
window.resize(200, 60)
window.show()
sys.exit(app.exec_())
</code></pre>
| 1 |
2016-09-22T16:10:32Z
|
[
"python",
"loops",
"subprocess",
"pyqt4",
"hang"
] |
Pandas operations on selected columns
| 39,567,528 |
<p>I want to standardize certain columns in my pandas dataframe.</p>
<pre><code>dfTest = pd.DataFrame({
'A':[14.00,90.20,90.95,96.27,91.21],
'B':[103.02,107.26,110.35,114.23,114.68],
'C':['big','small','big','small','small']
})
</code></pre>
<p>This does not work, as only a single column can be set this way. And if only a single column like 'A' is selected, sklearn emits several deprecation warnings.</p>
<pre><code>scaler = StandardScaler()
dfTest['A_scaled', 'B_scaled'] = scaler.fit_transform(dfTest[['A', 'B']])
dfTest
</code></pre>
<p>How could I achieve something like that? </p>
| 0 |
2016-09-19T07:25:27Z
| 39,568,157 |
<p>You could concatenate the scaled columns to the original <code>DF</code> as shown:</p>
<pre><code>scaler = StandardScaler()
scaled_data = pd.DataFrame(data=scaler.fit_transform(dfTest[['A', 'B']]),
columns=['A_scaled', 'B_scaled'])
pd.concat([dfTest, scaled_data], axis=1)
</code></pre>
<p><a href="http://i.stack.imgur.com/LulfV.png" rel="nofollow"><img src="http://i.stack.imgur.com/LulfV.png" alt="Image"></a></p>
| 1 |
2016-09-19T08:03:12Z
|
[
"python",
"pandas",
"scikit-learn"
] |
File as input in python netaddr
| 39,567,577 |
<p>I'm trying to supernet a list of IP networks using netaddr in python.</p>
<p><strong>Code:</strong></p>
<pre><code>import netaddr
from netaddr import *
iplist = [IPNetwork('10.105.205.8/29'),IPNetwork('10.105.205.16/28'),IPNetwork('10.105.205.0/29')]
print '%s' % netaddr.cidr_merge(iplist)
</code></pre>
<p><strong>Output:</strong> which actually works.</p>
<pre><code>[IPNetwork('10.105.205.0/27')]
</code></pre>
<p>Can I provide the input from a file? which contains IP networks in CSV files?</p>
<p><strong>IP_Network.csv</strong></p>
<pre><code>8.35.196.0/23
8.35.196.0/24
8.35.197.0/24
8.35.198.0/23
8.35.198.0/24
8.35.199.0/24
8.35.200.0/21
8.35.200.0/23
8.35.200.0/24
8.35.201.0/24
8.35.202.0/23
</code></pre>
| 0 |
2016-09-19T07:28:20Z
| 39,567,697 |
<p>I am not familiar with the netaddr module but to me it seems you are just building a list from inputs. If you have an input file with a network in each line, would this work:</p>
<pre><code>with open ("inputfile", "r") as fp:
iplist = [IPNetwork(q) for q in fp.read().splitlines()]
</code></pre>
<p>Hannu</p>
| 0 |
2016-09-19T07:35:13Z
|
[
"python",
"linux",
"bash",
"ip"
] |
Node.js to Python communication - server or child process?
| 39,567,607 |
<p>I am currently at a project that is mainly written in Node.js, which involves non-linear curve fitting. After trying to do this task with Node.js itself, I gave up on it, because it is simply impractical. So I was looking for a high level language for solving mathmatical problems like that one I am facing. I had to decide between MATLAB and Python, but since Python has really powerful methods by now and it's free, I decided to go with Python. </p>
<p>Now I need to find a way to communicate between Node.js and Python, and I already found two completely different approaches:</p>
<ol>
<li>Setting up a Python server, that solves the mathematical problem as described <a href="http://stackoverflow.com/questions/10775351/combining-node-js-and-python">here</a></li>
<li>Or spawning a child process from my node.js code as described <a href="http://www.sohamkamani.com/blog/2015/08/21/python-nodejs-comm/" rel="nofollow">here</a></li>
</ol>
<p>Now I would usually go with the client server approach because in my opinion it is cleaner, because it separates both languages. But since somebody is writing a blog post about the second approach, there must be some advantage with it, right? </p>
<p>Would somebody briefly explain what are the advantages and disadvantages with both approaches in this case ?</p>
| 2 |
2016-09-19T07:30:27Z
| 39,568,675 |
<p>There are pros and cons of both approaches.</p>
<h2>Separate server</h2>
<p>Setting up a server is more time consuming, you need to make sure that the Python server is started before the Node app needs to talk with it, you have to restart it if it stops etc.</p>
<p>On the other hand you have a nicely separated service that can be used by other apps and can be easily moved to a separate box or a set of boxes if you need more performance.</p>
<h2>Spawning a process</h2>
<p>Spawning a process is much easier than running a separate server, you always know that it's running when you need it. You don't have to manage a separate server with start up scripts, respawning etc.</p>
<p>On the other hand you are restricted to run the Python program on the same machine as your Node program and if the performance is a problem, then you will have to make it a separate server to be able to run it on a different machine or a set of machines.</p>
<h2>Choice</h2>
<p>The choice is really a matter of your own expectations on the future usage and server load. Both approaches will work and both will have different strong and weak sides.</p>
<h2>Abstraction</h2>
<p>In any case, it may be useful to abstract away that choice in a form of a module, so that your main code wouldn't need to know which choice you made. It will mean that you can change your mind in the future.</p>
<p>Making a module can be as simple as putting the related code in a separate <code>.js</code> file and requiring that file from your main code. That module can export one or more functions that take a callback or return a promise and your main code doesn't need to know what happens under the hood as long as the callback gets called or the promise gets resolved with the expected data.</p>
| 2 |
2016-09-19T08:33:44Z
|
[
"python",
"node.js",
"client-server",
"ipc"
] |
Add streaming step to MR job in boto3 running on AWS EMR 5.0
| 39,567,608 |
<p>I'm trying to migrate a couple of MR jobs that I have written in python from AWS EMR 2.4 to AWS EMR 5.0. Till now I was using boto 2.4, but it doesn't support EMR 5.0, so I'm trying to shift to boto3. Earlier, while using boto 2.4, I used the <code>StreamingStep</code> module to specify input location and output location, as well as the location of my mapper and reducer source files. Using this module, I effectively didn't have to create or upload any jar to run my jobs. However, I cannot find the equivalent for this module anywhere in the boto3 documentation. How can I add a streaming step in boto3 to my MR job, so that I don't have to upload a jar file to run it?</p>
| 7 |
2016-09-19T07:30:31Z
| 39,763,973 |
<p>It's unfortunate that boto3 and the EMR API are rather poorly documented. Minimally, the word counting example would look as follows:</p>
<pre><code>import boto3
emr = boto3.client('emr')
resp = emr.run_job_flow(
Name='myjob',
ReleaseLabel='emr-5.0.0',
Instances={
'InstanceGroups': [
{'Name': 'master',
'InstanceRole': 'MASTER',
'InstanceType': 'c1.medium',
'InstanceCount': 1,
'Configurations': [
{'Classification': 'yarn-site',
'Properties': {'yarn.nodemanager.vmem-check-enabled': 'false'}}]},
{'Name': 'core',
'InstanceRole': 'CORE',
'InstanceType': 'c1.medium',
'InstanceCount': 1,
'Configurations': [
{'Classification': 'yarn-site',
'Properties': {'yarn.nodemanager.vmem-check-enabled': 'false'}}]},
]},
Steps=[
{'Name': 'My word count example',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': [
'hadoop-streaming',
'-files', 's3://mybucket/wordSplitter.py#wordSplitter.py',
'-mapper', 'python2.7 wordSplitter.py',
'-input', 's3://mybucket/input/',
'-output', 's3://mybucket/output/',
'-reducer', 'aggregate']}
}
],
JobFlowRole='EMR_EC2_DefaultRole',
ServiceRole='EMR_DefaultRole',
)
</code></pre>
<p>I don't remember needing to do this with boto, but I have had issues running the simple streaming job properly without disabling <code>vmem-check-enabled</code>.</p>
<p>Also, if your script is located somewhere on S3, download it using <code>-files</code> (appending <code>#filename</code> to the argument make the downloaded file available as <code>filename</code> in the cluster).</p>
| 2 |
2016-09-29T07:26:17Z
|
[
"python",
"amazon-web-services",
"emr",
"boto3"
] |
Python - text file with all values 0 to 2^24 in HEX
| 39,567,700 |
<p>I am trying to create a file with all possible values for 0 up to 2^24 in HEX.</p>
<p>This is what I have so far:</p>
<pre><code>file_name = "values.txt"
counter = 0
value = 0x0000000000
with open (file_name, 'w') as writer:
while counter < 16777216:
data_to_write = str(value) + '\n'
writer.write(data_to_write)
counter = counter + 1
value = value + 0x0000000001
</code></pre>
<p>This does what I want, but with integers. Is there an easy way to write HEX values to the file instead (still as strings)?</p>
<p>Thank you</p>
| -2 |
2016-09-19T07:35:18Z
| 39,567,805 |
<p>Just use the string <code>format</code> method when writing (note that it must be applied to the integer <code>value</code>, not the already-converted string):</p>
<pre><code>writer.write("{:#x}\n".format(value))
</code></pre>
<p>Example:</p>
<pre><code>>>> "{:#x}".format(123324)
'0x1e1bc'
>>> "{:#X}".format(123324)
'0X1E1BC'
</code></pre>
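<p>Putting that together with the loop from the question, a sketch of the whole write using a zero-padded format spec (without the <code>0x</code> prefix; the helper name and the six-digit padding, which is enough for all 24-bit values, are my own choices):</p>

```python
def write_hex_values(path, count):
    # One value per line, zero-padded to six hex digits (enough for 2**24)
    with open(path, 'w') as writer:
        for value in range(count):
            writer.write("{:06x}\n".format(value))

# write_hex_values("values.txt", 2 ** 24)  # the full run from the question
```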
| 0 |
2016-09-19T07:41:37Z
|
[
"python"
] |
Python - text file with all values 0 to 2^24 in HEX
| 39,567,700 |
<p>I am trying to create a file with all possible values for 0 up to 2^24 in HEX.</p>
<p>This is what I have so far:</p>
<pre><code>file_name = "values.txt"
counter = 0
value = 0x0000000000
with open (file_name, 'w') as writer:
while counter < 16777216:
data_to_write = str(value) + '\n'
writer.write(data_to_write)
counter = counter + 1
value = value + 0x0000000001
</code></pre>
<p>This does what I want, but with integers. Is there an easy way to write HEX values to the file instead (still as strings)?</p>
<p>Thank you</p>
| -2 |
2016-09-19T07:35:18Z
| 39,568,050 |
<p>Thank you for your help.</p>
<p>It is working now:</p>
<pre><code>file_name = "value.txt"
counter = 0
value = 0x0000000000
with open (file_name, 'w') as writer:
while counter < 10000:
data_to_write = str(hex(value)[2:].zfill(6)) + '\n'
writer.write(data_to_write)
counter = counter + 1
value = value + 0x0000000001
</code></pre>
| 0 |
2016-09-19T07:56:38Z
|
[
"python"
] |
Python Pandas - how is 25 percentile calculated by describe function
| 39,567,712 |
<p>For a given dataset in a data frame, when I apply the <code>describe</code> function, I get the basic stats which include min, max, 25%, 50% etc.</p>
<p>For example:</p>
<pre><code>data_1 = pd.DataFrame({'One':[4,6,8,10]},columns=['One'])
data_1.describe()
</code></pre>
<p>The output is:</p>
<pre><code> One
count 4.000000
mean 7.000000
std 2.581989
min 4.000000
25% 5.500000
50% 7.000000
75% 8.500000
max 10.000000
</code></pre>
<p><strong>My question is</strong>: What is the mathematical formula to calculate the 25%?</p>
<p>1) Based on what I know, it is:</p>
<pre><code>formula = percentile * n (n is number of values)
</code></pre>
<p>In this case:</p>
<pre><code>25/100 * 4 = 1
</code></pre>
<p>So the first position is number 4 but according to the describe function it is <code>5.5</code>.</p>
<p>2) Another example says - if you get a whole number then take the average of 4 and 6 - which would be 5 - still does not match <code>5.5</code> given by describe.</p>
<p>3) Another tutorial says - you take the difference between the 2 numbers - multiply by 25% and add to the lower number:</p>
<pre><code>25/100 * (6-4) = 1/4*2 = 0.5
</code></pre>
<p>Adding that to the lower number: <code>4 + 0.5 = 4.5</code></p>
<p>Still not getting <code>5.5</code>.</p>
<p>Can someone please clarify?</p>
| 2 |
2016-09-19T07:36:23Z
| 39,568,222 |
<p>In the <a href="https://github.com/pydata/pandas/blob/master/pandas/core/series.py">pandas documentation</a> there is information about the computation of quantiles, where a reference to numpy.percentile is made: </p>
<blockquote>
<p>Return value at the given quantile, a la numpy.percentile.</p>
</blockquote>
<p>Then, checking numpy.percentile <a href="http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.percentile.html">explanation</a>, we can see that the interpolation method is set to <strong>linear</strong> by default:</p>
<blockquote>
<p>linear: i + (j - i) * fraction, where fraction is the fractional part
of the index surrounded by i and j</p>
</blockquote>
<p>For your specfic case, the 25% quantile results from:</p>
<p>res_25 = 4 + (6-4)*(3/4) = 5.5</p>
<p>For the 75% quantile we thus get: </p>
<p>res_75 = 8 + (10-8)*(1/4) = 8.5</p>
<p>If you set the interpolation method to "midpoint", then you will get the results that you thought of.</p>
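<p>The linear interpolation above can be checked with a few lines of plain Python (a sketch of what numpy.percentile's default method computes, not its actual implementation):</p>

```python
def percentile_linear(data, q):
    # position = (n - 1) * q/100; interpolate between the two neighbours
    data = sorted(data)
    pos = (len(data) - 1) * q / 100.0
    i = int(pos)
    frac = pos - i
    if i + 1 < len(data):
        return data[i] + (data[i + 1] - data[i]) * frac
    return data[i]

print(percentile_linear([4, 6, 8, 10], 25))  # 5.5
print(percentile_linear([4, 6, 8, 10], 75))  # 8.5
```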
| 5 |
2016-09-19T08:07:16Z
|
[
"python",
"pandas",
"percentile"
] |
sympy: how to evaluate integral of absolute value
| 39,568,077 |
<p>When I try to simplify the following integral in <code>sympy</code>, it will not evaluate, i.e. the output is $\int_{-1}^1 |z| dz$ while the output I expect is 1.</p>
<pre><code>z = symbols('z', real=True)
a = integrate(abs(z), (z, -1, 1))
simplify(a)
</code></pre>
<p>Similar integral without the absolute value on <code>z</code> will evaluate.</p>
<p>How can I get <code>sympy</code> to evaluate this integral?</p>
| 2 |
2016-09-19T07:57:56Z
| 39,568,244 |
<p>I believe you should use Sympy's built-in <a href="http://docs.sympy.org/0.7.0/modules/functions.html" rel="nofollow">Abs()</a> function.</p>
<p>Enjoy!</p>
| -3 |
2016-09-19T08:09:02Z
|
[
"python",
"sympy",
"integral",
"absolute-value"
] |
sympy: how to evaluate integral of absolute value
| 39,568,077 |
<p>When I try to simplify the following integral in <code>sympy</code>, it will not evaluate, i.e. the output is $\int_{-1}^1 |z| dz$ while the output I expect is 1.</p>
<pre><code>z = symbols('z', real=True)
a = integrate(abs(z), (z, -1, 1))
simplify(a)
</code></pre>
<p>Similar integral without the absolute value on <code>z</code> will evaluate.</p>
<p>How can I get <code>sympy</code> to evaluate this integral?</p>
| 2 |
2016-09-19T07:57:56Z
| 39,622,177 |
<p><code>integrate</code> already does all it can to evaluate an integral. If you get an <code>Integral</code> object back, that means it couldn't evaluate it. The only thing that might help is rewriting the integrand in a way that SymPy can recognize. </p>
<p>Looking at <a href="https://github.com/sympy/sympy/issues/8430" rel="nofollow">this issue</a>, it looks like a workaround is to rewrite it as Heaviside:</p>
<pre><code>In [201]: z = symbols('z', real=True)
In [202]: a = integrate(abs(z).rewrite(Heaviside), (z, -1, 1))
In [203]: a
Out[203]: 1
</code></pre>
| 2 |
2016-09-21T16:44:17Z
|
[
"python",
"sympy",
"integral",
"absolute-value"
] |
Error when trying to install PyCrypto
| 39,568,110 |
<p>I'm using a Mac with the latest OS X update. I've been trying to install PyCrypto from the Terminal, but I'm getting the error shown in the image below. The command I used is <code>sudo pip install pycrypto</code>. Can you please help me with this issue? How do I resolve this? Thanks for your answers.</p>
<p><a href="http://i.stack.imgur.com/W77XY.png" rel="nofollow"><img src="http://i.stack.imgur.com/W77XY.png" alt="enter image description here"></a></p>
<p>Here is the error:</p>
<pre><code>macfive:Desktop admin$ sudo pip install pycrypto
The directory '/Users/admin/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/admin/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting pycrypto
Downloading pycrypto-2.6.1.tar.gz (446kB)
    100% |████████████████████████████████| 450kB 2.4MB/s
Installing collected packages: pycrypto
Running setup.py install for pycrypto ... error
Complete output from command /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -u -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-CYttJL/pycrypto/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-mWAGUD-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
.
.
.
src/hash_template.c:291: warning: return from incompatible pointer type
src/hash_template.c: At top level:
src/hash_template.c:306: error: initializer element is not constant
    src/hash_template.c:306: error: (near initialization for 'ALG_functions[1].ml_name')
    src/hash_template.c:306: error: initializer element is not constant
    src/hash_template.c:306: error: (near initialization for 'ALG_functions[1].ml_meth')
fatal error: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/lipo: can't figure out the architecture type of: /var/tmp//ccCeO0Zf.out
error: command 'gcc-4.2' failed with exit status 1
----------------------------------------
Command "/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -u -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-CYttJL/pycrypto/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-mWAGUD-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/tmp/pip-build-CYttJL/pycrypto/
</code></pre>
<p>Error is to big to copy it all. So I just copied the beginning and the end.</p>
| 1 |
2016-09-19T08:00:00Z
| 39,571,648 |
<p>You need to install the Python development files. I think it will work. Try </p>
<pre><code>apt-get install autoconf g++ python2.7-dev
</code></pre>
<p>Or</p>
<pre><code>sudo apt-get install python-dev
</code></pre>
<p>Either one of the above and then this below one </p>
<pre><code>pip install pycrypto
</code></pre>
| 0 |
2016-09-19T11:04:29Z
|
[
"python",
"pip",
"pycrypto"
] |
Scraping a specific table in selenium
| 39,568,186 |
<p>I am trying to scrape a table found inside a div on a page. </p>
<p>Basically here's my attempt so far:</p>
<pre><code># NOTE: Download the chromedriver driver
# Then move exe file on C:\Python27\Scripts
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
import sys
driver = webdriver.Chrome()
driver.implicitly_wait(10)
URL_start = "http://www.google.us/trends/explore?"
date = '&date=today%203-m' # Last 90 days
location = "&geo=US"
symbol = sys.argv[1]
query = 'q='+symbol
URL = URL_start+query+date+location
driver.get(URL)
table = driver.find_element_by_xpath('//div[@class="line-chart"]/table/tbody')
print table.text
</code></pre>
<p>If I run the script, with an argument like "stackoverflow" I should be able to scrape this site: <a href="https://www.google.us/trends/explore?date=today%203-m&geo=US&q=stackoverflow" rel="nofollow">https://www.google.us/trends/explore?date=today%203-m&geo=US&q=stackoverflow</a></p>
<p>Apparently the xpath I have there is not working, the program is not printing anything, it's just plain blank. </p>
<p>I am basically in need on the values of the chart that appears on that website. And those values (and dates) are inside a table, here is a screenshot:</p>
<p><a href="http://i.stack.imgur.com/wmQkb.png" rel="nofollow"><img src="http://i.stack.imgur.com/wmQkb.png" alt="enter image description here"></a></p>
<p>Could you help me locate the correct xpath of the table to retrieve those values using selenium on python?</p>
<p>Thanks in advance!</p>
| 0 |
2016-09-19T08:05:06Z
| 39,570,311 |
<p>You can use an XPath as follows:</p>
<pre><code>//div[@class="line-chart"]/div/div[1]/div/div/table/tbody/tr
</code></pre>
<p>Here I will refine my answer and make some changes to your code; now it works.</p>
<pre><code># NOTE: Download the chromedriver driver
# Then move exe file on C:\Python27\Scripts
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
import sys
from lxml.html import fromstring,tostring
driver = webdriver.Chrome()
driver.implicitly_wait(20)
'''
URL_start = "http://www.google.us/trends/explore?"
date = '&date=today%203-m' # Last 90 days
location = "&geo=US"
symbol = sys.argv[1]
query = 'q='+symbol
URL = URL_start+query+date+location
'''
driver.get("https://www.google.us/trends/explore?date=today%203-m&geo=US&q=stackoverflow")
table_trs = driver.find_elements_by_xpath('//div[@class="line-chart"]/div/div[1]/div/div/table/tbody/tr')
for tr in table_trs:
#print tr.get_attribute("innerHTML").encode("UTF-8")
td = tr.find_elements_by_xpath(".//td")
if len(td)==2:
print td[0].get_attribute("innerHTML").encode("UTF-8") +"\t"+td[1].get_attribute("innerHTML").encode("UTF-8")
</code></pre>
| 0 |
2016-09-19T09:57:48Z
|
[
"python",
"selenium",
"xpath",
"web-scraping"
] |
How can I print a string depending on the list value?
| 39,568,197 |
<p>I want to print a string depending on the values in the list. Values can be 0 or 1. For example:</p>
<pre><code># Example [a,b,c] = [0,0,1] -- > print str c
# [1,0,1] -- print str a and str c
index_list = [0,0,1] # Example
str_a = "str_a"
str_b = "str_b"
str_c = "str_c"
print str
</code></pre>
| 0 |
2016-09-19T08:05:41Z
| 39,568,262 |
<pre><code>for condition, string in zip(index_list, [str_a, str_b, str_c]):
if condition:
print string
</code></pre>
<p>Since the question is tagged as <a href="/questions/tagged/python-2.7" class="post-tag" title="show questions tagged 'python-2.7'" rel="tag">python-2.7</a>, <a href="https://docs.python.org/2/library/functions.html#zip" rel="nofollow"><code>zip</code></a> produces a new list of tuples. If you have a large list of indexes and strings, consider using <a href="https://docs.python.org/2/library/itertools.html#itertools.izip" rel="nofollow"><code>itertools.izip</code></a>, or upgrade to python 3.</p>
<p><a href="http://stackoverflow.com/a/39568508/2681632">This answer</a> provides a standard lib function exactly for this pattern, removing the need for explicit condition checking.</p>
| 4 |
2016-09-19T08:10:17Z
|
[
"python",
"python-2.7"
] |
How can I print a string depending on the list value?
| 39,568,197 |
<p>I want to print a string depending on the values in the list. Values can be 0 or 1. For example:</p>
<pre><code># Example [a,b,c] = [0,0,1] -- > print str c
# [1,0,1] -- print str a and str c
index_list = [0,0,1] # Example
str_a = "str_a"
str_b = "str_b"
str_c = "str_c"
print str
</code></pre>
| 0 |
2016-09-19T08:05:41Z
| 39,568,324 |
<pre><code>>>> a = [str_a,str_b,str_c]
>>> b= [0,0,1]
>>> ','.join(i for i,j in zip(a,b) if j)
'str_c'
</code></pre>
| 2 |
2016-09-19T08:14:30Z
|
[
"python",
"python-2.7"
] |
How can I print a string depending on the list value?
| 39,568,197 |
<p>I want to print a string depending on the values in the list. Values can be 0 or 1. For example:</p>
<pre><code># Example [a,b,c] = [0,0,1] -- > print str c
# [1,0,1] -- print str a and str c
index_list = [0,0,1] # Example
str_a = "str_a"
str_b = "str_b"
str_c = "str_c"
print str
</code></pre>
| 0 |
2016-09-19T08:05:41Z
| 39,568,508 |
<p>Here's an elegant way. Use the compress function in itertools:</p>
<pre><code>import itertools as it
l1 = [1, 0, 1]
l2 = ["a", "b", "c"]
for item in it.compress(l2, l1):
print item
</code></pre>
<p>Output:</p>
<pre><code>=================== RESTART: C:/Users/Joe/Desktop/stack.py ===================
a
c
>>>
</code></pre>
| 7 |
2016-09-19T08:24:34Z
|
[
"python",
"python-2.7"
] |
read_sql and redshift giving error on unicode
| 39,568,229 |
<p>Query 1: Using pandas read_sql to read from MySQL. The resulting dataframe has a column whose datatype is unicode strings. This column is converted to a tuple and used in the following query.</p>
<p>Query 2: Using pandas read_sql to read from Redshift. The query is something like </p>
<pre><code>select b.a from b where b.c in {0}
</code></pre>
<p>On one string, it gives me an error. The string is like u"Hello 'There" which is a valid unicode string.
The error is </p>
<pre><code>syntax error at or near ""Hello 'There""
</code></pre>
<p>But it shouldn't take it that way. It's treating it as an empty string ("") followed by an unparseable remainder (Hello 'There"")</p>
<p>Should some configuration be changed, or some parameters added in read_sql?</p>
| 0 |
2016-09-19T08:07:48Z
| 39,568,585 |
<p>I suspect that error message is being generated by substituting a marker (I've used <code>{something}</code>) into a template like</p>
<pre><code>'syntax error at or near "{something}"'
</code></pre>
<p>This means you should instead see it as a complaint about a single string whose value would appear to be</p>
<pre><code>'"Hello \' There"'
</code></pre>
<p>In other words, it seems likely that you are dealing with a fifteen-character string starting and ending in a double-quote character, though there is always the possibility that some inappropriate conversion or quoting has added the double-quotes. No code, no traceback, so difficult to tell.</p>
<p>It seems as though somewhere in the processing chain there is a failure to correctly handle single apostrophes in SQL string literals. In SQL that string would be correctly represented as</p>
<pre><code>'"Hello, ''There"'
</code></pre>
<p>You will observe that the single-quote embedded in the string had had to be doubled to represent it - the SQL syntax specifically denotes this as the correct way to embed apostrophes in SQL string constants. Since you don't actually show details of your code there is a limit to how helpful this answer can be.</p>
<p>I suspect you might be writing your own SQL, in which case given a query in the variable <code>sql</code> you might want to consider using</p>
<pre><code>sql.replace("'", "''")
</code></pre>
<p>as the value injected into the query. But beware all the usual warnings about SQL injection if you plan to incorporate user inputs into this scheme.</p>
| 0 |
2016-09-19T08:28:47Z
|
[
"python",
"mysql",
"pandas",
"unicode",
"amazon-redshift"
] |
Python > Connection with JDBC to Oracle service name (jaydebeapi)
| 39,568,378 |
<p>This sample code is used to connect in Python to Oracle SID. </p>
<pre><code>import jpype
import jaydebeapi
jHome = jpype.getDefaultJVMPath()
jpype.startJVM(jHome, '-Djava.class.path=/path/to/ojdbc6.jar')
conn = jaydebeapi.connect('oracle.jdbc.driver.OracleDriver','jdbc:oracle:thin:user/password@DB_HOST_IP:1521:DB_NAME')
</code></pre>
<p>How can we connect to Oracle Service Name?</p>
| 1 |
2016-09-19T08:18:32Z
| 39,569,187 |
<p>Regarding your connection string, you could use <code>TNS</code> syntax (<a href="https://docs.oracle.com/cd/B28359_01/java.111/b31224/jdbcthin.htm" rel="nofollow">read on, here</a>),
as opposed to the <code>host:port:sid</code> syntax you're using now.
In that case you would describe <code>SERVICE_NAME</code> inside <code>CONNECT_DATA</code>, as follows:</p>
<pre><code> jaydebeapi.connect('oracle.jdbc.driver.OracleDriver','[MYUSER]/[MYPASS]@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=[MYHOST])(PORT=1521))(CONNECT_DATA=(SERVER=dedicated) (SERVICE_NAME=[MYSERVICENAME])))')
</code></pre>
<p>By the way - you could also use <a href="https://pypi.python.org/pypi/cx_Oracle/5.2.1" rel="nofollow">cx_Oracle</a> to connect to oracle - no <code>java</code> hassle. (just a suggestion)</p>
| 0 |
2016-09-19T09:00:38Z
|
[
"java",
"python",
"oracle",
"jdbc",
"jaydebeapi"
] |
Python > Connection with JDBC to Oracle service name (jaydebeapi)
| 39,568,378 |
<p>This sample code is used to connect in Python to Oracle SID. </p>
<pre><code>import jpype
import jaydebeapi
jHome = jpype.getDefaultJVMPath()
jpype.startJVM(jHome, '-Djava.class.path=/path/to/ojdbc6.jar')
conn = jaydebeapi.connect('oracle.jdbc.driver.OracleDriver','jdbc:oracle:thin:user/password@DB_HOST_IP:1521:DB_NAME')
</code></pre>
<p>How can we connect to Oracle Service Name?</p>
| 1 |
2016-09-19T08:18:32Z
| 39,633,617 |
<p>This way should work</p>
<pre><code> conn = jaydebeapi.connect('oracle.jdbc.driver.OracleDriver','jdbc:oracle:thin:user/password@//DB_HOST_IP:1521/DB_NAME')
</code></pre>
| 0 |
2016-09-22T08:10:35Z
|
[
"java",
"python",
"oracle",
"jdbc",
"jaydebeapi"
] |
web.py select returning no results, but length of that list does (also manual query in sql prompt returns results)
| 39,568,499 |
<p>I am working on my first (bigger) python application but I am running into some issues. I am trying to select entries from a table using the web.py import (I am using this since I will be using a web front-end later).</p>
<p>Below is my (simplified) code:</p>
<pre><code>db = web.database(dbn='mysql', host='xxx', port=3306, user='monitor', pw='xxx', db='monitor')
dict = dict(hostname=nodeName)
#nodes = db.select('Nodes', dict,where="hostName = $hostname")
nodes = db.query('SELECT * FROM Nodes') <-- I have tried both, but have comparable results (this returns more entries)
length = len(list(nodes))
print(length)
print(list(nodes))
print(list(nodes)[0])
</code></pre>
<p>Below is the output from python:</p>
<pre><code>0.03 (1): SELECT * FROM Nodes
6 <-- Length is correct
[] <-- Why is this empty?
Traceback (most recent call last):
File "monitor.py", line 30, in <module>
print(list(nodes)[0]) <-- If it is empty I can't select first element
IndexError: list index out of range
</code></pre>
<p>Below is mySQL output:</p>
<pre><code>mysql> select * from monitor.Nodes;
+--------+-------------+
| nodeId | hostName |
+--------+-------------+
| 1 | TestServer |
| 2 | raspberryPi |
| 3 | TestServer |
| 4 | TestServer |
| 5 | TestServer |
| 6 | TestServer |
+--------+-------------+
6 rows in set (0.00 sec)
</code></pre>
<p>Conclusion: table contains entries, and the select/query statement is able to get them partially (it gets the length, but not the actual values?)</p>
<p>I have tried mutliple ways but currently I am not able to get the what I want. I want to select the data from my table and use it in my code.</p>
<p>Thanks for helping</p>
| 0 |
2016-09-19T08:24:21Z
| 39,575,441 |
<p>Thanks to the people over at reddit I was able to solve the issue: <a href="https://www.reddit.com/r/learnpython/comments/53hdq1/webpy_select_returning_no_results_but_length_of/" rel="nofollow">https://www.reddit.com/r/learnpython/comments/53hdq1/webpy_select_returning_no_results_but_length_of/</a></p>
<p>Bottom line is: the implementation of the query method already fetches the data, so the second time I am calling list(nodes) the data is already gone, hence the empty result.</p>
<p>Solution is to store the list(nodes) in a variable and work from that.</p>
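<p>The same behaviour can be reproduced with any one-shot iterator, which is what the result object behaves like here (a generator stands in for the query result in this sketch):</p>

```python
nodes = (name for name in ['TestServer', 'raspberryPi'])  # stand-in for db.query(...)
rows = list(nodes)   # first list() consumes the iterator
print(len(rows))     # 2
print(list(nodes))   # [] -- already exhausted
print(rows[0])       # TestServer -- work from the saved list instead
```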
| 0 |
2016-09-19T14:18:15Z
|
[
"python",
"mysql",
"database",
"select",
"web.py"
] |
Get GUI element value from QThread
| 39,568,543 |
<p>How can a QThread get text from a QLineEdit?</p>
<p>I tried <code>self.t.updateSignal.connect(self.inputedittext.text)</code> to get the QLineEdit value, but I get an error:</p>
<blockquote>
<p>TypeError: unsupported operand type(s) for +=:
'PyQt4.QtCore.pyqtBoundSignal' and 'int'</p>
</blockquote>
<p>or I get the message:</p>
<blockquote>
<p>bound signal updateSignal of xxxxxx at 0x02624580</p>
</blockquote>
<p>Code:</p>
<pre><code>import sys
import time
from PyQt4 import QtGui, QtCore
class mc(QtGui.QWidget):
def __init__(self):
super(mc,self).__init__()
self.initUI()
def initUI(self):
self.setWindowTitle('QThread')
self.inputedittext = QtGui.QLineEdit()
self.startbutton = QtGui.QPushButton('start')
self.stopbutton = QtGui.QPushButton('stop')
self.textlable = QtGui.QLabel('0')
lv1 = QtGui.QVBoxLayout()
lb1 = QtGui.QHBoxLayout()
lb1.addWidget(self.inputedittext)
lb1.addWidget(self.startbutton)
lb1.addWidget(self.stopbutton)
lb2 = QtGui.QHBoxLayout()
lb2.addWidget(self.textlable)
lv1.addLayout(lb1)
lv1.addLayout(lb2)
self.setLayout(lv1)
self.t = test_QThread()
self.t.updateSignal.connect(self.inputedittext.text)
self.startbutton.clicked.connect(self.start_t)
self.connect(self.t,QtCore.SIGNAL('ri'),self.setlable)
def setlable(self,i):
self.textlable.setText(i)
def start_t(self):
self.t.start()
# print(self.inputedittext.text())
class test_QThread(QtCore.QThread):
updateSignal = QtCore.pyqtSignal(QtCore.QString)
def __init__(self):
QtCore.QThread.__init__(self)
def run(self):
i = self.updateSignal
# i=0
go = True
while go:
i+=1
time.sleep(1)
self.emit(QtCore.SIGNAL('ri'),str(i))
print('run...')
def main():
app = QtGui.QApplication(sys.argv)
mw = mc()
mw.show()
app.exec_()
if __name__ == '__main__':
main()
</code></pre>
| 1 |
2016-09-19T08:26:33Z
| 39,573,444 |
<p>Use signals to communicate from thread to gui, and from gui to thread:</p>
<pre><code>class mc(QtGui.QWidget):
...
def initUI(self):
...
self.t = test_QThread()
self.t.progressSignal.connect(self.setlable)
self.t.requestSignal.connect(self.sendText)
self.startbutton.clicked.connect(self.start_t)
self.stopbutton.clicked.connect(self.stop_t)
def sendText(self):
self.t.updateSignal.emit(self.inputedittext.text())
def setlable(self, i):
self.textlable.setText(str(i))
def start_t(self):
self.t.start()
def stop_t(self):
self.t.stop()
class test_QThread(QtCore.QThread):
requestSignal = QtCore.pyqtSignal()
updateSignal = QtCore.pyqtSignal(str)
progressSignal = QtCore.pyqtSignal(int)
def __init__(self, parent=None):
super(test_QThread, self).__init__(parent)
self.updateSignal.connect(self.updateSlot)
self._running = False
self._count = 0
def updateSlot(self, text):
print 'received: "%s", count: %d' % (text, self._count)
def stop(self):
self._running = False
def run(self):
self._count = 0
self._running = True
while self._running:
self._count += 1
time.sleep(1)
self.progressSignal.emit(self._count)
if self._count % 3 == 0:
self.requestSignal.emit()
</code></pre>
| 0 |
2016-09-19T12:39:58Z
|
[
"python",
"pyqt4",
"signals-slots",
"qthread"
] |
accessing MySQL database in XAMPP through flask framework
| 39,568,579 |
<p>I just started looking into the Flask framework for a pet project of mine, and so I have been working on a tutorial on Envato Tuts+. I built the database in MySQL inside the XAMPP server and I am trying to access it, but I do not know how to go about doing it.</p>
<p>I get this error in the chrome console <code>Failed to load resource: the server responded with a status of 500 (INTERNAL SERVER ERROR)</code>. I have already started XAMPP prior to executing this code.</p>
<p>I know that the <code>@app.route('/signUp',methods=['POST','GET'])</code> is the key method to look into and also the key lies in <code>app.config['MYSQL_DATABASE_HOST']</code>. I thought that putting localhost as a value would work, but that is not the case and so I tried different values like <code>http://localhost:80</code> etc but that does not seem to work. What is it that I am missing? Please help me out</p>
<p>Here is the backend code :</p>
<pre><code>from flask import Flask, render_template, request, json
from flask_mysqldb import MySQL
from werkzeug import generate_password_hash, check_password_hash
mysql = MySQL()
app = Flask(__name__)
# MySQL configurations
app.config['MYSQL_DATABASE_USER'] = 'root'
app.config['MYSQL_DATABASE_PASSWORD'] = ''
app.config['MYSQL_DATABASE_DB'] = 'BucketList'
app.config['MYSQL_DATABASE_HOST'] = 'localhost'
mysql.init_app(app)
@app.route('/')
def main():
return render_template('index.html')
@app.route('/showSignUp')
def showSignUp():
return render_template('signup.html')
@app.route('/signUp',methods=['POST','GET'])
def signUp():
try:
_name = request.form['inputName']
_email = request.form['inputEmail']
_password = request.form['inputPassword']
# validate the received values
if _name and _email and _password:
# All Good, let's call MySQL
conn = mysql.connect()
cursor = conn.cursor()
_hashed_password = generate_password_hash(_password)
cursor.callproc('sp_createUser',(_name,_email,_hashed_password))
data = cursor.fetchall()
if len(data) is 0:
conn.commit()
return json.dumps({'message':'User created successfully !'})
else:
return json.dumps({'error':str(data[0])})
else:
return json.dumps({'html':'<span>Enter the required fields</span>'})
except Exception as e:
return json.dumps({'error':str(e)})
finally:
cursor.close()
conn.close()
if __name__ == "__main__":
app.run(port=5002)
</code></pre>
| 0 |
2016-09-19T08:28:39Z
| 39,568,905 |
<p>MYSQL_DATABASE_HOST should be the address of your <em>database host</em>, hence the name, not your web app. </p>
<p>Looking at <a href="http://flask-mysql.readthedocs.io/en/latest/" rel="nofollow">the docs</a>, the default is "localhost" and the matching port setting is 3306, which should be fine for your setup; you should remove your definition altogether. </p>
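<p>In other words, a sketch against the question's setup (the port key name follows the flask-mysql docs and is an assumption about your extension version):</p>

```python
# XAMPP's bundled MySQL normally listens on localhost:3306, which are
# already the extension's defaults -- shown explicitly here for clarity:
app.config['MYSQL_DATABASE_HOST'] = 'localhost'
app.config['MYSQL_DATABASE_PORT'] = 3306
```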
| 0 |
2016-09-19T08:46:09Z
|
[
"python",
"mysql",
"xampp"
] |
Timer issue in Python/Pygame game
| 39,568,593 |
<p>I am relatively new to Python. I have been modifying this game code and included a start screen etc. The problem I'm having is that when the game ends I am trying to have it start again when a user inputs y at 12. I have got the key press down but there seems to be an issue with the timer. In 6.4 it is using get_ticks() but I can't seem to reinitialize this timer when the game restarts. 6.4 is used to draw and count down the clock. Then further down in 10 it does a comparison for a win/lose check which then leads onto 11 which determines whether a win/lose display is shown. The game is meant to only have a 90 second time limit and then restart when the game re-initializes (ie player selects y at 12). (I have set it to 9 seconds for debugging). The health side of the game works but again the timer is affected when the game restarts. Please help if you can. See code below.</p>
<pre><code># 1 - Import library
import pygame
from pygame.locals import *
import math
import random
import time
import sys
# 2 - Initialize the game
width, height = 640, 480
keys = [False, False, False, False]
fps = 15
# 2.0.1 - Initialises colours R G B
white = (255, 255, 255)
black = ( 0, 0, 0)
red = (255, 0, 0)
green = ( 0, 255, 0)
darkgreen = ( 0, 155, 0)
darkgrey = ( 40, 40, 40)
bgcolor = black
def main():
print "1"
global screen, basicfont, fpsclock
pygame.init()
fpsclock = pygame.time.Clock()
screen = pygame.display.set_mode((width, height))
# 2.0.2 - Sets font and size for press any key on start screen ---
basicfont = pygame.font.Font('freesansbold.ttf', 15)
# 2.0.3 - Sets Game name in window much like Title tag in HTML -
pygame.display.set_caption('Developed by: xxxx')
#-------------------------------------------------------------
# 2.0.4 - Puts a game icon next to your caption ----------------
gameIcon = pygame.image.load("resources/images/plane.png")
pygame.display.set_icon(gameIcon)
#-------------------------------------------------------
startScreen()
def restart():
pygame.quit()
main()
def drawPressKeyMsg():
pressKeySurf = basicfont.render('Press any key to play.', True, white)
pressKeyRect = pressKeySurf.get_rect()
pressKeyRect.topleft = (width - 200, height - 30)
screen.blit(pressKeySurf, pressKeyRect)
def terminate():
pygame.quit()
sys.exit()
def checkForKeyPress():
if len(pygame.event.get(QUIT)) > 0:
terminate()
keyUpEvents = pygame.event.get(KEYUP)
if len(keyUpEvents) == 0:
return None
if keyUpEvents[0].key == K_ESCAPE:
terminate()
return keyUpEvents[0].key
def startScreen():
# 2.0. - Change start screen attributes between hashes --------------
titleFont = pygame.font.Font('freesansbold.ttf', 60)
titleSurf1 = titleFont.render('Naval Warfare', True, red)
titleSurf2 = titleFont.render('Naval Warfare', True, green)
#---------------------------------------------------------------------
degrees1 = 0
degrees2 = 0
while True:
background = pygame.image.load("resources/images/falcons.png")
screen.blit(background, (0,0))
rotatedSurf1 = pygame.transform.rotate(titleSurf1, degrees1)
rotatedRect1 = rotatedSurf1.get_rect()
rotatedRect1.center = (width/2, height/2)
screen.blit(rotatedSurf1, rotatedRect1)
rotatedSurf2 = pygame.transform.rotate(titleSurf2, degrees2)
rotatedRect2 = rotatedSurf2.get_rect()
rotatedRect2.center = (width/2, height/2)
screen.blit(rotatedSurf2, rotatedRect2)
drawPressKeyMsg()
if checkForKeyPress():
pygame.event.get() # clear event queue
runGame()
return
pygame.display.update()
fpsclock.tick(fps)
degrees1 -= 7 # rotate by -7 degrees each frame
degrees2 += 7 # rotate by 7 degrees each frame
def runGame():
print"2"
# 2.1 - Changes initial player position --
playerpos=[100,240]
# ----------------------------------------
acc=[0,0]
arrows=[]
badtimer=100
badtimer1=0
badguys=[[640,100]]
healthvalue=194
pygame.mixer.init()
# 3 - Load image
# 3.0.1 - Remember when using own images change file names below ----------
player = pygame.image.load("resources/images/plane.png")
grass = pygame.image.load("resources/images/wave.png")
castle = pygame.image.load("resources/images/carrier.png")
arrow = pygame.image.load("resources/images/missile.png")
badguyimg1 = pygame.image.load("resources/images/helo.png")
badguyimg=badguyimg1
healthbar = pygame.image.load("resources/images/healthbar.png")
health = pygame.image.load("resources/images/health.png")
gameover = pygame.image.load("resources/images/gameover.png")
youwin = pygame.image.load("resources/images/youwin.png")
# 3.1 - Load audio
hit = pygame.mixer.Sound("resources/audio/explode.wav")
enemy = pygame.mixer.Sound("resources/audio/enemy.wav")
shoot = pygame.mixer.Sound("resources/audio/shoot.wav")
hit.set_volume(0.05)
enemy.set_volume(0.05)
shoot.set_volume(0.05)
pygame.mixer.music.load('resources/audio/moonlight.wav')
pygame.mixer.music.play(-1, 0.0)
pygame.mixer.music.set_volume(0.25)
# 4 - keep looping through
running = 1
exitcode = 0
print "exitcode before if: ", exitcode
while running:
print running, " running"
badtimer-=1
# 5 - clear the screen before drawing it again
screen.fill(0)
# 6 - draw the castles and the background
# ------------------------------------------------------------------------
# 6.0.1 - If using a 640 x 480 image remove code between hashes
# for x in range(width/grass.get_width()+1):
# for y in range(height/grass.get_height()+1):
# screen.blit(grass,(x*100,y*100))
# ------------------------------------------------------------------------
# 6.0.2- Replace code below with: screen.blit(grass,(0,0)) and remove indent
screen.blit(grass,(0,0))
# ------------------------------------------------------------------------
screen.blit(castle,(0,30))
screen.blit(castle,(0,135))
screen.blit(castle,(0,240))
screen.blit(castle,(0,345 ))
# 6.1 - Set player position and rotation
position = pygame.mouse.get_pos()
angle = math.atan2(position[1]-(playerpos[1]+32),position[0]-(playerpos[0]+26))
playerrot = pygame.transform.rotate(player, 360-angle*57.29)
playerpos1 = (playerpos[0]-playerrot.get_rect().width/2, playerpos[1]-playerrot.get_rect().height/2)
screen.blit(playerrot, playerpos1)
# 6.2 - Draw arrows
for bullet in arrows:
index=0
# 6.2.1 - Changing the multiplier value changes the speed at which bullets travel ie 10 -> 5
velx=math.cos(bullet[0])*5
vely=math.sin(bullet[0])*5
# ----------------------------------------------------------------------------
bullet[1]+=velx
bullet[2]+=vely
if bullet[1]<-64 or bullet[1]>640 or bullet[2]<-64 or bullet[2]>480:
arrows.pop(index)
index+=1
for projectile in arrows:
arrow1 = pygame.transform.rotate(arrow, 360-projectile[0]*57.29)
screen.blit(arrow1, (projectile[1], projectile[2]))
# 6.3 - Draw badgers
if badtimer==0:
badguys.append([640, random.randint(50,430)])
badtimer=100-(badtimer1*2)
if badtimer1>=35:
badtimer1=35
else:
badtimer1+=5
index=0
for badguy in badguys:
if badguy[0]<-64:
badguys.pop(index)
# 6.3.0 - Initial x position value is 640. The more we subtract the faster the x position reduces.
# Hence the faster the badger moves across the screen when it updates the x position in the loop.
badguy[0]-=2 # ie 7 -> 2
# ----------------------------------------------------------------------------
# 6.3.1 - Attack castle
badrect=pygame.Rect(badguyimg.get_rect())
badrect.top=badguy[1]
badrect.left=badguy[0]
if badrect.left<64:
hit.play()
healthvalue -= random.randint(5,20)
badguys.pop(index)
# 6.3.2 - Check for collisions
index1=0
for bullet in arrows:
bullrect=pygame.Rect(arrow.get_rect())
bullrect.left=bullet[1]
bullrect.top=bullet[2]
if badrect.colliderect(bullrect):
enemy.play()
acc[0]+=1
badguys.pop(index)
arrows.pop(index1)
index1+=1
# 6.3.3 - Next bad guy
index+=1
for badguy in badguys:
screen.blit(badguyimg, badguy)
# 6.4 - Draw clock
time = pygame.time.get_ticks()
font = pygame.font.Font(None, 24)
survivedtext = font.render(str((90000-time)/60000)+":"+str((90000-time)/1000%60).zfill(2), True, (0,0,0))
textRect = survivedtext.get_rect()
textRect.topright=[635,5]
screen.blit(survivedtext, textRect)
# 6.5 - Draw health bar
screen.blit(healthbar, (5,5))
for health1 in range(healthvalue):
screen.blit(health, (health1+8,8))
# 7 - update the screen
pygame.display.flip()
# 8 - loop through the events
for event in pygame.event.get():
# check if the event is the X button
if event.type==pygame.QUIT:
# if it is quit the game
terminate()
if event.type == pygame.KEYDOWN:
if event.key==K_w:
keys[0]=True
elif event.key==K_a:
keys[1]=True
elif event.key==K_s:
keys[2]=True
elif event.key==K_d:
keys[3]=True
if event.type == pygame.KEYUP:
if event.key==pygame.K_w:
keys[0]=False
elif event.key==pygame.K_a:
keys[1]=False
elif event.key==pygame.K_s:
keys[2]=False
elif event.key==pygame.K_d:
keys[3]=False
if event.type==pygame.MOUSEBUTTONDOWN:
shoot.play()
position=pygame.mouse.get_pos()
acc[1]+=1
arrows.append([math.atan2(position[1]-(playerpos1[1]+32),position[0]-(playerpos1[0]+26)),playerpos1[0]+32,playerpos1[1]+32])
# 9 - Move player
if keys[0]:
playerpos[1]-=5
elif keys[2]:
playerpos[1]+=5
if keys[1]:
playerpos[0]-=5
elif keys[3]:
playerpos[0]+=5
print running , "before win lose check"
#10 - Win/Lose check
if time >=9000:
running=0
print running, "time check"
exitcode=1
if healthvalue<=0:
running=0
print running, "health check"
exitcode=0
if acc[1]!=0:
accuracy=acc[0]*1.0/acc[1]*100
else:
accuracy=0
print running , "after win lose check"
# 11 - Win/lose display
if exitcode==0:
pygame.font.init()
font = pygame.font.Font(None, 24)
text = font.render("Accuracy: "+str(accuracy)+"%", True, white)
textRect = text.get_rect()
textRect.centerx = screen.get_rect().centerx
textRect.centery = screen.get_rect().centery+24
screen.blit(gameover, (0,0))
screen.blit(text, textRect)
pygame.display.flip()
else:
pygame.font.init()
font = pygame.font.Font(None, 24)
text = font.render("Accuracy: "+str(accuracy)+"%.", True, white)
textRect = text.get_rect()
textRect.centerx = screen.get_rect().centerx
textRect.centery = screen.get_rect().centery+24
screen.blit(youwin, (0,0))
screen.blit(text, textRect)
pygame.display.flip()
print time
print "3"
print "exitcode in win/lose display: ", exitcode
print "running" , running
#12 player selections at end of game
while 1:
for event in pygame.event.get():
if event.type == pygame.QUIT:
terminate()
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_y:
print "y pressed down."
restart()
break
elif event.key == pygame.K_n:
print "n pressed down."
terminate()
elif event.type == pygame.KEYUP:
if event.key == pygame.K_y:
print "y released."
elif event.key == pygame.K_n:
print "n released."
terminate()
if __name__ == '__main__':
main()
</code></pre>
| -1 |
2016-09-19T08:29:25Z
| 39,569,002 |
<p>Unfortunately I cannot test my suggestion, but the logic here should help. If the code can be downloaded somewhere, that would help; there are missing resources that prevent me from running it.</p>
<p>Your #12 isn't restarting the game because it takes place inside the <code>runGame()</code> def. Try moving it out of <code>runGame</code>, possibly wrapping it around the code that starts <code>runGame()</code> in #2. If that doesn't work, have it called from main, or any other exit point.</p>
<p><strong>EDIT 1:</strong> </p>
<p>Try moving the code to the section below 2.0.4, as shown in the example below. The code below is intended to completely replace the bare call to <code>startScreen()</code>.</p>
<p><sub><sup><strong>NOTE:</strong> That <code>restart()</code> function is likely to cause problems because of the <code>pygame.quit()</code> code...</sup></sub></p>
<pre><code>#-------------------------------------------------------
start_game = True
while 1:
if start_game:
startScreen()
start_game = False
for event in pygame.event.get():
if event.type == pygame.QUIT:
terminate()
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_y:
print "y pressed down."
##########
start_game = True
##########
#restart()
#break
elif event.key == pygame.K_n:
print "n pressed down."
terminate()
elif event.type == pygame.KEYUP:
if event.key == pygame.K_y:
print "y released."
##########
start_game = True
##########
elif event.key == pygame.K_n:
print "n released."
terminate()
</code></pre>
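<p>On the timer itself: <code>pygame.time.get_ticks()</code> keeps counting from program start and is not reset by the <code>restart()</code> above, so the usual fix is to capture a start tick when <code>runGame()</code> begins and count down from the elapsed time instead. A hedged sketch (get_ticks is stubbed so it runs without pygame):</p>

```python
# get_ticks is stubbed so the sketch runs without pygame; in the game it
# would be pygame.time.get_ticks().
ticks = 0

def get_ticks():
    return ticks

def make_clock(limit_ms=90000):
    start = get_ticks()                # captured once, when runGame() begins
    def remaining():
        elapsed = get_ticks() - start  # time spent in *this* round only
        left = max(limit_ms - elapsed, 0)
        return "%d:%02d" % (left // 60000, left // 1000 % 60)
    return remaining

ticks = 100000        # pretend a previous round already consumed 100 seconds
clock = make_clock()  # restarting the game builds a fresh clock
ticks += 5000         # 5 seconds into the new round
print(clock())        # → 1:25
```

Because each call to <code>make_clock()</code> records its own <code>start</code>, a restarted round always counts down from the full limit, no matter how long the program has been running.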
| 0 |
2016-09-19T08:51:01Z
|
[
"python"
] |
exposing multiple databases in django admin
| 39,568,636 |
<p>My use case requires me to expose multiple databases in the admin site of my django project. Did that following this link: <a href="https://docs.djangoproject.com/en/dev/topics/db/multi-db/#exposing-multiple-databases-in-django-s-admin-interface" rel="nofollow">https://docs.djangoproject.com/en/dev/topics/db/multi-db/#exposing-multiple-databases-in-django-s-admin-interface</a></p>
<p>Here's the code used:</p>
<pre><code>class MultiDBModelAdmin(admin.ModelAdmin):
# A handy constant for the name of the alternate database.
using = 'other'
def save_model(self, request, obj, form, change):
# Tell Django to save objects to the 'other' database.
obj.save(using=self.using)
def delete_model(self, request, obj):
# Tell Django to delete objects from the 'other' database
obj.delete(using=self.using)
def get_queryset(self, request):
# Tell Django to look for objects on the 'other' database.
return super(MultiDBModelAdmin, self).get_queryset(request).using(self.using)
def formfield_for_foreignkey(self, db_field, request, **kwargs):
# Tell Django to populate ForeignKey widgets using a query
# on the 'other' database.
return super(MultiDBModelAdmin, self).formfield_for_foreignkey(db_field, request, using=self.using, **kwargs)
def formfield_for_manytomany(self, db_field, request, **kwargs):
# Tell Django to populate ManyToMany widgets using a query
# on the 'other' database.
return super(MultiDBModelAdmin, self).formfield_for_manytomany(db_field, request, using=self.using, **kwargs)
</code></pre>
<p>And then:</p>
<pre><code>admin.site.register(Author, MultiDBModelAdmin)
admin.site.register(Publisher, PublisherAdmin)
othersite = admin.AdminSite('othersite')
othersite.register(Publisher, MultiDBModelAdmin)
</code></pre>
<p>The example's documentation states: This example sets up two admin sites. On the first site, the Author and Publisher objects are exposed; Publisher objects have an tabular inline showing books published by that publisher. The second site exposes just publishers, without the inlines.</p>
<p>What I don't seem to find out anywhere is: how do I access the other 'site'? What URL has to be used to view the tables exposed in the other 'site'? Should be something straightforward, but I cannot seem to find it anywhere.</p>
| 0 |
2016-09-19T08:31:48Z
| 39,568,753 |
<p>You need to add a url pattern for your admin site, similar to how you enable the regular site:</p>
<pre><code># urls.py
from django.conf.urls import url
from django.contrib import admin
from myapp.admin import othersite
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^otheradmin/', othersite.urls),
]
</code></pre>
<p>You then access the other admin at whatever url you used. In this case, <code>/otheradmin/</code>.</p>
<p>This syntax is for Django 1.10+. On earlier versions of Django, you use <code>include(othersite.urls)</code> instead of <code>othersite.urls</code>.</p>
| 0 |
2016-09-19T08:37:25Z
|
[
"python",
"django",
"django-admin"
] |
Regex to return set of words from file that can be spelled with letters passed as parameter (python)
| 39,568,651 |
<p>I have a list of words such as </p>
<pre><code>name
age
abhor
apple
ape
</code></pre>
<p>I want to do regex on a list by passing a random set of letters such as 'apbecd'</p>
<p>Now all those words from the list having the set of letters must be returned.</p>
<p>eg: <code>python retun_words.py apbelcdg</code></p>
<p>will return </p>
<pre><code>ape
apple
age
</code></pre>
<p>As of now, I am only able to return words based on exact word matches. How can I achieve the results I have mentioned above?
Also, if there is any other way to achieve the results instead of regex, kindly let me know.</p>
<p>Thanks in advance</p>
| 0 |
2016-09-19T08:32:37Z
| 39,568,887 |
<p>There's no need to use regex. Following code works.</p>
<pre><code>import sys
word_list = ['name', 'age', 'abhor', 'apple', 'ape']
letter_list = sys.argv[1]
for word in word_list:
for letter in word:
if letter not in letter_list:
break
elif letter == word[-1]:
print word
</code></pre>
<p>Output </p>
<pre><code>[root@mbp:~]# python return_words.py apbelcdg
age
apple
ape
</code></pre>
| 0 |
2016-09-19T08:45:09Z
|
[
"python",
"regex"
] |
Regex to return set of words from file that can be spelled with letters passed as parameter (python)
| 39,568,651 |
<p>I have a list of words such as </p>
<pre><code>name
age
abhor
apple
ape
</code></pre>
<p>I want to do regex on a list by passing a random set of letters such as 'apbecd'</p>
<p>Now all those words from the list having the set of letters must be returned.</p>
<p>eg: <code>python retun_words.py apbelcdg</code></p>
<p>will return </p>
<pre><code>ape
apple
age
</code></pre>
<p>As of now, I am only able to return words based on exact word matches. How can I achieve the results I have mentioned above?
Also, if there is any other way to achieve the results instead of regex, kindly let me know.</p>
<p>Thanks in advance</p>
| 0 |
2016-09-19T08:32:37Z
| 39,568,991 |
<p>Here, using a set and returning the items is also a way, if you do not want to use regex.</p>
<pre><code>string_list = ["name", "age", "abhor", "apple", "ape"]
allowed_characters = "apbelcdg"
character_set = set(allowed_characters)
print [item for item in string_list if not set(item)-character_set]
</code></pre>
<p>This will give you the list of strings that adhere to the character set.</p>
<p>However, if regex is what you desire the most, here we go :-)</p>
<pre><code>from re import match
string_list = ["name", "age", "abhor", "apple", "ape"]
allowed_characters = "apbelcdg"
print [item for item in string_list if match('[%s]*$' % (allowed_characters), item)]
</code></pre>
| 1 |
2016-09-19T08:50:49Z
|
[
"python",
"regex"
] |
Regex to return set of words from file that can be spelled with letters passed as parameter (python)
| 39,568,651 |
<p>I have a list of words such as </p>
<pre><code>name
age
abhor
apple
ape
</code></pre>
<p>I want to do regex on a list by passing a random set of letters such as 'apbecd'</p>
<p>Now all those words from the list having the set of letters must be returned.</p>
<p>eg: <code>python retun_words.py apbelcdg</code></p>
<p>will return </p>
<pre><code>ape
apple
age
</code></pre>
<p>As of now, I am only able to return words based on exact word matches. How can I achieve the results I have mentioned above?
Also, if there is any other way to achieve the results instead of regex, kindly let me know.</p>
<p>Thanks in advance</p>
| 0 |
2016-09-19T08:32:37Z
| 39,570,992 |
<p>I believe shellmode's method needs a minor fix, since it would not work for cases when the examined letter is the same as the last letter in the word, but the word itself contains letters that are not from the letter list. I believe that this code would work:</p>
<pre><code>import sys
word_list = ['name', 'age', 'abhor', 'apple', 'ape']
letter_list = sys.argv[1]
for word in word_list:
for counter,letter in enumerate(word):
if letter not in letter_list:
break
if counter == len(word)-1: #reached the end of word
print word
</code></pre>
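<p>For completeness, the membership test in the loops above collapses into one line with <code>all()</code>; a small sketch using the word list and letters from the question:</p>

```python
word_list = ['name', 'age', 'abhor', 'apple', 'ape']
letter_list = 'apbelcdg'

# keep a word only when every letter it contains appears in letter_list
matches = [word for word in word_list
           if all(letter in letter_list for letter in word)]
print(matches)  # → ['age', 'apple', 'ape']
```

This avoids tracking indices by hand, so the early-print edge case pointed out above cannot occur.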
| 0 |
2016-09-19T10:30:57Z
|
[
"python",
"regex"
] |
Crawl a large number of web pages
| 39,568,727 |
<p>I want to use Scrapy to crawl a large selection of web pages. Because I have to use a proxy, and the proxy is bad, lots of time is wasted on changing IPs. How can I use multi-threading to speed this along?</p>
<p>(PS: I use a HttpProxyMiddleware.py to get proxy IPs from a Redis database.</p>
<pre><code> proxy_config = settings.get("PROXY_CONFIG")
self.r =redis.Redis(host=proxy_config.get('redis_host'),
port=proxy_config.get("redis_port", 6379))
self.ips_key = proxy_config.get('ip_key')
</code></pre>
<p>There are lots of IPs in it, but some of them do not work. I set the timeout to 5s, so the many IP-change operations waste a lot of time.</p>
<p>Because Scrapy uses Twisted, its work stream is<br>
spider.py (generates requests) -> HttpProxyMiddleware.py (adds the proxy to the request, checks the response to see if the IP is working) -> spider.py (parse() processes the response)</p>
<p>At first, I tried to use multi-threading to speed this up. The result shows that all threads depend on the same customized middleware "HttpProxyMiddleware.py". As far as I know this is just a class and not a singleton; I don't know how Scrapy implements it. So I had to create multiple HttpProxyMiddleware files, as HttpProxyMiddleware1.py, HttpProxyMiddleware2.py, ..., and I also created multiple spiders, as spider1.py, spider2.py, ..., each spider using one HttpProxyMiddleware correspondingly. And it works, but it looks bad. I asked Google for help, and the answer I got was to use the Twisted reactor, so I used it:</p>
<pre><code> from twisted.internet import reactor
reactor.suggestThreadPoolSize(30)
</code></pre>
<p>But it does not work. Maybe my usage is wrong. So my question is: how can I use the reactor, or some other multi-threading method, to speed this along?</p>
<pre><code># HttpProxyMiddleware.py extends scrapy's downloader middleware
class HttpProxyMiddleware(object):
    def process_request(self, request, spider):
        # add proxy to request
        pass

    def process_response(self, request, response, spider):
        # check response to decide whether to change IPs or not
        return response
</code></pre>
<p>Lastly, I am a newcomer here; if my description of the question is not clear, please point it out and I will correct it immediately. Thank you, I appreciate any assistance.</p>
| 0 |
2016-09-19T08:36:06Z
| 39,569,060 |
<p>Use lots of proxies.</p>
<p>For instance for a similar project, I've installed tor.</p>
<p>You can run multiple instances of tor, thus having multiple dedicated proxies available.</p>
<p>Run an instance on 127.0.0.1:9050, another on 127.0.0.1:9051, and so on and so forth.</p>
<p>Script the launch of all these instances, and also script their restart (so you get a different exit node, hence another IP).</p>
<p>Now, have your scrapy script taking the proxy adress as a parameter (<code>argv</code>)</p>
<p>And have a controller script running everything like so :</p>
<pre><code># pseudo code
while 1:
tor_restart.sh # restart all tor nodes
for i between 0 and n:
python scrapy_script.py 9050+i &
while there is a scrapy_script.py in the process list, sleep.
</code></pre>
<p>Look at this :</p>
<p><a href="http://tor.stackexchange.com/questions/327/how-may-i-run-multiple-tor-relay-instances-in-a-single-linux-machine">http://tor.stackexchange.com/questions/327/how-may-i-run-multiple-tor-relay-instances-in-a-single-linux-machine</a></p>
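<p>The controller's fan-out/wait step can be sketched in real Python with <code>subprocess</code>; in this hedged demo the real scrapy_script.py is replaced by a harmless stand-in command so it runs anywhere, and each worker would normally receive its own proxy port:</p>

```python
import subprocess
import sys

# Launch N workers in parallel, one per (assumed) tor port 9050+i;
# the stand-in command just prints its port instead of running scrapy.
procs = [
    subprocess.Popen([sys.executable, "-c", "print(%d)" % (9050 + i)])
    for i in range(3)
]
for p in procs:
    p.wait()                         # block until every worker finishes

codes = [p.returncode for p in procs]
print(codes)  # → [0, 0, 0]
```

The outer `while 1` loop and the tor restart step from the pseudocode would wrap around this fan-out/wait block.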
| 0 |
2016-09-19T08:53:55Z
|
[
"python",
"scrapy"
] |
Load extracted vectors to TfidfVectorizer
| 39,568,774 |
<p>I am looking for a way to load vectors I generated previously using scikit-learn's TfidfVectorizer. In general what I wish is to get a better understanding of the TfidfVectorizer's data persistence.</p>
<p>For instance, what I did so far is:</p>
<pre><code>vectorizer = TfidfVectorizer(stop_words=stop)
vect_train = vectorizer.fit_transform(corpus)
</code></pre>
<p>Then I wrote 2 functions in order to be able to save and load my vectorizer:</p>
<pre><code>def save_model(model,name):
'''
Function that enables us to save a trained model
'''
joblib.dump(model, '{}.pkl'.format(name))
def load_model(name):
'''
Function that enables us to load a saved model
'''
return joblib.load('{}.pkl'.format(name))
</code></pre>
<p>I checked posts like the one below but i still didn't manage to make much sense.</p>
<p><a href="http://stackoverflow.com/questions/32764991/how-do-i-store-a-tfidfvectorizer-for-future-use-in-scikit-learn">How do I store a TfidfVectorizer for future use in scikit-learn?</a></p>
<p>What I ultimately wish is to be able to have a training session and then load this set of produced vectors, transform some newly text input based on those vectors and perform cosine_similarity using old vectors and new ones generated based on them.</p>
<p>One of the reasons that I wish to do this is because the vectorization in such a large dataset takes approximately 10 minutes and I wish to do this once and not every time a new query comes in.</p>
<p>I guess what I should be saving is vect_train right? But then which is the correct way to firstly save it and then load it to a newly created instance of TfidfVectorizer?</p>
<p>First time I tried to save vect_train with joblib as the kind people in scikit-learn advise to do I got 4 files: tfidf.pkl, tfidf.pkl_01.npy, tfidf.pkl_02.npy, tfidf.pkl_03.npy. It would be great if I knew what exactly are those and how I could load them to a new instance of</p>
<pre><code>vectorizer = TfidfVectorizer(stop_words=stop)
</code></pre>
<p>created in a different script.</p>
<p>Thank you in advance.</p>
| 0 |
2016-09-19T08:38:51Z
| 39,620,729 |
<p>The result of your <code>vect_train = vectorizer.fit_transform(corpus)</code> is twofold: (i) the vectorizer fits your data, that is it learns the corpus vocabulary and the idf for each term, and
(ii) <code>vect_train</code> is instantiated with the vectors of your corpus.</p>
<p>The <code>save_model</code> and <code>load_model</code> functions you propose persist and load the vectorizer, that is the internal parameters that it has learned such as the vocabulary and the idfs. Having loaded the vectorizer, all you need to get vectors is to transform a list with data. It can be unseen data, or the raw data you used during the <code>fit_transform</code>. Therefore, all you need is:</p>
<pre><code>vectorizer = load_model(name)
vect_train = vectorizer.transform(corpus) # (1) or any unseen data
</code></pre>
<p>At this point, you have everything you had before saving, but the transformation call (1) will take some time depending on your corpus. In case you want to skip this, you need to also save the content of <code>vect_train</code>, as you correctly wonder in your question. This is a sparse matrix and can be saved/loaded using scipy, you can find information in this <a href="http://stackoverflow.com/questions/8955448/save-load-scipy-sparse-csr-matrix-in-portable-data-format/8956767">question</a> for example. Copying from that question, to actually save the csr matrices you also need: </p>
<pre><code>def save_sparse_csr(filename,array):
np.savez(filename,data = array.data ,indices=array.indices,
indptr =array.indptr, shape=array.shape )
def load_sparse_csr(filename):
loader = np.load(filename)
return csr_matrix(( loader['data'], loader['indices'], loader['indptr']),
shape = loader['shape'])
</code></pre>
<p>Concluding, the above functions can be used for saving/loading your <code>vec_train</code> whereas the ones you provided for saving/loading the transformer in order to vectorize the new data.</p>
| 0 |
2016-09-21T15:27:08Z
|
[
"python",
"machine-learning",
"scikit-learn",
"persistence",
"tf-idf"
] |
Python: read files from directory and concatenate that
| 39,568,925 |
<p>I have a folder <code>et</code> with <code>.csv</code> files, and I am trying to read them and then concatenate them into one file.
I tried </p>
<pre><code>import os
path = 'et/'
for filename in os.listdir(path):
et = open(filename)
print et
</code></pre>
<p>but I get an error </p>
<pre><code>Traceback (most recent call last):
File "C:/Users/����� �����������/Desktop/projects/PMI/join et.py", line 5, in <module>
et = open(filename)
IOError: [Errno 2] No such file or directory: '0et.csv'
</code></pre>
<p>I can't understand, why I get this error, because when I
<code>print filename</code>
I get</p>
<pre><code>0et.csv
1et.csv
2et.csv
3et.csv
4et.csv
5et.csv
6et.csv
7et.csv
8et.csv
</code></pre>
| 1 |
2016-09-19T08:47:30Z
| 39,568,980 |
<p>You probably want to use <code>et = open(path+filename)</code>, instead of just <code>et = open(filename)</code>.</p>
<p>Edit : as suggested by @thiruvenkadam best practice would be to use <code>et = open(os.path.join(path,filename))</code></p>
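<p>For illustration (not part of the original answer), the difference is easy to see: <code>os.path.join</code> inserts the correct separator for the platform, while bare concatenation only works when <code>path</code> already happens to end with one.</p>

```python
import os

path = 'et'          # note: no trailing slash here
filename = '0et.csv'

print(path + filename)                # 'et0et.csv' -- broken path
print(os.path.join(path, filename))   # 'et/0et.csv' on POSIX ('et\\0et.csv' on Windows)
```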
| 0 |
2016-09-19T08:50:27Z
|
[
"python",
"pandas",
"directory"
] |
Python: read files from directory and concatenate that
| 39,568,925 |
<p>I have a folder <code>et</code> with <code>.csv</code> files, and I am trying to read them and then concatenate them into one file.
I tried </p>
<pre><code>import os
path = 'et/'
for filename in os.listdir(path):
et = open(filename)
print et
</code></pre>
<p>but I get an error </p>
<pre><code>Traceback (most recent call last):
File "C:/Users/����� �����������/Desktop/projects/PMI/join et.py", line 5, in <module>
et = open(filename)
IOError: [Errno 2] No such file or directory: '0et.csv'
</code></pre>
<p>I can't understand, why I get this error, because when I
<code>print filename</code>
I get</p>
<pre><code>0et.csv
1et.csv
2et.csv
3et.csv
4et.csv
5et.csv
6et.csv
7et.csv
8et.csv
</code></pre>
| 1 |
2016-09-19T08:47:30Z
| 39,569,067 |
<p>Maybe it's an encoding problem.</p>
<p>You can try adding the following declaration at the top of your file:</p>
<pre><code># -*- coding: utf-8 -*-
</code></pre>
| -1 |
2016-09-19T08:54:05Z
|
[
"python",
"pandas",
"directory"
] |
Python: read files from directory and concatenate that
| 39,568,925 |
<p>I have a folder <code>et</code> with <code>.csv</code> files, and I am trying to read them and then concatenate them into one file.
I tried </p>
<pre><code>import os
path = 'et/'
for filename in os.listdir(path):
et = open(filename)
print et
</code></pre>
<p>but I get an error </p>
<pre><code>Traceback (most recent call last):
File "C:/Users/����� �����������/Desktop/projects/PMI/join et.py", line 5, in <module>
et = open(filename)
IOError: [Errno 2] No such file or directory: '0et.csv'
</code></pre>
<p>I can't understand, why I get this error, because when I
<code>print filename</code>
I get</p>
<pre><code>0et.csv
1et.csv
2et.csv
3et.csv
4et.csv
5et.csv
6et.csv
7et.csv
8et.csv
</code></pre>
| 1 |
2016-09-19T08:47:30Z
| 39,569,260 |
<p>Using glob.glob will be a better option, along with using os.path.join to get to the full path:</p>
<pre><code>from glob import glob
from os.path import join, abspath
from os import listdir, getcwd
import pandas as pd
data_frame = pd.DataFrame()
dir_path = "et"
full_path = join(abspath(getcwd()), dir_path, "*.csv")
for file_name in glob(full_path):
    csv_reader = pd.read_csv(file_name)
    # Guessing that all csv files have a header row;
    # if the header is absent, use pd.read_csv(file_name, header=None, names=your_columns)
    data_frame = data_frame.append(csv_reader, ignore_index=True)
    # There is also a concat function to use. I am comfortable with append.
    # For concat, it will be data_frame = pd.concat([data_frame, csv_reader], ignore_index=True)
</code></pre>
<ol>
<li>abspath will make sure that the full directory from the root (on Windows, from the main file system drive) is taken</li>
<li>Adding *.csv to the join will make sure that you only check for csv files in the given directory</li>
<li>glob(full_path) will return a list of csv files, with absolute path, of the given directory</li>
<li>Always make sure that you either close the file descriptor explicitly or use the with statement to do it automatically, as it is clean practice. Any C developer can vouch that closing the file descriptor is best. Since we need to put the values into the dataframe, I took out the with statement and used read_csv from pandas.</li>
<li>pandas.read_csv makes life a lot better when reading csv file contents into dataframes. With read_csv and pandas append (or concat), we can combine csv files easily without carrying over the header rows from the other csv files. I used append out of personal preference; how to use concat is given in the comment.</li>
</ol>
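<p>The same directory walk can be checked end to end with only the standard library; this hedged sketch builds a throwaway <code>et</code>-style directory, then concatenates the rows while skipping each file's header:</p>

```python
import csv
import glob
import os
import tempfile

# Build a throwaway directory with two tiny csv files (header + one row each).
d = tempfile.mkdtemp()
for i in range(2):
    with open(os.path.join(d, "%det.csv" % i), "w", newline="") as f:
        csv.writer(f).writerows([["col"], [str(i)]])

# Collect every csv via an absolute glob pattern, as in the answer above.
rows = []
for name in sorted(glob.glob(os.path.join(d, "*.csv"))):
    with open(name, newline="") as f:
        rows.extend(list(csv.reader(f))[1:])   # drop each file's header row

print(rows)  # → [['0'], ['1']]
```

The key point is the same as in the pandas version: the glob pattern is built with <code>os.path.join</code>, so every returned name is a full path that <code>open</code> can find.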
| 0 |
2016-09-19T09:04:32Z
|
[
"python",
"pandas",
"directory"
] |
Tail file till process exits
| 39,568,941 |
<p>Going through the answers at <a href="http://superuser.com/questions/270529/monitoring-a-file-until-a-string-is-found">superuser</a>.</p>
<p>I'm trying to modify this to listen for multiple strings and echo custom messages, such as 'Your server started successfully', etc.</p>
<p>I'm also trying to tack it onto another command, i.e. pip.</p>
<pre><code>wait_str() {
local file="$1"; shift
local search_term="Successfully installed"; shift
local search_term2='Exception'
local wait_time="${1:-5m}"; shift # 5 minutes as default timeout
  (timeout $wait_time tail -F -n0 "$file" &) | grep -q "$search_term" && echo 'Custom success message' && return 0 || grep -q "$search_term2" && echo 'Custom success message' && return 0
echo "Timeout of $wait_time reached. Unable to find '$search_term' or '$search_term2' in '$file'"
return 1
}
</code></pre>
<p>The usage I have in mind is:</p>
<pre><code>pip install -r requirements.txt > /var/log/pip/dump.log && wait_str /var/log/pip/dump.log
</code></pre>
<p>To clarify, I'd like to get wait_str to stop tailing when pip exits, whether successfully or not.</p>
| 1 |
2016-09-19T08:48:28Z
| 39,569,332 |
<p>The following is a general answer; <code>tail</code> could be replaced by any command that results in a stream of lines.</p>
<p>If different strings need <strong>different</strong> actions, then use the following:</p>
<pre><code>tail -f /var/log/pip/dump.log |awk '/condition-1/ {action for condition-1} /condition-2/ {action for condition-2} .....'
</code></pre>
<p>If multiple conditions need the <strong>same</strong> action, separate them using the OR operator:</p>
<pre><code>tail -f /var/log/pip/dump.log |awk '/condition-1/ || /condition-2/ || /condition-n/ {take this action}'
</code></pre>
<p>Based on comments: a single <code>awk</code> can do this. </p>
<pre><code>tail -f /path/to/file |awk '/Exception/{ print "Worked"} /compiler/{ print "worked"}'
</code></pre>
<p>or </p>
<pre><code>tail -f /path/to/file | awk '/Exception/||/compiler/{ print "worked"}'
</code></pre>
<p>OR Exit if match is found </p>
<pre><code>tail -f logfile |awk '/Exception/||/compiler/{ print "worked";exit}'
</code></pre>
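<p>Putting the pieces together, here is a hedged sketch of the whole <code>wait_str</code> function built on a single <code>awk</code> (the printed messages and timeout default are illustrative):</p>

```shell
wait_str() {
  local file="$1"
  local wait_time="${2:-5m}"   # default timeout: 5 minutes
  # awk exits as soon as either pattern appears; timeout bounds the tail
  timeout "$wait_time" tail -F -n0 "$file" | awk '
    /Successfully installed/ { print "install succeeded"; exit 0 }
    /Exception/              { print "install failed";    exit 1 }'
}
```

<p>Note that for this to work, pip should be started in the background (with <code>&</code>) rather than chained with <code>&&</code>, so the log gets tailed while pip is still writing to it.</p>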
| 3 |
2016-09-19T09:08:21Z
|
[
"python",
"linux",
"bash",
"shell",
"pip"
] |
How to open a thread whose target is in another file?
| 39,568,975 |
<p>I need to open a thread whose target is defined in a different file.</p>
<p>I would like to pass the target name through a string containing, of course, the name of the function I want to run on the thread.
Is that impossible, or am I missing something?</p>
<p>For instance, here is my code:</p>
<p>Here is the code that has the target function:</p>
<pre><code># command.py
def hello():
print("hello world")
</code></pre>
<p>and here is the code I will run:</p>
<pre><code># run.py
import threading
import command
funcname = "hello"
thread1 = threading.Thread(target= ... , daemon=True)
thread1.start()
</code></pre>
<p>What do I need to put as target?</p>
| 1 |
2016-09-19T08:50:14Z
| 39,570,586 |
<p>From the <a href="https://docs.python.org/3/library/threading.html#threading.Thread" rel="nofollow">python docs</a> the <code>target</code> has to be a callable object. From your example you should be able to just put </p>
<p><code>target=command.hello</code></p>
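<p>Since the question also mentions passing the target name as a string, <code>getattr</code> can resolve the callable by name. A self-contained sketch (the <code>command</code> module is faked inline here so the snippet runs on its own; in a real project you would simply <code>import command</code>):</p>

```python
import threading
import types

# Hypothetical stand-in for command.py; in practice: import command
command = types.ModuleType("command")
exec("def hello(out):\n    out.append('hello world')", command.__dict__)

funcname = "hello"                    # the target's name as a string
target = getattr(command, funcname)   # look the callable up by its name

results = []
thread1 = threading.Thread(target=target, args=(results,), daemon=True)
thread1.start()
thread1.join()   # wait, or the daemon thread may die when the program exits
print(results[0])                     # -> hello world
```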
| 0 |
2016-09-19T10:12:19Z
|
[
"python",
"multithreading"
] |
how to make logging.logger to behave like print
| 39,569,020 |
<p>Let's say I got this <a href="https://docs.python.org/2/library/logging.html" rel="nofollow">logging.logger</a> instance:</p>
<pre><code>import logging
logger = logging.getLogger('root')
FORMAT = "[%(filename)s:%(lineno)s - %(funcName)20s() ] %(message)s"
logging.basicConfig(format=FORMAT)
logger.setLevel(logging.DEBUG)
</code></pre>
<p>Problem comes when I try to use it like the builtin print with a dynamic number of arguments:</p>
<pre><code>>>> logger.__class__
<class 'logging.Logger'>
>>> logger.debug("hello")
[<stdin>:1 - <module>() ] hello
>>> logger.debug("hello","world")
Traceback (most recent call last):
File "c:\Python2711\Lib\logging\__init__.py", line 853, in emit
msg = self.format(record)
File "c:\Python2711\Lib\logging\__init__.py", line 726, in format
return fmt.format(record)
File "c:\Python2711\Lib\logging\__init__.py", line 465, in format
record.message = record.getMessage()
File "c:\Python2711\Lib\logging\__init__.py", line 329, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Logged from file <stdin>, line 1
</code></pre>
<p>How could i emulate the print behaviour still using logging.Logger?</p>
| 2 |
2016-09-19T08:51:33Z
| 39,569,095 |
<p>Set sys.stdout as the stream for your logging, e.g.:</p>
<p><code>logging.basicConfig(level=logging.INFO, stream=sys.stdout)</code></p>
| 0 |
2016-09-19T08:55:44Z
|
[
"python",
"python-3.x",
"logging"
] |
how to make logging.logger to behave like print
| 39,569,020 |
<p>Let's say I got this <a href="https://docs.python.org/2/library/logging.html" rel="nofollow">logging.logger</a> instance:</p>
<pre><code>import logging
logger = logging.getLogger('root')
FORMAT = "[%(filename)s:%(lineno)s - %(funcName)20s() ] %(message)s"
logging.basicConfig(format=FORMAT)
logger.setLevel(logging.DEBUG)
</code></pre>
<p>Problem comes when I try to use it like the builtin print with a dynamic number of arguments:</p>
<pre><code>>>> logger.__class__
<class 'logging.Logger'>
>>> logger.debug("hello")
[<stdin>:1 - <module>() ] hello
>>> logger.debug("hello","world")
Traceback (most recent call last):
File "c:\Python2711\Lib\logging\__init__.py", line 853, in emit
msg = self.format(record)
File "c:\Python2711\Lib\logging\__init__.py", line 726, in format
return fmt.format(record)
File "c:\Python2711\Lib\logging\__init__.py", line 465, in format
record.message = record.getMessage()
File "c:\Python2711\Lib\logging\__init__.py", line 329, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Logged from file <stdin>, line 1
</code></pre>
<p>How could i emulate the print behaviour still using logging.Logger?</p>
| 2 |
2016-09-19T08:51:33Z
| 39,570,939 |
<p>Alternatively, define a function that accepts <code>*args</code> and then <code>join</code> them in your call to <code>logger</code>:</p>
<pre><code>def log(*args, logtype='debug', sep=' '):
getattr(logger, logtype)(sep.join(str(a) for a in args))
</code></pre>
<p>I added a <code>logtype</code> for flexibility here but you could remove it if not required.</p>
| 1 |
2016-09-19T10:28:34Z
|
[
"python",
"python-3.x",
"logging"
] |
how to make logging.logger to behave like print
| 39,569,020 |
<p>Let's say I got this <a href="https://docs.python.org/2/library/logging.html" rel="nofollow">logging.logger</a> instance:</p>
<pre><code>import logging
logger = logging.getLogger('root')
FORMAT = "[%(filename)s:%(lineno)s - %(funcName)20s() ] %(message)s"
logging.basicConfig(format=FORMAT)
logger.setLevel(logging.DEBUG)
</code></pre>
<p>Problem comes when I try to use it like the builtin print with a dynamic number of arguments:</p>
<pre><code>>>> logger.__class__
<class 'logging.Logger'>
>>> logger.debug("hello")
[<stdin>:1 - <module>() ] hello
>>> logger.debug("hello","world")
Traceback (most recent call last):
File "c:\Python2711\Lib\logging\__init__.py", line 853, in emit
msg = self.format(record)
File "c:\Python2711\Lib\logging\__init__.py", line 726, in format
return fmt.format(record)
File "c:\Python2711\Lib\logging\__init__.py", line 465, in format
record.message = record.getMessage()
File "c:\Python2711\Lib\logging\__init__.py", line 329, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Logged from file <stdin>, line 1
</code></pre>
<p>How could i emulate the print behaviour still using logging.Logger?</p>
| 2 |
2016-09-19T08:51:33Z
| 39,571,473 |
<p>Wrapper based on @Jim's original answer:</p>
<pre><code>import logging
import sys
_logger = logging.getLogger('root')
FORMAT = "[%(filename)s:%(lineno)s - %(funcName)20s() ] %(message)s"
logging.basicConfig(format=FORMAT)
_logger.setLevel(logging.DEBUG)
class LogWrapper():
def __init__(self, logger):
self.logger = logger
def info(self, *args, sep=' '):
self.logger.info(sep.join("{}".format(a) for a in args))
def debug(self, *args, sep=' '):
self.logger.debug(sep.join("{}".format(a) for a in args))
def warning(self, *args, sep=' '):
self.logger.warning(sep.join("{}".format(a) for a in args))
def error(self, *args, sep=' '):
self.logger.error(sep.join("{}".format(a) for a in args))
def critical(self, *args, sep=' '):
self.logger.critical(sep.join("{}".format(a) for a in args))
def exception(self, *args, sep=' '):
self.logger.exception(sep.join("{}".format(a) for a in args))
def log(self, level, *args, sep=' '):
self.logger.log(level, sep.join("{}".format(a) for a in args))
logger = LogWrapper(_logger)
</code></pre>
| 1 |
2016-09-19T10:56:28Z
|
[
"python",
"python-3.x",
"logging"
] |
Python: slow nested for loop
| 39,569,120 |
<p>I need to find out an optimal selection of media, based on certain constraints. I am doing it in FOUR nested for loops, and since it takes about O(n^4) iterations, it is slow. I have been trying to make it faster, but it is still damn slow. My variables can be as high as a couple of thousand. </p>
<p>Here is a small example of what I am trying to do:</p>
<pre><code> max_disks = 5
max_ssds = 5
max_tapes = 1
max_BR = 1
allocations = []
for i in range(max_disks):
for j in range(max_ssds):
for k in range(max_tapes):
for l in range(max_BR):
allocations.append((i,j,k,l)) # this is just for example. In actual program, I do processing here, like checking for bandwidth and cost constraints, and choosing the allocation based on that.
</code></pre>
<p>It wasn't slow for up to hundreds of each media type but would slow down for thousands. </p>
<p>Other way I tried is :</p>
<pre><code> max_disks = 5
max_ssds = 5
max_tapes = 1
max_BR = 1
allocations = [(i,j,k,l) for i in range(max_disks) for j in range(max_ssds) for k in range(max_tapes) for l in range(max_BR)]
</code></pre>
<p>This way it is slow even for such small numbers.</p>
<p>Two questions:</p>
<ol>
<li>Why the second one is slow for small numbers?</li>
<li>How can I make my program work for big numbers (in thousands)?</li>
</ol>
<p>Here is the version with itertools.product</p>
<pre><code> max_disks = 500
max_ssds = 100
max_tapes = 100
max_BR = 100
# allocations = []
for i, j, k,l in itertools.product(range(max_disks),range(max_ssds),range(max_tapes),range(max_BR)):
pass
</code></pre>
<p>It takes 19.8 seconds to finish with these numbers.</p>
| 0 |
2016-09-19T08:57:09Z
| 39,569,545 |
<p>From the comments, I got that you're working on a problem that can be rewritten as an <a href="https://en.wikipedia.org/wiki/Integer_programming" rel="nofollow">ILP</a>. You have several constraints, and need to find a (near) optimal solution.</p>
<p>Now, ILPs are quite difficult to solve, and brute-forcing them quickly becomes intractable (as you've already witnessed). This is why there are several really clever algorithms used in the industry that truly work magic.</p>
<p>For Python, there are quite a few interfaces that hook up to modern solvers; for more details, see <em>e.g.</em> this <a href="http://stackoverflow.com/questions/26305704/python-mixed-integer-linear-programming">SO post</a>. You could also consider using an optimizer, like <a href="http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html" rel="nofollow"><code>SciPy optimize</code></a>, but those generally don't do integer programming.</p>
| 2 |
2016-09-19T09:20:26Z
|
[
"python",
"performance",
"linear-programming",
"constraint-programming"
] |
Python: slow nested for loop
| 39,569,120 |
<p>I need to find out an optimal selection of media, based on certain constraints. I am doing it in FOUR nested for loops, and since it takes about O(n^4) iterations, it is slow. I have been trying to make it faster, but it is still damn slow. My variables can be as high as a couple of thousand. </p>
<p>Here is a small example of what I am trying to do:</p>
<pre><code> max_disks = 5
max_ssds = 5
max_tapes = 1
max_BR = 1
allocations = []
for i in range(max_disks):
for j in range(max_ssds):
for k in range(max_tapes):
for l in range(max_BR):
allocations.append((i,j,k,l)) # this is just for example. In actual program, I do processing here, like checking for bandwidth and cost constraints, and choosing the allocation based on that.
</code></pre>
<p>It wasn't slow for up to hundreds of each media type but would slow down for thousands. </p>
<p>Other way I tried is :</p>
<pre><code> max_disks = 5
max_ssds = 5
max_tapes = 1
max_BR = 1
allocations = [(i,j,k,l) for i in range(max_disks) for j in range(max_ssds) for k in range(max_tapes) for l in range(max_BR)]
</code></pre>
<p>This way it is slow even for such small numbers.</p>
<p>Two questions:</p>
<ol>
<li>Why the second one is slow for small numbers?</li>
<li>How can I make my program work for big numbers (in thousands)?</li>
</ol>
<p>Here is the version with itertools.product</p>
<pre><code> max_disks = 500
max_ssds = 100
max_tapes = 100
max_BR = 100
# allocations = []
for i, j, k,l in itertools.product(range(max_disks),range(max_ssds),range(max_tapes),range(max_BR)):
pass
</code></pre>
<p>It takes 19.8 seconds to finish with these numbers.</p>
| 0 |
2016-09-19T08:57:09Z
| 39,569,618 |
<p>Doing any operation in Python a trillion times is going to be slow. However, that's not all you're doing. By attempting to store all the trillion items in a single list you are storing lots of data in memory and manipulating it in a way that creates a lot of work for the computer to swap memory in and out once it no longer fits in RAM.</p>
<p>The way that Python lists work is that they allocate some amount of memory to store the items in the list. When you fill up the list and it needs to allocate more, Python will allocate twice as much memory and copy all the old entries into the new storage space. This is fine so long as it fits in memory - even though it has to copy all the contents of the list each time it expands the storage, it has to do so less frequently as it keeps doubling the size. The problem comes when it runs out of memory and has to swap unused memory out to disk. The next time it tries to resize the list, it has to reload from disk all the entries that are now swapped out to disk, then swap them all back out again to get space to write the new entries. So this creates lots of slow disk operations that will get in the way of your task and slow it down even more.</p>
<p>Do you really need to store every item in a list? What are you going to do with them when you're done? You could perhaps write them out to disk as you're going instead of accumulating them in a giant list, though if you have a trillion of them, that's still a very large amount of data! Or perhaps you're filtering most of them out? That will help.</p>
<p>All that said, without seeing the actual program itself, it's hard to know if you have a hope of completing this work by an exhaustive search. Can all the variables be on the thousands scale at once? Do you really need to consider every combination of these variables? When max_disks==2000, do you really need to distinguish the results for i=1731 from i=1732? For example, perhaps you could consider values of i 1,2,3,4,5,10,20,30,40,50,100,200,300,500,1000,2000? Or perhaps there's a mathematical solution instead? Are you just counting items?</p>
| 0 |
2016-09-19T09:24:32Z
|
[
"python",
"performance",
"linear-programming",
"constraint-programming"
] |
Using arg parser in python in another class
| 39,569,170 |
<p>I'm trying to write a test in Selenium using python,</p>
<p>I managed to run the test and it passed, but now I want to add an arg parser so I can give the test a different URL as an argument.</p>
<p>The thing is that my test is inside a class,
so when I'm passing the argument I get an error:</p>
<pre><code> app_url= (args['base_url'])
NameError: global name 'args' is not defined
</code></pre>
<p>How can I get args to be defined inside the Selenium class?</p>
<p>This is my code:</p>
<pre><code>from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import NoAlertPresentException
from selenium import webdriver
import unittest, time, re
import os
import string
import random
import argparse
def id_generator(size=6, chars=string.ascii_uppercase + string.digits):
return ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(8))
agmuser = id_generator()
class Selenium(unittest.TestCase):
def setUp(self):
chromedriver = "c:\chromedriver.exe"
os.environ["webdriver.chrome.driver"] = chromedriver
self.driver = webdriver.Chrome(chromedriver)
app_url = (args['base_url'])
#app_url= "http://myd-vm16635.fufu.net:8080/"
print "this is the APP URL:" + ' ' + app_url
self.base_url = app_url
self.verificationErrors = []
self.accept_next_alert = True
def test_selenium(self):
#id_generator.user = id_generator()
driver = self.driver
driver.get(self.base_url + "portal/")
driver.find_element_by_css_selector("span").click()
driver.find_element_by_id("j_loginName").clear()
driver.find_element_by_id("j_loginName").send_keys(agmuser)
driver.find_element_by_id("btnSubmit").click()
driver.find_element_by_link_text("Login as" + ' ' + agmuser).click()
driver.find_element_by_css_selector("#mock-portal-Horizon > span").click()
# driver.find_element_by_id("gwt-debug-new-features-cancel-button").click()
# driver.find_element_by_xpath("//table[@id='gwt-debug-module-dropdown']/tbody/tr[2]/td[2]").click()
# driver.find_element_by_id("gwt-debug-menu-item-release-management").click()
def is_element_present(self, how, what):
try: self.driver.find_element(by=how, value=what)
except NoSuchElementException as e: return False
return True
def is_alert_present(self):
try: self.driver.switch_to_alert()
except NoAlertPresentException as e: return False
return True
def close_alert_and_get_its_text(self):
try:
alert = self.driver.switch_to_alert()
alert_text = alert.text
if self.accept_next_alert:
alert.accept()
else:
alert.dismiss()
return alert_text
finally: self.accept_next_alert = True
def tearDown(self):
self.driver.quit()
self.assertEqual([], self.verificationErrors)
if __name__ == "__main__":
#####################******SCRIPT PARAMS******###################################
# these values can be changed type 'python selenium_job.py --help' for assistance
##################################################################################
parser = argparse.ArgumentParser(description='DevOps team - Sanity test')
parser.add_argument('-b', '--base_url', help='base_url', default="http://myd-vm16635.fufu.net:8080/")
args = vars(parser.parse_args())
unittest.main()
</code></pre>
| 0 |
2016-09-19T08:59:44Z
| 39,570,287 |
<p>Put the <code>parser = argparse.ArgumentParser(...)</code> and <code>parser.add_argument()</code> calls outside <code>if __name__ == "__main__":</code> so that the parser always gets created on import, while the arguments are only parsed when the script runs as main. Keep <code>args = vars(parser.parse_args())</code> inside <code>__main__</code>.</p>
<p>That way you can import it from the file like <code>from selenium_tests import parser</code> and then in your other script, do <code>parser.parse_args()</code>.</p>
<p>And a cleaner way to do it is to create a function which returns the parser, like:</p>
<pre><code>def get_parsed_args():
parser = argparse.ArgumentParser(...)
parser.add_argument(...)
# etc.
return parser.parse_args()

# and then call that in the main program:
if __name__ == '__main__':
args = get_parsed_args()
# etc.
</code></pre>
<p>And in other scripts which you want to import it into, do </p>
<pre><code>from selenium_tests import get_parsed_args
if __name__ == '__main__':
args = get_parsed_args()
# etc.
</code></pre>
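<p>A runnable sketch of the whole pattern (the override URL is illustrative). Passing an explicit <code>argv</code> list to <code>parse_args</code> also keeps argparse from fighting over <code>sys.argv</code> with <code>unittest.main()</code>:</p>

```python
import argparse

def get_parsed_args(argv=None):
    parser = argparse.ArgumentParser(description='DevOps team - Sanity test')
    parser.add_argument('-b', '--base_url', help='base_url',
                        default="http://myd-vm16635.fufu.net:8080/")
    # argv=None means "use sys.argv[1:]"; passing a list overrides that
    return vars(parser.parse_args(argv))

args = get_parsed_args(['-b', 'http://example-host:9090/'])
print(args['base_url'])   # -> http://example-host:9090/
```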
| 0 |
2016-09-19T09:56:50Z
|
[
"python",
"selenium",
"argparse"
] |
API access authentication/application key (django/nginx/gunicorn)
| 39,569,286 |
<p>I have a web app created in django, running in gunicorn app server behind nginx webserver/reverse-proxy. I need to have external application to access some processed data (csv/json), for which I need some sort of authentication. The basic django auth/login is not optimal as a simple script needs to pull the data with a simple request, no cookies etc (not created by me).</p>
<p>For now, I have </p>
<ol>
<li><p>set up the service being available with https/tls only </p></li>
<li><p>created an IP-filter in django to reduce the "attack surface" with:</p></li>
</ol>
<p><code>request.META['HTTP_X_REAL_IP']</code></p>
<p>and using nginx to forward the ip with:</p>
<pre><code>proxy_set_header X-Real-IP $remote_addr;
</code></pre>
<p>Next I was thinking to include an application key (a hash of a password or something) which needs to be included in the request and is checked against a list of valid keys in the db. </p>
<p>Is this a suitable API authentication approach, or is there something else which can be used/recommended? Some sort of application key framework? </p>
| 0 |
2016-09-19T09:05:58Z
| 39,573,345 |
<p>There are many authentication methods besides session/cookie based ones. For your case I will suggest simple token authentication. Just save the same token in your django app and the external app, and on each request from the external app to django, send an additional header:</p>
<pre><code>Authentication: Token YOUR_TOKEN_KEY
</code></pre>
<p>Now all you need to do in django is to fetch that token and check if it matches one saved locally.</p>
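<p>The check itself only takes a few lines. A framework-agnostic sketch (the header name and token value are illustrative; <code>hmac.compare_digest</code> avoids leaking timing information during the comparison):</p>

```python
import hmac

API_TOKEN = "s3cr3t-token"   # the same value saved in both apps (hypothetical)

def is_authorized(headers):
    # headers: any dict-like mapping of request headers
    auth = headers.get("Authentication", "")
    scheme, _, token = auth.partition(" ")
    # constant-time comparison of the presented token with the stored one
    return scheme == "Token" and hmac.compare_digest(token, API_TOKEN)

print(is_authorized({"Authentication": "Token s3cr3t-token"}))  # -> True
print(is_authorized({"Authentication": "Token wrong-token"}))   # -> False
```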
<p>If you want more auth options for API, check <a href="http://www.django-rest-framework.org/api-guide/authentication/" rel="nofollow">Django Rest Framework documentation</a>.</p>
| 1 |
2016-09-19T12:35:27Z
|
[
"python",
"django",
"nginx"
] |
ggplot multiple plots in one object
| 39,569,306 |
<p>I've created a script to create multiple plots in one object. The results I am looking for are two plots, one over the other, such that each plot has a different y-axis scale but the x-axis is fixed - dates. However, only one of the plots (the top) is properly created; the bottom plot is visible but empty, i.e. the <code>geom_line</code> is not visible. Furthermore, the y-axis of the second plot does not match the range of values - min to max. I also tried using <code>facet_grid (scales="free")</code> but no change in the y-axis. The y-axis for the second graph has a range of 0 to 0.05.</p>
<p>I've limited the date range to the past few weeks. This is the code I am using:</p>
<pre><code> df = df.set_index('date')
weekly = df.resample('w-mon',label='left',closed='left').sum()
data = weekly[-4:].reset_index()
data= pd.melt(data, id_vars=['date'])
pplot = ggplot(aes(x="date", y="value", color="variable", group="variable"), data)
#geom_line()
scale_x_date(labels = date_format('%d.%m'),
limits=(data.date.min() - dt.timedelta(2),
data.date.max() + dt.timedelta(2)))
#facet_grid("variable", scales="free_y")
theme_bw()
</code></pre>
<p>The dataframe sample (df), its a daily dataset containing values for each variable x and a, in this case 'date' is the index:</p>
<pre><code>date x a
2016-08-01 100 20
2016-08-02 50 0
2016-08-03 24 18
2016-08-04 0 10
</code></pre>
<p>The dataframe sample (to_plot) - weekly overview:</p>
<pre><code> date variable value
0 2016-08-01 x 200
1 2016-08-08 x 211
2 2016-08-15 x 104
3 2016-08-22 x 332
4 2016-08-01 a 8
5 2016-08-08 a 15
6 2016-08-15 a 22
7 2016-08-22 a 6
</code></pre>
<p>Sorry for not adding the df dataframe before.</p>
| 3 |
2016-09-19T09:06:56Z
| 39,741,321 |
<p>Your calls to the plot directives <code>geom_line()</code>, <code>scale_x_date()</code>, etc. are standing on their own in your script; you do not connect them to your plot object. Thus, they do not have any effect on your plot.</p>
<p>In order to apply a plot directive to an existing plot object, use the <em>graphics language</em> and "add" them to your plot object by connecting them with a <code>+</code> operator.</p>
<p>The result (as intended):</p>
<p><a href="http://i.stack.imgur.com/Pz1Ky.png" rel="nofollow"><img src="http://i.stack.imgur.com/Pz1Ky.png" alt="ggplot in python with facet_grid"></a></p>
<p>The full script:</p>
<pre><code>from __future__ import print_function
import sys
import pandas as pd
import datetime as dt
from ggplot import *
if __name__ == '__main__':
df = pd.DataFrame({
'date': ['2016-08-01', '2016-08-08', '2016-08-15', '2016-08-22'],
'x': [100, 50, 24, 0],
'a': [20, 0, 18, 10]
})
df['date'] = pd.to_datetime(df['date'])
data = pd.melt(df, id_vars=['date'])
plt = ggplot(data, aes(x='date', y='value', color='variable', group='variable')) +\
scale_x_date(
labels=date_format('%y-%m-%d'),
limits=(data.date.min() - dt.timedelta(2), data.date.max() + dt.timedelta(2))
) +\
geom_line() +\
facet_grid('variable', scales='free_y')
plt.show()
</code></pre>
| 0 |
2016-09-28T07:55:24Z
|
[
"python",
"plot",
"python-ggplot"
] |
How to change NLTK default wordnet language to zsm?
| 39,569,307 |
<p>I'm new to NLTK and I'm working through Python 3 Text Processing with NLTK 3 Cookbook: Chapter 4. I've done "Using WordNet for tagging" and it works fine in the default language, English. I've downloaded the Bahasa language data (zsm) for omw and want to try it in Bahasa using other datasets. Using the same approach, how can I change the default language from English to zsm now?</p>
<p>Code that I'm using:</p>
<pre><code>class WordNetTagger(SequentialBackoffTagger):
def __init__(self, *args, **kwargs):
SequentialBackoffTagger.__init__(self, *args, **kwargs)
self.wordnet_tag_map = {
'n': 'NN',
's': 'JJ',
'a': 'JJ',
'r': 'RB',
'v': 'VB'
}
def choose_tag(self, tokens, index, history):
word = tokens[index]
fd = FreqDist()
for synset in wordnet.synsets(word):
fd[synset.pos()] += 1
if not fd: return None
return self.wordnet_tag_map.get(fd.max())
</code></pre>
<p>Thanks in advance.</p>
| 0 |
2016-09-19T09:06:58Z
| 39,572,071 |
<p>After some trial and error I figured it out:</p>
<pre><code>def choose_tag(self, tokens, index, history):
word = tokens[index]
fd = FreqDist()
for synset in wordnet.synsets(word, lang='zsm'):
fd[synset.pos()] += 1
if not fd: return None
return self.wordnet_tag_map.get(fd.max())
</code></pre>
<p>The key is that wordnet.synsets(word, lang='zsm') now works for me. I'm still open to any other suggestions or corrections. Thanks.</p>
| 0 |
2016-09-19T11:26:47Z
|
[
"python",
"nltk"
] |
How to change NLTK default wordnet language to zsm?
| 39,569,307 |
<p>I'm new to NLTK and I'm working through Python 3 Text Processing with NLTK 3 Cookbook: Chapter 4. I've done "Using WordNet for tagging" and it works fine in the default language, English. I've downloaded the Bahasa language data (zsm) for omw and want to try it in Bahasa using other datasets. Using the same approach, how can I change the default language from English to zsm now?</p>
<p>Code that I'm using:</p>
<pre><code>class WordNetTagger(SequentialBackoffTagger):
def __init__(self, *args, **kwargs):
SequentialBackoffTagger.__init__(self, *args, **kwargs)
self.wordnet_tag_map = {
'n': 'NN',
's': 'JJ',
'a': 'JJ',
'r': 'RB',
'v': 'VB'
}
def choose_tag(self, tokens, index, history):
word = tokens[index]
fd = FreqDist()
for synset in wordnet.synsets(word):
fd[synset.pos()] += 1
if not fd: return None
return self.wordnet_tag_map.get(fd.max())
</code></pre>
<p>Thanks in advance.</p>
| 0 |
2016-09-19T09:06:58Z
| 39,578,212 |
<p>As you seem to have figured out, you don't change the <em>default</em> language; you explicitly specify the language you want, whenever you don't want the default. If you find this onerous, you could wrap the <code>wordnet</code> object in your own custom class that provides its own defaults. </p>
<pre><code>class MyWordNet:
def __init__(self, wn):
self._wordnet = wn
def synsets(self, word, pos=None, lang="zsm"):
return self._wordnet.synsets(word, pos=pos, lang=lang)
# and similarly for any other methods you need
</code></pre>
<p>Then you initialize a wrapper object, passing it the nltk's <code>wordnet</code> reader object, and later you use this instead of the original:</p>
<pre><code>wn = MyWordNet(wordnet)
...
for synset in wn.synsets(word):
...
</code></pre>
| 0 |
2016-09-19T16:49:22Z
|
[
"python",
"nltk"
] |
Skipping the unmatched Index lines pyspark
| 39,569,405 |
<p>I want to skip the lines whose fields would be out of the list index range, i.e. keep only the lines that match the given indices. </p>
<p>Following is my data,</p>
<pre><code>12,34,5,6,7,8,.......
23,45,657,78,34,.......
0,2,34
15,78,65,78,9,...
</code></pre>
<p><strong>I want to extract the fields x[0], x[1], x[2], x[3].</strong> Some lines in my dataset have fewer fields and will throw list index out of range, so I want to skip the lines that don't have all four fields. How could I achieve this in Spark with Python?</p>
<p>This is what I tried:</p>
<pre><code>def takeOnly3fields(data):
for row in data:
if not len(row) <=3:
return ",".join(row)
ff = file1.map(takeOnly3fields)
print(ff.collect()) will return NULL,NULL,NULL
</code></pre>
| 0 |
2016-09-19T09:12:59Z
| 39,569,957 |
<p>If the data is single-dimensional, the following is the code you need:</p>
<pre><code>def takeOnly3fields(data):
return ",".join(data) if len(data)>3 else None
</code></pre>
<p>map will iterate the entire data for you and you just need a transformation function on the data set.</p>
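<p>To see the idea without a Spark cluster, here is the same transformation with the filter step you would normally chain after the <code>map</code> so the <code>None</code>s get dropped, using plain-Python stand-ins for the RDD operations:</p>

```python
lines = [["12", "34", "5", "6", "7"],
         ["0", "2", "34"],
         ["15", "78", "65", "78"]]

def takeOnly3fields(data):
    return ",".join(data) if len(data) > 3 else None

# roughly: file1.map(takeOnly3fields).filter(lambda r: r is not None)
mapped = [takeOnly3fields(row) for row in lines]
kept = [r for r in mapped if r is not None]
print(kept)   # -> ['12,34,5,6,7', '15,78,65,78']
```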
| 1 |
2016-09-19T09:40:56Z
|
[
"python",
"list",
"apache-spark",
"pyspark"
] |
XPath in scrapy returns elements which don't exist
| 39,569,480 |
<p>I am creating a new scrapy spider and everything is going pretty well, although I have a problem with one of the websites, where response.xpath returns objects in the list which don't exist in the html code:</p>
<pre><code>{"pdf_name": ["\n\t\t\t\t\t\t\t\t\t", "ZZZZZZ", "\n\t\t\t\t\t\t\t\t\t", "PDF", "\n\t\t\t\t\t\t\t\t"],
{"pdf_name": ["\n\t\t\t\t\t\t\t\t\t\t", "YYYYYY", "\n\t\t\t\t\t\t\t\t\t\t", "XXXXXX"]}
</code></pre>
<p>As you can see below, these "empty" objects (\t and \n) are not included in HTML tags. If I understand correctly, xpath is including whitespaces before tags:</p>
<pre><code><div class="inner d-i-b va-t" role="group">
<a class="link-to" href="A.pdf" target="_blank">
<i class="offscreen">ZZZZZZ</i>
<span>PDF</span>
</a>
<div class="text-box">
<a href="A.pdf">
<i class="offscreen">YYYYYY</i>
<p>XXXXXX</p></a>
</div>
</div>
</code></pre>
<p>I know that I can strip() strings and remove white spaces, although it would only mitigate the issue, not remove the main problem, which is including white spaces in results.</p>
<p>Why is it happening? How to limit XPath results only to tags (I thought previously that it is done by default)?</p>
<p><strong>Spider code - parse function (pdf_name is causing problems)</strong></p>
<pre><code>def parse(self, response):
# Select all links to pdfs
for pdf in response.xpath('//a[contains(@href, ".pdf")]'):
item = PdfItem()
# Create a list of text fields for links to PDFs and their descendants
item['pdf_name'] = pdf.xpath('descendant::text()').extract()
yield item
</code></pre>
| 0 |
2016-09-19T09:16:45Z
| 39,569,686 |
<p>Whitespace is part of the document. Just because <em>you</em> think it is unimportant does not make it go away.</p>
<p>A text node is a text node, whether it consists of <code>' '</code> (the space character) or any other character makes no difference at all.</p>
<p>You can normalize the whitespace with the <code>normalize-space()</code> XPath function:</p>
<pre><code>def parse(self, response):
for pdf_link in response.xpath('//a[contains(@href, ".pdf")]'):
item = PdfItem()
item['pdf_name'] = pdf_link.xpath('normalize-space(.)').extract()
yield item
</code></pre>
<p>First, <code>normalize-space()</code> converts its argument to string, which is done by concatenating all descendant text nodes. Then it trims leading and trailing spaces and collapses any consecutive whitespace (including line breaks) into a single space. Something like this <code>'\n bla \n\n bla '</code> would become <code>'bla bla'</code>.</p>
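<p>For comparison, the same normalization is easy to reproduce in plain Python after extraction, in case you prefer to clean up the extracted strings instead: <code>" ".join(s.split())</code> trims and collapses whitespace just like <code>normalize-space()</code>:</p>

```python
def normalize_space(s):
    # trim leading/trailing whitespace, collapse internal runs to one space
    return " ".join(s.split())

print(normalize_space("\n\t\t\tZZZZZZ\n\t\t\tPDF\n\t\t"))  # -> ZZZZZZ PDF
print(normalize_space("\n   bla \n\n bla "))               # -> bla bla
```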
| 2 |
2016-09-19T09:27:47Z
|
[
"python",
"xpath",
"scrapy"
] |
Where do I place the XlsxWriter file in Python?
| 39,569,497 |
<p>I downloaded the XlsxWriter zip file, as I cannot use pip because of organisational restrictions. I extracted the zip file. Where inside the Python directory should I place the XlsxWriter folder now?</p>
| -2 |
2016-09-19T09:17:33Z
| 39,569,631 |
<p>For most of the cases use <code>pip</code> to install Python modules. It is very easy.</p>
<p>Just do:</p>
<pre><code>pip install XlsxWriter
</code></pre>
<p>And then in your script you can do the following:</p>
<pre><code>import xlsxwriter
{...your code goes here}
</code></pre>
<p>If somehow you are not able to use pip, please follow <a href="http://www.instructables.com/id/How-to-install-Python-packages-on-Windows-7/" rel="nofollow">this</a></p>
<p>After that close and re-open your IDLE and check.</p>
| 2 |
2016-09-19T09:25:09Z
|
[
"python",
"xlsxwriter"
] |
Seaborn tsplot shows nothing
| 39,569,582 |
<p>I have a data frame called <code>amounts_month</code> of such a type:</p>
<pre><code> product accounting_month amount
0 A 201404 204748.0
1 A 201405 445064.0
2 B 201404 649326.0
3 B 201405 876738.0
4 C 201404 1046336.0
</code></pre>
<p>But when I evaluate</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt
sns.tsplot(data=amounts_month,
time='accounting_month',
value='amount',
condition='product'
)
</code></pre>
<p>I get an empty plot. What's wrong with my code?</p>
| 0 |
2016-09-19T09:22:27Z
| 39,570,865 |
<p>You can try adding one more data point for product <code>C</code>:</p>
<pre><code>product accounting_month amount
A 201404 204748.0
A 201405 445064.0
B 201404 649326.0
B 201405 876738.0
C 201404 1046336.0
C 201405 1046336.0
</code></pre>
<p>then try the following code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib as mpl
# change the `accounting_month` to datetime
amounts_month['accounting_month'] = pd.to_datetime(amounts_month['accounting_month'], format="%Y%m")
fig, ax = plt.subplots()
sns.tsplot(data=amounts_month,
time='accounting_month',
value='amount',
unit='product', # add a unit
condition='product',
ax=ax)
def myFormatter(x, pos):
return pd.to_datetime(x)
# assign locator and formatter for the xaxis ticks.
ax.xaxis.set_major_formatter(mpl.ticker.FuncFormatter(myFormatter))
# put the labels at 45deg since they tend to be too long
fig.autofmt_xdate()
plt.show()
</code></pre>
<p>Result:</p>
<p><a href="http://i.stack.imgur.com/tz74O.png" rel="nofollow"><img src="http://i.stack.imgur.com/tz74O.png" alt="enter image description here"></a></p>
| 1 |
2016-09-19T10:24:57Z
|
[
"python",
"matplotlib",
"seaborn"
] |
How to use sqoop command in python code for incremental import
| 39,569,733 |
<p>I want to do an incremental import from user_location_history, and after the incremental import I want to save the last id in user_location_updated, so that it can be automated in the future.</p>
<pre><code>#!/usr/bin/python
import subprocess
import time
import subprocess
import MySQLdb
import datetime
import sys
import pytz
import os
from subprocess import call
def get_mysql_cursor():
conn_1 = MySQLdb.connect(user='db', passwd='bazookadb', host='10.216.204.20', db='bazooka')
conn_2 = MySQLdb.connect(user='db', passwd='bazookadb', host='10.216.204.7', db='bazooka')
#print conn_1,conn_2
return conn_1.cursor(),conn_2.cursor()
def get_records():
cur_1,cur_2 = get_mysql_cursor()
cur_1.execute("select updated from user_location_updated")
cur_2.execute("select max(moving_date) from user_location_history")
return cur_1.fetchone(),cur_2.fetchone()
def update_records(update_date):
cur_1,cur_2 = get_mysql_cursor()
print update_date
query = "update user_location_updated set updated = '"+str(update_date)+"' where id='1' "
print query
result = cur_1.execute(query)
print result
result = get_records()
update_result = update_records(result[1][0])
print result[0][0]
print result[1][0]
sqoopcom = "sqoop import --connect jdbc:mysql://10.216.204.7:3306/bazooka --username db --password bazookadb --fields-terminated-by , --escaped-by \\ --enclosed-by '\"' --table user_location_history -m 1 --hive-delims-replacement ' ' --as-textfile --incremental append --check-column moving_date --last-value 2016-08-04 19:00:36 --target-dir hdfs://example:9000/user/bigdata/sqoopip --verbose"
#os.system(sqoopcom)
exec (sqoopcom)
----but this code is giving error
</code></pre>
| 0 |
2016-09-19T09:29:56Z
| 39,586,561 |
<p>Wrap <code>--last-value</code> in single quotes.</p>
<p>Use <code>--last-value '2016-08-04 19:00:36'</code></p>
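<p>A more robust alternative to building one big shell string (and to the <code>exec</code> call in the question) is to pass the command to <code>subprocess</code> as an argument list; each list element becomes one argv entry, so the timestamp with its embedded space needs no quoting at all. A sketch using the connection details from the question:</p>

```python
import subprocess

# each element is a separate argument, so "2016-08-04 19:00:36"
# reaches sqoop as a single value without any shell quoting
cmd = [
    "sqoop", "import",
    "--connect", "jdbc:mysql://10.216.204.7:3306/bazooka",
    "--username", "db", "--password", "bazookadb",
    "--table", "user_location_history", "-m", "1",
    "--incremental", "append",
    "--check-column", "moving_date",
    "--last-value", "2016-08-04 19:00:36",
    "--target-dir", "hdfs://example:9000/user/bigdata/sqoopip",
]
# subprocess.call(cmd)  # uncomment on a host where sqoop is installed
```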
| 0 |
2016-09-20T05:47:08Z
|
[
"python",
"hadoop",
"pyspark",
"data-warehouse",
"sqoop"
] |
'filters' is not a registered tag library
| 39,569,832 |
<p>I have an issue regarding a template filter. I have used <strong>django allauth</strong> for user registration. I have edited its signup.html and used a loop to iterate over the fields to show them dynamically. I could show the fields but could not determine the type of each field.</p>
<p>What I did is:</p>
<p><strong>account/signup.html</strong></p>
<pre><code>{% load filters %}
<form class="signup" id="signup_form" method="post" action="{% url 'account_signup' %}">
{% csrf_token %}
{% for field in forms.visible_fields %}
<input type="{{ field.field.widget|input_type}}" name="{{ field.name }}" id="{{ field.id_for_label}}" class="form-control">
{% endfor %}
</form>
</code></pre>
<p><strong>template filter inside main app(filters.py)</strong></p>
<pre><code>from django import template
register = template.Library()
@register.filter('input_type')
def input_type(ob):
return ob.__class__.__name__
</code></pre>
<p><strong>templates location</strong></p>
<pre><code>'DIRS': [os.path.join(BASE_DIR, 'templates')
</code></pre>
<p>As I am using django allauth, where should I place my template filter code? The signup form that extends the allauth form with first name and last name is inside the main app.</p>
| 0 |
2016-09-19T09:34:52Z
| 39,570,069 |
<p>The tag library should be placed in a <code>templatetags</code> directory in the root directory of the app:</p>
<p>See code layout from the <a href="https://docs.djangoproject.com/en/1.10/howto/custom-template-tags/#code-layout" rel="nofollow">docs</a>:</p>
<blockquote>
<p>When a Django app is added to <code>INSTALLED_APPS</code>, any tags it defines in
the conventional location described below are automatically made
available to load within templates.</p>
<p>The app should contain a <code>templatetags</code> directory, at the same level as
<code>models.py</code>, <code>views.py</code>, etc. If this doesnât already exist, create it -
donât forget the <code>__init__.py</code> file to ensure the directory is treated
as a Python package.</p>
</blockquote>
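<p>So, assuming the main app is called <code>myapp</code> (hypothetical name) and the filter module is <code>filters.py</code>, the layout that makes <code>{% load filters %}</code> work looks like this:</p>
<pre><code>myapp/
    models.py
    views.py
    templatetags/
        __init__.py
        filters.py
</code></pre>
<p>After creating the <code>templatetags</code> package, restart the development server so the new tag library is picked up.</p>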
| 2 |
2016-09-19T09:46:33Z
|
[
"python",
"django",
"python-3.x",
"django-allauth"
] |
Custom user model permissions
| 39,569,857 |
<p>So everything works fine, but I can't add permissions to my users through the admin.
When I change a user, the box with User permissions appears in place, but all the permissions I add have no effect. When I log in as that user I don't have permission to do anything.</p>
<p><a href="http://i.stack.imgur.com/7TuNI.png" rel="nofollow"><img src="http://i.stack.imgur.com/7TuNI.png" alt="When I try to change an user"></a></p>
<p>This is what I have in "accounts.models":</p>
<pre><code>from django.utils import timezone
from django.db import models
from django.contrib.auth.models import PermissionsMixin
from django.contrib.auth.models import AbstractBaseUser, BaseUserManager
from django.core.mail import send_mail
from django.utils.translation import ugettext_lazy as _
from django.utils.http import urlquote
class CustomUserManager(BaseUserManager):
def _create_user(self, email, password, is_staff, is_superuser, **extra_fields):
now = timezone.now()
if not email:
raise ValueError('The given email must be set')
email= self.normalize_email(email)
user = self.model(
email=email,
is_staff=is_staff, is_active=True,
is_superuser=is_superuser, last_login=now,
date_joined=now, **extra_fields
)
user.set_password(password)
user.save(using=self._db)
return user
def create_user(self, email, password=None, **extra_fields):
return self._create_user(email, password, False, False, **extra_fields)
def create_superuser(self, email, password, **extra_fields):
return self._create_user(email, password, True, True, **extra_fields)
class CustomUser(AbstractBaseUser, PermissionsMixin):
username = models.CharField(max_length=30)
email = models.EmailField(unique=True)
date_joined = models.DateTimeField(_('date joined'))
is_active = models.BooleanField(default=True)
is_admin = models.BooleanField(default=False)
is_staff = models.BooleanField(default=False)
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = ['username']
objects = CustomUserManager()
class Meta:
verbose_name=_('user')
verbose_name_plural = ('users')
def get_absolute_url(self):
return "/users/%s/" % urlquote(self.email)
def get_full_name(self):
return self.username
def get_short_name(self):
return self.username
def email_user(self, subject, message, from_email=None):
send_mail(subject, message, from_email, [self.email])
</code></pre>
<p>This is what I have in "accounts.admin":</p>
<pre><code>from django import forms
from django.contrib import admin
from django.contrib.auth.models import Group
from django.contrib.auth.admin import UserAdmin as BaseUserAdmin
from django.contrib.auth.forms import ReadOnlyPasswordHashField
from .models import CustomUser
class UserCreationForm(forms.ModelForm):
"""A form for creating new users. Includes all the required
fields, plus a repeated password."""
password1 = forms.CharField(label='Password', widget=forms.PasswordInput)
password2 = forms.CharField(label='Password confirmation', widget=forms.PasswordInput)
class Meta:
model = CustomUser
fields = ('email',)
def clean_password2(self):
# Check that the two password entries match
password1 = self.cleaned_data.get("password1")
password2 = self.cleaned_data.get("password2")
if password1 and password2 and password1 != password2:
raise forms.ValidationError("Passwords don't match")
return password2
def save(self, commit=True):
# Save the provided password in hashed format
user = super(UserCreationForm, self).save(commit=False)
user.set_password(self.cleaned_data["password1"])
if commit:
user.save()
return user
class UserChangeForm(forms.ModelForm):
"""A form for updating users. Includes all the fields on
the user, but replaces the password field with admin's
password hash display field.
"""
password = ReadOnlyPasswordHashField()
class Meta:
model = CustomUser
fields = ('email', 'password', 'is_active')
def clean_password(self):
# Regardless of what the user provides, return the initial value.
# This is done here, rather than on the field, because the
# field does not have access to the initial value
return self.initial["password"]
class UserAdmin(BaseUserAdmin):
# The forms to add and change user instances
form = UserChangeForm
add_form = UserCreationForm
# The fields to be used in displaying the User model.
# These override the definitions on the base UserAdmin
# that reference specific fields on auth.User.
list_display = ('email', 'username', 'is_staff', 'is_active', 'last_login', 'date_joined')
list_filter = ('is_staff', 'is_active')
fieldsets = (
(None, {'fields': ('email', 'password', 'is_active')}),
('Permissions', {'fields': ('is_staff', 'groups', 'user_permissions')}),
)
# add_fieldsets is not a standard ModelAdmin attribute. UserAdmin
# overrides get_fieldsets to use this attribute when creating a user.
add_fieldsets = (
(None, {
'classes': ('wide',),
'fields': ('email', 'password1', 'password2')}
),
)
search_fields = ('email', 'username')
ordering = ('email',)
# Now register the new UserAdmin...
admin.site.register(CustomUser, UserAdmin)
# ... and, since we're not using Django's built-in permissions,
# unregister the Group model from admin.
# admin.site.unregister(Group)
</code></pre>
<p><a href="http://i.stack.imgur.com/i338H.png" rel="nofollow"><img src="http://i.stack.imgur.com/i338H.png" alt="enter image description here"></a></p>
| 0 |
2016-09-19T09:35:44Z
| 39,570,433 |
<p>Please check in your database whether your user ID is mapped in the "auth_user_groups" table.</p>
<p>For example: </p>
<pre><code>id: 2155
user_id: 2447276
group_id: 45
</code></pre>
| 2 |
2016-09-19T10:04:20Z
|
[
"python",
"django",
"django-custom-user"
] |
Custom user model permissions
| 39,569,857 |
<p>So everything works fine, but I can't add permissions to my users through the admin.
When I change a user, the box with User permissions appears in place, but all the permissions I add have no effect. When I log in as that user I don't have permission to do anything.</p>
<p><a href="http://i.stack.imgur.com/7TuNI.png" rel="nofollow"><img src="http://i.stack.imgur.com/7TuNI.png" alt="When I try to change an user"></a></p>
<p>This is what I have in "accounts.models":</p>
<pre><code>from django.utils import timezone
from django.db import models
from django.contrib.auth.models import PermissionsMixin
from django.contrib.auth.models import AbstractBaseUser, BaseUserManager
from django.core.mail import send_mail
from django.utils.translation import ugettext_lazy as _
from django.utils.http import urlquote
class CustomUserManager(BaseUserManager):
def _create_user(self, email, password, is_staff, is_superuser, **extra_fields):
now = timezone.now()
if not email:
raise ValueError('The given email must be set')
email= self.normalize_email(email)
user = self.model(
email=email,
is_staff=is_staff, is_active=True,
is_superuser=is_superuser, last_login=now,
date_joined=now, **extra_fields
)
user.set_password(password)
user.save(using=self._db)
return user
def create_user(self, email, password=None, **extra_fields):
return self._create_user(email, password, False, False, **extra_fields)
def create_superuser(self, email, password, **extra_fields):
return self._create_user(email, password, True, True, **extra_fields)
class CustomUser(AbstractBaseUser, PermissionsMixin):
username = models.CharField(max_length=30)
email = models.EmailField(unique=True)
date_joined = models.DateTimeField(_('date joined'))
is_active = models.BooleanField(default=True)
is_admin = models.BooleanField(default=False)
is_staff = models.BooleanField(default=False)
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = ['username']
objects = CustomUserManager()
class Meta:
verbose_name=_('user')
verbose_name_plural = ('users')
def get_absolute_url(self):
return "/users/%s/" % urlquote(self.email)
def get_full_name(self):
return self.username
def get_short_name(self):
return self.username
def email_user(self, subject, message, from_email=None):
send_mail(subject, message, from_email, [self.email])
</code></pre>
<p>This is what I have in "accounts.admin":</p>
<pre><code>from django import forms
from django.contrib import admin
from django.contrib.auth.models import Group
from django.contrib.auth.admin import UserAdmin as BaseUserAdmin
from django.contrib.auth.forms import ReadOnlyPasswordHashField
from .models import CustomUser
class UserCreationForm(forms.ModelForm):
"""A form for creating new users. Includes all the required
fields, plus a repeated password."""
password1 = forms.CharField(label='Password', widget=forms.PasswordInput)
password2 = forms.CharField(label='Password confirmation', widget=forms.PasswordInput)
class Meta:
model = CustomUser
fields = ('email',)
def clean_password2(self):
# Check that the two password entries match
password1 = self.cleaned_data.get("password1")
password2 = self.cleaned_data.get("password2")
if password1 and password2 and password1 != password2:
raise forms.ValidationError("Passwords don't match")
return password2
def save(self, commit=True):
# Save the provided password in hashed format
user = super(UserCreationForm, self).save(commit=False)
user.set_password(self.cleaned_data["password1"])
if commit:
user.save()
return user
class UserChangeForm(forms.ModelForm):
"""A form for updating users. Includes all the fields on
the user, but replaces the password field with admin's
password hash display field.
"""
password = ReadOnlyPasswordHashField()
class Meta:
model = CustomUser
fields = ('email', 'password', 'is_active')
def clean_password(self):
# Regardless of what the user provides, return the initial value.
# This is done here, rather than on the field, because the
# field does not have access to the initial value
return self.initial["password"]
class UserAdmin(BaseUserAdmin):
# The forms to add and change user instances
form = UserChangeForm
add_form = UserCreationForm
# The fields to be used in displaying the User model.
# These override the definitions on the base UserAdmin
# that reference specific fields on auth.User.
list_display = ('email', 'username', 'is_staff', 'is_active', 'last_login', 'date_joined')
list_filter = ('is_staff', 'is_active')
fieldsets = (
(None, {'fields': ('email', 'password', 'is_active')}),
('Permissions', {'fields': ('is_staff', 'groups', 'user_permissions')}),
)
# add_fieldsets is not a standard ModelAdmin attribute. UserAdmin
# overrides get_fieldsets to use this attribute when creating a user.
add_fieldsets = (
(None, {
'classes': ('wide',),
'fields': ('email', 'password1', 'password2')}
),
)
search_fields = ('email', 'username')
ordering = ('email',)
# Now register the new UserAdmin...
admin.site.register(CustomUser, UserAdmin)
# ... and, since we're not using Django's built-in permissions,
# unregister the Group model from admin.
# admin.site.unregister(Group)
</code></pre>
<p><a href="http://i.stack.imgur.com/i338H.png" rel="nofollow"><img src="http://i.stack.imgur.com/i338H.png" alt="enter image description here"></a></p>
| 0 |
2016-09-19T09:35:44Z
| 39,578,825 |
<p>I have redone the custom user model following this <a href="https://www.caktusgroup.com/blog/2013/08/07/migrating-custom-user-model-django/" rel="nofollow">article</a>, and now it works as expected.</p>
| 0 |
2016-09-19T17:26:36Z
|
[
"python",
"django",
"django-custom-user"
] |
Specify 'pip' version in requirements.txt
| 39,569,911 |
<p>I develop a Python/Django application, which runs from a virtual environment (created by <code>virtualenv</code>). </p>
<p>When the virtual environment is created, the global version of <code>pip</code> is copied to the newly created environment by default, which might be quite outdated (for example, version <code>1.5.4</code> from <code>python-pip</code> package on Ubuntu 14.04).</p>
<p>To avoid manual <code>pip</code> upgrades, it sounds like a good idea to pin the <code>pip</code> version in <code>requirements.txt</code> file, for instance by adding the following line:</p>
<p><code>pip==8.1.2</code></p>
<p>Specifying the <code>pip</code> version there will also allow to upgrade <code>pip</code> in all the managed application environments (local, dev, production) by changing the line in the requirements file. </p>
<p>Does this sound like a good practice? Is there anything that can go wrong with this approach?</p>
| 0 |
2016-09-19T09:38:24Z
| 39,570,152 |
<p>What you experience is caused by an old version of <code>python-virtualenv</code> delivered with Ubuntu 14.04. You should remove the Ubuntu package and install via pip: </p>
<pre><code>sudo pip install virtualenv
</code></pre>
<p>Then make sure you have the newest pip installed as well. </p>
<pre><code>sudo pip install -U pip
</code></pre>
<p>And you should get that version installed in new virtual environments.</p>
| 0 |
2016-09-19T09:51:08Z
|
[
"python",
"pip",
"virtualenv",
"requirements.txt"
] |
Specify 'pip' version in requirements.txt
| 39,569,911 |
<p>I develop a Python/Django application, which runs from a virtual environment (created by <code>virtualenv</code>). </p>
<p>When the virtual environment is created, the global version of <code>pip</code> is copied to the newly created environment by default, which might be quite outdated (for example, version <code>1.5.4</code> from <code>python-pip</code> package on Ubuntu 14.04).</p>
<p>To avoid manual <code>pip</code> upgrades, it sounds like a good idea to pin the <code>pip</code> version in <code>requirements.txt</code> file, for instance by adding the following line:</p>
<p><code>pip==8.1.2</code></p>
<p>Specifying the <code>pip</code> version there will also allow to upgrade <code>pip</code> in all the managed application environments (local, dev, production) by changing the line in the requirements file. </p>
<p>Does this sound like a good practice? Is there anything that can go wrong with this approach?</p>
| 0 |
2016-09-19T09:38:24Z
| 39,570,844 |
<p>Please note that the <code>pip</code> version listed in <code>requirements.txt</code> is installed along with the other requirements. So all the requirements are installed by the old version of <code>pip</code>, and the version specified in <code>requirements.txt</code> only becomes available afterwards.</p>
<p>I always do:</p>
<pre><code>virtualenv /path/to/my/desired/venv/
source /path/to/my/desired/venv/bin/activate
pip install -U pip
pip install -r requirements.txt
</code></pre>
| 2 |
2016-09-19T10:24:12Z
|
[
"python",
"pip",
"virtualenv",
"requirements.txt"
] |
BigQuery Standard SQL using Python cannot use OFFSET keyword
| 39,569,928 |
<p>I am trying to use BigQuery Standard SQL with Python API, though I cannot execute the query that ran successfully in WEB UI.</p>
<p>Basically, I am splitting a string and then using OFFSET keyword to get the value at a particular index. As follows:</p>
<pre><code>CASE WHEN t.depth = 1 THEN '' WHEN t.depth = 2 THEN '' WHEN t.depth = 3 THEN '' WHEN t.depth = 4 THEN '' WHEN t.depth = 5 THEN '' WHEN t.depth = 6 THEN t.curr WHEN t.depth = 7 THEN SPLIT(t.ancestry,'/')[OFFSET(6)] ELSE '' END AS level7,
CASE WHEN t.depth = 1 THEN '' WHEN t.depth = 2 THEN '' WHEN t.depth = 3 THEN '' WHEN t.depth = 4 THEN '' WHEN t.depth = 5 THEN t.curr WHEN t.depth = 6 THEN SPLIT(t.ancestry,'/')[OFFSET(5)] WHEN t.depth = 7 THEN SPLIT(t.ancestry,'/')[OFFSET(5)] ELSE '' END AS level6,
</code></pre>
<p>The above code runs without an issue in WEB UI, whereas using Python API and setting <code>useLegacySQL = False</code>, I get the following error</p>
<pre><code>raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://www.googleapis.com/bigquery/v2/projects/*************
returned "Encountered " "]" "[OFFSET(6)] "" at line 7, column 217. Was expecting: "END" ...">
</code></pre>
<p>Any help is appreciated.</p>
| 0 |
2016-09-19T09:39:19Z
| 39,577,282 |
<p>It looks like the query is being executed using legacy SQL based on the error message. I see the same message when I try to execute this using legacy SQL, for instance:</p>
<pre><code>SELECT
CASE s
WHEN 'first' THEN SPLIT(arr, ',')[OFFSET(0)]
WHEN 'second' THEN SPLIT(arr, ',')[OFFSET(1)]
ELSE NULL
END AS val
FROM (SELECT '1,2' AS arr, 'second' AS s);
Error: Encountered " "]" "[OFFSET(0)] "" at line 1, column 48. Was expecting: "END" ...
</code></pre>
<p>Edit: The example linked from <a href="https://cloud.google.com/bigquery/sql-reference/enabling-standard-sql" rel="nofollow">Enabling Standard SQL</a> is incorrect. Rather than <code>useLegacySQL</code>, the option is <code>useLegacySql</code> (with <code>q</code> and <code>l</code> in lowercase). The tracking issue is at <a href="https://code.google.com/p/google-bigquery/issues/detail?id=701" rel="nofollow">https://code.google.com/p/google-bigquery/issues/detail?id=701</a>.</p>
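<p>For reference, a minimal sketch of a request body for the <code>jobs.query</code> endpoint with the option spelled correctly (the query string here is just an illustration):</p>

```python
# note the exact casing: useLegacySql (lowercase 'q' and 'l'),
# not useLegacySQL
query_request_body = {
    "query": "SELECT SPLIT('1,2', ',')[OFFSET(1)] AS val",
    "useLegacySql": False,
}
# bigquery.jobs().query(projectId=project_id, body=query_request_body).execute()
```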
| 0 |
2016-09-19T15:56:10Z
|
[
"python",
"google-bigquery"
] |
ImportError: No module named hmmlearn.hmm python 2.7
| 39,570,126 |
<p>I got this error in my python script:</p>
<pre><code>from hmmlearn.hmm import GaussianHMM
</code></pre>
<p>I know I need some libraries, that's why I ran the following:</p>
<pre><code> git clone git://github.com/hmmlearn/hmmlearn.git
 pip install -U --user hmmlearn
</code></pre>
<p>I am getting stuck because of this problem. I haven't found any solution; I tried Google and many commands but the problem still happens.</p>
| 0 |
2016-09-19T09:50:04Z
| 39,570,402 |
<p>According to the GitHub page of the repo, it requires specific versions of some packages. Make sure you do a pip upgrade for all of those packages:</p>
<ul>
<li>Python >=2.6 </li>
<li>numpy >= 1.9.3 </li>
<li>scipy >= 0.16.0 </li>
<li>scikit-learn >= 0.16</li>
</ul>
<p>Then try again and it should work.</p>
<p>To upgrade a package, simply run:
<code>pip install --upgrade <package-name></code></p>
| 0 |
2016-09-19T10:02:51Z
|
[
"python",
"ubuntu-14.04",
"hmmlearn"
] |
ImportError: No module named hmmlearn.hmm python 2.7
| 39,570,126 |
<p>I got this error in my python script:</p>
<pre><code>from hmmlearn.hmm import GaussianHMM
</code></pre>
<p>I know I need some libraries, that's why I ran the following:</p>
<pre><code> git clone git://github.com/hmmlearn/hmmlearn.git
 pip install -U --user hmmlearn
</code></pre>
<p>I am getting stuck because of this problem. I haven't found any solution; I tried Google and many commands but the problem still happens.</p>
| 0 |
2016-09-19T09:50:04Z
| 39,572,768 |
<p>I solved this by cloning the repository and running:</p>
<pre><code>sudo python setup.py install
</code></pre>
| 0 |
2016-09-19T12:05:56Z
|
[
"python",
"ubuntu-14.04",
"hmmlearn"
] |
python def creation within a .py
| 39,570,190 |
<p>I am trying to create a def within a py file that is external, e.g.:</p>
<p><code>calls.py</code>:</p>
<pre><code>def printbluewhale():
whale = animalia.whale("Chordata",
"",
"Mammalia",
"Certariodactyla",
"Balaenopteridae",
"Balaenoptera",
"B. musculus",
"Balaenoptera musculus",
"Blue whale")
print("Phylum - " + whale.getPhylum())
print("Clade - " + whale.getClade())
print("Class - " + whale.getClas())
print("Order - " + whale.getOrder())
print("Family - " + whale.getFamily())
print("Genus - " + whale.getGenus())
print("Species - " + whale.getSpecies())
print("Latin Name - "+ whale.getLatinName())
print("Name - " + whale.getName())
</code></pre>
<p><code>mainwindow.py</code>:</p>
<pre><code>import calls
import animalist
#import defs
keepgoing = 1
print("Entering main window")
while True:
question = input("Which animal would you like to know about?" #The question it self
+ animalist.lst) #Animal Listing
if question == "1":
print(calls.printlion())#Calls the animal definition and prints the characteristics
if question == "2":
print(calls.printdog())
if question == "3":
print(calls.printbluewhale())
'''if question == "new":
def new_animal():
question_2=input("Enter the name of the new animal :")'''
</code></pre>
<p>What I am trying to do is that <code>question == "new"</code> would create a new def in <code>calls.py</code>, and that I would be able to give the <code>def</code> a name and attributes as well.</p>
<p>I was hoping you could lead me to a way of how to do this, and if it is not possible please just say and I will rethink my project :)</p>
| 0 |
2016-09-19T09:52:38Z
| 39,574,257 |
<p>What you're trying to do here seems a bit of a workaround, at least in the way you're trying to handle it.</p>
<p>If I understood the question correctly, you're trying to make a python script that takes input from the user and, if that input is equal to "new", lets the user define a new animal name.</p>
<p>You're currently handling this using a whole lot of manual work, and this is going to be extremely hard to expand, especially considering the size of the data set you're presumably working with (the whole animal kingdom?).</p>
<p>You could try handling it like this:</p>
<p>define a data set using a dictionary:</p>
<pre><code>birds = dict()
fish = dict()
whales = dict()
whales["Blue Whale"] = animalia.whale("Chordata",
"",
"Mammalia",
"Certariodactyla",
"Balaenopteridae",
"Balaenoptera",
"B. musculus",
"Balaenoptera musculus",
"Blue whale")
whales["Killer Whale"] = ... # just as an example, keep doing this to define more whale species.
animals = {"birds": birds, "fish": fish, "whales": whales} # using a dict for this makes you independent from indices, which is much less messy.
</code></pre>
<p>This will build your data set. Presuming every <code>whale</code> class instance (if there is one) inherits properties from a presumptive <code>Animal</code> class that performs all the printing, say:</p>
<pre><code>class Animal:
# do some init
def print_data(self):
print("Phylum - " + self.getPhylum())
print("Clade - " + self.getClade())
print("Class - " + self.getClas())
print("Order - " + self.getOrder())
print("Family - " + self.getFamily())
print("Genus - " + self.getGenus())
print("Species - " + self.getSpecies())
print("Latin Name - "+ self.getLatinName())
print("Name - " + self.getName())
</code></pre>
<p>You can then have a Whale class:</p>
<pre><code>class Whale(Animal):
</code></pre>
<p>Which now has the print_data method.</p>
<pre><code>for whale in whales:
whales[whale].print_data()
</code></pre>
<p>With that out of the way, you can move on to adding input:
In your main.py:</p>
<pre><code>while True:
    question = input("Which animal would you like to know about?" # The question itself
                 + animalist.lst) # Animal listing
    try:
        id = int(question)
        # if the input can be converted to an integer, we assume the user has entered an index.
        key = list(calls.animals.keys())[id]
        for animal in calls.animals[key].values():
            animal.print_data()
    except (ValueError, IndexError):
        if str(question).lower() == "new": # makes this case insensitive
            new_species = input("Please input a new species")
            # here you should process the input to determine what new species you want
            calls.animals[str(new_species)] = dict() # start with an empty category
</code></pre>
<p>Beyond this it's worth mentioning that if you use dicts and arrays, you can put things in a database, and pull your data from there.</p>
<p>Hope this helps :)</p>
| 0 |
2016-09-19T13:19:49Z
|
[
"python",
"python-3.x"
] |
What process is using a given file?
| 39,570,207 |
<p>I'm having trouble with one of my scripts, where it erratically seems to have trouble writing to its own log, throwing the error "This file is being used by another process."</p>
<p>I know there are ways to handle this with try excepts, but I'd like to find out <em>why</em> this is happening rather than just papering over it. Nothing else should be accessing that file at all. So in order to confirm the source of the bug, I'd like to find out what service is using that file.</p>
<p>Is there a way in Python on Windows to check what process is using a given file?</p>
| 0 |
2016-09-19T09:53:20Z
| 39,637,414 |
<p>You can use Microsoft's <a href="https://technet.microsoft.com/en-us/sysinternals/handle.aspx" rel="nofollow">handle.exe</a> command-line utility. For example: </p>
<pre><code>import re
import subprocess
_handle_pat = re.compile(r'(.*?)\s+pid:\s+(\d+).*[0-9a-fA-F]+:\s+(.*)')
def open_files(name):
"""return a list of (process_name, pid, filename) tuples for
open files matching the given name."""
lines = subprocess.check_output('handle.exe "%s"' % name).splitlines()
results = (_handle_pat.match(line.decode('mbcs')) for line in lines)
return [m.groups() for m in results if m]
</code></pre>
<p>Note that this has limitations regarding Unicode filenames. In Python 2 subprocess passes <code>name</code> as an ANSI string because it calls <code>CreateProcessA</code> instead of <code>CreateProcessW</code>. In Python 3 the name gets passed as Unicode. In either case, handle.exe writes its output using a lossy ANSI encoding, so the matched filename in the result tuple may contain best-fit characters and "?" replacements.</p>
| 1 |
2016-09-22T11:10:41Z
|
[
"python",
"windows",
"file"
] |
How to render output of cartridge API's on custom HTML page?
| 39,570,221 |
<p>I am working on a cartridge project. I have created custom html templates for better visual and now I want to render all data which is coming through cartridge's built in APIs on my custom html pages. For.ex. I have a product.html, on which I want to show all products stored in db (category wise). </p>
<p>Actually, I tried to explore url,</p>
<pre><code>url("^shop/", include("cartridge.shop.urls")),
</code></pre>
<p>I am not getting that on which API or Function, this url is hitting.</p>
<p>urls.py file of shop app looks like this, I tested it, none of those url get called,</p>
<pre><code>from __future__ import unicode_literals
from django.conf.urls import url
from mezzanine.conf import settings
from cartridge.shop import views
_slash = "/" if settings.APPEND_SLASH else ""
urlpatterns = [
url("^product/(?P<slug>.*)%s$" % _slash, views.product,
name="shop_product"),
url("^wishlist%s$" % _slash, views.wishlist, name="shop_wishlist"),
url("^cart%s$" % _slash, views.cart, name="shop_cart"),
url("^checkout%s$" % _slash, views.checkout_steps, name="shop_checkout"),
url("^checkout/complete%s$" % _slash, views.complete,
name="shop_complete"),
url("^invoice/(?P<order_id>\d+)%s$" % _slash, views.invoice,
name="shop_invoice"),
url("^invoice/(?P<order_id>\d+)/resend%s$" % _slash,
views.invoice_resend_email, name="shop_invoice_resend"),
]
</code></pre>
<p>These are cartridge's views for '/shop/product', '/shop/wishlist' and '/shop/cart'</p>
<pre><code>from __future__ import unicode_literals
from future.builtins import int, str
from json import dumps
from django.contrib.auth.decorators import login_required
from django.contrib.messages import info
from django.core.urlresolvers import reverse
from django.db.models import Sum
from django.http import Http404, HttpResponse
from django.shortcuts import get_object_or_404, redirect
from django.template import RequestContext
from django.template.defaultfilters import slugify
from django.template.loader import get_template
from django.template.response import TemplateResponse
from django.utils.translation import ugettext as _
from django.views.decorators.cache import never_cache
from mezzanine.conf import settings
from mezzanine.utils.importing import import_dotted_path
from mezzanine.utils.views import set_cookie, paginate
from mezzanine.utils.urls import next_url
from cartridge.shop import checkout
from cartridge.shop.forms import (AddProductForm, CartItemFormSet,
DiscountForm, OrderForm)
from cartridge.shop.models import Product, ProductVariation, Order
from cartridge.shop.models import DiscountCode
from cartridge.shop.utils import recalculate_cart, sign
try:
from xhtml2pdf import pisa
except (ImportError, SyntaxError):
pisa = None
HAS_PDF = pisa is not None
# Set up checkout handlers.
handler = lambda s: import_dotted_path(s) if s else lambda *args: None
billship_handler = handler(settings.SHOP_HANDLER_BILLING_SHIPPING)
tax_handler = handler(settings.SHOP_HANDLER_TAX)
payment_handler = handler(settings.SHOP_HANDLER_PAYMENT)
order_handler = handler(settings.SHOP_HANDLER_ORDER)
def product(request, slug, template="shop/product.html",
form_class=AddProductForm, extra_context=None):
"""
Display a product - convert the product variations to JSON as well as
handling adding the product to either the cart or the wishlist.
"""
published_products = Product.objects.published(for_user=request.user)
product = get_object_or_404(published_products, slug=slug)
fields = [f.name for f in ProductVariation.option_fields()]
variations = product.variations.all()
variations_json = dumps([dict([(f, getattr(v, f))
for f in fields + ["sku", "image_id"]]) for v in variations])
to_cart = (request.method == "POST" and
request.POST.get("add_wishlist") is None)
initial_data = {}
if variations:
initial_data = dict([(f, getattr(variations[0], f)) for f in fields])
initial_data["quantity"] = 1
add_product_form = form_class(request.POST or None, product=product,
initial=initial_data, to_cart=to_cart)
if request.method == "POST":
if add_product_form.is_valid():
if to_cart:
quantity = add_product_form.cleaned_data["quantity"]
request.cart.add_item(add_product_form.variation, quantity)
recalculate_cart(request)
info(request, _("Item added to cart"))
return redirect("shop_cart")
else:
skus = request.wishlist
sku = add_product_form.variation.sku
if sku not in skus:
skus.append(sku)
info(request, _("Item added to wishlist"))
response = redirect("shop_wishlist")
set_cookie(response, "wishlist", ",".join(skus))
return response
related = []
if settings.SHOP_USE_RELATED_PRODUCTS:
related = product.related_products.published(for_user=request.user)
context = {
"product": product,
"editable_obj": product,
"images": product.images.all(),
"variations": variations,
"variations_json": variations_json,
"has_available_variations": any([v.has_price() for v in variations]),
"related_products": related,
"add_product_form": add_product_form
}
context.update(extra_context or {})
templates = [u"shop/%s.html" % str(product.slug), template]
return TemplateResponse(request, templates, context)
@never_cache
def wishlist(request, template="shop/wishlist.html",
form_class=AddProductForm, extra_context=None):
"""
Display the wishlist and handle removing items from the wishlist and
adding them to the cart.
"""
if not settings.SHOP_USE_WISHLIST:
raise Http404
skus = request.wishlist
error = None
if request.method == "POST":
to_cart = request.POST.get("add_cart")
add_product_form = form_class(request.POST or None,
to_cart=to_cart)
if to_cart:
if add_product_form.is_valid():
request.cart.add_item(add_product_form.variation, 1)
recalculate_cart(request)
message = _("Item added to cart")
url = "shop_cart"
else:
error = list(add_product_form.errors.values())[0]
else:
message = _("Item removed from wishlist")
url = "shop_wishlist"
sku = request.POST.get("sku")
if sku in skus:
skus.remove(sku)
if not error:
info(request, message)
response = redirect(url)
set_cookie(response, "wishlist", ",".join(skus))
return response
# Remove skus from the cookie that no longer exist.
published_products = Product.objects.published(for_user=request.user)
f = {"product__in": published_products, "sku__in": skus}
wishlist = ProductVariation.objects.filter(**f).select_related("product")
wishlist = sorted(wishlist, key=lambda v: skus.index(v.sku))
context = {"wishlist_items": wishlist, "error": error}
context.update(extra_context or {})
response = TemplateResponse(request, template, context)
if len(wishlist) < len(skus):
skus = [variation.sku for variation in wishlist]
set_cookie(response, "wishlist", ",".join(skus))
return response
@never_cache
def cart(request, template="shop/cart.html",
cart_formset_class=CartItemFormSet,
discount_form_class=DiscountForm,
extra_context=None):
"""
Display cart and handle removing items from the cart.
"""
cart_formset = cart_formset_class(instance=request.cart)
discount_form = discount_form_class(request, request.POST or None)
if request.method == "POST":
valid = True
if request.POST.get("update_cart"):
valid = request.cart.has_items()
if not valid:
# Session timed out.
info(request, _("Your cart has expired"))
else:
cart_formset = cart_formset_class(request.POST,
instance=request.cart)
valid = cart_formset.is_valid()
if valid:
cart_formset.save()
recalculate_cart(request)
info(request, _("Cart updated"))
else:
# Reset the cart formset so that the cart
# always indicates the correct quantities.
# The user is shown their invalid quantity
# via the error message, which we need to
# copy over to the new formset here.
errors = cart_formset._errors
cart_formset = cart_formset_class(instance=request.cart)
cart_formset._errors = errors
else:
valid = discount_form.is_valid()
if valid:
discount_form.set_discount()
# Potentially need to set shipping if a discount code
# was previously entered with free shipping, and then
# another was entered (replacing the old) without
# free shipping, *and* the user has already progressed
# to the final checkout step, which they'd go straight
# to when returning to checkout, bypassing billing and
# shipping details step where shipping is normally set.
recalculate_cart(request)
if valid:
return redirect("shop_cart")
context = {"cart_formset": cart_formset}
context.update(extra_context or {})
settings.use_editable()
if (settings.SHOP_DISCOUNT_FIELD_IN_CART and
DiscountCode.objects.active().exists()):
context["discount_form"] = discount_form
return TemplateResponse(request, template, context)
</code></pre>
| 0 |
2016-09-19T09:54:00Z
| 39,570,291 |
<p>When you hit the <strong>shop</strong> URL, your application tries to match the empty URL pattern from your cartridge.shop.urls file. So when you want to check which API / view is called, go to that file and look for something similar to this:</p>
<pre><code>url(r'^$', 'your-view', name='your-view'),
</code></pre>
<p>OK, after posting your second urls file, you have the following options:</p>
<p>When you call:</p>
<ol>
<li>/shop/wishlist/ - you are executing a view named <strong>wishlist</strong></li>
<li>/shop/cart/ - you are executing a view named <strong>cart</strong></li>
<li>/shop/checkout/complete/ - you are executing a view named <strong>complete</strong></li>
</ol>
<p>So just open your views.py file; all of those views are defined there.</p>
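A follow-up sketch (my own suggestion, not stated in the answer above): every cartridge view shown earlier accepts a `template` keyword argument, so you can point a view at your own HTML by passing extra kwargs in the urlconf, placed before the generic include so it takes precedence. The path `custom/product.html` is a placeholder for your own template.

```python
from django.conf.urls import include, url
from cartridge.shop import views

urlpatterns = [
    # Override just the product page with a custom template; the extra
    # dict is passed to the view as keyword arguments by Django.
    url(r"^shop/product/(?P<slug>.*)/$", views.product,
        {"template": "custom/product.html"}, name="shop_product"),
    # Everything else keeps cartridge's default templates.
    url(r"^shop/", include("cartridge.shop.urls")),
]
```

Inside `custom/product.html` you then have the full context built by the view (`product`, `variations`, `add_product_form`, and so on) available for your own markup.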
| 0 |
2016-09-19T09:57:04Z
|
[
"python",
"html",
"django",
"mezzanine",
"cartridge"
] |
How to partially remove content from cell in a dataframe using Python
| 39,570,240 |
<p>I have the following dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([
['\nSOVAT\n', 'DVR', 'MEA', '\n195\n'],
['PINCO\nGALLO ', 'DVR', 'MEA\n', '195'],
])
</code></pre>
<p>which looks like this:</p>
<p><a href="http://i.stack.imgur.com/ldKxo.png" rel="nofollow"><img src="http://i.stack.imgur.com/ldKxo.png" alt="enter image description here"></a></p>
<p>My goal is to analyze every single cell of the dataframe so that:</p>
<ul>
<li>if the substring <code>\n</code> appears only once, then I delete it along with all the characters that come before it;</li>
<li>if the substring <code>\n</code> appears more than once in a specific cell, then I remove all the <code>\n</code> contained along with what comes before and after them (except for what is in between)</li>
</ul>
<p>The output of the code should be this:</p>
<p><a href="http://i.stack.imgur.com/Zws8B.png" rel="nofollow"><img src="http://i.stack.imgur.com/Zws8B.png" alt="enter image description here"></a></p>
<p>Note: so far I only know how to remove what comes before or after the substring, using the following commands:</p>
<pre><code>df = df.astype(str).stack().str.split('\n').str[-1].unstack()
df = df.astype(str).stack().str.split('\n').str[0].unstack()
</code></pre>
<p>However, these lines of code do not lead me to the desired result, since the output is:</p>
<p><a href="http://i.stack.imgur.com/WMgyN.png" rel="nofollow"><img src="http://i.stack.imgur.com/WMgyN.png" alt="enter image description here"></a></p>
| 0 |
2016-09-19T09:54:54Z
| 39,570,434 |
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html" rel="nofollow"><code>df.replace</code></a> and some regex.</p>
<pre><code>In [1]: import pandas as pd
...: df = pd.DataFrame([
...: ['\nSOVAT\n', 'DVR', 'MEA', '\n195\n'],
...: ['PINCO\nGALLO ', 'DVR', 'MEA\n', '195'],
...: ])
...:
In [2]: df.replace(r'.*\n(.*)\n?.*', r'\1', regex=True)
Out[2]:
0 1 2 3
0 SOVAT DVR MEA 195
1 GALLO DVR 195
</code></pre>
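The same pattern can be sanity-checked on plain strings with `re.sub` (which is what `df.replace(..., regex=True)` uses under the hood). Without `re.DOTALL`, `.` never crosses a newline, which is exactly what makes the pattern keep only the text after the first `\n` and before the optional second one:

```python
import re

def clean(cell):
    # `.` stops at newlines, so the capture group is the text between
    # the first "\n" and the (optional) second "\n".
    return re.sub(r".*\n(.*)\n?.*", r"\1", cell)

print(repr(clean("\nSOVAT\n")))      # 'SOVAT'
print(repr(clean("PINCO\nGALLO ")))  # 'GALLO '
print(repr(clean("MEA\n")))          # ''
print(repr(clean("195")))            # '195' (no newline: pattern cannot match, cell unchanged)
```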
| 1 |
2016-09-19T10:04:22Z
|
[
"python",
"pandas",
"dataframe",
"removing-whitespace"
] |