a_id (int64) | a_body (string) | a_creation_date (string) | a_last_activity_date (string) | a_last_edit_date (string) | a_tags (float64) | q_id (int64) | q_body (string) | q_creation_date (string) | q_last_activity_date (string) | q_last_edit_date (string) | q_tags (string) | _arxiv_links (string) | _n_arxiv_links (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
39,034,399 | <p>a) A tile can emit an arbitrary number of instructions. For example, if you have an instruction like <code>%x <- %y + %z</code>, but the target machine has only two-address instructions, then a matching tile might emit the assembly sequence (the destination is the first operand)</p>
<pre><code>mov %x, %y
add %x, %z
</code></pre>
<p>b) What kind of register (or constant, or memory reference) is allowed as an operand to an instruction is determined by the instruction itself; hence the instruction selection phase has to work on a representation with symbolic register names (pseudo-registers). The register allocation phase may indeed emit additional instructions, e.g. spill/load code when a register of the required class is not available for allocation.</p>
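<p>As a rough, hypothetical illustration of point (a): a greedy "maximal munch" tiler can walk the expression tree and let each matched tile emit as many instructions as it needs, all over pseudo-registers. The tree encoding and tile set below are made up for the example.</p>
<pre><code># Hypothetical sketch of greedy tiling over an expression tree (Python).
from itertools import count

_fresh = (f"%t{i}" for i in count())   # pseudo-register name generator

def select(node):
    """Emit code for `node` and return the pseudo-register holding its value."""
    kind = node[0]
    if kind == "reg":                  # leaf: value already lives in a register
        return node[1]
    if kind == "add":                  # tile for %x <- %y + %z on a two-address machine
        y = select(node[1])
        z = select(node[2])
        x = next(_fresh)
        print(f"mov {x}, {y}")         # a single tile emits two instructions
        print(f"add {x}, {z}")
        return x
    raise ValueError(f"no tile matches {kind!r}")

# the tree for %x <- %y + %z
select(("add", ("reg", "%y"), ("reg", "%z")))
</code></pre>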
<p>Check this
<a href="http://arxiv.org/pdf/1306.4898.pdf" rel="nofollow">Survey on Instruction Selection: an Extensive and Modern Literature Review</a></p> | 2016-08-19 08:24:57.590000+00:00 | 2016-08-19 08:24:57.590000+00:00 | null | null | 39,030,481 | <p>I've been studying compiler design a lot recently. I've managed to get a strong grasp of the parsing stage, but am having a bit of trouble understanding how code generation works.</p>
<p>From what I've read, there seem to be 3 major steps in the code generation phase:</p>
<ul>
<li>Instruction Selection (Greedy Tiling)</li>
<li>Instruction Scheduling</li>
<li>Register Allocation</li>
</ul>
<p>Now, instruction scheduling is a little beyond what I'm trying to do at the moment, and I think with a bit more studying and prototyping, I can probably wrap my mind around the graph coloring algorithm for register allocation.</p>
<p>What stumps me is the first step, instruction selection. From what I've read about it, each instruction in a target machine language is represented by a tile; and the goal is to find the instructions that match the largest parts of the tree (hence the nickname, greedy tiling).</p>
<p>The thing I'm confused about is, how do you select instructions when they don't actually correspond 1:1 with the syntax tree?</p>
<p>Take, for example, accumulator-based architectures like the Z80 or the MIPS single-instruction architecture. Performing even 16-bit integer arithmetic on a Z80 may require the use of the accumulator or shadow registers.</p>
<p>There are also some instructions that can only be used on certain registers despite them being general purpose.</p>
<p>Would I be right in assuming the following?</p>
<p>a) A tile may consist of a sequence of instructions that match a syntax tree pattern, rather than just a 1:1 match.</p>
<p>b) The code generator generates code for a stack-based architecture (or an architecture with infinite temporary registers) first and expands and substitutes instructions as necessary somehow during the register allocation phase.</p> | 2016-08-19 03:03:23.747000+00:00 | 2016-08-19 08:24:57.590000+00:00 | null | compilation|compiler-construction|code-generation | ['http://arxiv.org/pdf/1306.4898.pdf'] | 1 |
50,163,142 | <p>The R package <b>optimParallel</b> could be helpful in your case. The package provides parallel versions of the gradient-based optimization methods of <code>optim()</code>. The main function of the package is <code>optimParallel()</code>, which has the same usage and output as <code>optim()</code>. Using <code>optimParallel()</code> can significantly reduce optimization times as illustrated in the following figure (<code>p</code> is the number of parameters).
<a href="https://i.stack.imgur.com/AEsr4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AEsr4.png" alt="enter image description here"></a>
See <a href="https://cran.r-project.org/package=optimParallel" rel="nofollow noreferrer">https://cran.r-project.org/package=optimParallel</a> and <a href="http://arxiv.org/abs/1804.11058" rel="nofollow noreferrer">http://arxiv.org/abs/1804.11058</a> for more information. </p> | 2018-05-03 20:11:21.943000+00:00 | 2018-05-03 20:11:21.943000+00:00 | null | null | 3,759,878 | <p><a href="https://stackoverflow.com/questions/3757321/moving-beyond-rs-optim-function">This question</a> came at the right time, as I'm struggling with optimization as well. I am aware of the different "normal" optimization routines in R, and I am aware of parallel packages like snow, snowfall, Rmpi and the likes. Yet, I didn't manage to get an optimization running in parallel on my computer. </p>
<p>Some toy code to illustrate :</p>
<pre><code>f <- function(x) sum((x-1:length(x))^2)
a <- 1:5
optim(a,f)
nlm(f,a)
</code></pre>
<p>What I want to do is parallelize the optim() function (or the nlm() function, which does basically the same). My real function f() is a lot more complicated, and one optimization round lasts about half an hour. If I want to run a simulation with 100 samples, that takes ages. I'd like to avoid writing my own Newton-like algorithm for parallel computing, so I hope somebody could give me some hints on how to use parallel computing for complex optimization problems in R.</p>
<hr>
<p><em>I reckon this problem is of a different nature than the one in the related question. My request is specifically directed towards parallel computing, not some faster alternative for optim.</em></p> | 2010-09-21 11:37:30.670000+00:00 | 2018-05-03 20:11:21.943000+00:00 | 2017-05-23 12:09:09.743000+00:00 | optimization|r|parallel-processing | ['https://i.stack.imgur.com/AEsr4.png', 'https://cran.r-project.org/package=optimParallel', 'http://arxiv.org/abs/1804.11058'] | 3 |
45,456,264 | <p>You need to offer us more details. The answer to "what CNN should I use?" and "do I have enough images for that?" depends on several factors:</p>
<p>1. How many objects are in the 550 images? Each object is a class; if you have 550 images of 2 different objects, that might be enough, but if you have 550 objects, that's only 1 image per object, which is definitely not enough.</p>
<p>2. What is the size of your images? Does it vary among them? Do the 550 images contain parts of the objects or the whole objects?</p>
<p>After knowing the answers to these questions you can select your CNN architecture and your data augmentation strategy.</p>
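<p>Whatever architecture is chosen, aggressive data augmentation usually helps with only ~550 images. A minimal sketch, assuming a TensorFlow/Keras setup; the directory name and parameter values are arbitrary examples:</p>
<pre><code># Hypothetical augmentation setup for a small image dataset (TensorFlow/Keras).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=20,        # random rotations up to 20 degrees
    width_shift_range=0.1,    # random horizontal shifts
    height_shift_range=0.1,   # random vertical shifts
    zoom_range=0.15,          # random zoom in/out
    horizontal_flip=True,     # mirror images left/right
    fill_mode="nearest",      # how to fill pixels exposed by the transforms
)

# yields endlessly augmented batches from the small training set
train_batches = augmenter.flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=32, class_mode="categorical"
)
</code></pre>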
<p>Structured receptive fields have shown better results on small datasets than normal CNNs. Here's a paper on it: <a href="https://arxiv.org/abs/1605.02971" rel="nofollow noreferrer">https://arxiv.org/abs/1605.02971</a></p> | 2017-08-02 09:03:46.657000+00:00 | 2017-08-02 09:03:46.657000+00:00 | null | null | 45,454,779 | <p>I would like to do some object detection where I have two restrictions.</p>
<p>The first one is that at the moment I don't have a large number of images for training (currently around 550 images).</p>
<p>Second, most likely I will not be able to see the whole object; only some part of the object that I am trying to detect will be visible.</p>
<p>My question is: is it good to try Deep Convolutional Networks
via Bayesian Optimization and Structured Prediction for this kind of situation?</p>
<p>I have this paper as a reference:
<a href="http://web.eecs.umich.edu/~honglak/cvpr15-cnn-detection.pdf" rel="nofollow noreferrer">Deep Convolutional Networks via Bayesian Optimization and Structured Prediction</a>.</p> | 2017-08-02 07:54:53.647000+00:00 | 2017-08-02 09:03:46.657000+00:00 | 2017-08-02 08:15:03.350000+00:00 | image-processing|deep-learning|conv-neural-network|object-detection | ['https://arxiv.org/abs/1605.02971'] | 1 |
32,172,588 | <p>Probably better suited for the mailing list. Also have a look at the <a href="http://arxiv.org/pdf/1406.4806v1.pdf" rel="nofollow">paper</a> for some of these topics.</p>
<blockquote>
<p>When/how do sessions expire?</p>
</blockquote>
<p>The default expiration for temporary sessions in the server implementation is 24h.</p>
<blockquote>
<p>Can session expire time be configured on the server?</p>
</blockquote>
<p>You could edit the <code>/usr/lib/opencpu/scripts/cleanocpu.sh</code> script, which gets triggered through <code>/etc/cron.d/opencpu</code>. But if you want persistence, it is usually better to store things in a database (RMySQL, mongolite, etc.) or in a package on the server, or in the client.</p>
<blockquote>
<p>Can session expire time be changed at runtime?</p>
</blockquote>
<p>No, expiration of resources is up to the server.</p>
<blockquote>
<p>Are sessions saved on-disk or in-memory?</p>
</blockquote>
<p>The current implementation saves on disk (with a bit of in-memory cache), but the API is agnostic.</p>
<blockquote>
<p>Do sessions work with the nginx opencpu proxy?</p>
</blockquote>
<p>Yes, they are no different than anything else on the server.</p> | 2015-08-23 23:33:24.590000+00:00 | 2015-08-23 23:33:24.590000+00:00 | null | null | 32,172,285 | <p>After reading <a href="https://www.opencpu.org/posts/opencpu-release-1-4-4/" rel="nofollow">this blog post on OpenCPU</a>, I have questions about Sessions:<br>
* when/how do sessions expire?<br>
* can session expire time be configured on the server?<br>
* can session expire time be changed at runtime?<br>
* are sessions saved on-disk or in-memory?<br>
* do sessions work with the nginx opencpu proxy?</p>
<p>Thanks in advance!</p> | 2015-08-23 22:45:46.203000+00:00 | 2015-08-23 23:33:24.590000+00:00 | 2015-08-23 23:16:01.393000+00:00 | r|opencpu | ['http://arxiv.org/pdf/1406.4806v1.pdf'] | 1 |
56,813,349 | <p>As of 2015, there is a linear time algorithm for computing the <em>number of distinct palindromic substrings</em> of a given string S. You can use a data structure known as an <a href="https://arxiv.org/pdf/1506.04862.pdf" rel="nofollow noreferrer">eertree (or palindromic tree)</a>, as described in the linked paper. The idea is fairly complicated, but the premise is to build a trie of palindromes, and augment it with longest proper palindromic suffixes in a similar manner to the failure function of the <a href="https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm" rel="nofollow noreferrer">Aho-Corasick Algorithm</a>. See the original paper for more details: <a href="https://arxiv.org/pdf/1506.04862.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1506.04862.pdf</a></p> | 2019-06-28 21:28:25.317000+00:00 | 2019-06-28 21:28:25.317000+00:00 | null | null | 20,473,485 | <p>Given a string, I know how to find the <em>number of palindromic substrings</em> in linear time using Manacher's algorithm. But now I need to find the number of <em>distinct/unique</em> palindromic substrings. Now, this might lead to an O(n + n^2) algorithm - one 'n' for finding all such substrings, and n^2 for comparing each of these substrings with the ones already found, to check if it is unique.</p>
<p>I am sure there is an algorithm with better complexity. I was thinking of maybe trying my luck with suffix trees? Is there an algorithm with better time complexity?</p> | 2013-12-09 14:48:27.040000+00:00 | 2019-06-28 21:28:25.317000+00:00 | 2013-12-09 15:12:29.173000+00:00 | string|algorithm|substring|time-complexity|palindrome | ['https://arxiv.org/pdf/1506.04862.pdf', 'https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm', 'https://arxiv.org/pdf/1506.04862.pdf'] | 3 |
71,172,050 | <p>The method that they apply is cited in the paper:</p>
<blockquote>
<p><strong>6.1.1. Qualitative results</strong>
[...] we used one of the visualization methods presented in (Yosinski et al., 2015) [...].</p>
</blockquote>
<p>From a quick glance, it looks like <a href="https://arxiv.org/abs/1506.06579" rel="nofollow noreferrer">Yosinski et al.</a> visualize the magnitude of pixels of a particular feature map. The authors of your referenced paper provide more detail on how exactly they produced the results:</p>
<blockquote>
<p>More precisely, we started by plotting
the position of the neuron with the highest activation for all the features
maps extracted from the last convolutional layer, and from there we
accumulated the first 30 dominant activations and matched them to the
original image.
<a href="https://www.sciencedirect.com/science/article/pii/S0168169919300560" rel="nofollow noreferrer">[Lee et. al.]</a></p>
</blockquote> | 2022-02-18 10:37:38.800000+00:00 | 2022-02-18 10:37:38.800000+00:00 | null | null | 71,168,512 | <p>Hi, I want to visualize what my model has learned and want to get an attention mask from my model. May I ask how I can do this?</p>
<p>Below is my desired output.
<a href="https://i.stack.imgur.com/Nx7dk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Nx7dk.png" alt="enter image description here" /></a></p>
<p>From paper [New perspectives on plant disease characterization based on deep learning]</p> | 2022-02-18 04:50:28.407000+00:00 | 2022-02-18 10:37:38.800000+00:00 | null | python|tensorflow|deep-learning|model | ['https://arxiv.org/abs/1506.06579', 'https://www.sciencedirect.com/science/article/pii/S0168169919300560'] | 2 |
70,628,470 | <p>You can overwrite the font settings of the .sty file by loading <code>size11.clo</code> or <code>size12.clo</code> afterwards.</p>
<p>These files will create nice matching sets of font sizes, so that not only the size of the normal text changes but other sizes like <code>\large</code> etc. are adjusted as well, to get a harmonious result.</p>
<pre><code>\documentclass{article}
\usepackage{float}
\usepackage{arxiv}
\usepackage{graphicx}
\usepackage[utf8]{inputenc} % allow utf-8 input
\usepackage[T1]{fontenc} % use 8-bit T1 fonts
\usepackage{hyperref} % hyperlinks
\usepackage{url} % simple URL typesetting
\usepackage{booktabs} % professional-quality tables
\usepackage{amsfonts} % blackboard math symbols
\usepackage{nicefrac} % compact symbols for 1/2, etc.
\usepackage{microtype} % microtypography
\usepackage{lipsum}
\title{test}
\makeatletter
%\input{size11.clo}
\input{size12.clo}
\makeatother
\begin{document}
\lipsum
\end{document}
</code></pre> | 2022-01-08 00:00:22.817000+00:00 | 2022-01-08 00:00:22.817000+00:00 | null | null | 70,627,983 | <p>I found this latex template that has no font-size attribute, but the writing is just too small to be read.
I hope you can help me figure out how to change the font size; the template has a file called arxiv.sty that contains:</p>
<pre><code>\NeedsTeXFormat{LaTeX2e}
\ProcessOptions\relax
% fonts
\renewcommand{\rmdefault}{ptm}
\renewcommand{\sfdefault}{phv}
% set page geometry
\usepackage[verbose=true,letterpaper]{geometry}
\AtBeginDocument{
\newgeometry{
textheight=9in,
textwidth=6.5in,
top=1in,
headheight=14pt,
headsep=25pt,
footskip=30pt
}
}
\widowpenalty=10000
\clubpenalty=10000
\flushbottom
\sloppy
\usepackage{fancyhdr}
\fancyhf{}
\pagestyle{fancy}
\renewcommand{\headrulewidth}{0pt}
\fancyheadoffset{0pt}
\rhead{\scshape \today}
\cfoot{\thepage}
%Handling Keywords
\def\keywordname{{\bfseries \emph Keywords}}%
\def\keywords#1{\par\addvspace\medskipamount{\rightskip=0pt plus1cm
\def\and{\ifhmode\unskip\nobreak\fi\ $\cdot$
}\noindent\keywordname\enspace\ignorespaces#1\par}}
% font sizes with reduced leading
\renewcommand{\normalsize}{%
\@setfontsize\normalsize\@xpt\@xipt
\abovedisplayskip 7\p@ \@plus 2\p@ \@minus 5\p@
\abovedisplayshortskip \z@ \@plus 3\p@
\belowdisplayskip \abovedisplayskip
\belowdisplayshortskip 4\p@ \@plus 3\p@ \@minus 3\p@
}
\normalsize
\renewcommand{\small}{%
\@setfontsize\small\@ixpt\@xpt
\abovedisplayskip 6\p@ \@plus 1.5\p@ \@minus 4\p@
\abovedisplayshortskip \z@ \@plus 2\p@
\belowdisplayskip \abovedisplayskip
\belowdisplayshortskip 3\p@ \@plus 2\p@ \@minus 2\p@
}
\renewcommand{\footnotesize}{\@setfontsize\footnotesize\@ixpt\@xpt}
\renewcommand{\scriptsize}{\@setfontsize\scriptsize\@viipt\@viiipt}
\renewcommand{\tiny}{\@setfontsize\tiny\@vipt\@viipt}
\renewcommand{\large}{\@setfontsize\large\@xiipt{14}}
\renewcommand{\Large}{\@setfontsize\Large\@xivpt{16}}
\renewcommand{\LARGE}{\@setfontsize\LARGE\@xviipt{20}}
\renewcommand{\huge}{\@setfontsize\huge\@xxpt{23}}
\renewcommand{\Huge}{\@setfontsize\Huge\@xxvpt{28}}
% sections with less space
\providecommand{\section}{}
\renewcommand{\section}{%
\@startsection{section}{1}{\z@}%
{-2.0ex \@plus -0.5ex \@minus -0.2ex}%
{ 1.5ex \@plus 0.3ex \@minus 0.2ex}%
{\large\bf\raggedright}%
}
\providecommand{\subsection}{}
\renewcommand{\subsection}{%
\@startsection{subsection}{2}{\z@}%
{-1.8ex \@plus -0.5ex \@minus -0.2ex}%
{ 0.8ex \@plus 0.2ex}%
{\normalsize\bf\raggedright}%
}
\providecommand{\subsubsection}{}
\renewcommand{\subsubsection}{%
\@startsection{subsubsection}{3}{\z@}%
{-1.5ex \@plus -0.5ex \@minus -0.2ex}%
{ 0.5ex \@plus 0.2ex}%
{\normalsize\bf\raggedright}%
}
\providecommand{\paragraph}{}
\renewcommand{\paragraph}{%
\@startsection{paragraph}{4}{\z@}%
{1.5ex \@plus 0.5ex \@minus 0.2ex}%
{-1em}%
{\normalsize\bf}%
}
\providecommand{\subparagraph}{}
\renewcommand{\subparagraph}{%
\@startsection{subparagraph}{5}{\z@}%
{1.5ex \@plus 0.5ex \@minus 0.2ex}%
{-1em}%
{\normalsize\bf}%
}
\providecommand{\subsubsubsection}{}
\renewcommand{\subsubsubsection}{%
\vskip5pt{\noindent\normalsize\rm\raggedright}%
}
% float placement
\renewcommand{\topfraction }{0.85}
\renewcommand{\bottomfraction }{0.4}
\renewcommand{\textfraction }{0.1}
\renewcommand{\floatpagefraction}{0.7}
\newlength{\@abovecaptionskip}\setlength{\@abovecaptionskip}{7\p@}
\newlength{\@belowcaptionskip}\setlength{\@belowcaptionskip}{\z@}
\setlength{\abovecaptionskip}{\@abovecaptionskip}
\setlength{\belowcaptionskip}{\@belowcaptionskip}
% swap above/belowcaptionskip lengths for tables
\renewenvironment{table}
{\setlength{\abovecaptionskip}{\@belowcaptionskip}%
\setlength{\belowcaptionskip}{\@abovecaptionskip}%
\@float{table}}
{\end@float}
% footnote formatting
\setlength{\footnotesep }{6.65\p@}
\setlength{\skip\footins}{9\p@ \@plus 4\p@ \@minus 2\p@}
\renewcommand{\footnoterule}{\kern-3\p@ \hrule width 12pc \kern 2.6\p@}
\setcounter{footnote}{0}
% paragraph formatting
\setlength{\parindent}{\z@}
\setlength{\parskip }{5.5\p@}
% list formatting
\setlength{\topsep }{4\p@ \@plus 1\p@ \@minus 2\p@}
\setlength{\partopsep }{1\p@ \@plus 0.5\p@ \@minus 0.5\p@}
\setlength{\itemsep }{2\p@ \@plus 1\p@ \@minus 0.5\p@}
\setlength{\parsep }{2\p@ \@plus 1\p@ \@minus 0.5\p@}
\setlength{\leftmargin }{3pc}
\setlength{\leftmargini }{\leftmargin}
\setlength{\leftmarginii }{2em}
\setlength{\leftmarginiii}{1.5em}
\setlength{\leftmarginiv }{1.0em}
\setlength{\leftmarginv }{0.5em}
\def\@listi {\leftmargin\leftmargini}
\def\@listii {\leftmargin\leftmarginii
\labelwidth\leftmarginii
\advance\labelwidth-\labelsep
\topsep 2\p@ \@plus 1\p@ \@minus 0.5\p@
\parsep 1\p@ \@plus 0.5\p@ \@minus 0.5\p@
\itemsep \parsep}
\def\@listiii{\leftmargin\leftmarginiii
\labelwidth\leftmarginiii
\advance\labelwidth-\labelsep
\topsep 1\p@ \@plus 0.5\p@ \@minus 0.5\p@
\parsep \z@
\partopsep 0.5\p@ \@plus 0\p@ \@minus 0.5\p@
\itemsep \topsep}
\def\@listiv {\leftmargin\leftmarginiv
\labelwidth\leftmarginiv
\advance\labelwidth-\labelsep}
\def\@listv {\leftmargin\leftmarginv
\labelwidth\leftmarginv
\advance\labelwidth-\labelsep}
\def\@listvi {\leftmargin\leftmarginvi
\labelwidth\leftmarginvi
\advance\labelwidth-\labelsep}
% create title
\providecommand{\maketitle}{}
\renewcommand{\maketitle}{%
\par
\begingroup
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
% for perfect author name centering
\renewcommand{\@makefnmark}{\hbox to \z@{$^{\@thefnmark}$\hss}}
% The footnote-mark was overlapping the footnote-text,
% added the following to fix this problem (MK)
\long\def\@makefntext##1{%
\parindent 1em\noindent
\hbox to 1.8em{\hss $\m@th ^{\@thefnmark}$}##1
}
\thispagestyle{empty}
\@maketitle
\@thanks
%\@notice
\endgroup
\let\maketitle\relax
\let\thanks\relax
}
% rules for title box at top of first page
\newcommand{\@toptitlebar}{
\hrule height 2\p@
\vskip 0.25in
\vskip -\parskip%
}
\newcommand{\@bottomtitlebar}{
\vskip 0.29in
\vskip -\parskip
\hrule height 2\p@
\vskip 0.09in%
}
% create title (includes both anonymized and non-anonymized versions)
\providecommand{\@maketitle}{}
\renewcommand{\@maketitle}{%
\vbox{%
\hsize\textwidth
\linewidth\hsize
\vskip 0.1in
\@toptitlebar
\centering
{\LARGE\sc \@title\par}
\@bottomtitlebar
\textsc{}\\
\vskip 0.1in
\def\And{%
\end{tabular}\hfil\linebreak[0]\hfil%
\begin{tabular}[t]{c}\bf\rule{\z@}{24\p@}\ignorespaces%
}
\def\AND{%
\end{tabular}\hfil\linebreak[4]\hfil%
\begin{tabular}[t]{c}\bf\rule{\z@}{24\p@}\ignorespaces%
}
\begin{tabular}[t]{c}\bf\rule{\z@}{24\p@}\@author\end{tabular}%
\vskip 0.4in \@minus 0.1in \center{\today} \vskip 0.2in
}
}
% add conference notice to bottom of first page
\newcommand{\ftype@noticebox}{8}
\newcommand{\@notice}{%
% give a bit of extra room back to authors on first page
\enlargethispage{2\baselineskip}%
\@float{noticebox}[b]%
\footnotesize\@noticestring%
\end@float%
}
% abstract styling
\renewenvironment{abstract}
{
\centerline
{\large \bfseries \scshape Abstract}
\begin{quote}
}
{
\end{quote}
}
\endinput
</code></pre>
<p>and the template.tex file that contains the code below, with no attributes specifying the font size:</p>
<pre><code>\documentclass{article}
\usepackage{float}
\usepackage{arxiv}
\usepackage{graphicx}
\usepackage[utf8]{inputenc} % allow utf-8 input
\usepackage[T1]{fontenc} % use 8-bit T1 fonts
\usepackage{hyperref} % hyperlinks
\usepackage{url} % simple URL typesetting
\usepackage{booktabs} % professional-quality tables
\usepackage{amsfonts} % blackboard math symbols
\usepackage{nicefrac} % compact symbols for 1/2, etc.
\usepackage{microtype} % microtypography
\usepackage{lipsum}
\title{
</code></pre> | 2022-01-07 22:44:18.683000+00:00 | 2022-01-08 00:00:22.817000+00:00 | 2022-01-07 23:18:29.293000+00:00 | latex|overleaf | [] | 0 |
62,961,753 | <p>The above answer highlights one of the recurrent dropout methods, but that one is NOT the one used by TensorFlow and Keras. <a href="https://www.tensorflow.org/addons/api_docs/python/tfa/rnn/LayerNormLSTMCell" rel="noreferrer">Tensorflow Doc</a>.</p>
<p>Keras/TF refers to a recurrent dropout method proposed by <a href="https://arxiv.org/abs/1603.05118" rel="noreferrer">Semeniuta et al.</a> Also, check the image below comparing different recurrent dropout methods. The <a href="https://arxiv.org/pdf/1512.05287.pdf" rel="noreferrer">Gal and Ghahramani</a> method, which is mentioned in the answer above, is in the second position, and the Semeniuta method is the rightmost.</p>
<p><a href="https://i.stack.imgur.com/CeUjV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/CeUjV.png" alt="enter image description here" /></a></p> | 2020-07-17 21:06:00.263000+00:00 | 2020-07-17 21:06:00.263000+00:00 | null | null | 44,924,690 | <p>From the Keras documentation:</p>
<p>dropout: Float between 0 and 1. Fraction of the units to drop for the
linear transformation of the inputs.</p>
<p>recurrent_dropout: Float between 0 and 1. Fraction of the units to
drop for the linear transformation of the recurrent state.</p>
<p>Can anyone point to where on the image below each dropout happens?</p>
<p><a href="https://i.stack.imgur.com/DS97N.png" rel="noreferrer"><img src="https://i.stack.imgur.com/DS97N.png" alt="enter image description here"></a></p> | 2017-07-05 11:13:26.127000+00:00 | 2022-04-15 19:39:01.443000+00:00 | 2018-04-03 13:18:34.303000+00:00 | keras|lstm|dropout | ['https://www.tensorflow.org/addons/api_docs/python/tfa/rnn/LayerNormLSTMCell', 'https://arxiv.org/abs/1603.05118', 'https://arxiv.org/pdf/1512.05287.pdf', 'https://i.stack.imgur.com/CeUjV.png'] | 4 |
44,929,759 | <p>I suggest taking a look at (the first part of) <a href="https://arxiv.org/pdf/1512.05287.pdf" rel="noreferrer">this paper</a>. Regular dropout is applied on the inputs and/or the outputs, meaning the vertical arrows from <code>x_t</code> and to <code>h_t</code>. In your case, if you add it as an argument to your layer, it will mask the inputs; you can add a Dropout layer after your recurrent layer to mask the outputs as well. Recurrent dropout masks (or "drops") the connections between the recurrent units; that would be the horizontal arrows in your picture.</p>
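<p>In Keras terms the same picture looks roughly like this (a minimal sketch; layer sizes, rates and input shape are arbitrary examples): the two kinds of dropout are arguments of the recurrent layer itself, and an extra Dropout layer can mask its output.</p>
<pre><code># Hypothetical Keras sketch: input/recurrent dropout as LSTM arguments,
# plus a regular Dropout layer masking the LSTM's output.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.LSTM(
        64,
        dropout=0.2,            # masks the inputs (the vertical arrows from x_t)
        recurrent_dropout=0.2,  # masks the recurrent state (the horizontal arrows)
        input_shape=(10, 8),    # (timesteps, features), made-up shape
    ),
    layers.Dropout(0.2),        # masks the outputs fed to the next layer
    layers.Dense(1),
])
model.summary()
</code></pre>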
<p>This picture is taken from the paper above. On the left, regular dropout on inputs and outputs. On the right, regular dropout PLUS recurrent dropout:</p>
<p><a href="https://i.stack.imgur.com/fWDtw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/fWDtw.png" alt="This picture is taken from the paper above. On the left, regular dropout on inputs and outputs. On the right, regular dropout PLUS recurrent dropout."></a></p>
<p>(Ignore the colour of the arrows in this case; in the paper they are making a further point of keeping the same dropout masks at each timestep)</p> | 2017-07-05 14:59:00.077000+00:00 | 2019-01-13 11:06:06.290000+00:00 | 2019-01-13 11:06:06.290000+00:00 | null | 44,924,690 | <p>From the Keras documentation:</p>
<p>dropout: Float between 0 and 1. Fraction of the units to drop for the
linear transformation of the inputs.</p>
<p>recurrent_dropout: Float between 0 and 1. Fraction of the units to
drop for the linear transformation of the recurrent state.</p>
<p>Can anyone point to where on the image below each dropout happens?</p>
<p><a href="https://i.stack.imgur.com/DS97N.png" rel="noreferrer"><img src="https://i.stack.imgur.com/DS97N.png" alt="enter image description here"></a></p> | 2017-07-05 11:13:26.127000+00:00 | 2022-04-15 19:39:01.443000+00:00 | 2018-04-03 13:18:34.303000+00:00 | keras|lstm|dropout | ['https://arxiv.org/pdf/1512.05287.pdf', 'https://i.stack.imgur.com/fWDtw.png'] | 2 |
55,564,874 | <p>Finally, some researchers have published a paper about applying SPP in YOLO: <a href="https://arxiv.org/abs/1903.08589" rel="noreferrer">https://arxiv.org/abs/1903.08589</a>.</p>
<p>As for the differences between yolov3-tiny, yolov3, and yolov3-spp:</p>
<ul>
<li>yolov3-tiny.cfg uses downsampling (stride=2) in Max-Pooling layers</li>
<li>yolov3.cfg uses downsampling (stride=2) in Convolutional layers</li>
<li>yolov3-spp.cfg uses downsampling (stride=2) in Convolutional layers + gets the best features in Max-Pooling layers</li>
</ul>
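<p>In the cfg from the question, the <code>[route] layers=-2</code> / <code>-4</code> entries re-select the tensor from before the pools so that each stride-1 max-pool (5, 9, 13) runs on the same input, and <code>layers=-1,-3,-5,-6</code> concatenates the three pooled maps with that input. A rough PyTorch sketch of such an SPP block, assuming 512 input channels:</p>
<pre><code># Hypothetical PyTorch sketch of the SPP block expressed by the cfg above.
import torch
import torch.nn as nn

class SPPBlock(nn.Module):
    # Stride-1 max-pools with padding k // 2 keep the spatial size, so the
    # pooled maps can be concatenated with the input along the channel axis.
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):
        # the cfg's route layers=-1,-3,-5,-6: three pooled maps + the input
        return torch.cat([pool(x) for pool in self.pools] + [x], dim=1)

x = torch.randn(1, 512, 13, 13)
print(SPPBlock()(x).shape)   # -> torch.Size([1, 2048, 13, 13])
</code></pre>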
<p>But they got only <strong>mAP = 79.6%</strong> on the Pascal VOC 2007 test set using the Yolov3-SPP model on the original framework.</p>
<p>But we can achieve higher accuracy, <strong>mAP = 82.1%</strong>, even with the yolov3.cfg model by using AlexeyAB's repository <a href="https://github.com/AlexeyAB/darknet/issues/2557#issuecomment-474187706" rel="noreferrer">https://github.com/AlexeyAB/darknet/issues/2557#issuecomment-474187706</a></p>
<p>And for sure we can achieve even higher mAP with yolov3-spp.cfg using Alexey's repo.
<a href="https://i.stack.imgur.com/js9wN.png" rel="noreferrer"><img src="https://i.stack.imgur.com/js9wN.png" alt="enter image description here"></a></p>
<p>Original github question : <a href="https://github.com/AlexeyAB/darknet/issues/2859" rel="noreferrer">https://github.com/AlexeyAB/darknet/issues/2859</a></p> | 2019-04-08 00:27:51.757000+00:00 | 2019-04-08 00:27:51.757000+00:00 | null | null | 54,998,225 | <p>I couldn't find any good explanation about YOLOv3 SPP which has better <code>mAP</code> than YOLOv3. The author himself states YOLOv3 SPP as this on his repo: </p>
<blockquote>
<p>YOLOv3 with spatial pyramid pooling, or something</p>
</blockquote>
<p>But still I don't really understand it. In <code>yolov3-spp.cfg</code> I notice there are some additions</p>
<pre><code>575 ### SPP ###
576 [maxpool]
577 stride=1
578 size=5
579
580 [route]
581 layers=-2
582
583 [maxpool]
584 stride=1
585 size=9
586
587 [route]
588 layers=-4
589
590 [maxpool]
591 stride=1
592 size=13
593
594 [route]
595 layers=-1,-3,-5,-6
596
597 ### End SPP ###
598
599 [convolutional]
600 batch_normalize=1
601 filters=512
602 size=1
603 stride=1
604 pad=1
605 activation=leaky
</code></pre>
<p>Anybody can give further explanation about how YOLOv3 SPP works? Why layers -2, -4 and -1, -3, -5, -6 are chosen in <code>[route] layers</code>? Thanks.</p> | 2019-03-05 08:18:24.937000+00:00 | 2019-04-08 00:27:51.757000+00:00 | 2019-03-06 05:04:34.600000+00:00 | conv-neural-network|object-detection|yolo|darknet | ['https://arxiv.org/abs/1903.08589', 'https://github.com/AlexeyAB/darknet/issues/2557#issuecomment-474187706', 'https://i.stack.imgur.com/js9wN.png', 'https://github.com/AlexeyAB/darknet/issues/2859'] | 4 |
56,272,779 | <p>I have an enumeration scheme, but it requires an array of integers. If you can compress an array of integers to a single Q value (and back) then this might work.</p>
<p>First comes N, number of pieces on the board.</p>
<p>Then comes the array of ceil(N/2) items, the X pieces. Every number is the count of empty valid spaces from the previous X piece (or the board start). IMPORTANT: a space is not valid if it would result in the game ending. This is where the 5-in-a-row end rule helps us reduce the domain.</p>
<p>Then comes the array of floor(N/2) items, the O pieces. Same logic applies as for the X array.</p>
<p>So for this board and a 3-in-a-row rule:</p>
<pre><code>XX.
X.O
..O
</code></pre>
<p>we have the following array:</p>
<p>N: 5<br>
X: 0 (from board start), 0 (from previous X), 0 (top right corner is invalid for X because it would end the game)<br>
O: 2 (from board start, minus all preceding X), 2 (from previous O)</p>
<p>and that's the array [5, 0, 0, 0, 2, 2]. Given this array we can recreate the board above. The occurrence of small numbers is more probable than that of big numbers. In a regular game with a 19x19 board the pieces will group together for the most part, so there will be a lot of zeros, ones and twos, delimited by an occasional "big" number for the next line.</p>
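<p>A simplified Python sketch of this scan-order encoding; it counts all empty cells between consecutive pieces of the same colour and omits the extra rule that skips spaces where a move would immediately end the game:</p>
<pre><code>def encode(board):
    """Simplified scan-order encoding of a board given as a list of strings."""
    cells = [c for row in board for c in row]        # flatten, left-to-right, top-to-bottom
    n_pieces = sum(c != '.' for c in cells)
    counts = {'X': [], 'O': []}
    empties = {'X': 0, 'O': 0}                       # empty cells seen since the previous piece
    for c in cells:
        if c == '.':
            empties['X'] += 1
            empties['O'] += 1
        else:                                        # a piece: record and reset its own counter
            counts[c].append(empties[c])
            empties[c] = 0
    return [n_pieces] + counts['X'] + counts['O']

# The example board above; with the extra validity rule the third X entry
# would be 0 instead of 1, because the top-right cell is not a legal X move.
print(encode(["XX.",
              "X.O",
              "..O"]))        # -> [5, 0, 0, 1, 2, 2]
</code></pre>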
<p>You now have to compress this array using the fact that small numbers occur more often than big ones. A general-purpose compression algorithm may help, but a <a href="https://arxiv.org/abs/1209.2137" rel="nofollow noreferrer">specialized</a> one may help more.</p>
<p>I don't know anything about q-learning, but all of this requires that the q-value can have variable size. If you have to have a constant size for the q-value, then that size would have to account for the worst possible board, and that size may be so big that it defeats the purpose of having this enumeration/compression in the first place.</p>
<p>We use a left-to-right, top-to-bottom method to enumerate pieces, but we could also use some spiraling method that may yield an even better small-to-big number ratio. We just have to pick the best starting point for the spiral center. But this may complicate the algorithm and waste more CPU time in the end.</p>
<p>Also, we don't really need the first number in the array, N. Length of the array gives this information.</p> | 2019-05-23 10:02:42.247000+00:00 | 2019-05-23 10:02:42.247000+00:00 | null | null | 56,234,489 | <p>I implemented a 3x3 OX game by q-learning ( it works perfectly in AI v.s AI and AI v.s Human), but I can't go one step further to 4x4 OX game since it will eat up all my PC memory and crash.</p>
<p>Here is my current problem:
<a href="https://stackoverflow.com/questions/56231392/access-violation-in-huge-array">Access violation in huge array?</a></p>
<p>In my understanding, a 3x3 OX game has a total of 3 (space, white, black) ^ 9 = 19683 possible states (the same pattern at a different angle still counts).</p>
<p>For a 4x4 OX game, the total number of states will be 3 ^ 16 = 43,046,721</p>
<p>For a regular go game on a 15x15 board, the total number of states will be 3 ^ 225 ~ 2.5 x 10^107</p>
<p>Q1. I want to know whether my calculation is correct or not (for a 4x4 OX game, do I need a 3^16 array?).</p>
<p>Q2. Since I need to calculate each Q value (for each state and each action), I need such a large array; is this expected? Is there any way to avoid it?</p> | 2019-05-21 08:46:55.043000+00:00 | 2019-06-19 09:06:01.393000+00:00 | 2019-05-21 14:01:42.597000+00:00 | c++|machine-learning|reinforcement-learning | ['https://arxiv.org/abs/1209.2137'] | 1
56,663,947 | <p>If you want to skip reinventing the wheel, here is what has been done to solve this problem:</p>
<blockquote>
<p>The model is a convolutional neural network, trained with a variant of
Q-learning, whose input is raw pixels and whose output is a value
function estimating future rewards. We apply our method to seven Atari
2600 games from the Arcade Learning Environment, with no adjustment of
the architecture or learning algorithm.</p>
</blockquote>
<p><a href="https://arxiv.org/pdf/1312.5602v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1312.5602v1.pdf</a></p>
<blockquote>
<p>We could represent our Q-function with a neural network, that takes
the state (four game screens) and action as input and outputs the
corresponding Q-value. Alternatively we could take only game screens
as input and output the Q-value for each possible action. This
approach has the advantage, that if we want to perform a Q-value
update or pick the action with highest Q-value, we only have to do one
forward pass through the network and have all Q-values for all actions
immediately available.</p>
</blockquote>
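<p>A minimal PyTorch sketch of that idea for the 4x4 case, replacing the 3^16-entry table with a small function approximator; the layer sizes are arbitrary and no training loop is shown:</p>
<pre><code># Hypothetical sketch: a network maps a one-hot encoded 4x4 board to one
# Q-value per cell, instead of storing a 3^16 table.
import torch
import torch.nn as nn

n_cells = 16                      # 4x4 board
n_states_per_cell = 3             # empty / X / O

q_net = nn.Sequential(
    nn.Linear(n_cells * n_states_per_cell, 128),
    nn.ReLU(),
    nn.Linear(128, 128),
    nn.ReLU(),
    nn.Linear(128, n_cells),      # Q-value for playing in each cell
)

board = torch.zeros(1, n_cells * n_states_per_cell)   # an empty board, one-hot
q_values = q_net(board)                                # shape: (1, 16)
print(q_values.shape)
</code></pre>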
<p><a href="https://neuro.cs.ut.ee/demystifying-deep-reinforcement-learning/" rel="nofollow noreferrer">https://neuro.cs.ut.ee/demystifying-deep-reinforcement-learning/</a></p> | 2019-06-19 09:06:01.393000+00:00 | 2019-06-19 09:06:01.393000+00:00 | null | null | 56,234,489 | <p>I implemented a 3x3 OX game by q-learning ( it works perfectly in AI v.s AI and AI v.s Human), but I can't go one step further to 4x4 OX game since it will eat up all my PC memory and crash.</p>
<p>Here is my current problem:
<a href="https://stackoverflow.com/questions/56231392/access-violation-in-huge-array">Access violation in huge array?</a></p>
<p>In my understanding, a 3x3 OX game has a total of 3 (space, white, black) ^ 9 = 19683 possible states (the same pattern at a different angle still counts).</p>
<p>For a 4x4 OX game, the total number of states will be 3 ^ 16 = 43,046,721</p>
<p>For a regular go game on a 15x15 board, the total number of states will be 3 ^ 225 ~ 2.5 x 10^107</p>
<p>Q1. I want to know whether my calculation is correct or not (for a 4x4 OX game, do I need a 3^16 array?).</p>
<p>Q2. Since I need to calculate each Q value (for each state and each action), I need such a large array; is this expected? Is there any way to avoid it?</p> | 2019-05-21 08:46:55.043000+00:00 | 2019-06-19 09:06:01.393000+00:00 | 2019-05-21 14:01:42.597000+00:00 | c++|machine-learning|reinforcement-learning | ['https://arxiv.org/pdf/1312.5602v1.pdf', 'https://neuro.cs.ut.ee/demystifying-deep-reinforcement-learning/'] | 2
73,342,710 | <p>As gdrouard suggested, categorizing might not be your best option. Using a suitable time-to-event regression model (such as the Cox proportional hazards model) is usually preferable when analyzing continuous variables. The reason for this is that you are basically throwing away information if you artificially categorize it. This may also lead to bias in some scenarios.</p>
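<p>A minimal sketch with the <code>lifelines</code> package mentioned in the question, keeping the T-cell count continuous instead of dichotomizing it; the column names and values are made-up placeholders:</p>
<pre><code># Hypothetical lifelines sketch: Cox proportional hazards with a continuous covariate.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "OS_months": [12.0, 30.5, 8.2, 24.1, 40.0, 18.3],
    "event":     [1, 0, 1, 1, 0, 1],             # 1 = death observed, 0 = censored
    "T_cells":   [150, 820, 90, 400, 1000, 700]  # continuous covariate, no cut-off needed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="OS_months", event_col="event")
cph.print_summary()   # hazard ratio per unit increase in T_cells
</code></pre>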
<p>If you want to visualize the effect of the continuous covariate on the time-to-event outcome afterwards, you may be interested in the <code>contsurvplot</code> R-package (<a href="https://github.com/RobinDenz1/contsurvplot" rel="nofollow noreferrer">https://github.com/RobinDenz1/contsurvplot</a>) I created. You can simply plug your regression model into one of the included plot functions and get a nice plot of the effect. More information can be found in the associated preprint: <a href="https://arxiv.org/pdf/2208.04644.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2208.04644.pdf</a></p> | 2022-08-13 08:32:05.053000+00:00 | 2022-08-13 08:32:05.053000+00:00 | null | null | 66,813,335 | <p>I have a question concerning survival analysis. However, I have the following data (just an excerpt):</p>
<p><a href="https://i.stack.imgur.com/tscd9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tscd9.png" alt="enter image description here" /></a></p>
<p>Now I am trying to do survival analysis with the Python lifelines package. For example, I want to find out if T-cells influence the Overall Survival (OS). But as far as I know, I need to categorize the number of T cells into different categories, e.g. high T-cell and low T-cell... Is that right? But how do I find the best-fitting cut-off?
My plan is to show that tumors with high T-cells have better survival than those with low T-cells. But how could I find the best cut-off value to discriminate between high and low T-cells from the data I have here?</p>
<p>Does anyone have an idea? A friend of mine said something about "ROC" analysis but I am really confused now... I would be glad about any help!</p> | 2021-03-26 08:26:33.260000+00:00 | 2022-08-13 08:32:05.053000+00:00 | null | survival-analysis|lifelines | ['https://github.com/RobinDenz1/contsurvplot', 'https://arxiv.org/pdf/2208.04644.pdf'] | 2
46,734,354 | <p>According to the original publication on <a href="https://arxiv.org/pdf/1606.07792.pdf" rel="nofollow noreferrer">wide-and-deep learning</a>, they use a logistic loss function for joint training. More specifically, the model implementation relies on a cross-entropy loss applied to the softmax output.</p>
<p>They use a weighted sum to combine the log-odds from the two models before applying logistic regression.</p>
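<p>A tiny numpy sketch of that combination for the binary case in the question: the wide and deep parts each produce log-odds, which are summed (here with equal weights) and fed into a sigmoid cross-entropy, i.e. the logistic loss:</p>
<pre><code># Hypothetical numpy sketch of a logistic loss on combined wide + deep log-odds.
import numpy as np

def combined_logistic_loss(wide_logits, deep_logits, labels):
    logits = wide_logits + deep_logits        # combine the two models' log-odds
    p = 1.0 / (1.0 + np.exp(-logits))         # sigmoid
    eps = 1e-12                               # numerical safety for log(0)
    return -np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))

print(combined_logistic_loss(np.array([0.3]), np.array([-1.2]), np.array([1.0])))
</code></pre>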
<p>The loss declaration can be found in the source code <a href="https://github.com/tensorflow/tensorflow/blob/r1.3/tensorflow/python/estimator/canned/dnn_linear_combined.py" rel="nofollow noreferrer">here</a>:</p>
<pre><code>...
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss( # pylint: disable=protected-access
n_classes,
weight_column=weight_column,
label_vocabulary=label_vocabulary)
</code></pre> | 2017-10-13 16:34:56.977000+00:00 | 2017-10-13 16:34:56.977000+00:00 | null | null | 46,729,794 | <p>I am trying Tensorflow's <code>DNNLinearCombinedClassifier</code> (version <code>1.3</code>) on the <a href="https://www.kaggle.com/dalpozz/creditcardfraud" rel="nofollow noreferrer">Kaggle's Credit Card Fraud</a> (classification) dataset:</p>
<pre><code>m = tf.estimator.DNNLinearCombinedClassifier(model_dir='/.../model', dnn_feature_columns=deep_columns,
dnn_hidden_units=[20,5])
def input_fn(df, num_epochs):
return tf.estimator.inputs.pandas_input_fn(
x = df,
y = df.Class,
batch_size = 1000,
num_epochs = num_epochs,
shuffle=False)
</code></pre>
<p>with model's output (here <em>df.Class</em>) as a binary feature.
Tensorflow's logs on training</p>
<pre><code>m.train(input_fn(data, 3))
</code></pre>
<p>are:</p>
<blockquote>
<p>INFO:tensorflow:<strong>loss = 532.633</strong>, step = 2566
INFO:tensorflow:global_step/sec: 37.9815 INFO:tensorflow:loss =
560.574, step = 2666 (2.635 sec) INFO:tensorflow:global_step/sec: 38.3186</p>
</blockquote>
<p>What is the loss function being used here?</p> | 2017-10-13 12:18:53.890000+00:00 | 2017-10-14 15:35:19.870000+00:00 | 2020-06-20 09:12:55.060000+00:00 | python|machine-learning|tensorflow|deep-learning | ['https://arxiv.org/pdf/1606.07792.pdf', 'https://github.com/tensorflow/tensorflow/blob/r1.3/tensorflow/python/estimator/canned/dnn_linear_combined.py'] | 2 |
14,253,275 | <p>Try <a href="http://gephi.org/features/" rel="nofollow">gephi</a>. I believe that what you plan to do is already implemented there. However, it is open source (GPL 3) and you can get some ideas from the code. The Java Graph API description is <a href="http://gephi.org/docs/api/org/gephi/graph/api/package-summary.html" rel="nofollow">here</a>.</p>
<p>Also, you might want to review <a href="http://arxiv.org/pdf/0906.0612v2.pdf" rel="nofollow">this</a> article.</p>
[note: by "network" I mean a graph]</p>
<hr>
<p>Responding to a comment asking for details, here is a simple example:</p>
<p>D-E-F<br>
|<br>
G<br>
|<br>
A-B-C </p>
<p>There are many algorithms that are able to detect (D,E,F,G) and (A,B,C) as 2 distinct (non overlapping) communities in this network - or of course, (D,E,F) and (A,B,C,G).</p>
<p>I am looking for an algorithm, implemented in Java, that would be able to detect (D,E,F,G) and (A,B,C,G) as the two overlapping (because they overlap on G) communities in this network.</p> | 2013-01-10 07:48:16.747000+00:00 | 2014-01-23 14:26:44.077000+00:00 | 2013-09-14 18:31:01.893000+00:00 | java|networking|graph|cluster-analysis | ['http://gephi.org/features/', 'http://gephi.org/docs/api/org/gephi/graph/api/package-summary.html', 'http://arxiv.org/pdf/0906.0612v2.pdf'] | 3 |
3,245,305 | <p><strong>Be very careful fitting power laws!!</strong> Many reported power laws are actually badly fitted by a power law. See <a href="http://dx.doi.org/10.1137/070710111" rel="noreferrer">Clauset et al.</a> for all the details (also on <a href="http://arxiv.org/abs/0706.1062" rel="noreferrer">arxiv</a> if you don't have access to the journal). They have a <a href="http://tuvalu.santafe.edu/%7Eaaronc/powerlaws/" rel="noreferrer">companion website</a> to the article which now links to a Python implementation. Don't know if it uses Scipy because I used their R implementation when I last used it.</p> | 2010-07-14 10:37:02.633000+00:00 | 2010-07-14 10:37:02.633000+00:00 | null | null | 3,242,326 | <p>I have a data set that I know has a Pareto distribution. Can someone point me to how to fit this data set in Scipy? I got the below code to run but I have no idea what is being returned to me (a,b,c). Also, after obtaining a,b,c, how do I calculate the variance using them?</p>
<pre><code>import scipy.stats as ss
import scipy as sp
a,b,c=ss.pareto.fit(data)
</code></pre> | 2010-07-13 23:26:34.603000+00:00 | 2021-06-10 13:58:55.037000+00:00 | 2014-06-18 18:00:44.263000+00:00 | python|scipy|distribution | ['http://dx.doi.org/10.1137/070710111', 'http://arxiv.org/abs/0706.1062', 'http://tuvalu.santafe.edu/%7Eaaronc/powerlaws/'] | 3 |
69,650,980 | <p>Actually the error message is not very useful, but I figured the problem out: adding <code>"https://export.arxiv.org/*",</code> to the manifest's permissions.</p>
<p>For some reason (?) Chrome allows <code>https://export.arxiv.org/api/...</code> when declaring <code>https://arxiv.org/*</code>, but Firefox does not.</p> | 2021-10-20 18:01:21.653000+00:00 | 2021-10-20 18:01:21.653000+00:00 | null | null | 69,650,947 | <p>I'm fetching data from Arxiv.org's API with my Chrome extension.</p>
<p>The following code works when executed:</p>
<ul>
<li>[x] Popup on Chrome</li>
<li>[x] Content script on Chrome</li>
<li>[x] Popup on Firefox</li>
<li>[ ] Content script on Firefox <- why is that, how can I debug?</li>
</ul>
<p>If it is of any help, <code>content_script.js</code> is triggered on <a href="https://arxiv.org/abs/1801.06146" rel="nofollow noreferrer">https://arxiv.org/abs/1801.06146</a></p>
<pre class="lang-js prettyprint-override"><code>// content_script.js
$.get(`https://export.arxiv.org/api/query`, { id_list: "1801.06146" })
.done((data) => {
console.log("done");
console.log(data);
})
.fail((jqXHR, textStatus, errorThrown) => {
console.log("fail");
console.log({ jqXHR, textStatus, errorThrown });
});
</code></pre>
<p>The FF failure looks like:</p>
<p><a href="https://i.stack.imgur.com/3nyLb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3nyLb.png" alt="jserror" /></a></p>
<pre class="lang-json prettyprint-override"><code>// manifest.json
"permissions": [
"https://arxiv.org/*",
"https://proceedings.neurips.cc/*",
"https://openaccess.thecvf.com/*",
"tabs",
"activeTab",
"storage",
"unlimitedStorage",
"downloads"
],
"content_scripts": [
{
"matches": [
"*://arxiv.org/*",
"*://*.arxiv-vanity.com/*",
"*://proceedings.neurips.cc/*",
"*://openaccess.thecvf.com/*"
],
"run_at": "document_start",
"js": [
"src/shared/jquery.min.js",
"src/shared/utils.min.js",
"src/content_scripts/content_script.js"
],
"css": [
"src/content_scripts/downloadButton.css",
"src/content_scripts/loader.css",
"src/content_scripts/content_script.css"
]
}
]
</code></pre> | 2021-10-20 17:58:30.917000+00:00 | 2021-10-20 18:01:21.653000+00:00 | null | javascript|jquery|google-chrome-extension|firefox-addon-webextensions | [] | 0 |
7,994,087 | <p><strong>References:</strong></p>
<p>The canonical references for type classes in Coq - beyond <a href="http://coq.inria.fr/refman/Reference-Manual024.html" rel="nofollow">the manual</a> - are <a href="http://mattam.org/research/publications/First-Class_Type_Classes.pdf" rel="nofollow">this paper</a>, and <a href="http://mattam.org/research/PhD.en.html" rel="nofollow">the thesis</a> (in french) of <a href="http://mattam.org/" rel="nofollow">Matthieu Sozeau</a>. There are less canonical references (with different points of view) at the research level in <a href="http://arxiv.org/abs/1102.1323" rel="nofollow">a recent paper</a>, and in <a href="http://pastel.archives-ouvertes.fr/pastel-00649586" rel="nofollow">my thesis</a>. You should also spend some time on the #coq channel on Freenode, and subscribe to <a href="https://sympa-roc.inria.fr/wws/arc/coq-club" rel="nofollow">the mailing list</a>.</p>
<p><strong>Your problem:</strong></p>
<p>The syntax issue is not with <code>Classes</code> per se, but with <a href="http://coq.inria.fr/refman/Reference-Manual004.html#htoc53" rel="nofollow">maximally inserted</a> <a href="http://coq.inria.fr/refman/Reference-Manual004.html#toc20" rel="nofollow">implicit arguments</a>. The <code>Monoid</code> and <code>AbelianMonoid</code> <em>types</em> have in your definition (implicit) parametric arguments that are, in this order, the domain type, the operation, and the identity - as indexed by the dependent product that you see fully expanded when you print those record types. They are filled automatically when you mention the dependent product without its arguments in a position where it would need them. </p>
<p>Indeed, implicit argument resolution will automatically insert the required parametric arguments to be identical (for both products that depend on them : <code>P</code> and <code>M</code>'s types) if left to its own devices. You just need to specify constraints between those identifiers by specifying variables for the various identifiers, distinct when appropriate :</p>
<pre><code>Class Semiring A mul add `(P : AbelianMonoid A mul) `(M : Monoid A add) := {
}.
</code></pre>
<p>The result :</p>
<pre><code>> Print Semiring.
Record Semiring (A : Type) (mul add : A -> A -> A)
(M0 : Semigroup mul) (id0 : A) (M : Monoid M0 id0)
(P : AbelianMonoid M) (M1 : Semigroup add) (id1 : A)
(M : Monoid M1 id1) : Type := Build_Semiring { }
For Semiring: Arguments M0, id0, M, M1, id1 are implicit and maximally
inserted
For Semiring: Argument scopes are [type_scope _ _ _ _ _ _ _ _ _]
For Build_Semiring: Argument scopes are [type_scope _ _ _ _ _ _ _ _ _]
</code></pre>
<p>Note the identities for the abelian monoid and monoid are this time distinct. It's a good exercise (even if it makes little mathematical sense) to train yourself to write the record type (aka the Class) you would want if you had the same identity element for the additive and multiplicative structures.</p> | 2011-11-03 11:26:16.653000+00:00 | 2012-01-20 11:35:22.040000+00:00 | 2012-01-20 11:35:22.040000+00:00 | null | 7,990,301 | <p>I can naively construct a hierarchy of algebraic structures in Coq using type classes. I'm having some trouble finding resources on Coq's syntax and semantics for type classes. However, I believe the following is a correct implementation of semigroups, monoids and commutative monoids:</p>
<pre><code>Class Semigroup {A : Type} (op : A -> A -> A) : Type := {
op_associative : forall x y z : A, op x (op y z) = op (op x y) z
}.
Class Monoid `(M : Semigroup) (id : A) : Type := {
id_ident_left : forall x : A, op id x = x;
id_ident_right : forall x : A, op x id = x
}.
Class AbelianMonoid `(M : Monoid) : Type := {
op_commutative : forall x y : A, op x y = op y x
}.
</code></pre>
<p>If I understand correctly, additional parameters (e.g., the identity element of a monoid) can be added by first declaring <code>Monoid</code> an instance of <code>Semigroup</code>, then parameterizing on <code>id : A</code>. However, something odd is occurring in the record constructed for <code>AbelianMonoid</code>.</p>
<pre><code>< Print Monoid.
Record Monoid (A : Type) (op : A -> A -> A) (M : Semigroup op)
(id : A) : Type := Build_Monoid
{ id_ident_left : forall x : A, op id x = x;
id_ident_right : forall x : A, op x id = x }
< Print AbelianMonoid.
Record AbelianMonoid (A : Type) (op : A -> A -> A)
(M0 : Semigroup op) (id0 : A) (M : Monoid M0 id0) :
Type := Build_AbelianMonoid
{ op_commutative : forall x y : A, op x y = op y x }
</code></pre>
<p>This occurred when I was trying to build a class for semigroups. I thought that the following syntax was correct:</p>
<pre><code>Class Semiring `(P : AbelianMonoid) `(M : Monoid) := {
...
}.
</code></pre>
<p>However, I couldn't disambiguate the correct operators and identity elements. Printing the records revealed the problems outlined above. So I have two questions: first, how do I correctly declare the class <code>Monoid</code>; second, how do I disambiguate functions in superclasses?</p>
<p>What I'd really like is a good resources that clearly explains Coq's type classes without antiquated syntax. For example, I thought Hutton's book explained type classes in Haskell clearly.</p> | 2011-11-03 04:34:50.540000+00:00 | 2012-01-20 11:35:22.040000+00:00 | 2011-11-03 04:41:56.823000+00:00 | functional-programming|scope|typeclass|coq | ['http://coq.inria.fr/refman/Reference-Manual024.html', 'http://mattam.org/research/publications/First-Class_Type_Classes.pdf', 'http://mattam.org/research/PhD.en.html', 'http://mattam.org/', 'http://arxiv.org/abs/1102.1323', 'http://pastel.archives-ouvertes.fr/pastel-00649586', 'https://sympa-roc.inria.fr/wws/arc/coq-club', 'http://coq.inria.fr/refman/Reference-Manual004.html#htoc53', 'http://coq.inria.fr/refman/Reference-Manual004.html#toc20'] | 9 |
66,098,417 | <p>Hi, I have one solution in PyTorch.</p>
<pre><code>import torch
import torch.nn as nn
from torch.utils import data
from torchvision import transforms
from torchvision import datasets
import matplotlib.pyplot as plt
import numpy as np
# use the ImageNet transformation
transform = transforms.Compose([transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
# define a 1 image dataset
dataset = datasets.ImageFolder(root='./data/Elephant/', transform=transform)
# define the dataloader to load that single image
dataloader = data.DataLoader(dataset=dataset, shuffle=False, batch_size=1)
vgg19 = Mymodel() ## create an object of your model
vgg19.load_state_dict(torch.load("your_vgg19_weights"))
class VGG(nn.Module):
def __init__(self):
super(VGG, self).__init__()
# get the pretrained VGG19 network
self.vgg = vgg19
# disect the network to access its last convolutional layer
self.features_conv = self.vgg.features[:36] # 36th layer was my last conv layer
# get the max pool of the features stem
self.max_pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
# get the classifier of the vgg19
self.classifier = self.vgg.classifier
# placeholder for the gradients
self.gradients = None
# hook for the gradients of the activations
def activations_hook(self, grad):
self.gradients = grad
def forward(self, x):
x = self.features_conv(x)
# register the hook
h = x.register_hook(self.activations_hook)
# apply the remaining pooling
x = self.max_pool(x)
x = x.view((1, -1))
x = self.classifier(x)
return x
# method for the gradient extraction
def get_activations_gradient(self):
return self.gradients
# method for the activation exctraction
def get_activations(self, x):
return self.features_conv(x)
vgg = VGG()
# set the evaluation mode
vgg.eval()
# get the image from the dataloader
img, _ = next(iter(dataloader))
# get the most likely prediction of the model
pred_class = vgg(img).argmax(dim=1).numpy()[0]
pred = vgg(img)
pred[:, pred_class].backward()
# pull the gradients out of the model
gradients = vgg.get_activations_gradient()
# pool the gradients across the channels
pooled_gradients = torch.mean(gradients, dim=[0, 2, 3])
# get the activations of the last convolutional layer
activations = vgg.get_activations(img).detach()
# weight the channels by corresponding gradients
for i in range(512):
activations[:, i, :, :] *= pooled_gradients[i]
# average the channels of the activations
heatmap = torch.mean(activations, dim=1).squeeze()
# relu on top of the heatmap
# expression (2) in https://arxiv.org/pdf/1610.02391.pdf
heatmap = np.maximum(heatmap, 0)
# normalize the heatmap
heatmap /= torch.max(heatmap)
heatmap = heatmap.numpy()
import cv2
img = cv2.imread('./data/Elephant/data/05fig34.jpg')
heatmap = cv2.resize(heatmap, (img.shape[1], img.shape[0]))
heatmap = np.uint8(255 * heatmap)
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
superimposed_img = heatmap * 0.4 + img
cv2.imwrite('./map.jpg', superimposed_img) ###saves gradcam visualization image
</code></pre> | 2021-02-08 08:52:12.877000+00:00 | 2021-02-08 12:31:53.613000+00:00 | 2021-02-08 12:31:53.613000+00:00 | null | 56,583,080 | <p>I want to implement Grad-CAM on my own network, should I save my model and load it, then treat my saved model like VGG-16, then do similar operations?</p>
<p>I tried to search on the internet, and I found that all methods are based on famous models, not their own.</p>
<p>So I wonder: maybe I just need to treat my own model as VGG-16 and then do similar things.</p> | 2019-06-13 14:44:40.143000+00:00 | 2021-02-08 12:31:53.613000+00:00 | null | python-3.x|pytorch | [] | 0
49,049,392 | <p>It seems that someone has sorted out (in 2018) the question (from 2017).</p>
<p>Vanilla adaptive gradients (RMSProp, Adagrad, Adam, etc) do not match well with L2 regularization.</p>
<p>Link to the paper [<a href="https://arxiv.org/pdf/1711.05101.pdf" rel="noreferrer">https://arxiv.org/pdf/1711.05101.pdf</a>] and some intro:</p>
<blockquote>
<p>In this paper, we show that a
major factor of the poor generalization of the most popular
adaptive gradient method, Adam, is due to the fact that L2
regularization is not nearly as effective for it as for SGD.</p>
<p>L2 regularization and weight decay are not identical.
Contrary to common belief, the two techniques are not
equivalent. For SGD, they can be made equivalent by
a reparameterization of the weight decay factor based
on the learning rate; this is not the case for Adam. <strong>In
particular, when combined with adaptive gradients, L2
regularization leads to weights with large gradients
being regularized less than they would be when using
weight decay.</strong></p>
</blockquote> | 2018-03-01 12:05:13.773000+00:00 | 2018-03-01 12:05:13.773000+00:00 | null | null | 42,415,319 | <p>Should I avoid using L2 regularization in conjunction with RMSprop and NAG?</p>
<p>Does the L2 regularization term interfere with the gradient algorithm (RMSprop)?</p>
<p>Best regards,</p> | 2017-02-23 12:06:18.947000+00:00 | 2018-03-01 12:05:13.773000+00:00 | null | machine-learning|neural-network|backpropagation | ['https://arxiv.org/pdf/1711.05101.pdf'] | 1
66,947,147 | <p>TFF can be used with arbitrary TensorFlow; generally, all the symbols in the <code>tff.learning</code> namespace are really convenience wrappers for common use cases of the lower-level API, the <a href="https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_1" rel="nofollow noreferrer">federated core</a>.</p>
<p>I'm not personally aware of any 'FL for object detection' research, but there appears to <a href="https://arxiv.org/pdf/2006.01412.pdf" rel="nofollow noreferrer">have</a> <a href="https://arxiv.org/abs/1910.11089" rel="nofollow noreferrer">been</a> <a href="https://dl.acm.org/doi/abs/10.1145/3387168.3387181" rel="nofollow noreferrer">some</a> <a href="https://arxiv.org/abs/2001.06202" rel="nofollow noreferrer">projects</a> in this direction previously. I'm no expert in object detection, but generally it seems to me that there may be some interesting questions around e.g. labels in the federated setting for this kind of application.</p>
<p>That is, combining the two thrusts above: I think you are interested in a research area that has gotten some thought but not a tremendous amount yet, and therefore you are likely to be writing your own federated algorithms from scratch. TFF may certainly be a useful tool here; see for example this <a href="https://github.com/tensorflow/federated/tree/master/tensorflow_federated/python/examples/simple_fedavg" rel="nofollow noreferrer">standalone implementation of FedAvg</a> for another perspective on writing a custom algorithm in TFF.</p> | 2021-04-05 00:57:19.037000+00:00 | 2021-04-05 00:57:19.037000+00:00 | null | null | 66,395,599 | <p>I plan to use federated learning for an object detection algorithm I already developed for detecting weeds.
As I researched, I found TensorFlow Federated examples on image classification, like the following link:
<a href="https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification" rel="nofollow noreferrer">https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification</a></p>
<p>My question is: can we use federated learning and TensorFlow Federated for object detection algorithms?
If yes, would you please provide me with some links and examples?</p> | 2021-02-27 04:28:26.583000+00:00 | 2021-04-05 00:57:19.037000+00:00 | null | object-detection|tensorflow-federated|federated-learning | ['https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_1', 'https://arxiv.org/pdf/2006.01412.pdf', 'https://arxiv.org/abs/1910.11089', 'https://dl.acm.org/doi/abs/10.1145/3387168.3387181', 'https://arxiv.org/abs/2001.06202', 'https://github.com/tensorflow/federated/tree/master/tensorflow_federated/python/examples/simple_fedavg'] | 6 |
13,044,530 | <p>This looks like the <a href="http://en.wikipedia.org/wiki/Exact_cover" rel="nofollow">Exact Cover Problem</a>. You basically want to cover all fields on the board with your given pieces. I can recommend <a href="http://en.wikipedia.org/wiki/Dancing_Links" rel="nofollow">Dancing Links</a>, published by Donald Knuth. In the <a href="http://arxiv.org/pdf/cs/0011047v1.pdf" rel="nofollow">paper</a> you find a clear example for the <a href="http://en.wikipedia.org/wiki/Pentomino" rel="nofollow">pentomino problem</a> which should give you a good idea of how it works.</p>
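<p>To make the placement-enumeration idea concrete before the summary below, here is a small, hypothetical Python sketch of how the candidate placements of one piece (the rows of the exact cover matrix) could be generated:</p>

<pre><code>def placements(piece_cells, width, height):
    # piece_cells: set of (dx, dy) offsets describing one orientation of a piece
    # yields, for every board position, the set of board cells that placement would cover
    for y in range(height):
        for x in range(width):
            cells = {(x + dx, y + dy) for dx, dy in piece_cells}
            if all(0 <= cx < width and 0 <= cy < height for cx, cy in cells):
                yield cells
</code></pre>

<p>Each yielded set corresponds to one row of the exact cover matrix; an Algorithm X / Dancing Links solver then searches for a set of disjoint rows that together cover every board cell (plus one extra column per piece, so each piece is used exactly once).</p>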
<p>You basically set up a system that keeps track of all possible ways to place a specific block on the board. By placing a block, you cover a set of positions on the field. These positions can't be used to place any other blocks. All possibilities would then be erased from the problem setting before you place another block. The dancing links technique allows for fast backtracking and erasing of possibilities.</p> | 2012-10-24 07:21:18.633000+00:00 | 2012-10-24 07:21:18.633000+00:00 | null | null | 13,037,826 | <p>I was given a brain puzzle from lonpos.cc as a present. I was curious how many different solutions there were, and I quite enjoy writing algorithms and code, so I started writing an application to brute force it.</p>
<p>The puzzle looks like this : <a href="http://www.lonpos.cc/images/LONPOSdb.jpg" rel="nofollow">http://www.lonpos.cc/images/LONPOSdb.jpg</a> / <a href="http://cdn100.iofferphoto.com/img/item/191/498/944/u2t6.jpg" rel="nofollow">http://cdn100.iofferphoto.com/img/item/191/498/944/u2t6.jpg</a></p>
<p>It's a board of 20x14 "points". And all puzzle pieces can be flipped and turned. I wrote an application where each piece (and the puzzle) is presented like this:</p>
<pre><code>01010
00100
01110
01110
11111
01010
</code></pre>
<p>Now my application so far is reasonably simple.</p>
<p>It takes the list of pieces and a blank board, pops off piece #0,
flips it in every direction, and for that piece tries to place it at every x and y coordinate. If it successfully places a piece, it passes a copy of the new "board" with some pieces taken to a recursive function, and tries all combinations for the remaining pieces.</p>
<p>Explained in pseudocode:</p>
<pre><code>bruteForce(Board base, List pieces) {
for (Piece in pieces.pop, piece.pop.flip, piece.pop.flip2...) {
int x,y = 0;
if canplace(piece, x, y) {
Board newBoard = base.clone();
newBoard.placePiece(piece, x, y);
bruteForce(newBoard, pieces);
}
## increment x until x > width, then y
}
}
</code></pre>
<p>Now I'm trying to find out ways to make this quicker. Things I've thought of so far:</p>
<ol>
<li>Making it solve in parallel - Implemented, now using 4 threads.</li>
<li>Sorting the pieces, and only trying to place the pieces that will fit in the x,y space we're trying to fit. (Aka if we're on the bottom row, and we only have 4 "points" from our position to the bottom, don't try the ones that are 8 high).</li>
<li>Not duplicating the board, instead using placePiece and removePiece or something like it.</li>
<li>Checking for "invalid" boards, aka if a piece is impossible to reach (boxed in completely).</li>
</ol>
<p>Anyone have any creative ideas on how I can do this quicker? Or any way to mathematically calculate how many different combinations there are? </p> | 2012-10-23 19:32:12.500000+00:00 | 2012-10-24 07:21:18.633000+00:00 | 2012-10-23 19:53:24.747000+00:00 | performance|algorithm|puzzle | ['http://en.wikipedia.org/wiki/Exact_cover', 'http://en.wikipedia.org/wiki/Dancing_Links', 'http://arxiv.org/pdf/cs/0011047v1.pdf', 'http://en.wikipedia.org/wiki/Pentomino'] | 4 |
23,519,448 | <p><strong>Update:</strong></p>
<p><strong>Quoting from the standard:</strong></p>
<blockquote>
<p>If during the evaluation of an expression, the result is not
mathematically defined or not in the range of representable values for
its type, <strong>the behavior is undefined</strong>. [ Note: most existing
implementations of C++ ignore integer overflows. Treatment of division
by zero, forming a remainder using a zero divisor, and all floating
point exceptions vary among machines, and is usually adjustable by a
library function].</p>
</blockquote>
<p><strong>Quoting from</strong> <a href="http://lists.freebsd.org/pipermail/freebsd-numerics/2014-March/000549.html" rel="nofollow noreferrer">http://lists.freebsd.org/pipermail/freebsd-numerics/2014-March/000549.html</a>:</p>
<blockquote>
<p>It appears that clang developers have chosen the naive complex
division algorithm.</p>
<p>...</p>
<p>I did a bit of grepping. Could it be that the division algorithm is
contained in the file
src/contrib/llvm/tools/clang/lib/CodeGen/CGExprComplex.cpp inside the
function ComplexExprEmitter::EmitBinDiv ?</p>
<p>If you look at the code, it certainly looks like it is generating code
to perform complex division, and it definitely looks like they are
using the naive algorithm.</p>
</blockquote>
<p>Assuming that clang indeed uses the naive complex division, the expression <code>1.0 / c</code> evaluates, according to the naive implementation of complex division, to the following expression
<img src="https://i.stack.imgur.com/UVMaK.gif" alt="enter image description here">,</p>
<p><img src="https://i.stack.imgur.com/UHhLq.gif" alt="enter image description here"></p>
<p>1e-324 is out of the double range. According to the standard, this results in <strong>undefined behaviour</strong>.</p>
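<p>To see where the overflow comes from, here is a small Python/NumPy sketch of the same arithmetic in double precision (this is only an illustration of the naive formula above, not the actual clang-generated code):</p>

<pre><code>import numpy as np

a, b = np.float64(1.0), np.float64(0.0)      # numerator 1 + 0i
c, d = np.float64(1e-162), np.float64(0.0)   # denominator 1e-162 + 0i

denom = c*c + d*d            # 1e-324 underflows to 0.0 in double precision
real = (a*c + b*d) / denom   # division by 0.0 -> inf (NumPy emits a divide warning)
print(denom, real)
</code></pre>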
<p>Also making a search in the <a href="http://llvm.org/bugs/buglist.cgi?query_format=specific&order=relevance%20desc&bug_status=__open__&product=&content=complex%20division" rel="nofollow noreferrer">LLVM/Clang bug list</a>, it appears that there are quite some issues concerning <em>complex division.</em></p>
<p><strong>As such your case is a bug and you should report it.</strong></p>
<p>For anyone who is interested in how robust complex division is implemented, take a look at</p>
<ol>
<li><a href="http://ideone.com/bqFk8j" rel="nofollow noreferrer">http://ideone.com/bqFk8j</a> and</li>
<li><a href="http://arxiv.org/abs/1210.4539" rel="nofollow noreferrer">A Robust Complex Division in Scilab.</a></li>
</ol> | 2014-05-07 13:38:43.810000+00:00 | 2014-05-08 17:10:37.567000+00:00 | 2014-05-08 17:10:37.567000+00:00 | null | 23,519,366 | <p>When I compile the following code with g++ (4.8.1 or 4.9.0) or clang++ (3.4) I get different outputs.</p>
<pre><code>#include <iostream>
#include <complex>
int main() {
std::complex<double> c = {1.e-162,0};
std::cout << 1.0/c << std::endl;
return 0;
}
</code></pre>
<p>g++:</p>
<pre><code>(1e+162,0)
</code></pre>
<p>clang++:</p>
<pre><code>(inf,-nan)
</code></pre>
<p>Is this a bug in clang? </p>
<p>Update:</p>
<p>Thank you for your answers! I reported the bug: <a href="http://llvm.org/bugs/show_bug.cgi?id=19820" rel="nofollow">http://llvm.org/bugs/show_bug.cgi?id=19820</a></p> | 2014-05-07 13:34:40.263000+00:00 | 2014-05-21 22:02:50.577000+00:00 | 2014-05-21 22:02:50.577000+00:00 | c++|clang|complex-numbers | ['http://lists.freebsd.org/pipermail/freebsd-numerics/2014-March/000549.html', 'http://llvm.org/bugs/buglist.cgi?query_format=specific&order=relevance%20desc&bug_status=__open__&product=&content=complex%20division', 'http://ideone.com/bqFk8j', 'http://arxiv.org/abs/1210.4539'] | 4 |
66,705,121 | <p>When counting the number of layers in a neural network, we usually only count convolutional layers and fully connected layers. A pooling layer is taken together with the convolutional layer and counted as one layer, and dropout is a regularization technique, so it is also not counted as a separate layer.</p>
<p>For reference, the VGG16 model is defined as a 16-layer model. Those 16 layers are only the convolutional layers and fully connected dense layers. If you counted all the pooling and activation layers, it would come to 41 layers, which is not how the model is described. Reference: <a href="https://in.mathworks.com/help/deeplearning/ref/vgg16.html" rel="nofollow noreferrer">VGG16</a>, <a href="https://arxiv.org/pdf/1409.1556.pdf" rel="nofollow noreferrer">VGG16 Paper</a></p>
<p>So as per your code you have 3 layers (1 Convolutional Layer with 28 Neurons, 1 Fully Connected Layer with 128 Neurons and 1 Fully Connected Layer with 10 neurons)</p>
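<p>If you do want a deeper network (the next paragraph explains that it is not needed for MNIST), a purely hypothetical sketch of a 10-layer model (6 convolutional + 4 dense layers, counted as above, reusing the imports from your code; filter and unit counts are arbitrary) could look like this:</p>

<pre><code>model = Sequential()
model.add(Conv2D(32, (3,3), padding='same', activation='relu', input_shape=(28,28,1)))
model.add(Conv2D(32, (3,3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, (3,3), padding='same', activation='relu'))
model.add(Conv2D(64, (3,3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(128, (3,3), padding='same', activation='relu'))
model.add(Conv2D(128, (3,3), padding='same', activation='relu'))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
</code></pre>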
<p>As for making it a 10 layer network you can add more convolutional layers or dense layers before the output layers, but it won't be necessary for the MNIST dataset.</p> | 2021-03-19 08:54:35.420000+00:00 | 2021-03-25 11:42:39.800000+00:00 | 2021-03-25 11:42:39.800000+00:00 | null | 66,693,260 | <p>Please add a minimum comment on your thought, so that I can improve my query. Thanks. -)</p>
<hr />
<p>I am working on the <code>MNIST</code> dataset and wrote some <code>CNN</code> code. However, I am confused about some points of the <code>CNN</code> code. How do I know the number of layers in a neural network? With my current understanding, I think this has 6 layers with 4 hidden layers. Is that right? And what if I need to extend it to 10 layers? How do I do that?</p>
<pre><code>import tensorflow as tf  # needed below for tf.nn.relu / tf.nn.softmax
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Dropout, Flatten, MaxPooling2D
model = Sequential()
model.add(Conv2D(28, kernel_size=(3,3),
input_shape = ...))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(128, activation=tf.nn.relu))
model.add(Dropout(0.2))
model.add(Dense(10, activation=tf.nn.softmax))
</code></pre> | 2021-03-18 14:35:51+00:00 | 2021-03-31 13:44:46.617000+00:00 | 2021-03-31 13:44:46.617000+00:00 | python|tensorflow|machine-learning|keras|deep-learning | ['https://in.mathworks.com/help/deeplearning/ref/vgg16.html', 'https://arxiv.org/pdf/1409.1556.pdf'] | 2 |
49,039,994 | <p>Mathematically these two methods are different. One is called stochastic gradient descent and the other is called batch gradient descent. You are missing the most commonly used one - mini batch gradient descent. There has been a lot of research on this topic but basically different batch sizes have different convergence properties. Generally people use batch sizes that are greater than one but not the full dataset. This is usually necessary since most datasets cannot fit into memory all at once. Also if your model uses batch normalization then a batch size of one won't converge. This <a href="https://arxiv.org/pdf/1705.08741.pdf" rel="nofollow noreferrer">paper</a> discusses the effects of batch size (among other things) on performance. The takeaway is that larger batch sizes do not generalize as well. (They actually argue it isn't the batch size itself but the fact that you have fewer updates when the batch is larger.) I would recommend batch sizes of 32 to start and experiment to see how batch size affects performance. </p>
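<p>As a rough sketch of that middle ground, the single-example loop from option 1 can be turned into a mini-batch loop (reusing the variable names from the question; the batch size of 32 is only a starting point, not a tuned value):</p>

<pre><code>batch_size = 32
for start in range(0, len(inputTrainingData), batch_size):
    end = start + batch_size
    train_step.run(feed_dict={x: inputTrainingData[start:end],
                              y: outputTrainingData[start:end],
                              keep_prob: .60}, session=sess)
</code></pre>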
<p>Here is a graph of the effects of batch size on training and validation performance from the paper I linked. </p>
<p><a href="https://i.stack.imgur.com/XAU7U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XAU7U.png" alt="enter image description here"></a></p> | 2018-02-28 22:52:29.430000+00:00 | 2018-02-28 22:52:29.430000+00:00 | null | null | 42,574,611 | <p>Assuming we have 500k items worth of training data, does it matter if we train the model one item at a time or 'n' items at a time or all at once?</p>
<p>Considering <code>inputTrainingData</code> and <code>outputTrainingData</code> to be <code>[[]]</code> and <code>train_step</code> to be any generic tensorflow training step. </p>
<p><strong>Option 1</strong> Train one item at a time -</p>
<pre><code>for i in range(len(inputTrainingData)):
train_step.run(feed_dict={x: [inputTrainingData[i]], y: [outputTrainingData[i]], keep_prob: .60}, session= sess)
</code></pre>
<p><strong>Option 2</strong> Train on all at once -</p>
<pre><code>train_step.run(feed_dict={x: inputTrainingData, y: outputTrainingData, keep_prob: .60}, session= sess)
</code></pre>
<p>Is there any difference between options 1 and 2 above as far as the quality of training is concerned? </p> | 2017-03-03 09:03:49.880000+00:00 | 2018-02-28 22:52:29.430000+00:00 | null | python|tensorflow | ['https://arxiv.org/pdf/1705.08741.pdf', 'https://i.stack.imgur.com/XAU7U.png'] | 2 |
36,299,641 | <p>It depends on the usage:</p>
<p>1) If you only want to get sentence vectors for some known data, check out the paragraph vector in these papers:</p>
<p>Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. Eprint Arxiv,4:1188–1196. </p>
<p>A. M. Dai, C. Olah, and Q. V. Le. 2015. DocumentEmbedding with Paragraph Vectors. ArXiv e-prints,July.</p>
<p>2) If you want a model to estimate sentence vector for unknown(test) sentences with unsupervised approach:</p>
<p>You could check out this paper:</p>
<p><a href="https://github.com/StevenLOL/aicyber_semeval_2016_ivector" rel="noreferrer">Steven Du and Xi Zhang. 2016. Aicyber at SemEval-2016 Task 4: i-vector based sentence representation. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval 2016), San Diego, US</a></p>
<p>3) Researchers are also looking at the output of certain layers in RNN or LSTM networks; a recent example is:</p>
<p><a href="http://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/view/12195" rel="noreferrer">http://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/view/12195</a></p>
<p>4) For gensim doc2vec, many researchers could not get good results; to overcome this problem, the following paper uses doc2vec based on pre-trained word vectors.</p>
<p><a href="https://github.com/jhlau/doc2vec" rel="noreferrer">Jey Han Lau and Timothy Baldwin (2016). An Empirical Evaluation of doc2vec with Practical Insights into Document Embedding Generation. In Proceedings of the 1st Workshop on Representation Learning for NLP, 2016.</a></p>
<p>5) <a href="https://github.com/search?utf8=%E2%9C%93&q=tweet2vec&type=" rel="noreferrer">tweet2vec</a> or <a href="https://github.com/epfml/sent2vec" rel="noreferrer">sent2vec</a>.</p>
<p>Facebook has the SentEval project for evaluating the quality of sentence vectors. </p>
<p><a href="https://github.com/facebookresearch/SentEval" rel="noreferrer">https://github.com/facebookresearch/SentEval</a></p>
<p>6) There is more information in the following paper:</p>
<p>Neural Network Models for Paraphrase Identification, Semantic Textual Similarity, Natural Language Inference, and Question Answering</p>
<hr>
<p>And for now you can use 'BERT':</p>
<p>Google has released the source code as well as pretrained models.</p>
<p><a href="https://github.com/google-research/bert" rel="noreferrer">https://github.com/google-research/bert</a></p>
<p>And here is an example to run bert as a service:</p>
<p><a href="https://github.com/hanxiao/bert-as-service" rel="noreferrer">https://github.com/hanxiao/bert-as-service</a></p> | 2016-03-30 04:21:29.487000+00:00 | 2018-11-23 00:59:17.700000+00:00 | 2018-11-23 00:59:17.700000+00:00 | null | 29,760,935 | <p>I have generated the vectors for a list of tokens from a large document using word2vec. Given a sentence, is it possible to get the vector of the sentence from the vector of the tokens in the sentence. </p> | 2015-04-21 00:46:52.413000+00:00 | 2019-12-09 15:37:08.377000+00:00 | null | word2vec | ['https://github.com/StevenLOL/aicyber_semeval_2016_ivector', 'http://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/view/12195', 'https://github.com/jhlau/doc2vec', 'https://github.com/search?utf8=%E2%9C%93&q=tweet2vec&type=', 'https://github.com/epfml/sent2vec', 'https://github.com/facebookresearch/SentEval', 'https://github.com/google-research/bert', 'https://github.com/hanxiao/bert-as-service'] | 8 |
71,461,317 | <p>"I can't find good documentation" - you could read <a href="https://arxiv.org/abs/1412.6980" rel="nofollow noreferrer">the original paper</a>, for example. Also, the documentation is here: <a href="https://pytorch.org/docs/stable/generated/torch.optim.Adam.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/generated/torch.optim.Adam.html</a>.</p>
<p>If by "learning rate" you mean the <code>lr</code> parameter of <code>torch.optim.Adam</code>, then it remains constant - Adam itself doesn' modify it, in contrast to <a href="https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate" rel="nofollow noreferrer">learning-rate schedulers</a>. However, Adam applies extra scaling to the gradient, so the learning rate is applied to this transformation of the gradient, not the gradient itself. This can't be turned off because this is the essence of the algorithm. If you'd like to apply the learning rate directly to the gradient, use stochastic gradient descent.</p> | 2022-03-13 22:20:10.270000+00:00 | 2022-03-13 22:20:10.270000+00:00 | null | null | 71,461,240 | <p>I am optimizing lstm networks with pytorch using the Adam optimizer. I have the feeling that my learning rate is decaying too fast, but I am not even 100% sure if Adam does that, since I can't find good documentation. If Adam decays the learning rate by default, is there a way to turn this off and set a constant learning rate?</p> | 2022-03-13 22:07:28.320000+00:00 | 2022-03-13 22:20:10.270000+00:00 | null | python|pytorch | ['https://arxiv.org/abs/1412.6980', 'https://pytorch.org/docs/stable/generated/torch.optim.Adam.html', 'https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate'] | 3 |
48,035,889 | <p>I think this may help: </p>
<blockquote>
<p>In [18], two approaches for back-propagating gradients through a
computational graph are described. The first, which the authors refer
to as symbol-to-number differentiation, receives a set of input
values and then computes the numerical values of the gradients at
those input values. It does so by explicitly traversing the graph
first in the forward order (forward-propagation) to compute the cost,
then in reverse order (back-propagation) to compute the gradients via
the chain rule. Another approach, more relevant to TensorFlow, is what
[18] calls symbol-to-symbol derivatives and [8] terms automatic
gradient computation. In this case, gradients are not computed by an
explicit implementation of the back- propagation algorithm. Rather,
special nodes are added to the computational graph that calculate the
gradient of each operation and thus ultimately the chain rule. To
perform back- propagation, these nodes must then simply be executed
like any other nodes by the graph evaluation engine. As such, this
approach does not produce the desired derivatives as a numeric value,
but only as a symbolic handle to compute those values.</p>
</blockquote>
<p>Reference: <a href="http://arxiv.org/abs/1610.0117" rel="nofollow noreferrer">http://arxiv.org/abs/1610.0117</a></p> | 2017-12-30 16:41:45.687000+00:00 | 2017-12-30 16:41:45.687000+00:00 | null | null | 41,165,243 | <p>I am a beginner with tensorflow and I want to implement MLP and train it based on the back propagation algorithm but when I read tutorials I found that it uses optmizers like “Stochastic Gradient Descent” and called it back propagation without implementing the algorithm phases. How is this back propagation?</p> | 2016-12-15 13:22:32.557000+00:00 | 2017-12-30 16:41:45.687000+00:00 | 2017-05-03 05:18:37.907000+00:00 | machine-learning|neural-network|tensorflow | ['http://arxiv.org/abs/1610.0117'] | 1 |
69,082,436 | <p>Simple answer:</p>
<p>The cover (aka min_sum_hessian_in_leaf) is just a parameter for taking into account the number of observations in a leaf of our split (actually it is more than that). When doing a split in the m=1,..,M estimators, if the sum of the hessian in a leaf is lower than min_sum_hessian_in_leaf, the tree will stop growing.
For example, in the simplest case, when we are talking about regression and the loss function is the typical (1/2)*sum(y_i - ypred_i)^2 (no regularization), then sum_hessian_in_leaf is equal to the number of observations in the leaf.
So let's say min_sum_hessian_in_leaf = 3; then we require at least 3 observations in each leaf to consider a new split. If min_sum_hessian_in_leaf = 0, each tree (remember a GBM is m=1,..,M trees/estimators) will grow freely, and of course trees that grow without constraints tend to overfit. On the other side, if min_sum_hessian_in_leaf is high enough, the m=1,..,M trees will have low complexity (like stumps / short trees).</p>
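<p>As a hedged illustration of what this looks like in practice (the parameter values are arbitrary, not recommendations, and <code>X_train</code>/<code>y_train</code> are assumed to exist), the parameter is simply passed with the rest of the training parameters:</p>

<pre><code>import lightgbm as lgb

params = {
    "objective": "regression",        # squared-error loss, so the hessian per observation is 1
    "min_sum_hessian_in_leaf": 3.0,   # with this loss: require at least 3 observations per leaf
}
train_set = lgb.Dataset(X_train, label=y_train)
booster = lgb.train(params, train_set, num_boost_round=100)
</code></pre>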
<p>More complex (my interpretation):</p>
<p>At this point you might ask yourself why we are talking about the hessian.
The answer is quite complex, but one way of solving the GBM algorithm (see the algorithm here <a href="https://en.wikipedia.org/wiki/Gradient_boosting" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Gradient_boosting</a>)
is, instead of minimizing the actual loss function, to minimize an approximation of the loss function, obtained by taking
a second-order expansion around 0 (this allows us to solve the problem with the Newton-Raphson algorithm). So the loss function to minimize now depends on the first derivative (which can also be thought of as the jacobian) and the second derivative (which can also be thought of as the hessian).
At this point, we know that in the context of decision trees, when a new split has a high hessian, the split reduces the loss very well (because of the approximation; actually we are trying to reduce the gradient of the loss in the context of GBM).
So when min_sum_hessian_in_leaf > sum_hessian_in_leaf, our split is not good enough and therefore we cannot make the split in our tree. On the other side, if min_sum_hessian_in_leaf <= sum_hessian_in_leaf, our split is good enough at reducing the loss (actually the gradient of the loss) and therefore we can keep the tree growing.
The formula for sum_hessian_in_leaf varies depending on the loss function. So you had better set a different min_sum_hessian_in_leaf depending on the loss function you are trying to minimize (in a regression problem with no regularization,
sum_hessian_in_leaf = number of observations in the leaf, as I stated before).</p>
<p>References:</p>
<p>The YouTuber joshstarmer explains these kinds of details in his playlists on XGBoost/GBMs.
The Elements of Statistical Learning (Tibshirani et al.) has a chapter about GBMs, but not about LightGBM; still, it helps a lot.
XGBoost paper: <a href="https://arxiv.org/pdf/1603.02754.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1603.02754.pdf</a>
(you can also read the LightGBM paper, but this one is more helpful to me since the algorithms are really alike except for EFB and GOSS and other details).</p> | 2021-09-07 04:32:13.550000+00:00 | 2021-09-07 12:15:25.457000+00:00 | 2021-09-07 12:15:25.457000+00:00 | null | 45,248,001 | <p>What is the meaning of min_sum_hessian_in_leaf in lightgbm (see <a href="http://lightgbm.readthedocs.io/en/latest/Parameters.html" rel="nofollow noreferrer">http://lightgbm.readthedocs.io/en/latest/Parameters.html</a>)? I know that the hessian is a matrix of second order derivatives but I don't understand what that means in the context of lightgbm (or gradient boosting in general). And how does lightgbm condense that matrix down into a single value?</p>
8,313,405 | <p>The dynamic programming algorithm is O(n^{2k}) where k is the number of distinct items and n is the total number of items. This can be very slow irrespective of the implementation. Typically, when solving an NP-hard problem, heuristics are required for speed.</p>
<p>I suggest you consider Next Fit Decreasing Height (NFDH) and First Fit Decreasing Height (FFDH) from Coffman et al. They are 2-optimal and 17/10-optimal, respectively, and they run in O(n log n) time.</p>
<p>I recommend you first try NFDH: sort in decreasing order, store the result in a linked list, then repeatedly try to insert the items starting from the beginning (largest values first) until you have filled the bin or there are no more items that can be inserted. Then go to the next bin and so on.</p>
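<p>To make that concrete, here is a rough Python sketch of the greedy procedure just described (decreasing order, fill one bin at a time from the remaining items); it is only an illustration of the idea, not a drop-in replacement for your C++ code:</p>

<pre><code>def greedy_decreasing(items, capacity=10):
    remaining = sorted(items, reverse=True)   # the "linked list" sorted in decreasing order
    bins = []
    while remaining:
        free, packed, leftover = capacity, [], []
        for item in remaining:                # largest values first
            if item <= free:
                packed.append(item)
                free -= item
            else:
                leftover.append(item)
        bins.append(packed)                   # this bin is as full as one pass can make it
        remaining = leftover                  # go to the next bin
    return bins

print(greedy_decreasing([4, 3, 4, 1, 7, 8]))  # e.g. [[8, 1], [7, 3], [4, 4]] -> 3 bins
</code></pre>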
<p><strong>References</strong>: </p>
<p>Owen Kaser, Daniel Lemire, <a href="http://arxiv.org/abs/cs.DS/0703109" rel="noreferrer">Tag-Cloud Drawing: Algorithms for Cloud Visualization</a>, Tagging and Metadata for Social Information Organization (WWW 2007), 2007. (See Section 5.1 for a related discussion.)</p>
<p>E. G. Coffman, Jr., M. R. Garey, D. S. Johnson, and R. E. Tarjan. Performance bounds for level-oriented two-dimensional packing algorithms. SIAM J. Comput., 9(4):808–826, 1980.</p> | 2011-11-29 15:30:51.247000+00:00 | 2011-11-29 15:30:51.247000+00:00 | null | null | 8,310,385 | <p>I have an array which contains a list of different sizes of materials : <code>{4,3,4,1,7,8}</code> . However, the bin can accomodate materials upto size 10. I need to find out the minimum number of bins needed to pack all the elements in the array.</p>
<p>For the above array, you can pack in 3 bins and divide them as follows: <code>{4,4,1}</code>, <code>{3,7}</code>, <code>{8}</code>. There are other possible arrangements that also fit into three stock pipes, but it cannot be done with fewer.</p>
<p>I am trying to solve this problem through recursion in order to understand it better.</p>
<p>I am using <a href="http://algo2.iti.kit.edu/appol/les7.pdf" rel="nofollow">this</a> DP formulation (page 20 of the pdf file)</p>
<blockquote>
<p>Consider an input (n1;:::;nk) with n = ∑nj items<br>
Determine set of k-tuples (subsets of the input) that can be packed into a single bin<br>
That is, all tuples (q1;:::;qk) for which OPT(q1;:::;qk) = 1<br>
Denote this set by Q For each k-tuple q , we have OPT(q) = 1<br>
Calculate remaining values by using the recurrence : OPT(i1;:::;ik) = 1 +
minOPT(i1 - q1;:::;ik - qk)</p>
</blockquote>
<p>I have made the code, and it works fine for small data set. But if increase the size of my array to more than 6 elements, it becomes extremely slow -- <strong>takes about 25 seconds to solve an array containing 8 elements</strong> Can you tell me if theres anything wrong with the algorithm? I dont need an alternative solution --- <strong>just need to know why this is so slow, and how it can be improved</strong></p>
<p>Here is the code I have written in C++ :</p>
<pre><code>void recCutStock(Vector<int> & requests, int numStocks)
{
if (requests.size() == 0)
{
if(numStocks <= minSize)
{
minSize = numStocks;
}
// cout<<"GOT A RESULT : "<<numStocks<<endl;
return ;
}
else
{
if(numStocks+1 < minSize) //minSize is a global variable initialized with a big val
{
Vector<int> temp ; Vector<Vector<int> > posBins;
getBins(requests, temp, 0 , posBins); // 2-d array(stored in posBins) containing all possible single bin formations
for(int i =0; i < posBins.size(); i++)
{
Vector<int> subResult;
reqMinusPos(requests, subResult, posBins[i]); // subtracts the initial request array from the subArray
//displayArr(subResult);
recCutStock(subResult, numStocks+1);
}
}
else return;
}
}
</code></pre>
<p>The getBins function is as follows : </p>
<pre><code>void getBins(Vector<int> & requests, Vector<int> current, int index, Vector<Vector<int> > & bins)
{
if (index == requests.size())
{
if(sum(current,requests) <= stockLength && sum(current, requests)>0 )
{
bins.add(current);
// printBins(current,requests);
}
return ;
}
else
{
getBins(requests, current, index+1 , bins);
current.add(index);
getBins(requests, current, index+1 , bins);
}
}
</code></pre> | 2011-11-29 12:03:30.993000+00:00 | 2014-04-03 18:43:33.400000+00:00 | 2013-03-27 15:30:53.327000+00:00 | algorithm|recursion|bin|packing | ['http://arxiv.org/abs/cs.DS/0703109'] | 1 |
68,796,192 | <p>I found a way to get it to work without any additional axioms using the "not not" technique from "Classical Mathematics for a Constructive World" <a href="https://arxiv.org/pdf/1008.1213.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1008.1213.pdf</a></p>
<p>(I haven't tried to clean up the proofs at all, just to get it to type check. Sorry for weird phrasing in proofs.)</p>
<pre><code>Require Import Coq.Sets.Ensembles.
(* Classical reasoning without extra axioms. *)
Definition stable P := ~~P -> P.
Theorem stable_False : stable False.
unfold stable; intros nnF.
apply nnF; intros f.
apply f.
Qed.
Definition orW P Q := ~(~P /\ ~Q).
Definition exW {A} (P : A -> Prop) := ~(forall a, ~P a).
Theorem exW_strengthen {A} P Q (stQ : stable Q) (exQ : (exists a, P a) -> Q) (exWP : exW (fun (a : A) => P a)) : Q.
apply stQ; hnf; intros.
apply exWP; intros; hnf; intros.
apply H; apply exQ; apply (ex_intro _ _ H0).
Qed.
(* Ensembles *)
Definition emptyOnly A := Singleton _ (Empty_set A).
Theorem notEmpty_In A (s : Ensemble A) (ne : ~ Same_set _ s (Empty_set _)) : exW (fun a => In _ s a).
hnf; intros; apply ne; clear ne.
apply conj; hnf; intros; [ | destruct H0 ].
apply (False_ind _ (H _ H0)).
Qed.
Theorem enumerateSingletonPowerset A s (inc : Included _ s (emptyOnly A)):
orW (Same_set _ s (Empty_set _)) (Same_set _ s (emptyOnly A)).
hnf; intros; destruct H.
apply notEmpty_In in H.
revert H; apply exW_strengthen; intros; [ apply stable_False | ]; destruct H.
apply H0; clear H0; apply conj; hnf; intros; [ apply (inc _ H0) | ].
destruct H0.
destruct (inc _ H).
apply H.
Qed.
</code></pre>
<p>So, by rephrasing the theorem to use weak or, it is now possible to prove it directly, without appealing to the structure of A, to decidability of membership, or to classical logic.</p> | 2021-08-16 00:02:58.757000+00:00 | 2021-08-16 00:02:58.757000+00:00 | null | null | 68,789,154 | <p>How do you prove <code>enumerateSingletonPowerset</code>?</p>
<pre><code>Require Import Coq.Sets.Ensembles.
Definition emptyOnly A := Singleton _ (Empty_set A).
Theorem enumerateSingletonPowerset A s (inc : Included _ s (emptyOnly A)):
Same_set _ s (Empty_set _) \/ Same_set _ s (emptyOnly A).
</code></pre>
<p>I'm using <code>Same_set</code> to avoid extensionality. (Either way is fine.)</p>
<p>Conceptually, it seems simple to just say I have
<code>{{}}</code>
so the powerset is
<code>{{}, {{}}}</code>
and that's it. But, it's not clear how to say anything like that with these primitives on their own.</p>
<p>I'd be tempted to try destructing on whether the empty set is in the set s. But, since Ensemble is propositional, checking set membership is not generally decidable. A first thought is</p>
<pre><code>Axiom In_dec : forall A a e, In A e a \/ ~In A e a.
Theorem ExcludedMiddle P : P \/ ~P.
apply (In_dec _ tt (fun _ => P)).
Qed.
</code></pre>
<p>But, that is too powerful and immediately puts me into classical logic. The finite case is easy, but I plan on dealing with larger sets (e.g. Reals), so In and Included would not generally be computable. Are there axioms I could add that could allow In and Included to pretend to be decidable without making everything else decidable too?</p>
<p>Edit: Changed from pair to singleton since quantity isn't important.</p> | 2021-08-15 06:16:42.290000+00:00 | 2021-08-16 00:02:58.757000+00:00 | 2021-08-15 23:59:44.520000+00:00 | coq | ['https://arxiv.org/pdf/1008.1213.pdf'] | 1 |
52,474,500 | <p>As discussed in the comments, the problem lies in the large batch size, and - maybe also - in the optimizer used for the training.</p>
<p>It is hard to determine an exact reason why your algorithm did not converge with the current settings, but it can be argued as such:</p>
<h2>Large batch sizes have a slower convergence.</h2>
<p>Counter-intuitively, training with larger batch sizes will actually in some instances slow down your training. The reason behind this is purely speculative and depends on the exact nature and distribution of your data. Generally, though, having a smaller batch size means having more frequent updates. If your calculated gradients all point in a similar direction, having these more frequent updates will lead to a faster convergence.<br/>
Good practice is to have a batch size that is <em>never</em> larger than 1000. In most scenarios 128 is a good rule of thumb and a nice trade-off between the speed advantage of larger batches, and the nice convergence properties of smaller batch sizes. Note that this only makes sense in cases where you have a lot of training data.</p>
<p>Also note that theoretically the gradients of multiple examples in that large setting can "average out", meaning the large batch size will have only a very small and indistinct gradient. Having fewer samples in a mini-batch will reduce this chance, although it increases the risk of "going the wrong direction" (i.e. having a gradient that points in the opposite direction).</p>
<h2>SGD is a good starting point, but there exist several optimizations.</h2>
<p>One of these "smarter" variants is the suggested ADAM method. There is a highly cited <a href="https://arxiv.org/pdf/1412.6980.pdf" rel="nofollow noreferrer">paper about it</a>, which can give you a vague idea of what happens under the hood. Essentially, SGD is a very naive solution that does not have any special assumptions or built-in optimizations.
(From what I know, ADAM, for example, additionally keeps running estimates of the first and second moments of the gradients.)
<p>There exist many different ones, and there is a ton of <a href="http://ruder.io/optimizing-gradient-descent/" rel="nofollow noreferrer">theoretical articles</a> (and <a href="https://www.youtube.com/watch?v=OWzkRD6MjYI" rel="nofollow noreferrer">practical comparisons</a>) of the different implementations. It is worth a lot to at least understand partially what the parameters do, and know what values make sense to set them to.</p>
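<p>Putting both suggestions together, the compile/fit calls from your snippet could be adapted roughly like this (same model definition as in the question; the exact values are starting points, not tuned settings):</p>

<pre><code>adam = keras.optimizers.Adam(lr=0.001)
model.compile(optimizer=adam, loss=keras.losses.sparse_categorical_crossentropy)
model.fit(trainData, labeledData.sentiment, epochs=20, batch_size=128)
</code></pre>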
<p>For example, you already set the learning rate to a sensible value (0.001); personally, I usually end up with values between 0.001-0.01, and maybe use learning rate decay over time if I have larger learning rates.</p> | 2018-09-24 07:28:03.413000+00:00 | 2018-09-24 07:28:03.413000+00:00 | null | null | 52,473,530 | <p>This is my code snippet:</p>
<pre><code>model=keras.Sequential()
model.add(keras.layers.LSTM(28,input_shape=(300,1),return_sequences=True))
model.add(keras.layers.Dropout(0.4))
model.add(keras.layers.LSTM(14))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(2,activation="softmax"))
sgd=keras.optimizers.SGD(lr=0.001)
model.compile(optimizer=sgd,loss=keras.losses.sparse_categorical_crossentropy)
model.fit(trainData,labeledData.sentiment,epochs=20,batch_size=3000)
</code></pre>
<p>trainData's shape is [batch_size, 300, 1]; when I begin training this model, the loss does not go down.</p>
<pre><code>Epoch 1/20
25000/25000 [==============================] - 2s 89us/step - loss: 0.6927
Epoch 2/20
25000/25000 [==============================] - 0s 8us/step - loss: 0.6928
Epoch 3/20
25000/25000 [==============================] - 0s 8us/step - loss: 0.6928
Epoch 4/20
25000/25000 [==============================] - 0s 8us/step - loss: 0.6928
Epoch 5/20
25000/25000 [==============================] - 0s 8us/step - loss: 0.6928
Epoch 6/20
25000/25000 [==============================] - 0s 8us/step - loss: 0.6926
</code></pre>
<p>what am i missing?</p> | 2018-09-24 06:09:02.690000+00:00 | 2018-09-24 07:28:03.413000+00:00 | null | neural-network|keras|lstm|rnn | ['https://arxiv.org/pdf/1412.6980.pdf', 'http://ruder.io/optimizing-gradient-descent/', 'https://www.youtube.com/watch?v=OWzkRD6MjYI'] | 3 |
47,849,317 | <p>In the example of AlphaZero Chess, the network's output shape allows for all possible moves for any pieces starting on any square.</p>
<p>From the paper <a href="https://arxiv.org/abs/1712.01815" rel="noreferrer">Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm</a>:</p>
<blockquote>
<p>A move in chess may be described in two parts: selecting the piece to move, and then
selecting among the legal moves for that piece. We represent the policy π(a|s) by a 8 × 8 × 73
stack of planes encoding a probability distribution over 4,672 possible moves. Each of the 8 × 8
positions identifies the square from which to “pick up” a piece. The first 56 planes encode
possible ‘queen moves’ for any piece: a number of squares [1..7] in which the piece will be
moved, along one of eight relative compass directions {N, N E, E, SE, S, SW, W, N W }. The
next 8 planes encode possible knight moves for that piece. The final 9 planes encode possible underpromotions for pawn moves or captures in two possible diagonals, to knight, bishop or
rook respectively. Other pawn moves or captures from the seventh rank are promoted to a
queen.</p>
</blockquote>
<p>So for example the network is <em>allowed</em> to output a positive probability for the move <code>g1-f3</code> even if there isn't a knight on <code>g1</code>, or for the move <code>e8=Q</code> even if there isn't a pawn on <code>e7</code>, or <code>d1-h5</code> if there is a Queen in <code>d1</code> but another piece is blocking the diagonal.</p>
<p>The key is that it outputs a probability distribution over possible moves, and since it is trained by playing against itself where only legal moves are allowed, it will learn to output very low or zero probabilities for illegal moves.</p>
<p>More precisely, after a set number of self-play games, the network is trained using supervised learning to predict the probability and value of moves given a board position. At the very beginning of self-play the network has random weights and it will output significant probabilities for lots of impossible moves, but after one or more iterations of supervised learning the move output probabilities will start to look much more reasonable.</p>
<p>The reason the AlphaZero team chose this architecture over something that enforces rules in the network is simple: The output must take a fixed size, since there should be a fixed number of output neurons. It wouldn't make sense to have a different number of output neurons corresponding to a different number of legal moves. Alternatively, it wouldn't make sense to zero out outputs for non-legal moves inside the network, because this would be a highly non-standard operation which would probably be a nightmare to run backpropagation on. You would need to differentiate a chess move generator!</p>
<p>Furthermore, when the network uses its policy output to play games, it can simply normalize each output over only legal moves. In this way we are enforcing move legality within the self-play system, but not within the neural network architecture itself. This would be done with the aid of a move generator.</p>
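<p>As a small, hypothetical NumPy sketch of that renormalization step (here <code>legal_mask</code> is assumed to be a 0/1 vector of length 4,672 produced by a separate move generator; it is not part of the AlphaZero code):</p>

<pre><code>import numpy as np

def mask_and_renormalize(policy, legal_mask):
    # policy: network output over all 4,672 move slots
    # legal_mask: 1 for legal moves in the current position, 0 otherwise
    masked = policy * legal_mask
    return masked / masked.sum()   # probability distribution over legal moves only
</code></pre>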
<p>Since you are asking about keras, specifically, you could represent such an output layer as:</p>
<pre><code>model.add(Dense(4672, activation='softmax'))
</code></pre>
<p><strong>In Summary:</strong> It is not necessary to <em>enforce</em> move legality in the architecture of a neural network for predicting chess moves; we can allow all possible moves (including illegal ones) and train the network to output low or zero probabilities for illegal moves. Then when we use the move probabilities for playing, we can normalize over only legal moves to get the desired result, but this happens outside of the neural network.</p> | 2017-12-16 19:39:49.880000+00:00 | 2017-12-16 20:44:00.293000+00:00 | 2017-12-16 20:44:00.293000+00:00 | null | 47,847,461 | <p>How do I apply rules, like chess rules, to a neural network, so the network doesn't predict/train invalid moves?</p> | 2017-12-16 15:59:15.803000+00:00 | 2017-12-16 20:44:00.293000+00:00 | 2017-12-16 19:40:45.283000+00:00 | machine-learning|tensorflow|neural-network|keras|lstm | ['https://arxiv.org/abs/1712.01815'] | 1
68,651,949 | <p>If you want to avoid <strong>Writer starvation</strong>, then you could consider another algorithm. I researched some algorithms that avoid the writer-starvation problem (e.g. in this <a href="https://arxiv.org/pdf/1309.4507" rel="nofollow noreferrer">paper</a>). The pseudo-code of one proposed solution is the following: <a href="https://i.stack.imgur.com/Tw67R.png" rel="nofollow noreferrer">pseudo-code image</a>.</p>
<pre><code>public class ReadWriterSynchronizer : IDisposable
{
public ReadWriterSynchronizer(string name, int maxReaderCount)
{
myIncomingOperation = new Semaphore(1, 1, name + ".Incoming");
myReadOperation = new Semaphore(1, 1, name + ".Reader");
myWriteOperation = new Semaphore(1, 1, name + ".Writer");
myCrossprocessCounter = new ReaderCounter(name + ".Counter", maxReaderCount);
}
public void EnterReadLock()
{
myIncomingOperation.WaitOne();
myReadOperation.WaitOne();
// Local variable is necessary, because of optimalization
int currentCount = myCrossprocessCounter.Increase();
if (currentCount == 1)
{
myWriteOperation.WaitOne();
}
myReadOperation.Release();
myIncomingOperation.Release();
}
public void ExitReadLock()
{
myReadOperation.WaitOne();
// Local variable is necessary, because of optimalization
int currentCount = myCrossprocessCounter.Decrease();
if (currentCount == 0)
{
myWriteOperation.Release();
}
myReadOperation.Release();
}
public void EnterWriteLock()
{
myIncomingOperation.WaitOne();
myWriteOperation.WaitOne();
}
public void ExitWriteLock()
{
myWriteOperation.Release();
myIncomingOperation.Release();
}
public void Dispose()
{
myIncomingOperation?.Dispose();
myReadOperation?.Dispose();
myWriteOperation?.Dispose();
myCrossprocessCounter?.Dispose();
GC.SuppressFinalize(this);
}
private readonly ReaderCounter myCrossprocessCounter;
private readonly Semaphore myIncomingOperation;
private readonly Semaphore myReadOperation;
private readonly Semaphore myWriteOperation;
}
</code></pre>
<p>Unfortunately, the <code>ctr</code> variable is an integer, so it only works within a single process. I decided to replace the integer counter with a <strong>Semaphore counter</strong> (<code>ReaderCounter</code>) so it could be used for cross-process communication. Essentially, I used <code>WaitOne(0)</code> in order to <strong>decrease</strong> and <code>Release()</code> to <strong>increase</strong> the reader counter.</p>
<pre><code>internal class ReaderCounter : IDisposable
{
internal ReaderCounter(string name, int maxConcurrentRead)
{
MaximumCount = maxConcurrentRead + InitialCount;
myReadCounterSemaphore = new Semaphore(InitialCount, MaximumCount, name);
myIncomingOperation = new Semaphore(1, 1, name + ".Incoming");
}
internal int Increase()
{
int counter = RetrieveCurrentCount();
// Not allowing to exceed maximum count
if (counter != MaximumCount - 1)
{
counter = myReadCounterSemaphore.Release();
}
else
{
counter++;
}
return counter;
}
internal int Decrease()
{
int counter = RetrieveCurrentCount() - 1;
myReadCounterSemaphore.WaitOne(0);
return counter;
}
public void Dispose()
{
myReadCounterSemaphore?.Dispose();
myIncomingOperation?.Dispose();
GC.SuppressFinalize(this);
}
internal int MaximumCount { get; private set; }
private const int InitialCount = 1;
private readonly Semaphore myReadCounterSemaphore;
private readonly Semaphore myIncomingOperation;
private int RetrieveCurrentCount()
{
myReadCounterSemaphore.WaitOne(0);
int counter = myReadCounterSemaphore.Release();
return counter;
}
}
</code></pre>
<p>NOTE: For easier usage, 1 buffer count was added to the reader counter. For example, using 5 readers means a [1,6] initial Semaphore count. Decreasing from the minimum count returns -1, and increasing from the maximum count returns maximum count + 1.</p>
<p><strong>UPDATE</strong>: I have created a GitHub repository with console applications, so you can play with it. It also contains ReaderWriterSynchronizer with <code>TryEnterReadLock()</code> and <code>TryEnterWriteLock()</code> methods: <a href="https://github.com/SzilvasiPeter/Cross-process-ReaderWriterLock" rel="nofollow noreferrer">https://github.com/SzilvasiPeter/Cross-process-ReaderWriterLock</a></p> | 2021-08-04 13:16:13.473000+00:00 | 2021-08-04 13:22:38.497000+00:00 | 2021-08-04 13:22:38.497000+00:00 | null | 3,503,833 | <p>Is there a read/write locking mechanism that works across processes (similar to Mutex, but read/write instead exclusive locking)? I would like to allow concurrent read access, but exclusive write access.</p> | 2010-08-17 15:00:07.407000+00:00 | 2021-08-04 13:22:38.497000+00:00 | 2010-08-17 15:03:05.570000+00:00 | .net|synchronization|locking|readerwriterlock|cross-process | ['https://arxiv.org/pdf/1309.4507', 'https://i.stack.imgur.com/Tw67R.png', 'https://github.com/SzilvasiPeter/Cross-process-ReaderWriterLock'] | 3 |
73,008,256 | <p>Yes, but it may need additional work.</p>
<p>If the data come from a <em>randomized controlled trial</em>, then fitting a treatment-response model in every subgroup has the same type of causal interpretation it has in the full sample.</p>
<p>However, in <em>observational data</em>, it is necessary to first estimate the propensity of treatment (i.e., the probability of treatment given the regressors) and use that in the treatment-response model in every subset. This is also known as "local centering" of the treatment indicator. Additionally, local centering of the dependent response variable may improve the performance of the model further.</p>
<p>See Dandl <em>et al.</em> (2022) for more details and comparisons. For the setup in randomized controlled trials, there is also a dedicated interface package <code>model4you</code> that facilitates fitting "personalized" treatment-response models using trees and random forests. See the Seibold <em>et al.</em> publications for details on the software and underlying methods.</p>
<ul>
<li><p>Susanne Dandl, Torsten Hothorn, Heidi Seibold, Erik Sverdrup, Stefan Wager, Achim Zeileis (2022). “What Makes Forest-Based Heterogeneous Treatment Effect Estimators Work?.” arXiv:2206.10323, arXiv.org E-Print Archive. <a href="https://doi.org/10.48550/arXiv.2206.10323" rel="nofollow noreferrer">doi:10.48550/arXiv.2206.10323</a></p>
</li>
<li><p>Heidi Seibold, Achim Zeileis, Torsten Hothorn (2019). “model4you: An R Package for Personalised Treatment Effect Estimation.” Journal of Open Research Software, 7(17), 1-6. <a href="https://doi.org/10.5334/jors.219" rel="nofollow noreferrer">doi:10.5334/jors.219</a></p>
</li>
<li><p>Heidi Seibold, Achim Zeileis, Torsten Hothorn (2018). “Individual Treatment Effect Prediction for Amyotrophic Lateral Sclerosis Patients.” Statistical Methods in Medical Research, 27(10), 3104-3125. <a href="https://doi.org/10.1177/0962280217693034" rel="nofollow noreferrer">doi:10.1177/0962280217693034</a></p>
</li>
<li><p>Heidi Seibold, Achim Zeileis, Torsten Hothorn (2016). “Model-Based Recursive Partitioning for Subgroup Analyses.” The International Journal of Biostatistics, 12(1), 45-63. <a href="https://doi.org/10.1515/ijb-2015-0032" rel="nofollow noreferrer">doi:10.1515/ijb-2015-0032</a></p>
</li>
</ul> | 2022-07-16 23:19:30.980000+00:00 | 2022-07-16 23:19:30.980000+00:00 | null | null | 72,989,448 | <p>Can the exposure-response relationship estimated within each subgroup, generated by using "partykit", have causal interpretation?</p> | 2022-07-15 05:34:30.363000+00:00 | 2022-07-16 23:19:30.980000+00:00 | null | party | ['https://doi.org/10.48550/arXiv.2206.10323', 'https://doi.org/10.5334/jors.219', 'https://doi.org/10.1177/0962280217693034', 'https://doi.org/10.1515/ijb-2015-0032'] | 4 |
56,893,635 | <p>While I'm unsure whether it provides a better solution than any you've already implemented, TensorFlow, and deep learning in general, can indeed be used for this purpose. A neural network can be created which takes an image as input and outputs a numeric vector. The Euclidean distance between vectors can be used to determine the similarity between different images, an approach which has been applied effectively for facial recognition (see <a href="https://arxiv.org/abs/1503.03832" rel="nofollow noreferrer">this paper</a>).</p>
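<p>As a minimal sketch of that matching step (here <code>embed()</code> stands for whatever network maps an image to a vector, and the catalogue embeddings are assumed to be precomputed; both are assumptions, not part of any specific library):</p>

<pre><code>import numpy as np

def best_match(query_image, catalogue_vecs, embed):
    # catalogue_vecs: (N, D) array of embeddings, one row per catalogue image
    query_vec = embed(query_image)
    dists = np.linalg.norm(catalogue_vecs - query_vec, axis=1)  # Euclidean distances
    best = int(np.argmin(dists))
    return best, float(dists[best])   # index of the most similar catalogue image
</code></pre>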
<p>For a starting point in implementing this solution using TensorFlow, see <a href="https://douglasduhaime.com/posts/identifying-similar-images-with-tensorflow.html" rel="nofollow noreferrer">this tutorial</a>.</p> | 2019-07-04 19:50:30.533000+00:00 | 2019-07-04 19:55:34.780000+00:00 | 2019-07-04 19:55:34.780000+00:00 | null | 56,893,239 | <p>I know this question is likely to be closed as "opinion based", but I could not find any resource online and every link pointed in asking on Stack Overflow, so please be patient. </p>
<p>I'm trying to understand if Tensorflow is the right tool for object detection. I'm not talking about classification, but real object detection and recognition.<br>
My use case is the following: given image A (a live photo), find the matching one inside a catalogue of thousands of different images.<br>
For example: live scanning of a supermarket product, find the matching one inside a high-res catalogue of images. I'm not interested in knowing whether the product is a shoe or a toothpaste; I want to know the "most matching" image (i.e. Prada model X or Colgate mint flavoured). </p>
<p>I already have a working script developed a few years ago with OpenCV, using SURF feature detection with FLANN, but I wanted to know if there's a better tool for the job.<br>
Can anyone point me in the right direction?</p> | 2019-07-04 19:07:49.917000+00:00 | 2019-07-04 19:55:34.780000+00:00 | null | tensorflow | ['https://arxiv.org/abs/1503.03832', 'https://douglasduhaime.com/posts/identifying-similar-images-with-tensorflow.html'] | 2 |
56,157,184 | <p>I think this paper answers your question:
<a href="https://arxiv.org/pdf/1507.04296.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1507.04296.pdf</a></p>
<p>This paper runs a central learner with a central replay memory. In addition, there are <code>n</code> workers which are replicas of the central learner, each with its own replay memory. Each worker fills its own replay memory, and in each training step it can use its own replay memory (if it is large enough) or the central replay memory. Before each action selection the weights of the network are synced with the server, and after each single step of training, the gradients are sent back to the server. </p>
<p>Also consider:
<a href="https://arxiv.org/pdf/1602.01783.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1602.01783.pdf</a></p>
<p>which proposes A3C; later, A2C was proposed as a simpler version of A3C. The point is that the asynchronous Q-learning algorithm did not get much attention because of A3C’s performance. Basically, it is not efficient to use a distributed DQN algorithm, since the replay memory requires moving a lot of data back and forth between the workers and the server. Indeed, A3C was proposed to solve this problem with the replay memory: it runs one instance of the model and environment in each worker and only asynchronously updates the weights. </p>
<p>I hope this has answered your question. </p>
<p>Afshin</p> | 2019-05-15 20:30:22.847000+00:00 | 2019-05-15 20:30:22.847000+00:00 | null | null | 56,153,309 | <p>My friend and I are training a DDQN for learning 2D soccer. I trained the model for about 40,000 episodes, but it took 6 days. Is there a way to train this model concurrently?</p>
<p>For example, I have 4 cores and 4 threads, and each thread trains the model 10,000 times concurrently. Therefore, the time to train 40,000 episodes would be reduced from 6 days to 1.5 days, like parallelizing a for loop.</p>
<p>EDIT: If we train a model for 10,000 episodes in 4 threads separately, would forming a new model consisting of the average of those trained models give the effect of training for 40,000 episodes, or would it be a model that was trained for 10,000 episodes but a better one?</p> | 2019-05-15 15:51:59.233000+00:00 | 2019-05-15 20:30:22.847000+00:00 | 2019-05-15 17:21:38.440000+00:00 | neural-network|reinforcement-learning | ['https://arxiv.org/pdf/1507.04296.pdf', 'https://arxiv.org/pdf/1602.01783.pdf'] | 2
72,974,805 | <p>You can use VGG19 (or VGG16) with the input shape (32, 32, 3).
32 is the smallest input size VGG19 accepts.
<a href="https://arxiv.org/pdf/1409.1556.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1409.1556.pdf</a></p>
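<p>A rough sketch of what that could look like in Keras (the dense-layer sizes are arbitrary placeholders; Fashion-MNIST images are 28x28x1, so they would first need to be resized to 32x32 and repeated to 3 channels):</p>

<pre><code>from tensorflow.keras.applications import VGG19
from tensorflow.keras import layers, models

base = VGG19(weights="imagenet", include_top=False, input_shape=(32, 32, 3))
base.trainable = False   # optionally unfreeze later for fine-tuning

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
</code></pre>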
<p>And also, you should use <strong>data_generator</strong> to load the dataset.</p> | 2022-07-14 03:15:36.073000+00:00 | 2022-07-18 02:17:41.177000+00:00 | 2022-07-18 02:17:41.177000+00:00 | null | 69,997,515 | <p>I'm working on a project to classify the images from the fashion MNIST database. I'd like to use a pre-trained CNN (possibly one that I can fine-tune), but all the ones I find require image sizes of at least 224x224 pixels.</p>
<p>The images from the fashion MNIST database are 28x28x1 in size, and for this application I have 50000 training images and 20000 testing images. I tried simply resizing the 28x28 images to 224x224, but creating the array of size 20000x224x224 made Google Colab crash because it ran out of RAM...</p>
<p>I'm not sure if this is because I did something wrong or just because I have too many images to be resizing them to 224x224.</p>
<p>So, is there a model available for smaller images? I can't find anything for images smaller than 224x224. Or is there something else I could do here?</p>
<p>Thanks a lot.</p> | 2021-11-17 00:09:26.493000+00:00 | 2022-07-18 02:17:41.177000+00:00 | null | python|conv-neural-network|pre-trained-model|image-classification | ['https://arxiv.org/pdf/1409.1556.pdf%20http://arxiv.org/abs/1409.1556.pdf'] | 1 |
48,359,103 | <p>In HoTT with univalence, it is indeed provable that <code>list1 A</code> is equal to <code>list2 A</code> for all <code>A</code>. Given a proof <code>p : list1 A = list2 A</code>, transport (or <code>subst</code>) gives you <code>P (list1 A) -> P (list2 A)</code> for any <code>P</code>. In cubical type theories, such transporting may also compute as expected. To my knowledge, cubical type theory (<a href="https://hal.inria.fr/hal-01378906/document" rel="nofollow noreferrer">CCHM</a> or <a href="https://arxiv.org/pdf/1712.01800.pdf" rel="nofollow noreferrer">cartesian</a>) is the only setting where this currently works. <a href="https://github.com/mortberg/cubicaltt" rel="nofollow noreferrer"><code>cubicaltt</code></a> is the most usable (but still not really practical) implementation.</p> | 2018-01-20 17:31:36.753000+00:00 | 2018-09-19 07:08:37.757000+00:00 | 2018-09-19 07:08:37.757000+00:00 | null | 48,355,427 | <p>Say I have two inductively defined datatypes:</p>
<pre><code>Inductive list1 (A : Type) : Type :=
| nil1 : list1 A
| cons1 : A -> list1 A -> list1 A.
</code></pre>
<p>and</p>
<pre><code>Inductive list2 (A : Type) : Type :=
| nil2 : list2 A
| cons2 : A -> list2 A -> list2 A.
</code></pre>
<p>For any <code>P (list1 a)</code> I should be able to construct a <code>P (list2 a)</code>, by applying the exact same method I used to construct <code>P (list1 a)</code> except replacing <code>nil1</code> with <code>nil2</code>, <code>list1</code> with <code>list2</code> and <code>cons1</code> with <code>cons2</code>. Similarly, any function that takes <code>list1 a</code> as a parameter could be extended to take a <code>list2 a</code>.</p>
<p>Is there a type system that allows me to speak of two datatypes having the same shape in this manner (having identically shaped constructors), and prove <code>P (list1 a) -> P (list2 a)</code>? For instance, is this something that univalence, HOTT, or a cubical/observational type system allows? It might also allow defining functions like <code>reverse: list_like a -> list_like a</code> that accept both <code>list1</code>s and <code>list2</code>s as parameters.</p> | 2018-01-20 11:13:07.200000+00:00 | 2018-09-19 07:08:37.757000+00:00 | null | types|coq|idris|dependent-type|type-theory | ['https://hal.inria.fr/hal-01378906/document', 'https://arxiv.org/pdf/1712.01800.pdf', 'https://github.com/mortberg/cubicaltt'] | 3 |
50,402,350 | <p>Merge layers perform the selected operation. For details, see the Keras documentation: <a href="https://keras.io/layers/merge/" rel="nofollow noreferrer">https://keras.io/layers/merge/</a>.
Add: adds a list of inputs (tensors, in the case of a Keras model).
Multiply: element-wise multiplication of a list of inputs.
And so on. </p>
<p>Therefore, select according to your application. The effect would be different, for scientific comments please go through:</p>
<ul>
<li>ResNet (uses addition) <a href="https://arxiv.org/abs/1512.03385" rel="nofollow noreferrer">https://arxiv.org/abs/1512.03385</a> </li>
<li>UNET (uses concatenation)<a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">https://arxiv.org/abs/1505.04597</a>. </li>
</ul>
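<p>As a minimal sketch (assuming the <code>tf.keras</code> API; the tensor names are just illustrative), the shape difference between <code>Add</code> and <code>Concatenate</code> looks like this:</p>
<pre><code>from tensorflow import keras
from tensorflow.keras import layers

a = keras.Input(shape=(16,))
b = keras.Input(shape=(16,))

added  = layers.Add()([a, b])          # shape (None, 16): element-wise sum, features stay aligned
concat = layers.Concatenate()([a, b])  # shape (None, 32): features are stacked side by side

model = keras.Model([a, b], [added, concat])
model.summary()
</code></pre>
<p>Addition keeps the dimensionality fixed (as in ResNet skip connections), while concatenation grows it (as in UNET skip connections).</p>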
<p>If you are from a CV background, this paper would give you more insights: <a href="https://arxiv.org/pdf/1611.06612.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1611.06612.pdf</a> </p> | 2018-05-18 00:56:25.460000+00:00 | 2018-05-18 01:01:53.430000+00:00 | 2018-05-18 01:01:53.430000+00:00 | null | 49,990,882 | <p>Keras has many different ways of merging inputs like <code>Add()</code>, <code>Subtract()</code>, <code>Multiply()</code>, <code>concatenate()</code>, etc...</p>
<p>Do they all have the same effect or are there situations where one is preferable?</p> | 2018-04-23 22:36:53.450000+00:00 | 2018-05-18 01:01:53.430000+00:00 | null | python|neural-network|keras | ['https://keras.io/layers/merge/', 'https://arxiv.org/abs/1512.03385', 'https://arxiv.org/abs/1505.04597', 'https://arxiv.org/pdf/1611.06612.pdf'] | 4 |
40,097,401 | <p>According to <a href="https://arxiv.org/pdf/1502.01852v1.pdf" rel="nofollow">He et al 2015</a> Eq. 15, the theoretical weight variance for one layer when using ReLu becomes:</p>
<pre><code>n*Var[W] = 2
</code></pre>
<p>where n is the layer size. </p>
<p>If you want to use pooled variance of both the in layer and the out layer, then it becomes:</p>
<pre><code>(fan_in, fan_out) = ...
low = -2*np.sqrt(1.0/(fan_in + fan_out))
high = 2*np.sqrt(1.0/(fan_in + fan_out))
</code></pre>
<p>If you are using tensorflow, they have a <a href="https://www.tensorflow.org/versions/r0.11/api_docs/python/contrib.layers.html#variance_scaling_initializer" rel="nofollow">variance_scaling_initializer</a>, where you can set the factor variable and the mode variable to control how you want the initialization to be.</p>
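<p>A minimal usage sketch (assuming the TF 1.x graph-mode API; <code>fan_in</code> and <code>fan_out</code> are placeholders): the initializer is simply handed to the variable that holds the weights.</p>
<pre><code>import tensorflow as tf

init = tf.contrib.layers.variance_scaling_initializer(factor=2.0, mode='FAN_IN')
W = tf.get_variable("W", shape=[fan_in, fan_out], initializer=init)
</code></pre>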
<p>If you use the default setting of argument factor=2.0 for this initializer, you'll get the initialization variances suggested by He et al 2015 for ReLu activation. Although you can play around with the argument mode to get slightly different weight initialization variances. Only use in layer:</p>
<pre><code>tf.contrib.layers.variance_scaling_initializer(factor=2.0, mode='FAN_IN')
</code></pre>
<p>would give you following:</p>
<pre><code>(fan_in, fan_out) = ...
low = -np.sqrt(2.0/fan_in)
high = np.sqrt(2.0/fan_in)
</code></pre>
<p>Use both in and out layers:</p>
<pre><code>tf.contrib.layers.variance_scaling_initializer(factor=2.0, mode='FAN_AVG')
</code></pre>
<p>would give you:</p>
<pre><code>(fan_in, fan_out) = ...
low = -np.sqrt(4.0/(fan_in+fan_out)) = -2.0*np.sqrt(1.0/(fan_in+fan_out))
high = np.sqrt(4.0/(fan_in+fan_out)) = 2.0*np.sqrt(1.0/(fan_in+fan_out))
</code></pre>
<p>Only use out layer:</p>
<pre><code>tf.contrib.layers.variance_scaling_initializer(factor=2.0, mode='FAN_OUT')
</code></pre>
<p>would give you:</p>
<pre><code>(fan_in, fan_out) = ...
low = -np.sqrt(2.0/fan_out)
high = np.sqrt(2.0/fan_out)
</code></pre> | 2016-10-17 23:51:41.063000+00:00 | 2016-10-17 23:51:41.063000+00:00 | null | null | 39,231,032 | <p>As a followup to a reply (not the chosen one) in <a href="https://stackoverflow.com/questions/33640581/how-to-do-xavier-initialization-on-tensorflow">How to do Xavier initialization on TensorFlow</a>: Anyone having an idea, which values to use in relu and especially leaky relu?</p>
<p>I mean this part:</p>
<pre><code># use 4 for sigmoid, 1 for tanh activation
</code></pre>
<p>This was given there:</p>
<pre><code>(fan_in, fan_out) = ...
low = -4*np.sqrt(6.0/(fan_in + fan_out)) # use 4 for sigmoid, 1 for tanh activation
high = 4*np.sqrt(6.0/(fan_in + fan_out))
return tf.Variable(tf.random_uniform(shape, minval=low, maxval=high, dtype=tf.float32))
</code></pre> | 2016-08-30 15:02:51.217000+00:00 | 2016-10-17 23:51:41.063000+00:00 | 2017-05-23 12:09:41.573000+00:00 | machine-learning|initialization|tensorflow | ['https://arxiv.org/pdf/1502.01852v1.pdf', 'https://www.tensorflow.org/versions/r0.11/api_docs/python/contrib.layers.html#variance_scaling_initializer'] | 2 |
<p>It likely uses <code>blazeface</code>, the face detector used in MediaPipe. I could not find a direct answer, but when analyzing an APK that uses ML Kit face detection, <code>blazeface.tfl</code> can be found in the assets folder.</p>
<p><a href="https://i.stack.imgur.com/3uFkJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3uFkJ.png" alt="enter image description here" /></a></p>
<p>The MediaPipe pose detection <a href="https://google.github.io/mediapipe/solutions/pose.html" rel="nofollow noreferrer">doc mentions</a> that <code>blazepose</code> powers the ML Kit Pose Detection API. So, blazeface is likely to power ML Kit on-device face detection as well.</p>
<p>Links to the documentation, paper, and related materials:</p>
<p><a href="https://google.github.io/mediapipe/solutions/face_detection.html" rel="nofollow noreferrer">https://google.github.io/mediapipe/solutions/face_detection.html</a></p>
<p><a href="https://arxiv.org/abs/1907.05047" rel="nofollow noreferrer">https://arxiv.org/abs/1907.05047</a></p>
<p><a href="https://docs.google.com/presentation/d/1YCtASfnYyZtH-41QvnW5iZxELFnf0MF-pPWSLGj8yjQ/present?slide=id.g5bc8aeffdd_1_0" rel="nofollow noreferrer">https://docs.google.com/presentation/d/1YCtASfnYyZtH-41QvnW5iZxELFnf0MF-pPWSLGj8yjQ/present?slide=id.g5bc8aeffdd_1_0</a></p>
<p><a href="https://drive.google.com/file/d/1u6aB6wxDY7X2TmeUUKgFydulNtXkb3pu/view" rel="nofollow noreferrer">https://drive.google.com/file/d/1u6aB6wxDY7X2TmeUUKgFydulNtXkb3pu/view</a></p> | 2020-12-11 21:26:41.030000+00:00 | 2020-12-11 21:26:41.030000+00:00 | null | null | 65,159,888 | <p>MLKit provides good API documentations and guides for Face detection (<a href="https://developers.google.com/ml-kit/vision/face-detection" rel="nofollow noreferrer">https://developers.google.com/ml-kit/vision/face-detection</a>). However, I can not find any informations about the algorihms/baseline model or related research papers behind the scene. Can someone provide any suggestions about it's implementation ?</p> | 2020-12-05 17:30:04.903000+00:00 | 2020-12-11 21:26:41.030000+00:00 | null | face-detection|google-mlkit | ['https://i.stack.imgur.com/3uFkJ.png', 'https://google.github.io/mediapipe/solutions/pose.html', 'https://google.github.io/mediapipe/solutions/face_detection.html', 'https://arxiv.org/abs/1907.05047', 'https://docs.google.com/presentation/d/1YCtASfnYyZtH-41QvnW5iZxELFnf0MF-pPWSLGj8yjQ/present?slide=id.g5bc8aeffdd_1_0', 'https://drive.google.com/file/d/1u6aB6wxDY7X2TmeUUKgFydulNtXkb3pu/view'] | 6 |
<p>There is no reason not to fine-tune the GloVe embeddings in order to get a better score on your final task, except if you have to keep a link with another model which uses the original embeddings (for interpreting your results, for instance).</p>
<p>When fine-tuning the embeddings for your objective function, the word embeddings will (potentially) lose their initial properties (performing well on word similarity and analogy tasks).</p>
<p>Using word embeddings is just a way not to initialize with random vectors, so would it make sense to keep the random vectors fixed?</p>
<p>There are several articles which fine tune the word embeddings, for instance this one: <a href="https://arxiv.org/abs/1505.07931" rel="nofollow noreferrer">https://arxiv.org/abs/1505.07931</a></p>
<p>I made the assumption that you have enough training data. Otherwise it would be better to keep the word embeddings fixed, since that involves fewer parameters to train and thus avoids overfitting.</p>
<p>Basically, I see two options when using GloVe to get dense vector representations that can be used by downstream NNs.</p>
<p>1) Fine-tune GloVe embeddings (in pytorch terms, gradient enabled)</p>
<p>2) Just use the embeddings without gradient.</p>
<p>For instance, given GloVe's embeddings matrix, I do</p>
<pre><code>embed = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float))
...
dense = nn.Linear(...)
</code></pre>
<p>Is it best practice to solely use GloVe to get vector representation (and only train the dense layer and potentially other layers) or would one fine-tune the embeddings matrix, too? </p> | 2019-10-30 16:42:49.063000+00:00 | 2021-03-26 11:00:46.027000+00:00 | 2019-10-31 08:57:49.890000+00:00 | pytorch|glove | ['https://arxiv.org/abs/1505.07931'] | 1 |
<p>You should absolutely fine-tune your word embedding matrix. Here is the thing: when you initialize the word embedding matrix with the GloVe word embeddings, your word embeddings will already capture most of the semantic properties of the data. However, you want your word embeddings to be tailored to the task you're solving, i.e., task-specific (check <a href="https://arxiv.org/abs/1505.07931" rel="noreferrer">Yang</a>). Now, if you don't have enough data in your dataset, you can't learn the word embedding matrix on your own (i.e., starting from random vectors). Because of that, you want to initialize it with vectors that have been trained on huge datasets and are general.</p>
<p>One really important thing to keep in mind → Because the rest of your model is going to be initialized randomly, when you start training your word embedding matrix may suffer from catastrophic forgetting (Check the work of <a href="https://arxiv.org/abs/1801.06146" rel="noreferrer">Howard and Ruder</a> and <a href="https://arxiv.org/abs/1612.00796" rel="noreferrer">Kirkpatrick et al.</a>), i.e., the gradients will be huge because your model will drastically underfit the data for the first few batches, and you will lose the initial vectors completely. You can overcome this by:</p>
<ol>
<li><p>For the first several epochs don't fine-tune the word embedding matrix, just keep it as it is: <code>embeddings = nn.Embedding.from_pretrained(glove_vectors, freeze=True)</code>.</p>
</li>
<li><p>After the rest of the model has learned to fit your training data, decrease the learning rate, unfreeze your embedding module (<code>embeddings.weight.requires_grad = True</code>), and continue training (a short sketch of this schedule follows the list).</p>
</li>
</ol>
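<p>A minimal PyTorch sketch of the two-phase schedule above (the model class, the GloVe tensor, and the learning rates are placeholders):</p>
<pre><code>import torch
import torch.nn as nn

embedding = nn.Embedding.from_pretrained(glove_vectors, freeze=True)  # phase 1: GloVe stays fixed
model = MyModel(embedding)                                            # hypothetical model using the embedding
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# ... train for a few epochs with the embedding frozen ...

embedding.weight.requires_grad = True                      # phase 2: unfreeze the embeddings
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lower learning rate for fine-tuning

# ... continue training ...
</code></pre>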
<p>By following the above mentioned steps, you will get the best of both worlds. In other words, your word embeddings will still capture semantic properties while being tailored for your own downstream task. Finally, there are works (Check <a href="https://arxiv.org/abs/1510.03820" rel="noreferrer">Ye Zhang</a> for example) showing that it is fine to fine-tune immediately, but I would opt for the safer option.</p> | 2019-11-06 09:57:57.633000+00:00 | 2021-03-26 11:00:46.027000+00:00 | 2021-03-26 11:00:46.027000+00:00 | null | 58,630,101 | <p>while transfer learning / fine-tuning recent language models, such as BERT and XLNET, is by far a very common practice, how is this for GloVe? </p>
<p>Basically, I see two options when using GloVe to get dense vector representations that can be used by downstream NNs.</p>
<p>1) Fine-tune GloVe embeddings (in pytorch terms, gradient enabled)</p>
<p>2) Just use the embeddings without gradient.</p>
<p>For instance, given GloVe's embeddings matrix, I do</p>
<pre><code>embed = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float))
...
dense = nn.Linear(...)
</code></pre>
<p>Is it best practice to solely use GloVe to get vector representation (and only train the dense layer and potentially other layers) or would one fine-tune the embeddings matrix, too? </p> | 2019-10-30 16:42:49.063000+00:00 | 2021-03-26 11:00:46.027000+00:00 | 2019-10-31 08:57:49.890000+00:00 | pytorch|glove | ['https://arxiv.org/abs/1505.07931', 'https://arxiv.org/abs/1801.06146', 'https://arxiv.org/abs/1612.00796', 'https://arxiv.org/abs/1510.03820'] | 4 |
46,089,158 | <ol start="6">
<li>I switched the evaluation and training data (in the config) and training continues as normal with exactly the same command that started it.
<ul>
<li>there's a log entry about restoring parameters from the last checkpoint</li>
<li>as I switch the test/train data, mAP immediately shoots to the moon</li>
<li>Images tab in the tensorboard gets updated</li>
</ul></li>
</ol>
<p>So it looks like changing the data works correctly. I'm not sure how it can affect the model; basically it's pretrained without these examples and fine-tuned with them.</p>
<p>LOG:</p>
<pre><code>INFO:tensorflow:Restoring parameters from /home/.../train_output/model.ckpt-3190
</code></pre>
<ol start="7">
<li>This results in train/test contamination, and the real model performance is supposed to be lower than the one calculated on the contaminated validation dataset. You shouldn't worry that much unless you want to present some well-defined results.</li>
</ol>
<p>Real-life example from <a href="https://arxiv.org/abs/1311.2901" rel="nofollow noreferrer">https://arxiv.org/abs/1311.2901</a>:
the ImageNet and Caltech datasets have some images in common. When evaluating how well your model trained on ImageNet performs with Caltech as validation, you should remove the duplicates from ImageNet before training.</p>
<p>I'm trying to train a vehicle detector on Images with 4K-Resolution with about 100 small-sized vehicles per image (vehicle size about 100x100 pixel).</p>
<p>I'm currently using the full resolution, which costs me a lot of memory. I'm training using 32 cores and 128 GB RAM. The current architecture is Faster RCNN. I can train with a second stage batch size of 12 and a first_stage_mini_batch_size of 50. (I scaled both down until my memory was sufficient). </p>
<ol>
<li>I assume, that I should increase the max number of RPN proposals. Which dimension would be appropriate?</li>
<li>Does this approach make sense?</li>
</ol>
<h1>Difficulty, truncated, labels and poses:</h1>
<p>I currently separated my dataset only into three classes (cars, trucks, vans).</p>
<ol start="3">
<li><p>I assume giving additional information like:</p>
<ul>
<li>difficult (for mostly hidden vehicles), and</li>
<li>truncated (I currently did not select truncated objects, but I could)</li>
</ul></li>
</ol>
<p>would improve the training process. </p>
<ol start="4">
<li><p>Would truncated include overlapped vehicles?</p></li>
<li><p>Would additional Information like views/poses and other labels also improve the training process, or would it make the training harder?</p></li>
</ol>
<h1>Adding new data to the training set:</h1>
<ol start="6">
<li>Is it possible to add new images and objects into the training and validation record files and automatically resume the training using the latest checkpoint file from the training directory? Or is the option "fine_tune_checkpoint" with "from_detection_checkpoint" necessary?</li>
<li>Would it harm, if a random separation of training and validation data would pick different datasets than in the training before?</li>
</ol> | 2017-07-24 08:51:56.307000+00:00 | 2017-09-07 09:12:06.970000+00:00 | 2017-07-24 18:43:15.753000+00:00 | python|tensorflow|conv-neural-network|object-detection | ['https://arxiv.org/abs/1311.2901'] | 1 |
<p>A few days ago I came across a library that solves the same problem; it is called <strong>POINTER</strong>.</p>
<p>This will take some keywords and will form multiple sentences using those keywords.</p>
<ul>
<li><strong>Paper</strong> - <a href="https://arxiv.org/abs/2005.00558" rel="nofollow noreferrer">https://arxiv.org/abs/2005.00558</a></li>
<li><strong>GitHub</strong> - <a href="https://github.com/dreasysnail/POINTER" rel="nofollow noreferrer">https://github.com/dreasysnail/POINTER</a></li>
<li><strong>Demo</strong> - <a href="http://52.247.25.3:8900/" rel="nofollow noreferrer">http://52.247.25.3:8900/</a></li>
</ul> | 2020-09-22 10:16:58.990000+00:00 | 2020-09-22 10:16:58.990000+00:00 | null | null | 64,006,005 | <p>I have a text,</p>
<pre><code> text = 'Morning Sue, I wondered if we could catch up this week, 19th?I wanted to discuss the quote I sent last week B-GreW2020-026 for my aviation policy. I think it had a limit of £30,000,000 and an excess of £200,000. Let me know what works for you? Shall we include Willis into the call from a brokerage perspective?'
</code></pre>
<p>I have an excel file,</p>
<pre><code> Ref_no Limit Excess
Co-MS N2020-501 3471463 520000
</code></pre>
<p>I would like to generate a text like the above sentence using the keywords from the Excel file. I am sure there's a way to do this with AI. I read about text augmentation as well. I know this can be done using regex, but I am looking for a way using a sentence generator. Kindly help.</p>
<p>Expected output:</p>
<pre><code> Output = 'Morning Sue, I wondered if we could catch up this week, 19th?I wanted to discuss the quote I sent last week Co-MS N2020-501 for my aviation policy. I think it had a limit of £3471463 and an excess of 520000. Let me know what works for you? Shall we include Willis into the call from a brokerage perspective?'
</code></pre>
<p>Or somewhat related text using the keywords.</p> | 2020-09-22 08:37:39.127000+00:00 | 2020-09-22 10:16:58.990000+00:00 | null | nlp | ['https://arxiv.org/abs/2005.00558', 'https://github.com/dreasysnail/POINTER', 'http://52.247.25.3:8900/'] | 3 |
60,969,379 | <p>You can try the following techniques:</p>
<ul>
<li>Make all the images the same size using rescaling. You wrote that <em>"As the pixel resolution are the same across all dataset, the element/features are in the same scale despite the different size. I expect that resizing would severely deteriorate the information."</em> This is right, as CNNs are not scale invariant, but there exist many techniques to overcome this issue; for example, see <a href="https://arxiv.org/abs/1612.03144" rel="nofollow noreferrer">Feature Pyramid Networks for Object Detection</a>.</li>
<li>Make all the images the same size using zero-padding. This would be the easiest way, as you will not have to change the architecture. Zero-padding is normally used to control the shrinkage of the representations inside the CNN; padding the whole input could have some side effects, but it is worth a try. I would recommend this technique if all the images have the same background color (for example, if you are using X-ray images), and to pad with the background color. </li>
</ul> | 2020-04-01 11:01:52.407000+00:00 | 2020-04-01 11:01:52.407000+00:00 | null | null | 60,962,391 | <p>I am aware that a lot of solutions have been proposed to train a CNN with variable input size, but the situation I'm facing is different :</p>
<p>My dataset is composed of single-cell images that are all of same pixel resolutions (0.31x0.31 µm) but different sizes (crop from a cell population image after a cell-segmentation process).
Moreover, I'm implementing a VAE for manifold learning purpose.</p>
<p>Therefore :</p>
<ul>
<li>Going for a fully convolutional network or using an AdaptiveAvgPooling (or equivalent for different deep learning framework than pytorch) is not a solution. It's indeed trivial for the encoding part, but then the decoding part of the VAE will have to retrieve the original size of the input which is - as far as I know - not possible.</li>
<li>Resizing all image to a given shape is often proposed, but : As the pixel resolution are the same across all dataset, the element/features are in the same scale despite the different size. I expect that resizing would severely deteriorate the information. The difference in size between cells are meaningful (due to morphological changes) and all other features are still in similar scale.</li>
<li>The only solution that I could find is to zero-pad / crop all the image to a given size.</li>
<li>Multi-scale training won't probably help, as CNN can focus on features at a given scale as all data has same pixel resolution. </li>
</ul>
<p>My questions are :</p>
<ul>
<li>Has anyone ever faced this situation ? Did I miss another approach that would be better ?</li>
<li>If no, does zero-padding could do the job, and could it deteriorate the VAE learning ? The network will need to learn that for some images huge part need to be ignored while not for others images. Some single-cell images would in the end only be represented in a very small portion of the padded-image. The position of the cell in the padded-image might be kept as latent features, but is not relevant.</li>
</ul>
<p>Thanks a lot for your help</p> | 2020-04-01 01:38:05.923000+00:00 | 2020-04-01 11:01:52.407000+00:00 | null | deep-learning|computer-vision|conv-neural-network|autoencoder|image-size | ['https://arxiv.org/abs/1612.03144'] | 1 |
62,507,607 | <blockquote>
<p>I am working on topic modeling and I am curious what exactly would be short text under this context?</p>
</blockquote>
<p>The recent survey paper on short text topic modeling (by <a href="https://arxiv.org/pdf/1904.07695.pdf" rel="nofollow noreferrer">Qiang et al.</a>) mentions several datasets on which such models are evaluated: search snippets, StackOverflow question titles, tweets, and some others. The documents in these datasets have 5-14 words on average, and 14-37 words at maximum.</p>
<blockquote>
<p>For example, if there is a research paper, would the research paper's title and abstract be considered as short text?</p>
</blockquote>
<p>Paper abstracts may be longer than that. It is usual for an abstract to have 200 or 300 words or even more.</p>
<p>The second argument that should be mentioned is that some short text topic modeling techniques assume that <em>each text has exactly one topic</em> (for example, in the paper by <a href="https://dl.acm.org/doi/pdf/10.1145/2623330.2623715" rel="nofollow noreferrer">Yin & Wang</a>). I think it's possible that the abstract may have several topics in it. So, some of the models that assume one topic per one document may perform badly on paper abstracts.</p> | 2020-06-22 04:48:42.910000+00:00 | 2020-06-22 04:48:42.910000+00:00 | null | null | 62,280,471 | <p>I am working on topic modeling and I am curious what exactly would be short text under this context?For example, if there is a research paper ,would the research paper's title and abstract be considered as short text?</p> | 2020-06-09 10:29:10.957000+00:00 | 2020-06-22 04:48:42.910000+00:00 | null | python-3.x|nlp|lda|topic-modeling|nmf | ['https://arxiv.org/pdf/1904.07695.pdf', 'https://dl.acm.org/doi/pdf/10.1145/2623330.2623715'] | 2 |
<p>There is nothing, as far as I am aware, which automatically takes a spreadsheet and just describes it, without a developer having to define the process by which the data is turned into natural language. Natural language is incredibly complex.</p>
<p>The <a href="https://en.wikipedia.org/wiki/Natural-language_generation" rel="nofollow noreferrer">NLG</a> Wikipedia article gives an overview of a common process for converting data to text. There is also a recent <a href="https://arxiv.org/pdf/1703.09902.pdf" rel="nofollow noreferrer">survey paper</a>.</p>
<p>Your question in its current form is too vague to provide anything more than links to such resources. It is more of a question of "How do I convert <strong>my</strong> data to natural language" than "How do I convert data to natural language". It is a highly domain-specific task.</p> | 2019-02-23 11:34:31.173000+00:00 | 2019-05-08 20:52:00.617000+00:00 | 2019-05-08 20:52:00.617000+00:00 | null | 53,410,629 | <p>Is there any working Natural Language Generation (NLG) system which can describe the numerical data in a financial balance sheet. If so, please provide the code/resource. I tried but couldn't find any working system. </p> | 2018-11-21 10:58:12.503000+00:00 | 2019-05-08 20:52:00.617000+00:00 | null | nlg | ['https://en.wikipedia.org/wiki/Natural-language_generation', 'https://arxiv.org/pdf/1703.09902.pdf'] | 2 |
47,053,077 | <p>Use the idea originally proposed in <a href="https://arxiv.org/abs/1412.6806" rel="nofollow noreferrer">All Convolutional Net</a> paper and later extensively used in <a href="https://stackoverflow.com/q/39366271/712995">Inception network</a>, i.e. apply convolution for dimensionality reduction. </p>
<p>The trick is to perform convolution with a unit <code>filter</code> (<code>1x1</code> for 2-D convolution, <code>1x1x1</code> for 3-D and so on) with a smaller number of filters. Nowadays, this trick is applied all the time to save computation in very deep convolutional networks, so you can use it before convolutional layers as well. In your question, the output tensor is one-dimensional (except batch size), so use 1-D convolution with <code>1</code> kernel size.</p>
<p>Here's the code in tensorflow, which reduces the tensor length from 64 to 32:</p>
<pre><code> # `x` shape: [batch, length] = [?, 64]
layer = tf.expand_dims(x, 2) # reshape to: [batch, channels, 1] = [?, 64, 1]
output = tf.layers.conv1d(layer, filters=32, kernel_size=1,
strides=1, padding='valid',
data_format='channels_first')
# new shape: [batch, filters, 1] = [?, 32, 1]
output = tf.squeeze(output) # reshape to: [batch, length] = [?, 32]
</code></pre> | 2017-11-01 10:45:48.707000+00:00 | 2017-11-01 10:45:48.707000+00:00 | null | null | 46,978,577 | <p>In a CNN, if the output is a one dimensional vector(say, a pre-logit layer), how would one reduce the dimensionality down to a specified size, using only convolutions? </p>
<p>How does one derive the filter dimensions/receptive field to accomplish such a task?</p>
<p>I am aware that this can be achieved by stacking a fully connected layer on the end of the network, but this does not seem so elegant.</p> | 2017-10-27 15:28:34.190000+00:00 | 2017-11-01 10:45:48.707000+00:00 | null | machine-learning|neural-network|deep-learning|convolution|dimensionality-reduction | ['https://arxiv.org/abs/1412.6806', 'https://stackoverflow.com/q/39366271/712995'] | 2 |
<p>Check out the Google similarity distance - <a href="http://arxiv.org/abs/cs.CL/0412098" rel="noreferrer">http://arxiv.org/abs/cs.CL/0412098</a> -
e.g. if lots of webpages include them both, they're probably related. </p>
<p>demo program at <a href="http://mechanicalcinderella.com" rel="noreferrer">http://mechanicalcinderella.com</a></p>
<p>Other than that, you could try to translate a project like wordnet ((google translate could help), or start a collaborative ontology.</p> | 2012-07-27 23:21:46.280000+00:00 | 2012-07-27 23:21:46.280000+00:00 | null | null | 2,441,361 | <p>I don't know whether StackOverflow covers NLP, so I am gonna give this a shot.
I am interested to find the semantic relatedness of two words from a specific domain, i.e. "image quality" and "noise". I am doing some research to determine if reviews of cameras are positive or negative for a particular attribute of the camera. (like image quality in each one of the reviews). </p>
<p>However, not everybody uses the exact same wording "image quality" in the posts, so I am out to see if there is a way for me to build something like that:</p>
<p>"image quality" which includes ("noise", "color", "sharpness", etc etc)
so I can wrap everything within one big umbrella.</p>
<p>I am doing this for another language, so Wordnet is not necessarily helpful. And no, I do not work for Google or Microsoft so I do not have data from people's clicking behaviour as input data either. </p>
<p>However, I do have a lot of text, pos-tagged, segmented etc. </p> | 2010-03-14 06:09:20.477000+00:00 | 2017-01-08 08:17:51.760000+00:00 | 2017-01-08 08:17:51.760000+00:00 | nlp | ['http://arxiv.org/abs/cs.CL/0412098', 'http://mechanicalcinderella.com'] | 2 |
15,113,153 | <p>What you want is to detect <em>statistical stationarity</em>, which is a pretty tough problem and research papers <a href="http://arxiv.org/abs/gr-qc/9910027" rel="nofollow">[1]</a>, <a href="http://arxiv.org/abs/1001.1831" rel="nofollow">[2]</a>, <a href="http://www.sciencedirect.com/science/article/pii/S0378375801000817" rel="nofollow">[3]</a> are written about it. First you will need to decide on the algorithm that actually is able do detect stationarity before you even begin to consider how you would implement it using any programming language (be it Unix utilities, Python with numpy/scipy, or whatever you choose). Perhaps a good book on time-series analysis will help you here.</p> | 2013-02-27 13:20:30.647000+00:00 | 2013-02-27 13:20:30.647000+00:00 | null | null | 15,112,761 | <p>I have a set of data (two columns of CSV numbers) which varies quite a bit initially and then stabilises around a certain number. I'm trying to spot the point at which the graph first seems to stabilise in an automated way using standard Bash utilities. In an ASCII graph, the data may look like something such as this:</p>
<pre><code>y
^ ___ stabilised
| . |
| . |
| . . |
| . . . . . ▾
| . . . . . . . . . . . . . . . . . . . . . . .
| . . . .
------------------------------------------------------------> x
</code></pre>
<p>Note that the data do not reach a constant value, but fluctuate slightly around the stable value. Is there some way I may spot the first point at which the graph seems to become stable using standard Bash utilities?</p> | 2013-02-27 13:00:03.297000+00:00 | 2016-01-23 18:26:52.423000+00:00 | 2016-01-23 18:26:52.423000+00:00 | bash|graph|detect|stability | ['http://arxiv.org/abs/gr-qc/9910027', 'http://arxiv.org/abs/1001.1831', 'http://www.sciencedirect.com/science/article/pii/S0378375801000817'] | 3 |
57,074,013 | <p>You should.</p>
<p>In the way filter weights are initialized, there is a hidden assumption that the input signal is roughly zero mean and unit variance. This helps scale the activations and the gradients.<br>
In practice, you may be able to train a net (for segmentation or any other task for that matter) using the "un-normalized" input images, but it will probably take longer to train and be less stable w.r.t meta parameters such as learning rate and solver type.</p>
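<p>A minimal sketch of such a normalization step (<code>image</code>, <code>train_mean</code>, and <code>train_std</code> are placeholders; the statistics are assumed to be computed over the training images):</p>
<pre><code>import numpy as np

x = image.astype(np.float32) / 255.0     # scale pixel values to [0, 1]
x = (x - train_mean) / train_std          # roughly zero mean, unit variance w.r.t. the training set
</code></pre>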
<p>For more details see section 2.2 in He et al. <a href="https://arxiv.org/pdf/1502.01852.pdf" rel="nofollow noreferrer">"Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification"</a> (ICCV 2015).</p> | 2019-07-17 10:42:07.167000+00:00 | 2019-07-17 10:42:07.167000+00:00 | null | null | 57,073,251 | <p>Should I subtract image mean or divide 255 for a image on a semantic segmentation task? why or why not?</p> | 2019-07-17 09:59:10.507000+00:00 | 2019-07-17 10:42:24.837000+00:00 | 2019-07-17 10:42:24.837000+00:00 | machine-learning|computer-vision|image-segmentation|semantic-segmentation | ['https://arxiv.org/pdf/1502.01852.pdf'] | 1 |
<p>As I said, in the loss function I use the scores (the output values of the network) for different inputs, and I compare them using the cross-entropy function.</p>
<pre><code>tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None)
</code></pre>
<p>This function computes the softmax of the logits, but naturally it doesn't apply a softmax to the labels, so, in order to properly compare the scores of different inputs, I need to wrap the "labels" with the tf.nn.softmax() function, like this (notice the last term):</p>
<pre><code>loss_function = tf.reduce_sum(alpha1 * weights_ll * tf.nn.softmax_cross_entropy_with_logits(logits=g(input1), labels=tf.nn.softmax(g(input))))
</code></pre>
<p>In case you are curious why I need to do something like this, and you are interested in Deep Learning, I invite you to read this interesting paper on Neural Graph Machines: <a href="https://arxiv.org/abs/1703.04818" rel="nofollow noreferrer">https://arxiv.org/abs/1703.04818</a></p>
<p>Also, there is another big problem with this model: I don't share variables, so in practice this loss function is like training a separate network for each of its terms. </p>
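<p>A rough sketch of the fix (assuming the TF 1.x API, and assuming <code>g</code> creates its weights with <code>tf.get_variable</code> instead of <code>tf.Variable</code>) is to call <code>g</code> inside a single variable scope and reuse it:</p>
<pre><code>with tf.variable_scope("g") as scope:
    scores1 = g(input1)        # first call creates W and b inside scope "g"
    scope.reuse_variables()    # later calls reuse the same variables
    scores2 = g(input2)        # same W and b as above
</code></pre>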
<p>Luckily, there is a tutorial in the TensorFlow website that addresses Sharing Variables: <a href="https://www.tensorflow.org/programmers_guide/variable_scope" rel="nofollow noreferrer">https://www.tensorflow.org/programmers_guide/variable_scope</a> </p> | 2017-06-03 03:02:27.350000+00:00 | 2017-06-03 14:29:40.020000+00:00 | 2017-06-03 14:29:40.020000+00:00 | null | 44,333,516 | <p>I have a doubt about the definition of TensorFlow models.
I am implementing a rather peculiar neural network model, where I need access to the values of the output layer for different inputs to compute the loss function... </p>
<p>So, I defined the layers of the neural network in a function like this:</p>
<pre><code>def g(input_x,....):
###...Convolutional and Fully Connected Layers...###
# Last FC Layer
with tf.name_scope("output"):
W = tf.Variable(tf.truncated_normal([num_hidden_units, num_classes], stddev=0.05), name="W")
b = tf.Variable(tf.constant(0.1, shape=[num_classes]), name="b")
scores = tf.nn.xw_plus_b(fc_2_output, W, b, name="output")
return scores
</code></pre>
<p>And then in my model I have something like this:</p>
<pre><code>with tf.Graph().as_default():
session_conf = tf.ConfigProto(
allow_soft_placement=FLAGS.allow_soft_placement,
log_device_placement=FLAGS.log_device_placement)
sess = tf.Session(config=session_conf)
with sess.as_default():
###definition of other variables, constants and placeholders###
###weird loss function with scores from different inputs###
loss_function = tf.reduce_mean(alpha1 * weights_ll * tf.nn.softmax_cross_entropy_with_logits(logits=g(input1), labels=labels1) \
+ cu1 * tf.nn.softmax_cross_entropy_with_logits(logits=g(input2), labels=g(input1) \
+ cv1 * tf.nn.softmax_cross_entropy_with_logits(logits=g(input1), labels=labels2))) \
+ ... + ...
optimizer = tf.train.AdamOptimizer(1e-3).minimize(loss_function, global_step=global_step)
##...training steps and such...##
</code></pre>
<p>The training is working, but without running it for too long I am getting strange results, and I wonder if the weights defined in the g functions are being trained or they are somewhat out of scope of the optimizer.</p>
<p>Unfortunately, I am still learning a lot about tensorflow and I have no TensorBoard results to show you right now. </p>
<p>I just need to know from someone with a bit more experience if it is legal to define a model like this, using a python function for outputs.</p>
<p>Thank you very much for reading this far</p> | 2017-06-02 16:27:02.900000+00:00 | 2017-06-03 14:30:28.613000+00:00 | 2017-06-03 14:30:28.613000+00:00 | python|tensorflow | ['https://arxiv.org/abs/1703.04818', 'https://www.tensorflow.org/programmers_guide/variable_scope'] | 2 |
52,627,872 | <p>I wanted to add more information to this question since there are some more recent works in this area. Your intuition</p>
<blockquote>
<p>use instance normalisation for image classification where class label
should not depend on the contrast of input image</p>
</blockquote>
<p>is partly correct. I would say that a pig in broad daylight is still a pig when the image is taken at night or at dawn. However, this does not mean that using instance normalization across the network will give you a better result. Here are some reasons:</p>
<ol>
<li>Color distribution still plays a role. It is more likely to be an apple than an orange if it has a lot of red.</li>
<li>At later layers, you can no longer think of instance normalization as contrast normalization. Class-specific details emerge in deeper layers, and normalizing them per instance will hurt the model's performance greatly.</li>
</ol>
<p><a href="https://arxiv.org/abs/1807.09441" rel="noreferrer">IBN-Net</a> uses both batch normalization and instance normalization in their model. They only put instance normalization in early layers and have achieved improvement in both accuracy and ability to generalize. They have open sourced code <a href="https://github.com/XingangPan/IBN-Net" rel="noreferrer">here</a>.</p>
<p><a href="https://i.stack.imgur.com/S74SW.png" rel="noreferrer"><img src="https://i.stack.imgur.com/S74SW.png" alt="enter image description here"></a></p> | 2018-10-03 13:13:19.960000+00:00 | 2018-10-03 13:13:19.960000+00:00 | null | null | 45,463,778 | <p>I understand that Batch Normalisation helps in faster training by turning the activation towards unit Gaussian distribution and thus tackling vanishing gradients problem. Batch norm acts is applied differently at training(use mean/var from each batch) and test time (use finalized running mean/var from training phase).</p>
<p>Instance normalisation, on the other hand, acts as contrast normalisation, as mentioned in this paper <a href="https://arxiv.org/abs/1607.08022" rel="noreferrer">https://arxiv.org/abs/1607.08022</a>. The authors mention that the output stylised images should not depend on the contrast of the input content image and hence instance normalisation helps. </p>
<p>But then should we not also use instance normalisation for image classification where class label should not depend on the contrast of input image. I have not seen any paper using instance normalisation in-place of batch normalisation for classification. What is the reason for that? Also, can and should batch and instance normalisation be used together. I am eager to get an intuitive as well as theoretical understanding of when to use which normalisation. </p> | 2017-08-02 14:34:46.257000+00:00 | 2021-10-14 03:02:30.443000+00:00 | 2018-01-05 19:35:45.790000+00:00 | machine-learning|neural-network|computer-vision|conv-neural-network|batch-normalization | ['https://arxiv.org/abs/1807.09441', 'https://github.com/XingangPan/IBN-Net', 'https://i.stack.imgur.com/S74SW.png'] | 3 |
48,118,940 | <h2>Definition</h2>
<p>Let's begin with the strict definition of both:</p>
<p><strong>Batch normalization</strong>
<a href="https://i.stack.imgur.com/VDqKY.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/VDqKY.jpg" alt="batch-norm-formula" /></a></p>
<p><strong>Instance normalization</strong>
<a href="https://i.stack.imgur.com/X5z48.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/X5z48.jpg" alt="instance-norm-formula" /></a></p>
<p>As you can notice, they are doing the same thing, except for the number of input tensors that are normalized jointly. Batch version normalizes all images <em>across the batch and spatial locations</em> (in the CNN case, in the ordinary case <a href="https://stackoverflow.com/q/38553927/712995">it's different</a>); instance version normalizes each element of the batch independently, i.e., across <em>spatial locations</em> only.</p>
<p>In other words, where batch norm computes one mean and std dev (thus making the distribution of the whole layer Gaussian), instance norm computes <code>T</code> of them, making each individual image distribution look Gaussian, but not jointly.</p>
<p>A simple analogy: during data pre-processing step, it's possible to normalize the data on per-image basis or normalize the whole data set.</p>
<p><sup>Credit: the formulas are from <a href="https://github.com/aleju/papers/blob/master/neural-nets/Instance_Normalization_The_Missing_Ingredient_for_Fast_Stylization.md" rel="noreferrer">here</a>.</sup></p>
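<p>A small sketch of which axes the statistics are pooled over (assuming a PyTorch-style <code>(N, C, H, W)</code> feature tensor, i.e., the CNN case; the numbers are arbitrary):</p>
<pre><code>import torch

x = torch.randn(8, 3, 32, 32)      # (batch N, channels C, height H, width W)

bn_mean = x.mean(dim=(0, 2, 3))    # batch norm: one mean per channel, pooled over N, H, W -> shape (C,)
in_mean = x.mean(dim=(2, 3))       # instance norm: one mean per (sample, channel), pooled over H, W -> shape (N, C)
</code></pre>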
<h2>Which normalization is better?</h2>
<p>The answer depends on the network architecture, in particular on what is done <em>after</em> the normalization layer. Image classification networks usually stack the feature maps together and wire them to the FC layer, which <strong>shares weights across the batch</strong> (the modern way is to use the CONV layer instead of FC, but the argument still applies).</p>
<p>This is where the distribution nuances start to matter: the same neuron is going to receive the input from all images. If the variance across the batch is high, the gradient from the small activations will be completely suppressed by the high activations, which is exactly the problem that batch norm tries to solve. That's why it's fairly possible that per-instance normalization won't improve network convergence at all.</p>
<p>On the other hand, batch normalization adds extra noise to the training, because the result for a particular instance depends on the neighboring instances. As it turns out, this kind of noise may be either good or bad for the network. This is well explained in the <a href="https://arxiv.org/pdf/1602.07868.pdf" rel="noreferrer">"Weight Normalization"</a> paper by Tim Salimans et al., which names recurrent neural networks and reinforcement learning DQNs as <em>noise-sensitive applications</em>. I'm not entirely sure, but I think that the same noise-sensitivity was the main issue in the stylization task, which instance norm tried to fight. It would be interesting to check if weight norm performs better for this particular task.</p>
<h2>Can you combine batch and instance normalization?</h2>
<p>Though it makes a valid neural network, there's no practical use for it. Batch normalization noise is either helping the learning process (in this case it's preferable) or hurting it (in this case it's better to omit it). In both cases, leaving the network with one type of normalization is likely to improve the performance.</p> | 2018-01-05 18:01:06.790000+00:00 | 2021-06-09 03:53:55.517000+00:00 | 2021-06-09 03:53:55.517000+00:00 | null | 45,463,778 | <p>I understand that Batch Normalisation helps in faster training by turning the activation towards unit Gaussian distribution and thus tackling vanishing gradients problem. Batch norm acts is applied differently at training(use mean/var from each batch) and test time (use finalized running mean/var from training phase).</p>
<p>Instance normalisation, on the other hand, acts as contrast normalisation as mentioned in this paper <a href="https://arxiv.org/abs/1607.08022" rel="noreferrer">https://arxiv.org/abs/1607.08022</a> . The authors mention that the output stylised images should be not depend on the contrast of the input content image and hence Instance normalisation helps. </p>
<p>But then should we not also use instance normalisation for image classification where class label should not depend on the contrast of input image. I have not seen any paper using instance normalisation in-place of batch normalisation for classification. What is the reason for that? Also, can and should batch and instance normalisation be used together. I am eager to get an intuitive as well as theoretical understanding of when to use which normalisation. </p> | 2017-08-02 14:34:46.257000+00:00 | 2021-10-14 03:02:30.443000+00:00 | 2018-01-05 19:35:45.790000+00:00 | machine-learning|neural-network|computer-vision|conv-neural-network|batch-normalization | ['https://i.stack.imgur.com/VDqKY.jpg', 'https://i.stack.imgur.com/X5z48.jpg', 'https://stackoverflow.com/q/38553927/712995', 'https://github.com/aleju/papers/blob/master/neural-nets/Instance_Normalization_The_Missing_Ingredient_for_Fast_Stylization.md', 'https://arxiv.org/pdf/1602.07868.pdf'] | 5 |
11,839,187 | <p>As a followup to mcdowella's answer, I'd like to point out that the O(n^2 lg n) solution presented in Maes' paper is the intended solution to the contest problem (check <a href="http://www.acmicpc-pacnw.org/ProblemSet/2011/solutions.zip" rel="nofollow">http://www.acmicpc-pacnw.org/ProblemSet/2011/solutions.zip</a>). The O(ne) solution in Landau et al's paper does NOT apply to this problem, as that paper is targeted at edit distance, not LCS. In particular, the solution to cyclic edit distance only applies if the edit operations (add, delete, replace) all have unit (1, 1, 1) cost. LCS, on the other hand, is equivalent to edit distances with (add, delete, replace) costs (1, 1, 2). These are not equivalent to each other; for example, consider the input strings "ABC" and "CXY" (for the acyclic case; you can construct cyclic counterexamples similarly). The LCS of the two strings is "C", but the minimum unit-cost edit is to replace each character in turn. </p>
<p>At 110 lines but no complex data structures, Maes' solution falls towards the upper end of what is reasonable to implement in a contest setting. Even if Landau et al's solution could be adapted to handle cyclic LCS, the complexity of the data structure makes it infeasible in a contest setting.</p>
<p>Last but not least, I'd like to point out that an O(n^2) solution DOES exist for CLCS, described here: <a href="http://arxiv.org/abs/1208.0396" rel="nofollow">http://arxiv.org/abs/1208.0396</a> At 60 lines, no complex data structures, and only 2 arrays, this solution is quite reasonable to implement in a contest setting. Arriving at the solution might be a different matter, though.</p> | 2012-08-07 04:14:58.603000+00:00 | 2012-08-07 04:14:58.603000+00:00 | null | null | 8,025,314 | <p>This is a problem appeared in today's Pacific NW Region Programming Contest during which no one solved it. It is problem B and the complete problem set is here: <a href="http://www.acmicpc-pacnw.org/icpc-statements-2011.zip" rel="nofollow">http://www.acmicpc-pacnw.org/icpc-statements-2011.zip</a>. There is a well-known O(n^2) algorithm for LCS of two strings using Dynamic Programming. But when these strings are extended to rings I have no idea...</p>
<p>P.S. note that it is subsequence rather than substring, so the elements do not need to be adjacent to each other</p>
<p>P.S. It might not be O(n^2) but O(n^2lgn) or something that can give the result in 5 seconds on a common computer.</p> | 2011-11-06 05:02:54.857000+00:00 | 2012-08-07 04:14:58.603000+00:00 | 2011-11-06 06:03:13.517000+00:00 | string|algorithm|dynamic-programming | ['http://www.acmicpc-pacnw.org/ProblemSet/2011/solutions.zip', 'http://arxiv.org/abs/1208.0396'] | 2 |
49,730,367 | <p>After reading this question I got curious and found the paper
"<a href="https://arxiv.org/abs/1503.08376" rel="nofollow noreferrer">Assessing Excel VBA Suitability for Monte Carlo Simulation</a>" by Alexei Botchkarev that is available <a href="http://epublications.bond.edu.au/cgi/viewcontent.cgi?article=1179&context=ejsie" rel="nofollow noreferrer">here</a>. Both RAND and RND functions are not recommended, but as pointed out in the paper the Mersenne Twister has been implemented in VBA by Jerry Wang.</p>
<p>A quick search led me to this nicely commented Version that has been updated the last 2015/2/28: <a href="http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/VERSIONS/BASIC/MTwister.xlsb" rel="nofollow noreferrer">http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/VERSIONS/BASIC/MTwister.xlsb</a></p>
<p>Source: <a href="http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/VERSIONS/BASIC/basic.html" rel="nofollow noreferrer">http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/VERSIONS/BASIC/basic.html</a></p> | 2018-04-09 10:04:59.150000+00:00 | 2018-04-09 10:04:59.150000+00:00 | null | null | 38,891,165 | <p>I need a pseudo random number generator for 2D Monte Carlo simulation that doesn't have the characteristic hyperplanes that you get with simple LCGs. I tested the random number generator Rnd() in Excel 2013 using the following code (takes about 5 secs to run):</p>
<pre><code>Sub ZoomRNG()
Randomize
For i = 1 To 1000
Found = False
Do
x = Rnd() ' 2 random numbers between 0.0 and 1.0
y = Rnd()
If ((x > 0.5) And (x < 0.51)) Then
If ((y > 0.5) And (y < 0.51)) Then
' Write if both x & y in a narrow range
Cells(i, 1) = i
Cells(i, 2) = x
Cells(i, 3) = y
Found = True
End If
End If
Loop While (Not Found)
Next i
End Sub
</code></pre>
<p>Here is a simple plot of x vs y from running the above code</p>
<p><a href="https://i.stack.imgur.com/GdEQj.png" rel="noreferrer"><img src="https://i.stack.imgur.com/GdEQj.png" alt="enter image description here"></a></p>
<p>Not only is it not very random-looking, it has more obvious hyperplanes than the infamous RANDU algorithm does in 2D. Basically, am I using the function incorrectly or is the Rnd() function in VBA actually not the least bit usable? </p>
<p>For comparison, here's what I get for the Mersenne Twister MT19937 in C++. </p>
<p><a href="https://i.stack.imgur.com/WmCUB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/WmCUB.png" alt="enter image description here"></a></p> | 2016-08-11 08:30:38.900000+00:00 | 2020-05-09 22:50:03.063000+00:00 | 2018-07-09 18:41:45.953000+00:00 | excel|random|excel-2013|montecarlo|vba | ['https://arxiv.org/abs/1503.08376', 'http://epublications.bond.edu.au/cgi/viewcontent.cgi?article=1179&context=ejsie', 'http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/VERSIONS/BASIC/MTwister.xlsb', 'http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/VERSIONS/BASIC/basic.html'] | 4 |
50,664,005 | <p>Here's how to rebuild the plot starting from the <code>gg_both</code> data frame, using <code>ggplot()</code>, with the added ticks:</p>
<pre><code>library(tidyverse)
max_pos <- gg_both %>% filter(col=="+") %>% select(vimp) %>% max
min_neg <- gg_both %>% filter(col=="-") %>% select(vimp) %>% min
vline <- (min_neg - max_pos) / 2 + max_pos
ggplot(gg_both, aes(x=vimp, y=reorder(names, depth), color=col)) +
geom_point() +
scale_x_continuous(breaks=1:22, labels=1:22) +
geom_abline(slope=1, lty=2, color="red") +
geom_vline(xintercept = vline, lty=2, color="red") +
geom_hline(yintercept = attr(gg_both, "modelsize") + .5, lty=2, color="red")
</code></pre>
<p><a href="https://i.stack.imgur.com/eZihc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eZihc.png" alt="vimp"></a></p>
<p>Explanation (excerpts from the arXiv paper <a href="https://arxiv.org/pdf/1501.07196.pdf" rel="nofollow noreferrer"><code>ggRandomForests: Random Forests for Regression</code></a>):</p>
<ul>
<li><p>Colors and diagonal line:</p>
<blockquote>
<p>Points on the red dashed line are ranked equivalently, points below have higher VIMP, those above have higher minimal depth ranking. Variables are colored by the sign of the VIMP measure. </p>
</blockquote></li>
<li><p>Vertical line:</p>
<blockquote>
<p>Vertical dashed line indicates the maximal minimal depth for important variables. </p>
</blockquote></li>
<li><p>Horizontal line (this isn't mentioned in the paper, but it's in the <a href="https://github.com/ehrlinger/ggRandomForests/blob/master/R/gg_minimal_vimp.R#L151" rel="nofollow noreferrer">source code</a>):</p>
<blockquote>
<p>...we can put a horizontal line at the MD selection point.</p>
</blockquote></li>
</ul> | 2018-06-03 06:54:55.597000+00:00 | 2018-06-03 07:18:47.703000+00:00 | 2018-06-03 07:18:47.703000+00:00 | null | 50,663,760 | <p>I would like to change axis' scale (or intervals).</p>
<p>On the other hand, I have some trouble with it. </p>
<p>Here's my code what I've implemented as below.</p>
<pre><code>install.packages("randomForestSRC")
install.packages("ggRandomForests")
library(randomForestSRC)
library(ggRandomForests)
data(pbc, package="randomForestSRC")
pbc.na <- na.omit(pbc)
set.seed(123)
rsf <- rfsrc(Surv(days, status)~., data=pbc.na,
ntree=500, nplist=1, importance=T, proximity=T)
out.vs <- var.select (rsf)
gg_md <- gg_minimal_depth(out.vs)
gg_both <- gg_minimal_vimp(gg_md)
plot(gg_both)
</code></pre>
<p>In that case, the graph looks like this.
<a href="https://i.stack.imgur.com/pbLOE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pbLOE.png" alt="enter image description here"></a></p>
<p>However, what I want to do is adjust the x-axis scale from 0 to 22 by 1.</p>
<p>Although I've run the added code below, it didn't work.</p>
<pre><code>#### It does not work.
plot(gg_both)+
ggplot2::scale_x_continuous(breaks=seq(0, 22, 1))
### It is working but there is no information about positive & negative VIMP, dashed lines, etc.
ggplot2::ggplot(gg_both, ggplot2::aes(x=vimp, y=names))+
ggplot2::geom_point(color="black")+
ggplot2::scale_x_continuous(breaks=seq(0, 22, 1))
</code></pre>
<p>Please let me know how to do what I want to.</p>
<p>Thanks always.</p> | 2018-06-03 06:12:04.297000+00:00 | 2018-06-03 07:18:47.703000+00:00 | null | r|ggplot2 | ['https://i.stack.imgur.com/eZihc.png', 'https://arxiv.org/pdf/1501.07196.pdf', 'https://github.com/ehrlinger/ggRandomForests/blob/master/R/gg_minimal_vimp.R#L151'] | 3 |
<p>The authors of the paper <a href="https://arxiv.org/pdf/1903.07288.pdf" rel="nofollow noreferrer">Effects of Padding on LSTMs and CNNs</a> ran experiments on pre-padding and post-padding. Here is their conclusion:</p>
<blockquote>
<p>For LSTMs, the accuracy of post-padding (50.117%) is <strong>way</strong> less than pre-padding (80.321%).</p>
<p>Pre-padding and post padding doesn’t matter much to CNN because unlike LSTMs, CNNs don’t try to remember stuff from the previous output, but instead tries to find pattern in the given data.</p>
</blockquote>
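<p>If you want to check this on your own data, here is a minimal sketch (assuming Keras' <code>pad_sequences</code> utility) that produces both variants:</p>
<pre><code>from tensorflow.keras.preprocessing.sequence import pad_sequences

seqs = [[5, 4, 3, 2]]
pad_pre  = pad_sequences(seqs, maxlen=8, padding='pre')   # [[0 0 0 0 5 4 3 2]]
pad_post = pad_sequences(seqs, maxlen=8, padding='post')  # [[5 4 3 2 0 0 0 0]]
</code></pre>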
<p>I have never expected such big effect of padding positions, so I suggest you verify it yourself.</p> | 2020-05-03 08:25:39.627000+00:00 | 2020-05-03 08:25:39.627000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 48,533,886 | <p>In RNN world, does it matter which end of the word vector is padded so they all have the same length?</p>
<p>Example</p>
<pre><code>pad_left = [0, 0, 0, 0, 5, 4, 3, 2]
pad_right = [5, 4, 3, 2, 0, 0, 0, 0]
</code></pre> | 2018-01-31 03:16:23.053000+00:00 | 2020-05-03 08:25:39.627000+00:00 | null | machine-learning|nlp|lstm|rnn | ['https://arxiv.org/pdf/1903.07288.pdf'] | 1 |
918,328 | <p>Ask around of your colleagues. </p>
<p>You don't say what kind of physics you're doing, or how big the working group is, but in my discipline (particle physics) there is a deep repository of experience putting up and running just this type of systems (we call it "slow controls" and similar). There is a pretty good chance that someone you work with has either done this or knows someone who has. There may be a detailed description of the last time out in someone's thesis.</p>
<p>I don't personally have much to do with this, but I do know this: one common feature is to have no-delete-no-overwrite design. You can only <em>add</em> data, never remove it. This preserves your chances of figuring out what <em>really happened</em> in the case of trouble</p>
<hr>
<p>Perhaps I should explain a little more. While this is an important task and has to be done right, it is not really related to physics, so you can't look it up on <a href="http://www.slac.stanford.edu/spires/hep/" rel="nofollow noreferrer">Spires</a> or on <a href="http://arxiv.org/" rel="nofollow noreferrer">arXive.org</a>. No one writes papers on the design and implementation of medium sized slow controls databases. But they do sometimes put it in their dissertations. The easiest way to find a pointer really is to ask a bunch of people around the lab.</p> | 2009-05-27 22:32:43.083000+00:00 | 2009-05-27 23:45:54.293000+00:00 | 2009-05-27 23:45:54.293000+00:00 | null | 918,207 | <p>I have to develop a database for a unique environment. I don't have experience with database design and could use everybody's wisdom.</p>
<p>My group is designing a database for piece of physics hardware and a data acquisition system. We need a system that will store all the hardware configuration parameters, and track the changes to these parameters as they are changed by the user.</p>
<p>The setup:</p>
<ul>
<li>We have nearly 200 detectors and roughly 40 parameters associated with each detector. Of these 40 parameters, we expect only a few to change during the course of the experiment. Most parameters associated with a single detector are static.</li>
<li><p>We collect data for this experiment in timed runs. During these runs, the parameters loaded into the hardware must not change, although we should be able to edit the database at any time to prepare for the next run. The current plan:</p>
<ul>
<li>The database will provide the difference between the current parameters and the parameters used during last run.</li>
<li>At the start of a new run, the most recent database changes be loaded into hardware.</li>
<li>The settings used for the upcoming run must be tagged with a run number and the current date and time. This is essential. I need a run-by-run history of the experimental setup.</li>
</ul></li>
<li><p>There will be several different clients that both read and write to the database. Although changes to the database will be infrequent, I cannot guarantee that the changes won't happen concurrently. </p></li>
<li>Must be robust and non-corruptible. The configuration of the experimental system depends on the hardware. Any breakdown of the database would prevent data acquisition, and our time is expensive. Database backups?</li>
</ul>
<p>My current plan is to implement the above requirements using a sqlite database, although I am unsure if it can support all my requirements. Is there any other technology I should look into? Has anybody done something similar? I am willing to learn any technology, as long as it's mature. </p>
<p>Tips and advice are welcome.</p>
<p>Thank you,</p>
<p>Sean</p>
<hr>
<p><strong>Update 1</strong>:</p>
<p><em>Database access</em>:</p>
<p>There are three lite applications that can write and read to the database and one application that can only read. </p>
<p>The applications with write access are responsible for setting a non-overlapping subset of the hardware parameters. To be specific, we have one application (of which there may be multiple copies) which sets the high voltage, one application which sets the remainder of the hardware parameters which may change during the experiment, and one GUI which sets the remainder of the parameters which are nearly static and are only essential for the proper reconstruction of the data. </p>
<p>The program with read access only is our data analysis software. It needs access to nearly all of the parameters in the database to properly format the incoming data into something we can analyze properly. The number of connections to the database should be >10. </p>
<p><em>Backups</em>:</p>
<p>Another setup at our lab dumps an xml file every run. Even though I don't think xml is appropriate, I was planning to back up the system every run, just in case. </p> | 2009-05-27 21:52:58.840000+00:00 | 2009-05-27 23:45:54.293000+00:00 | 2009-05-27 22:49:31.240000+00:00 | database|hardware|sqlite|physics | ['http://www.slac.stanford.edu/spires/hep/', 'http://arxiv.org/'] | 2 |
50,307,204 | <p>You can implement multidimensional systems by using the first argument of <code>y</code> to indicate which component you want to use. Also, your definition of the right-hand side of the differential equation must have two components.</p>
<p>For instance, you can implement your example as follows:</p>
<pre><code>import symengine
from jitcdde import jitcdde, y, t

# the parameters correspond to a1, α, ω1, Q1 and τ from your question and must be defined first,
# e.g. a, α, ω, Q, τ = 70, 10, 2260, 50, 4e-3
f = [
y(1),
a*α/ω*y(1,t-τ)*(1-symengine.tanh(y(0,t-τ))**2)-y(0)-y(1)/Q
]
DDE = jitcdde(f)
</code></pre>
<p>What was <em>v</em> in your equation is now <code>y(0)</code>; <em>y</em> has become <code>y(1)</code>.</p>
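<p>To actually integrate the system from there, a short sketch (my addition, not part of the original answer — see the jitcdde documentation for the details of these calls): supply a constant past for the two components, handle the initial discontinuities, and then integrate:</p>
<pre><code>import numpy as np

DDE.constant_past([1.0, 0.0])      # initial history for y(0) (your v) and y(1) (your y)
DDE.step_on_discontinuities()

times = np.arange(DDE.t, DDE.t + 0.1, 1e-4)
solution = np.vstack([DDE.integrate(time) for time in times])
</code></pre>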
<p>There is an example for a second-order differential equation such as yours in the <a href="http://dx.doi.org/10.1063/1.5019320" rel="nofollow noreferrer">accompanying paper</a> (<a href="https://arxiv.org/abs/1711.09886" rel="nofollow noreferrer">preprint</a>).</p> | 2018-05-12 14:18:46.297000+00:00 | 2018-05-12 15:13:59.010000+00:00 | 2018-05-12 15:13:59.010000+00:00 | null | 50,305,413 | <p>I'm a student coding in Python and trying to solve the folowing delayed differential equation:</p>
<p>The system of delay differential equations is (with a1 = 70, Q1 = 50, ω1 = 2260, α = 10, τ ∈ [0, 8e-3]):</p>
<pre><code>v'(t) = y(t)
y'(t) = (a1·α/ω1) · y(t−τ) · (1 − tanh²[v(t−τ)]) − v(t) − y(t)/Q1
</code></pre>
<p>I wanted to use JiTCDDE, but didn’t succeed in finding a way to adapt the system, even after studying the examples in the documentation of the module.
The major problem I have is that I don’t understand how to deal with the second equation containing <em>y</em> and <em>v</em> at the same time.</p>
<p>The goal is to plot the bifurcation diagram of the system (<em>v</em> as a function of <em>τ</em>). Am I using the wrong tool? Or is there a way to use JiTCDDE in my situation?</p> | 2018-05-12 10:50:29.847000+00:00 | 2022-03-01 16:34:53.283000+00:00 | 2022-03-01 16:34:53.283000+00:00 | python|differential-equations|jitcode-jitcdde-jitcsde | ['http://dx.doi.org/10.1063/1.5019320', 'https://arxiv.org/abs/1711.09886'] | 2 |
25,524,893 | <p>According to Wikipedia, <a href="https://en.wikipedia.org/wiki/Inductive_logic_programming" rel="nofollow">Inductive logic programming (ILP)</a> is a subfield of machine learning which uses <a href="https://en.wikipedia.org/wiki/Logic_programming" rel="nofollow">logic programming</a> as a uniform representation for examples, background knowledge and hypotheses.</p>
<p>For example, given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesised logic program which entails all the positive and none of the negative examples. Inductive logic programming is particularly useful in bioinformatics and natural language processing.</p>
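<p>As a toy illustration of that inductive idea (my own sketch — a simple generate-and-test in Python, not a real logic-based ILP system): keep exactly those candidate hypotheses that entail all positive examples and none of the negative ones.</p>
<pre><code># examples are ((inputs), output) pairs; the hypothesis space is a hand-written set of candidates
positives = [((2, 2), 4), ((2, 4), 6), ((4, 4), 8)]
negatives = [((2, 2), 5)]

hypotheses = {
    "sum":     lambda xs: sum(xs),
    "product": lambda xs: xs[0] * xs[1],
    "maximum": lambda xs: max(xs),
}

def entails(h, example):
    inputs, output = example
    return h(inputs) == output

for name, h in hypotheses.items():
    if all(entails(h, e) for e in positives) and not any(entails(h, e) for e in negatives):
        print("consistent hypothesis:", name)   # prints: consistent hypothesis: sum
</code></pre>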
<p>There are some known implementations, such as:</p>
<ul>
<li><a href="http://www.cs.bris.ac.uk/Research/MachineLearning/1BC/" rel="nofollow">1BC and 1BC2: first-order naive Bayesian classifiers</a></li>
<li><a href="http://dtai.cs.kuleuven.be/ACE/" rel="nofollow">ACE (A Combined Engine)</a></li>
<li><a href="http://web.comlab.ox.ac.uk/oucl/research/areas/machlearn/Aleph/" rel="nofollow">Aleph </a></li>
<li><a href="http://www.ahlgren.info/research/atom/" rel="nofollow">Atom </a></li>
<li><a href="http://dtai.cs.kuleuven.be/claudien/" rel="nofollow">Claudien </a></li>
<li><a href="http://dl-learner.org" rel="nofollow">DL-Learner </a></li>
<li><a href="http://dtai.cs.kuleuven.be/dmax/" rel="nofollow">DMax </a></li>
<li><a href="ftp://ftp.cs.su.oz.au/pub/foil6.sh" rel="nofollow">First Order Inductive Learner (FOIL)</a></li>
<li><a href="http://www.doc.ic.ac.uk/~shm/Software/golem" rel="nofollow">Golem (ILP)</a></li>
<li><a href="http://arxiv.org/abs/1407.3836" rel="nofollow">Imparo</a></li>
<li><a href="http://lacam.di.uniba.it:8000/systems/inthelex/" rel="nofollow">Inthelex (INcremental THEory Learner from EXamples) </a></li>
<li><a href="http://cs.anu.edu.au/people/Eric.McCreath/lime.html" rel="nofollow">Lime</a></li>
<li><a href="http://libra.msra.cn/Publication/3392493/mio-user-s-manual" rel="nofollow">Mio </a></li>
<li>MIS (Model Inference System) by Ehud Shapiro</li>
<li><a href="http://www.doc.ic.ac.uk/~shm/Software/progol5.0" rel="nofollow">PROGOL</a></li>
<li><a href="http://labe.felk.cvut.cz/~zelezny/rsd/" rel="nofollow">RSD</a></li>
<li>Warmr (now included in ACE)</li>
<li><a href="http://ilp.doc.ic.ac.uk/ProGolem/" rel="nofollow">ProGolem </a></li>
</ul> | 2014-08-27 10:34:39.807000+00:00 | 2014-08-27 10:40:48.603000+00:00 | 2014-08-27 10:40:48.603000+00:00 | null | 25,524,069 | <p>The language where the computer is told what the problem is, not how to solve the problem. So given a database or a set of rules, the computer tries to find a solution matching all the desired properties.</p>
<p>Example 1 (format: input vars => expected output)</p>
<p>Set of rules:
<code>2, 2 => 4; 2, 4 => 6; 4, 4 => 8</code>, etc.</p>
<p>Then program learns that it needs to add all the input variables.</p> | 2014-08-27 09:54:52.867000+00:00 | 2014-08-27 10:40:48.603000+00:00 | 2014-08-27 10:31:14.803000+00:00 | language-agnostic|machine-learning|artificial-intelligence|induction | ['https://en.wikipedia.org/wiki/Inductive_logic_programming', 'https://en.wikipedia.org/wiki/Logic_programming', 'http://www.cs.bris.ac.uk/Research/MachineLearning/1BC/', 'http://dtai.cs.kuleuven.be/ACE/', 'http://web.comlab.ox.ac.uk/oucl/research/areas/machlearn/Aleph/', 'http://www.ahlgren.info/research/atom/', 'http://dtai.cs.kuleuven.be/claudien/', 'http://dl-learner.org', 'http://dtai.cs.kuleuven.be/dmax/', 'ftp://ftp.cs.su.oz.au/pub/foil6.sh', 'http://www.doc.ic.ac.uk/~shm/Software/golem', 'http://arxiv.org/abs/1407.3836', 'http://lacam.di.uniba.it:8000/systems/inthelex/', 'http://cs.anu.edu.au/people/Eric.McCreath/lime.html', 'http://libra.msra.cn/Publication/3392493/mio-user-s-manual', 'http://www.doc.ic.ac.uk/~shm/Software/progol5.0', 'http://labe.felk.cvut.cz/~zelezny/rsd/', 'http://ilp.doc.ic.ac.uk/ProGolem/'] | 18 |
71,771,243 | <p>There are 2 things that differ in the implementations of ResNet50 in <a href="https://github.com/keras-team/keras/blob/v2.8.0/keras/applications/resnet.py" rel="nofollow noreferrer">TensorFlow</a> and <a href="https://pytorch.org/vision/stable/_modules/torchvision/models/resnet.html#resnet50" rel="nofollow noreferrer">PyTorch</a> that I could notice and might explain your observation.</p>
<ol>
<li><p>The batch normalization does not have the same momentum in both. It's <a href="https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html" rel="nofollow noreferrer">0.1 in PyTorch</a> and <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization" rel="nofollow noreferrer">0.01 in TensorFlow</a> (although it is reported as 0.99 I am writing it down in PyTorch's convention for comparison here). This might affect training and therefore the weights.</p>
</li>
<li><p>TensorFlow's implementation <a href="https://github.com/keras-team/keras/blob/v2.8.0/keras/applications/resnet.py#L213-L255" rel="nofollow noreferrer">uses biases</a> in convolutions while PyTorch's one doesn't (as can be seen in the <code>conv3x3</code> and <code>conv1x1</code> definitions). Because the batch normalization layers are affine, the biases are not needed, and are spurious. I think this is truly what explains the difference in your case since they can be compensated by the batch norm, and therefore be arbitrarily large, which would be why you observe a bigger range for TF.
Another way to see this is to compare the summaries as I did in <a href="https://colab.research.google.com/drive/1RCmWkpwuKFapzzPacbqodxz0mqt9Igft?usp=sharing" rel="nofollow noreferrer">this colab</a>.</p>
</li>
</ol>
<p>I currently have a <a href="https://github.com/keras-team/keras/pull/16363" rel="nofollow noreferrer">PR that should fix the bias part</a> (at least provide the possibility to train a resnet without conv bias in TF), and plan on submitting one for BN soon.</p>
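<p>A quick way to verify the bias point yourself (my own sketch, not taken from the linked colab; exact layer counts may vary between library versions):</p>
<pre><code>import tensorflow as tf
import torch
import torchvision

tf_model = tf.keras.applications.ResNet50(weights=None)
tf_bias = [l.use_bias for l in tf_model.layers if isinstance(l, tf.keras.layers.Conv2D)]
print("TF conv layers with bias:", sum(tf_bias), "/", len(tf_bias))

pt_model = torchvision.models.resnet50(pretrained=False)
pt_bias = [m.bias is not None for m in pt_model.modules() if isinstance(m, torch.nn.Conv2d)]
print("PyTorch conv layers with bias:", sum(pt_bias), "/", len(pt_bias))
</code></pre>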
<h2>EDIT</h2>
<p>I have actually found more differences, which I listed in a paper I recently wrote. You can check them in <a href="https://ar5iv.labs.arxiv.org/html/2206.13424#A6.T3" rel="nofollow noreferrer">Table 3 of Appendix F</a>.</p>
<p>For completeness of the answer, I list here those that might have an impact on the output feature statistics:</p>
<ul>
<li>the variance estimation in the batch norm is different</li>
<li>the convolution weights and classification head weights and bias initialization are not the same</li>
</ul> | 2022-04-06 17:30:03.523000+00:00 | 2022-08-05 09:19:02.293000+00:00 | 2022-08-05 09:19:02.293000+00:00 | null | 67,365,237 | <p>"Obviously!", you might say... But there's one significant difference that I have trouble explaining by the difference in random initialization.</p>
<p>Take the two pre-trained basenets (before the average pooling layer) and feed them the same image, and you will notice that the output features don't follow the same distribution. Specifically, <strong>TensorFlow</strong>'s backbone has more features inhibited by the ReLU compared to <strong>Pytorch</strong>'s backbone. Additionally, as shown in the third figure, the dynamic range is different between the two frameworks.</p>
<p><a href="https://i.stack.imgur.com/N80rP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N80rP.png" alt="Features distribution" /></a></p>
<p>Of course, this difference is absorbed by the dense layer addressing the classification task, but: Can that difference be explained by randomness in the training process? Or training time? Or is there something else that would explain the difference?</p>
<p>Code to reproduce:</p>
<pre><code>import imageio
import numpy as np
image = imageio.imread("/tmp/image.png").astype(np.float32)/255
import tensorflow as tf
inputs = image[np.newaxis]
model = tf.keras.applications.ResNet50(include_top=False, input_shape=(None, None, 3))
output = model(inputs).numpy()
print(f"TensorFlow features range: [{np.min(output):.02f};{np.max(output):.02f}]")
import torchvision
import torch
model = torch.nn.Sequential(*list(torchvision.models.resnet50(pretrained=True).children())[0:8])
inputs = torch.tensor(image).permute(2,0,1).unsqueeze(0)
output = model(inputs).detach().permute(0,2,3,1).numpy()
print(f"Pytorch features range: [{np.min(output):.02f};{np.max(output):.02f}]")
</code></pre>
<p>Outputting</p>
<pre><code>TensorFlow features range: [0.00;25.98]
Pytorch features range: [0.00;12.00]
</code></pre>
<p>Note: it's similar to any image.</p> | 2021-05-03 07:46:52.100000+00:00 | 2022-08-05 09:19:02.293000+00:00 | 2022-04-08 13:31:40.203000+00:00 | tensorflow|deep-learning|pytorch|resnet|pre-trained-model | ['https://github.com/keras-team/keras/blob/v2.8.0/keras/applications/resnet.py', 'https://pytorch.org/vision/stable/_modules/torchvision/models/resnet.html#resnet50', 'https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html', 'https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization', 'https://github.com/keras-team/keras/blob/v2.8.0/keras/applications/resnet.py#L213-L255', 'https://colab.research.google.com/drive/1RCmWkpwuKFapzzPacbqodxz0mqt9Igft?usp=sharing', 'https://github.com/keras-team/keras/pull/16363', 'https://ar5iv.labs.arxiv.org/html/2206.13424#A6.T3'] | 8 |
56,784,754 | <p>What you learned was the sine function and not its derivative: during the training process, you are controlling the error with your cost function, which takes into account only the values, but it does not control the slope at all: you could have learned a very noisy function that still matches the data points exactly.</p>
<p>If you are just using the data points in your cost function, you have no guarantee about the derivative you've learned. However, with some advanced training techniques, you could also learn such a derivative: <a href="https://arxiv.org/abs/1706.04859" rel="nofollow noreferrer">https://arxiv.org/abs/1706.04859</a></p>
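<p>Independently of that, if you only need to <em>evaluate</em> the first and second derivatives of the function you already learned, nested gradient tapes do it per input point (a sketch assuming TF 2.x eager execution and that <code>neural_net</code> is the trained model from the question):</p>
<pre><code>import tensorflow as tf

x = tf.reshape(tf.linspace(-3.0, 3.0, 200), (-1, 1))   # column of test inputs
with tf.GradientTape() as t2:
    t2.watch(x)
    with tf.GradientTape() as t1:
        t1.watch(x)
        y = neural_net(x)
    dy_dx = t1.gradient(y, x)        # first derivative w.r.t. the input
d2y_dx2 = t2.gradient(dy_dx, x)      # second derivative w.r.t. the input
</code></pre>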
<p>So as a summary, it is not a code issue but only
a theoritical issue</p> | 2019-06-27 05:56:06.257000+00:00 | 2019-06-27 05:56:06.257000+00:00 | null | null | 56,772,362 | <p>I trained a neural network to do a regression on the sine function and would like to compute the first and second derivative with respect to the input.
I tried using the tf.gradients() function like this (neural_net is an instance of tf.keras.Sequential):</p>
<pre class="lang-py prettyprint-override"><code>prediction = neural_net(x_value)
dx_f = tf.gradients(prediction, x_value)
dx_dx_f = tf.gradients(dx_f, x_value)
</code></pre>
<p>x_value is an array that has the length of the test size.
However, this results in <a href="https://i.stack.imgur.com/VpqAf.jpg" rel="noreferrer">predictions and derivatives</a>. The prediction of the network (blue curve) basically exactly catches the sine function, but I had to divide the first derivative (orange) with a factor of 10 and the second derivative (green) with a factor of 100 in order for it to be in the same order of magnitude. So the, the first derivative looks (after that rescale) ok, but the seond derivative is completely erratic. Since the prediction of the sine function works really well there is clearly something funny going on here.</p> | 2019-06-26 12:02:42.123000+00:00 | 2019-06-27 05:56:06.257000+00:00 | null | python|tensorflow | ['https://arxiv.org/abs/1706.04859'] | 1 |
57,644,106 | <p>I have an alternative proposal. Run head detection rather than face detection. But you would still have to create a dataset for profile faces as I cannot find one yet. </p>
<p>For face detection, I am currently running DSFD by Tencent on the video on which I want to detect profile faces. It seems more robust than MTCNN on frontal faces oriented sideways (person lying down). </p>
<p><a href="https://github.com/TencentYoutuResearch/FaceDetection-DSFD" rel="nofollow noreferrer">Tencent DSFD</a></p>
<p>Head detection has a few Github repos but this one looks really good with models, Arxiv paper and examples:</p>
<p><a href="https://github.com/nyck33/cnn_head_detection" rel="nofollow noreferrer">head detection</a></p>
<p>I'll add a profile face dataset here if I find one. Actually just realized one of the answers above mentions a few such iBug and AFLW, etc.</p> | 2019-08-25 07:31:55.637000+00:00 | 2019-08-25 07:40:20.103000+00:00 | 2019-08-25 07:40:20.103000+00:00 | null | 38,870,106 | <p>I built a facial landmark predictor for frontal faces (similar to 68 landmarks of dlib). Now, I would like to continue to profile faces. Firstly, what I need is:
1 - A robust detector for profile face.
2 - Profile faces dataset and corresponding landmarks (key-points) annotations.</p>
<p>Any suggestion is welcome. Thanks a lot.</p> | 2016-08-10 09:59:48.320000+00:00 | 2019-08-25 07:40:20.103000+00:00 | null | dataset|computer-vision | ['https://github.com/TencentYoutuResearch/FaceDetection-DSFD', 'https://github.com/nyck33/cnn_head_detection'] | 2 |
65,975,351 | <p>You can find why they do it in the original paper, <a href="https://arxiv.org/abs/1508.06576" rel="nofollow noreferrer">A Neural Algorithm of Artistic Style</a>. In short, it is done mainly because, after calculating the gradient of the loss, the constant cancels out, so the formula looks nicer.</p>
<p><a href="https://i.stack.imgur.com/jy6fL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jy6fL.png" alt="enter image description here" /></a></p>
<p><code>Flij</code> is the activation of the i-th filter at position j in layer l.</p>
<p><a href="https://i.stack.imgur.com/VTtC7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VTtC7.png" alt="enter image description here" /></a></p>
<p><code>Nl</code> and <code>Ml</code> are the number of feature maps and the height times the width of the feature map at layer l; <code>El</code> is the style loss at layer l.</p>
<p><a href="https://i.stack.imgur.com/oSpo3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oSpo3.png" alt="enter image description here" /></a></p>
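<p>To make that concrete (my own worked step, using the notation above): differentiating the squared term contributes a factor 2, and since the Gram matrix G = F·Fᵀ is quadratic in F, the chain rule contributes another factor 2, so the 4 in the denominator is cancelled exactly (up to the transpose/indexing conventions and the ReLU clamp used in the paper):</p>
<pre><code>E_l = \frac{1}{4 N_l^2 M_l^2} \sum_{i,j} (G_{ij} - A_{ij})^2

\frac{\partial E_l}{\partial F}
    = \frac{2 \cdot 2}{4 N_l^2 M_l^2} (G - A) F
    = \frac{1}{N_l^2 M_l^2} (G - A) F
</code></pre>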
<p>As you can see, the constant value is canceled out.</p>
<p>Of course, you could do the math without canceling out the constant value in the gradient; after all, it is just a constant and will not have much effect on the result.</p>
<p>The paper did the same for the content loss as for the style loss:</p>
<p><a href="https://i.stack.imgur.com/jxZWZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jxZWZ.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/uDTge.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uDTge.png" alt="enter image description here" /></a></p>
<p><code>Plij</code> is the content image's feature representation in layer l.</p>
<p>Meanwhile, the code in the <a href="https://keras.io/examples/generative/neural_style_transfer/" rel="nofollow noreferrer">Keras example of neural style transfer</a> did not cancel out the constant value for the content loss:</p>
<pre><code>def content_loss(base, combination):
return tf.reduce_sum(tf.square(combination - base))
</code></pre>
<p>Or you could think of it as canceling out the constant value via <code>content_weight</code>:</p>
<pre><code>loss = loss + content_weight * content_loss(
base_image_features, combination_features
)
</code></pre> | 2021-01-31 02:59:28.020000+00:00 | 2021-01-31 03:12:52.283000+00:00 | 2021-01-31 03:12:52.283000+00:00 | null | 65,974,937 | <p>I am following along the keras example for neural style transfer and in the style loss function they divide by the number 4, I have read the original paper by Gatys et al. and other articles in search of the meaning of this integer and its contribution to the whole pprocess and cannot find an explanation.</p>
<p>code is from: <a href="https://keras.io/examples/generative/neural_style_transfer/" rel="nofollow noreferrer">https://keras.io/examples/generative/neural_style_transfer/</a></p>
<p>Does anyone know what it is ?</p>
<pre><code> def style_loss(style, combination):
S = gram_matrix(style)
C = gram_matrix(combination)
channels = 3
size = img_nrows * img_ncols
return tf.reduce_sum(tf.square(S - C)) / (4.0 * (channels ** 2) * (size ** 2))
</code></pre> | 2021-01-31 01:34:42.943000+00:00 | 2021-01-31 03:12:52.283000+00:00 | null | python|machine-learning|keras|tensorflow2.0 | ['https://arxiv.org/abs/1508.06576', 'https://i.stack.imgur.com/jy6fL.png', 'https://i.stack.imgur.com/VTtC7.png', 'https://i.stack.imgur.com/oSpo3.png', 'https://i.stack.imgur.com/jxZWZ.png', 'https://i.stack.imgur.com/uDTge.png', 'https://keras.io/examples/generative/neural_style_transfer/'] | 7 |
63,094,298 | <p>An MLP is not suited for recommendations. If you want to go this route, you will need to create an embedding for your userid and another for your itemid and then add linear layers on top of the embeddings. Your target will be to predict the rating for a userid-itemid pair.</p>
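<p>A minimal sketch of that embedding-plus-linear-layers idea in PyTorch (my own illustration — the layer sizes are arbitrary, and <code>nb_users</code>/<code>nb_movies</code> are the counts you already compute in your code):</p>
<pre><code>import torch
import torch.nn as nn

class MLPRecommender(nn.Module):
    def __init__(self, nb_users, nb_movies, emb_dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(nb_users, emb_dim)
        self.item_emb = nn.Embedding(nb_movies, emb_dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, user_ids, item_ids):
        # concatenate the two embeddings and regress the rating
        x = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=1)
        return self.mlp(x).squeeze(1)
</code></pre>
<p>Training such a model uses (user, movie, rating) triples and an MSE loss, instead of the whole ratings row per user that the stacked autoencoder consumes.</p>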
<p>I suggest you take a look at variational autoencoders (VAE). They give state-of-the-art results in recommender systems. They will also give a fair comparaison with your stacked-autoencoder. Here's the research paper applying VAE for collaborative filtering : <a href="https://arxiv.org/pdf/1802.05814.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1802.05814.pdf</a></p> | 2020-07-25 22:40:30.617000+00:00 | 2020-07-25 22:40:30.617000+00:00 | null | null | 62,839,095 | <p>For my project , i’m trying to predict the ratings that a user will give to an unseen movie, based on the ratings he gave to other movies. I’m using the <strong>movielens dataset</strong>.The Main folder, which is ml-100k contains informations about <strong>100,000 movies</strong>.</p>
<p>Before processing the data, the main data (ratings data) contains <strong>user ID, movie ID, user rating</strong> from 0 to 5, and <strong>timestamps</strong> (not considered for this project). I then split the data into a <strong>training set (80%) and test data (20%) using the sklearn library.</strong></p>
<p>To create the recommendation systems, the model ‘<strong>Stacked-Autoencoder</strong>’ is being used. I’m using <strong>PyTorch</strong> and the <strong>code is implemented on Google Colab</strong>. The project is based on this <a href="https://towardsdatascience.com/stacked-auto-encoder-as-a-recommendation-system-for-movie-rating-prediction-33842386338" rel="nofollow noreferrer">https://towardsdatascience.com/stacked-auto-encoder-as-a-recommendation-system-for-movie-rating-prediction-33842386338</a></p>
<p>I'm new to deep learning and I want to compare this model (Stacked-Autoencoder) to another deep learning model. For instance, I want to use a <strong>Multilayer Perceptron (MLP)</strong>. This is for research purposes. Below is the code for creating the Stacked-Autoencoder model and training it.</p>
<pre><code>### Part 1 : Architecture of the AutoEncoder
#nn.Module is a parent class
# SAE is a child class of the parent class nn.Module
class SAE(nn.Module):
# self is the object of the SAE class
# Archirecture
def __init__(self, ):
# self can use alll the methods of the class nn.Module
super(SAE,self).__init__()
# Full connected layer n°1, input and 20 neurons-nodes of the first layer
# one neuron can be the genre of the movie
# Encode step
self.fc1 = nn.Linear(nb_movies,20)
# Full connected layer n°2
self.fc2 = nn.Linear(20,10)
# Decode step
# Full connected layer n°3
self.fc3 = nn.Linear(10,20)
# Full connected layer n°4
self.fc4 = nn.Linear(20,nb_movies)
# Sigmoid activation function
self.activation = nn.Sigmoid()
# Action : activation of the neurons
def forward(self, x) :
x = self.activation(self.fc1(x))
x = self.activation(self.fc2(x))
x = self.activation(self.fc3(x))
# dont's use the activation function
# use the linear function only
x = self.fc4(x)
# x is th evector of predicted ratings
return x
# Create the AutoEncoder object
sae=SAE()
#MSE Loss : imported from torch.nn
criterion=nn.MSELoss()
# RMSProp optimizer (update the weights) imported from torch.optim
#sea.parameters() are weights and bias adjusted during the training
optimizer = optim.RMSprop(sae.parameters(), lr=0.01, weight_decay=0.5)
### Part 2 : Training of the SAE
# number of epochs
nb_epoch = 200
# Epoch forloop
for epoch in range(1, nb_epoch+1):
# at the beginning the loss is at zero
s=0.
train_loss = 0
#Users forloop
for id_user in range(nb_users)
# add one dimension to make a two dimension vector.
# create a new dimension and put it the first position .unsqueeze[0]
input = Variable(training_set[id_user].unsqueeze[0])
# clone the input to obtain the target
target= input.clone()
# target.data are all the ratings
# ratings > 0
if torch.sum(target.data >0) > 0
output = sae(input)
# don't compute the gradients regarding the target
target.require_grad=False
# only deal with true ratings
output[target==0]=0
# Loss Criterion
loss =criterion(output,target)
# Average the error of the movies that don't have zero ratings
mean_corrector=nb_movies/float(torch.sum(target.data>0)+1e-10)
# Direction of the backpropagation
loss.backward()
train_loss+=np.sqrt(loss.data[0]*mean_corrector)
s+=1.
# Intensity of the backpropagation
optimizer.step()
print('epoch:' +str (epoch)+'loss:' +str(train_loss/s)
</code></pre>
<p>)</p>
<p>If I want to train using an MLP model, how can I implement that model class?
Also, what other deep learning models (besides MLP) could I use to compare with the Stacked-Autoencoder?</p>
<p>Thanks.</p> | 2020-07-10 17:02:57.453000+00:00 | 2021-08-21 22:25:45.203000+00:00 | null | python|deep-learning|pytorch|autoencoder|recommendation-system | ['https://arxiv.org/pdf/1802.05814.pdf'] | 1 |
27,865,215 | <p>After a request for clarification, the question is about IEEE 754, independently of a programming language. In this context, obtaining the result <code>2.4196151872870495e-72</code> for the division being considered, in “round-to-nearest”, is purely and simply incorrect. The correct result is <code>2.41961518728705e-72</code>, according to the definition found in the question:</p>
<blockquote>
<p>[...] every operation [...] shall be performed as if it first produced an intermediate result correct to infinite precision and with unbounded range, and then rounded that result [...].</p>
</blockquote>
<p>What happened in practice is that most programming language implementations, and often specifications, do not put a lot of emphasis on the strict respect of IEEE 754 semantics for floating-point operations. Even when the IEEE 754 double-precision representation is used for storage of floating-point values, operations can end up being implemented as:</p>
<ul>
<li><p>if the arguments aren't already 80-bit floating-point values with 64-bit significands, <strong>conversion</strong> from double-precision to this format. This does not lose precision and would not be a problem in itself</p></li>
<li><p><strong>computation of a 80-bit result</strong> from the 80-bit operands, because this is what is easy without extra effort when computing with the 8087 instruction set</p></li>
<li><p>just after that or later, <strong>conversion</strong> (in other words, <strong>rounding</strong>) of the 80-bit value with its 64-bit significand to a double-precision value with a 53-bit significand.</p></li>
</ul>
<p>In some cases the last step does not take place immediately but at the whim of the compiler. This is particularly annoying because it makes the code non-deterministic. Adding separate debugging code that should not affect computations does change them by changing the availability of 80-bit registers and causing some of them to be spilt and rounded to double-precision.</p>
<p>Even when storage in double-precision happens immediately for each intermediate result, there remains the issue that the result has been computed, and correctly rounded, for a significand of 64 bits, and then rounded again to 53 bits. In some cases, the mathematical result is close to the midpoint between two double-precision value, and rounding it to 64 bits of significand drags it to the exact middle. If this result with its 64-bit significand is then rounded to 53 bits, the end result is a different value than the direct application of the IEEE 754 rule would have produced. This only happens when the mathematical result is very close to the midpoint between two double-precision numbers, so that the two answers are both almost equally accurate answers, but one of them is what the IEEE 754 standard says and not the other.</p>
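<p>A quick way to check what your own platform produces for this exact division (a small sketch; the bit patterns are the ones from the question):</p>
<pre><code>import struct

def from_bits(h):
    return struct.unpack('>d', bytes.fromhex(h))[0]

a = from_bits('4FB6593CEBC97CC5')   # 1.0108552519184509e+76
b = from_bits('5E94E917A9CC65DC')   # 4.1777521369084075e+147
print(struct.pack('>d', a / b).hex().upper())
# a correctly rounding platform (e.g. SSE2 hardware) prints 311119B130D4ADEF
</code></pre>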
<p>The article <a href="http://arxiv.org/abs/cs/0701192" rel="nofollow noreferrer">The pitfalls of verifying floating-point computations</a> makes good further
reading.</p>
<p>Notes:</p>
<p>As mentioned by Patricia in her answer, the reason that IEEE 754 specifies that +, -, *, / and √ should compute as if the mathematical result, sometimes with infinite digits, had been computed and then rounded, is that algorithms exist to obtain this result without computing the entire mathematical result. When no algorithms are known to obtain this “correctly rounded” result cheaply, for instance for trigonometric functions, the standard does not mandate it.</p>
<p>Since you found a solution on a page that explains how to configure the 387 FPU to round directly at 53 bits of significand, I should point out that double-rounding problems can remain even after this configuration, although much rarer. Indeed, while the significand of the FPU can be limited to 53 bits, there is no equivalent way to limit the exponent. A double-precision operation that produces a subnormal result will tend to be double-rounded when computed on the 387 even in 53-bit-significand mode. This caused me to ask this <a href="https://stackoverflow.com/questions/18496560/how-do-java-runtimes-targeting-pre-sse2-processors-implement-floating-point-basi">question about how Java implementations implement multiplication on the 387</a>.</p> | 2015-01-09 16:44:20.613000+00:00 | 2015-01-09 17:01:03.277000+00:00 | 2017-05-23 12:28:28.253000+00:00 | null | 27,858,307 | <p>From IEEE754, I read </p>
<blockquote>
<p>[...] every operation [...] shall be performed as if it first produced an intermediate
result correct to infinite precision and with unbounded range, and then rounded that
result [...].</p>
</blockquote>
<p>My understanding is when dividing the double <code>1.0108552519184509e+76</code> (<code>0x4FB6593CEBC97CC5</code>) by <code>4.1777521369084075e+147</code> (<code>0x5E94E917A9CC65DC</code>), the theoretical intermediate fraction part is
(binary) </p>
<pre><code>1.0001000110011011000100110000110101001010110111101110100000000000001...
</code></pre>
<p>and should get rounded to (rounding mode "nearest")</p>
<pre><code>1.0001000110011011000100110000110101001010110111101111
</code></pre>
<p>resulting in a quotient of <code>2.41961518728705e-72</code> (<code>0x311119B130D4ADEF</code>).</p>
<p>One SW here yields <code>2.4196151872870495e-72</code> (<code>0x311119B130D4ADEE</code>) which seems to indicate it calculates the intermediate fraction only up to a certain position, e.g.</p>
<pre><code>1.000100011001101100010011000011010100101011011110111010000000000
</code></pre>
<p>and then rounds.</p>
<p>Is this compliant with IEEE754? Is it a common approach? </p> | 2015-01-09 10:15:22.510000+00:00 | 2015-01-09 17:01:03.277000+00:00 | null | floating-point|floating-accuracy | ['http://arxiv.org/abs/cs/0701192', 'https://stackoverflow.com/questions/18496560/how-do-java-runtimes-targeting-pre-sse2-processors-implement-floating-point-basi'] | 2 |
57,523,988 | <p>I guess you are using your own dataset. If not, consider using the same hyperparameters as in the DCGAN paper and first reproducing their results, then using that network for your dataset. DCGANs, especially with cross-entropy loss, are known to be really tricky and extremely hyperparameter-sensitive to get running (see <a href="https://arxiv.org/abs/1711.10337" rel="nofollow noreferrer">https://arxiv.org/abs/1711.10337</a>), especially if you did not constrain your discriminator with some kind of gradient penalty (Gradient Penalty or Spectral Norm).</p>
<p>In GAN's using your own ideas for the hyperparameters is mostly a bad idea, unless you either really know what you are doing or have loads of GPU power to burn on a large scale search. </p> | 2019-08-16 11:39:35.060000+00:00 | 2019-08-16 11:39:35.060000+00:00 | null | null | 57,513,715 | <p>I cannot tell if this error is due to a technical mistake or hyper-parameters, but my DC-GAN's discriminator loss starts low and gradually climbs higher, slowing down around 8, whereas my generator loss goes way down. I ended it at about 60,000 epochs. Funny enough, the discriminator accuracy seems to be floating around 20-50%. Does anybody have any suggestions to fix the problem? Any help is appreciated.</p>
<p><strong>Important Info</strong></p>
<ul>
<li>Data Format: 472 320x224 Color PNG files.</li>
<li>Optimizer: <code>Adam(0.0002, 0.5)</code></li>
<li>Loss: Binary Cross-Entropy</li>
</ul>
<p><strong>A generated image after 50,000+ epochs: (Supposed to be a sneaker on a white background)</strong></p>
<p><a href="https://i.stack.imgur.com/aX5il.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aX5il.png" alt="enter image description here"></a></p>
<p><strong>Discriminator Model:</strong></p>
<pre class="lang-python prettyprint-override"><code> def build_discriminator(self):
img_shape = (self.img_size[0], self.img_size[1], self.channels)
model = Sequential()
model.add(Conv2D(32, kernel_size=self.kernel_size, strides=2, input_shape=img_shape, padding="same")) # 192x256 -> 96x128
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(Conv2D(64, kernel_size=self.kernel_size, strides=2, padding="same")) # 96x128 -> 48x64
model.add(ZeroPadding2D(padding=((0, 1), (0, 1))))
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(BatchNormalization(momentum=0.8))
model.add(Conv2D(128, kernel_size=self.kernel_size, strides=2, padding="same")) # 48x64 -> 24x32
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(BatchNormalization(momentum=0.8))
model.add(Conv2D(256, kernel_size=self.kernel_size, strides=1, padding="same")) # 24x32 -> 12x16
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(Conv2D(512, kernel_size=self.kernel_size, strides=1, padding="same")) # 12x16 -> 6x8
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.summary()
img = Input(shape=img_shape)
validity = model(img)
return Model(img, validity)
</code></pre>
<p><strong>Generator Model:</strong></p>
<pre class="lang-python prettyprint-override"><code> def build_generator(self):
noise_shape = (100,)
model = Sequential()
model.add(
Dense(self.starting_filters * (self.img_size[0] // (2 ** self.upsample_layers)) * (self.img_size[1] // (2 ** self.upsample_layers)),
activation="relu", input_shape=noise_shape))
model.add(Reshape(((self.img_size[0] // (2 ** self.upsample_layers)),
(self.img_size[1] // (2 ** self.upsample_layers)),
self.starting_filters)))
model.add(BatchNormalization(momentum=0.8))
model.add(UpSampling2D()) # 6x8 -> 12x16
model.add(Conv2D(1024, kernel_size=self.kernel_size, padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(momentum=0.8))
model.add(UpSampling2D()) # 12x16 -> 24x32
model.add(Conv2D(512, kernel_size=self.kernel_size, padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(momentum=0.8))
model.add(UpSampling2D()) # 24x32 -> 48x64
model.add(Conv2D(256, kernel_size=self.kernel_size, padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(momentum=0.8))
model.add(UpSampling2D()) # 48x64 -> 96x128
model.add(Conv2D(128, kernel_size=self.kernel_size, padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(momentum=0.8))
model.add(UpSampling2D()) # 96x128 -> 192x256
model.add(Conv2D(64, kernel_size=self.kernel_size, padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(momentum=0.8))
model.add(Conv2D(32, kernel_size=self.kernel_size, padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(momentum=0.8))
model.add(Conv2D(self.channels, kernel_size=self.kernel_size, padding="same"))
model.add(Activation("tanh"))
model.summary()
noise = Input(shape=noise_shape)
img = model(noise)
return Model(noise, img)
</code></pre> | 2019-08-15 17:29:05.373000+00:00 | 2019-08-16 11:39:35.060000+00:00 | null | python|tensorflow|machine-learning|deep-learning|generative-adversarial-network | ['https://arxiv.org/abs/1711.10337'] | 1 |
57,044,083 | <p>There are a few open source vision packages that are able to detect text in noisy background images, comparable to Google's Vision API.</p>
<p>You can use a simple, fully-convolutional architecture called EAST (Efficient and Accurate Scene Text Detector) by Zhou et al.
<a href="https://arxiv.org/abs/1704.03155v2" rel="noreferrer">https://arxiv.org/abs/1704.03155v2</a></p>
<p>Using Python:</p>
<p>Download the Pre-trained model from: <b> <a href="https://www.dropbox.com/s/r2ingd0l3zt8hxs/frozen_east_text_detection.tar.gz?dl=1" rel="noreferrer">https://www.dropbox.com/s/r2ingd0l3zt8hxs/frozen_east_text_detection.tar.gz?dl=1</a> </b>.
Extract the model to your current folder.</p>
<p>You will need OpenCV >= 3.4.2 to execute the below commands.</p>
<pre><code>import cv2
import math
net = cv2.dnn.readNet("frozen_east_text_detection.pb") #This is the model we get after extraction
frame = cv2.imread(<image_filename>)
inpWidth = inpHeight = 320 # A default dimension
# Preparing a blob to pass the image through the neural network
# Subtracting mean values used while training the model.
image_blob = cv2.dnn.blobFromImage(frame, 1.0, (inpWidth, inpHeight), (123.68, 116.78, 103.94), True, False)
</code></pre>
<p>Now we will have to define the output layers, which churn out the positional values of the detected text and its confidence score (through the sigmoid function).</p>
<pre><code>output_layer = []
output_layer.append("feature_fusion/Conv_7/Sigmoid")
output_layer.append("feature_fusion/concat_3")
</code></pre>
<p>Finally we will do a Forward Propagation through the network to get the desired output.</p>
<pre><code>net.setInput(image_blob)
output = net.forward(output_layer)
scores = output[0]
geometry = output[1]
</code></pre>
<p>Here I have used the decode function defined in OpenCV's GitHub sample, <a href="https://github.com/opencv/opencv/blob/master/samples/dnn/text_detection.py" rel="noreferrer">https://github.com/opencv/opencv/blob/master/samples/dnn/text_detection.py</a> (lines 23 to 75), to convert the positional values into box coordinates.</p>
<p>For the box detection threshold I have used a value of 0.5, and for non-maximum suppression I have used 0.3. You can try different values to achieve better bounding boxes.</p>
<pre><code>confThreshold = 0.5
nmsThreshold = 0.3
[boxes, confidences] = decode(scores, geometry, confThreshold)
indices = cv2.dnn.NMSBoxesRotated(boxes, confidences, confThreshold, nmsThreshold)
</code></pre>
<p>Lastly, to overlay the boxes over the detected text in image:</p>
<pre><code>height_ = frame.shape[0]
width_ = frame.shape[1]
rW = width_ / float(inpWidth)
rH = height_ / float(inpHeight)
for i in indices:
# get 4 corners of the rotated rect
vertices = cv2.boxPoints(boxes[i[0]])
# scale the bounding box coordinates based on the respective ratios
for j in range(4):
vertices[j][0] *= rW
vertices[j][1] *= rH
for j in range(4):
p1 = (vertices[j][0], vertices[j][1])
p2 = (vertices[(j + 1) % 4][0], vertices[(j + 1) % 4][1])
cv2.line(frame, p1, p2, (0, 255, 0), 3)
# To save the image:
cv2.imwrite("maggi_boxed.jpg", frame)
</code></pre>
<p><a href="https://i.stack.imgur.com/D5QjJ.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/D5QjJ.jpg" alt="Maggi's Ad with bounding boxes"></a></p>
<p>I have not experimented with different threshold values. Changing them will surely give better results and also remove the misclassifications of the logo as text.</p>
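<p>Not part of the original pipeline, but since the question also wants to <em>grab</em> the text (and is tagged python-tesseract), here is a hedged sketch that crops each detected box from a clean copy of the image and runs it through Tesseract:</p>
<pre><code># hypothetical follow-up, my own addition: OCR the detected regions
import numpy as np
import pytesseract

clean = cv2.imread(<image_filename>)   # same placeholder as above, read before the boxes are drawn
for i in indices:
    vertices = cv2.boxPoints(boxes[i[0]])
    vertices[:, 0] *= rW
    vertices[:, 1] *= rH
    x, y, w, h = cv2.boundingRect(vertices.astype(np.int32))
    crop = clean[max(y, 0):y + h, max(x, 0):x + w]
    print(pytesseract.image_to_string(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)))
</code></pre>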
<p>Note: The model was trained on English corpus, so Hindi words will not be detected. Also you can read the paper which outlines the test datasets it was bench-marked on.</p> | 2019-07-15 16:49:46.613000+00:00 | 2019-07-15 17:00:57.690000+00:00 | 2019-07-15 17:00:57.690000+00:00 | null | 54,821,969 | <p>I am trying to detect and grab text from a screenshot taken from any consumer product's ad.</p>
<p>My code works at a certain accuracy but fails to make bounding boxes around the skewed text area. </p>
<p>Recently I tried the <strong>Google Vision API</strong> and it makes bounding boxes around almost every possible text area and detects the text in those areas with great accuracy. I am curious how I can achieve the same or something similar! </p>
<p>My test image: </p>
<p><a href="https://i.stack.imgur.com/IxBds.png" rel="noreferrer"><img src="https://i.stack.imgur.com/IxBds.png" alt="enter image description here"></a></p>
<p>Google Vision API after bounding boxes: </p>
<p><a href="https://i.stack.imgur.com/TnH1z.png" rel="noreferrer"><img src="https://i.stack.imgur.com/TnH1z.png" alt="enter image description here"></a></p>
<p>Thank you in advance:)</p> | 2019-02-22 07:19:57.137000+00:00 | 2020-05-28 03:23:47.347000+00:00 | 2019-02-22 07:33:19.997000+00:00 | opencv|imagemagick|bounding-box|google-vision|python-tesseract | ['https://arxiv.org/abs/1704.03155v2', 'https://www.dropbox.com/s/r2ingd0l3zt8hxs/frozen_east_text_detection.tar.gz?dl=1', 'https://github.com/opencv/opencv/blob/master/samples/dnn/text_detection.py', 'https://i.stack.imgur.com/D5QjJ.jpg'] | 4 |
40,650,036 | <p>The clever definition of <code>zipWith</code> (which comes from <a href="https://arxiv.org/pdf/1309.5135.pdf" rel="nofollow noreferrer">Launchbury et al.</a>, I believe) doesn't work in Morte, because typing it without negative recursive types (which Morte doesn't have, and which imply <code>fix</code>, as seen in my <a href="https://stackoverflow.com/questions/29885983/infinite-type-error-when-defining-zip-with-foldr-only-can-it-be-fixed/29930561#29930561">mentioned</a> previous answer) requires induction at least on natural numbers. <a href="https://github.com/AndrasKovacs/misc-stuff/blob/master/agda/Hyperfunction.agda" rel="nofollow noreferrer">Here's</a> a simple Agda version of Launchbury's definition without Church encoding; to reproduce this in Morte we'd need functions whose return type depends on natural numbers (the lengths of input lists).</p>
<p>Without induction, the best we can do is an <code>O(N^2)</code> definition that uses <code>O(N)</code> pattern matching on lists, i. e. a <code>List A -> Maybe (A, List A)</code> function. It's <code>O(N)</code> because we can only get the tail of the list by rebuilding it from the end.</p>
<p>In Morte-compliant Agda (to get Morte, we need to desugar <code>let</code> style definitions to applications and function definitions to annotated lambdas):</p>
<pre><code>Pair : Set → Set → Set
Pair A B = ∀ P → (A → B → P) → P
pair : ∀ A B → A → B → Pair A B
pair A B a b P p = p a b
List : Set → Set
List = λ A → ∀ L → (A → L → L) → L → L
Maybe : Set → Set
Maybe A = ∀ M → (A → M) → M → M
just : ∀ A → A → Maybe A
just A a M j n = j a
nothing : ∀ A → Maybe A
nothing A M j n = n
nil : ∀ A → List A
nil A L c n = n
cons : ∀ A → A → List A → List A
cons A a as L c n = c a (as L c n)
match : ∀ A → List A → Maybe (Pair A (List A))
match A as =
as
(Maybe (Pair A (List A)))
(λ a m M j n →
m M
(λ p → p M (λ a' as → j (pair A (List A) a (cons A a' as))))
(j (pair A (List A) a (nil A))))
(nothing (Pair A (List A)))
zipWith : ∀ A B C → (A → B → C) → List A → List B → List C
zipWith A B C f as =
as
(List B → List C)
(λ a hyp bs → match B bs (List C)
(λ p → p (List C) (λ b bs' → cons C (f a b) (hyp bs')))
(nil C))
(λ _ → nil C)
</code></pre> | 2016-11-17 08:35:03.207000+00:00 | 2016-11-17 10:43:43.263000+00:00 | 2017-05-23 12:13:44.170000+00:00 | null | 40,645,730 | <p>This is an almost valid <code>zipWith</code> definition in Morte:</p>
<pre><code>zipWith
= λ (u : *)
-> λ (f : (u -> u -> u))
-> λ (a : (#List u))
-> λ (b : (#List u))
-> λ (List : *)
-> λ (cons : (u -> List -> List))
-> λ (nil : List)
-> ((λ (A:*) -> λ (B:*) ->
(a (B -> List)
(λ (h:u) -> λ (t : (B -> List) -> λ k : B -> (k h t)))
(λ (k:B) -> nil)
(b (u -> A -> List)
(λ (h:u) -> λ (t:(u -> A -> List)) -> λ (H:u) -> λ (k:A) -> (cons (f h H) (k t)))
(λ (H:u) -> λ (k:A) -> nil)))
) (fix A . ((u -> A -> List) -> List))
(fix B . (u -> (B -> List) -> List)))
</code></pre>
<p>It isn't actually typeable due to the use of <code>fix</code>, which Morte lacks. <a href="https://stackoverflow.com/a/29930561/477476">András posted this clever Agda solution without <code>fix</code></a> last year. It isn't obvious to me how it translates to Morte, though, because it also lacks inductive types. How to approach this problem? </p>
<p>Edit: seems like my zipWith was incorrect even with <code>fix</code>. <a href="http://lpaste.net/341877" rel="nofollow noreferrer">This one</a> seems to check, though.</p> | 2016-11-17 02:27:36.020000+00:00 | 2016-11-18 01:21:38.993000+00:00 | 2017-05-23 11:53:37.967000+00:00 | haskell|functional-programming|agda|dependent-type|morte | ['https://arxiv.org/pdf/1309.5135.pdf', 'https://stackoverflow.com/questions/29885983/infinite-type-error-when-defining-zip-with-foldr-only-can-it-be-fixed/29930561#29930561', 'https://github.com/AndrasKovacs/misc-stuff/blob/master/agda/Hyperfunction.agda'] | 3 |
3,932,298 | <p>You can find a description of a reasonably efficient algorithm for testing r.e. equality here:</p>
<p><a href="http://arxiv.org/PS_cache/arxiv/pdf/0907/0907.5058v1.pdf" rel="nofollow">http://arxiv.org/PS_cache/arxiv/pdf/0907/0907.5058v1.pdf</a></p>
<p>Dig through references of the article to find other solutions that may be less efficient, but easier to implement.</p> | 2010-10-14 10:44:34.897000+00:00 | 2010-10-14 10:44:34.897000+00:00 | null | null | 3,931,153 | <p>I'm trying to find out what the algorithm would be by being given two languages L1 and L2 to determine if they are equivalent (L1 = L2).</p>
<p>It's surprisingly difficult to come up with one, as I've found, although I am pretty sure each needs to be converted to a DFA first and then reduced to a minimal DFA.</p>
<p>Also, I know that if L1 - L2 and L2 - L1 are empty, then L1 = L2. </p>
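<p>That observation can be turned directly into an algorithm once both expressions are compiled to complete DFAs over the same alphabet: walk the product automaton and fail as soon as exactly one of the two current states is accepting. A small sketch (my own illustration; the regex-to-DFA construction itself is omitted):</p>
<pre><code>from collections import deque

def equivalent(dfa1, dfa2, alphabet):
    # each DFA is (start_state, transitions, accepting_states),
    # where transitions maps (state, symbol) -> state
    start = (dfa1[0], dfa2[0])
    seen, queue = {start}, deque([start])
    while queue:
        s1, s2 = queue.popleft()
        if (s1 in dfa1[2]) != (s2 in dfa2[2]):   # one accepts, the other rejects
            return False
        for a in alphabet:
            nxt = (dfa1[1][(s1, a)], dfa2[1][(s2, a)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True
</code></pre>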
<p>Anyone good with theory here?</p> | 2010-10-14 07:56:22.683000+00:00 | 2010-11-12 22:08:13.783000+00:00 | null | regex|theory|expression|equivalence | ['http://arxiv.org/PS_cache/arxiv/pdf/0907/0907.5058v1.pdf'] | 1 |
26,460,114 | <p>Both RNN and LSTM can be sequence learners. A plain RNN suffers from the vanishing gradient problem, which causes it to have trouble remembering values of past inputs after more than roughly 10 timesteps (an RNN can remember previously seen inputs for a few time steps only).</p>
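<p>For intuition about where a plain RNN "stores" past inputs: in its hidden state, which is updated at every time step. A minimal NumPy sketch (my own illustration, independent of any framework):</p>
<pre><code>import numpy as np

def rnn_forward(xs, Wxh, Whh, b, h0):
    # the hidden state h carries information about everything seen so far
    h, states = h0, []
    for x in xs:                       # xs is the input sequence
        h = np.tanh(Wxh @ x + Whh @ h + b)
        states.append(h)
    return states
</code></pre>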
<p>LSTM is designed to solve the vanishing gradient problem in RNNs. LSTM has the capability of bridging long time lags between inputs; in other words, it is able to remember inputs from up to 1000 time steps in the past (some papers even claim it can go beyond this). This capability makes LSTM advantageous for learning long sequences with long time lags. Refer to Alex Graves' Ph.D. thesis <a href="http://www.cs.toronto.edu/~graves/phd.pdf" rel="nofollow noreferrer">Supervised Sequence Labelling
with Recurrent Neural Networks</a> for some details. If you are new to LSTM, I recommend <a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/" rel="nofollow noreferrer">Colah's blog</a> for a super simple and easy explanation.</p>
<p>However, recent advances in RNN also claim that with careful initialization, RNN can also learn long sequences comparable to the performance of LSTM. <a href="http://arxiv.org/abs/1504.00941" rel="nofollow noreferrer">A Simple Way to Initialize Recurrent Networks of Rectified Linear Units</a>.</p> | 2014-10-20 07:05:24.540000+00:00 | 2018-01-04 17:41:14.250000+00:00 | 2018-01-04 17:41:14.250000+00:00 | null | 24,901,637 | <p>First, let me apologize for cramming three questions in that title. I'm not sure what better way is there.</p>
<p>I'll get right to it. I think I understand feedforward neural networks pretty well. </p>
<p>But LSTM really escapes me, and I feel maybe this is because I don't have a very good grasp of recurrent neural networks in general. I have gone through Hinton's and Andrew Ng's courses on Coursera. A lot of it still doesn't make sense to me.</p>
<p>From what I understood, recurrent neural networks are different from feedforward neural networks in that past values influence the next prediction. Recurrent neural networks are generally used for sequences.</p>
<p>The example I saw of recurrent neural network was binary addition.</p>
<pre><code> 010
+ 011
</code></pre>
<p>A recurrent neural network would take the right most 0 and 1 first, output a 1. Then take the 1,1 next, output a zero, and carry the 1. Take the next 0,0 and output a 1 because it carried the 1 from last calculation. Where does it store this 1? In feed forward networks the result is basically:</p>
<pre><code> y = a(w*x + b)
where w = weights of connections to previous layer
and x = activation values of previous layer or inputs
</code></pre>
<p>How is a recurrent neural network calculated? I am probably wrong, but from what I understood, recurrent neural networks are pretty much a feedforward neural network with T hidden layers, T being the number of timesteps. Each hidden layer takes the X input at timestep T, and its outputs are then added to the next respective hidden layer's inputs.</p>
<pre><code> a(l) = a(w*x + b + pa)
where l = current timestep
and x = value at current timestep
and w = weights of connections to input layer
and pa = past activation values of hidden layer
such that neuron i in layer l uses the output value of neuron i in layer l-1
y = o(w*a(l-1) + b)
where w = weights of connections to last hidden layer
</code></pre>
<p>But even if I understood this correctly, I don't see the advantage of doing this over simply using past values as inputs to a normal feedforward network (sliding window or whatever it's called).</p>
<p>For example, what is the advantage of using a recurrent neural network for binary addition instead of training a feedforward network with two output neurons, one for the binary result and the other for the carry, and then taking the carry output and plugging it back into the feedforward network?</p>
<p>However, I'm not sure how this is different from simply having past values as inputs in a feedforward model.</p>
<p>It seems to me that the more timesteps there are, recurrent neural networks are only a disadvantage over feedforward networks because of vanishing gradient. Which brings me to my second question, from what I understood, LSTM is a solution to the problem of vanishing gradient. But I have no actual grasp of how they work. Furthermore, are they simply better than recurrent neural networks, or are there sacrifices to using a LSTM?</p> | 2014-07-23 04:03:27.263000+00:00 | 2018-01-04 17:41:14.250000+00:00 | 2015-04-02 20:09:56.703000+00:00 | machine-learning|neural-network|sequence|lstm | ['http://www.cs.toronto.edu/~graves/phd.pdf', 'http://colah.github.io/posts/2015-08-Understanding-LSTMs/', 'http://arxiv.org/abs/1504.00941'] | 3 |
43,033,448 | <p>Without having access to sample data, it is kind of hard to recommend you a specific registration algorithm.</p>
<p>However, I'm pretty excited nowadays about all the new "data-driven" registration approaches.</p>
<p>From my personal experience, I'm having awesome registration results using the approach of this recent paper:</p>
<p><a href="https://arxiv.org/abs/1603.08182" rel="nofollow noreferrer">https://arxiv.org/abs/1603.08182</a></p>
<p>Which has source code available here:</p>
<p><a href="https://github.com/andyzeng/3dmatch-toolbox" rel="nofollow noreferrer">https://github.com/andyzeng/3dmatch-toolbox</a></p>
<p>As reported in the paper, it outperforms pcl-descriptor based registration approaches and I think that it may be suitable for your needs.</p> | 2017-03-26 19:24:58.293000+00:00 | 2017-03-26 19:24:58.293000+00:00 | null | null | 43,029,418 | <p>I have two point clouds, in 3d coordinates. One is a subset of the other, containing many less points. They are in the same scale.</p>
<p>What I need to do is find the translation and rotation between the two. I have looked at the Point Cloud Library, <a href="https://en.wikipedia.org/wiki/Iterative_closest_point" rel="nofollow noreferrer">"Iterative closest point"</a>, and <a href="https://github.com/gadomski/cpd" rel="nofollow noreferrer">Coherent Point Drift</a>, but these matching approaches both seem to expect the two point sets to contain mostly the same points, not have one be a smaller subset of the other. </p>
<p>Can I use either of these, with adjustments? Or is there another algorithm to match a subset point cloud to a set?</p>
<p>Thank you.</p> | 2017-03-26 13:35:02.903000+00:00 | 2017-03-26 19:24:58.293000+00:00 | null | c++|point-cloud-library|point-clouds | ['https://arxiv.org/abs/1603.08182', 'https://github.com/andyzeng/3dmatch-toolbox'] | 2 |
71,213,592 | <p>It is possible to do the inverse transform using the method in the paper you mentioned (<a href="https://arxiv.org/pdf/2011.10925.pdf" rel="nofollow noreferrer">Ghojogh et al. (2020)</a> - Sec. 2.4.1) and other papers too (e.g., <a href="http://dx.doi.org/10.1080/10618562.2014.918695" rel="nofollow noreferrer">Franz et al. (2014)</a> - Sec. 4.1). The idea is that you find the k-nearest neighbors in the embedded space, then express each point as a linear combination of its neighbors in the embedded space. Then keep the weights obtained and use the same weights to express each point in the original space as a combination of its k-nearest neighbors. Obviously the same number of neighbors should be used as in the original forward LLE.</p>
<p>The code would look something like this, using the <code>barycenter_kneighbors_graph</code> function:</p>
<pre><code>from sklearn.manifold._locally_linear import barycenter_kneighbors_graph
# calculate the weights for expressing each point in the embedded space as a linear combination of its neighbors
W = barycenter_kneighbors_graph(Y, n_neighbors = k, reg = 1e-3)
# reconstruct the data points in the high dimensional space from its neighbors using the weights calculated based on the embedded space
X_reconstructed = W @ X
</code></pre>
<p>where Y is the result of the original LLE embedding (this is X_train_lle in your code snippet), X is the original data matrix and k is the number of nearest neighbors.</p> | 2022-02-21 22:07:29.643000+00:00 | 2022-02-21 22:07:29.643000+00:00 | null | null | 69,123,836 | <p>How can i perform inverse locally linear embedding (LLE) using sklearn or other python packages?</p>
<p>I would like to perform classification machine learning algorithms (SVM, neural networks...) on some tabular data X with y being target class variable.</p>
<p>As usual the procedure is the following:</p>
<p>Splitting X and y into X_train, y_train, X_test, y_test. Since I have a large number of parameters (columns), I can reduce the number of parameters using LLE on X_train in order to obtain X_train_lle. y is a target variable and it does not undergo any transformation. After that, I can simply train a model on X_train_lle. The problem arises when I want to use the trained model on X_test. If LLE is performed on X_test together with X_train, it would introduce data leakage. Also, if LLE is performed solely on X_test, the new X_test_lle might be completely different since the algorithm is using k nearest neighbours. I guess that the correct procedure should be performing inverse LLE on X_test with the parameters obtained on X_train and then using the classification model on X_test_lle.</p>
<p>I've checked some references and the section 2.4.1 deals with inverse LLE. <a href="https://arxiv.org/pdf/2011.10925.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2011.10925.pdf</a></p>
<p>How does one do inverse LLE with python (preferably sklearn)?</p>
<p>Here is a code example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from sklearn import preprocessing
from sklearn import svm, datasets, model_selection  # model_selection is needed for train_test_split below
from sklearn.manifold import LocallyLinearEmbedding
### Generating dummy data
n_row = 10000 # these numbers are much bigger for the real problem
n_col = 50 #
X = np.random.random((n_row, n_col))  # np.random.random expects a shape tuple
y = np.random.randint(5, size=n_row) # five different classes labeled from 0 to 4
### Preprocessing ###
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size = 0.5, random_state = 1)
#standardization using StandardScaler applied to X_train and then scaling X_train and X_test
scaler = preprocessing.StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
### Here is the part with LLE ###
# We reduce the parameter space to 10 with 15 nearest neighbours
X_train_lle = LocallyLinearEmbedding(n_neighbors=15, n_components=10, method='modified', eigen_solver='dense').fit_transform(X_train)
### Here is the training part ###
# we want to apply SVM to transformed data X_train_lle
#Create a svm Classifier
clf = svm.SVC(kernel='linear') # Linear Kernel
#Train the model using the training sets
clf.fit(X_train_lle, y_train)
# Here should go the code to do inverse LLE on X_test
#i.e. where do values of X_test_lle fit in the manifold X_train_lle
### After the previous part of the code was successfully solved by stackoverflow community :)
#Predict the response for test dataset
y_pred = clf.predict(X_test_lle)
</code></pre> | 2021-09-09 19:35:20.293000+00:00 | 2022-02-21 22:07:29.643000+00:00 | 2021-09-10 07:50:55.200000+00:00 | python-3.x|machine-learning|scikit-learn | ['https://arxiv.org/pdf/2011.10925.pdf', 'http://dx.doi.org/10.1080/10618562.2014.918695'] | 2 |
70,632,510 | <p>I spent a lot of time with this issue and finally reached a solution for embedding PDFs in HTML files, also inspired by this post. You mentioned that "converting the PDF to images and then sliding them in a div" was not satisfactory due to quality problems. Here I experienced the same since the images were blurry.</p>
<p>However, I tried converting the images to SVG instead of PNG and the situation was a different one: The fonts were crystal-clear when embedding the image like below:</p>
<pre><code><object type="image/svg+xml" data="https://ik.imagekit.io/nudvztcu8my/pdf2svg_example_Ft2FQgqWaG.svg">
<!-- Your fall back here -->
<img src="https://ik.imagekit.io/nudvztcu8my/pdf2svg_example_Ft2FQgqWaG.svg" />
</object>
</code></pre>
<p>You can directly paste that snippet into an HTML file and you will see the result. For producing this example I used a random <a href="https://arxiv.org/abs/2201.01851" rel="nofollow noreferrer">PDF from ArXiv.org</a> and <a href="https://converter.app/pdf-to-svg/" rel="nofollow noreferrer">converted it to SVG</a> using an online converter.</p>
<p>There are also free command line tools like <a href="https://stackoverflow.com/questions/4120567/convert-pdf-to-svg">pdf2svg</a> or commercial APIs like Aspose, and it is probably worth examining which approach gives the best results.</p>
<p>You can easily build a slider which is loading the SVG images dynamically and it is even possible to scale them to different viewports due to the vector character of the SVG images. The approach so far worked for all PDFs I tried but probably it is recommendable to implement a fallback solution still using PNGs.</p> | 2022-01-08 12:56:45.633000+00:00 | 2022-01-08 12:56:45.633000+00:00 | null | null | 53,739,984 | <p>I need to embed a PDF file in an HTML page for the users to see it on every major device.
<strong>Most of the approaches work fine on desktop but they start to show problems on iPad devices.</strong> The PDFs are no longer scrollable if placed inside an iframe or embed tag.</p>
<p>I used the following techniques to overcome the problem:</p>
<p>1) Using <a href="https://www.npmjs.com/package/pdf-image" rel="nofollow noreferrer">pdf-image</a> for node and converting the PDF to images and then sliding them in a div.
The problem in this approach is that the image quality gets degraded and is not suitable for viewing on Web.</p>
<p>2) Using <a href="https://mozilla.github.io/pdf.js/" rel="nofollow noreferrer">PDF.js</a> by Mozilla
It works fine on every device but it <strong>makes the page extremely slow and unresponsive</strong> on iPad</p>
<p>3) Using Google PDF viewer</p>
<pre><code><iframe src="https://docs.google.com/viewer?url=http://public-Url-of-pdf.pdf&embedded=true" frameborder="0" height="500px" width="100%"></iframe>
</code></pre>
<p>The <strong>problem with this approach is that I need to make my PDFs publicly available</strong> which I don't want to do for security reasons.</p>
<p>None of the above method is working for me. Is there any solution available to embed PDF in a page which works on iPad also.</p>
<p>One of my colleagues told me about using <strong>LibreOffice (OpenOffice) headless</strong> to embed PDFs in my page, but I cannot find any documentation about its usage.</p>
<p>Can anyone please help? :( </p>
<p>Thanks in advance!</p> | 2018-12-12 09:34:36.847000+00:00 | 2022-01-08 12:56:45.633000+00:00 | 2018-12-12 09:43:08.923000+00:00 | javascript|html|node.js|pdf|openoffice.org | ['https://arxiv.org/abs/2201.01851', 'https://converter.app/pdf-to-svg/', 'https://stackoverflow.com/questions/4120567/convert-pdf-to-svg'] | 3 |
70,813,939 | <p>For others' reference, if you are using schemas (e.g. Postgres) you might also have to specify it:</p>
<pre class="lang-py prettyprint-override"><code># Assumes tables live in schema `my_schema`
arxiv_id = db.Column(db.String(1000), db.ForeignKey('my_schema.papers.arxiv_id'))
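
# A minimal sketch (hypothetical model, not from the question): the referenced model
# can declare the same schema, so the string above matches where the table really lives:
#
# class Papers(db.Model, Paper):
#     __tablename__ = 'papers'
#     __table_args__ = {'schema': 'my_schema'}
#     arxiv_id = db.Column(db.String(1000), primary_key=True)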
</code></pre> | 2022-01-22 14:44:58.627000+00:00 | 2022-01-22 14:44:58.627000+00:00 | null | null | 48,473,140 | <p>I get the following error in my sqlalchemy schema</p>
<pre><code>python manage.py db migrate
...
sqlalchemy.exc.NoReferencedTableError: Foreign key associated with column
'ArxivPaperFigure.arxiv_id' could not find table 'papers' with which to
generate a foreign key to target column 'arxiv_id'
</code></pre>
<p>I do not understand this error since the table papers is not new but existed before. I made a lot of changes, including giving the papers table a new table to inherit from, but the primary key in papers is still the same arxiv_id. Here is my papers table </p>
<pre><code>class Papers(db.Model, Paper):
arxiv_id = db.Column(db.String(1000), primary_key=True)
...
</code></pre>
<p>and the table which points to this table is </p>
<pre><code>class ArxivPaperFigure(db.Model):
__tablename__ = 'ArxivPaperFigure'
id = db.Column(db.Integer, primary_key=True)
arxiv_id = db.Column(db.String(1000), db.ForeignKey('papers.arxiv_id'))
</code></pre>
<p>I can solve the issue by re-writing the Foreign key</p>
<pre><code>arxiv_id = db.Column(db.String(1000), db.ForeignKey(Papers.arxiv_id))
</code></pre>
<p>However, I have a lot of foreign keys and for some tables which also have these foreign keys this solution does not work. So I would like to understand why this error appears? The papers table does exist. If I use psql and print all tables with \d I find the line</p>
<pre><code>public | papers | table | user
</code></pre>
<p>so why does this reference not work anymore?</p> | 2018-01-27 06:11:03.863000+00:00 | 2022-01-22 14:44:58.627000+00:00 | null | python|sql|database|postgresql|sqlalchemy | [] | 0 |
7,462,801 | <p>There is a nice algorithm from Donald Knuth: <a href="http://en.wikipedia.org/wiki/Knuth%27s_Algorithm_X" rel="nofollow">Algorithm X</a>, <a href="http://en.wikipedia.org/wiki/Dancing_Links" rel="nofollow">Dancing Links</a></p>
<p>As far as I know, it is one of the fastest algorithms for solving Sudoku.</p>
<p>And here is quite readable and through paper with nice pictures: <a href="http://arxiv.org/abs/cs/0011047" rel="nofollow">http://arxiv.org/abs/cs/0011047</a></p> | 2011-09-18 16:27:23.497000+00:00 | 2011-09-18 16:27:23.497000+00:00 | null | null | 7,454,722 | <p>I was looking at the problem <a href="http://acm.timus.ru/problem.aspx?space=51&num=8" rel="nofollow">Magic Square</a>
I am sure with some loop and if condition this problem can be solve, but I am interested to know if there is any know algorithm / datastructure to solve this problem. I am not interested in exact solution, but any hint toward algorithm/datastructure would help.</p> | 2011-09-17 12:27:30.243000+00:00 | 2011-09-18 16:27:23.497000+00:00 | 2011-09-17 17:19:00.190000+00:00 | algorithm|data-structures|puzzle | ['http://en.wikipedia.org/wiki/Knuth%27s_Algorithm_X', 'http://en.wikipedia.org/wiki/Dancing_Links', 'http://arxiv.org/abs/cs/0011047'] | 3 |
2,394,299 | <p> (This is a Wiki! <strong>Please edit here</strong> with corrections or enhancements)</p>
<p><strong>For better results on not-so-big strings:</strong></p>
<p>There are problems with the direct use of the NCD formula on strings or short texts: NCD(X,X) is not zero (!). To remove this artifact, subtract the self-comparison.</p>
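<p>For reference, the plain (uncorrected) NCD is only a few lines in any language that exposes a compressor. A minimal Python sketch using zlib (just the textbook formula, not the corrected version implemented in PHP below):</p>

<pre><code>import zlib

def ncd(x: bytes, y: bytes) -> float:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

print(ncd(b"the quick brown fox", b"the quick brown fox"))  # not exactly 0: the artifact
</code></pre>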
<p>See similar_NCD_gzip() demo at <a href="http://leis.saocarlos.sp.gov.br/SIMILAR.php" rel="nofollow noreferrer">http://leis.saocarlos.sp.gov.br/SIMILAR.php</a></p>
<pre><code>function similar_NCD_gzip($sx, $sy, $prec=0, $MAXLEN=90000) {
   # NCD with gzip artifact correction and percentage return.
   # sx,sy = strings to compare.
   # Use $prec=-1 for result range [0-1], $prec=0 for percentage,
   # $prec=1 or =2,3... for better precision (not reliable).
   # Use $MAXLEN=-1 or an approx. compressed length.
# For NCD definition see http://arxiv.org/abs/0809.2553
# (c) Krauss (2010).
$x = $min = strlen(gzcompress($sx));
$y = $max = strlen(gzcompress($sy));
$xy= strlen(gzcompress($sx.$sy));
$a = $sx;
if ($x>$y) { # swap min/max
$min = $y;
$max = $x;
$a = $sy;
}
$res = ($xy-$min)/$max; # NCD definition.
# Optional correction (for little strings):
if ($MAXLEN<0 || $xy<$MAXLEN) {
$aa= strlen(gzcompress($a.$a));
$ref = ($aa-$min)/$min;
$res = $res - $ref; # correction
}
return ($prec<0)? $res: 100*round($res,2+$prec);
}
</code></pre> | 2010-03-06 22:19:05.093000+00:00 | 2021-02-21 15:35:54.160000+00:00 | 2021-02-21 15:35:54.160000+00:00 | null | 1,085,048 | <p>First, please note, that I am interested in how something like this would work, and am not intending to build it for a client etc, as I'm sure there may already be open source implementations.</p>
<p>How do the algorithms work which detect plagiarism in uploaded text? Does it use regex to send all words to an index, strip out known words like 'the', 'a', etc., and then see how many words are the same in different essays? Does it then have a magic number of identical words which flags it as a possible duplicate? Does it use <a href="http://us2.php.net/manual/en/function.levenshtein.php" rel="noreferrer">levenshtein()</a>? </p>
<p>My language of choice is PHP.</p>
<p><strong>UPDATE</strong></p>
<p>I'm thinking of not checking for plagiarism globally, but more say in 30 uploaded essays from a class. In case students have gotten together on a strictly one person assignment.</p>
<p>Here is an online site that claims to do so: <a href="http://www.plagiarism.org/" rel="noreferrer">http://www.plagiarism.org/</a></p> | 2009-07-05 23:21:59.393000+00:00 | 2021-12-05 08:14:31.590000+00:00 | 2013-12-06 01:35:58.183000+00:00 | php|theory | ['http://leis.saocarlos.sp.gov.br/SIMILAR.php'] | 1 |
58,732,327 | <p>If I understand correctly, you are asking why the label associated with cb_explore is a set of action/probability pairs.</p>
<p>The probability of the label action is used as an importance weight for training. This has the effect of amplifying the updates for actions that are played less frequently, making them less likely to be drowned out by actions played more frequently. </p>
<p>As well, this type of label is very useful during predict-time, because it generates a log that can be used to perform unbiased counterfactual analyses. In other words, by logging the probability of playing each of the actions before sampling (see cb_sample - this implements how a single action/probability vector is sampled, as for example in the ccb reduction: <a href="https://github.com/VowpalWabbit/vowpal_wabbit/blob/master/vowpalwabbit/cb_sample.cc#L37" rel="nofollow noreferrer">https://github.com/VowpalWabbit/vowpal_wabbit/blob/master/vowpalwabbit/cb_sample.cc#L37</a>), we can then use the log to train another policy, and compare how it performs against the original.</p>
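<p>To make the counterfactual part concrete, here is a minimal sketch (plain Python, not VW's API) of the standard inverse-propensity-scoring estimator over such a log of (context, action, cost, probability) records; <code>target_policy</code> is a hypothetical function returning the probability the new policy would play the logged action:</p>

<pre><code>def ips_estimate(logged, target_policy):
    # logged: list of (context, action, observed_cost, prob_logger_played_action)
    total = 0.0
    for context, action, cost, prob in logged:
        # Re-weight each observed cost by how likely the new policy is to take the
        # same action, divided by the probability the logging policy actually used.
        total += cost * target_policy(context, action) / prob
    return total / len(logged)
</code></pre>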
<p>See the "A Multi-World Testing Decision Service" paper to describe the mechanism to do unbiased offline experimentation with logged data: <a href="https://arxiv.org/pdf/1606.03966v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1606.03966v1.pdf</a></p> | 2019-11-06 14:25:46.573000+00:00 | 2019-11-15 20:35:36.880000+00:00 | 2019-11-15 20:35:36.880000+00:00 | null | 58,385,113 | <p>The <code>cb_explore</code> input format requires specifying action:cost:action_probability for each example.
However the cb algorithms within are already trying to learn the optimal policy i.e. probability for each action from the data. Then, why does it need the probability of each action in the input? Is it just for initialization?</p> | 2019-10-14 23:06:54.213000+00:00 | 2019-11-15 20:35:36.880000+00:00 | null | vowpalwabbit | ['https://github.com/VowpalWabbit/vowpal_wabbit/blob/master/vowpalwabbit/cb_sample.cc#L37', 'https://arxiv.org/pdf/1606.03966v1.pdf'] | 2 |
58,473,999 | <p>The best way to get the distribution that Chao's algorithm produces is to implement VarOpt<sub>k</sub> sampling as in the pseudocode labeled Algorithm 1 from <a href="https://arxiv.org/abs/0803.0473" rel="nofollow noreferrer">the paper that introduced VarOpt<sub>k</sub> sampling</a> by Cohen et al.</p>
<p>That's an arXiv link and hence very stable, but to summarize, the idea is to separate the items into "heavy" (weight high enough to guarantee inclusion in the sample so far) and "light" (the others). Keep the heavy items in a priority queue where it is easy to remove the lightest of them. When a new item comes in, we have to determine whether it is heavy or light, and which heavy items became light (if any). Then there's a sampling procedure for dropping an item that treats the heavy → light items specially using weighted sampling and then falls back to choosing a uniform random light item (as in the easy case of Chao's algorithm).</p>
<p>The one trick with the pseudocode is that, if you use floating-point arithmetic, you have to be a little careful about "impossible" cases. Post your finished code on Code Review and ping me here if you would like feedback.</p> | 2019-10-20 14:29:06.583000+00:00 | 2019-10-20 14:29:06.583000+00:00 | null | null | 58,310,136 | <p>I am trying to implement A-Chao version of weighted reservoir sampling as shown in <a href="https://en.wikipedia.org/wiki/Reservoir_sampling#Algorithm_A-Chao" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Reservoir_sampling#Algorithm_A-Chao</a></p>
<p>But I found that the pseudo-code described in the wiki seems to be wrong, especially in the initialization part. I read the <a href="https://www.jstor.org/stable/2336002?seq=1#metadata_info_tab_contents" rel="nofollow noreferrer">paper</a>; it mentions we need to handle over-weighted data points, but I still cannot figure out how to initialize correctly.</p>
<p>In my understanding, at the initialization step, we want to make sure all initially chosen data points have the same probability*weight of being chosen. However, I don't understand how the over-weighted points are related to that.</p>
<p>Here is the code I implemented according to the wiki, but the results show it is incorrect.</p>
<pre><code>const reservoirSampling = <T>(dataList: T[], k: number, getWeight: (point: T) => number): T[] => {
const sampledList = dataList.slice(0, k);
let currentWeightSum: number = sampledList.reduce((sum, item) => sum + getWeight(item), 0);
for (let i = k; i < dataList.length; i++) {
const currentItem = dataList[i];
currentWeightSum += getWeight(currentItem);
const probOfChoosingCurrentItem = getWeight(currentItem) / currentWeightSum;
const rand = Math.random();
if (rand <= probOfChoosingCurrentItem) {
sampledList[getRandomInt(0, k - 1)] = currentItem;
}
}
return sampledList;
};
</code></pre> | 2019-10-09 18:25:34.623000+00:00 | 2021-03-21 04:22:52.963000+00:00 | 2021-03-21 04:22:52.963000+00:00 | algorithm|sampling|reservoir-sampling | ['https://arxiv.org/abs/0803.0473'] | 1 |
33,970,201 | <p>There are many different strategies for dealing with locking in B-Trees in general; most of these actually deal with B+Trees and its variations since they have been dominating the field for decades. Summarising these strategies would be tantamount to summarising the progress of four decades; it's virtually impossible. Here are some highlights.</p>
<p>One strategy for minimising the amount of locking during initial descent is to lock not the whole path starting from the root, but only the sub-path beginning at the last 'stable' node (i.e. a node that won't split or merge as a result of the currently planned operation).</p>
<p>Another strategy is to assume that no split or merge will happen, which is true most of the time anyway. This means the descent can be done by locking only the current node and the child node one will descend into next, then release the lock on the previously 'current' node and so on. If it turns out that a split or merge is necessary after all then re-descend from the root under a heavier locking regime (i.e. path rooted at last stable node).</p>
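<p>A minimal sketch of that optimistic descent (often called "lock coupling" or "crabbing"), assuming hypothetical node objects with a <code>lock</code>, an <code>is_leaf</code> flag and a <code>child_for(key)</code> lookup:</p>

<pre><code>def descend_to_leaf(root, key):
    # Hold at most two locks at any time: lock the child before releasing the parent.
    node = root
    node.lock.acquire()
    while not node.is_leaf:
        child = node.child_for(key)
        child.lock.acquire()
        node.lock.release()
        node = child
    return node  # returned still locked; the caller releases it after its read/insert attempt
</code></pre>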
<p>Another staple in the bag of tricks is to ensure that each node 'descended through' is stable by preventative splitting/merging; that is, when the current node would split or merge under a change bubbling up from below then it gets split/merged right away before continuing the descent. This can simplify operations (including locking) and it is somewhat popular in reinventions of the wheel - homework assignments and 'me too' implementations, rather than sophisticated production-grade systems.</p>
<p>Some strategies allow most normal operations to be performed without any locking at all but usually they require that the standard B+Tree structure be slightly modified; see <a href="https://arxiv.org/pdf/1009.2764" rel="nofollow">B-link trees</a> for example. This means that different concurrent threads operating on the tree can 'see' different <strong>physical</strong> views of this tree - depending on when they got where and followed which link - but they all see the same <strong>logical</strong> view.</p>
<p>Seminal papers and good overviews:</p>
<ul>
<li><a href="http://www.csd.uoc.gr/~hy460/pdf/p650-lehman.pdf" rel="nofollow">Efficient Locking for Concurrent Operations on B-Trees (Lehman/Yao 1981)</a></li>
<li><a href="http://www.sciencedirect.com/science/article/pii/0022000086900218/pdf?md5=fafabf86d6f6aced3c490eacb5d30d46&pid=1-s2.0-0022000086900218-main.pdf" rel="nofollow">Concurrent Operations on B*-Trees with Overtaking (Sagiv 1986)</a></li>
<li><a href="http://www.hpl.hp.com/techreports/2010/HPL-2010-9.pdf" rel="nofollow">A survey of B-tree locking techniques (Graefe 2010)</a></li>
<li><a href="https://web.stanford.edu/class/cs346/2015/notes/Blink.pptx" rel="nofollow">B+Tree Locking (slides from Stanford U, including Blink trees)</a></li>
<li><a href="https://arxiv.org/pdf/1009.2764" rel="nofollow">A Blink Tree method and latch protocol for synchronous deletion in a high concurreny environment (Malbrain 2010)</a></li>
<li><a href="https://www.cs.technion.ac.il/~anastas/lfbtree-spaa.pdf" rel="nofollow">A Lock-Free B+Tree (Braginsky/Petrank 2012)</a></li>
</ul> | 2015-11-28 10:20:34.337000+00:00 | 2015-11-28 10:49:25.873000+00:00 | 2015-11-28 10:49:25.873000+00:00 | null | 33,969,301 | <p>I am trying to figure out how to insert an item into a B+ tree using locks and don't really understand the theory behind it.</p>
<p>So for searching, my view is that I put a lock on the root node, and then decide which child node I should go to and lock it, at this point I can release the parent node and continue this operation until I reach the leaf node.</p>
<p>But inserting is a lot more complicated because I can't allow any other threads to interfere with the insertion. My idea is to put a lock on each node along the path to the leaf node, but putting that many locks is quite expensive, and then the question I have is what happens when the leaf node splits because it is too large?</p>
<p>Does anyone know how to properly insert an item into a B+ tree using locks?</p> | 2015-11-28 08:29:48.803000+00:00 | 2015-11-28 10:49:25.873000+00:00 | null | database|multithreading|b-tree | ['https://arxiv.org/pdf/1009.2764', 'http://www.csd.uoc.gr/~hy460/pdf/p650-lehman.pdf', 'http://www.sciencedirect.com/science/article/pii/0022000086900218/pdf?md5=fafabf86d6f6aced3c490eacb5d30d46&pid=1-s2.0-0022000086900218-main.pdf', 'http://www.hpl.hp.com/techreports/2010/HPL-2010-9.pdf', 'https://web.stanford.edu/class/cs346/2015/notes/Blink.pptx', 'https://arxiv.org/pdf/1009.2764', 'https://www.cs.technion.ac.il/~anastas/lfbtree-spaa.pdf'] | 7 |
53,179,982 | <p>In case it is still relevant: I needed to solve this recently. You can paste the code below into a Jupyter notebook to see how it works.</p>
<pre><code>%matplotlib inline
import numpy as np
from skimage.io import imshow
from skimage.measure import label
from scipy.ndimage.morphology import distance_transform_edt
def generate_random_circles(n = 100, d = 256):
circles = np.random.randint(0, d, (n, 3))
x = np.zeros((d, d), dtype=int)
f = lambda x, y: ((x - x0)**2 + (y - y0)**2) <= (r/d*10)**2
for x0, y0, r in circles:
x += np.fromfunction(f, x.shape)
x = np.clip(x, 0, 1)
return x
def unet_weight_map(y, wc=None, w0 = 10, sigma = 5):
"""
Generate weight maps as specified in the U-Net paper
for boolean mask.
"U-Net: Convolutional Networks for Biomedical Image Segmentation"
https://arxiv.org/pdf/1505.04597.pdf
Parameters
----------
mask: Numpy array
2D array of shape (image_height, image_width) representing binary mask
of objects.
wc: dict
Dictionary of weight classes.
w0: int
Border weight parameter.
sigma: int
Border width parameter.
Returns
-------
Numpy array
Training weights. A 2D array of shape (image_height, image_width).
"""
labels = label(y)
no_labels = labels == 0
label_ids = sorted(np.unique(labels))[1:]
if len(label_ids) > 1:
distances = np.zeros((y.shape[0], y.shape[1], len(label_ids)))
for i, label_id in enumerate(label_ids):
distances[:,:,i] = distance_transform_edt(labels != label_id)
distances = np.sort(distances, axis=2)
d1 = distances[:,:,0]
d2 = distances[:,:,1]
w = w0 * np.exp(-1/2*((d1 + d2) / sigma)**2) * no_labels
else:
w = np.zeros_like(y)
if wc:
class_weights = np.zeros_like(y)
for k, v in wc.items():
class_weights[y == k] = v
w = w + class_weights
return w
y = generate_random_circles()
wc = {
0: 1, # background
1: 5 # objects
}
w = unet_weight_map(y, wc)
imshow(w)
</code></pre> | 2018-11-06 21:05:24.153000+00:00 | 2019-11-18 08:38:20.773000+00:00 | 2019-11-18 08:38:20.773000+00:00 | null | 50,255,438 | <p>I am currently using a modified version of the U-Net (<a href="https://arxiv.org/pdf/1505.04597.pdf" rel="noreferrer">https://arxiv.org/pdf/1505.04597.pdf</a>) to segment cell organelles in microscopy images. Since I am using Keras, I took the code from <a href="https://github.com/zhixuhao/unet" rel="noreferrer">https://github.com/zhixuhao/unet</a>. However, in this version no weight map is implemented to force the network to learn the border pixels. </p>
<p>The results that I have obtained so far are quite good, but the network fails to separate objects that are close to each other. So I want to try and make use of the weight map mentioned in the paper. I have been able to generate the weight map (based on the given formula) for each label image, but I was unable to find out how to use this weight map to train my network and thus solve the above mentioned problem. </p>
<p>Do weight maps and label images have to be combined somehow or is there a Keras function that will allow me to make use of the weight maps? I am Biologist, who only recently started to work with neural networks, so my understanding is still limited. Any help or advice would be greatly appreciated.</p> | 2018-05-09 14:05:23.520000+00:00 | 2022-08-03 20:11:00.173000+00:00 | 2022-08-03 20:11:00.173000+00:00 | python|keras|conv-neural-network|image-segmentation | [] | 0 |
59,392,826 | <p>You can definitely concatenate an embedding with other features and feed this as an input to a model.</p>
<p>In the speech domain this was done for example in <a href="https://arxiv.org/pdf/1908.04284.pdf" rel="nofollow noreferrer">Personal VAD</a> where a speaker's embedding was concatenated with other features describing speech to determine whether the target speaker is speaking in the given audio. </p>
<p>I am pretty sure the same approach can and has been applied to other machine learning application fields than speech (pretty sure I have seen it in NLP but can't come up with any paper right now).</p>
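<p>As a minimal sketch of the concatenation itself (assuming a dict-like word-vector model <code>w2v</code>; the names are placeholders, not a particular library's API):</p>

<pre><code>import numpy as np

def doc_vector(tokens, w2v, dim=300):
    # Average the vectors of the tokens we actually have embeddings for.
    vecs = [w2v[t] for t in tokens if t in w2v]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# X_other: (n_samples, n_handcrafted) features, X_emb: (n_samples, dim) averaged embeddings
# X = np.hstack([X_other, X_emb])  # one feature matrix fed to the model
</code></pre>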
<p>In the end of the day you give your model just extra information. If that information is useless then ideally your model would figure that out and set the corresponding weights to zero. However, in reality you are just making the training task more complex and might end up with a worse model (if the features are not that useful or the model is not complex enough to capture the relation between input and output).</p>
<p>Either way, machine learning (especially deep learning) is partly (or largely) trial and error. Not everything is theoretically well established to the point where someone will tell you that "this will work". If your model can figure out the relation between input and output and learn the appropriate mapping function depends on your dataset, model and choices you make for your training settings. Try it out yourself and see whether it works for you. </p> | 2019-12-18 13:10:31.870000+00:00 | 2019-12-18 13:10:31.870000+00:00 | null | null | 59,391,569 | <p>My task was to create a classifier model for a review dataset. I have 15000 train observations, 5000 dev and 5000 test. </p>
<p>The task specified that 3 features needed to be used: I used <code>TFIDF</code> (5000 features there), <code>BOW</code> (2000 more features) and the <code>review length</code> (1 more feature). So, for example, my X_train is an array shaped (15000,7001).</p>
<p>I was investigating and I found that word embedding (<code>word2vec</code> especially) could be a nice alternative to BOW.
My question is, can it be used (can it be put in the same "array" format as my other features?) in addition to my other features? </p>
<p>I did some research on it but didn't quite answered my question. </p> | 2019-12-18 11:58:47.347000+00:00 | 2019-12-18 15:16:18.430000+00:00 | null | python|machine-learning|scikit-learn|text-classification|sklearn-pandas | ['https://arxiv.org/pdf/1908.04284.pdf'] | 1 |
59,504,596 | <p>Solving SAT does not solve all NP problems. It solves all NP problems that can be reduced to SAT by a P complexity algorithm. There exist problems that do not have a P complexity reduction to SAT. Some are reducible to SAT by NP reductions. Some of those by NP reductions for which the reduction algorithm has a P complexity reduction to SAT.</p>
<p>SAT has a P complexity solution. See arxiv.org/abs/cs/0205064.</p> | 2019-12-27 18:18:39.650000+00:00 | 2019-12-27 18:18:39.650000+00:00 | null | null | 29,910,310 | <p>Based on the below link , I can know that solving of Satisfiability(NP Complete) in polynomial time means any other NP problem can be solved in polynomial time.
But is Vice - Versa true?</p>
<p>Also, If there is a polynimial for any other NP-Complete problemt does it mean , all the other NP-Complete can be solved in polynomial time?</p>
<p><a href="https://stackoverflow.com/questions/1857244/np-vs-np-complete-vs-np-hard-what-does-it-all-mean">What are the differences between NP, NP-Complete and NP-Hard?</a></p> | 2015-04-28 04:24:58.160000+00:00 | 2019-12-27 18:18:39.650000+00:00 | 2017-05-23 12:22:13.193000+00:00 | algorithm|complexity-theory|theory | [] | 0 |
58,475,927 | <p><a href="https://arxiv.org/abs/1706.03762" rel="nofollow noreferrer">All you need is attention</a> will the solution you are looking for. The mechanism will help you to focus on certain features/blocks in your inputs. There is a <a href="https://medium.com/@moshnoi2000/all-you-need-is-attention-computer-vision-edition-dbe7538330a4" rel="nofollow noreferrer">medium post</a> that gives a tutorial of how to apply this mechanism to images. </p> | 2019-10-20 18:18:10.937000+00:00 | 2019-10-20 18:18:10.937000+00:00 | null | null | 58,475,817 | <p>I am looking to predict ocean current vorticity using kinetic energy and sea surface temperature. My data consists of satellite kinetic energy readings and surface temperature readings in the gulf stream region. I plan to use a hybrid neural network which combines a recurrent architecture (LSTM) with a convolutional network model.</p>
<p>My dataset consists of daily kinetic energy and temperature readings from 1996 to 2018, for a total of 8036x80x120 grids. For example, given the <a href="https://i.stack.imgur.com/uOhlS.jpg" rel="nofollow noreferrer">kinetic energy</a>, and <a href="https://i.stack.imgur.com/PVTdo.png" rel="nofollow noreferrer">temperature</a>, I want the NN to predict the <a href="https://i.stack.imgur.com/z9snd.jpg" rel="nofollow noreferrer">vorticity</a>.</p>
<p>My question is, how can I influence my neural network to ignore/discount land terrain and only focus on the ocean data. The land terrain data is stored in my image arrays as NaN values.</p>
<p>I am using PyTorch. </p> | 2019-10-20 18:08:00.703000+00:00 | 2019-10-20 18:18:10.937000+00:00 | null | python|neural-network|deep-learning|conv-neural-network|satellite-image | ['https://arxiv.org/abs/1706.03762', 'https://medium.com/@moshnoi2000/all-you-need-is-attention-computer-vision-edition-dbe7538330a4'] | 2 |
44,321,300 | <p>Given your problem setup, it looks like you're trying to train a neural network on one image such that it is able to predict the blue channel from the other two channels. Putting aside the use of such an experiment, there are a few important things for training neural networks properly, including:</p>
<ol>
<li>learning rate</li>
<li>weight initialization</li>
<li>optimizer</li>
<li>model complexity.</li>
</ol>
<p><a href="http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf" rel="nofollow noreferrer">Yann Lecun's Efficient backprop</a> is a late 90s paper that talks about numbers 1, 2 and 3. Number 4 holds on the assumption that as the number of free parameters increase, at some point you'll be able to match each parameter to each output. </p>
<p>Note that achieving zero-loss provides no guarantees on generalization nor does it mean that your model will not generalize, as brilliantly described in a <a href="https://arxiv.org/abs/1611.03530" rel="nofollow noreferrer">paper presented at ICLR</a>.</p> | 2017-06-02 05:17:16.757000+00:00 | 2017-06-02 05:17:16.757000+00:00 | null | null | 44,320,706 | <p>I am trying to estimate the third band(Blue) in an RGB image using convolutional neural networks. my design using Keras is a sequentiol model with a convolution2D layer as input layer two hidden layers and output neuron. if i want loss(rmse) to be zero how should i change my model?</p>
<p>my model in python goes like this</p>
<pre><code>import numpy as np
import keras
import skimage.io

in_image = skimage.io.imread('test.jpg')[0:50,0:50,:].astype(float)
data = in_image[:,:,0:2]
target = in_image[:,:,2:3]
model1 = keras.models.Sequential()
model1.add(keras.layers.Convolution2D(50,(3,3),strides = (1,1),padding = "same",input_shape=(None,None,2))) #Convolution Layer
model1.add(keras.layers.Dense(50,activation = 'relu')) # Hidden Layer 1
model1.add(keras.layers.Dense(50,activation = 'sigmoid')) # Hidden Layer 2
model1.add(keras.layers.Dense(1)) # Output Layer
adadelta = keras.optimizers.Adadelta(lr=1.0, rho=0.95, epsilon=1e-08, decay=0.0)
model1.compile(loss='mean_squared_error', optimizer=adadelta) # Compile the model
model1.fit(np.array([data]),np.array([target]),epochs = 5000)
estimated_band = model1.predict(np.array([data]))
</code></pre> | 2017-06-02 04:18:36.157000+00:00 | 2017-06-02 05:17:16.757000+00:00 | null | python|image-processing|machine-learning|keras|conv-neural-network | ['http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf', 'https://arxiv.org/abs/1611.03530'] | 2 |
47,798,260 | <p>The attention mechanism in the context of an encoder-decoder means that the decoder at each time step "attends" to the "useful" parts of the encoder. This is implemented as, for example, averaging the encoder's outputs and feeding that value (called the context) into the decoder at a given time step.</p>
<p><code>dynamic_rnn</code> computes outputs of LSTM cells across all time steps and gives you the final value. So, there is no way to tell the model that the cell state at time step <code>t</code> should depend not only on the output of the previous cell and input, but also on additional information such as <strong>context</strong>. You can control computation at each time step of encoder or decoder LSTM using <code>raw_rnn</code>.</p>
<p>If I understand correctly, in this tutorial the author feeds ground truth input as input to the decoder at each time step. However, this is not the usual way it is done. Usually, you want to feed the output of decoder at time <code>t</code> as input to decoder at time <code>t+1</code>. In short, the input to the decoder at each time step is variable, whereas in <code>dynamic_rnn</code> it is predefined.</p>
<p>Refer to for more technical details: <a href="https://arxiv.org/abs/1409.0473" rel="nofollow noreferrer">https://arxiv.org/abs/1409.0473</a></p> | 2017-12-13 16:46:24.507000+00:00 | 2017-12-13 16:46:24.507000+00:00 | null | null | 44,271,930 | <p>I went through <a href="https://github.com/ematvey/tensorflow-seq2seq-tutorials/blob/master/1-seq2seq.ipynb" rel="nofollow noreferrer">this tutorial</a> . In the last block it says that dynamic_rnn function cannot apply to calculate attention. But what I don't understand is all we need is the hidden state of the decoder in order to find the attention which will be work out with encoder symbols. </p> | 2017-05-30 21:09:34.863000+00:00 | 2017-12-13 16:46:24.507000+00:00 | null | tensorflow|deep-learning | ['https://arxiv.org/abs/1409.0473'] | 1 |
37,524,120 | <p>The question isn't quite right. </p>
<p>Kolmogorov complexity K(x) doesn't apply to programs, it applies to a string x.
More specifically, the Kolmogorov complexity of a string x is the minimum program length needed to compute a particular string x. </p>
<p>It has been formally proven that one can't compute the Kolmogorov complexity of a string. In practice, you can approximate via an upper bound.</p>
<p>The following paper by Ferbus-Zanda and Griorieff gives you the theory <a href="http://arxiv.org/abs/1010.3201" rel="nofollow">http://arxiv.org/abs/1010.3201</a></p>
<p>An intuitive way of thinking about such an approximate upper bound is to consider the length of a compression program that can decompress to a particular string.</p>
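<p>As a rough illustration (only an upper-bound proxy, and compressor/machine-dependent), the compressed length is trivial to compute:</p>

<pre><code>import os
import zlib

def complexity_upper_bound(s: bytes) -> int:
    # Compressed length: a crude upper-bound proxy for K(s).
    return len(zlib.compress(s, 9))

print(complexity_upper_bound(b"01" * 5000))       # highly regular -> compresses well
print(complexity_upper_bound(os.urandom(10000)))  # random -> close to the raw length
</code></pre>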
<p>Applying this to your problem, the string you describe is a random binary one, doubled. The input string acts as a seed for the random number generator.</p>
<p>Ignoring the kolmogorov complexity part of your question, and just looking at space complexity (ie. memory footprint) aspect as @templatetypedef did, the criteria you mention are so loose that all you can say is that the lower space bound for the algorithm is O(1) and the upper bound O(n), where n is the output.</p> | 2016-05-30 11:06:29.480000+00:00 | 2016-05-30 11:06:29.480000+00:00 | null | null | 3,836,134 | <p>Suppose for various input strings an algorithm generates binary string with same number of 0's and 1's. The output for two different input strings may or may not be the same. Can we say anything about the space complexity of the algorithm? </p> | 2010-10-01 02:21:38.490000+00:00 | 2016-05-30 11:06:29.480000+00:00 | null | space-complexity | ['http://arxiv.org/abs/1010.3201'] | 1 |
46,438,438 | <p>To rephrase your question:</p>
<p>"When training a neural network, a common stopping criterion is the 'early stopping criterion' which stops training when the validation loss increases (signaling overfitting). For small datasets, where training samples are precious, we would prefer to use some other criterion and use 100% of the data for training the model."</p>
<p>I think this is generally a hard problem, so I am not surprised you have not found a simple answer. I think you have a few options:</p>
<ol>
<li>Add regularization (such as Dropout or Batch Normalization) which should help prevent overfitting. Then, use the training loss for a stopping criterion. You could see how this approach would perform on a validation set without using early stopping to ensure that the model is not overfitting.</li>
<li>Be sure not to overprovision the model. Smaller models will have a more difficult time overfitting.</li>
<li>Take a look at the stopping criterion described in this paper which does not rely on a validation set: <a href="https://arxiv.org/pdf/1703.09580.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1703.09580.pdf</a></li>
</ol>
<p>Finally, you may not use Neural Networks here. Generally, these models work best with large amounts of training data. In this case of 700 samples, you can possibly get better performance with another algorithm.</p> | 2017-09-27 02:13:51.523000+00:00 | 2017-09-27 02:13:51.523000+00:00 | null | null | 46,438,060 | <p>This is a problem that I am constantly facing, but don't seem to find the answer anywhere. I have a data set of 700 samples. As a result, I have to use cross-validation instead of just using one validation and one test set to get a close estimate of the error.</p>
<p>I would like to use a neural network to do this. But after doing CV with a neural network, and get an error estimate, how do I train the NN on the whole data set? Because for other algorithms like Logistic regression or SVM, there is no question of when to stop in training. But for NN, you train it until your validation score goes down. So, for the final model, training on the whole dataset, how do you know when to stop?</p>
<p>Just to make it clear, my problem is not how to choose hyper-parametes with NN. I can do that by using a nested CV. My question is how to train the final NN on the whole data set(when to stop more specifically) before applying it in wild?</p> | 2017-09-27 01:21:22.853000+00:00 | 2017-09-27 02:25:01.857000+00:00 | 2017-09-27 02:25:01.857000+00:00 | python|validation|machine-learning|neural-network|conv-neural-network | ['https://arxiv.org/pdf/1703.09580.pdf'] | 1 |
54,255,735 | <p>For categorical features, target-based statistics were used. This is currently the best way to preprocess categorical features for GBDT and it works better than one-hot encoding. It is similar to target encoding, but uses a random permutation so as not to overfit.
Details and comparisons about this approach can be found in NIPS 2018 paper "CatBoost: unbiased boosting with categorical features" (<a href="https://arxiv.org/abs/1706.09516" rel="nofollow noreferrer">https://arxiv.org/abs/1706.09516</a>).</p> | 2019-01-18 14:11:37.260000+00:00 | 2019-01-18 14:11:37.260000+00:00 | null | null | 54,171,680 | <p>I've recently started to use CatBoost for rapid prototyping of machine learning models, inspired by the outstanding <a href="https://github.com/catboost/benchmarks/tree/master/quality_benchmarks" rel="nofollow noreferrer">performance benchmarks</a> of CatBoost compared to XGBoost, LightGBM and h2o.</p>
<p>Since XGBoost can only accept numeric features, a comparison between CatBoost and XGBoost needs a common preprocessing of categorical features. It is not entirely clear to me what kind of preprocessing was used to encode categorical features in the benchmarking experiments, and the rationale for not using a simple one-hot encoding.</p>
<p>I've tried to read the <a href="https://github.com/catboost/benchmarks/blob/master/quality_benchmarks/comparison_description.pdf" rel="nofollow noreferrer">documentation</a> of the experiments. As far as I understand it, the procedure to encode categorical feature <code>j</code> is about equivalent to the following:</p>
<ol>
<li>On the <code>train</code> set, group the response <code>y</code> by <code>j</code>, aggregating with the <code>mean</code> function . Let's call the result <code>df_agg_j</code></li>
<li>Left join the <code>train</code> set and <code>df_agg_j</code> on the categorical column <code>j</code>, drop the original categorical column <code>j</code> and use the new numeric column instead</li>
<li>Left join the <code>valid</code> set and <code>df_agg_j</code> on the categorical column <code>j</code>, drop the original categorical column <code>j</code> and use the new numeric column instead</li>
</ol>
<p>Still I don't understand the need for "a random permutation of the objects for j-th categorical feature and i-th object", and for adding 1 at the numerator and 2 to the denominator in the final formula under the section "Preparation of Splits" of the <a href="https://github.com/catboost/benchmarks/blob/master/quality_benchmarks/comparison_description.pdf" rel="nofollow noreferrer">documentation</a>.</p>
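<p>For concreteness, the kind of permutation-based ("ordered") target statistic I understand this to be looks roughly like the toy sketch below (with a generic smoothing prior; it is not the exact formula or the benchmark code):</p>

<pre><code>import numpy as np

def ordered_target_stats(cats, y, prior=0.5, seed=0):
    # Encode each row using only rows that precede it in a random permutation,
    # plus a smoothing prior, so a row's own label never leaks into its encoding.
    rng = np.random.default_rng(seed)
    sums, counts = {}, {}
    encoded = np.empty(len(cats), dtype=float)
    for i in rng.permutation(len(cats)):
        c = cats[i]
        encoded[i] = (sums.get(c, 0.0) + prior) / (counts.get(c, 0) + 1.0)
        sums[c] = sums.get(c, 0.0) + y[i]
        counts[c] = counts.get(c, 0) + 1
    return encoded
</code></pre>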
<p>The code for splitting and preprocessing the data can be found <a href="https://github.com/catboost/benchmarks/blob/master/quality_benchmarks/experiment.py" rel="nofollow noreferrer">here</a>.</p>
<p>Is there an explanation (or some reference in the literature) about the method used to encode categorical features in this experiment, and a comparison between this method and one-hot encoding?</p> | 2019-01-13 18:02:15.283000+00:00 | 2019-01-18 14:11:37.260000+00:00 | null | machine-learning|catboost | ['https://arxiv.org/abs/1706.09516'] | 1 |
13,137,344 | <p>I'm the author of the <a href="http://scikit-learn.org" rel="noreferrer">scikit-learn</a> <a href="http://scikit-learn.org/dev/modules/ensemble.html#gradient-boosting" rel="noreferrer">gradient boosting module</a>, a Gradient Boosted Regression Trees implementation in Python. I put some effort in optimizing prediction time since the method was targeted at low-latency environments (in particular ranking problems); the prediction routine is written in C, still there is some overhead due to Python function calls. Having said that: prediction time for single data points with ~50 features and about 250 trees should be << 1ms.</p>
<p>In my use-cases prediction time is often governed by the cost of feature extraction. I strongly recommend profiling to pin-point the source of the overhead (if you use Python, I can recommend <a href="http://packages.python.org/line_profiler/" rel="noreferrer">line_profiler</a>).</p>
<p>If the source of the overhead is prediction rather than feature extraction, you might check whether it's possible to do batch predictions instead of predicting single data points, thus limiting the overhead due to the Python function call (e.g. in ranking you often need to score the top-K documents, so you can do the feature extraction first and then run predict on the K x n_features matrix).</p>
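<p>A small illustration of the batching point with scikit-learn's estimator API (dummy data; exact timings will differ, but the Python call overhead is paid once per <code>predict_proba</code> call):</p>

<pre><code>import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=50, random_state=0)
clf = GradientBoostingClassifier(n_estimators=250, random_state=0).fit(X, y)

candidates = X[:100]
single = [clf.predict_proba(x.reshape(1, -1)) for x in candidates]  # K Python-level calls
batch = clf.predict_proba(candidates)                               # one call scores all K rows
</code></pre>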
<p>If this doesn't help either, you should try to limit the number of trees, because the runtime cost for prediction is basically linear in the number of trees.
There are a number of ways to limit the number of trees without affecting the model accuracy:</p>
<ol>
<li><p>Proper tuning of the learning rate; the smaller the learning rate, the more trees are needed and thus the slower is prediction.</p></li>
<li><p>Post-process GBM with L1 regularization (Lasso); See <a href="http://www-stat.stanford.edu/~tibs/ElemStatLearn/" rel="noreferrer">Elements of Statistical Learning</a> Section 16.3.1 - use predictions of each tree as new features and run the representation through a L1 regularized linear model - remove those trees that don't get any weight.</p></li>
<li><p>Fully-corrective weight updates; instead of doing the line-search/weight update just for the most recent tree, update all trees (see [Warmuth2006] and [Johnson2012]). Better convergence - fewer trees.</p></li>
</ol>
<p>If none of the above does the trick you could investigate cascades or early-exit strategies (see [Chen2012])</p>
<p>References:</p>
<p>[Warmuth2006] M. Warmuth, J. Liao, and G. Ratsch. Totally corrective boosting algorithms that maximize the margin. In Proceedings of the 23rd international conference on Machine learning, 2006.</p>
<p>[Johnson2012] Rie Johnson, Tong Zhang, Learning Nonlinear Functions Using Regularized Greedy Forest, arxiv, 2012.</p>
<p>[Chen2012] Minmin Chen, Zhixiang Xu, Kilian Weinberger, Olivier Chapelle, Dor Kedem, Classifier Cascade for Minimizing Feature Evaluation Cost, JMLR W&CP 22: 218-226, 2012.</p> | 2012-10-30 10:46:08.057000+00:00 | 2012-10-30 10:59:32.723000+00:00 | 2012-10-30 10:59:32.723000+00:00 | null | 11,295,755 | <p>Can anyone recommend a strategy for making predictions using a gradient boosting model in the <10-15ms range (the faster the better)? </p>
<p>I have been using <code>R</code>'s <code>gbm</code> package, but the first prediction takes ~50ms (subsequent vectorized predictions average to 1ms, so there appears to be overhead, perhaps in the call to the C++ library). As a guideline, there will be ~10-50 inputs and ~50-500 trees. The task is classification and I need access to predicted probabilities.</p>
<p>I know there are a lot of libraries out there, but I've had little luck finding information even on rough prediction times for them. The training will happen offline, so only predictions need to be fast -- also, predictions may come from a piece of code / library that is completely separate from whatever does the training (as long as there is a common format for representing the trees).</p> | 2012-07-02 14:33:16.980000+00:00 | 2012-10-30 10:59:32.723000+00:00 | null | machine-learning|classification | ['http://scikit-learn.org', 'http://scikit-learn.org/dev/modules/ensemble.html#gradient-boosting', 'http://packages.python.org/line_profiler/', 'http://www-stat.stanford.edu/~tibs/ElemStatLearn/'] | 4 |
63,608,348 | <p>As of 2020 you can use RDF* as follows:</p>
<p><< :Tolkien :wrote :LordOfTheRings >> :said :Wikipedia .</p>
<p>In 2020 many leading triple stores implemented this approach. There are also tools to convert standard reification to RDF* to reduce triple bloat. This approach is efficient in terms of the number of triples and the speed to load data, as reported by Ontotext’s GraphDB Triple store as well as several others.</p>
<p>You can read about the origins of this approach here <a href="https://arxiv.org/pdf/1406.3399.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1406.3399.pdf</a></p> | 2020-08-27 02:40:07.880000+00:00 | 2020-08-27 02:40:07.880000+00:00 | null | null | 1,312,741 | <p>Could anybody be so kind to give me a simple example of reification in RDF? I want to see if I understood it correctly.</p>
<p>For example, I propose the following case</p>
<pre><code>Tolkien -> wrote -> Lord of the rings
/|\
|
Wikipedia said that
</code></pre>
<p>How would you write it <em>with</em> and <em>without</em> reification (i.e. as a simple RDF statement with no need for reification)?</p> | 2009-08-21 15:42:06.763000+00:00 | 2022-07-06 21:14:42.520000+00:00 | 2013-04-01 04:31:30.680000+00:00 | rdf|reification | ['https://arxiv.org/pdf/1406.3399.pdf'] | 1 |
69,025,751 | <h1>Fine Tuning Approach</h1>
<p>There are multiple approaches to fine-tune BERT for the target tasks.</p>
<ol>
<li>Further Pre-training the base BERT model</li>
<li>Custom classification layer(s) on top of the base BERT model being trainable</li>
<li>Custom classification layer(s) on top of the base BERT model being non-trainable (frozen)</li>
</ol>
<p>Note that the BERT base model has been pre-trained only for two tasks as in the original paper.</p>
<ul>
<li><a href="https://arxiv.org/abs/1810.04805" rel="noreferrer">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</a></li>
</ul>
<blockquote>
<p>3.1 Pre-training BERT ...we pre-train BERT using two unsupervised tasks<br></p>
<ul>
<li>Task #1: Masked LM<br></li>
<li>Task #2: Next Sentence Prediction (NSP)<br></li>
</ul>
</blockquote>
<p>Hence, the base BERT model is, so to speak, half-baked, and it can be fully baked for the target domain (1st way). We can use it as part of our custom model training with the base trainable (2nd) or non-trainable (3rd).</p>
<hr />
<h1>1st approach</h1>
<p><a href="https://arxiv.org/abs/1905.05583" rel="noreferrer">How to Fine-Tune BERT for Text Classification?</a> demonstrated the 1st approach of Further Pre-training, and pointed out the learning rate is the key to avoid <strong>Catastrophic Forgetting</strong> where the pre-trained knowledge is erased during learning of new knowledge.</p>
<blockquote>
<p>We find that a lower learning rate, such as 2e-5,
is necessary to make BERT overcome the catastrophic forgetting problem. With an aggressive learn rate of 4e-4, the training set fails to converge.<br>
<a href="https://i.stack.imgur.com/pm2EV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pm2EV.png" alt="enter image description here" /></a></p>
</blockquote>
<p>Probably this is the reason why the <a href="https://arxiv.org/abs/1810.04805" rel="noreferrer">BERT paper</a> used 5e-5, 4e-5, 3e-5, and 2e-5 for <strong>fine-tuning</strong>.</p>
<blockquote>
<p>We use a batch size of 32 and fine-tune for 3 epochs over the data for all GLUE tasks. For each task, we selected the best fine-tuning learning rate (among 5e-5, 4e-5, 3e-5, and 2e-5) on the Dev set</p>
</blockquote>
<p>Note that the base model pre-training itself used a higher learning rate.</p>
<ul>
<li><a href="https://huggingface.co/bert-base-uncased#pretraining" rel="noreferrer">bert-base-uncased - pretraining</a></li>
</ul>
<blockquote>
<p>The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of <code>1e-4</code>, β1=<code>0.9</code> and β2=<code>0.999</code>, a weight decay of <code>0.01</code>, learning rate warmup for 10,000 steps and linear decay of the learning rate after.</p>
</blockquote>
<p>I will describe the 1st way as part of the 3rd approach below.</p>
<p>FYI:
<a href="https://huggingface.co/transformers/model_doc/distilbert.html#tfdistilbertmodel" rel="noreferrer">TFDistilBertModel</a> is the bare base model with the name <code>distilbert</code>.</p>
<pre><code>Model: "tf_distil_bert_model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
distilbert (TFDistilBertMain multiple 66362880
=================================================================
Total params: 66,362,880
Trainable params: 66,362,880
Non-trainable params: 0
</code></pre>
<hr />
<h1>2nd approach</h1>
<p>Huggingface takes the 2nd approach as in <a href="https://huggingface.co/transformers/custom_datasets.html#fine-tuning-with-native-pytorch-tensorflow" rel="noreferrer">Fine-tuning with native PyTorch/TensorFlow</a> where <code>TFDistilBertForSequenceClassification</code> has added the custom classification layer <code>classifier</code> on top of the base <code>distilbert</code> model being trainable. The small learning rate requirement will apply as well to avoid the catastrophic forgetting.</p>
<pre><code>from transformers import TFDistilBertForSequenceClassification
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss=model.compute_loss) # can also use any keras loss fn
model.fit(train_dataset.shuffle(1000).batch(16), epochs=3, batch_size=16)
</code></pre>
<pre><code>Model: "tf_distil_bert_for_sequence_classification_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
distilbert (TFDistilBertMain multiple 66362880
_________________________________________________________________
pre_classifier (Dense) multiple 590592
_________________________________________________________________
classifier (Dense) multiple 1538
_________________________________________________________________
dropout_59 (Dropout) multiple 0
=================================================================
Total params: 66,955,010
Trainable params: 66,955,010 <--- All parameters are trainable
Non-trainable params: 0
</code></pre>
<h3>Implementation of the 2nd approach</h3>
<pre><code>import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from transformers import (
DistilBertTokenizerFast,
TFDistilBertForSequenceClassification,
)
DATA_COLUMN = 'text'
LABEL_COLUMN = 'category_index'
MAX_SEQUENCE_LENGTH = 512
LEARNING_RATE = 5e-5
BATCH_SIZE = 16
NUM_EPOCHS = 3
NUM_LABELS = 2  # NOTE: added because it is used below; set it to the number of classes in LABEL_COLUMN
# --------------------------------------------------------------------------------
# Tokenizer
# --------------------------------------------------------------------------------
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
def tokenize(sentences, max_length=MAX_SEQUENCE_LENGTH, padding='max_length'):
"""Tokenize using the Huggingface tokenizer
Args:
sentences: String or list of string to tokenize
padding: Padding method ['do_not_pad'|'longest'|'max_length']
"""
return tokenizer(
sentences,
truncation=True,
padding=padding,
max_length=max_length,
return_tensors="tf"
)
# --------------------------------------------------------------------------------
# Load data
# --------------------------------------------------------------------------------
raw_train = pd.read_csv("./train.csv")
train_data, validation_data, train_label, validation_label = train_test_split(
raw_train[DATA_COLUMN].tolist(),
raw_train[LABEL_COLUMN].tolist(),
test_size=.2,
shuffle=True
)
# --------------------------------------------------------------------------------
# Prepare TF dataset
# --------------------------------------------------------------------------------
train_dataset = tf.data.Dataset.from_tensor_slices((
dict(tokenize(train_data)), # Convert BatchEncoding instance to dictionary
train_label
)).shuffle(1000).batch(BATCH_SIZE).prefetch(1)
validation_dataset = tf.data.Dataset.from_tensor_slices((
dict(tokenize(validation_data)),
validation_label
)).batch(BATCH_SIZE).prefetch(1)
# --------------------------------------------------------------------------------
# training
# --------------------------------------------------------------------------------
model = TFDistilBertForSequenceClassification.from_pretrained(
'distilbert-base-uncased',
num_labels=NUM_LABELS
)
optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
model.compile(
optimizer=optimizer,
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(
x=train_dataset,
y=None,
validation_data=validation_dataset,
    # batch_size is omitted here: the tf.data datasets above are already batched
epochs=NUM_EPOCHS,
)
</code></pre>
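<p>A short inference sketch for the trained model above (it reuses the <code>tokenize</code> helper; the text is a placeholder):</p>

<pre><code>new_texts = ["an example sentence to classify"]
outputs = model(dict(tokenize(new_texts)))                    # TFSequenceClassifierOutput
predicted_class = tf.math.argmax(outputs.logits, axis=-1).numpy()
print(predicted_class)
</code></pre>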
<hr />
<h1>3rd approach</h1>
<h2>Basics</h2>
<p>Please note that the images are taken from <a href="http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/" rel="noreferrer">A Visual Guide to Using BERT for the First Time</a> and modified.</p>
<h3>Tokenizer</h3>
<p><a href="https://huggingface.co/transformers/main_classes/tokenizer.html" rel="noreferrer">Tokenizer</a> generates the instance of BatchEncoding which can be used like a Python dictionary and the input to the BERT model.</p>
<ul>
<li><a href="https://huggingface.co/transformers/main_classes/tokenizer.html#batchencoding" rel="noreferrer">BatchEncoding</a></li>
</ul>
<blockquote>
<p>Holds the output of the encode_plus() and batch_encode() methods (tokens, attention_masks, etc).
<br>
This class is derived from a python dictionary and <strong>can be used as a dictionary</strong>. In addition, this class exposes utility methods to map from word/character space to token space.<br><br>
Parameters<br></p>
<ul>
<li>data (dict) – Dictionary of lists/arrays/tensors returned by the encode/batch_encode methods (‘input_ids’, ‘attention_mask’, etc.).</li>
</ul>
</blockquote>
<p>The <code>data</code> attribute of the class holds the generated tokens, which include the <code>input_ids</code> and <code>attention_mask</code> elements.</p>
<h3>input_ids</h3>
<ul>
<li><a href="https://huggingface.co/transformers/glossary.html#input-ids" rel="noreferrer">input_ids</a></li>
</ul>
<blockquote>
<p>The input ids are often the only required parameters to be passed to the model as input. They are <strong>token indices, numerical representations of tokens</strong> building the sequences that will be used as input by the model.</p>
</blockquote>
<h3>attention_mask</h3>
<ul>
<li><a href="https://huggingface.co/transformers/glossary.html#attention-mask" rel="noreferrer">Attention mask</a></li>
</ul>
<blockquote>
<p>This argument indicates to the model which tokens should be attended to, and which should not.</p>
</blockquote>
<p>If the attention_mask is <code>0</code>, the token id is ignored. For instance, if a sequence is padded to adjust its length, the padded tokens should be ignored, hence their attention_mask values are 0.</p>
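<p>A minimal illustration with the tokenizer loaded above (the exact token id values depend on the vocabulary and are shown here only as an example):</p>
<pre><code>encoding = tokenizer("hello world", padding='max_length', max_length=8, truncation=True)
print(encoding['input_ids'])       # e.g. [101, 7592, 2088, 102, 0, 0, 0, 0]
print(encoding['attention_mask'])  # [1, 1, 1, 1, 0, 0, 0, 0] -- padded positions are masked out
</code></pre>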
<h3>Special Tokens</h3>
<p>BertTokenizer adds special tokens, enclosing a sequence with <code>[CLS]</code> and <code>[SEP]</code>. <code>[CLS]</code> represents <strong>Classification</strong> and <code>[SEP]</code> separates sequences. For Question Answering or Paraphrase tasks, <code>[SEP]</code> separates the two sentences to compare.</p>
<p><a href="https://huggingface.co/transformers/model_doc/bert.html#berttokenizer" rel="noreferrer">BertTokenizer</a></p>
<blockquote>
<ul>
<li>cls_token (str, optional, defaults to "<strong>[CLS]</strong>")<BR>The <strong>Classifier Token which is used when doing sequence classification</strong> (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.</li>
<li>sep_token (str, optional, defaults to "[SEP]")<BR>The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.</li>
</ul>
</blockquote>
<p><a href="http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/" rel="noreferrer">A Visual Guide to Using BERT for the First Time</a> show the tokenization.</p>
<p><a href="https://i.stack.imgur.com/zQtff.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zQtff.png" alt="enter image description here" /></a></p>
<h3>[CLS]</h3>
<p>The embedding vector for <strong><code>[CLS]</code></strong> in the output from the base model's final layer represents the classification that has been learned by the base model. Hence, feed the embedding vector of the <strong><code>[CLS]</code></strong> token into the classification layer added on top of the base model.</p>
<ul>
<li><a href="https://arxiv.org/abs/1810.04805" rel="noreferrer">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</a></li>
</ul>
<blockquote>
<p>The first token of every sequence is always <code>a special classification token ([CLS])</code>. The final hidden state corresponding to this token is <strong>used as the aggregate sequence representation for classification tasks</strong>. Sentence pairs are packed together into a single sequence. We differentiate the sentences in two ways. First, we separate them with a special token ([SEP]). Second, we add a learned embedding to every token indicating whether it belongs to sentence A or sentence B.</p>
</blockquote>
<p>The model structure is illustrated below.</p>
<p><a href="https://i.stack.imgur.com/VAq7v.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/VAq7v.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/tjpn4.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/tjpn4.jpg" alt="enter image description here" /></a></p>
<h3>Vector size</h3>
<p>In the model <code>distilbert-base-uncased</code>, each token is embedded into a vector of size <strong>768</strong>. The shape of the output from the base model is <code>(batch_size, max_sequence_length, embedding_vector_size=768)</code>. This matches the BERT/BASE configuration in the BERT paper (as indicated by the <em><strong>base</strong></em> in distilbert-<em><strong>base</strong></em>-uncased).</p>
<ul>
<li><a href="https://arxiv.org/abs/1810.04805" rel="noreferrer">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</a></li>
</ul>
<blockquote>
<p>BERT/BASE (L=12, H=<strong>768</strong>, A=12, Total Parameters=110M) and BERT/LARGE (L=24, H=1024, A=16, Total Parameters=340M).</p>
</blockquote>
<h3>Base Model - TFDistilBertModel</h3>
<ul>
<li><a href="https://towardsdatascience.com/hugging-face-transformers-fine-tuning-distilbert-for-binary-classification-tasks-490f1d192379" rel="noreferrer">Hugging Face Transformers: Fine-tuning DistilBERT for Binary Classification Tasks</a></li>
</ul>
<blockquote>
<p>TFDistilBertModel class to instantiate the base DistilBERT model <strong>without any specific head on top</strong> (as opposed to other classes such as TFDistilBertForSequenceClassification that do have an added classification head). <br><br>
We do not want any task-specific head attached because we simply want the pre-trained weights of the base model to provide a general understanding of the English language, and it will be our job to add our own classification head during the fine-tuning process in order to help the model distinguish between toxic comments.</p>
</blockquote>
<p><code>TFDistilBertModel</code> generates an instance of <code>TFBaseModelOutput</code> whose <code>last_hidden_state</code> parameter is the output from the model's last layer.</p>
<pre><code>TFBaseModelOutput([(
'last_hidden_state',
    <tf.Tensor: shape=(batch_size, sequence_length, 768), dtype=float32, numpy=array([[[...]]], dtype=float32)>
)])
</code></pre>
<ul>
<li><a href="https://huggingface.co/transformers/main_classes/output.html#tfbasemodeloutput" rel="noreferrer">TFBaseModelOutput</a></li>
</ul>
<blockquote>
<p>Parameters<br></p>
<ul>
<li>last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.</li>
</ul>
</blockquote>
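<p>A minimal sketch to confirm the shapes and the <code>[CLS]</code> slice, assuming the tokenizer and the base model are loaded as in this answer:</p>
<pre><code>base = TFDistilBertModel.from_pretrained('distilbert-base-uncased')
tokens = tokenizer(["a single test sentence"], padding='max_length', max_length=512, truncation=True, return_tensors="tf")
output = base(dict(tokens))
print(output.last_hidden_state.shape)              # (1, 512, 768) = (batch_size, max_sequence_length, 768)
cls_embedding = output.last_hidden_state[:, 0, :]  # (1, 768) embedding of the [CLS] token
</code></pre>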
<h2>Implementation</h2>
<h3>Python modules</h3>
<pre><code>import datetime
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from transformers import (
DistilBertTokenizerFast,
TFDistilBertModel,
)
</code></pre>
<h3>Configuration</h3>
<pre><code>TIMESTAMP = datetime.datetime.now().strftime("%Y%b%d%H%M").upper()
DATA_COLUMN = 'text'
LABEL_COLUMN = 'category_index'
MAX_SEQUENCE_LENGTH = 512 # Max length allowed for BERT is 512.
raw_train = pd.read_csv("./train.csv")   # load the training data here so the label count can be derived
NUM_LABELS = len(raw_train[LABEL_COLUMN].unique())
MODEL_NAME = 'distilbert-base-uncased'
NUM_BASE_MODEL_OUTPUT = 768
# Flag to freeze base model
FREEZE_BASE = True
# Flag to add custom classification heads
USE_CUSTOM_HEAD = True
if USE_CUSTOM_HEAD == False:
# Make the base trainable when no classification head exists.
FREEZE_BASE = False
BATCH_SIZE = 16
NUM_EPOCHS = 3
LEARNING_RATE = 1e-2 if FREEZE_BASE else 5e-5
L2 = 0.01
</code></pre>
<h3>Tokenizer</h3>
<pre><code>tokenizer = DistilBertTokenizerFast.from_pretrained(MODEL_NAME)
def tokenize(sentences, max_length=MAX_SEQUENCE_LENGTH, padding='max_length'):
"""Tokenize using the Huggingface tokenizer
Args:
sentences: String or list of string to tokenize
padding: Padding method ['do_not_pad'|'longest'|'max_length']
"""
return tokenizer(
sentences,
truncation=True,
padding=padding,
max_length=max_length,
return_tensors="tf"
)
</code></pre>
<h3>Input layer</h3>
<p>The base model expects <code>input_ids</code> and <code>attention_mask</code>, each of shape <code>(max_sequence_length,)</code>. Generate a Keras tensor for each of them with the <code>Input</code> layer.</p>
<pre><code># Inputs for token indices and attention masks
input_ids = tf.keras.layers.Input(shape=(MAX_SEQUENCE_LENGTH,), dtype=tf.int32, name='input_ids')
attention_mask = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32, name='attention_mask')
</code></pre>
<h3>Base model layer</h3>
<p>Generate the output from the base model. The base model generates <code>TFBaseModelOutput</code>. Feed the embedding of <strong><code>[CLS]</code></strong> to the next layer.</p>
<pre><code>base = TFDistilBertModel.from_pretrained(
MODEL_NAME,
num_labels=NUM_LABELS
)
# Freeze the base model weights.
if FREEZE_BASE:
for layer in base.layers:
layer.trainable = False
base.summary()
# [CLS] embedding is last_hidden_state[:, 0, :]
output = base([input_ids, attention_mask]).last_hidden_state[:, 0, :]
</code></pre>
<h3>Classification layers</h3>
<pre><code>if USE_CUSTOM_HEAD:
# -------------------------------------------------------------------------------
    # Classification layer 01
# --------------------------------------------------------------------------------
output = tf.keras.layers.Dropout(
rate=0.15,
name="01_dropout",
)(output)
output = tf.keras.layers.Dense(
units=NUM_BASE_MODEL_OUTPUT,
kernel_initializer='glorot_uniform',
activation=None,
name="01_dense_relu_no_regularizer",
)(output)
output = tf.keras.layers.BatchNormalization(
name="01_bn"
)(output)
output = tf.keras.layers.Activation(
"relu",
name="01_relu"
)(output)
# --------------------------------------------------------------------------------
    # Classification layer 02
# --------------------------------------------------------------------------------
output = tf.keras.layers.Dense(
units=NUM_BASE_MODEL_OUTPUT,
kernel_initializer='glorot_uniform',
activation=None,
name="02_dense_relu_no_regularizer",
)(output)
output = tf.keras.layers.BatchNormalization(
name="02_bn"
)(output)
output = tf.keras.layers.Activation(
"relu",
name="02_relu"
)(output)
</code></pre>
<h3>Softmax Layer</h3>
<pre><code>output = tf.keras.layers.Dense(
units=NUM_LABELS,
kernel_initializer='glorot_uniform',
kernel_regularizer=tf.keras.regularizers.l2(l2=L2),
activation='softmax',
name="softmax"
)(output)
</code></pre>
<h3>Final Custom Model</h3>
<pre><code>name = f"{TIMESTAMP}_{MODEL_NAME.upper()}"
model = tf.keras.models.Model(inputs=[input_ids, attention_mask], outputs=output, name=name)
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
metrics=['accuracy']
)
model.summary()
---
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_ids (InputLayer) [(None, 256)] 0
__________________________________________________________________________________________________
attention_mask (InputLayer) [(None, 256)] 0
__________________________________________________________________________________________________
tf_distil_bert_model (TFDistilB TFBaseModelOutput(la 66362880 input_ids[0][0]
attention_mask[0][0]
__________________________________________________________________________________________________
tf.__operators__.getitem_1 (Sli (None, 768) 0 tf_distil_bert_model[1][0]
__________________________________________________________________________________________________
01_dropout (Dropout) (None, 768) 0 tf.__operators__.getitem_1[0][0]
__________________________________________________________________________________________________
01_dense_relu_no_regularizer (D (None, 768) 590592 01_dropout[0][0]
__________________________________________________________________________________________________
01_bn (BatchNormalization) (None, 768) 3072 01_dense_relu_no_regularizer[0][0
__________________________________________________________________________________________________
01_relu (Activation) (None, 768) 0 01_bn[0][0]
__________________________________________________________________________________________________
02_dense_relu_no_regularizer (D (None, 768) 590592 01_relu[0][0]
__________________________________________________________________________________________________
02_bn (BatchNormalization) (None, 768) 3072 02_dense_relu_no_regularizer[0][0
__________________________________________________________________________________________________
02_relu (Activation) (None, 768) 0 02_bn[0][0]
__________________________________________________________________________________________________
softmax (Dense) (None, 2) 1538 02_relu[0][0]
==================================================================================================
Total params: 67,551,746
Trainable params: 1,185,794
Non-trainable params: 66,365,952 <--- Base BERT model is frozen
</code></pre>
<h3>Data allocation</h3>
<pre><code># --------------------------------------------------------------------------------
# Split data into training and validation
# --------------------------------------------------------------------------------
raw_train = pd.read_csv("./train.csv")
train_data, validation_data, train_label, validation_label = train_test_split(
raw_train[DATA_COLUMN].tolist(),
raw_train[LABEL_COLUMN].tolist(),
test_size=.2,
shuffle=True
)
# X = dict(tokenize(train_data))
# Y = tf.convert_to_tensor(train_label)
X = tf.data.Dataset.from_tensor_slices((
dict(tokenize(train_data)), # Convert BatchEncoding instance to dictionary
train_label
)).batch(BATCH_SIZE).prefetch(1)
V = tf.data.Dataset.from_tensor_slices((
dict(tokenize(validation_data)), # Convert BatchEncoding instance to dictionary
validation_label
)).batch(BATCH_SIZE).prefetch(1)
</code></pre>
<h3>Train</h3>
<pre><code># --------------------------------------------------------------------------------
# Train the model
# https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit
# Input data x can be a dict mapping input names to the corresponding array/tensors,
# if the model has named inputs. Beware of the "names". y should be consistent with x
# (you cannot have Numpy inputs and tensor targets, or inversely).
# --------------------------------------------------------------------------------
history = model.fit(
    x=X,                      # the tf.data.Dataset already yields (inputs, labels) batches, so y is not passed
    epochs=NUM_EPOCHS,
    validation_data=V,        # do not pass batch_size when x is a tf.data.Dataset (Keras raises an error)
)
</code></pre>
<p>To implement the 1st approach, change the configuration as below.</p>
<pre><code>USE_CUSTOM_HEAD = False
</code></pre>
<p>Then <code>FREEZE_BASE</code> is changed to <code>False</code> and <code>LEARNING_RATE</code> is changed to <code>5e-5</code>, which runs further pre-training on the base BERT model.</p>
<h3>Saving the model</h3>
<p>For the 3rd approach, saving the model will cause issues. The <a href="https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.save_pretrained" rel="noreferrer">save_pretrained</a> method of the Huggingface Model cannot be used, as the model is not a direct subclass of the Huggingface <a href="https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel" rel="noreferrer">PreTrainedModel</a>.</p>
<p><a href="https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model" rel="noreferrer">Keras save_model</a> causes an error with the default <code>save_traces=True</code>, or causes a different error with <code>save_traces=True</code> when loading the model with <a href="https://www.tensorflow.org/api_docs/python/tf/keras/models/load_model" rel="noreferrer">Keras load_model</a>.</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-71-01d66991d115> in <module>()
----> 1 tf.keras.models.load_model(MODEL_DIRECTORY)
11 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/saved_model/load.py in _unable_to_call_layer_due_to_serialization_issue(layer, *unused_args, **unused_kwargs)
865 'recorded when the object is called, and used when saving. To manually '
866 'specify the input shape/dtype, decorate the call function with '
--> 867 '`@tf.function(input_signature=...)`.'.format(layer.name, type(layer)))
868
869
ValueError: Cannot call custom layer tf_distil_bert_model of type <class 'tensorflow.python.keras.saving.saved_model.load.TFDistilBertModel'>, because the call function was not serialized to the SavedModel.Please try one of the following methods to fix this issue:
(1) Implement `get_config` and `from_config` in the layer/model class, and pass the object to the `custom_objects` argument when loading the model. For more details, see: https://www.tensorflow.org/guide/keras/save_and_serialize
(2) Ensure that the subclassed model or layer overwrites `call` and not `__call__`. The input shape and dtype will be automatically recorded when the object is called, and used when saving. To manually specify the input shape/dtype, decorate the call function with `@tf.function(input_signature=...)`.
</code></pre>
<p>Only <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model#save_weights" rel="noreferrer">Keras Model save_weights</a> worked as far as I tested.</p>
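<p>A minimal sketch of that workaround (the file name is just a placeholder):</p>
<pre><code># Save only the weights (the architecture itself is not saved).
model.save_weights(f"./{name}.h5")

# To restore, rebuild the exact same architecture by re-running the model
# definition above, then load the weights back into it.
model.load_weights(f"./{name}.h5")
</code></pre>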
<h1>Experiments</h1>
<p>As far as I tested with the <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge" rel="noreferrer">Toxic Comment Classification Challenge</a>, the 1st approach gave better recall (identifying true toxic comments and true non-toxic comments). Code can be accessed below. Please provide corrections/suggestions if you find anything.</p>
<ul>
<li><a href="https://nbviewer.jupyter.org/github/omontasama/nlp-huggingface/blob/main/fine_tuning/huggingface_fine_tuning.ipynb" rel="noreferrer">Code for 1st and 3rd approach</a></li>
</ul>
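<p>A per-class recall check of this kind can be done with scikit-learn, assuming the validation dataset <code>V</code> and <code>validation_label</code> defined above:</p>
<pre><code>from sklearn.metrics import classification_report

probabilities = model.predict(V)             # predict ignores the labels yielded by the dataset
predictions = probabilities.argmax(axis=-1)
print(classification_report(validation_label, predictions))  # per-class precision/recall/F1
</code></pre>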
<hr />
<h1>Related</h1>
<ul>
<li><a href="https://www.youtube.com/watch?v=_eSGWNqKeeY" rel="noreferrer">BERT Document Classification Tutorial with Code</a> - Fine tuning using TFDistilBertForSequenceClassification and Pytorch</li>
<li><a href="https://towardsdatascience.com/hugging-face-transformers-fine-tuning-distilbert-for-binary-classification-tasks-490f1d192379" rel="noreferrer">Hugging Face Transformers: Fine-tuning DistilBERT for Binary Classification Tasks</a> - Fine tuning using TFDistilBertModel</li>
</ul> | 2021-09-02 07:18:15.683000+00:00 | 2021-09-02 07:18:15.683000+00:00 | null | null | 69,025,750 | <p>Is there a <em>Step by step explanation</em> on how to <strong>Fine-tune HuggingFace BERT</strong> model for text classification?</p> | 2021-09-02 07:18:15.683000+00:00 | 2021-11-11 19:55:51.123000+00:00 | 2021-11-11 19:55:51.123000+00:00 | machine-learning|huggingface-transformers|transfer-learning | ['https://arxiv.org/abs/1810.04805', 'https://arxiv.org/abs/1905.05583', 'https://i.stack.imgur.com/pm2EV.png', 'https://arxiv.org/abs/1810.04805', 'https://huggingface.co/bert-base-uncased#pretraining', 'https://huggingface.co/transformers/model_doc/distilbert.html#tfdistilbertmodel', 'https://huggingface.co/transformers/custom_datasets.html#fine-tuning-with-native-pytorch-tensorflow', 'http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/', 'https://huggingface.co/transformers/main_classes/tokenizer.html', 'https://huggingface.co/transformers/main_classes/tokenizer.html#batchencoding', 'https://huggingface.co/transformers/glossary.html#input-ids', 'https://huggingface.co/transformers/glossary.html#attention-mask', 'https://huggingface.co/transformers/model_doc/bert.html#berttokenizer', 'http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/', 'https://i.stack.imgur.com/zQtff.png', 'https://arxiv.org/abs/1810.04805', 'https://i.stack.imgur.com/VAq7v.jpg', 'https://i.stack.imgur.com/tjpn4.jpg', 'https://arxiv.org/abs/1810.04805', 'https://towardsdatascience.com/hugging-face-transformers-fine-tuning-distilbert-for-binary-classification-tasks-490f1d192379', 'https://huggingface.co/transformers/main_classes/output.html#tfbasemodeloutput', 'https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.save_pretrained', 'https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel', 'https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model', 'https://www.tensorflow.org/api_docs/python/tf/keras/models/load_model', 'https://www.tensorflow.org/api_docs/python/tf/keras/Model#save_weights', 'https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge', 'https://nbviewer.jupyter.org/github/omontasama/nlp-huggingface/blob/main/fine_tuning/huggingface_fine_tuning.ipynb', 'https://www.youtube.com/watch?v=_eSGWNqKeeY', 'https://towardsdatascience.com/hugging-face-transformers-fine-tuning-distilbert-for-binary-classification-tasks-490f1d192379'] | 30 |
35,314,016 | <p><strong>EDIT: Important note</strong>
This has been bothering me for a while now. The answers given by myself and Abraham here are correct in the sense that they add variability to x. HOWEVER: Note that you <strong>cannot</strong> simply add uncertainty in this way to cancel out the errors you have in your x-values, so that you regress against "true x". The methods in this answer can show you how adding errors to x affects your regression if you have the true x. If you have a mismeasured x, these answers will not help you. Having errors in the x-values is a very tricky problem to solve, as it leads to "attenuation" and an "errors-in-variables effect". The short version is: having unbiased, random errors in x leads to <em>bias</em> in your regression estimates. If you have this problem, check out Carroll, R.J., Ruppert, D., Crainiceanu, C.M. and Stefanski, L.A., 2006. <em>Measurement error in nonlinear models: a modern perspective</em>. Chapman and Hall/CRC., or for a Bayesian approach, Gustafson, P., 2003. <em>Measurement error and misclassification in statistics and epidemiology: impacts and Bayesian adjustments</em>. CRC Press. I ended up solving my specific problem using Carroll et al.'s SIMEX method along with PyMC3. The details are in Carstens, H., Xia, X. and Yadavalli, S., 2017. <em>Low-cost energy meter calibration method for measurement and verification.</em> Applied energy, 188, pp.563-575. It is also available on ArXiv</p>
<hr>
<p>I converted Abraham Flaxman's answer above into PyMC3, in case someone needs it. There are only some very minor changes, but they can be confusing nevertheless.</p>
<p>The first is that the deterministic decorator <code>@Deterministic</code> is replaced by a distribution-like call function <code>var=pymc3.Deterministic()</code>. Second, when generating a vector of normally distributed random variables,</p>
<pre><code>rvs = pymc2.rnormal(mu=mu, tau=tau)
</code></pre>
<p>is replaced by </p>
<pre><code>rvs = pymc3.Normal('var_name', mu=mu, tau=tau,shape=size(var)).random()
</code></pre>
<p>The complete code is as follows:</p>
<pre><code>import numpy as np
from pymc3 import *
import matplotlib.pyplot as plt
# set random seed for reproducibility
np.random.seed(12345)
x = np.arange(5,400,10)*1e3
# Parameters for gaussian
amp_true = 0.2
size_true = 1.8
ps_true = 0.1
#Gaussian function
gauss = lambda x,amp,size,ps: amp*np.exp(-1*(np.pi**2/(3600.*180.)*size*x)**2/(4.*np.log(2.)))+ps
f_true = gauss(x=x,amp=amp_true, size=size_true, ps=ps_true )
# add noise to the data points
noise = np.random.normal(size=len(x)) * .02
f = f_true + noise
f_error = np.ones_like(f_true)*0.05*f.max()
with Model() as model3:
amp = Uniform('amp', 0.05, 0.4, testval= 0.15)
size = Uniform('size', 0.5, 2.5, testval= 1.0)
ps = Normal('ps', 0.13, 40, testval=0.15)
gauss=Deterministic('gauss',amp*np.exp(-1*(np.pi**2*size*x/(3600.*180.))**2/(4.*np.log(2.)))+ps)
y =Normal('y', mu=gauss, tau=1.0/f_error**2, observed=f)
start=find_MAP()
step=NUTS()
trace=sample(2000,start=start)
# extract and plot results
y_min = np.percentile(trace.gauss,2.5,axis=0)
y_max = np.percentile(trace.gauss,97.5,axis=0)
y_fit = np.percentile(trace.gauss,50,axis=0)
plt.plot(x,f_true,'b', marker='None', ls='-', lw=1, label='True')
plt.errorbar(x,f,yerr=f_error, color='r', marker='.', ls='None', label='Observed')
plt.plot(x,y_fit,'k', marker='+', ls='None', ms=5, mew=1, label='Fit')
plt.fill_between(x, y_min, y_max, color='0.5', alpha=0.5)
plt.legend()
</code></pre>
<p>Which results in</p>
<p><a href="https://i.stack.imgur.com/mvhbd.png" rel="noreferrer">y_error</a></p>
<p>For errors in x (note the 'x' suffix to variables):</p>
<pre><code># define the model/function to be fitted in PyMC3:
with Model() as modelx:
    x_obsx = Normal('x_obsx', mu=x, tau=(1e4)**-2, shape=40)  # Normal comes from the pymc3 star import above
ampx = Uniform('ampx', 0.05, 0.4, testval=0.15)
sizex = Uniform('sizex', 0.5, 2.5, testval=1.0)
psx = Normal('psx', 0.13, 40, testval=0.15)
x_pred = Normal('x_pred', mu=x_obsx, tau=(1e4)**-2*np.ones_like(x_obsx),testval=5*np.ones_like(x_obsx),shape=40) # this allows error in x_obs
gauss=Deterministic('gauss',ampx*np.exp(-1*(np.pi**2*sizex*x_pred/(3600.*180.))**2/(4.*np.log(2.)))+psx)
y = Normal('y', mu=gauss, tau=1.0/f_error**2, observed=f)
start=find_MAP()
step=NUTS()
tracex=sample(20000,start=start)
</code></pre>
<p>Which results in:</p>
<p><a href="https://i.stack.imgur.com/ZOh71.png" rel="noreferrer">x_error_graph</a></p>
<p>The last observation is that when doing</p>
<pre><code>traceplot(tracex[100:])
plt.tight_layout();
</code></pre>
<p>(result not shown), we can see that <code>sizex</code> seems to be suffering from 'attenuation' or 'regression dilution' due to the error in the measurement of <code>x</code>.</p> | 2016-02-10 11:31:16.667000+00:00 | 2018-11-14 19:16:40.873000+00:00 | 2018-11-14 19:16:40.873000+00:00 | null | 24,804,298 | <p>I am trying to fit some data with a Gaussian (and more complex) function(s). I have created a small example below. </p>
<p>My first question is, <em>am I doing it right?</em> </p>
<p>My second question is, <em>how do I add an error in the x-direction, i.e. in the x-position of the observations/data?</em></p>
<p>It is very hard to find nice guides on how to do this kind of regression in pyMC, perhaps because it's easier to use least squares or a similar approach. However, I have many parameters in the end and need to see how well we can constrain them and compare different models, so pyMC seemed like the good choice for that.</p>
<pre><code>import pymc
import numpy as np
import matplotlib.pyplot as plt; plt.ion()
x = np.arange(5,400,10)*1e3
# Parameters for gaussian
amp_true = 0.2
size_true = 1.8
ps_true = 0.1
# Gaussian function
gauss = lambda x,amp,size,ps: amp*np.exp(-1*(np.pi**2/(3600.*180.)*size*x)**2/(4.*np.log(2.)))+ps
f_true = gauss(x=x,amp=amp_true, size=size_true, ps=ps_true )
# add noise to the data points
noise = np.random.normal(size=len(x)) * .02
f = f_true + noise
f_error = np.ones_like(f_true)*0.05*f.max()
# define the model/function to be fitted.
def model(x, f):
amp = pymc.Uniform('amp', 0.05, 0.4, value= 0.15)
size = pymc.Uniform('size', 0.5, 2.5, value= 1.0)
ps = pymc.Normal('ps', 0.13, 40, value=0.15)
@pymc.deterministic(plot=False)
def gauss(x=x, amp=amp, size=size, ps=ps):
e = -1*(np.pi**2*size*x/(3600.*180.))**2/(4.*np.log(2.))
return amp*np.exp(e)+ps
y = pymc.Normal('y', mu=gauss, tau=1.0/f_error**2, value=f, observed=True)
return locals()
MDL = pymc.MCMC(model(x,f))
MDL.sample(1e4)
# extract and plot results
y_min = MDL.stats()['gauss']['quantiles'][2.5]
y_max = MDL.stats()['gauss']['quantiles'][97.5]
y_fit = MDL.stats()['gauss']['mean']
plt.plot(x,f_true,'b', marker='None', ls='-', lw=1, label='True')
plt.errorbar(x,f,yerr=f_error, color='r', marker='.', ls='None', label='Observed')
plt.plot(x,y_fit,'k', marker='+', ls='None', ms=5, mew=2, label='Fit')
plt.fill_between(x, y_min, y_max, color='0.5', alpha=0.5)
plt.legend()
</code></pre>
<p>I realize that I might have to run more iterations, and use burn-in and thinning in the end. The figure plotting the data and the fit is shown below.</p>
<p><img src="https://i.stack.imgur.com/iKjXf.png" alt="Resulting figure from the code."></p>
<p>The pymc.Matplot.plot(MDL) figures look like this, showing nicely peaked distributions. This is good, right?</p>
<p><img src="https://i.stack.imgur.com/U71o2.png" alt="enter image description here"></p> | 2014-07-17 12:58:56.773000+00:00 | 2019-11-26 18:03:50.023000+00:00 | 2019-11-26 18:03:50.023000+00:00 | python|regression|pymc|probabilistic-programming | ['https://i.stack.imgur.com/mvhbd.png', 'https://i.stack.imgur.com/ZOh71.png'] | 2 |
61,748,120 | <p>Dirk's paper <a href="https://arxiv.org/abs/1911.06416" rel="nofollow noreferrer">"Thirteen Simple Steps for Creating An R Package with an External C++ Library"</a> gives an example <code>src/Makevars</code>:</p>
<pre><code>CXX_STD = CXX11
PKG_CFLAGS = -I. -DGMP -DSKIP_MAIN
PKG_LIBS = $(LAPACK_LIBS) $(BLAS_LIBS) $(FLIBS) -lgmpxx -lgmp
</code></pre>
<p>As you can see, additional libraries are specified in <code>PKG_LIBS</code> in this file. The <code>src/Makevars</code> approach assumes that you incorporate C++ code into your project using a standard package layout, as produced by <code>Rcpp.package.skeleton()</code>, with <code>NAMESPACE</code> and <code>DESCRIPTION</code> and so on.</p>
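<p>For the Boost case in the question, a <code>src/Makevars</code> along the same lines might look as follows (a sketch only; the paths and the exact <code>-lboost_*</code> libraries depend on your installation, and header-only parts of Boost need no <code>PKG_LIBS</code> entry at all):</p>
<pre><code>CXX_STD = CXX11
PKG_CPPFLAGS = -I/usr/local/include          # external C++ header include path
PKG_LIBS = -L/usr/local/lib -lboost_system   # external link library path and library
</code></pre>
<p>Additional C++ source files are simply placed in <code>src/</code>, where <code>R CMD INSTALL</code> compiles them automatically.</p>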
<p>According to Dirk's comments above, there is currently no way to specify an external library when C++ code is incorporated using the <code>sourceCpp</code> function, because that function provides an interface which is supposed to be multi-platform.</p> | 2020-05-12 09:22:20.543000+00:00 | 2020-05-12 09:22:20.543000+00:00 | null | null | 18,570,526 | <p>For an external C++ library such as Boost, where can I specify the following:</p>
<pre><code>1. External C++ header file include path
2. External C++ source file
3. External C++ link library file path
</code></pre> | 2013-09-02 10:04:21.633000+00:00 | 2020-05-12 09:22:20.543000+00:00 | null | rcpp | ['https://arxiv.org/abs/1911.06416'] | 1 |
40,680,563 | <blockquote>
<p>Do we consider this scenario to be supervised learning?</p>
</blockquote>
<p>It is supervised learning when you have labels to optimize your model. So for most neural networks, it is supervised.</p>
<p>However, you might also look at the complete task. I guess you don't have any ground truth for image pairs and the "desired" similarity value your model should output?</p>
<p>One way to solve this problem, which sounds inherently unsupervised, is to take a CNN (convolutional neural network) trained (in a supervised way) on the 1000 classes of ImageNet. To get the similarity of two images, you could then simply take the Euclidean distance between their output probability distributions. This will not lead to excellent results, but it is probably a good starting point.</p>
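<p>A rough sketch of that idea, assuming a Keras model pre-trained on ImageNet and placeholder file names <code>a.jpg</code>/<code>b.jpg</code>:</p>
<pre><code>import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras.preprocessing import image

model = ResNet50(weights='imagenet')  # outputs a probability distribution over the 1000 ImageNet classes

def class_probs(path):
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return model.predict(x)[0]

# Smaller distance = more similar (by this crude measure)
distance = np.linalg.norm(class_probs('a.jpg') - class_probs('b.jpg'))
</code></pre>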
<blockquote>
<ol start="2">
<li>What algorithms i should use to train etc..</li>
</ol>
</blockquote>
<p>First, you should define what "similar" means for you. Are two images similar when they contain the same object (classes)? Are they similar if the general color of the image is the same?</p>
<p>For example, how similar are the following 3 pairs of images?</p>
<p><a href="https://i.stack.imgur.com/LBa8D.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/LBa8D.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/U1icu.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/U1icu.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/ozgEl.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ozgEl.png" alt="enter image description here"></a></p>
<p>Have a look at <a href="https://arxiv.org/abs/1503.03832" rel="noreferrer">FaceNet</a> and search for "Content based image retrieval" (CBIR):</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Content-based_image_retrieval" rel="noreferrer">Wikipedia</a></li>
<li><a href="https://scholar.google.de/scholar?hl=en&q=content+based+image+retrival&btnG=&as_sdt=1%2C5&as_sdtp=" rel="noreferrer">Google Scholar</a></li>
</ul> | 2016-11-18 15:20:05.967000+00:00 | 2017-01-13 21:58:39.587000+00:00 | 2017-01-13 21:58:39.587000+00:00 | null | 40,662,773 | <p>Recently I started to play with tensorflow, while trying to learn the popular algorithms i am in a situation where i need to find similarity between images.</p>
<p>Image A is supplied to the system by me, and userx supplies an image B; the system should return image A to userx if image B is similar (in color and class).</p>
<p>Now I have got a few questions:</p>
<ol>
<li>Do we consider this scenario to be supervised learning? I am asking
because I don't see it as a classification problem (confused!!)</li>
<li>What algorithms should I use to train, etc.?</li>
<li>Re-training should be done quite often; how should I tackle this
problem so I don't train every time from scratch (fine-tuning??)</li>
</ol> | 2016-11-17 18:49:19.070000+00:00 | 2017-01-13 21:58:39.587000+00:00 | 2016-11-18 18:55:31.307000+00:00 | machine-learning|scikit-learn|computer-vision|tensorflow|deep-learning | ['https://i.stack.imgur.com/LBa8D.jpg', 'https://i.stack.imgur.com/U1icu.jpg', 'https://i.stack.imgur.com/ozgEl.png', 'https://arxiv.org/abs/1503.03832', 'https://en.wikipedia.org/wiki/Content-based_image_retrieval', 'https://scholar.google.de/scholar?hl=en&q=content+based+image+retrival&btnG=&as_sdt=1%2C5&as_sdtp='] | 6 |
49,188,263 | <p>I have a partial (not complete because I don't thoroughly understand your goals) answer found here: <a href="https://arxiv.org/pdf/1108.1320.pdf" rel="nofollow noreferrer">Compressed matrix multiplication</a>.</p>
<p>You are probably asking for the problem known as "Matrix Multiplication with Sparse Output".</p>
<p>Read the first section (introduction): </p>
<blockquote>
<p>"Our method can be seen as a compressed sensing method for the matrix product, with the nonstandard idea that the <strong>SKETCH</strong> of AB is computed <strong>WITHOUT EXPLICITLY CONSTRUCTING</strong> AB"</p>
</blockquote>
<p>Under some assumptions (sparseness of the output matrix and number of sketches) and using <strong>ERROR CORRECTING CODES</strong> one can obtain the largest entries of the initial product without multiplying A and B.</p>
<p>Look at figure 2 for an example.</p> | 2018-03-09 07:03:13.430000+00:00 | 2018-03-09 07:03:13.430000+00:00 | null | null | 45,617,709 | <p>I would like to implement a matrix multiplication on TensorFlow like <em>C</em> = <em>A</em> · <em>B</em> where <em>A</em> ∈ ℝ<sup><em>n</em>,<em>k</em></sup> and <em>B</em> ∈ ℝ<sup><em>k</em>,<em>n</em></sup>. In my case, <em>n</em> could be potentially large but <em>k</em> is typically small (e.g. low rank or latent embeddings).</p>
<p>As you know the dense matrix <em>C</em> ∈ ℝ<sup><em>n</em>,<em>n</em></sup> is expensive to store in RAM. However, the only entries in <em>C</em> I want to retain are sparse. That is, by defining another sparse matrix <em>D</em> ∈ ℝ<sup><em>n</em>,<em>n</em></sup>, what I really care about are those values with indices <code>[i,j]</code> having values in <em>D</em>. The non-empty values in <em>D</em> should only be <code>1</code>.</p>
<p>So, instead of doing stuff like this:</p>
<pre><code>tmp = tf.matmul(A,B)
C = tf.SparseTensor(D.indices, tf.gather_nd(tmp, D.indices)*D.values, D.dense_shape)
</code></pre>
<p>I want to avoid explicitly computing the dense tensor <code>tmp</code> above.</p>
<p>Thanks in advance!</p> | 2017-08-10 15:21:10.083000+00:00 | 2018-03-09 07:03:13.430000+00:00 | 2017-08-11 03:01:08.307000+00:00 | machine-learning|tensorflow|sparse-matrix|matrix-multiplication | ['https://arxiv.org/pdf/1108.1320.pdf'] | 1 |
51,414,362 | <p>Regarding the two matrices, the input-hidden weight matrix and the hidden-output weight matrix, there is an interesting research paper:
'A Dual Embedding Space Model for Document Ranking', Mitra et al., arXiv 2016 (<a href="https://arxiv.org/pdf/1602.01137.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1602.01137.pdf</a>).
Similar to your question, this paper studies how these two weight matrices are different, and claims that they encode different characteristics of words.</p>
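<p>Both matrices can be pulled out of a trained model; a small sketch assuming gensim 4.x with negative sampling (so the output weights live in <code>syn1neg</code>):</p>
<pre><code>from gensim.models import Word2Vec

sentences = [["hello", "world"], ["word", "embeddings"]]   # toy corpus
model = Word2Vec(sentences, vector_size=10, negative=5, min_count=1)

W_in = model.wv.vectors    # input-hidden weights: the conventional word vectors
W_out = model.syn1neg      # hidden-output weights (present when negative sampling is used)
</code></pre>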
<p>Overall, from my understanding, it is your choice to use either the input-hidden weight matrix (the convention), the hidden-output weight matrix, or the combined one as word embeddings, depending on your data and the problem to solve.</p> | 2018-07-19 04:29:15.810000+00:00 | 2018-07-19 04:29:15.810000+00:00 | null | null | 46,065,773 | <p>In word2vec, after training, we get two weight matrices: 1. the input-hidden weight matrix; 2. the hidden-output weight matrix. People use the input-hidden weight matrix as the word vectors (each row corresponds to a word, i.e., the word vectors). Here come my confusions:</p>
<ol>
<li>Why do people use the input-hidden weight matrix as the word vectors instead of the hidden-output weight matrix?</li>
<li>Why don't we just add the softmax activation function to the hidden layer rather than the output layer, thus avoiding the time-consuming output computation?</li>
</ol>
<p>Plus, clarifying remarks on the intuition of how word vectors can be obtained like this will be appreciated.</p> | 2017-09-06 02:12:56.380000+00:00 | 2020-10-17 12:31:19.583000+00:00 | null | nlp|gensim|word2vec | ['https://arxiv.org/pdf/1602.01137.pdf'] | 1 |