\hypertarget{section}{%
\section{1}\label{section}}
\bibverse{1} The Song of songs, which is Solomon's. Beloved \bibverse{2}
Let him kiss me with the kisses of his mouth; for your love is better
than wine. \bibverse{3} Your oils have a pleasing fragrance. Your name
is oil poured out, therefore the virgins love you. \bibverse{4} Take me
away with you. Let's hurry. The king has brought me into his rooms.
Friends We will be glad and rejoice in you. We will praise your love
more than wine! Beloved They are right to love you. \bibverse{5} I am
dark, but lovely, you daughters of Jerusalem, like Kedar's tents, like
Solomon's curtains. \bibverse{6} Don't stare at me because I am dark,
because the sun has scorched me. My mother's sons were angry with me.
They made me keeper of the vineyards. I haven't kept my own vineyard.
\bibverse{7} Tell me, you whom my soul loves, where you graze your
flock, where you rest them at noon; for why should I be as one who is
veiled beside the flocks of your companions? Lover \bibverse{8} If you
don't know, most beautiful among women, follow the tracks of the sheep.
Graze your young goats beside the shepherds' tents. \bibverse{9} I have
compared you, my love, to a steed in Pharaoh's chariots. \bibverse{10}
Your cheeks are beautiful with earrings, your neck with strings of
jewels. Friends \bibverse{11} We will make you earrings of gold, with
studs of silver. Beloved \bibverse{12} While the king sat at his table,
my perfume spread its fragrance. \bibverse{13} My beloved is to me a
sachet of myrrh, that lies between my breasts. \bibverse{14} My beloved
is to me a cluster of henna blossoms from the vineyards of En Gedi.
Lover \bibverse{15} Behold,+ 1:15 ``Behold'', from ``הִנֵּה'', means
look at, take notice, observe, see, or gaze at. It is often used as an
interjection. you are beautiful, my love. Behold, you are beautiful.
Your eyes are like doves. Beloved \bibverse{16} Behold, you are
beautiful, my beloved, yes, pleasant; and our couch is verdant. Lover
\bibverse{17} The beams of our house are cedars. Our rafters are firs.
\hypertarget{section-1}{%
\section{2}\label{section-1}}
Beloved \bibverse{1} I am a rose of Sharon, a lily of the valleys. Lover
\bibverse{2} As a lily among thorns, so is my love among the daughters.
Beloved \bibverse{3} As the apple tree among the trees of the wood, so
is my beloved among the sons. I sat down under his shadow with great
delight, his fruit was sweet to my taste. \bibverse{4} He brought me to
the banquet hall. His banner over me is love. \bibverse{5} Strengthen me
with raisins, refresh me with apples; for I am faint with love.
\bibverse{6} His left hand is under my head. His right hand embraces me.
\bibverse{7} I adjure you, daughters of Jerusalem, by the roes, or by
the hinds of the field, that you not stir up, nor awaken love, until it
so desires. \bibverse{8} The voice of my beloved! Behold, he comes,
leaping on the mountains, skipping on the hills. \bibverse{9} My beloved
is like a roe or a young deer. Behold, he stands behind our wall! He
looks in at the windows. He glances through the lattice. \bibverse{10}
My beloved spoke, and said to me, ``Rise up, my love, my beautiful one,
and come away. \bibverse{11} For behold, the winter is past. The rain is
over and gone. \bibverse{12} The flowers appear on the earth. The time
of the singing has come, and the voice of the turtledove is heard in our
land. \bibverse{13} The fig tree ripens her green figs. The vines are in
blossom. They give out their fragrance. Arise, my love, my beautiful
one, and come away.'' Lover \bibverse{14} My dove in the clefts of the
rock, in the hiding places of the mountainside, let me see your face.
Let me hear your voice; for your voice is sweet and your face is lovely.
\bibverse{15} Catch for us the foxes, the little foxes that plunder the
vineyards; for our vineyards are in blossom. Beloved \bibverse{16} My
beloved is mine, and I am his. He browses among the lilies.
\bibverse{17} Until the day is cool, and the shadows flee away, turn, my
beloved, and be like a roe or a young deer on the mountains of Bether.
\hypertarget{section-2}{%
\section{3}\label{section-2}}
\bibverse{1} By night on my bed, I sought him whom my soul loves. I
sought him, but I didn't find him. \bibverse{2} I will get up now, and
go about the city; in the streets and in the squares I will seek him
whom my soul loves. I sought him, but I didn't find him. \bibverse{3}
The watchmen who go about the city found me; ``Have you seen him whom my
soul loves?'' \bibverse{4} I had scarcely passed from them, when I found
him whom my soul loves. I held him, and would not let him go, until I
had brought him into my mother's house, into the room of her who
conceived me. \bibverse{5} I adjure you, daughters of Jerusalem, by the
roes, or by the hinds of the field, that you not stir up nor awaken
love, until it so desires. \bibverse{6} Who is this who comes up from
the wilderness like pillars of smoke, perfumed with myrrh and
frankincense, with all spices of the merchant? \bibverse{7} Behold, it
is Solomon's carriage! Sixty mighty men are around it, of the mighty men
of Israel. \bibverse{8} They all handle the sword, and are expert in
war. Every man has his sword on his thigh, because of fear in the night.
\bibverse{9} King Solomon made himself a carriage of the wood of
Lebanon. \bibverse{10} He made its pillars of silver, its bottom of
gold, its seat of purple, the middle of it being paved with love, from
the daughters of Jerusalem. \bibverse{11} Go out, you daughters of Zion,
and see king Solomon, with the crown with which his mother has crowned
him, in the day of his weddings, in the day of the gladness of his
heart.
\hypertarget{section-3}{%
\section{4}\label{section-3}}
Lover
\bibverse{1} Behold, you are beautiful, my love. Behold, you are
beautiful. Your eyes are like doves behind your veil. Your hair is as a
flock of goats, that descend from Mount Gilead. \bibverse{2} Your teeth
are like a newly shorn flock, which have come up from the washing, where
every one of them has twins. None is bereaved among them. \bibverse{3}
Your lips are like scarlet thread. Your mouth is lovely. Your temples
are like a piece of a pomegranate behind your veil. \bibverse{4} Your
neck is like David's tower built for an armory, on which a thousand
shields hang, all the shields of the mighty men. \bibverse{5} Your two
breasts are like two fawns that are twins of a roe, which feed among the
lilies. \bibverse{6} Until the day is cool, and the shadows flee away, I
will go to the mountain of myrrh, to the hill of frankincense.
\bibverse{7} You are all beautiful, my love. There is no spot in you.
\bibverse{8} Come with me from Lebanon, my bride, with me from Lebanon.
Look from the top of Amana, from the top of Senir and Hermon, from the
lions' dens, from the mountains of the leopards. \bibverse{9} You have
ravished my heart, my sister, my bride. You have ravished my heart with
one of your eyes, with one chain of your neck. \bibverse{10} How
beautiful is your love, my sister, my bride! How much better is your
love than wine, the fragrance of your perfumes than all kinds of spices!
\bibverse{11} Your lips, my bride, drip like the honeycomb. Honey and
milk are under your tongue. The smell of your garments is like the smell
of Lebanon. \bibverse{12} My sister, my bride, is a locked up garden; a
locked up spring, a sealed fountain. \bibverse{13} Your shoots are an
orchard of pomegranates, with precious fruits, henna with spikenard
plants, \bibverse{14} spikenard and saffron, calamus and cinnamon, with
every kind of incense tree; myrrh and aloes, with all the best spices,
\bibverse{15} a fountain of gardens, a well of living waters, flowing
streams from Lebanon. Beloved \bibverse{16} Awake, north wind, and come,
you south! Blow on my garden, that its spices may flow out. Let my
beloved come into his garden, and taste his precious fruits.
\hypertarget{section-4}{%
\section{5}\label{section-4}}
Lover \bibverse{1} I have come into my garden, my sister, my bride. I
have gathered my myrrh with my spice; I have eaten my honeycomb with my
honey; I have drunk my wine with my milk. Friends Eat, friends! Drink,
yes, drink abundantly, beloved. Beloved \bibverse{2} I was asleep, but
my heart was awake. It is the voice of my beloved who knocks: ``Open to
me, my sister, my love, my dove, my undefiled; for my head is filled
with dew, and my hair with the dampness of the night.'' \bibverse{3} I
have taken off my robe. Indeed, must I put it on? I have washed my feet.
Indeed, must I soil them? \bibverse{4} My beloved thrust his hand in
through the latch opening. My heart pounded for him. \bibverse{5} I rose
up to open for my beloved. My hands dripped with myrrh, my fingers with
liquid myrrh, on the handles of the lock. \bibverse{6} I opened to my
beloved; but my beloved left, and had gone away. My heart went out when
he spoke. I looked for him, but I didn't find him. I called him, but he
didn't answer. \bibverse{7} The watchmen who go about the city found me.
They beat me. They bruised me. The keepers of the walls took my cloak
away from me. \bibverse{8} I adjure you, daughters of Jerusalem, If you
find my beloved, that you tell him that I am faint with love. Friends
\bibverse{9} How is your beloved better than another beloved, you
fairest among women? How is your beloved better than another beloved,
that you do so adjure us? Beloved \bibverse{10} My beloved is white and
ruddy. The best among ten thousand. \bibverse{11} His head is like the
purest gold. His hair is bushy, black as a raven. \bibverse{12} His eyes
are like doves beside the water brooks, washed with milk, mounted like
jewels. \bibverse{13} His cheeks are like a bed of spices with towers of
perfumes. His lips are like lilies, dropping liquid myrrh. \bibverse{14}
His hands are like rings of gold set with beryl. His body is like ivory
work overlaid with sapphires. \bibverse{15} His legs are like pillars of
marble set on sockets of fine gold. His appearance is like Lebanon,
excellent as the cedars. \bibverse{16} His mouth is sweetness; yes, he
is altogether lovely. This is my beloved, and this is my friend,
daughters of Jerusalem.
\hypertarget{section-5}{%
\section{6}\label{section-5}}
Friends \bibverse{1} Where has your beloved gone, you fairest among
women? Where has your beloved turned, that we may seek him with you?
Beloved \bibverse{2} My beloved has gone down to his garden, to the beds
of spices, to pasture his flock in the gardens, and to gather lilies.
\bibverse{3} I am my beloved's, and my beloved is mine. He browses among
the lilies. Lover \bibverse{4} You are beautiful, my love, as Tirzah,
lovely as Jerusalem, awesome as an army with banners. \bibverse{5} Turn
away your eyes from me, for they have overcome me. Your hair is like a
flock of goats, that lie along the side of Gilead. \bibverse{6} Your
teeth are like a flock of ewes, which have come up from the washing, of
which every one has twins; not one is bereaved among them. \bibverse{7}
Your temples are like a piece of a pomegranate behind your veil.
\bibverse{8} There are sixty queens, eighty concubines, and virgins
without number. \bibverse{9} My dove, my perfect one, is unique. She is
her mother's only daughter. She is the favorite one of her who bore her.
The daughters saw her, and called her blessed. The queens and the
concubines saw her, and they praised her. \bibverse{10} Who is she who
looks out as the morning, beautiful as the moon, clear as the sun, and
awesome as an army with banners? \bibverse{11} I went down into the nut
tree grove, to see the green plants of the valley, to see whether the
vine budded, and the pomegranates were in flower. \bibverse{12} Without
realizing it, my desire set me with my royal people's chariots. Friends
\bibverse{13} Return, return, Shulammite! Return, return, that we may
gaze at you. Lover Why do you desire to gaze at the Shulammite, as at
the dance of Mahanaim?
\hypertarget{section-6}{%
\section{7}\label{section-6}}
\bibverse{1} How beautiful are your feet in sandals, prince's daughter!
Your rounded thighs are like jewels, the work of the hands of a skillful
workman. \bibverse{2} Your body is like a round goblet, no mixed wine is
wanting. Your waist is like a heap of wheat, set about with lilies.
\bibverse{3} Your two breasts are like two fawns, that are twins of a
roe. \bibverse{4} Your neck is like an ivory tower. Your eyes are like
the pools in Heshbon by the gate of Bathrabbim. Your nose is like the
tower of Lebanon which looks toward Damascus. \bibverse{5} Your head on
you is like Carmel. The hair of your head like purple. The king is held
captive in its tresses. \bibverse{6} How beautiful and how pleasant you
are, love, for delights! \bibverse{7} This, your stature, is like a palm
tree, your breasts like its fruit. \bibverse{8} I said, ``I will climb
up into the palm tree. I will take hold of its fruit.'' Let your breasts
be like clusters of the vine, the smell of your breath like apples.
\bibverse{9} Your mouth is like the best wine, that goes down smoothly
for my beloved, gliding through the lips of those who are asleep.
Beloved \bibverse{10} I am my beloved's. His desire is toward me.
\bibverse{11} Come, my beloved! Let's go out into the field. Let's lodge
in the villages. \bibverse{12} Let's go early up to the vineyards. Let's
see whether the vine has budded, its blossom is open, and the
pomegranates are in flower. There I will give you my love. \bibverse{13}
The mandrakes produce fragrance. At our doors are all kinds of precious
fruits, new and old, which I have stored up for you, my beloved.
\hypertarget{section-7}{%
\section{8}\label{section-7}}
\bibverse{1} Oh that you were like my brother, who nursed from the
breasts of my mother! If I found you outside, I would kiss you; yes, and
no one would despise me. \bibverse{2} I would lead you, bringing you
into the house of my mother, who would instruct me. I would have you
drink spiced wine, of the juice of my pomegranate. \bibverse{3} His left
hand would be under my head. His right hand would embrace me.
\bibverse{4} I adjure you, daughters of Jerusalem, that you not stir up,
nor awaken love, until it so desires. Friends \bibverse{5} Who is this
who comes up from the wilderness, leaning on her beloved? Beloved Under
the apple tree I awakened you. There your mother conceived you. There
she was in labor and bore you. \bibverse{6} Set me as a seal on your
heart, as a seal on your arm; for love is strong as death. Jealousy is
as cruel as Sheol.+ 8:6 Sheol is the place of the dead. Its flashes are
flashes of fire, a very flame of Yahweh.+ 8:6 ``Yahweh'' is God's proper
Name, sometimes rendered ``LORD'' (all caps) in other translations.
\bibverse{7} Many waters can't quench love, neither can floods drown it.
If a man would give all the wealth of his house for love, he would be
utterly scorned. Brothers \bibverse{8} We have a little sister. She has
no breasts. What shall we do for our sister in the day when she is to be
spoken for? \bibverse{9} If she is a wall, we will build on her a turret
of silver. If she is a door, we will enclose her with boards of cedar.
Beloved \bibverse{10} I am a wall, and my breasts like towers, then I
was in his eyes like one who found peace. \bibverse{11} Solomon had a
vineyard at Baal Hamon. He leased out the vineyard to keepers. Each was
to bring a thousand shekels+ 8:11 A shekel is about 10 grams or about
0.35 ounces, so 1000 shekels is about 10 kilograms or about 22 pounds.
of silver for its fruit. \bibverse{12} My own vineyard is before me. The
thousand are for you, Solomon, two hundred for those who tend its fruit.
Lover \bibverse{13} You who dwell in the gardens, with friends in
attendance, let me hear your voice! Beloved \bibverse{14} Come away, my
beloved! Be like a gazelle or a young stag on the mountains of spices!
\section{Project Management}
\subsection{Work Breakdown Structure}
The project was broken down into sub-tasks, and the time to complete each sub-task was estimated in days, as shown in Table \ref{tab:work_breakdown_structure}. The \emph{work breakdown structure} has been designed to keep the project on track while remaining flexible enough to allow for any future adjustments needed to complete the project successfully.
\begin{table}[H]
\scriptsize
% Set row height
\renewcommand{\arraystretch}{1.125}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%% HELPER FUNCTIONS %%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Set current date [YYYY-MM-DD]
\newcommand{\setCurrDate}[3]{\setdatenumber{#1}{#2}{#3}}
% Custom date format
\def\datedate{\thedatemonth/\thedateday/\thedateyear}
%\def\datedate{\thedateday-\thedatemonth-\thedateyear}
% Takes number of days as an argument and prints "arg1 & DATE & DATE+arg1"
\newcommand{\TEdate}[1]{
\setdatebynumber{\thedatenumber}
\multicolumn{1}{c}{#1} &
\datedate &
\addtocounter{datenumber}{#1} \setdatebynumber{\thedatenumber}
\datedate
}
% Stuff for numbering table items
\newcounter{TableEntryID}
\newcounter{SubTableEntryID}
\setcounter{SubTableEntryID}{0}
\setcounter{TableEntryID}{0}
\newcommand\showTE{\setcounter{SubTableEntryID}{0}\stepcounter{TableEntryID}\theTableEntryID.\theSubTableEntryID \ }
\newcommand\showSubTE{\stepcounter{SubTableEntryID}\theTableEntryID.\theSubTableEntryID \ }
% Stuff for automating table rows
\newcommand{\tableEntry}[1]{\hline \multirow{2}{*}{\showTE #1}}
\newcommand{\subTableEntry}[2]{& \showSubTE #1 & \TEdate{#2} \\ \cline{2-5}}
\newcommand{\initialTableEntry}[2]{&0.1 #1 & \TEdate{#2} \\ \cline{2-5}}
\newcommand{\finalTableEntry}[2]{\hline &\showTE #1 & \TEdate{#2} \\ \hline}
\vspace{10pt}
\caption{Work breakdown structure of the project}
\label{tab:work_breakdown_structure}
\begin{center}
\begin{tabular}{|l|p{30em}|p{3.5em}|r|r|}
% Table HEAD
\hline
\multicolumn{2}{|l|}{\multirow{3}{9cm}{\textbf{ramen: Design and Development of a Raft Consensus Algorithm Coupled With an IEEE 802.11 Based Mesh Network for Embedded Systems}}} & \multicolumn{3}{c|}{Dates and Duration} \\ \cline{3-5}
\multicolumn{2}{|l|}{ } & Duration (Days) & \multicolumn{2}{c|}{Planned Dates} \\ \cline{3-5}
\multicolumn{2}{|l|}{ } & & \multicolumn{1}{c}{Start} & \multicolumn{1}{|c|}{End} \\ \hline
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%% ACTUAL TABLE DATA STARTS HERE %%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\setCurrDate{2020}{09}{06}
\initialTableEntry{Begin Project}{1}
\tableEntry{Background Research}
\subTableEntry{Research on existing problems in IoT devices}{3}
\subTableEntry{Research on existing problems in embedded devices}{3}
\subTableEntry{Research on currently existing solutions}{3}
\subTableEntry{Identifying technical \& non-technical constraints}{4}
\subTableEntry{Revise problem statement}{1}
\tableEntry{Generate Concepts}
\subTableEntry{Functional decomposition of the project}{1}
\subTableEntry{Research on literature for similar solutions}{5}
\subTableEntry{Experimenting with existing consensus and mesh networking protocols}{3}
\subTableEntry{Consensus protocol, network topology, and microprocessor selection}{2}
\tableEntry{Begin Detailed Design}
\subTableEntry{Perform detailed analysis of the concepts}{2}
\subTableEntry{Perform detailed analysis of the available components}{5}
\subTableEntry{Select components}{2}
\subTableEntry{Perform simulations with consensus algorithms}{5}
\subTableEntry{Perform simulations with WiFi chips}{5}
\subTableEntry{Create workflow diagrams for the code}{2}
\tableEntry{Build Prototype}
\subTableEntry{Create a GitHub repository}{1}
\subTableEntry{Code a mesh network for ESP8266 chip}{20}
\subTableEntry{Adapt the Raft consensus algorithm for mesh networks}{25}
\subTableEntry{Code an adapter for our Raft implementation for ESP8266 chip specifically}{25}
\subTableEntry{Ensure the functionality of connection between the Raft consensus algorithm implementation and the mesh networking adapter}{5}
\subTableEntry{Write tests for the code}{5}
\subTableEntry{CAD drawings for the enclosure}{4}
\subTableEntry{Design the PCB}{10}
\subTableEntry{Order the PCB for fabrication}{2}
\subTableEntry{Purchase the PCB components}{5}
\subTableEntry{Assemble the PCB}{3}
\subTableEntry{3D printing the enclosure}{2}
\subTableEntry{Assembling the prototype}{3}
\subTableEntry{Upload the code to the prototype}{2}
\subTableEntry{Check the functionality of the prototype and the code}{5}
\tableEntry{Test Prototype}
\subTableEntry{Develop testing protocol}{7}
\subTableEntry{Perform tests}{7}
\tableEntry{Documentation and Reporting}
\subTableEntry{Generating documentation from the code}{2}
\setCurrDate{2020}{10}{01}\subTableEntry{Preparation of the Intermediary Report I}{10}
\setCurrDate{2020}{11}{20}\subTableEntry{Preparation of the Final Proposal}{10}
\subTableEntry{Preparation of the Proposal Presentation}{8}
\setCurrDate{2021}{3}{5}\subTableEntry{Preparation of the Intermediary Report II}{10}
\setCurrDate{2021}{4}{20}\subTableEntry{Preparation of the Final Poster}{8}
\subTableEntry{Preparation of the Final Presentation}{10}
\finalTableEntry{End Project}{1}
\end{tabular}
\end{center}
\end{table}
\newpage
\subsection{Design Structure Matrix}
To complement the \emph{work breakdown structure}, a \emph{design structure matrix}, shown in Table \ref{tab:design_structure_matrix}, was created. The \emph{design structure matrix} is meant to organize our tasks and streamline our workflow.
\begin{table}[H]
\scriptsize
\centering
\renewcommand{\arraystretch}{1.3}
\vspace{10pt}
\caption{Design structure matrix of the project}
\label{tab:design_structure_matrix}
\begin{tabular}{r|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\cline{2-16}
& & A & B & C & D & E & F & G & H & I & J & K & L & M & N \\ \cline{2-16}
Begin Project & A & A & & & & & & & & & & & & & \\ \cline{2-16}
Background Research & B & X & B & & & & & & & & & & & & \\ \cline{2-16}
Consensus Algorithm Selection & C & & X & C & & & & & & & & & & & \\ \cline{2-16}
Network Topology Selection & D & & X & & D & & & & & & & & & & \\ \cline{2-16}
Concept Generation & E & & & X & X & E & & & & & & & & & \\ \cline{2-16}
Detailed Design & F & & & X & X & X & F & & & & & & & & \\ \cline{2-16}
Simulation & G & & & & & & X & G & & & & & & & \\ \cline{2-16}
Finalize Design & H & & & & & & X & X & H & & & & & & \\ \cline{2-16}
Coding the software & I & & & X & X & X & & X & X & I & & & & & \\ \cline{2-16}
CAD Drawings & J & & & & & & & X & X & & J & & & & \\ \cline{2-16}
Purchase Components & K & & & & & & & & X & & X & K & & & \\ \cline{2-16}
Manufacture Components & L & & & & & & & & & & X & X & L & & \\ \cline{2-16}
Assembly and Testing & M & & & & & & & & X & X & & & X & M & \\ \cline{2-16}
Finish Project & N & & & & & & & & & & & & & X & N \\
\cline{2-16}
\end{tabular}
\end{table}
\subsection{Critical Path}
The \emph{critical path method} was used to identify the bottlenecks in the project. The duration of each project component was calculated in days and placed into the \emph{critical path} graph shown in Figure \ref{fig:critical_path}. After our analysis, we determined that the critical path for our project is to implement the base software library, explore the implementation of a mesh network, adapt our library to interface with the mesh network, and finally test our implementation.
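The arithmetic behind this conclusion can be made explicit with a minimal Python sketch (illustrative only; the task names, durations, and edges below are read off Figure \ref{fig:critical_path}) that computes the longest path through the task graph:
\begin{verbatim}
from functools import lru_cache

# Task durations in days, taken from the critical path graph nodes.
duration = {'start': 0, 'software_1': 20, 'software_2': 20,
            'software_3': 10, 'software_4': 25, 'software_5': 25,
            'software_6': 25, 'hardware_1': 15, 'hardware_2': 10,
            'hardware_3': 10, 'hardware_4': 10, 'hardware_5': 15}

# Successor lists mirror the arrows in the figure.
successors = {'start': ['software_1', 'hardware_1'],
              'software_1': ['software_2', 'software_4'],
              'software_4': ['software_5'], 'software_5': ['software_3'],
              'software_2': ['software_3'], 'software_3': ['software_6'],
              'hardware_1': ['hardware_2', 'hardware_4'],
              'hardware_2': ['hardware_3', 'software_2'],
              'hardware_3': ['hardware_5'],
              'hardware_4': ['hardware_5', 'hardware_3'],
              'hardware_5': ['software_6'], 'software_6': []}

@lru_cache(maxsize=None)
def longest_path(node):
    # Longest total duration of any path starting at `node`, inclusive.
    return duration[node] + max((longest_path(s) for s in successors[node]),
                                default=0)

print(longest_path('start'))  # 105 days, via the software branch
\end{verbatim}
Summing the highlighted tasks gives $20 + 25 + 25 + 10 + 25 = 105$ days, so the critical path sets a lower bound of 105 days on the overall schedule.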
\begin{figure}[H]
\centering
\resizebox{0.80\pdfpagewidth}{!}{
\begin{tikzpicture}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%% HELPER FUNCTIONS %%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Custom color
\definecolor{airforceblue}{rgb}{0.36, 0.54, 0.66}
% Function for generating nodes in the graph
\newcommand{\pathNode}[5][black]{
\scriptsize
\node(#2)[shape=rectangle][#3] {
\begin{tcolorbox}[
rounded corners,
colback=airforceblue!35,
colframe=#1!80,
arc=1.5mm,
box align=center,
halign=center,
valign=center,
text width=2cm,
left=0.5mm,
right=0.5mm,
top=0.5mm,
bottom=0.5mm,
title = {\centering\makebox[\linewidth][c]{\color{white}#5 Days}}
]
\color{black}#4
\end{tcolorbox} \\
};
}
% Function for generating a terminal node (like start and end) in the graph
\newcommand{\terminalNode}[4][black]{
\scriptsize
\node(#2)[circle, draw=#1!80, fill=airforceblue!35, very thick, minimum size=7mm][#3]{#4};
}
% Function for drawing the arrows
\newcommand{\pathArrow}[2]{
\draw[->, very thick] (#1) -- (#2);
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%% ACTUAL GRAPH DATA STARTS HERE %%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Nodes
\terminalNode[red]{start}{}{Capstone Project}
\pathNode[red]{software_1}{above right = of start, yshift=1cm, xshift=1cm}{Coding a software library}{20}
\pathNode[red]{software_4}{above right = of software_1, yshift=-1.25cm}{Creating a mesh network}{25}
\pathNode[red]{software_5}{right = of software_4}{Adapting Raft consensus to mesh the network}{25}
\pathNode{software_2}{below right = of software_1, yshift=1.25cm, xshift=2cm}{Creating an adapter code for ESP8266 specifically}{20}
\pathNode[red]{software_3}{below right = of software_5, yshift=1cm}{Testing the code in a simulation}{10}
\pathNode{hardware_1}{below right = of start, xshift=1cm}{Building a Hardware Prototype}{15}
\pathNode{hardware_4}{below right = of hardware_1, yshift=1.25cm}{Prototype enclosure design}{10}
\pathNode{hardware_2}{above right = of hardware_1, yshift=-1.25cm}{Simulating WiFi radios}{10}
\pathNode{hardware_3}{right = of hardware_2}{PCB Design}{10}
\pathNode{hardware_5}{right = of hardware_4}{Manufacturing Components}{15}
\pathNode[red]{software_6}{right = of hardware_3, xshift=2.25cm}{Testing on the hardware}{25}
% Arrows
\pathArrow{start.east}{software_1.west}
\pathArrow{start.east}{hardware_1.west}
\pathArrow{software_1.east}{software_2.west}
\pathArrow{software_1.east}{software_4.west}
\pathArrow{software_4.east}{software_5.west}
\pathArrow{software_2.east}{software_3.west}
\pathArrow{software_5.east}{software_3.west}
\pathArrow{software_3.south}{software_6.west}
%\pathArrow{software_5.south}{software_2.north}
\pathArrow{hardware_1.east}{hardware_4.west}
\pathArrow{hardware_1.east}{hardware_2.west}
\pathArrow{hardware_2.east}{hardware_3.west}
\pathArrow{hardware_3.south}{hardware_5.north}
\pathArrow{hardware_4.east}{hardware_5.west}
\pathArrow{hardware_4.east}{hardware_3.west}
\pathArrow{hardware_5.east}{software_6.west}
\pathArrow{hardware_2.north}{software_2.west}
\end{tikzpicture}
}
\caption{Critical path of the project tasks}
\label{fig:critical_path}
\end{figure}
\newpage
\subsection{Gantt Chart}
The \emph{Gantt Chart} shown in Figure \ref{fig:gantt} was used to visualize the tasks and their duration. The \emph{Gantt Chart} is meant to track our progress on the sub-tasks and overall progress on the project.
\vspace{5mm}
\ganttset{calendar week text = \small {\startday}}
\hvFloat[rotAngle=0]{figure}{
\resizebox{0.8\pdfpagewidth}{!}{
\begin{ganttchart}[
newline shortcut=true,
bar label node/.append style={align=right},
vgrid = {*{6}{dotted}, *{1}{dashed}},
hgrid,x unit=1mm,
hgrid style/.style={draw=black!5, line width=.75pt},
time slot format=little-endian,
linespacing=0.5]
{01-09-2020}{15-06-2021}
\gantttitlecalendar{year, month=shortname, week=4}\\
%%%%%%%%%%%%
\ganttgroup{Background Research}{07-09-2020}{21-09-2020} \\
\ganttbar{Research on existing problems}{7-9-2020}{13-9-2020} \\
\ganttbar{Research on currently existing solutions}{13-9-2020}{16-9-2020} \\
\ganttbar{Identifying technical \& non-technical constraints}{16-9-2020}{21-9-2020} \\
%%%%%%%%%%%%
\ganttgroup{Generate Concepts}{21-09-2020}{02-10-2020} \\
\ganttbar{Functional decomposition of the project}{21-9-2020}{22-9-2020} \\
\ganttbar{Research on literature for similar solutions}{22-9-2020}{27-9-2020} \\
\ganttbar{Experimenting with existing consensus \ganttalignnewline and mesh networking protocols}{27-9-2020}{30-9-2020} \\
\ganttbar{Consensus protocol, network topology, \ganttalignnewline and microprocessor selection}{30-9-2020}{02-10-2020} \\
%%%%%%%%%%%%
\ganttgroup{Begin Detailed Design}{02-10-2020}{23-10-2020} \\
\ganttbar{Perform detailed analysis of the concepts}{02-10-2020}{09-10-2020} \\
\ganttbar{Select components}{09-10-2020}{11-10-2020} \\
\ganttbar{Perform simulations}{11-10-2020}{23-10-2020} \\
%%%%%%%%%%%%
\ganttgroup{Build Prototype}{23-10-2020}{17-02-2021} \\
\ganttbar{Code a mesh network for ESP8266 chip}{23-10-2020}{13-11-2020} \\
\ganttbar{Adapt the Raft consensus algorithm for mesh networks}{13-11-2020}{8-12-2020} \\
\ganttbar{Code an adapter for ESP8266 specifically}{8-12-2020}{12-1-2021} \\
\ganttbar{PCB design and CAD drawings}{12-1-2021}{28-1-2021} \\
\ganttbar{Purchase the PCB components}{28-1-2021}{2-2-2021} \\
\ganttbar{Assembling the prototype}{2-2-2021}{17-2-2021} \\
%%%%%%%%%%%%
\ganttgroup{Test Prototype}{17-02-2021}{03-03-2021} \\
\ganttbar{Develop testing protocol}{17-2-2021}{24-2-2021} \\
\ganttbar{Perform tests}{24-2-2021}{3-3-2021} \\
%%%%%%%%%%%%
\ganttgroup{Documentation and Reporting}{03-03-2021}{08-05-2021} \\
\ganttbar{Generating documentation from the code}{3-3-2021}{5-3-2021} \\
\ganttbar{Preparing Reports, Presentations, and Posters}{5-3-2021}{08-05-2021} \\
%%%%%%%%%%%%
\end{ganttchart}
}
}[The Gantt Chart of the project timeline]{The Gantt Chart of the project timeline}{fig:gantt}
"alphanum_fraction": 0.5569176883,
"avg_line_length": 53.53125,
"ext": "tex",
"hexsha": "9041decc4af3dc4a96c4d6cfe3b7aced2cb47812",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "9fb42ea76cc570da4b8a6e8b17121a764169e899",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "A5-015/capstone-reports",
"max_forks_repo_path": "intermediate-report/project-management.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9fb42ea76cc570da4b8a6e8b17121a764169e899",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "A5-015/capstone-reports",
"max_issues_repo_path": "intermediate-report/project-management.tex",
"max_line_length": 492,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "9fb42ea76cc570da4b8a6e8b17121a764169e899",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "A5-015/capstone-reports",
"max_stars_repo_path": "intermediate-report/project-management.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4730,
"size": 17130
} |
\section{Errors}
The error boundaries for the classifiers were obtained
for both the Chernoff and Bhattacharyya Bounds \cite{duda2012pattern}.
For the Chernoff Bounds, the following equations were employed.
\begin{align*}
\min[a,b] & \leq a^\beta b^{1-\beta} \quad \text{for } a,b \geq 0 \text{ and } 0 \leq \beta \leq 1 \\
P(\text{error}) & \leq P^\beta (\omega_1) P^{1-\beta} (\omega_2) \int p^\beta (x \mid \omega_1)\, p^{1-\beta}(x \mid \omega_2)\, dx \quad \text{for } 0 \leq \beta \leq 1\\
\int p^\beta &(x \mid \omega_1)\, p^{1-\beta}(x \mid \omega_2)\, dx = \exp(-k(\beta)) \\
k(\beta) &= \frac{\beta(1-\beta)}{2} (\mu_2 - \mu_1)^T \left[ \beta \Sigma_1 + (1-\beta)\Sigma_2 \right]^{-1} (\mu_2 - \mu_1) \\
& \quad + \frac{1}{2} \ln \frac{\det\left( \beta \Sigma_1 + (1-\beta)\Sigma_2 \right)}{\det(\Sigma_1)^\beta \det(\Sigma_2)^{1-\beta}}
\end{align*}
Likewise, for the Bhattacharyya Bounds (the $\beta = 1/2$ special case of the Chernoff Bounds), the following equations were utilized:
\begin{align*}
P(\text{error}) & \leq \sqrt{P(\omega_1) P(\omega_2)} \int \sqrt{p(x \mid \omega_1)\, p(x \mid \omega_2)}\, dx\\
&= \sqrt{P(\omega_1) P(\omega_2)}\, \exp(-k(1/2))\\
k(1/2) &= \frac{1}{8}(\mu_2 - \mu_1)^T \left[ \frac{\Sigma_1 + \Sigma_2}{2} \right]^{-1} (\mu_2 - \mu_1) \\
& \quad + \frac{1}{2} \ln \frac{\det\left( \frac{\Sigma_1 + \Sigma_2}{2} \right)}{\sqrt{\det(\Sigma_1)\det(\Sigma_2)}}
\end{align*}
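For reference, the bounds can be evaluated with a short \texttt{numpy} sketch (illustrative only; this is not the code used to produce the results below):
\begin{verbatim}
import numpy as np

def k(beta, mu1, mu2, s1, s2):
    # k(beta) for two Gaussian class-conditional densities.
    diff = mu2 - mu1
    s = beta * s1 + (1 - beta) * s2
    return (beta * (1 - beta) / 2) * diff @ np.linalg.inv(s) @ diff + \
        0.5 * np.log(np.linalg.det(s) /
                     (np.linalg.det(s1)**beta * np.linalg.det(s2)**(1 - beta)))

def chernoff_bound(mu1, mu2, s1, s2, p1, p2):
    # Minimize the bound over a grid of beta values in (0, 1).
    return min(p1**b * p2**(1 - b) * np.exp(-k(b, mu1, mu2, s1, s2))
               for b in np.linspace(0.01, 0.99, 99))

def bhattacharyya_bound(mu1, mu2, s1, s2, p1, p2):
    # The beta = 1/2 special case of the Chernoff bound.
    return np.sqrt(p1 * p2) * np.exp(-k(0.5, mu1, mu2, s1, s2))
\end{verbatim}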
\subsection{Case A}
For this case, the Chernoff bounds were determined to be
\begin{table}[htb]
\begin{center}
\begin{tabular}{ll}
$P_{12}$ & 0.3712\%\\
$P_{23}$ & 11.3240\%\\
$P_{31}$ & 0.0016\%
\end{tabular}
\end{center}
\caption{Case A - Chernoff Bound}
\label{tab: case a chernoff}
\end{table}
Likewise, the Bhattacharyya bounds were determined to be
\begin{table}[htb]
\begin{center}
\begin{tabular}{ll}
$P_{12}$ & 0.3712\%\\
$P_{23}$ & 11.3240\%\\
$P_{31}$ & 0.0016\%
\end{tabular}
\end{center}
\caption{Case A - Bhattacharyya Bound}
\label{tab: case a bhatt}
\end{table}
\begin{figure}
\centering
\includegraphics{errorCaseA}
\caption{Case A - Error bound}
\label{case a error}
\end{figure}
The experimental error was also examined for the data set.
\begin{table}[htb]
\begin{center}
\begin{tabular}{ll}
$P_{12}$ & 0.000\%\\
$P_{23}$ & 4.000\%\\
$P_{31}$ & 0.000\%
\end{tabular}
\end{center}
\caption{Case A - Experimental error}
\label{tab: case a experiment}
\end{table}
% errorExp =
%
% 0 0.0400 0
\newpage
\subsection{Case B}
For this case, the Chernoff bounds were determined to be
\begin{table}[htb]
\begin{center}
\begin{tabular}{ll}
$P_{12}$ & 0.0049\%\\
$P_{23}$ & 6.3073\%\\
$P_{31}$ & 0.0000\%
\end{tabular}
\end{center}
\caption{Case B - Chernoff Bound}
\label{tab: case b chernoff}
\end{table}
Likewise, the Bhattacharyya bounds were determined to be
\begin{table}[htb!]
\begin{center}
\begin{tabular}{ll}
$P_{12}$ & 0.0049\%\\
$P_{23}$ & 6.3073\%\\
$P_{31}$ & 0.0000\%
\end{tabular}
\end{center}
\caption{Case B - Bhattacharyya Bound}
\label{tab: case b bhatt}
\end{table}
\begin{figure}
\centering
\includegraphics{errorCaseB}
\caption{Case B - Error bound}
\label{case b error}
\end{figure}
The experimental error was also examined for the data set.
\begin{table}[htb]
\begin{center}
\begin{tabular}{ll}
$P_{12}$ & 0.000\%\\
$P_{23}$ & 8.000\%\\
$P_{31}$ & 0.000\%
\end{tabular}
\end{center}
\caption{Case B - Experimental error}
\label{tab: case b experiment}
\end{table}
% errorExp =
%
% 0 0.0800 0
\newpage
\subsection{Case C}
For this case, the Chernoff bounds were determined to be
\begin{table}[htb!]
\begin{center}
\begin{tabular}{ll}
$P_{12}$ & 0.0012\%\\
$P_{23}$ & 7.9909\%\\
$P_{31}$ & 0.0000\%
\end{tabular}
\end{center}
\caption{Case C - Chernoff Bound}
\label{tab: case c chernoff}
\end{table}
Likewise, the Bhattacharyya bounds were determined to be
\begin{table}[htb!]
\begin{center}
\begin{tabular}{ll}
$P_{12}$ & 0.0074\%\\
$P_{23}$ & 8.0726\%\\
$P_{31}$ & 0.0000\%
\end{tabular}
\end{center}
\caption{Case C - Bhattacharyya Bound}
\label{tab: case c bhatt}
\end{table}
\begin{figure}
\centering
\includegraphics{errorCaseC}
\caption{Case C - Error bound}
\label{case c error}
\end{figure}
The experimental error was also examined for the data set.
\begin{table}[htb]
\begin{center}
\begin{tabular}{ll}
$P_{12}$ & 0.000\%\\
$P_{23}$ & 3.333\%\\
$P_{31}$ & 0.000\%
\end{tabular}
\end{center}
\caption{Case C - Experimental error}
\label{tab: case c experiment}
\end{table}
% errorExp =
%
% 0 0.0333 0
\chapter{Developer Documentation}
\label{ch:impl}
Creation of the program for histopathologic cancer detection is divided into four major parts:
\begin{enumerate}
\itemsep0em
\item Assembling the dataset structure (required as input to a convolutional neural network), image preprocessing (loading, noise removal, normalization, whitening), and data augmentation (expanding the size of the dataset by applying a series of random transformations to each image)
\item Building the convolutional neural networks (a class of deep neural networks used to analyze images) and training them on the data
\item Improving the prediction accuracy of the networks (solving underfitting and overfitting problems) with hyperparameter tuning (choosing optimal parameters for the learning algorithm) and changes to the network architecture
\item Creating a graphical user interface for the program, which allows the user to load a histopathologic slide, select the network to be applied to it, and obtain as output the category to which that slide belongs, with the additional possibility of visualizing network representations (heatmaps of class activations, filters of convolutional layers, and intermediate activations)
\end{enumerate}
In the remainder of this chapter each part will be thoroughly analyzed, and the directory/file structure of the source code will be illustrated, as well as use-case and class diagrams.
System requirements discussed in user documentation (\textcolor{red}{\hyperref[sysreq]{Section 2.1}}) apply to developer documentation as well.
\clearpage
\section{Program Structure}
The Histopathologic Cancer Detection program is divided into six major modules (\textcolor{red}{\autoref{fig:dirdiag}}, \textcolor{red}{\autoref{fig:dirdiag2}}):
\begin{enumerate}
\itemsep 0em
\item Data - includes dataset creation (\textcolor{red}{\hyperref[createdata]{Section 3.4}})
\item Models - includes creation, training and testing of CNNs (\textcolor{red}{\hyperref[cnn]{Section 3.5}\textcolor{black}{,} \\ \hyperref[vgg19]{Section 3.6}})
\item Experiments - includes hyperparameter tuning (\textcolor{red}{\hyperref[exp]{Section 3.7}})
\item Graphical User Interface - includes creation of all application windows and their interconnection (\textcolor{red}{\hyperref[gui]{Section 3.8}})
\item Utilities - includes dataset analysis and visualization and network performance assessment and visualization (\textcolor{red}{\hyperref[utils]{Section 3.9}})
\item Tests - includes unit testing of Data, Models, Experiments and Utilities (\textcolor{red}{\hyperref[tests]{Section 3.11}})
\end{enumerate}
\begin{figure}[h]
\centering
\includegraphics[scale=1.3]{code_structure_1.jpg}
\caption{Diagram of directories and Python scripts of Data, Models, Experiments and Tests modules of Histopathologic Cancer Detection}
\label{fig:dirdiag}
\end{figure}
\clearpage
\begin{figure}[h]
\centering
\includegraphics[scale=1.3]{code_structure_2.jpg}
\caption{Diagram of directories and Python scripts of Tests, Graphical User Interface and Utilities modules of Histopathologic Cancer Detection}
\label{fig:dirdiag2}
\end{figure}
\section{Use-Case Diagram}
One of the main goals of the Histopathologic Cancer Detection program was ease of use, i.e. a straightforward graphical user interface which makes complex operations look quite simple and effortless. Even though extremely advanced algorithms with millions of parameters lie behind the program, the GUI was made in such a way that everyone can use it. The first step is loading the image and selecting the tissue type (breast or colorectal tissue), after which classification is performed. At every step of the way, the current work can be saved, and a new image can be loaded to start the process from scratch. After classification, it is possible to visualize network representations and perform further analysis of the results by visualizing layer activations, network filters, and heatmaps (\textcolor{red}{\autoref{fig:usecase}}).
\begin{figure}[h]
\centering
\includegraphics[scale=1.35]{use_case.jpg}
\caption{Use-Case Diagram of Histopathologic Cancer Detection}
\label{fig:usecase}
\end{figure}
\section{Class Diagrams}
Classes of Histopathologic Cancer Detection can be divided into two main segments: window classes and neural network classes.
The BaseCNN class is the base class of all neural network classes; it contains common attributes, such as the dataset name, network name, and compile parameters, and common methods, such as the creation of data generators and the compilation and training of the network. Neural network classes will be discussed in more detail in \textcolor{red}{\hyperref[cnn]{Section 3.5}, \hyperref[vgg19]{Section 3.6}}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.6]{nets_class_diagram.png}
\caption{Class diagram of neural network classes of Histopathologic Cancer Detection}
\label{fig:class1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.38]{main_class_diagram.png}
\caption{Class diagram of window classes of Histopathologic Cancer Detection}
\label{fig:class2}
\end{figure}
The Window class is the base class of all window classes, and it implements common methods, such as setting up the window size and the central widget. The MainWindow class, on the other hand, is the central point of the GUI, as it defines the window which appears when the program is run, and every other window is invoked from it (\textcolor{red}{\autoref{fig:class2}}). Window classes, along with their attributes and methods, will be discussed in more detail in \textcolor{red}{\hyperref[gui]{Section 3.8}}.
\section{Creation of Datasets}
\label{createdata}
The performance and accuracy of convolutional neural networks rely largely on datasets, i.e. on the quality of the available data, the dataset size, class balance, etc. But before feeding data to the network, if using the \texttt{Keras} API, a certain dataset structure must be followed. More precisely, datasets must have the following structure: train, validation, and test directories, each with a subdirectory for each class. The scripts responsible for the creation of the required directory structure and the distribution of data are:
\begin{itemize}
\itemsep 0em
\item \texttt{break\_his\_dataset\_creation.py},
\item \texttt{nct\_crc\_he\_100k\_dataset\_creation.py}.
\end{itemize}
They work by extracting the datasets downloaded from \cite{breakhis_bib}, \cite{nctcrche100k_bib}, creating the necessary directory tree, and distributing images between the created subdirectories.
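As an illustration, the core of these scripts can be condensed into the following sketch (the split ratios and the function name are illustrative rather than the exact values used):
\begin{lstlisting}[language={Python}, basicstyle=\scriptsize, caption={}, label={}]
import os, random, shutil

def distribute_images(source_dir, target_dir, classes,
                      split=(0.7, 0.15, 0.15)):
    # Create train/validation/test subdirectories per class and copy images.
    random.seed(0)
    for class_name in classes:
        images = os.listdir(os.path.join(source_dir, class_name))
        random.shuffle(images)
        n_train = int(split[0] * len(images))
        n_val = int(split[1] * len(images))
        subsets = {'train': images[:n_train],
                   'validation': images[n_train:n_train + n_val],
                   'test': images[n_train + n_val:]}
        for subset, files in subsets.items():
            subset_dir = os.path.join(target_dir, subset, class_name)
            os.makedirs(subset_dir, exist_ok=True)
            for file_name in files:
                shutil.copy(os.path.join(source_dir, class_name, file_name),
                            subset_dir)
\end{lstlisting}
After executing the scripts, the datasets are ready to be fed into convolutional neural networks (in order to train them), but before that, the neural network architecture has to be built.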
\clearpage
\section{CNNSimple Implementation}
\label{cnn}
The CNNSimple convolutional neural network (\textcolor{red}{\hyperref[src:py1]{Code 3.1}}) was created using the \texttt{Keras} Sequential model in order to classify images from the NCT-CRC-HE-100K dataset, which contains 100,000 images divided into 9 tissue/cancer categories.
The convolutional base of CNNSimple is composed of four blocks of convolutional and max-pooling layers, where the first two blocks have two convolutional and one max-pooling layer, and the last two blocks have three convolutional and one max-pooling layer. Each convolutional layer has a 3$\times$3 convolution window, uses the ReLU activation function, and the number of feature maps (filters) increases exponentially from 32 to 256. Each max-pooling layer has pool size of 2$\times$2.
Classification top of CNNSimple is composed of two fully-connected layers, each followed by a dropout layer, and an output (also fully-connected) layer. Fully-connected layers use ReLU activation function, and number of neurons grows from 512 to 1024. Dropout layers use 50\% dropout rate (fraction of neurons which will be ignored in each passing). Output layer uses softmax activation function, and has nine neurons (one neuron per tissue/cancer subtype output).
\vspace{1mm}
\lstset{caption={CNNSimple network architecture (defined in \texttt{cnn\_simple.py})},label=src:py1}
\begin{lstlisting}[language={Python}, basicstyle=\scriptsize]
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3),
name='block1_conv1'))
model.add(Conv2D(32, (3, 3), activation='relu', name='block1_conv2'))
model.add(MaxPooling2D((2, 2), name='block1_pool'))
model.add(Conv2D(64, (3, 3), activation='relu', name='block2_conv1'))
model.add(Conv2D(64, (3, 3), activation='relu', name='block2_conv2'))
model.add(MaxPooling2D((2, 2), name='block2_pool'))
model.add(Conv2D(128, (3, 3), activation='relu', name='block3_conv1'))
model.add(Conv2D(128, (3, 3), activation='relu', name='block3_conv2'))
model.add(Conv2D(128, (3, 3), activation='relu', name='block3_conv3'))
model.add(MaxPooling2D((2, 2), name='block3_pool'))
model.add(Conv2D(256, (3, 3), activation='relu', name='block4_conv1'))
model.add(Conv2D(256, (3, 3), activation='relu', name='block4_conv2'))
model.add(Conv2D(256, (3, 3), activation='relu', name='block4_conv3'))
model.add(MaxPooling2D((2, 2), name='block4_pool'))
model.add(Flatten(name='flatten'))
model.add(Dense(512, activation='relu', name='dense1'))
model.add(Dropout(0.5, name='dropout1'))
model.add(Dense(1024, activation='relu', name='dense2'))
model.add(Dropout(0.5, name='dropout2'))
model.add(Dense(9, activation='softmax', name='prediction'))
\end{lstlisting}
\section{Transfer Learning}
\label{vgg19}
Although CNNs are a powerful tool for image classification, a large amount of data is required in order to achieve high accuracy. The problem occurs when only a small dataset is available (as is often the case in healthcare, e.g. the BreakHis dataset). In such cases transfer learning can be used: take a model trained on a large dataset and transfer its knowledge to the small dataset, i.e. freeze the convolutional base and only train the classification top of the network. The main idea is that the early layers of the convolutional base learn low-level features applicable across all images, such as edges and patterns.
\subsection{VGG19Simple Implementation}
The VGG19Simple convolutional neural network (\textcolor{red}{\hyperref[src:py2]{Code 3.2}}) was created using the \texttt{Keras} functional API in order to classify images from the BreakHis dataset, which contains 2,081 images divided into 8 tissue/cancer categories.
The convolutional base of VGG19Simple is the pre-built VGG19 \cite{simonyan2014very} network (without its top classification part) pre-trained on the ImageNet \cite{deng2009imagenet} dataset, i.e. using the ImageNet weights.
Classification top of VGG19Simple is composed of two fully-connected layers, each followed by a dropout layer, and an output (also fully-connected) layer. Fully-connected layers use the ReLU activation function, and the number of neurons grows from 512 to 1024. Dropout layers use a 50\% dropout rate (fraction of neurons which will be ignored in each passing). The output layer uses softmax activation function, and has eight neurons (one neuron per tissue/cancer subtype output).
\vspace{3mm}
\lstset{caption={VGG19Simple network architecture (defined in \texttt{vgg19\_simple.py})},label=src:py2}
\begin{lstlisting}[language={Python}, basicstyle=\scriptsize]
input = Input((150, 150, 3))
convolutional_base = VGG19(weights='imagenet', include_top=False,
input_tensor=input)
for layer in convolutional_base.layers:
layer.trainable = False
x = Flatten(name='flatten')(convolutional_base.output)
x = Dense(512, activation='relu', name='dense_1')(x)
x = Dropout(0.5, name='dropout_1')(x)
x = Dense(1024, activation='relu', name='dense_2')(x)
x = Dropout(0.5, name='dropout_2')(x)
x = Dense(8, activation='softmax', name='predictions')(x)
model = Model(input, x)
\end{lstlisting}
\section{Experiments and Results}
\label{exp}
The performance of a neural network is determined by how well it generalizes, i.e. how high an accuracy it achieves on previously unseen data (if it performs well on training data but underachieves on test data, the CNN is said to overfit). In order to prevent overfitting, a number of techniques can be used: increasing the size of the dataset, changing the network architecture, or applying hyperparameter tuning, which consists of selecting a set of optimal hyperparameters for the learning algorithm. The selection of such parameters for the networks is done in \texttt{hyperparameter\_tuning.py}. The first step consists of defining the hyperparameter dictionary with the parameters and values to be tested, such as the number of epochs for which the network is to be trained, optimization techniques, etc. The next step consists of training a network with every combination of the defined parameters and values, after which network performances are compared in order to determine the optimal hyperparameter set.
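The tuning loop itself can be pictured as follows; this is a sketch, and the parameter names and values are examples rather than the exact dictionary defined in \texttt{hyperparameter\_tuning.py}.
\begin{lstlisting}[language={Python}, basicstyle=\scriptsize, caption={}, label={}]
import itertools

# Example hyperparameter dictionary; the actual parameters and values differ.
hyperparameters = {'epochs': [35, 90],
                   'optimizer': ['rmsprop', 'adam'],
                   'learning_rate': [1e-4, 4e-5]}

for values in itertools.product(*hyperparameters.values()):
    config = dict(zip(hyperparameters.keys(), values))
    # A network would be built, compiled, and trained with `config` here,
    # and its validation accuracy recorded for later comparison.
    print(config)
\end{lstlisting}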
The CNNSimple network was trained on the NCT-CRC-HE-100K dataset for 35 epochs, using the RMSProp optimizer with a learning rate of 0.00004 and a categorical cross-entropy loss function. Before feeding data to the network, data augmentation (applying random transformations such as translation, rotation, and shear in order to produce more images) was used, and the accuracy metric was employed to assess the performance of the network. CNNSimple achieved 93.89\% validation and 94.21\% test accuracy, and 0.24 validation and 0.06 test loss (\textcolor{red}{\autoref{fig:netsperf}}).
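Expressed as \texttt{Keras} calls, this training configuration corresponds roughly to the following sketch; the generator variables are assumed to come from the data augmentation step.
\begin{lstlisting}[language={Python}, basicstyle=\scriptsize, caption={}, label={}]
from keras.optimizers import RMSprop

model.compile(optimizer=RMSprop(lr=4e-5),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
history = model.fit_generator(train_generator, epochs=35,
                              validation_data=validation_generator)
\end{lstlisting}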
The VGG19Simple network was trained on the BreakHis dataset for 90 epochs, using the RMSProp optimizer with a learning rate of 0.0001 and a categorical cross-entropy loss function, with the same data augmentation and accuracy metric as above. After training the network with a frozen convolutional base, the last two convolutional blocks were unfrozen and the network was trained again using the RMSProp optimizer with a learning rate of 0.00004. VGG19Simple achieved 85.3\% validation and 83.59\% test accuracy, and 0.72 validation and 1.02 test loss (\textcolor{red}{\autoref{fig:netsperf}}).
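The fine-tuning step described above amounts to roughly the following sketch, where \texttt{block4} and \texttt{block5} (the last two convolutional blocks in \texttt{Keras}' VGG19 layer naming) are unfrozen before recompiling with the lower learning rate.
\begin{lstlisting}[language={Python}, basicstyle=\scriptsize, caption={}, label={}]
from keras.optimizers import RMSprop

# Unfreeze only the last two convolutional blocks of the VGG19 base.
for layer in convolutional_base.layers:
    layer.trainable = layer.name.startswith(('block4', 'block5'))

# Recompile with the lower learning rate, then retrain.
model.compile(optimizer=RMSprop(lr=4e-5),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
\end{lstlisting}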
\begin{figure}[h]
\centering
\subfigure[\scriptsize CNNSimple Accuracy]{\label{fig:a}\includegraphics[width=72mm]{cnn_acc.png}}
\subfigure[\scriptsize CNNSimple Loss]{\label{fig:b}\includegraphics[width=72mm]{cnn_loss.png}}
\subfigure[\scriptsize VGG19Simple Accuracy]{\label{fig:vgg19perf}\includegraphics[width=72mm]{vgg19_acc.png}}
\subfigure[\scriptsize VGG19Simple Loss]{\label{fig:b}\includegraphics[width=72mm]{vgg19_loss.png}}
\caption{CNNSimple and VGG19Simple performance on BreakHis and NCT-CRC-HE-100K train and validation datasets (respectively)}
\label{fig:netsperf}
\end{figure}
\begin{figure}[h]
\centering
\begin{minipage}{.5\textwidth}
\centering
\hspace*{-0.7cm}
\vspace*{-0.45cm}
\includegraphics[scale=0.37]{cnn_cf.png}
\captionof{figure}{Confusion Matrix (a summary table of correct and incorrect predictions broken down by each class) of CNNSimple on the NCT-CRC-HE-100K test dataset}
\label{fig:cnncf}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\hspace*{-0.7cm}
\includegraphics[scale=0.37]{vgg19_cf.png}
\captionof{figure}{Confusion Matrix of VGG19Simple on BreakHis test dataset}
\label{fig:vgg19cf}
\end{minipage}
\end{figure}
\clearpage
\section{Graphical User Interface}
\label{gui}
In order to make using the program simple and straightforward, a graphical user interface was created using PyQt5. The following window classes have been constructed:
\begin{itemize}
\itemsep 0em
\item Window class in \texttt{window.py}, used as the base class for every other window, which contains primitive functions for setting up the window, creating the central widget, retranslating text, etc.
\item AboutAuthorWindow class in \texttt{about\_author\_window.py}, which contains two labels (containing image and CV respectively)
\item AboutDatasetsWindow class in \texttt{about\_datasets\_window.py}, which contains two labels (containing dataset sample images and dataset overview numbers respectively)
\item AboutModelsWindow class in \texttt{about\_models\_window.py}, which contains five labels (containing network name, network architecture, accuracy, loss and confusion matrix plots respectively)
\item SimpleWindow class in \texttt{simple\_window.py}, which contains one image label
\item InspectConvWindow class in \texttt{inspect\_conv\_window.py}, which contains three labels (containing convolutional layer text, number text and image respectively), button (with associated action), combo box (containing network layer names) and line edit (for filter/channel selection)
\item MainWindow class in \texttt{main\_window.py}, which connects every other window and provides high-level program functionality
\end{itemize}
In addition, all window-specific information, such as window sizes, positions and names of labels, paths to images, etc., is located in \texttt{config.py}. GUI component definitions are located in \texttt{gui\_components.py}.
\subsection{Main Window}
The MainWindow class is the central part of the GUI, as it defines the window which appears when the program is started. It has a menu bar from which every other window can be reached, and a central widget which contains the input image, tissue-type radio buttons, the classify button, the output class label, and the probabilities plot. The function \emph{classifyButtonEvent}, associated with \emph{classifyButton}, is an integral part of the class, as it loads a network based on the tissue type of the input image, classifies the image, and writes the output to the labels.
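A condensed, hypothetical sketch of that handler is given below; the widget names and class-name constants are illustrative rather than the exact attributes of MainWindow, and the helper \texttt{predict\_image} is sketched in \textcolor{red}{\hyperref[utils]{Section 3.9}}.
\begin{lstlisting}[language={Python}, basicstyle=\scriptsize, caption={}, label={}]
def classifyButtonEvent(self):
    # Choose the model and class names for the selected tissue type
    # (attribute and constant names here are illustrative).
    if self.breast_radio_button.isChecked():
        model_path, class_names = 'vgg19_simple.h5', BREAKHIS_CLASSES
    else:
        model_path, class_names = 'cnn_simple.h5', NCT_CRC_CLASSES
    predicted_class, probabilities = predict_image(self.input_image_path,
                                                   model_path, class_names)
    # Write the result to the output label and refresh the plot.
    self.output_class_label.setText(predicted_class)
    self.plot_probabilities(probabilities)
\end{lstlisting}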
\section{Utilities}
\label{utils}
For the program to work seamlessly, certain utilities needed to be implemented:
\begin{itemize}
\itemsep 0em
\item \texttt{dataset\_overview.py}, used for obtaining basic information about a dataset, such as sample images and image distribution per class
\item \texttt{misc.py}, containing functions for reading file contents, loading images, etc.
\item \texttt{predict\_image.py}, used for predicting the class to which an image belongs (see the sketch after this list)
\item \texttt{save\_model.py}, used for saving neural network information and performance after being trained, such as arguments, architecture, accuracy, loss and confusion matrix plots, filters, etc.
\item \texttt{visualize\_filters.py}, used for visualizing network filter patterns
\item \texttt{visualize\_intermediate\_activations\_and\_heatmaps.py}, used for visualizing intermediate activations of the network, and heatmaps of class activation
\end{itemize}
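As referenced above, a plausible sketch of \texttt{predict\_image.py} is shown below; the function signature and variable names are assumptions, while the $150 \times 150$ input size and the rescaling match the networks described earlier.
\begin{lstlisting}[language={Python}, basicstyle=\scriptsize, caption={}, label={}]
import numpy as np
from keras.models import load_model
from keras.preprocessing import image

def predict_image(image_path, model_path, class_names):
    # Load the trained network and the slide, matching the network input.
    model = load_model(model_path)
    img = image.load_img(image_path, target_size=(150, 150))
    x = image.img_to_array(img) / 255.0  # rescale as during training
    probabilities = model.predict(np.expand_dims(x, axis=0))[0]
    return class_names[int(np.argmax(probabilities))], probabilities
\end{lstlisting}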
\section{Implementing Additional Features}
In addition to the ease of use of the Histopathologic Cancer Detection program, the source code was written in such a way as to make expanding the scope (problem space) of the program with additional features quite simple and fast. If a new tissue type (e.g. lung tissue), along with tissue/cancer subtype classification, were to be added to the program, it would be accomplished in the following four steps:
\begin{enumerate}
\itemsep 0em
\item \textbf{Creating dataset} \\
Obtaining a dataset of the new tissue type, and preparing the dataset to be fed to a \texttt{Keras}-built CNN, which includes creating the appropriate directory structure and distributing the data
\item \textbf{Building CNN} \\
Creating a new convolutional neural network class inherited from BaseCNN by defining data generator transformations, network architecture, etc.
\item \textbf{Fine-tuning CNN} \\
Defining a hyperparameter dictionary and training the neural networks in order to increase classification accuracy and prevent overfitting
\item \textbf{Extending GUI} \\
Adding an additional radio button to the MainWindow class, and extending the action associated with the classify button, as well as the further analysis actions, to use the newly created network
\end{enumerate}
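As a sketch of step 2, a new network class might look as follows; the BaseCNN interface shown here (an \texttt{input\_shape} attribute, a \texttt{num\_classes} attribute, and a \texttt{build\_network} hook) is an assumption about the project's base class, and the layer choices are illustrative only.
\begin{verbatim}
from tensorflow.keras import layers, models

class BaseCNN:                   # stand-in for the project's base class
    input_shape = (64, 64, 3)    # assumed interface, illustrative values
    num_classes = 3

class LungCNN(BaseCNN):
    def build_network(self):
        # A small architecture; real hyperparameters come from
        # fine-tuning (step 3).
        return models.Sequential([
            layers.Conv2D(32, 3, activation="relu",
                          input_shape=self.input_shape),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(self.num_classes, activation="softmax"),
        ])
\end{verbatim}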
\section{Testing}
\label{tests}
Source code for Histopathologic Cancer Detection can be divided into two roughly equal parts: code responsible for CNN-related functionality and code responsible for GUI-related functionality.
The code responsible for CNN-related functionality has been unit-tested using the \texttt{PyTest} testing framework, and has code coverage of over 93\%. Unit tests for the data module can be found in \texttt{test\_data.py}, unit tests for the models and experiments modules can be found in \texttt{test\_models\_experiments.py}, and unit tests for the utilities can be found in \texttt{test\_utils.py}.
The code responsible for GUI-related functionality has been tested manually: numerous histopathologic slides were loaded, classified, and further analyzed in order to inspect program functionality.
\section{External Code}
Main ideas for the advanced use of Histopathologic Cancer Detection, i.e., for further analysis of CNN representations and visualization of class activation heatmaps, intermediate activations, and convolutional layer filters, as well as some parts of the source code, have been taken from \cite{chollet2018deep}. The code is licensed under the MIT license.
"alphanum_fraction": 0.7899418082,
"avg_line_length": 81.9263565891,
"ext": "tex",
"hexsha": "ba37e51c05bd31857cd9441d4e76eeddb2fe3458",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e3223856026a4bebeaeca46ea15dd42957c1e7da",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "bmarko98/histopathologic-cancer-detection",
"max_forks_repo_path": "thesis_paper/chapters/impl.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "e3223856026a4bebeaeca46ea15dd42957c1e7da",
"max_issues_repo_issues_event_max_datetime": "2020-03-03T22:19:35.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-03-03T22:19:35.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "bmarko98/histopathologic-cancer-detection",
"max_issues_repo_path": "thesis_paper/chapters/impl.tex",
"max_line_length": 1006,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "e3223856026a4bebeaeca46ea15dd42957c1e7da",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "bmarko98/histopathologic-cancer-detection",
"max_stars_repo_path": "thesis_paper/chapters/impl.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-12T12:48:29.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-04-17T11:40:00.000Z",
"num_tokens": 5114,
"size": 21137
} |
\section{Channels and Carriers}
\p{\phantomsection\label{sThree}The term \q{channels of communication} is usually associated with
process calculi, wherein channels pass values between procedures,
which in turn are combined sequentially or in parallel.
The possibility of \q{bidirectional} channels between
concurrent processes gives channels more structure, and
points to the wider scope of process calculi, compared to
lambda calculi. There is a well-known \q{embedding} of
(classical) \mOldLambda{}-Calculus into process calculi, although
there does not appear to be a comparable quantity of
research into process-algebraic
interpretations of \q{expanded} \mOldLambda{}-calculi with
objects, exceptions, and so forth. Since process algebras
and calculi prioritize the analysis of computations
that run in parallel \mdash{} and the algebraic combinations
of procedures, such as concurrent vs. sequential \mdash{} the
narrower themes of \mOldLambda{}-abstraction in one single
procedure may seem tangential to serious process analysis.
}
\p{If real, such an assumption is unfortunate,
because the underlying semantics
of communicating values between disparate computing procedures
\mdash{} even apart from issues of concurrency, or from issues
concerning whether two superficially different \mOldLambda{}-expressions
are structurally equivalent (i.e., apart from the
main analytic themes of process- and \mOldLambda{}-calculi,
respectively) \mdash{}
is important in its own right. Any value passed between
procedures \i{may} be reinterpreted as belonging to a
different type (the number \litOne{} can be an integer in one
context that gets interpreted as the decimal \litOnePtZ{} somewhere
else), which \i{may} even cause a modification (\litOnePtO{} gets
truncated to just \litOne{}, say, if the decimal is passed
to a function expecting an integer). Moreover, inter-procedure
communications \i{may} be subject to gatekeeping and/or overloading
which blocks the target procedure from running if the
passed values are incorrect (relative to some specification) or
which selects one from a set of possible procedures \mbox{to run based
on the values passed for each occasion}.
}
\p{These processual issues give rise to operators different from those of conventional
process calculi, but they can still be introduced as algebraic
structures on groups of procedures. Let \piOne{}, \piTwo{}, and \piThree{}
represent three procedures; we can use notation like \pisbl{} to represent
a \q{guarded} inter-procedure combination, wherein a gatekeeper is
in effect that may \i{block} \piTwo{}. We may also use notation like
\pisfo{} to represent a \q{polymorphic} inter-procedure combination,
wherein a gatekeeper selects one of \piTwo{} or \piThree{} as the
proper procedure to target based on passed values. Note that the former
case is a special example of the latter if we assign \piThree{} in the
first notation as a \q{null} procedure, \makebox{which does nothing,
having no actions or effects}.
}
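\p{To make these combinations concrete, the following Python sketch (my own illustration, not a construct of any established calculus) models the guarded combination, where a gatekeeper may block the target, and the polymorphic combination, where the gatekeeper selects among candidate procedures based on the passed value; the final lines show the reduction of the former to the latter via a \q{do nothing} procedure.}
\begin{verbatim}
def guarded(guard, target):
    # Combination in which a gatekeeper may block the target.
    def combined(value):
        if guard(value):
            return target(value)
        return None          # blocked: behaves as the "null" procedure
    return combined

def polymorphic(selector, *targets):
    # Combination in which a gatekeeper selects the procedure to run.
    def combined(value):
        return targets[selector(value)](value)
    return combined

# Guarding as a special case of polymorphic selection:
noop = lambda value: None
double_if_positive = polymorphic(lambda v: 0 if v > 0 else 1,
                                 lambda v: 2 * v, noop)
\end{verbatim}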
\p{These inter-procedure relations have nontrivial semantics
even if we do not include concurrency or bidirectional
communication, so that \q{channels} are closer in spirit to
lambda abstraction than to stateful \q{message-lines} as in
the Calculus of Communicating Systems.
As the expansion of lambda calculi
toward Object-Orientation (in the \q{Sigma} or \sigmaCalculus{}) and
other variations shows, even a simpler notion of channels,
generalized from lambda abstractions (from which
bidirectional mutable channels can then be modeled) is
not uninteresting. In the literature,
criteria like \q{guarded choice}, supplementing
underlying process calculi, appear to be understood
principally at the level of \i{procedures} \mdash{} for one example,
the case of \q{choosing} one of several procedures is
similar to an operator that unifies multiple procedures
into \i{one} procedure with an \q{either or} logic:
\q{\piOne{} \i{or} \piTwo{}} is a procedure which may on some
occasions execute as \piOne{} and elsewhere as \piTwo{}
(see for instance \cite{EricWalkingshaw}). With
this either-or logic defined on procedures, inter-procedure combinations
(where one procedure sends values to a second, causing the second to begin)
which are guarded and polymorphic can be modeled as combinations
where the second procedure is an either-or sum of multiple
more-specific procedures (possibly including one \q{do nothing} no-op).
}
\p{This model, however, neglects the details which guide how either-or
choices are understood to be made, according to the intended
theoretical models. An important class of guarded-choice cases is
where choices are made on the basis specifically of the \i{values}
sent between procedures \mdash{} say, dispatching to different
function-bodies according to whether or not a number is in a range
\rRan{}. These cases can perhaps be described as \i{localized guarded choice}
because all information pertinent to the guard is derived from
the passed values (and not from any global or ambient state).
We can further identify \i{constructor-localized} guarded choice as
cases where the \q{guards} are simply the value-constructors for
values passed in to the target procedure (maybe cast). In
a language like \Cpp{} one or more \q{hidden} functions can execute
before a called function actually begins, insofar as
\q{copy constructors} may be in effect, controlling how values are copied
from one function's context to another. Without deliberately
modeling process calculi at a technical level, then, \Cpp{} does
provide a \q{hook} where various process-related ideas can be implemented.
}
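\p{A toy model of constructor-localized guarded choice, with names of my own invention: the value constructor itself acts as the guard, executing before the target procedure and able to prevent it from starting.}
\begin{verbatim}
class Percentage:
    # Value constructor acting as a guard on the passed value.
    def __init__(self, value):
        if not 0 <= value <= 100:   # guard derived solely from the value
            raise ValueError("value out of range")
        self.value = value

def report(p):
    print(f"{p.value}%")            # target procedure; the guard already ran

report(Percentage(42))              # the guard passes; the target runs
# report(Percentage(120))           # the guard blocks the target
\end{verbatim}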
\p{Insofar as Localized Guarded Choice bases guarded-choice semantics
solely on inter-procedure passed values, I would argue that it lies
in a theoretically intermediate position between \mOldLambda{}-style and
process calculi. From one perspective, Localized Guarded Choice is a
variation on inter-procedure combination, wherein one already-executing
procedure initiates a second by passing one or more values between them.
This perspective is perhaps the more intuitive when seen from the
point of view of the prior procedure.
}
\p{However, Localized Guarded Choice is
also a variation on \mOldLambda{}-abstraction, because it presumes that
the abstracted symbol in the target procedure's implementation
is not substituted indiscriminately, but rather can only
be furnished with values conformant to some
specification. We can envision a \q{Local Guard Calculus}, maybe
dubbed \mGamma{}-calculus, which works on the assumption that
for each \mOldLambda{}-abstraction there is a corresponding
value constructor which executes as a procedure before a target
procedure is called (and may even prevent the target
procedure from starting, e.g. by throwing an exception). From the
perspective of the prior, calling procedure, this hidden execution
appears to be an added procedural layer, an intermediary
function that lies between itself and the target. From the
perspective of the target procedure, however, the intermediary
value-constructors appear more as logical guarantees keyed to
its implementational specifications \mdash{} i.e., as abstraction-refinements.
In short, depending on perspective, this hypothetical
Local Guard Calculus can be seen as either a special case of
Lambda or Process calculi, respectively.
}
\p{Since the relevant issues of gatekeeping and overloading are
\q{intermediate} in this sense, they are not central foci of
either genre of calculi, and call for a distinct terminology and
conceptualization. I will accordingly think of function implementations
as for practical purposes \q{procedures} which communicate via
\q{channels}, but my sketch of \q{Channel Algebra} is not a
process calculus or process algebra \i{per se}, and will
skirt around foundational process-oriented topics like concurrency,
or the possibility of stateful, bidirectional channels.
Moreover, my ambition is to
model source code and behavior via semantic graphs which
can fit into the \RDF{} (and \DH{}/\CH{}) ecosystems,
a reconstruction that can
start at the simpler non-concurrent, single-threaded level.
So for the duration of this section I
will attend exclusively to the semantics of
functions passing values to other functions so as to
initiate the procedures which the target functions implement;
setting aside considerations such as whether the prior
function \q{blocks} until the second function completes
(which represents sequential-process semantics) or,
instead, continues running, concurrently.
}
\decoline{}
\p{Suppose one function calls a second. From a high level perspective,
this has several consequences that can be semantic-graph represented
\mdash{} among others, that the calling function depends on an
implementation of the callee being available \mdash{} but at the
source code level the key consequence is that a node representing
source tokens which designate functional values enters into
different semantic relations (modeled by different kinds
of edge-annotations) than nodes marking other types
of values and literals. Suppose we have an edge-annotation
that \nodex{} is a value passed to \nodef{}; this graph is only
semantically well-formed if \nodef{}'s representatum has
functional type (by analogy to the \mbox{well-formedness criteria, at
least in typed settings, of \lambdaxfx{})}.
}
\vspace{-1em}
\p{This motivates the following: suppose we have a Directed
Hypergraph, where the
nodes for each hyper-edge represent source-code tokens (specifically,
symbols and literals). Via the relevant Source Code Ontology, we can
assert that certain
edge-annotations are only possible if a token (in subject or object position)
designates a value passed to a function. From the various edge-annotation
kinds which meet this criterion, we can define a set of \q{channel kinds}.
}
\p{For every function called, there is a corresponding
function implementation which has its own graph representation.
Assume this implementation includes a \i{signature} and a \i{body}.
With sufficient compiler work, the body can be expressed as an
Abstract Syntax Tree and, with further analysis, an
Abstract Semantic Graph \mdash{} which in turn will identify semantic
connections such as the scoped relation between a symbol
appearing in the signature and the same symbol in the
body \mdash{} these symbols will all share the same
value (unless a symbol in a nested lexical scope \q{hides}
the symbol seen in the parent scope).
}
\p{This implicitly
assumes that symbols \q{hold} values; to make the
notion explicit, I will say that symbols are
\i{carriers} for values. It may be appropriate
to consider literals as carriers (as demonstrated by
the \Cpp{} \operatorqq{}, the conversion of character-string
literals to typed values is not always trivial, nor
necessarily fixed by the language engine). For now, though,
assume that carriers correspond to lexically-scoped symbols
(I also ignore globals). Carriers do not necessarily hold a
value at every point in the execution of a program; they
may be \q{preinitialized}, and also \q{retired} (the
latter meaning they no longer hold a meaningful value;
consider deleted pointers or references to out-of-scope
symbols). A carrier may pass through a
\q{career} from preinitialized to initialized, maybe then
changing to hold \q{different} values, and maybe then
retired. I assume a carrier is associated with a single type
throughout its career, and can only hold values appropriate for its
type. The variety of possible
careers for a carrier is not directly tied to its type: a
carrier which cannot change values (be reinitialized)
is not necessarily holding a \const{}-typed value.
}
\p{Having introduced the basic idea of \q{carriers} I will now consider
carrier operations in more detail, before then expanding on the
theory of carriers by considering how carriers group into channels.
}
\spsubsection{Carriers and Careers}
\p{In material fact, computers work with \q{numbers} that
are electrical signals; but in practice, we reason about
software via computer code which abstracts and is
materially removed from actual running programs. In these
terms, we can note that \q{computers} (in this
somewhat abstract sense) do not deal \i{directly} with
numbers (or indeed other kinds of values), but rather
deal with \q{symbols} that express, or serve as placeholders
for, or \q{carry} values. Even \q{literal} values are composed
of symbols: the number (say) \oneTwoThree{}, which has a particular
physical incarnation in electrons, is represented in source
code by a Unicode or \ascii{} character string (which materially
bears little resemblance to a \q{digital} \oneTwoThree{}).
}
\p{This primordial fact is orthogonal to type systems. The underlying
notion of type theory is of \i{typed values} \mdash{} the idea
that the whole universe of values that may be encountered
during a computation can be classified into types, so
a particular value always has one single declared type
(though it may be a direct candidate for classification
as other types as well). There is a core distinction
between \i{types} and \i{sets}: two distinct types can
be instantiated by the same set of possible values, and
one type can cover a different spectrum of values on different
computers. For example, any type which has an expandable data collection
\mdash{} say, a list of numbers \mdash{} can have values which
would take ever larger amounts of memory; e.g., as lists grow
larger (I made essentially the same
comment about pointer-lists on page
\hyperref[pointerlist]{\pageref{pointerlist}}).
Since different computers have different available
memory at different times, there is no fixed relation between
the set of \q{inhabitants} of these types which can be
represented on different computers, or on one single computer
at different times. Type theory therefore belongs to
a logicomathematical (or philosophical) foundation distinct from
set theory. Instead of defining types from sets of values,
types are instead built up from more primitive types via
several operators, most notably \q{products}, or \q{tuples}
wherein a fixed number of values of (in the general case)
distinct types are grouped together
(as I outlined from one
perspective within subsection \sectsym{}\hyperref[types]{1.4}).
}
\p{But type theory still considers these \q{values} as abstract, primitive
constructs. Both types and values are in essence undefinable concepts,
for which there are no deeper concepts in terms of which they can be defined
(although this de-facto non-definition can be given a rather
elegant gloss with Category Theory). Recognizing that the type-value
pair does not itself include the medium through which a value
is \i{represented} in computer code, we can say that an equally
primordial concept can be (say) a \i{carrier}, a symbolic expression
which \q{holds} a value, or the anticipation of one. So we introduce
\q{carriers} as a third fundamental concept. All carriers, in
this theory, have one single (declared) type; and all carriers
can hold one (or in some contexts many) values; so there is
an interrelationship between the notions \i{type}, \i{value},
and \i{carrier}.
}
\p{Carriers come in two broad groups: \i{literals} (like \qOneTwoThree{})
which embody a single value everywhere they occur in source
code, and \i{symbols} (like \xSymbol{})
which can have different values when they
occur in different places in source code. There may be many
\q{instances} of a single source code symbol in one code base,
each of which, if they occur in different \q{scopes}, may
have their own value unrelated to the others. In general one
symbol might carry different values at different times,
though some programming environments may restrict this variability.
}
\p{\i{Immutable} symbols are tightly bound to their values; they have
one single value for the full time that they have any value
at all, and therefore act as \q{transparent} proxies for their value
\mdash{} especially if languages discourage constructs like
non-initialized values and \addressOf{} operators (which
can cause symbols to hold no-longer-meaningful values).
}
\p{Because \q{uninitialized} carriers and \q{dangling pointers}
are coding errors, within \q{correct} code, carriers
and values are bound tightly enough that the whole carrier/value
distinction might be considered an artifact of programming practice,
out of place in a rigorous discussion of programming languages
(as logicomathematical systems, in some sense). But even
if the \q{career} of symbols is unremarkable, we cannot avoid
in some contexts \mdash{} within a debugger and/or an \IDE{}
(Integrated Development Environment, software for writing programs),
for example \mdash{} needing to formally distinguish the carrier
from the value which it holds, or recognize that carriers
can potentially be in a \q{state} where, at some point in which
they are relevant for code analysis or evaluation, they
do not yet (or do not any longer) hold meaningful values.
The \q{trajectory} of carrier \q{lifetime} \mdash{} from being declared,
to being initialized, to falling \q{out of scope} or otherwise \q{retired}
\mdash{} should be integrated into our formal inventory of programming
constructs, not relegated to an informal \q{metalanguage} suitable
for discussing computer code as practical documents but not as formal systems.
I'd make the analogy to how lambda calculus models \q{symbol rewriting}
as a formal part of its theory, rather than a notational intuition
which reflects mathematical conventions but is seen as part
of mathematical \q{metalanguage} rather than mathematical \q{language}.
The history of mathematical foundations reveals how convention, notation,
and metalanguage tends to become codified into theory, models, and
(mathematical) language proper. Likewise, we can develop a formal
theory of carriers which models that carriers have different states
\mdash{} including ones where they do not hold meaningful values
\mdash{} separate and apart from whether the underlying type
system allows for mutable value-holders, or pointers, references
to lexically scoped symbols, and other code formations that
can lead to \q{undefined behavior}.
}
\p{Consider a carrier which has been declared, so it is assigned a type and belongs to
some given scope in its source code, but has not yet been initialized.
The carrier does not therefore, in the theory, hold any value at all. This
is different from holding a default value, or a null value, or
any other value that a type system might introduce to represent
\q{missing} information. In practice, of course, insofar as a \q{carrier} refers
to a segment of memory, the carrier will be seen by the computer as having
\q{some} (effectively random) value; some bit pattern left behind by some
other calculation, which obviously has no \q{meaning}. Programming languages
usually talk about the (mis)use of uninitialized variables (i.e., in this
context, carriers) as leading to \q{undefined behavior} and therefore beyond any
structured reasoning about code, and leave it at that. We can be more rigorous,
however, and define being uninitialized as a \i{state} of a carrier, as is
holding a meaningful value (once it \i{is} initialized) and then, some time
later, no longer holding a meaningful value, because its associated memory
has been deemed recyclable (typically, once a symbol falls out of scope). Any
carrier therefore has at least three kinds of state, which we can
call \i{pre-initialized}, \i{initialized}, and \i{retired}.\footnote{Again, though,
we should not think of (say) \q{Preinitialized} as a \i{value} with a \i{type}.
A carrier declared as type \intthrtwo{}, say (32-bit integer) can only hold
values of this type, when it holds any value at all, which precludes
interpreting \q{Preinitialized} as some kind of \q{null} value which is somehow
arranged to be an instance of \intthrtwo{}.
}
}
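\p{These three states can be modeled directly; the following minimal sketch (its API names are my own) enforces a single declared type per career and refuses to yield a value outside the initialized state.}
\begin{verbatim}
class Carrier:
    # A typed value-holder with a pre-initialized / initialized /
    # retired career.
    def __init__(self, typ):
        self.typ, self.state, self._value = typ, "pre-initialized", None

    def initialize(self, value):
        if self.state == "retired":
            raise RuntimeError("a retired carrier cannot be re-initialized")
        if not isinstance(value, self.typ):  # one declared type per career
            raise TypeError("value does not conform to the carrier's type")
        self._value, self.state = value, "initialized"

    def value(self):
        if self.state != "initialized":
            raise RuntimeError(f"no meaningful value ({self.state})")
        return self._value

    def retire(self):
        self._value, self.state = None, "retired"
\end{verbatim}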
\p{In this theory, carriers are the basic means by which values are represented
within computer code, including representing the \q{communication} of
values between different parts of code source, such as calling a function.
The \q{information flow} modeled by a function-call includes values
held by carriers at the function-call-site being transferred to
carriers at the function-implementation site. This motivates
the idea of a \q{transfer} of values between carriers, a kind of primitive
operation on carriers, linking disparate pieces of code. It also
illustrates that the symbols used to name function parameters, as
part of function signatures, should be considered \q{carriers} analogous
to lexically-scoped and declared symbols.
}
\p{Taking this further, we can define a \i{channel} as a list of carriers which, by
inter-carrier transfers, signify (or orchestrate) the passage of data into and out
of function bodies (note that this usage varies somewhat from process calculi,
where a channel would correspond roughly to what is here called a single carrier;
here channels in the general case are composed of multiple carriers).
I'll use the notation \opTransfer{} to represent inter-carrier transfer: let
\carrOne{} and \carrTwo{} be carriers, then \carrOneOpTransferTwo{} is a
transfer \q{operator} (note that \opTransfer{} is non-commutative; the
\q{transfer} happens in a fixed direction), marking the
logical moment when a value is moved from code-point to code-point.
The \opTransfer{} is intended
to model several scenarios, including \q{forced coercions} where the associated
value is modified. Meanwhile, without further details a \q{transfer} can be
generalized to \i{channels} in multiple ways.
If \carrOne{} and \carrTwo{} are carriers which belong to two channels
(\chanOne{}, \chanTwo{}), then \carrOneOpTransferTwo{} elevates to
a transfer between the channels
\mdash{} but this needs two indices to be concrete: the notation has to
specify which carrier in \chanOne{} transfers to which carrier in \chanTwo{}.
For example, consider the basic function-composition \fofg{}:
\fDotOfGX{} = \fOfGx{}. The analogous \q{transfer} notation
would be, say, \gOpTransferOneOneF{}:
here the first carrier in the \returnch{} channel
of \funG{} transfers to the first carrier in the \lambdach{} channel of \funF{}
(the subscripts indicate the respective positions).
}
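\p{Continuing the Carrier sketch above, the transfer operator and its elevation to channels (with the two indices made explicit) can be illustrated as follows; the optional coercion argument models forced coercions such as the integer-to-decimal conversion mentioned earlier.}
\begin{verbatim}
def transfer(src, dst, coerce=None):
    # Inter-carrier transfer: move src's value into dst, possibly coerced.
    value = src.value()
    dst.initialize(coerce(value) if coerce else value)

def channel_transfer(src_chan, i, dst_chan, j, coerce=None):
    # Elevate transfer to channels (lists of carriers) via two indices.
    transfer(src_chan[i], dst_chan[j], coerce)

# Composition f(g(x)): g's return channel feeds f's lambda channel.
g_return, f_lambda = [Carrier(int)], [Carrier(float)]
g_return[0].initialize(1)
channel_transfer(g_return, 0, f_lambda, 0, coerce=float)  # 1 becomes 1.0
\end{verbatim}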
\p{Most symbols in a function body (and corresponding nodes in a
semantic graph) accordingly represent carriers, which
are either passed in to a function or lexically declared
in a function body. Assume each function body corresponds
with one lexical scope which can have subscopes
(the nature of these scopes and how they fit in
graph representation will be addressed later in this section).
The \i{declared} carriers are initialized with
values returned from other functions (perhaps the
current function called recursively), which can
include constructors that work on literals (so, the
carrier-binding in source code can look like
a simple assignment to a literal, as in \intieqzero{}).
In sum, whether they are passed \i{to} a function or
declared \i{in} a function, carriers are only initialized
\mdash{} and only participate in the overall semantics
of a program \mdash{} insofar as they are passed to other functions
or bound to their return values.
}
\p{Furthermore, both of these cases introduce associations between
different carriers in different areas of source code. When a carrier
is passed \i{to} a function, there is a corresponding carrier
(declared in the callee's signature) that receives the
former's value: \q{calling a function} means
transferring values between carriers present at the site of
the function call to those present in the function's
implementation. Sometimes this works in reverse: a function's
return may cause the value of one of its carriers to be
transferred to a carrier in the caller (whatever
carrier is bound to the caller's return value).
}
\p{Let \carOne{} and \carTwo{} be two carriers. The \opTransfer{} operator
(representing a value passed from \carOne{} to \carTwo{}) encompasses
several specific cases. These include:
\begin{enumerate}\eli{} Value transfer directly between two carriers in one scope, like \aeqb{} or \aceqb{}.
\eli{} A value transferred from a carrier in one function body to a carrier at the
call site, when the return value of that function is assigned, as in \yeqfx{}
when \fSym{} returns with \retFive{}, so that the value \five{} is transferred to \ySym{}.
\eli{} \label{transf}A value transferred between a
carrier at a call-site and a carrier in the
called function's body. Given \yeqfx{} and \fFun{} declared as, say,
\fIntI{}, then the value in carrier \xSym{} at the call-site is transferred to
the carrier \iSym{} in the function body. In particular, every node
in the called function's code-graph whose vertex represents a source-code
token representing symbol \iSym{} then becomes a carrier whose value is
that transferred from \xSym{}.
\eli{} A value transferred between a \returnch{} channel and either a
\lambdach{} or \sigmach{} channel, as part of a nested expression or a
\q{chain of method calls}. So in \hfx{}, the value held by the carrier
in \fFuns{}'s \returnch{} channel is transferred to the first carrier in
\hFun{}'s \lambdach{}. An analogous \returnch{}\opTransfer{}\sigmach{}
transfer is seen in code like \fxdoth{}: the value
in \fFuns{}'s \returnch{} channel becomes the value in \hFun{}'s \sigmach{},
i.e., its \q{\this{}} (we can use \opTransfer{}
as a notation between \i{channels} in this case because we understand
the Channel Algebra in force to restrict the size of both
\returnch{} and \sigmach{} to be at most one carrier).
\end{enumerate}
}
\vspace{-1em}
%Define two narrower operators
%modeling a transfer of values between them: \carOnetoTwos{}
%means direct assignment (between carriers in the same
%scope, as in \aeqb{});
\p{Let \carOnetoTwof{} be the special case of \opTransfer{}
corresponding to item \hyperref[transf]{(3)}: a transfer
effectuated by a function call, where
\carOne{} is at the call site and \carTwo{} is part of a
function's signature. If \fOne{} calls \fTwo{} then
\carOne{} is in \fOne{}'s context, \carTwo{} is in \fTwo{}'s
context, and \carTwo{} is initialized with a copy of
\carOne{}'s value prior to \fTwo{} executing.
A \i{channel} then becomes a
collection of carriers which are found in the
scope of one function and can be on the
right hand side of an \carOnetoTwof{} operator.\footnote{In general, we might say that two carriers are \q{convoluted}
if there is a value passed between them or if their values
somehow become interdependent (as an example of
interdependence without direct transfer, consider tests
that two lists are the same size, or that two numbers are
monotone increasing, as a runtime disambiguation of
dependent-typed polymorphic implementations). Depending
on context, convolution can refer to a structure in
program graphs in the abstract or to an event in
program execution: modeling a program as a series
of discrete points in time \mdash{} each point inhabited by a
small change in program state \mdash{} two carriers are convoluted
at the time-point where a value-transfer occurs (or the
steps toward some kind of gatekeeping check get initiated).
}
}
\p{To flesh out Channels' \q{transfer semantics} further, I will refer back
to the model of function-implementations as represented in code
graphs. If we assume that all code in a computer program is found
in some function-body, then we can assume that any function-call
operates in the context of some other function-body. In particular,
any carrier-transfer caused by a function call involves a link between
nodes in two different code graphs (I set aside the case of recursive
functions \mdash{} those which call themselves \mdash{} for this discussion).
}
\p{So, to review my discussion in this section so far, I started with the
process-algebraic notion of inter-procedure combinations; but whereas process
calculi enlarge this notion by distinguishing different syntheses of procedures
(such as concurrent versus sequential), I have focused instead on the
communication of values between procedures. The semantics of inter-procedure
value-transfers is unexpectedly detailed, because it has to recognize
the possibility of nontrivial copy semantics, casts, overloading,
and perhaps \q{gatekeeping}. Furthermore, in addition to these
semantic details, analysis of value-transfers is particularly significant
in the context of Source Code Ontologies and \RDF{} or
Directed Hypergraph representations
of computer code. This is because code-graphs give us a rigorous foundation
for modeling computer programs as sets of function-implementations
which call one another. Instead of abstractly talking about
\q{procedures} as conceptual primitives, we can see procedures as
embodied in code-graphs (and function-values as
constructed from them, which I emphasized in the last section).
\q{Passing values between} procedures is
then explicitly a family of relationships between nodes
(or hypernodes) in disparate code-graphs, and the various semantic
nuances associated with some such transfers (type casts, for example)
can be directly modeled by edge-annotations. Given these
possibilities, I will now explore further how the framework of
\i{carriers} and \i{channels} fits into a code-graph context.
}
\spsubsectiontwoline{Channel Complexes, Code Graphs, and Carrier Transfers}
\p{For this discussion, assume that \fOne{} and \fTwo{} are implemented functions with
code graphs \gammaOne{} and \gammaTwo{}, respectively. Assume
furthermore that some statement
or expression in \fOne{} involves a call to \fTwo{}. There are several specific
cases that can obtain: the expression calling \fTwo{} may be nested in a
larger expression; \fTwo{} may be called for its side effects alone,
with no concern for its return value (if any); or the result of \fTwo{}
may be bound to a symbol in \fOne{}'s scope, as in \yeqfx{}. I'll take
this third case as canonical; my discussion here extends to the other
cases in a relatively straightforward manner.
}
\p{A statement like \yeqfx{} has two parts: the expression \fx{} and the symbol
\ySym{} to which the results of \fFun{} are assigned. Assume that this
statement occurs in the body of function \fOne{}; \xSym{} and
\ySym{} are then symbols in \fOne{}'s scope and the symbol \fFun{} designates
(or resolves to) a function which corresponds to what I refer to here as
\fTwo{}. Assume \fTwo{} has a signature like \fIntI{}. As such, the
expression \fx{}, where \xSym{} is a carrier in the context of
\fOne{}, describes a \i{carrier transfer} according to which the value
of \xSym{} gets transferred to the carrier \iSym{} in \fTwo{}'s context.
}
\p{In the terms I used earlier, \fTwo{}'s signature represents a
channel \q{package} \mdash{} which, in the current example, has
a \lambdach{} channel of size one (with one carrier of
type \int{}) and a \returnch{} channel of size one (\fTwo{}
returns one \int{}). Considered in the context of carrier-transfers
between code graphs, a Channel Package can be seen as a description
of how two distinct code-graphs may be connected via carrier
transfers. When a function is called, there is a channel
complex which I'll call a \i{payload} that supplies values to
the channel \i{package}. In the concrete example, the
statement \yeqfx{} is a \i{call site} describing a channel
\i{payload}, which becomes connected to a function
implementation whose signature represents a channel
\i{package}: a collection of transfers \carrOneOpTransferTwo{}
together describe an overall transfer between a
\i{payload} and a \i{package}.
}
\p{More precisely, the \fx{} example represents a carrier transfer whose
target is part of \fTwo{}'s \lambdach{} channel, which we can notate
\carrOneOpTransferTwolambda{}. Furthermore, the full statement \yeqfx{}
shows a transfer in the opposite direction: the value in
\fTwo{}'s \i{\returnch{}} channel is transferred to the carrier \ySym{} in
the \i{payload}. This relation, involving a return
channel, can be expressed with notation like \carrTwoOpTransferOnereturn{}.
The syntax of a programming language governs how code at a call site
supplies values for carrier transfers to and from a function body: in
the current example, binding a call-result to a symbol always
involves a transfer from a \i{\returnch{}} channel, whereas handling an
exception via code like \catchexce{} transfers a value from a called
function's \i{\exceptionch{}} channel. The syntactic difference between
code which takes values from \returnch{} and \exceptionch{} channels,
respectively, helps reinforce the \i{semantic} difference between
exceptions and \q{ordinary} returns. Similarly, method-call syntax
like \objfx{} visually separates the values that get transferred to a
\q{\sigmach{}} channel (\obj{} in this case)
from the \q{ordinary} (\lambdach{}) inputs, reinforcing Object semantics.
}
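\p{Continuing the sketches above, a signature like \fIntI{} can be rendered as a package with singleton \lambdach{} and \returnch{} channels, and a call site as a payload wired to it by one transfer in each direction; the function body shown is, of course, a placeholder.}
\begin{verbatim}
def make_package():
    # Channel package for a signature like int f(int i).
    return {"lambda": [Carrier(int)], "return": [Carrier(int)]}

def call_f(x):                         # x: a carrier at the call site
    package = make_package()
    transfer(x, package["lambda"][0])  # payload -> package (input)
    i = package["lambda"][0].value()
    package["return"][0].initialize(i + 1)   # placeholder body
    y = Carrier(int)
    transfer(package["return"][0], y)  # package -> payload (output)
    return y

x = Carrier(int)
x.initialize(41)
print(call_f(x).value())               # prints 42
\end{verbatim}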
\p{To consolidate the terms I am using: we can interpret both function
\i{signatures} and \i{calls} in terms of channels. Both involve
\q{carrier transfers} in which values are transferred \i{to} or \i{from}
the channels described by a function signature. The distinction between
functions' \q{inputs} and \q{outputs} can be more rigorously stated,
with this further background, as the distinction between channels
in function signatures which receive values \i{from} carriers at
a call site (inputs), and those \i{from which} \mbox{values are obtained
once the procedure has completed (outputs)}.
}
%\vspace{-1em}
\p{A Channel Expression Language (\CXL{}) can describe channels both in
signatures and at call-sites. The aggregation of channels generically
described by \CXL{} expressions I am calling a \i{Channel Complex}.
A Channel Complex representing a function \i{signature} I am calling
a \i{Channel Package}, whereas complexes representing a function
\i{call} I am calling a \i{Channel Payload}. Input channels
are then those whose carrier transfers occur in the payload-to-package
direction, whereas output channels are the converse.
}
\vspace{1em}
\itcl{rz}
\vspace{-1em}
\p{The demo code for this chapter uses an Intermediate
Representation which builds complexes representing both
function signatures and dynamically evaluated
function-calls. Figure \ref{lst:rz} shows
code in the special Interface Description Language
which includes a simple signature (\OneOverlay{}).
It also has several dynamic calls, including one to
create a new object of a \Cpp{} class called \FnDoc{}
(\TwoOverlayu{}) and one to retrieve a preconstructed
object of a class called \KCMEnv{} (\ThreeOverlay{})
(in this language the \dbleq{} operator is used
when the right-hand-side expression
is a value constructor, and there are assignment
operators, notated with a preceding back-slash as in
\Seq{} or \Sdbleq{}, specifically for assigning values
to previously-uninitialized symbols).
}
\vspace{1em}
\itcl{rzd}
\vspace{-1em}
\p{Corresponding to this sample, the Intermediate Representation
(actually Common \Lisp{}) in Figure \ref{lst:rzd},
to which the above code compiles, shows both channel
constructions for signatures (\OneOverlay{}) and
dynamic calls (\TwoOverlay{}, \ThreeOverlay{}, and
\FourOverlay{}). These calls vary in the channel formations
used \mdash{} for instance, \FourOverlay{} uses a sigma
channel (and maps to a \Cpp{} method via the \Qt{} meta-object
system); whereas \ThreeOverlay{} uses a predefined
\Cpp{} function exposed to the scripting engine
(in this case one which retrieves a predefined \KCMEnv{}
object that is globally part of the scripting
environment; \envv{} stands for \q{environment value}),
and \TwoOverlay{} default-constructs an object of
the requested type (again, the \Qt{} code called from
this \Lisp{} form will employ \Qt{} meta-object code
to actually allocate the object, providing a type identifier based on
the type name). These structural differences are reflected in how
the \Cpp{} callback methods are named (here visible indirectly by
the naming convention for some \Lisp{} symbols, which get
translated behind the scenes to
method names of a certain \Qt{}-based class) \mdash{}
\kprom{}, for example, maps to the particular \Cpp{} function
which the engine
uses to default-construct a value given a corresponding type name.
This code shows that channel complexes can be constructed
for several purposes, including but not limited to
dynamic evaluation (when evaluation is desired,
it can be triggered by \kcmde{}, as at \FiveOverlay{}).
}
\p{In addition to the payload/package distinction, we can also
understand Channel Complexes at two further levels. On the one hand,
we can treat Channel Complexes as found \i{in source code},
where they describe the general pattern of payload/package
transfers. On the other hand, we can represent Channel Complexes
\i{at runtime} in terms of the actual values and types held by
carriers as transfers are effectuated prior to, and then
after, execution of the called function. Accordingly,
each Channel Complex may be classified as a \i{compile-time}
payload or package, or a \i{runtime} payload or package, respectively.
The code accompanying this chapter includes a \q{Channel Complex library}
\mdash{} for creating and analyzing Channel Complexes via a special
(\Lisp{}-based) Intermediate Representation \mdash{}
that represents complexes of each variety, so it can be used
both for static analysis and for enhanced runtimes and scripting.
}
\p{This formal channel/complex/package/payload/carrier vocabulary codifies
what are to some degree common-sense frameworks through which programmers
reason about computer code. This descriptive
framework (I would argue) more effectively integrates the
\i{semantic} and \i{syntactic} dimensions of source code and program
execution (Figures \ref{lst:rzsyn} and \ref{lst:rzsem},
listed as an addendum at the end of this section, sample the
syntactic \mdash{} or grammar/parsing \mdash{} and semantic/evaluative components
of the demo code compiler pipeline, respectively).
}
\p{Computer programs can be understood \i{semantically} in
terms of \mOldLambda{}-Calculi combined with
models of computation (call-by-value or
by-reference, eager and lazy evaluation, and so forth). These
semantic analyses focus on how values change and are passed between
functions during the course of a running program. From this
perspective, source code is analyzed in terms of the semantics
of the program it describes: what are the semantic patterns and
specifications that can be predicted of running programs on the
basis of source code in itself? At the same time, source code
can also be approached \i{syntactically}, as well-formed
expressions of a formal language. From this perspective,
correct source code can be matched against language grammars and,
against this background, individual code elements
(like tokens, code blocks, expressions, and statements) \mdash{} and
their inter-relationships \mdash{} may be identified.
}
\p{The theory of Channel Complexes straddles both the semantic and syntactic
dimensions of computer code. Semantically, carrier-transfers capture the
fundamental building blocks of program semantics: the overall
evolving runtime state of a program can be modeled as a succession
of carrier-transfers, each involving specific typed values plus
code-graph node-pairs, marking
code-points bridged via a transfer. Meanwhile,
syntactically, how carriers belong to channels \mdash{}
the carrier-to-channel map fixing carriers' semantics \mdash{} structures and
motivates languages' grammars and rules. In particular,
carrier-transfers induce relationships between code-graph nodes.
As a result, language
grammars can be studied through code-graphs' profiles
insofar as they satisfy \RDF{}
and/or \DH{} Ontologies.
}
\p{In sum, a \DH{} and/or Semantic Web representation of computer code
can be a foundation for both semantic and syntactic analyses, and this
may be considered a benefit of Channel Complex representations
even if they only restate the established semantic patterns
of mainstream programming languages \mdash{} for example, even if they
are restricted to a \sigmach{}-\lambdach{}-\returnch{}-\exceptionch{}
Channel Algebra modeled around, say, \Cpp{} semantics
prior to \CppEleven{} (more recent \Cpp{} standards also call for
a \q{\capturech{}} channel for inline \q{lambda} functions).
}
\p{At the same time, one of my claims in this chapter is that
more complex Channel Algebras can lead to new tactics for
introducing more expressive type-theoretic semantics in
mainstream programming environments. As such, most of the
rest of this section will explore additional Channel Kinds
and associated Channel Complexes which extend, rather than
merely codify, mainstream languages' syntax and semantics.
}
%\p{%To flesh out Channels' \q{transfer semantics} further, it is necessary to
%expain the intersection between channels and types in more detail, which I will do by
%(rather casually) appproaching types through Category Theory.
%}
%\subsection{Enhancing Type Systems via Channel Kinds}
%\p{%}
\chapter{Fundamental Parameters of Main Sequence Stars in an Instant \\with Machine Learning}
\chaptermark{Fundamental Stellar Parameters with Machine Learning}
\label{chap:ML}
The contents of this chapter were authored by
%published in October of 2016 as ``\emph{Fundamental Parameters of Main-Sequence Stars in an Instant with Machine Learning}'' by
E.~P.~Bellinger, G.~C.~Angelou, S.~Hekker, S.~Basu, W.~H.~Ball, and E.~Guggenberger and published in October of 2016 in \emph{The Astrophysical Journal}, 830 (1), 31.\footnote{Contribution statement: The work of this chapter was carried out by me; the text was mainly written by me, with contributions from G.~C.~Angelou, in collaboration with the other authors.}
\nocite{2016apj...830...31b}
\section*{Chapter Summary}
Owing to the remarkable photometric precision of space observatories like \emph{Kepler}, stellar and planetary systems beyond our own are now being characterized \emph{en masse} for the first time. These characterizations are pivotal for endeavors such as searching for Earth-like planets and solar twins, understanding the mechanisms that govern stellar evolution, and tracing the dynamics of our Galaxy. The volume of data that is becoming available, however, brings with it the need to process this information accurately and rapidly. While existing methods can constrain \mb{fundamental stellar parameters such as ages, masses, and radii} from these observations, they require substantial computational efforts to do so.
We develop a method based on machine learning for rapidly estimating fundamental parameters of main-sequence solar-like stars from classical and asteroseismic observations. We first demonstrate this method on a hare-and-hound exercise and then apply it to the Sun, 16~Cyg~A \& B, and $34$ planet-hosting candidates that have been observed by the \emph{Kepler} spacecraft. We find that our estimates and their associated uncertainties are comparable to the results of other methods, but with the additional benefit of being able to explore many more stellar parameters while using much less computation time. We furthermore use this method to present evidence for an empirical diffusion-mass relation. Our method is open source and freely available for the community to use.\footnote{The source code for all analyses and for all figures appearing in this chapter can be found electronically at \url{https://github.com/earlbellinger/asteroseismology} \citep{earl_bellinger_2016_55400}.}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%% BODY OF PAPER %%%%%%%%%%%%%%%%%%
\section{Introduction}
In recent years, dedicated photometric space missions have delivered dramatic improvements to time-series observations of solar-like stars. These improvements have come not only in terms of their precision, but also in their time span and sampling, which has thus enabled direct measurement of dynamical stellar phenomena such as pulsations, binarity, and activity. Detailed measurements like these place strong constraints on models used to determine the ages, masses, and chemical compositions of these stars. This in turn facilitates a wide range of applications in astrophysics, such as testing theories of stellar evolution, characterizing extrasolar planetary systems \citep[e.g.][]{2015ApJ...799..170C, 2015MNRAS.452.2127S}, assessing galactic chemical evolution \citep[e.g.][]{2015ASSP...39..111C}, and performing ensemble studies of the Galaxy \citep[e.g.][]{2011Sci...332..213C, 2013MNRAS.429..423M, 2014ApJS..210....1C}.
The motivation to increase photometric quality has in part been driven by the goal of measuring oscillation modes in stars that are like our Sun. Asteroseismology, the study of these oscillations, provides the opportunity to constrain the ages of stars through accurate inferences of their interior structures. However, stellar ages cannot be measured directly; instead, they depend on indirect determinations via stellar modelling.
Traditionally, to determine the age of a star, procedures based on iterative optimization (hereinafter IO) seek the stellar model that best matches the available observations \citep{1994ApJ...427.1013B}.
Several search strategies have been employed, including exploration through a pre-computed grid of models (i.e.\ grid-based modelling, hereinafter GBM; see \citealt{2011ApJ...730...63G, 2014ApJS..210....1C}); or \emph{in situ} optimization (hereinafter ISO) such as genetic algorithms \citep{2014ApJS..214...27M}, Markov-chain Monte Carlo \citep{2012MNRAS.427.1847B}, or the downhill simplex algorithm (\citealt{2013apjs..208....4p}; see e.g.\ \citealt{2015MNRAS.452.2127S} for an extended discussion on the various methods of dating stars). Utilizing the detailed observations from the \emph{Kepler} and CoRoT space telescopes, these procedures have constrained the ages of several field stars to within $10\%$ of their main-sequence lifetimes \citep{2015MNRAS.452.2127S}.
IO is computationally intensive in that it demands the calculation of a large number of stellar models (see \citealt{2009ApJ...699..373M} for a discussion). ISO requires that new stellar tracks are calculated for each target, as they do not know \emph{a priori} all of the combinations of stellar parameter values that the optimizer will need for its search. They furthermore converge to local minima and therefore need to be run multiple times from different starting points to attain global coverage. GBM by way of interpolation in a high-dimensional space, on the other hand, is sensitive to the resolution of each parameter and thus requires a very fine grid of models to search through \citep[see e.g.][who use more than five million models that were varied in just four initial parameters]{2010ApJ...725.2176Q}. Additional dimensions such as efficiency parameters (e.g.\ overshooting or mixing length parameters) significantly impact the number of models needed and hence the search times for these methods. As a consequence, these approaches typically use, for example, a solar-calibrated mixing length parameter or a fixed amount of convective overshooting. Since these values in other stars are unknown, keeping them fixed therefore results in underestimations of uncertainties. This is especially important in the case of atomic diffusion, which is essential when modelling the Sun \citep[see e.g.][]{1994MNRAS.269.1137B}, but is usually disabled for stars with ${M/M_\odot > 1.4}$ because it leads to the unobserved consequence of a hydrogen-only surface \citep{2002A&A...390..611M}.
These concessions have been made because the relationships connecting \mb{observations} of stars to their internal \mb{properties} are non-linear and difficult to characterize. Here we will show that through the use of machine learning, it is possible to avoid these difficulties by capturing those relations statistically and using them to construct a regression model capable of relating observations of stars to their structural, chemical, and evolutionary properties. The relationships can be learned using many fewer models than IO methods require, and can be used to process entire stellar catalogs with a cost of only seconds per star.
To date, only about a hundred solar-like oscillators have had their frequencies resolved, allowing each of them to be modelled in detail using costly methods based on IO. In the forthcoming era of TESS \citep{2015JATIS...1a4003R} and PLATO \citep{2014ExA....38..249R}, however, seismic data for many more stars will become available, and it will not be possible to dedicate large amounts of supercomputing time to every star. Furthermore, for many stars, it will only be possible to resolve \emph{global} asteroseismic quantities rather than individual frequencies. Therefore, the ability to rapidly constrain stellar parameters for large numbers of stars by means of global oscillation analysis will be paramount.
In this work, we consider the constrained multiple-regression problem of inferring fundamental stellar \mb{parameters} from observable \mb{quantities}. We construct a random forest of decision tree regressors to learn the relationships connecting observable quantities of main-sequence (MS) stars to their zero-age main-sequence (ZAMS) histories and current-age structural and chemical attributes. We validate our technique by inferring the parameters of simulated stars in a hare-and-hound exercise, the Sun, and the well-studied stars 16~Cyg~A~and~B. Finally, we conclude by applying our method on a catalog of \emph{Kepler} objects-of-interest (hereinafter KOI; \citealt{2016MNRAS.456.2183D}).
We explore various model physics by considering stellar evolutionary tracks that are varied not only in their initial mass and chemical composition, but also in their efficiency of convection, extent of convective overshooting, and strength of gravitational settling. We compare our results to the recent findings from GBM \citep{2015MNRAS.452.2127S}, ISO \citep{2015ApJ...811L..37M}, interferometry \citep{2013MNRAS.433.1262W}, and asteroseismic glitch analyses \citep{2014ApJ...790..138V} and find that we obtain similar estimates but with orders-of-magnitude speed-ups.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Grid %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\Needspace{5\baselineskip}
\section{Method} \label{sec:Method}
We seek a multiple-regression model capable of characterizing observed stars. To obtain such a model, we build a matrix of evolutionary simulations and use machine learning to discover relationships in the \mb{stellar models} that connect observable quantities of stars to the model quantities that we wish to predict. \mb{The matrix is structured such that each column contains a different stellar quantity and each row contains a different stellar model.} We construct this matrix by extracting models along evolutionary sequences (see Appendix \ref{sec:selection} for details on the model selection process) and summarizing them to yield the same types of information as the stars being observed. Although each star (and each stellar model) may have a different number of \mb{oscillation} modes observed, it is possible to condense this information into only a few numbers by leveraging the fact that the frequencies of these modes follow a regular pattern \citep[for a review of solar-like oscillations, see][]{2013ARA&A..51..353C}. Once the machine has processed this matrix, one can feed the algorithm a catalogue of \mb{stellar observations} and use it to predict the \mb{fundamental} parameters of those stars.
The \mb{observable information obtained from models that can be} used to inform the algorithm may include, but \mb{is} not limited to, combinations of temperatures, metallicities, global oscillation information, surface gravities, luminosities, and/or radii. From these, the machine can learn how to infer stellar parameters such as ages, masses, core hydrogen and surface helium abundances. If luminosities, surface gravities, and/or radii are not supplied, then they may be predicted as well. In addition, the machine can also infer evolutionary parameters such as the initial stellar mass and initial chemical compositions as well as the mixing length parameter, overshoot coefficient, and diffusion multiplication factor needed to reproduce observations, which are explained in detail below.
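To make the regression step concrete, the following is a minimal sketch of the idea using scikit-learn's random forest regressor; the matrix here is filled with random stand-in values rather than the grid of evolutionary models described below.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Rows: stellar models; columns: observable quantities (e.g., Teff,
# [Fe/H], <Delta nu>).  Targets: the parameters to be predicted.
X = rng.random((1000, 3))    # stand-in observables from the grid
y = rng.random((1000, 2))    # stand-in targets (e.g., age, mass)

forest = RandomForestRegressor(n_estimators=256).fit(X, y)

# Once trained, inference costs only seconds per catalogue of stars.
age, mass = forest.predict([[0.5, 0.1, 0.9]])[0]
\end{verbatim}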
\subsection{Model Generation}
\label{sec:models}
We use the open-source 1D stellar evolution code \emph{Modules for Experiments in Stellar Astrophysics} \citep[MESA;][]{2011apjs..192....3p} to generate main-sequence stellar models from solar-like evolutionary tracks varied in initial mass $M$, helium $Y_0$, metallicity $Z_0$, mixing length parameter $\alpha_{\text{MLT}}$, overshoot coefficient $\alpha_{\text{ov}}$, and \mb{diffusion multiplication factor} $D$. \mb{The diffusion multiplication factor} serves to amplify or diminish the effects of diffusion, where a value of zero turn\mb{s} it off and a value of two double\mb{s} all velocities. The initial conditions are varied in the ranges ${M\in [0.7, 1.6]\;M_\odot}$, ${Y_0\in [0.22, 0.34]}$, ${Z_0\in [10^{-5}, 10^{-1}]}$ (varied logarithmically), ${\alpha_{\text{MLT}}\in [1.5, 2.5]}$, ${\alpha_{\text{ov}}\in [10^{-4}, 1]}$ (varied logarithmically), and ${D\in [10^{-6}, 10^2]}$ (varied logarithmically). We put a cut-off of $10^{-3}$ and $10^{-5}$ on $\alpha_{\text{ov}}$ and $D$, respectively, below which we consider them to be zero and \mb{disable them}. The initial parameters of each track are chosen in a quasi-random fashion so as to populate the initial-condition hyperspace as homogeneously and rapidly as possible (shown in Figure~\ref{fig:inputs}; see Appendix \ref{sec:grid} for more details).
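As an illustration of the quasi-random sampling, a low-discrepancy sequence such as Sobol (one common choice; the generator actually used for the grid is detailed in Appendix \ref{sec:grid}) can populate the six stated ranges as follows:
\begin{verbatim}
from scipy.stats import qmc

sampler = qmc.Sobol(d=6, scramble=True, seed=0)
u = sampler.random(512)          # points in the unit hypercube
# M, Y0, log10(Z0), alpha_MLT, log10(alpha_ov), log10(D)
lo = [0.7, 0.22, -5.0, 1.5, -4.0, -6.0]
hi = [1.6, 0.34, -1.0, 2.5,  0.0,  2.0]
grid = qmc.scale(u, lo, hi)      # rescale to the physical ranges
grid[:, [2, 4, 5]] = 10.0 ** grid[:, [2, 4, 5]]  # undo the logs
\end{verbatim}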
%\begin{figure*}
%\afterpage{
%\clearpage% To flush out all floats, might not be what you want
%\cleardoublepage
%\cleartoleftpage
% \clearpage% flush all other floats
% \ifodd\value{page}
% \else% uncomment this else to get odd/even instead of even/odd
% \expandafter\afterpage% put it on the next page if this one is odd
% \fi
\begin{landscape}
%\thispagestyle{lscape}
%\pagestyle{lscape}
\begin{figure}
\centering
%\includegraphics[width=\linewidth,keepaspectratio,natwidth=1812,natheight=1084, trim={0 2cm 0 2cm}, clip]{inputs.pdf}
\includegraphics[width=\linewidth, keepaspectratio]{inputs.png}
\caption[Initial conditions for evolutionary model grid]{(Caption on other page.)\label{fig:inputs}}
%\end{figure*}
\end{figure}
\end{landscape}
%\clearpage
%}
\begin{figure}
\contcaption{Scatterplot matrix (lower panels) and density plots (diagonal) of evolutionary track initial conditions considered. Mass ($M$), initial helium ($Y_0$), initial metallicity ($Z_0$), mixing length parameter ($\alpha_{\text{MLT}}$), overshoot ($\alpha_{\text{ov}}$), and diffusion multiplication factor ($D$) were varied in a quasi-random fashion to obtain a low-discrepancy grid of model tracks. Points are colored by their initial hydrogen ${X_0=1-Y_0-Z_0}$, with blue being \mb{high} $X_0$ (${\approx 78\%}$) and black being \mb{low} $X_0$ (${\approx 56\%}$). The parameter space is densely populated with evolutionary tracks of maximally different initial conditions.}
\end{figure}
We use MESA version r8118 with the Helmholtz-formulated equation of state that allows for radiation pressure and interpolates within the 2005 update of the OPAL EOS tables \citep{2002apj...576.1064r}. We assume a \citet{1998SSRv...85..161G} solar composition for our initial abundances and opacity tables. Since we restrict our study to the main sequence, we use an eight-isotope nuclear network consisting of $^1$H, $^3$He, $^4$He, $^{12}$C, $^{14}$N, $^{16}$O, $^{20}$Ne, and $^{24}$Mg. We use a step function for overshooting and set a scaling factor ${f_0 = \alpha_{\text{ov}}/5}$ to determine the radius ${r_0 = H_p \cdot f_0}$ inside the convective zone at which convection switches to overshooting, where $H_p$ is the pressure scale height. \mb{The overshooting parameter applies to all convective boundaries and is kept fixed throughout the course of a track's evolution, so a non-zero value does not imply that the model has a convective core at any specific age.} %We opt for a classical formulation of overshooting rather than more recent implementations with exponential decay.
All pre-main-sequence (PMS) models are calculated with a simple photospheric approximation, after which an Eddington $T-\tau$ atmosphere is appended at ZAMS. We define ZAMS as the point at which the nuclear luminosity of a model makes up $99.9\%$ of its total luminosity. We calculate atomic diffusion with gravitational settling and without radiative levitation on the main sequence using five diffusion class representatives: $^1$H, $^3$He, $^4$He, $^{16}$O, and $^{56}$Fe \citep{burgers1969flow}.\footnote{The atomic number of each representative isotope is used to calculate the diffusion rate of the other isotopes allocated to that group; see \citet{2011apjs..192....3p}.}
Following the most recent measurements, we update the MESA defaults for the gravitational constant (${G=6.67408\times 10^{-8}}$~\si{\cubic\cm\per\g\per\square\s}; \citealt{2015arXiv150707956M}), the gravitational mass of the Sun (${M_\odot = \mu G^{-1} = 1.988475\times 10^{33}}$~\si{\g}, where ${\mu = 1.32712440042\times 10^{11}}$~\si{\cubic\km\per\square\s} is the standard gravitational parameter of the Sun; \citealt{pitjeva2015determination}), and the solar radius (${R_\odot = 6.95568\times 10^{10}}$~\si{\cm}; \citealt{2008ApJ...675L..53H}).
Each track is evolved from ZAMS to either an age of ${\tau=16}$ Gyr or until terminal-age main sequence (TAMS), which we define as having a fractional core hydrogen abundance ($X_{\text{c}}$) below $10^{-3}$. Evolutionary tracks with efficient heavy-element settling can develop discontinuities in their surface abundances if they lack sufficient model resolution. We implement adaptive remeshing by recomputing any track with abundance discontinuities in its surface layers using finer spatial and temporal resolutions (see Appendix \mb{\ref{sec:remeshing}} for details). Running stellar physics codes in a batch mode like this requires care, so we manually inspect multiple evolutionary diagnostics to ensure that proper convergence has been achieved. %Hertzsprung-Russell, Kippenhahn, and Christensen-Dalsgaard diagrams of the evolutionary tracks to ensure that proper convergence has been achieved.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Seismology %%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Calculation of Seismic Parameters}
\label{sec:seis}
We use the ADIPLS pulsation package \citep{2008Ap&SS.316..113C} to compute p-mode oscillations up to spherical degree ${\ell=3}$ below the acoustic cut-off frequency. Our models contain on average around $4,000$ mesh points each, which provides adequate resolution to calculate frequencies without remeshing. We denote a frequency separation $S$ as the difference between a frequency $\nu$ of spherical degree $\ell$ and radial order $n$ and another frequency, that is:
\begin{equation}
S_{(\ell_1, \ell_2)}(n_1, n_2) \equiv \nu_{\ell_1}(n_1) - \nu_{\ell_2}(n_2).
\end{equation}
The large frequency separation is then
\begin{equation}
\Delta\nu_\ell(n) \equiv S_{(\ell, \ell)}(n, n-1)
\end{equation}
and the small frequency separation is
\begin{equation}
\delta\nu_{(\ell, \ell+2)}(n) \equiv S_{(\ell, \ell+2)}(n, n-1).
\end{equation}
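The following sketch expresses these definitions in code, using a small table of placeholder frequencies:
\begin{verbatim}
# Sketch of the separation definitions above. nu[l][n] maps spherical
# degree l and radial order n to a placeholder frequency in uHz.
nu = {
    0: {20: 2899.0, 21: 3034.0, 22: 3169.0},
    2: {19: 2889.0, 20: 3024.0, 21: 3159.0},
}

def S(l1, n1, l2, n2):
    """Generic separation: nu_{l1}(n1) - nu_{l2}(n2)."""
    return nu[l1][n1] - nu[l2][n2]

def Dnu(l, n):
    """Large frequency separation Dnu_l(n)."""
    return S(l, n, l, n - 1)

def dnu(l, n):
    """Small frequency separation dnu_{l,l+2}(n)."""
    return S(l, n, l + 2, n - 1)

print(Dnu(0, 21))  # 135.0
print(dnu(0, 21))  # 10.0
\end{verbatim}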
Near-surface layers of stars are poorly modeled, which induces systematic frequency offsets \citep[see e.g.][]{1999A&A...351..689R}. The ratios of the small frequency separations to the large frequency separations (Equation~\ref{eqn:LSratio}), and of five-point-averaged frequencies to the large frequency separations (Equation~\ref{eqn:rnl}), have been shown to be less sensitive to the surface term than the separations themselves and are therefore valuable asteroseismic diagnostics of stellar interiors \citep{2003A&A...411..215R}. They are defined as
\begin{equation}
\mathrm{r}_{(\ell,\ell+2)}(n) \equiv \frac{\delta\nu_{(\ell, \ell+2)}(n)}{\Delta\nu_{(1-\ell)}(n+\ell)} \label{eqn:LSratio}
\end{equation}
%and
\begin{equation}
\mathrm{r}_{(\ell, 1-\ell)}(n) \equiv \frac{\mathrm{dd}_{(\ell,1-\ell)}(n)}{\Delta\nu_{(1-\ell)}(n+\ell)} \label{eqn:rnl}
\end{equation}
where
\begin{align}
\mathrm{dd}_{0,1}(n) \equiv \frac{1}{8} \big[\nu_0(n-1) &- 4\nu_1(n-1)
+6\nu_0(n) \notag\\&- 4\nu_1(n) + \nu_0(n+1)\big]\\
\mathrm{dd}_{1,0}(n) \equiv -\frac{1}{8} \big[\nu_1(n-1) &- 4\nu_0(n)
+6\nu_1(n) \notag\\&- 4\nu_0(n+1) + \nu_1(n+1)\big].
\end{align}
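In code, these five-point averages and the ratios of Equations (\ref{eqn:LSratio}) and (\ref{eqn:rnl}) take the following form (a self-contained sketch with placeholder frequencies for ${\ell=0,1,2}$):
\begin{verbatim}
# Sketch of the frequency ratios; nu[l][n] holds placeholder
# frequencies in uHz for degrees l = 0, 1, 2.
nu = {
    0: {20: 2899.0, 21: 3034.0, 22: 3169.0},
    1: {20: 2964.0, 21: 3099.0, 22: 3234.0},
    2: {19: 2889.0, 20: 3024.0, 21: 3159.0},
}

Dnu = lambda l, n: nu[l][n] - nu[l][n - 1]      # large separation
dnu = lambda l, n: nu[l][n] - nu[l + 2][n - 1]  # small separation

def dd01(n):
    return (nu[0][n-1] - 4*nu[1][n-1] + 6*nu[0][n]
            - 4*nu[1][n] + nu[0][n+1]) / 8

def dd10(n):
    return -(nu[1][n-1] - 4*nu[0][n] + 6*nu[1][n]
             - 4*nu[0][n+1] + nu[1][n+1]) / 8

r02 = lambda n: dnu(0, n) / Dnu(1, n)    # Eq. (LSratio) with l = 0
r01 = lambda n: dd01(n) / Dnu(1, n)      # Eq. (rnl) with l = 0
r10 = lambda n: dd10(n) / Dnu(0, n + 1)  # Eq. (rnl) with l = 1

print(r02(21), r01(21), r10(21))
\end{verbatim}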
Since the set of radial orders that are observable differs from star to star, we collect global statistics on $\Delta\nu_0$, $\delta\nu_{0,2}$, $\delta\nu_{1,3}$, $r_{0,2}$, $r_{1,3}$, $r_{0,1}$, and $r_{1,0}$. We mimic the range of observable frequencies in our models by weighting all frequencies by their position in a Gaussian envelope centered at the predicted frequency of maximum oscillation power $\nu_{\max}$ and having full-width at half-maximum of ${0.66\cdot\nu_{\max}{}^{0.88}}$ as per the prescription given by \citet{2012A&A...537A..30M}. We then calculate the weighted median of each variable, which we denote with angle brackets (e.g.\ $\langle r_{0,2}\rangle$). We choose the median rather than the mean because it is a robust statistic with a high breakdown point, meaning that it is much less sensitive to the presence of outliers (for a discussion of breakdown points, see \citealt{hampel1971general}, who attributed them to Gauss). This approach allows us to predict the fundamental \mb{stellar} parameters of any solar-like oscillator with multiple observed modes irrespective of which exact radial orders have been detected. Illustrations of the methods used to derive the frequency separations and ratios of a stellar model are shown in Figure~\ref{fig:ratios}.
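A sketch of this weighting procedure, assuming the Gaussian envelope and weighted median just described, might read:
\begin{verbatim}
# Sketch: weight each separation/ratio by a Gaussian envelope centred
# on nu_max with FWHM = 0.66 * nu_max**0.88, then take the weighted
# median. All inputs are placeholders.
import numpy as np

def weighted_median(values, weights):
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return v[np.searchsorted(cdf, 0.5)]

def envelope_median(freqs, values, nu_max):
    fwhm = 0.66 * nu_max**0.88
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    w = np.exp(-0.5 * ((np.asarray(freqs) - nu_max) / sigma)**2)
    return weighted_median(values, w)

freqs = np.linspace(2500., 3700., 10)   # placeholder frequencies [uHz]
seps = 135. + 0.005 * (freqs - 3090.)   # placeholder Dnu_0 values [uHz]
print(envelope_median(freqs, seps, nu_max=3090.))  # <Dnu_0>
\end{verbatim}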
\afterpage{
%\clearpage% To flush out all floats, might not be what you want
\begin{landscape}
%\thispagestyle{lscape}
%\pagestyle{lscape}
\begin{figure}
%\begin{figure*}
\centering
\includegraphics[width=0.45\linewidth,keepaspectratio,natwidth=251,natheight=150]{solar-test-Dnu0.pdf}%
\includegraphics[width=0.45\linewidth,keepaspectratio,natwidth=251,natheight=150]{solar-test-dnu02.pdf}\\
\includegraphics[width=0.45\linewidth,keepaspectratio,natwidth=251,natheight=150]{solar-test-r02.pdf}%
\includegraphics[width=0.45\linewidth,keepaspectratio,natwidth=251,natheight=150]{solar-test-r01.pdf}\\
\caption[Seismic parameters of a stellar model]{Calculation of seismic parameters for a stellar model. %A simulated power spectrum is shown at the top with examples of $\Delta\nu_0$, $dd_{0,1}$, $\delta\nu_{0,2}$, and $\delta\nu_{1,3}$.
The large and small frequency separations $\Delta\nu_0$ (top left) and $\delta\nu_{0,2}$ (top right) and frequency ratios $r_{0,2}$ (bottom left) and $r_{0,1}$ (bottom right) are shown as a function of frequency. The vertical dotted line in these bottom four plots indicates $\nu_{\max}$. Points are sized and colored proportionally to the applied weighting\mb{, with large blue symbols indicating high weight and small red symbols indicating low weight.} }%
\label{fig:ratios}
%\end{figure*}
\end{figure}
\end{landscape}
}
\subsection{Training the Random Forest} \label{sec:forest}
We train a random forest regressor on our matrix of evolutionary models to discover the relations that facilitate inference of stellar parameters from \mb{observed} quantities. A schematic representation of the topology of our random forest regressor can be seen in Figure~\ref{fig:rf}. \mb{Random forests arise in machine learning through the family of algorithms known as CART, i.e. Classification and Regression Trees.} There are several good textbooks that discuss random forests \citep[see e.g.][Chapter 15]{hastie2005elements}. \mb{A random forest is an ensemble regressor, meaning that it is composed of many individual components that each perform statistical regression, and the forest subsequently averages over the results from each component \citep{breiman2001random}. The components of the ensemble are decision trees, each of which learns a set of decision rules for relating \mb{observable quantities} to \mb{stellar parameters}. An ensemble approach is preferred because using only a single decision tree that is able to see all of the training data may result in a regressor that has memorized the training data and is therefore unable to generalize to as yet unseen values. This undesirable phenomenon is known in machine learning as over-fitting, and is analogous to fitting $n$ data points using a degree $n$ polynomial: the fit will work perfectly on the data that was used for fitting, but fail badly on any unseen data. To avoid this, each decision tree in the forest is given a random subset of the evolutionary models and a random subset of the observable quantities from which to build a set of rules relating observed quantities to stellar parameters. This process, known as statistical bagging \citep[][Section~8.7]{hastie2005elements}, prevents the collection of trees from becoming over-fit to the training data, and thus results in a regression model that is capable of generalizing the information it has learned and predicting values for data on which it has not been trained. }
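The following minimal sketch shows how such a bagged ensemble might be trained and queried; \texttt{scikit-learn} is used here as an assumed stand-in for the actual implementation, and the data are random placeholders rather than the model grid:
\begin{verbatim}
# Sketch: bagged ensemble of (extremely randomized) decision trees.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 7))  # placeholder observables, row per model
y = rng.uniform(size=(5000, 2))  # placeholder targets, e.g. age and mass

# bootstrap=True gives each tree a random subset of the models;
# max_features='sqrt' gives each split a random subset of the features
forest = ExtraTreesRegressor(n_estimators=256, bootstrap=True,
                             max_features='sqrt', random_state=0).fit(X, y)

x_star = X[:1]  # "observations" of one star
per_tree = np.array([t.predict(x_star) for t in forest.estimators_])
print(per_tree.mean(axis=0), per_tree.std(axis=0))  # prediction, scatter
\end{verbatim}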
\afterpage{
\begin{figure*}[ht]
\centering
\begin{adjustwidth*}{}{-0.35cm}
\centering
\input{random_forest.tex}
\end{adjustwidth*}
%\includegraphics[natwidth=393,natheight=393]{random_forest.pdf}
%\includegraphics[natwidth=413, natheight=392, trim={0.84cm 0 0 0}, clip]{random_forest.pdf}
\caption[Random Forest]{A schematic representation of a random forest regressor for inferring fundamental stellar parameters. \mb{Observable quantities} such as \mb{$T_{\text{eff}}$ and [Fe/H]} and global asteroseismic quantities like \mb{$\langle\Delta\nu\rangle$ and} $\langle\delta\nu_{0,2}\rangle$ are input on the left side. These quantities are then fed through to some number of hidden decision trees, which each independently predict \mb{parameters} like age and mass. The predictions are then averaged and output on the right side. All inputs and outputs are optional. For example, surface gravities, luminosities, and radii are not always available \mb{from observations} (e.g.\ with the KOI stars\mb{, see Section~\ref{sec:koi} below}). In their absence, these quantities can be predicted instead of being supplied. In this case, those nodes can be moved over to the ``prediction'' side instead of being on the ``observations'' side. Also, in addition to potentially unobserved inputs like stellar radii, other interesting model parameters can be predicted as well, such as core hydrogen mass fraction or surface helium abundance. \label{fig:rf} }
\end{figure*}
}
\subsubsection*{Feature Importance} \label{sec:importances}
\mb{The CART algorithm uses} information theory to decide which rule is the best choice for inferring \mb{stellar parameters} like age and mass from the supplied information \citep[][Chapter 9]{hastie2005elements}. At every stage, the rule that creates the largest decrease in mean squared error (MSE) is crafted. A rule may be, for example, ``all models with ${L <0.4\;L_\odot}$ have ${M< 1\;M_\odot}$.'' Rules are created until every \mb{stellar model} that was supplied to that particular tree is fully explained by a sequence of decisions. We moreover use a variant on random forests known as \emph{extremely} randomized trees \citep{geurts2006extremely}, which further randomizes both the attribute on which to split (e.g.\ $L$) and the location of the cut-point (e.g.\ ${0.4\;L_\odot}$) used when creating decision rules.
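The split-selection criterion can be made concrete with a toy sketch that scans one feature for the cut-point producing the largest decrease in MSE (the values below are placeholders):
\begin{verbatim}
# Toy sketch of the CART criterion: pick the threshold on a feature
# that most reduces the mean squared error of the predicted target.
import numpy as np

def best_split(x, y):
    base = np.var(y)  # MSE of simply predicting the mean
    best_cut, best_gain = None, 0.0
    for cut in np.unique(x)[:-1]:
        left, right = y[x <= cut], y[x > cut]
        mse = (len(left)*np.var(left) + len(right)*np.var(right)) / len(y)
        if base - mse > best_gain:
            best_cut, best_gain = cut, base - mse
    return best_cut, best_gain

L = np.array([0.2, 0.3, 0.5, 1.0, 2.0])  # luminosities [L_sun]
M = np.array([0.8, 0.9, 1.0, 1.1, 1.3])  # masses [M_sun]
print(best_split(L, M))  # e.g. a rule of the form "L <= cut"
\end{verbatim}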
\mb{The process of constructing a random forest} presents an opportunity not only for inferring stellar parameters from observations, but also for understanding the relationships that exist in the \mb{stellar models}. Each decision tree explicitly ranks the relative ``importance'' of each observable quantity \mb{for inferring stellar parameters}, where importance is defined in terms of both the reduction in MSE after defining a decision rule based on that quantity and the number of models that use that rule. \mb{In machine learning, the variables that have been measured and are supplied as inputs to the algorithm are known as ``features.'' Figure~\ref{fig:importances} shows a feature importance plot, i.e.~distributions of relative importance over all of the trees in the forest for each feature used to infer stellar parameters. The features that are used most often to construct decision rules are metallicity and temperature, which are each significantly more important features than the rest.} The importance of [Fe/H] is due to the fact that the determinations of quantities like $Z_0$ and $D$ depend nearly entirely on it \citep[see also][]{2017apj...839..116a}. Note that importance does not indicate indispensability: an appreciable fraction of decision rules being made based on \mb{one feature} does not mean that another forest without that \mb{feature} would not perform just as well. That being said, these results indicate that the best place to improve measurements would be in metallicity determinations, because for stars being predicted using this random forest, less precise metallicities mean exploring many more paths through the trees and hence arriving at less certain predictions.
For many stars, \mb{stellar quantities} such as radii, luminosities, surface gravities, and/or oscillation modes with spherical degree ${\ell=3}$ are not available from observations. For example, the KOI data set \mb{(see Section~\ref{sec:koi} below)} lacks all of this information, and the hare-and-hound exercise data \mb{(see Section~\ref{sec:hnh} below)} lack all of these except luminosities. We therefore must train random forests that predict those quantities instead of using them as \mb{features}. We show the relative importance for \mb{the remaining features that were} used to train these forests in Figure~\ref{fig:importances2}. When ${\ell=3}$ modes and luminosities are omitted, effective temperature jumps in importance and ties with [Fe/H] as the most \mb{important feature}.
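For reference, per-tree importances like those summarized in Figures~\ref{fig:importances} and \ref{fig:importances2} can be extracted as in the following self-contained sketch (synthetic data and hypothetical feature names):
\begin{verbatim}
# Sketch: distributions of feature importance across a forest's trees.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 4))
y = X @ [0.1, 0.6, 0.2, 0.1] + 0.01 * rng.standard_normal(2000)
forest = ExtraTreesRegressor(n_estimators=256, random_state=0).fit(X, y)

names = ['Teff', 'FeH', 'Dnu0', 'dnu02']  # hypothetical features
imp = np.array([t.feature_importances_ for t in forest.estimators_])
for name, dist in zip(names, imp.T):
    lo, med, hi = np.percentile(dist, [16, 50, 84])
    print(f'{name}: {med:.2f} (16%-84%: {lo:.2f}-{hi:.2f})')
\end{verbatim}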
\begin{figure}
\centering
\includegraphics[width=300.887892pt,%0.451\linewidth, %
keepaspectratio,natwidth=251,natheight=300]{importances-perturb.pdf}
\caption[Feature Importances]{Box-and-whisker plots of relative importance for each observable feature in inferring fundamental stellar parameters as measured by a random forest regressor grown from a grid of evolutionary models. The boxes display the $16\%$ and $84\%$ quantiles of feature importance over all trees, the center line indicates the median, and the whiskers extend to the most extreme values.}
\label{fig:importances}
\end{figure}
\afterpage{
%\clearpage% To flush out all floats, might not be what you want
%\clearpage% flush all other floats
%\ifodd\value{page}
%\else% uncomment this else to get odd/even instead of even/odd
% \expandafter\afterpage% put it on the next page if this one is odd
%\fi
\begin{landscape}
%\thispagestyle{lscape}
%\pagestyle{lscape}
\begin{figure}
%\begin{figure*}[!hbtp]
\centering
\includegraphics[width=0.45\linewidth,keepaspectratio,natwidth=251,natheight=300]{importances-hares.pdf}%\hfill
\includegraphics[width=0.45\linewidth, keepaspectratio,natwidth=251,natheight=300]{importances-kages.pdf}
\caption[Feature Importances (Hare-and-Hound, KAGES)]{Box-and-whisker plots of relative importance for each feature in measuring fundamental stellar parameters for the hare-and-hound exercise data (left), where luminosities are available; and the \emph{Kepler} objects-of-interest (right), where they are not. Octupole (${\ell=3}$) modes have not been measured in any of these stars, so ${\langle\delta\nu_{1,3}\rangle}$ and ${\langle r_{1,3}\rangle}$ from evolutionary modelling are not supplied to these random forests. The boxes are sorted by median importance.%\vspace*{4mm}
\label{fig:importances2} }
%\end{figure*}
\end{figure}
\end{landscape}
}
\subsubsection*{Advantages of CART}
We choose random forests over any of the many other non-linear regression routines (e.g.\ neural networks, support vector regression, etc.) for several reasons.
First, random forests perform \emph{constrained} regression; that is, they only make predictions within the boundaries of the supplied training data \citep[see e.g.][Section~9.2.1]{hastie2005elements}. This is in contrast to other methods like neural networks, which ordinarily perform unconstrained regression and are therefore not prevented from predicting non-physical quantities such as negative masses or from violating conservation requirements.
Secondly, due to the decision-rule process described above, random forests are insensitive to the scale of the data. Unless care is taken, other regression methods will artificially weight some observable \mb{quantities} like temperature as being more important than, say, luminosity, solely because temperatures are written using larger numbers (e.g., $5777$ vs.\ $1$; see for example Section 11.5.3 of \citealt{hastie2005elements} for a discussion).
Consequently, solutions obtained by other methods will change if they are \mb{run using features that are} expressed using different units of measure.
For example, other methods will produce different regressors if trained on luminosity values expressed in solar units versus values expressed in \mb{ergs per second}, whereas random forests will not. \mb{Commonly, this problem is mitigated in other methods by means of variable standardization and through the use of Mahalanobis distances \citep{mahalanobis1936generalized}. However, these transformations are arbitrary, and handling variables naturally without rescaling is thus preferred.}
Thirdly, random forests take only seconds to train, which can be a large benefit if different stars have different \mb{features} available. For example, some stars have luminosity information available whereas others do not, so a different regressor must be trained for each. In the extreme case, if one wanted to make predictions for stars using all of their respectively observed frequencies, one would need to train a new regressor for each star using the subset of simulated frequencies that correspond to the ones observed for that star. Ignoring the difficulties of surface-term corrections and mode identification, such an approach would be well-handled by random forests, incurring only a small performance overhead owing to their low training cost. On the other hand, it would be infeasible to do this on a star-by-star basis with most other routines such as deep neural networks, because \mb{those} methods can take days or even weeks to train.
And finally\mb{, as we saw in the previous section,} random forests provide the opportunity to extract insight about the actual regression being performed by examining the importance of each \mb{feature} in making predictions.
\subsubsection*{Uncertainty}
\label{sec:uncertainties}
There are three separate sources of uncertainty in predicting stellar parameters. The first is the systematic uncertainty in the physics used to model stars. These uncertainties are unknown, however, and hence cannot be propagated. The second is the uncertainty belonging to the observations of the star. We propagate measurement uncertainties $\sigma$ into the predictions by perturbing all measured quantities ${n=10,000}$ times with normal noise having zero mean and standard deviation $\sigma$. We account for the covariance between asteroseismic separations and ratios by recalculating them upon each perturbation.
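A sketch of this Monte Carlo propagation (self-contained, with placeholder data standing in for both the grid and the star) might read:
\begin{verbatim}
# Sketch: propagate measurement uncertainties by perturbing the
# observed quantities n = 10,000 times and predicting on each
# perturbation. All values are placeholders.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(size=(5000, 4))  # placeholder grid features
y_train = rng.uniform(size=5000)       # placeholder target, e.g. age
forest = ExtraTreesRegressor(n_estimators=256).fit(X_train, y_train)

obs   = np.array([0.5, 0.5, 0.5, 0.5])      # measured values
sigma = np.array([0.05, 0.02, 0.01, 0.01])  # measurement uncertainties

n = 10_000
perturbed = obs + sigma * rng.standard_normal((n, obs.size))
# in the actual pipeline, separations and ratios are recomputed from
# the perturbed frequencies here, carrying their covariances through

predictions = forest.predict(perturbed)
print(predictions.mean(), predictions.std())
\end{verbatim}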
The final source is regression uncertainty. Fundamentally, each parameter can only be constrained to the extent that the observations bear information pertaining to that parameter. Even if observations were error-free, there may still be a limit to how much the information gleaned from the surface can tell us about the physical \mb{qualities} and evolutionary history of a star. We quantify those limits via cross-validation: we train the random forest on only a subset of the simulated evolutionary tracks and make predictions on a held-out validation set. We randomly hold out a different subset of the tracks $25$ times to serve as different validation sets and obtain averaged accuracy scores.
We calculate accuracies using several scores. The first is the explained variance score V$_{\text{e}}$:
\begin{equation}
\text{V}_{\text{e}} = 1 - \frac{\text{Var}\{ y - \hat y \}}{\text{Var}\{ y \}}
\end{equation}
where $y$ is the \mb{true} value we want to predict from the validation set (e.g.\ stellar mass), $\hat y$ is the predicted value from the random forest, and Var is the variance, i.e.\ the square of the standard deviation. This score tells us the extent to which the regressor has reduced the variance in the parameter it is predicting. The value ranges from negative infinity, which would be obtained by a pathologically bad predictor, to one for a perfect predictor, which occurs when all of the values are predicted with zero error.
The next score we consider is the residuals of each prediction, i.e.\ the absolute difference between the true value $y$ and the predicted value $\hat y$. Naturally, we want this value to be as low as possible. We also consider the precision of the regression $\hat \sigma$ by taking the standard deviation of predictions across all of the decision trees in the forest. Finally, we consider these scores together by calculating the distance of the residuals in units of precision, i.e.\ ${\abs{\hat y - y} / \hat{\sigma}}$.
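These scores can be computed on a held-out validation set as in the following sketch (the explained variance is also available in \texttt{scikit-learn} as \texttt{sklearn.metrics.explained\_variance\_score}):
\begin{verbatim}
# Sketch: validation scores for one predicted parameter.
# y: true values; y_hat: forest predictions; per_tree: predictions of
# each individual tree, with shape (n_trees, n_stars).
import numpy as np

def validation_scores(y, y_hat, per_tree):
    Ve = 1 - np.var(y - y_hat) / np.var(y)  # explained variance
    resid = np.abs(y - y_hat)               # accuracy
    sigma_hat = per_tree.std(axis=0)        # precision across trees
    dist = resid / sigma_hat                # residuals per precision
    return Ve, resid.mean(), sigma_hat.mean(), dist.mean()
\end{verbatim}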
Figure~\ref{fig:evaluation-tracks} shows these accuracies as a function of the number of evolutionary tracks used in the training of the random forest. Since the residuals and standard deviations of each parameter are incomparable, we normalize them by dividing by the maximum value. We also consider the number of trees in the forest and the number of models per evolutionary track\mb{. In this work, we use $256$ trees in each forest, which we have selected via cross-validation by choosing a number of trees that is greater than the point at which we saw that the explained variance was no longer increasing greatly;} see Appendix \ref{sec:evaluation} for an extended discussion.
\afterpage{
%\clearpage% To flush out all floats, might not be what you want
%\clearpage% flush all other floats
%\ifodd\value{page}
%\else% uncomment this else to get odd/even instead of even/odd
% \expandafter\afterpage% put it on the next page if this one is odd
%\fi
\begin{landscape}
%\thispagestyle{lscape}
%\pagestyle{lscape}
\begin{figure}
%\begin{figure*}
\vspace*{-1cm}
\centering
%\includegraphics[width=0.66\linewidth,keepaspectratio,natwidth=3526,natheight=138, trim={0 10cm 0 10cm}, clip]{legend.pdf}\\
\includegraphics[width=0.6\linewidth,keepaspectratio,natwidth=3526,natheight=138]{legend.png}\\
\includegraphics[width=0.5\linewidth,keepaspectratio,natwidth=251,natheight=150]{num_tracks-ev.pdf}%\hfill
\includegraphics[width=0.5\linewidth,keepaspectratio,natwidth=251,natheight=150]{num_tracks-dist.pdf}\\
\includegraphics[width=0.5\linewidth,keepaspectratio,natwidth=251,natheight=150]{num_tracks-diff.pdf}%\hfill
\includegraphics[width=0.5\linewidth,keepaspectratio,natwidth=251,natheight=150]{num_tracks-sigma.pdf}%\\
%\end{figure*}
%\end{figure}
%\begin{figure}
% \centering
\caption[Evaluations of regression accuracy]{Evaluations of regression accuracy. Explained variance (top left), accuracy per precision distance (top right), normalized absolute error (bottom left), and normalized uncertainty (bottom right) for each stellar parameter as a function of the number of evolutionary tracks used in training the random forest. These results use $64$ models per track and $256$ trees in the random forest. \label{fig:evaluation-tracks}}
\end{figure}
\end{landscape}
}
When supplied with enough \mb{stellar models}, the random forest reduces the variance in each parameter and is able to make precise inferences. The forest has very high predictive power for most \mb{parameters}, and as a result, essentially all of the uncertainty when predicting quantities such as stellar radii and luminosities will stem from observational uncertainty. However, for some model \mb{parameters}---most notably the mixing length parameter---there is still a great deal of variance in the residuals. Prior to \mb{the point where the regressor has been trained on} about $500$ evolutionary tracks, the differences between the true and predicted mixing lengths actually have a greater variance than just the true mixing lengths themselves. Likewise, the diffusion multiplication factor is difficult to constrain because a star can achieve the same present-day [Fe/H] by either having a large initial non-hydrogen abundance and a large diffusion \mb{multiplication} factor, or by having the same initial [Fe/H] as present [Fe/H] but with diffusion disabled. These difficult-to-constrain \mb{parameters} will therefore be predicted with substantial \mb{uncertainties} regardless of the precision of the observations.
%On the other hand, the accuracy per precision score shows that the regressor is well-calibrated: in the average case, the values are predicted within one $\hat\sigma$ as desired. In other words, the uncertainties for all of the parameters---even for the ones that cannot be constrained very well---are still appropriately gauged by the regressor.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Results %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Results}
We perform three tests of our method. We begin with a hare-and-hound simulation exercise to show that we can reliably recover parameters. We then move to the Sun and the solar-like stars 16~Cyg~A \& B, which have been the subjects of many investigations; and we conclude by applying our method to $34$ \emph{Kepler} objects-of-interest. In each case, we train our random forest regressor on the subset of observational data that is available for the stars being processed. In the case of the Sun and 16~Cygni, the radii, luminosities, and surface gravities are known very accurately. For other stars, we will predict this information instead of supplying it.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Hare and Hound %%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Hare and Hound}
\label{sec:hnh}
We performed a blind hare-and-hound exercise to evaluate the performance of our predictor. Author S.B.\ prepared twelve models varied in mass, initial chemical composition, and mixing length parameter, with overshooting and atomic diffusion each included in only some of the models. The models were evolved without rotation using the Yale rotating stellar evolution code \citep[YREC;][]{2008ApSS.316...31D}, which is a different evolution code than the one that was used to train the random forest. Effective temperatures, luminosities, [Fe/H] and $\nu_{\max}$ values as well as ${\ell=0,1,2}$ frequencies were obtained from each model. Author G.C.A.\ perturbed the ``observations'' of these models according to the scheme devised by \citet{spaceinn}. \mb{Appendix \ref{sec:hare-and-hound} lists the true values and the perturbed observations of the hare-and-hound models}. The perturbed observations and their uncertainties were given to author E.P.B.\@, who used the described method to recover the stellar parameters of \mb{these} models without being given access to the true values. Relative differences between the true and predicted ages, masses, and radii for these models are plotted against their true values in Figure~\ref{fig:hare-comparison}. The method is able to recover the true model values within uncertainties even when they have been perturbed by noise. We do not compare the predicted mixing length parameter, overshooting parameter, or diffusion \mb{multiplication} factor \mb{because the interpretation of these parameters depends on how they have been defined and on their precise implementation.}%they depend on the physics of the evolutionary code used, e.g.\ the choice in stellar atmosphere.
\afterpage{
%\clearpage% To flush out all floats, might not be what you want
\begin{landscape}
%\thispagestyle{lscape}
%\pagestyle{lscape}
\begin{figure}
%\begin{figure}
\centering
%\includegraphics[width=0.5\linewidth,keepaspectratio]{basu-Age-rel.pdf}\hfill
%\includegraphics[width=0.5\linewidth,keepaspectratio]{basu-Mass-rel.pdf}\\
%\includegraphics[width=0.5\linewidth,keepaspectratio]{basu-Radius-rel.pdf}
\includegraphics[width=0.487\linewidth,keepaspectratio,natwidth=251,natheight=150]{basu-Age-rel.pdf}%
\includegraphics[width=0.487\linewidth,keepaspectratio,natwidth=251,natheight=150]{basu-Mass-rel.pdf}\\
\includegraphics[width=0.487\linewidth,keepaspectratio,natwidth=251,natheight=150]{basu-Radius-rel.pdf}
\caption[Hare-and-hound results]{Relative differences between the predicted and true values for age (top left), mass (top right), and radius (bottom) as a function of the true values in a hare-and-hound simulation exercise. \vspace*{5mm} %Predicted values of age, mass, and radius plotted against the true values in a hare-and-hound simulation exercise.
\label{fig:hare-comparison}}
\end{figure}
\end{landscape}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% The Sun & 16~Cygni %%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{The Sun and the 16~Cygni System}
To ensure confidence in our predictions on \emph{Kepler} data, we first degrade the frequencies of the Sun at solar minimum that were obtained by the Birmingham Solar-Oscillations Network \citep[BiSON;][]{2014MNRAS.439.2025D} to the level of information that is achievable by the \emph{Kepler} spacecraft. We also degrade the \mb{other} solar observations by adopting 16~Cyg~B's uncertainties on effective temperature, luminosity, surface gravity, metallicity, $\nu_{\max}$, radius, and radial velocity. Finally, we perturb each value with random Gaussian noise according to its uncertainty to reflect the fact that the measured value of an uncertain observation is not \emph{per se} the true value. We use the random forest whose feature importances were shown in Figure~\ref{fig:importances} to predict the values of the Sun; i.e.\ the random forest trained on effective temperatures, metallicities, luminosities, surface gravities, radii, and global asteroseismic \mb{quantities} $\langle \Delta\nu_0 \rangle$, $\langle \delta\nu_{0,2} \rangle$, $\langle \delta\nu_{1,3} \rangle$, $\langle r_{0,2} \rangle$, $\langle r_{1,3} \rangle$, $\langle r_{0,1} \rangle$, and $\langle r_{1,0} \rangle$. We show in Figure~\ref{fig:corner} the densities for the predicted mass, initial composition, mixing length parameter, overshoot coefficient, and diffusion multiplication factor needed for fitting an evolutionary model to degraded data of the Sun as well as the predicted solar age, core hydrogen abundance, and surface helium abundance. \mb{As discussed in Section~\ref{sec:uncertainties}, these densities show the distributions resulting from running $10,000$ different noise perturbations fed through the random forest.} Relative uncertainties ${\epsilon=100\cdot\sigma/\mu}$ are also indicated, where $\mu$ is the mean and $\sigma$ is the standard deviation of the quantity being predicted. Our predictions are in good agreement with the known values (see also Table~\ref{tab:results} and Table~\ref{tab:results-ca}, and \emph{cf}.~Equation~\ref{eq:solar-cal-vals}).
Several parameters show multimodality due to model degeneracies. For example, two solutions for the initial helium are present. This is because it covaries with the mixing length parameter: the peak of \mb{higher} $Y_0$ corresponds to the peak of \mb{lower} $\alpha_{\text{MLT}}$ and vice versa. Likewise, high values of surface helium correspond to low values of the diffusion \mb{multiplication} factor.
Effective temperatures, surface gravities, and metallicities of 16~Cyg~A~and~B were obtained from \citet{2009A&A...508L..17R}; radii and luminosities from \citet{2013MNRAS.433.1262W}; and frequencies from \citet{2015MNRAS.446.2959D}. We obtained the radial velocity measurements of 16~Cyg~A~and~B from \citet{2002ApJS..141..503N} and corrected frequencies for Doppler shifting as per the prescription in \citet{2014MNRAS.445L..94D}. We tried with and without line-of-sight corrections and found that this correction did not affect the predicted quantities or their uncertainties. We use the same random forest as we used for the degraded solar data to predict the \mb{parameters} of these stars. The initial parameters---masses, chemical compositions, mixing lengths, diffusion \mb{multiplication} factors, and overshoot coefficients---for 16~Cygni as predicted by machine learning \mb{are shown} in Table~\ref{tab:results}, and the predicted current parameters---age, surface helium and core hydrogen abundances---\mb{are shown} in Table~\ref{tab:results-ca}. For reference, we also show the predicted solar values from these inputs. These results support the hypothesis that 16~Cyg~A~and~B were co-natal; i.e.\ they formed at the same time with the same initial composition.
We additionally predict the radii and luminosities of 16~Cyg~A~and~B instead of using them as \mb{features}. Figure~\ref{fig:interferometry} shows our inferred radii, luminosities and surface helium abundances of 16~Cyg~A~and~B plotted \mb{along with} the values determined by interferometry \citep{2013MNRAS.433.1262W} and an asteroseismic estimate \citep{2014ApJ...790..138V}. Here again we find excellent agreement between our method and the measured values.
\citet{2015ApJ...811L..37M} performed detailed modelling of 16~Cyg~A~and~B using the Asteroseismic Modeling Portal (AMP), a genetic algorithm for matching individual frequencies of stars to stellar models. They calculated their results without heavy-element diffusion (i.e.\ with helium-only diffusion) and without overshooting. In order to account for systematic uncertainties, they multiplied the spectroscopic uncertainties of 16~Cyg~A~and~B by an arbitrary constant ${C=3}$. Therefore, in order to make a fair comparison between the results of our method and theirs, we generate a new matrix of evolutionary models with those same conditions and also increase the uncertainties on [Fe/H] by a factor of $C$. In Figure~\ref{fig:16Cyg-hist}, we show probability densities of the predicted parameters of 16~Cyg~A~and~B that we obtain using machine learning in comparison with the results obtained by AMP. We find the values and uncertainties agree well. To perform their analysis, AMP required more than $15,000$ hours of CPU time to model 16~Cyg~A~and~B using the world's 10th fastest supercomputer, the Texas Advanced Computing Center Stampede \citep{TOP500}. Here we have obtained comparable results in roughly one minute \mb{on a computing cluster with $64$ $2.5$~GHz cores} using only global asteroseismic \mb{quantities} and no individual frequencies. Although more computationally expensive than our method, detailed optimization codes like AMP do have advantages in that they are additionally able to obtain detailed structural models of stars. % as well as characterize stars beyond the main sequence, where our method has not yet been extended.
\afterpage{
%\thispagestyle{lscape}
%\clearpage
%\cleartoleftpage%\cleardoublepage
%\clearpage% flush all other floats
%\ifodd\value{page}
%\else% uncomment this else to get odd/even instead of even/odd
% \expandafter\afterpage% put it on the next page if this one is odd
%\fi
\begin{landscape}
\begin{figure}
\vspace*{-1cm}
\centering
\includegraphics[width=\linewidth,keepaspectratio,natwidth=1004,natheight=601]{Tagesstern.pdf}
%\end{figure}
%\newpage
%\begin{figure}
\caption[Posterior distributions for degraded solar data]{Predictions from machine learning of initial (top six) and current (bottom three) stellar parameters for degraded solar data. Labels are placed at the mean and $3\sigma$ levels. \mb{Dashed and dot-dashed} lines indicate the median and quartiles\mb{, respectively}. Relative uncertainties $\epsilon$ are shown beside each plot. Note that the overshoot parameter applies to all convective boundaries and is not modified over the course of evolution, so a non-zero value does not imply a convective core. %\vspace*{10mm}
\label{fig:corner} }
\end{figure}
\end{landscape}
}
\afterpage{
%\clearpage
%\begin{landscape}
\begin{table} \centering %\scriptsize
\caption{Means and standard deviations for predicted initial stellar parameters of the Sun (degraded data) and 16~Cyg~A~and~B.
\label{tab:results}}
\hspace*{-1.7cm}\begin{tabular}{ccccccc}
Name & $M/M_\odot$ & $Y_0$ & $Z_0$ & $\alpha_{\mathrm{MLT}}$ & $\alpha_{\mathrm{ov}}$ & $D$ \\ \hline \hline
Sun & 1.00 $\pm$ 0.012 & 0.270 $\pm$ 0.0062 & 0.020 $\pm$ 0.0014 & 1.88 $\pm$ 0.078 & 0.06 $\pm$ 0.015 & 3.7 $\pm$ 3.18 \\
16~Cyg~A & 1.08 $\pm$ 0.016 & 0.262 $\pm$ 0.0073 & 0.022 $\pm$ 0.0014 & 1.86 $\pm$ 0.077 & 0.07 $\pm$ 0.028 & 0.9 $\pm$ 0.76 \\
16~Cyg~B & 1.03 $\pm$ 0.015 & 0.268 $\pm$ 0.0065 & 0.021 $\pm$ 0.0015 & 1.83 $\pm$ 0.069 & 0.11 $\pm$ 0.029 & 1.9 $\pm$ 1.57
\\ \hline \end{tabular}
\end{table}
\begin{table} \centering %\scriptsize
\caption{Means and standard deviations for predicted current-age stellar \mb{parameters} of the Sun (degraded data) and 16~Cyg~A~and~B. \label{tab:results-ca}}
\begin{tabular}{cccc}
Name & $\tau/$Gyr & $X_{\mathrm{c}}$ & $Y_{\mathrm{surf}}$ \\ \hline \hline
Sun & 4.6 $\pm$ 0.20 & 0.34 $\pm$ 0.027 & 0.24 $\pm$ 0.017 \\
16~Cyg~A & 6.9 $\pm$ 0.40 & 0.06 $\pm$ 0.024 & 0.246 $\pm$ 0.0085 \\
16~Cyg~B & 6.8 $\pm$ 0.28 & 0.15 $\pm$ 0.023 & 0.24 $\pm$ 0.017
\\ \hline \end{tabular}
\end{table}
%\end{landscape}
}
\afterpage{
%\clearpage
\begin{landscape}
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth, keepaspectratio,natwidth=251,natheight=150]{cyg-radius.pdf}%
\includegraphics[width=0.45\linewidth, keepaspectratio,natwidth=251,natheight=150]{cyg-L.pdf}\\
\includegraphics[width=0.45\linewidth, keepaspectratio,natwidth=251,natheight=150]{cyg-Y_surf.pdf}
%\includegraphics[width=0.5\linewidth, keepaspectratio,natwidth=251,natheight=150]{cyg-radius.pdf}\hfill
%\includegraphics[width=0.5\linewidth, keepaspectratio,natwidth=251,natheight=150]{cyg-L.pdf}\\
%\includegraphics[width=0.5\linewidth, keepaspectratio,natwidth=251,natheight=150]{cyg-Y_surf.pdf}
\caption[Posterior distributions for 16~Cygni (radius, luminosity, surface helium abundance)]{Probability densities for predictions of 16~Cyg~A (red) and B (blue) from machine learning of radii (top left), luminosities (top right), and surface helium abundances (bottom). Relative uncertainties $\epsilon$ are shown beside each plot. Predictions and $2\sigma$ uncertainties from interferometric (``int'') measurements and asteroseismic (``ast'') estimates are shown with arrows.}
\label{fig:interferometry}
\end{figure}
\end{landscape}
}
\afterpage{
%\clearpage
\begin{landscape}
\begin{figure}
\centering
\includegraphics[width=0.47\linewidth, keepaspectratio,natwidth=251,natheight=150]{cyg-age.pdf}\hfill
\includegraphics[width=0.47\linewidth, keepaspectratio,natwidth=251,natheight=150]{cyg-M.pdf}\\
\includegraphics[width=0.47\linewidth, keepaspectratio,natwidth=251,natheight=150]{cyg-Y.pdf}\hfill
\includegraphics[width=0.47\linewidth, keepaspectratio,natwidth=251,natheight=150]{cyg-Z.pdf}
\caption[Posterior distributions for 16~Cygni (age, mass, initial helium abundance, initial metallicity)]{Probability densities showing predictions from machine learning of fundamental stellar parameters for 16~Cyg~A (red) and B (blue) \mb{along with} predictions from AMP modelling. Relative uncertainties are shown beside each plot. Predictions and $2\sigma$ uncertainties from AMP modelling are shown with arrows. \vspace*{5mm}
\label{fig:16Cyg-hist}}
\end{figure}
\end{landscape}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Kepler Objects of Interest %%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{\emph{Kepler} Objects of Interest}
\label{sec:koi}
We obtain observations and frequencies of the KOI targets from \citet{2016MNRAS.456.2183D}. We use line-of-sight radial velocity corrections when available, which was only the case for KIC 6278762 \citep{2002AJ....124.1144L}, KIC 10666592 \citep{2013A&A...554A..84M}, and KIC 3632418 \citep{2006AstL...32..759G}. We use the random forest whose feature importances were shown in Figure~\ref{fig:importances2} to predict the fundamental \mb{parameters} of these stars; that is, the random forest that is trained on effective temperatures, metallicities, and asteroseismic quantities ${\langle \Delta\nu_0 \rangle}$, ${\langle \delta\nu_{0,2} \rangle}$, ${\langle r_{0,2} \rangle}$, ${\langle r_{0,1} \rangle}$, and ${\langle r_{1,0} \rangle}$. The predicted initial conditions---masses, chemical compositions, mixing lengths, overshoot coefficients, and diffusion \mb{multiplication} factors---are shown in Table~\ref{tab:results-kages}; and the predicted current conditions---ages, core hydrogen abundances, surface gravities, luminosities, radii, and surface helium abundances---are shown in Table~\ref{tab:results-kages-curr}. Figure~\ref{fig:us-vs-them} shows the fundamental parameters obtained from our method plotted against those obtained by \citet[hereinafter KAGES]{2015MNRAS.452.2127S}. We find good agreement across all stars.
Although still in statistical agreement, the median values of our predicted ages are systematically lower and the median values of our predicted masses are systematically higher than those predicted by KAGES. We conjecture that these discrepancies arise from differences in input physics. We vary the efficiency of diffusion, the extent of convective overshooting, and the value of the mixing length parameter to arrive at these estimates, whereas \mb{the KAGES} models are calculated using fixed \mb{amounts} of diffusion, without overshoot, and with a solar-calibrated mixing length. Models with overshooting, for example, will be more evolved at the same age due to having larger core masses. Without direct access to their models, however, the exact reason is difficult to pinpoint.
We find a significant linear trend in the \emph{Kepler} objects-of-interest between the diffusion multiplication factor and stellar mass needed to reproduce observations (${P = 0.0001}$ from a two-sided t-test with ${N-2=32}$ degrees of freedom). Since the values of mass and diffusion \mb{multiplication} factor are uncertain, we use Deming regression to estimate the coefficients of this relation without regression dilution \citep{deming1943statistical}. We show the diffusion multiplication factors as a function of stellar mass for all of these stars in Figure~\ref{fig:diffusion}. We find that the diffusion \mb{multiplication} factor linearly decreases with mass, i.e.\
\begin{equation} \label{eq:diffusion}
\text{D} = ( 8.6 \pm 1.94 ) - ( 5.6 \pm 1.37 ) \cdot \text{M}/\text{M}_\odot
\end{equation}
and that this relation explains observations better than any constant factor (e.g., ${D=1}$ or ${D=0}$).
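For reference, Deming regression has a closed-form solution given the ratio $\delta$ of the error variances of the two measured quantities; a minimal sketch (with placeholder data) reads:
\begin{verbatim}
# Sketch: Deming regression (errors in both variables). delta is the
# ratio of the error variances var(y_err)/var(x_err).
import numpy as np

def deming(x, y, delta=1.0):
    xbar, ybar = x.mean(), y.mean()
    sxx = np.mean((x - xbar)**2)
    syy = np.mean((y - ybar)**2)
    sxy = np.mean((x - xbar) * (y - ybar))
    slope = ((syy - delta*sxx
              + np.sqrt((syy - delta*sxx)**2 + 4*delta*sxy**2))
             / (2*sxy))
    return ybar - slope*xbar, slope  # intercept, slope

M = np.array([0.9, 1.0, 1.1, 1.3, 1.45])  # masses (placeholders)
D = np.array([4.1, 3.1, 2.4, 1.1, 0.5])   # diffusion factors
print(deming(M, D))
\end{verbatim}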
\afterpage{
%\clearpage
%\begin{landscape}
%\begin{longtable}{ccccccc} %\centering \footnotesize%\scriptsize
\begin{table}
\caption{Means and standard deviations for initial conditions of the KOI data set inferred via machine learning. Values for the degraded solar data, predicted using these same quantities, are shown for reference. \label{tab:results-kages}}
\hspace*{-1.9cm}\begin{tabular}{ccccccc}
KIC & $M/M_\odot$ & $Y_0$ & $Z_0$ & $\alpha_{\mathrm{MLT}}$ & $\alpha_{\mathrm{ov}}$ & $D$ \\ \hline\hline
3425851 & 1.15 $\pm$ 0.053 & 0.28 $\pm$ 0.020 & 0.015 $\pm$ 0.0028 & 1.9 $\pm$ 0.23 & 0.06 $\pm$ 0.057 & 0.5 $\pm$ 0.92 \\
3544595 & 0.91 $\pm$ 0.032 & 0.270 $\pm$ 0.0090 & 0.015 $\pm$ 0.0028 & 1.9 $\pm$ 0.10 & 0.2 $\pm$ 0.11 & 4.9 $\pm$ 4.38 \\
3632418 & 1.39 $\pm$ 0.057 & 0.267 $\pm$ 0.0089 & 0.019 $\pm$ 0.0032 & 2.0 $\pm$ 0.12 & 0.2 $\pm$ 0.14 & 1.1 $\pm$ 1.01 \\
4141376 & 1.03 $\pm$ 0.036 & 0.267 $\pm$ 0.0097 & 0.012 $\pm$ 0.0025 & 1.9 $\pm$ 0.12 & 0.1 $\pm$ 0.11 & 4.0 $\pm$ 4.09 \\
4143755 & 0.99 $\pm$ 0.037 & 0.277 $\pm$ 0.0050 & 0.014 $\pm$ 0.0026 & 1.77 $\pm$ 0.033 & 0.37 $\pm$ 0.071 & 13.4 $\pm$ 5.37 \\
4349452 & 1.22 $\pm$ 0.056 & 0.28 $\pm$ 0.012 & 0.020 $\pm$ 0.0043 & 1.9 $\pm$ 0.17 & 0.10 $\pm$ 0.090 & 7.3 $\pm$ 8.82 \\
4914423 & 1.19 $\pm$ 0.048 & 0.274 $\pm$ 0.0097 & 0.026 $\pm$ 0.0046 & 1.8 $\pm$ 0.11 & 0.08 $\pm$ 0.043 & 2.3 $\pm$ 1.6 \\
5094751 & 1.11 $\pm$ 0.038 & 0.274 $\pm$ 0.0082 & 0.018 $\pm$ 0.0030 & 1.8 $\pm$ 0.11 & 0.07 $\pm$ 0.041 & 2.3 $\pm$ 1.39 \\
5866724 & 1.29 $\pm$ 0.065 & 0.28 $\pm$ 0.011 & 0.027 $\pm$ 0.0058 & 1.8 $\pm$ 0.13 & 0.12 $\pm$ 0.086 & 7.0 $\pm$ 8.38 \\
6196457 & 1.31 $\pm$ 0.058 & 0.276 $\pm$ 0.005 & 0.032 $\pm$ 0.0050 & 1.71 $\pm$ 0.050 & 0.16 $\pm$ 0.055 & 5.7 $\pm$ 2.34 \\
6278762 & 0.76 $\pm$ 0.012 & 0.254 $\pm$ 0.0058 & 0.013 $\pm$ 0.0017 & 2.09 $\pm$ 0.069 & 0.06 $\pm$ 0.028 & 5.3 $\pm$ 2.23 \\
6521045 & 1.19 $\pm$ 0.046 & 0.273 $\pm$ 0.0071 & 0.027 $\pm$ 0.0044 & 1.82 $\pm$ 0.074 & 0.12 $\pm$ 0.036 & 3.2 $\pm$ 1.31 \\
7670943 & 1.30 $\pm$ 0.061 & 0.28 $\pm$ 0.017 & 0.021 $\pm$ 0.0045 & 2.0 $\pm$ 0.23 & 0.06 $\pm$ 0.064 & 1.0 $\pm$ 2.55 \\
8077137 & 1.23 $\pm$ 0.070 & 0.270 $\pm$ 0.0093 & 0.018 $\pm$ 0.0028 & 1.8 $\pm$ 0.14 & 0.2 $\pm$ 0.11 & 2.9 $\pm$ 2.08 \\
8292840 & 1.15 $\pm$ 0.079 & 0.28 $\pm$ 0.010 & 0.016 $\pm$ 0.0049 & 1.8 $\pm$ 0.15 & 0.1 $\pm$ 0.12 & 11. $\pm$ 10.7 \\
8349582 & 1.23 $\pm$ 0.040 & 0.271 $\pm$ 0.0069 & 0.043 $\pm$ 0.0074 & 1.9 $\pm$ 0.12 & 0.11 $\pm$ 0.060 & 2.5 $\pm$ 1.11 \\
8478994 & 0.81 $\pm$ 0.022 & 0.272 $\pm$ 0.0082 & 0.010 $\pm$ 0.0012 & 1.91 $\pm$ 0.054 & 0.21 $\pm$ 0.068 & 17. $\pm$ 9.74 \\
8494142 & 1.42 $\pm$ 0.058 & 0.27 $\pm$ 0.010 & 0.028 $\pm$ 0.0046 & 1.70 $\pm$ 0.064 & 0.10 $\pm$ 0.051 & 1.6 $\pm$ 1.65 \\
8554498 & 1.39 $\pm$ 0.067 & 0.272 $\pm$ 0.0082 & 0.031 $\pm$ 0.0032 & 1.70 $\pm$ 0.077 & 0.14 $\pm$ 0.079 & 1.7 $\pm$ 1.17 \\
8684730 & 1.44 $\pm$ 0.030 & 0.277 $\pm$ 0.0075 & 0.041 $\pm$ 0.0049 & 1.9 $\pm$ 0.14 & 0.29 $\pm$ 0.094 & 15.2 $\pm$ 8.81 \\
8866102 & 1.26 $\pm$ 0.069 & 0.28 $\pm$ 0.013 & 0.021 $\pm$ 0.0048 & 1.8 $\pm$ 0.15 & 0.08 $\pm$ 0.070 & 5. $\pm$ 7.48 \\
9414417 & 1.36 $\pm$ 0.054 & 0.264 $\pm$ 0.0073 & 0.018 $\pm$ 0.0028 & 1.9 $\pm$ 0.13 & 0.2 $\pm$ 0.1 & 2.2 $\pm$ 1.68 \\
9592705 & 1.45 $\pm$ 0.038 & 0.27 $\pm$ 0.010 & 0.029 $\pm$ 0.0038 & 1.72 $\pm$ 0.064 & 0.12 $\pm$ 0.056 & 0.6 $\pm$ 0.47 \\
9955598 & 0.93 $\pm$ 0.028 & 0.27 $\pm$ 0.011 & 0.023 $\pm$ 0.0039 & 1.9 $\pm$ 0.10 & 0.2 $\pm$ 0.13 & 2.2 $\pm$ 1.76 \\
10514430 & 1.13 $\pm$ 0.053 & 0.277 $\pm$ 0.0046 & 0.021 $\pm$ 0.0039 & 1.78 $\pm$ 0.059 & 0.30 $\pm$ 0.097 & 4.7 $\pm$ 1.77 \\
10586004 & 1.31 $\pm$ 0.078 & 0.274 $\pm$ 0.0055 & 0.038 $\pm$ 0.0071 & 1.8 $\pm$ 0.13 & 0.2 $\pm$ 0.13 & 4.3 $\pm$ 3.99 \\
10666592 & 1.50 $\pm$ 0.023 & 0.30 $\pm$ 0.013 & 0.030 $\pm$ 0.0032 & 1.8 $\pm$ 0.11 & 0.06 $\pm$ 0.043 & 0.2 $\pm$ 0.14 \\
10963065 & 1.09 $\pm$ 0.031 & 0.264 $\pm$ 0.0083 & 0.014 $\pm$ 0.0025 & 1.8 $\pm$ 0.11 & 0.05 $\pm$ 0.027 & 3.1 $\pm$ 2.68 \\
11133306 & 1.11 $\pm$ 0.044 & 0.272 $\pm$ 0.0099 & 0.021 $\pm$ 0.0040 & 1.8 $\pm$ 0.16 & 0.04 $\pm$ 0.033 & 5. $\pm$ 5.75 \\
11295426 & 1.11 $\pm$ 0.033 & 0.27 $\pm$ 0.010 & 0.025 $\pm$ 0.0036 & 1.81 $\pm$ 0.084 & 0.05 $\pm$ 0.035 & 1.3 $\pm$ 0.87 \\
11401755 & 1.15 $\pm$ 0.039 & 0.271 $\pm$ 0.0057 & 0.015 $\pm$ 0.0023 & 1.88 $\pm$ 0.055 & 0.33 $\pm$ 0.071 & 3.8 $\pm$ 1.81 \\
11807274 & 1.32 $\pm$ 0.079 & 0.276 $\pm$ 0.0097 & 0.024 $\pm$ 0.0051 & 1.77 $\pm$ 0.083 & 0.11 $\pm$ 0.066 & 5.4 $\pm$ 5.61 \\
11853905 & 1.22 $\pm$ 0.055 & 0.272 $\pm$ 0.0072 & 0.029 $\pm$ 0.0050 & 1.8 $\pm$ 0.12 & 0.18 $\pm$ 0.086 & 3.3 $\pm$ 1.85 \\
11904151 & 0.93 $\pm$ 0.033 & 0.265 $\pm$ 0.0091 & 0.016 $\pm$ 0.0030 & 1.8 $\pm$ 0.13 & 0.05 $\pm$ 0.029 & 3.1 $\pm$ 2.09 \\
Sun & 1.00 $\pm$ 0.0093 & 0.266 $\pm$ 0.0035 & 0.018 $\pm$ 0.0011 & 1.81 $\pm$ 0.032 & 0.07 $\pm$ 0.021 & 2.1 $\pm$ 0.83 \\ \hline \end{tabular}
%\end{longtable}
\end{table}
%\end{landscape}
%}
\clearpage
%\afterpage{
%\clearpage
%\begin{landscape}
%\hspace*{-1.7cm}\begin{longtable}{ccccccc} %\centering \scriptsize
\begin{table}
\caption{Means and standard deviations for current-age conditions of the KOI data set inferred via machine learning. Values for the degraded solar data, predicted using these same quantities, are shown for reference. \label{tab:results-kages-curr}}
\hspace*{-2.2cm}\begin{tabular}{ccccccc}
KIC & $\tau/$Gyr & $X_{\mathrm{c}}$ & $\log g$ & $L/L_\odot$ & $R/R_\odot$ & $Y_{\mathrm{surf}}$ \\ \hline\hline
3425851 & 3.7 $\pm$ 0.76 & 0.14 $\pm$ 0.081 & 4.234 $\pm$ 0.0098 & 2.7 $\pm$ 0.16 & 1.36 $\pm$ 0.022 & 0.27 $\pm$ 0.026 \\
3544595 & 6.7 $\pm$ 1.47 & 0.31 $\pm$ 0.078 & 4.46 $\pm$ 0.016 & 0.84 $\pm$ 0.068 & 0.94 $\pm$ 0.020 & 0.23 $\pm$ 0.023 \\
3632418 & 3.0 $\pm$ 0.36 & 0.10 $\pm$ 0.039 & 4.020 $\pm$ 0.0076 & 5.2 $\pm$ 0.25 & 1.91 $\pm$ 0.031 & 0.24 $\pm$ 0.021 \\
4141376 & 3.4 $\pm$ 0.67 & 0.38 $\pm$ 0.070 & 4.41 $\pm$ 0.011 & 1.42 $\pm$ 0.097 & 1.05 $\pm$ 0.019 & 0.24 $\pm$ 0.022 \\
4143755 & 8.0 $\pm$ 0.80 & 0.07 $\pm$ 0.022 & 4.09 $\pm$ 0.013 & 2.3 $\pm$ 0.12 & 1.50 $\pm$ 0.029 & 0.17 $\pm$ 0.023 \\
4349452 & 2.4 $\pm$ 0.78 & 0.4 $\pm$ 0.10 & 4.28 $\pm$ 0.012 & 2.5 $\pm$ 0.14 & 1.32 $\pm$ 0.022 & 0.22 $\pm$ 0.043 \\
4914423 & 5.2 $\pm$ 0.58 & 0.06 $\pm$ 0.032 & 4.162 $\pm$ 0.0097 & 2.5 $\pm$ 0.16 & 1.50 $\pm$ 0.022 & 0.24 $\pm$ 0.023 \\
5094751 & 5.3 $\pm$ 0.67 & 0.07 $\pm$ 0.039 & 4.209 $\pm$ 0.0082 & 2.2 $\pm$ 0.13 & 1.37 $\pm$ 0.017 & 0.23 $\pm$ 0.024 \\
5866724 & 2.4 $\pm$ 0.96 & 0.4 $\pm$ 0.12 & 4.24 $\pm$ 0.017 & 2.7 $\pm$ 0.13 & 1.42 $\pm$ 0.022 & 0.23 $\pm$ 0.038 \\
6196457 & 4.0 $\pm$ 0.73 & 0.18 $\pm$ 0.061 & 4.11 $\pm$ 0.022 & 3.3 $\pm$ 0.21 & 1.68 $\pm$ 0.041 & 0.24 $\pm$ 0.016 \\
6278762 & 10.3 $\pm$ 0.96 & 0.35 $\pm$ 0.026 & 4.557 $\pm$ 0.0084 & 0.34 $\pm$ 0.022 & 0.761 $\pm$ 0.0061 & 0.19 $\pm$ 0.023 \\
6521045 & 5.6 $\pm$ 0.370 & 0.027 $\pm$ 0.0097 & 4.122 $\pm$ 0.0055 & 2.7 $\pm$ 0.15 & 1.57 $\pm$ 0.025 & 0.22 $\pm$ 0.019 \\
7670943 & 2.3 $\pm$ 0.59 & 0.32 $\pm$ 0.088 & 4.234 $\pm$ 0.0099 & 3.3 $\pm$ 0.23 & 1.44 $\pm$ 0.025 & 0.26 $\pm$ 0.029 \\
8077137 & 4.4 $\pm$ 0.96 & 0.08 $\pm$ 0.052 & 4.08 $\pm$ 0.016 & 3.7 $\pm$ 0.24 & 1.68 $\pm$ 0.044 & 0.22 $\pm$ 0.031 \\
8292840 & 3.4 $\pm$ 1.48 & 0.3 $\pm$ 0.14 & 4.25 $\pm$ 0.023 & 2.6 $\pm$ 0.20 & 1.34 $\pm$ 0.026 & 0.19 $\pm$ 0.049 \\
8349582 & 6.7 $\pm$ 0.53 & 0.02 $\pm$ 0.012 & 4.16 $\pm$ 0.012 & 2.2 $\pm$ 0.12 & 1.52 $\pm$ 0.016 & 0.23 $\pm$ 0.015 \\
8478994 & 4.6 $\pm$ 1.75 & 0.50 $\pm$ 0.055 & 4.55 $\pm$ 0.012 & 0.51 $\pm$ 0.036 & 0.79 $\pm$ 0.014 & 0.21 $\pm$ 0.022 \\
8494142 & 2.8 $\pm$ 0.52 & 0.18 $\pm$ 0.067 & 4.06 $\pm$ 0.018 & 4.5 $\pm$ 0.32 & 1.84 $\pm$ 0.043 & 0.24 $\pm$ 0.029 \\
8554498 & 3.7 $\pm$ 0.79 & 0.09 $\pm$ 0.060 & 4.04 $\pm$ 0.015 & 4.1 $\pm$ 0.20 & 1.86 $\pm$ 0.043 & 0.25 $\pm$ 0.018 \\
8684730 & 3.0 $\pm$ 0.38 & 0.24 $\pm$ 0.065 & 4.06 $\pm$ 0.046 & 4.1 $\pm$ 0.53 & 1.9 $\pm$ 0.11 & 0.17 $\pm$ 0.040 \\
8866102 & 1.9 $\pm$ 0.71 & 0.4 $\pm$ 0.11 & 4.27 $\pm$ 0.014 & 2.8 $\pm$ 0.16 & 1.36 $\pm$ 0.024 & 0.24 $\pm$ 0.039 \\
9414417 & 3.1 $\pm$ 0.31 & 0.09 $\pm$ 0.030 & 4.016 $\pm$ 0.0058 & 5.0 $\pm$ 0.32 & 1.90 $\pm$ 0.032 & 0.21 $\pm$ 0.026 \\
9592705 & 3.0 $\pm$ 0.38 & 0.05 $\pm$ 0.026 & 3.973 $\pm$ 0.0087 & 5.7 $\pm$ 0.37 & 2.06 $\pm$ 0.035 & 0.26 $\pm$ 0.015 \\
9955598 & 7.0 $\pm$ 0.98 & 0.37 $\pm$ 0.035 & 4.494 $\pm$ 0.0061 & 0.66 $\pm$ 0.041 & 0.90 $\pm$ 0.013 & 0.25 $\pm$ 0.020 \\
10514430 & 6.5 $\pm$ 0.89 & 0.06 $\pm$ 0.022 & 4.08 $\pm$ 0.014 & 2.9 $\pm$ 0.17 & 1.62 $\pm$ 0.026 & 0.22 $\pm$ 0.021 \\
10586004 & 4.9 $\pm$ 1.39 & 0.12 $\pm$ 0.090 & 4.09 $\pm$ 0.041 & 3.1 $\pm$ 0.27 & 1.71 $\pm$ 0.070 & 0.24 $\pm$ 0.021 \\
10666592 & 2.0 $\pm$ 0.24 & 0.15 $\pm$ 0.036 & 4.020 $\pm$ 0.0066 & 5.7 $\pm$ 0.33 & 1.98 $\pm$ 0.018 & 0.29 $\pm$ 0.014 \\
10963065 & 4.4 $\pm$ 0.58 & 0.16 $\pm$ 0.054 & 4.292 $\pm$ 0.0070 & 2.0 $\pm$ 0.1 & 1.24 $\pm$ 0.015 & 0.22 $\pm$ 0.029 \\
11133306 & 4.1 $\pm$ 0.84 & 0.22 $\pm$ 0.079 & 4.319 $\pm$ 0.0096 & 1.7 $\pm$ 0.11 & 1.21 $\pm$ 0.019 & 0.22 $\pm$ 0.036 \\
11295426 & 6.2 $\pm$ 0.78 & 0.09 $\pm$ 0.036 & 4.283 $\pm$ 0.0059 & 1.65 $\pm$ 0.095 & 1.26 $\pm$ 0.016 & 0.24 $\pm$ 0.012 \\
11401755 & 5.6 $\pm$ 0.630 & 0.037 $\pm$ 0.0053 & 4.043 $\pm$ 0.0071 & 3.4 $\pm$ 0.19 & 1.69 $\pm$ 0.026 & 0.21 $\pm$ 0.026 \\
11807274 & 2.8 $\pm$ 1.05 & 0.3 $\pm$ 0.11 & 4.17 $\pm$ 0.024 & 3.5 $\pm$ 0.22 & 1.57 $\pm$ 0.038 & 0.22 $\pm$ 0.035 \\
11853905 & 5.7 $\pm$ 0.78 & 0.04 $\pm$ 0.020 & 4.11 $\pm$ 0.011 & 2.7 $\pm$ 0.16 & 1.62 $\pm$ 0.030 & 0.23 $\pm$ 0.022 \\
11904151 & 9.6 $\pm$ 1.43 & 0.08 $\pm$ 0.037 & 4.348 $\pm$ 0.0097 & 1.09 $\pm$ 0.06 & 1.07 $\pm$ 0.019 & 0.21 $\pm$ 0.026 \\
Sun & 4.6 $\pm$ 0.16 & 0.36 $\pm$ 0.012 & 4.439 $\pm$ 0.0038 & 1.01 $\pm$ 0.041 & 1.000 $\pm$ 0.0066 & 0.245 $\pm$ 0.0076 \\ \hline \end{tabular}
%\end{longtable}
%\end{landscape}
\end{table}
}
\afterpage{
%\clearpage
\begin{landscape}
\begin{figure*}
\centering
\includegraphics[width=0.28\linewidth,keepaspectratio,natwidth=167,natheight=167]{kages-logg.pdf}%
\includegraphics[width=0.28\linewidth,keepaspectratio,natwidth=167,natheight=167]{kages-Radius.pdf}%
\includegraphics[width=0.28\linewidth,keepaspectratio,natwidth=167,natheight=167]{kages-Luminosity.pdf}\\
\includegraphics[width=0.28\linewidth,keepaspectratio,natwidth=167,natheight=167]{kages-Mass.pdf}%
\includegraphics[width=0.28\linewidth,keepaspectratio,natwidth=167,natheight=167]{kages-Age.pdf}
\caption[Surface gravities, radii, luminosities, masses, and ages for $34$ \emph{Kepler} objects-of-interest]{Predicted surface gravities, radii, luminosities, masses, and ages of $34$ \emph{Kepler} objects-of-interest plotted against the suggested KAGES values. Medians, $16\%$ quantiles, and $84\%$ quantiles are shown for each point. A dashed line of agreement is shown in all panels to guide the eye. }
\label{fig:us-vs-them}
\end{figure*}
\end{landscape}
}
\afterpage{
%\clearpage
\begin{landscape}
\begin{figure*}
\centering
\includegraphics[width=0.98\linewidth,keepaspectratio,natwidth=502,natheight=300]{diffusion.pdf}
\caption[Empirical diffusion-mass relation]{Logarithmic diffusion multiplication factor as a function of stellar mass for $34$ \emph{Kepler} objects-of-interest. The solid line is the line of best fit from Equation~(\ref{eq:diffusion}) and the dashed lines are the $50\%$ confidence interval around this fit. \label{fig:diffusion} }
\end{figure*}
\end{landscape}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Discussion %%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Discussion}
The amount of time it takes to make predictions for a star using a trained random forest can be decomposed into two parts: the amount of time it takes to calculate perturbations to the observations of the star \mb{(see Section~\ref{sec:uncertainties})}, and the amount of time it takes to make a prediction on each perturbed set of observations. Hence we have
\begin{equation}
t = n(t_p + t_r)
\end{equation}
where $t$ is the total time, $n$ is the number of perturbations, $t_p$ is the time it takes to perform a single perturbation, and $t_r$ is the random forest regression time. We typically see times of ${t_p = (7.9 \pm 0.7) \cdot 10^{-3} \; (\si{\s})}$ and ${t_r = (1.8 \pm 0.4) \cdot 10^{-5} \; (\si{\s})}$. We chose a conservative ${n=10,000}$ for the results presented here, which results in a time of around a minute per star. Since each star can be processed independently and in parallel, a computing cluster could feasibly process a catalog containing millions of objects in less than a day. Since \mb{${t_r \ll t_p}$}, the calculation depends almost entirely on the time it takes to perturb the observations.\footnote{Our perturbation code uses an interpreted language (R), so if needed, there is still room for speed-up.} There is also the one-time cost of training the random forest, which takes less than a minute and can be reused without retraining on every star with the same information. It does need to be retrained if one wants to consider a different combination of input or output parameters.
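For concreteness, plugging the central values of these timings into the equation above reproduces the quoted per-star cost:
\begin{equation*}
t = n(t_p + t_r) \approx 10{,}000 \times \left( 7.9 \cdot 10^{-3} + 1.8 \cdot 10^{-5} \right) \; \si{\s} \approx 79 \; \si{\s},
\end{equation*}
i.e.\ around a minute per star, as stated.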
There is a one-time cost of generating the matrix of training data. We ran our simulation generation scheme for a week on our computing cluster and obtained $5,325$ evolutionary tracks with $64$ models per track, \mb{which resulted in a} $123$~MB \mb{matrix of stellar models}. This is at least an order of magnitude fewer models than other methods use. Furthermore, this is in general more tracks than our method needs: we showed in Figure~\ref{fig:evaluation-tracks} that for most parameters---most notably age, mass, luminosity, radius, initial metallicity, and core hydrogen abundance---one needs only a fraction of the models that we generated in order to obtain good predictive accuracy. Finally, unless one wants to consider a different range of parameters or different input physics, this matrix would not need to be calculated again; a random forest trained on this matrix can be re-used for all future stars that are observed. Of course, our method would still work if trained using a different matrix of models, and our grid should work with other grid-based modelling methods.
Previously, \citet{pulone1997age} developed a neural network for predicting stellar age based on the star's position in the Hertzsprung-Russell diagram. More recently, \citet{2016arXiv160200902V} have worked on incorporating seismic information into that analysis as we have done here. Our method provides several advantages over these approaches. Firstly, the random forests that we use perform constrained regression, meaning that the values we predict for quantities like age and mass will always be non-negative and within the bounds of the training data, which is not true of the neural-network-based approach that they take. Secondly, using \emph{averaged} frequency separations allows us to make predictions without need for concern over which radial orders were observed. Thirdly, we have shown that our random forests are very fast to train, and can be retrained in only seconds for stars that are missing observational constraints such as luminosities. In contrast, deep neural networks are computationally intensive to train, potentially taking days or weeks to converge depending on the breadth of network topologies considered in the cross-validation. Finally, our grid is varied in six initial parameters---$M$, $Y_0$, $Z_0$, $\alpha_{\text{MLT}}$, $\alpha_{\text{ov}}$, and $D$---which allows our method to explore a wide range of stellar model parameters.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Conclusions %%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusions}
Here we have considered the constrained multiple-regression problem of inferring fundamental stellar parameters from observations. We created a grid of evolutionary tracks varied in mass, chemical composition, mixing length parameter, overshooting coefficient, and diffusion \mb{multiplication} factor. We evolved each track in time along the main sequence and collected \mb{observable quantities} such as effective temperatures and metallicities as well as global statistics on the modes of oscillations from models along each evolutionary path. We used this matrix of \mb{stellar models} to train a machine learning algorithm to be able to discern the patterns that relate observations to \mb{fundamental stellar parameters}. We then applied this method to hare-and-hound exercise data, the Sun, 16~Cyg~A~and~B, and $34$ planet-hosting candidates that have been observed by \emph{Kepler} and rapidly obtained precise initial conditions and current-age values of these stars. %, which is vital information for ensemble studies of the Galaxy. The retrodicted initial conditions like the mixing length parameter and overshoot coefficient can additionally be used as strong priors when performing more detailed stellar modelling, e.g.\ when obtaining a reference model for an inversion.
Remarkably, we were able to empirically determine the value of the diffusion \mb{multiplication} factor and hence the efficiency of diffusion required to reproduce the observations instead of inhibiting it \emph{ad hoc}. A larger sample size \mb{will} better constrain the diffusion \mb{multiplication} factor and determine what other variables are relevant in its parameterization. \mb{This is work in progress.}
The method presented here has many advantages over existing approaches. First, random forests can be trained and used in only seconds and hence provide substantial speed-ups over other methods. Observations of a star simply need to be fed through the forest---akin to plugging numbers into an equation---and do not need to be subjected to expensive iterative optimization procedures.
Second, random forests perform non-linear and non-parametric regression, which means that the method can use orders-of-magnitude fewer models for the same level of precision, while additionally attaining a more rigorous appraisal of uncertainties for the predicted quantities.
Third, our method allows us to investigate wide ranges and combinations of stellar parameters.
Finally, the method presented here provides the opportunity to extract insights from the statistical regression that is being performed, which is achieved by examining the relationships in stellar physics that the machine learns by analyzing simulation data. This contrasts with the blind optimization processes of other methods, which provide an answer but do not indicate the elements that were important in arriving at it.
We note that the predicted quantities reflect a set of choices in stellar physics. Although such biases are impossible to propagate, varying model parameters that are usually kept fixed---such as the mixing length parameter, diffusion \mb{multiplication} factor, and overshooting coefficient---takes us a step in the right direction. Furthermore, the fact that quantities such as stellar radii and luminosities---quantities that have been measured accurately, not just precisely---can be reproduced both precisely and accurately by this method, gives a degree of confidence in its efficacy.
The method we have presented here is currently only applicable to main-sequence stars. We intend to extend this study to later stages of evolution.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Acknowledgements %%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\paragraph*{Acknowledgements}
\noindent The research leading to the presented results has received funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no 338251 (StellarAges). This research was undertaken in the context of the International Max Planck Research School \mb{for Solar System Research}. S.B.\ acknowledges partial support from NSF grant AST-1514676 and NASA grant NNX13AE70G. W.H.B. acknowledges research funding by Deutsche Forschungsgemeinschaft (DFG) under grant SFB 963/1 ``Astrophysical flow instabilities and turbulence'' (Project A18).
\paragraph*{Software}
\noindent Analysis in this chapter was performed with \mb{python 3.5.1} libraries scikit-learn \mb{0.17.1} \citep{scikit-learn}, NumPy \mb{1.10.4} \citep{van2011numpy}, and pandas \mb{0.17.1} \citep{mckinney2010data} as well as \mb{R 3.2.3} \citep{R} and the R libraries magicaxis \mb{1.9.4} \citep{magicaxis}, RColorBrewer \mb{1.1-2} \citep{RColorBrewer}, parallelMap \mb{1.3} \citep{parallelMap}, data.table \mb{1.9.6} \citep{data.table}, lpSolve \mb{5.6.13} \citep{lpSolve}, ggplot2 \mb{2.1.0} \citep{ggplot2}, GGally \mb{1.0.1} \citep{GGally}, scales \mb{0.3.0} \citep{scales}, deming \mb{1.0-1} \citep{deming}, and matrixStats \mb{0.50.1} \citep{matrixStats}.
%\newpage
\section{Appendix}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Model selection %%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Model Selection}
\label{sec:selection}
To prevent statistical bias towards the evolutionary tracks that generate the most models (i.e.\ the ones that require the most careful calculations and therefore use smaller time-steps, or those that live on the main sequence for a longer amount of time), we select ${m=64}$ models from each evolutionary track such that the models are as evenly-spaced in core hydrogen abundance as possible. \mb{We chose $64$ because it is a power of two, which thus allows us to successively omit every other model when testing our regression routine and still maintain regular spacings.}
Starting from the original vector of length $n$ of core hydrogen abundances $\mathbf x$, we find the subset of length $m$ that is closest to the optimal spacing $\mathbf b$, where\lr{\footnote{\lr{This equation has been corrected from the original publication.}}
\begin{equation}
b_i
=
X_T + (i-1) \cdot \frac{X_Z - X_T}{m-1},
\qquad
i=1,\ldots,m
\end{equation}}
with $X_Z$ being the core hydrogen abundance at ZAMS and $X_T$ being that at TAMS. To obtain the closest possible vector to $\mathbf b$ from our data $\mathbf x$, we solve a transportation problem using integer optimization \citep{23145595}. First we set up a cost matrix $\boldsymbol{C}$ consisting of absolute differences between the original abundances $\mathbf x$ and the ideal abundances $\mathbf b$:
\begin{equation}
\boldsymbol{C} = \left[
\begin{array}{cccc}
\abs{b_1-x_1} & \abs{b_1-x_2} & \dots & \abs{b_1-x_n} \\
\abs{b_2-x_1} & \abs{b_2-x_2} & \dots & \abs{b_2-x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\abs{b_m-x_1} & \abs{b_m-x_2} & \dots & \abs{b_m-x_n}
\end{array} \right].
\end{equation}
We then require that exactly $m$ values are selected from $\mathbf x$, and that each value is selected no more than once. Simply selecting the closest data point to each ideally-spaced point will not work, because this could result in the same point being selected twice; and selecting the second-closest point in that situation does not remedy the problem, because a different result could be obtained if the points were processed in a different order.
We denote the optimal solution matrix by $\hat{\boldsymbol{S}}$, and find it by minimizing the total cost subject to the following constraints:
\begin{align}
\hat{\boldsymbol{S}} = \underset{\boldsymbol S}{\arg\min} \; & \sum_{ij} S_{ij} C_{ij} \notag\\
\text{subject to } & \sum_i S_{ij} \leq 1 \; \text{ for all } j=1\ldots n \notag\\
\text{and } & \sum_j S_{ij} = 1 \; \text{ for all } i=1\ldots m.
\label{eq:optimal-spacing}
\end{align}
The indices of $\mathbf x$ that come nearest to being equidistantly spaced are then found by looking at which columns of $\hat{\boldsymbol S}$ contain ones, and we are done. The solution is visualized in Figure~\ref{fig:nearly-even}.
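As a minimal illustration of this selection step, here is a sketch in Python rather than the R \texttt{lpSolve} implementation used in this work. For an $m \times n$ cost matrix with ${m \le n}$, \texttt{scipy.optimize.linear\_sum\_assignment} assigns each row to a distinct column at minimum total cost, which is exactly the constraint set of Equation~(\ref{eq:optimal-spacing}):
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def select_models(x, m=64):
    """Pick m models whose core hydrogen abundances x (ZAMS to TAMS)
    best match m equidistant targets, using each model at most once."""
    x = np.asarray(x)
    X_Z, X_T = x.max(), x.min()          # abundances at ZAMS and TAMS
    b = X_T + np.arange(m) * (X_Z - X_T) / (m - 1)  # ideal spacing
    C = np.abs(b[:, None] - x[None, :])  # m-by-n cost matrix
    rows, cols = linear_sum_assignment(C)
    return np.sort(cols)                 # indices of selected models
\end{verbatim}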
\afterpage{
\clearpage
\begin{landscape}
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth, keepaspectratio,natwidth=502,natheight=117]{nearly-even.pdf}
\caption[Model selection]{A visualization of the model selection process performed on each evolutionary track in order to obtain the same number of models from each track. The blue crosses show all of the models along the evolutionary track as they vary from ZAMS to TAMS in core hydrogen abundance and the red crosses show the models selected from this track. The models were chosen via linear transport such that they satisfy Equation~(\ref{eq:optimal-spacing}). For reference, an equidistant spacing is shown with black points. \vspace*{5mm}
\label{fig:nearly-even} }%
\end{figure}
\end{landscape}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Grid strategy %%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Initial Grid Strategy}
\label{sec:grid}
The initial conditions of a stellar model can be viewed as a \mb{six-dimensional hyperrectangle} with dimensions $M$, $Y_0$, $Z_0$, $\alpha_{\text{MLT}}$, $\alpha_{\text{ov}}$, and $D$. In order to vary all of these parameters simultaneously and fill the \mb{hyperrectangle} as quickly as possible, we construct a grid of initial conditions following a quasi-random point generation scheme. This is in contrast to linear or random point generation schemes, over which it has several advantages.
A linear grid subdivides all dimensions in which initial quantities can vary into equal parts and creates a track of models for every combination of these subdivisions. Although in the limit such a strategy will fill the \mb{hyperrectangle} of initial conditions, it does so very slowly. It is furthermore suboptimal in the sense that linear grids maximize redundant information, as each varied quantity is tried with the exact same values of all other parameters that have been considered already. In a high-dimensional setting, if any of the parameters are irrelevant to the task of the computation, then the majority of the tracks in a linear grid will not contribute any new information.
A refinement on this approach is to create a grid of models with randomly varied initial conditions. Such a strategy fills the space more rapidly, and furthermore solves the problem of redundant information. However, this approach suffers from a different problem: since the points are generated at random, they tend to ``clump up'' at random as well. This results in random gaps in the parameter space, which are obviously undesirable.
Therefore, in order to select points that do not stack, do not clump, and also fill the space as rapidly as possible, we generate Sobol numbers \citep{sobol1967distribution} in the unit 6-cube and map them to the parameter ranges of each quantity that we want to vary. Sobol numbers are a sequence of $m$-dimensional vectors ${x_1 \ldots x_n}$ in the unit hypercube $I^m$, constructed such that the integral of a real function $f$ over that space equals the limit of the average of $f$ evaluated at those points, that is,
\begin{equation}
\int_{I^m} f = \lim_{n \to \infty} \frac{1}{n}\sum_{i=1}^n f(x_i)
\end{equation}
with the sequence being chosen such that the convergence is achieved as quickly as possible. By doing this, we both minimize redundant information and furthermore sample the hyperspace of possible stars as uniformly as possible. Figure~\ref{fig:grids} visualizes the different methods of generating multidimensional grids: linear, random, and the quasi-random strategy that we took. This method applied to initial model conditions was shown in Figure~\ref{fig:inputs} with 1- and 2D projection plots of the evolutionary tracks generated for our grid.
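As a minimal sketch of this scheme, assuming SciPy's quasi-Monte Carlo module (\texttt{scipy.stats.qmc}, available from SciPy 1.7 onwards) in place of the tooling actually used for this work, and with purely illustrative parameter ranges, a quasi-random grid of initial conditions could be generated as follows:
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

sampler = qmc.Sobol(d=6, scramble=False)  # the unit 6-cube
points = sampler.random_base2(m=10)       # 2^10 quasi-random points
# Illustrative ranges for M, Y_0, Z_0, alpha_MLT, alpha_ov, D:
lower = [0.7, 0.22, 1e-5, 1.5, 0.0, 1e-6]
upper = [1.6, 0.34, 0.04, 2.5, 1.0, 3.0]
tracks = qmc.scale(points, lower, upper)  # one row per evolutionary track
\end{verbatim}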
\begin{figure*}
\centering
\includegraphics[width=0.32\textwidth,keepaspectratio,natwidth=876,natheight=1605,
trim={7.75cm 9.8cm 7.75cm 9.8cm}, clip]{linear.pdf}\hfill
\includegraphics[width=0.32\textwidth,keepaspectratio,natwidth=876,natheight=1605,
trim={7.75cm 9.8cm 7.75cm 9.8cm}, clip]{random.pdf}\hfill
\includegraphics[width=0.32\textwidth,keepaspectratio,natwidth=876,natheight=1605,
trim={7.75cm 9.8cm 7.75cm 9.8cm}, clip]{quasirandom.pdf}\\
\parbox{0.32\textwidth}{\centering Linear}\hfill
\parbox{0.32\textwidth}{\centering Random}\hfill
\parbox{0.32\textwidth}{\centering Quasi-random}
\caption[Comparison of point generation schemes (linear, random, quasi-random)]{Results of different methods for generating multidimensional grids portrayed via a unit cube projected onto a unit square. Linear (left), random (middle), and quasi-random (right) grids are generated in three dimensions, with color depicting the third dimension, i.e., the distance between the reader and the screen. From top to bottom, all three methods are shown with 100, 400, and 2000 points generated, respectively. }%
\label{fig:grids}
\end{figure*}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Remeshing %%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Adaptive Remeshing}
\label{sec:remeshing}
When performing element diffusion calculations in MESA, the surface abundance of each isotope is considered as an average over the outermost cells of the model. The number of outer cells ${N}$ is chosen such that the mass of the surface is more than ten times the mass of the ${(N+1)^{\text{th}}}$ cell. Occasionally, this approach can lead to a situation where surface abundances change dramatically and discontinuously in a single time-step. These abundance discontinuities then propagate as discontinuities in effective temperatures, surface gravities, and radii. An example of such a difficulty can be seen in Figure~\ref{fig:discontinuity}.
\begin{figure}[t!]
\centering
%\includegraphics[width=0.5\linewidth,keepaspectratio]{discontinuity-1.pdf}\hfill
%\includegraphics[width=0.5\linewidth,keepaspectratio]{discontinuity-2.pdf}\\
%\includegraphics[width=0.5\linewidth,keepaspectratio]{discontinuity-3.pdf}\\
\includegraphics[width=0.7\linewidth,keepaspectratio,natwidth=251,natheight=150,trim={0 1.4cm 0 0.25cm},clip]{discontinuity-1.pdf}\\
\includegraphics[width=0.7\linewidth,keepaspectratio,natwidth=251,natheight=150,trim={0 1.4cm 0 0.25cm},clip]{discontinuity-2.pdf}\\
\includegraphics[width=0.7\linewidth,keepaspectratio,natwidth=251,natheight=150,trim={0 0 0 0.25cm},clip]{discontinuity-3.pdf}\\
\caption[Surface abundance discontinuity detection]{Three iterations of surface abundance discontinuity detection and iterative remeshing for an evolutionary track. The detected discontinuities are encircled in red. The third iteration has no discontinuities and so this track is considered to have converged. \vspace*{5mm} \label{fig:discontinuity} }
\end{figure}
Instead of being a physical reality, these effects arise only when there is insufficient mesh resolution in the outermost layers of the model. We therefore seek to detect these cases and re-run any such evolutionary track using a finer mesh resolution. We consider a track an outlier if its surface hydrogen abundance changes by more than $1\%$ in a single time-step. We iteratively re-run any track with outliers detected using a finer mesh resolution, and, if necessary, smaller time-steps, until convergence is reached. The process and a resolved track can also be seen in Figure~\ref{fig:discontinuity}.
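A minimal sketch of this outlier test, assuming the $1\%$ criterion is applied to the relative change of the surface hydrogen abundance between consecutive time-steps:
\begin{verbatim}
import numpy as np

def has_discontinuity(X_surf, threshold=0.01):
    """Flag a track whose surface hydrogen abundance changes by more
    than 1% in a single time-step."""
    X_surf = np.asarray(X_surf)
    rel_change = np.abs(np.diff(X_surf)) / X_surf[:-1]
    return bool(np.any(rel_change > threshold))
\end{verbatim}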
Some tracks still do not converge without surface abundance discontinuities despite the fineness of the mesh or the brevity of the time-steps, and are therefore not included in our study. These troublesome evolutionary tracks seem to be located only in a thin ridge of models having sufficiently high stellar mass (${M > M_\odot}$), a deficit of initial metals (${Z_0 < 0.001}$), and a specific inefficiency of diffusion (${D \simeq 0.01}$). A visualization of this can be seen in Figure~\ref{fig:diffusion-gap}.
\afterpage{
\clearpage
\begin{landscape}
\begin{figure*}
\centering
\includegraphics[width=0.452\linewidth,keepaspectratio,natwidth=251,natheight=300]{FeH0_M_D.pdf}%\hfill
\includegraphics[width=0.452\linewidth,keepaspectratio,natwidth=251,natheight=300]{Fe_H_M_D.pdf}
\caption[Model convergence as a function of mass and diffusion]{Stellar mass as a function of diffusion \mb{multiplication} factor colored by initial surface metallicity (left) and final surface metallicity (right). A ridge of \mb{missing points indicating} unconverged evolutionary tracks can be seen around a diffusion \mb{multiplication} factor of $0.01$. Beyond this ridge, tracks that were initially metal-poor end their main-sequence lives with all of their metals drained from their surfaces. \label{fig:diffusion-gap} }
\end{figure*}
\end{landscape}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Model evaluation %%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Evaluating the Regressor}
\label{sec:evaluation}
In training the random forest regressor, we must determine how many evolutionary tracks $N$ to include, how many models $M$ to extract from each evolutionary track, and how many trees $T$ to use when growing the forest. As such it is useful to define measures of gauging the accuracy of the random forest so that we may evaluate it with different combinations of these parameters.
By far the most common way of measuring the quality of a random forest regressor is its so-called ``out-of-bag'' (OOB) score \citep[see e.g.\ Section~3.1 of][]{breiman2001random}. While each tree is trained on only a subset (or ``bag'') of the stellar models, all trees are tested on all of the models that they did not see. This provides an accuracy score representing how well the forest will perform when predicting on observations that it has not seen yet. We can then use the scores defined in Section~\ref{sec:uncertainties} to calculate OOB scores.
However, such an approach to scoring is too optimistic in this scenario. Since a tree can get models from every simulation, predicting the \mb{parameters} of a model when the tree has been trained on one of that model's neighbors leads to an artificially inflated OOB score. This is especially the case for quantities like stellar mass, which do not change along the main sequence. A tree that has witnessed neighbors on either side of the model being predicted will have no error when predicting that model's mass, and hence the score will seem artificially better than it should be.
Therefore, we opt instead to build validation sets containing entire tracks that are left out from the training of the random forest. We omit models and tracks in powers of two so that we may roughly maintain the regular spacing that we have established in our grid of models (refer back to Appendices \ref{sec:grid} and \ref{sec:selection} for details).
We have already shown in Figure~\ref{fig:evaluation-tracks} these cross-validated scores as a function of the number of evolutionary tracks. Figure~\ref{fig:app-evaluation-models} now shows these scores as a function of the number of models obtained from each evolutionary track, and Figure~\ref{fig:app-evaluation-trees} shows them as a function of the number of trees in the forest. Naturally, $\hat\sigma$ increases with the number of trees, but this is not a mark against having more trees: this score is trivially minimal when there is only one tree, as that tree must agree with itself! We find that although more is better for all quantities, there is not much improvement after about ${T=32}$ and ${M=16}$. It is also interesting to note that the predictions do not suffer very much from using only four models per track, which results in a random forest trained on only a few thousand models.
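A minimal sketch of such leave-whole-tracks-out validation, using scikit-learn's \texttt{GroupKFold} on placeholder data as a stand-in for the exact cross-validation code used here:
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((1024, 5))                # placeholder observables
y = rng.random(1024)                     # placeholder target, e.g. age
track_id = np.repeat(np.arange(64), 16)  # 64 tracks, 16 models each

forest = RandomForestRegressor(n_estimators=32)
scores = cross_val_score(forest, X, y, groups=track_id,
                         cv=GroupKFold(n_splits=8))
print(scores.mean())  # mean R^2 over held-out tracks
\end{verbatim}
The \texttt{groups} argument guarantees that no evolutionary track contributes models to both the training and the validation side of any split.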
\afterpage{
\clearpage
\begin{landscape}
\begin{figure*}
\centering
\includegraphics[width=0.6\linewidth,keepaspectratio,natwidth=3526,natheight=138]{legend.png}\\
\includegraphics[width=0.45\linewidth,keepaspectratio,natwidth=251,natheight=150]{num_points-ev.pdf}%
\includegraphics[width=0.45\linewidth,keepaspectratio,natwidth=251,natheight=150]{num_points-dist.pdf}\\
\includegraphics[width=0.45\linewidth,keepaspectratio,natwidth=251,natheight=150]{num_points-diff.pdf}%
\includegraphics[width=0.45\linewidth,keepaspectratio,natwidth=251,natheight=150]{num_points-sigma.pdf}\\
\caption[Evaluations of regression accuracy against the number of models per evolutionary track]{%Explained variance (top) and accuracy per precision score (bottom) of each stellar parameter as a function of the number of models per evolutionary track (left) and the number of trees used in training the random forest (right).
Explained variance (top left), accuracy per precision distance (top right), normalized absolute error (bottom left), and normalized standard deviation of predictions (bottom right) for each stellar parameter as a function of the number of models per evolutionary track. \label{fig:app-evaluation-models}}
\end{figure*}
\end{landscape}
\clearpage
}
\afterpage{
\clearpage
\begin{landscape}
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth,keepaspectratio,natwidth=3526,natheight=138]{legend.png}\\
\includegraphics[width=0.45\linewidth,keepaspectratio,natwidth=251,natheight=150]{num_trees-ev.pdf}%
\includegraphics[width=0.45\linewidth,keepaspectratio,natwidth=251,natheight=150]{num_trees-dist.pdf}\\
\includegraphics[width=0.45\linewidth,keepaspectratio,natwidth=251,natheight=150]{num_trees-diff.pdf}%
\includegraphics[width=0.45\linewidth,keepaspectratio,natwidth=251,natheight=150]{num_trees-sigma.pdf}\\
\caption[Evaluations of regression accuracy against the number of trees]{Explained variance (top left), accuracy per precision distance (top right), normalized absolute error (bottom left), and normalized model uncertainty (bottom right) for each stellar parameter as a function of the number of trees used in training the random forest. \label{fig:app-evaluation-trees}}
\end{figure}
\end{landscape}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Hare-and-Hound %%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Hare and Hound}
\label{sec:hare-and-hound}
Table~\ref{tab:hnh-true} lists the true values of the hare-and-hound exercise performed here, and Table~\ref{tab:hnh-perturb} lists the perturbed inputs that were supplied to the machine learning algorithm.
\afterpage{
\clearpage
%\begin{landscape}
\begin{table} \centering %\scriptsize
\caption{True values for the hare-and-hound exercise. \label{tab:hnh-true}}
\hspace*{-0.66cm}\begin{tabular}{ccccccccccc}
Model & $R/R_\odot$ & $M/M_\odot$ & $\tau$ & $T_{\text{eff}}$ & $L/L_\odot$ & [Fe/H] & $Y_0$ & $\nu_{\max}$ & $\alpha_{\text{ov}}$ & $D$ \\ \hline\hline
0 & 1.705 & 1.303 & 3.725 & 6297.96 & 4.11 & 0.03 & 0.2520 & 1313.67 & No & No \\
1 & 1.388 & 1.279 & 2.608 & 5861.38 & 2.04 & 0.26 & 0.2577 & 2020.34 & No & No \\
2 & 1.068 & 0.951 & 6.587 & 5876.25 & 1.22 & 0.04 & 0.3057 & 2534.29 & No & No \\
3 & 1.126 & 1.066 & 2.242 & 6453.57 & 1.98 & -0.36 & 0.2678 & 2429.83 & No & No \\
4 & 1.497 & 1.406 & 1.202 & 6506.26 & 3.61 & 0.14 & 0.2629 & 1808.52 & No & No \\
5 & 1.331 & 1.163 & 4.979 & 6081.35 & 2.18 & 0.03 & 0.2499 & 1955.72 & No & No \\
6 & 0.953 & 0.983 & 2.757 & 5721.37 & 0.87 & -0.06 & 0.2683 & 3345.56 & No & No \\
7 & 1.137 & 1.101 & 2.205 & 6378.23 & 1.92 & -0.31 & 0.2504 & 2483.83 & No & No \\
8 & 1.696 & 1.333 & 2.792 & 6382.22 & 4.29 & -0.07 & 0.2555 & 1348.83 & No & No \\
9 & 0.810 & 0.769 & 9.705 & 5919.70 & 0.72 & -0.83 & 0.2493 & 3563.09 & No & No \\
10 & 1.399 & 1.164 & 6.263 & 5916.71 & 2.15 & 0.00 & 0.2480 & 1799.10 & Yes & Yes \\
11 & 1.233 & 1.158 & 2.176 & 6228.02 & 2.05 & 0.11 & 0.2796 & 2247.53 & Yes & Yes
\\ \hline \end{tabular}
\end{table}
%\end{landscape}
%}
%\clearpage
\begin{table} \centering %\scriptsize
\caption{Supplied (perturbed) inputs for the hare-and-hound exercise. \label{tab:hnh-perturb}}
\begin{tabular}{ccccc}
Model & $T_{\text{eff}}$ & $L/L_\odot$ & [Fe/H] & $\nu_{\max}$ \\ \hline\hline
0 & 6237 $\pm$ 85 & 4.2 $\pm$ 0.12 & -0.03 $\pm$ 0.09 & 1398 $\pm$ 66 \\
1 & 5806 $\pm$ 85 & 2.1 $\pm$ 0.06 & 0.16 $\pm$ 0.09 & 2030 $\pm$ 100 \\
2 & 5885 $\pm$ 85 & 1.2 $\pm$ 0.04 & -0.05 $\pm$ 0.09 & 2630 $\pm$ 127 \\
3 & 6422 $\pm$ 85 & 2.0 $\pm$ 0.06 & -0.36 $\pm$ 0.09 & 2480 $\pm$ 124 \\
4 & 6526 $\pm$ 85 & 3.7 $\pm$ 0.11 & 0.14 $\pm$ 0.09 & 1752 $\pm$ 89 \\
5 & 6118 $\pm$ 85 & 2.2 $\pm$ 0.06 & 0.04 $\pm$ 0.09 & 1890 $\pm$ 101 \\
6 & 5741 $\pm$ 85 & 0.8 $\pm$ 0.03 & 0.06 $\pm$ 0.09 & 3490 $\pm$ 165 \\
7 & 6289 $\pm$ 85 & 2.0 $\pm$ 0.06 & -0.28 $\pm$ 0.09 & 2440 $\pm$ 124 \\
8 & 6351 $\pm$ 85 & 4.3 $\pm$ 0.13 & -0.12 $\pm$ 0.09 & 1294 $\pm$ 67 \\
9 & 5998 $\pm$ 85 & 0.7 $\pm$ 0.02 & -0.85 $\pm$ 0.09 & 3290 $\pm$ 179 \\
10 & 5899 $\pm$ 85 & 2.2 $\pm$ 0.06 & -0.03 $\pm$ 0.09 & 1930 $\pm$ 101 \\
11 & 6251 $\pm$ 85 & 2.0 $\pm$ 0.06 & 0.13 $\pm$ 0.09 & 2360 $\pm$ 101
\\ \hline \end{tabular}
\end{table}
}
\clearpage
% ------------------------------------------------------------------------+
% Copyright (c) 2001 by Punch Telematix. All rights reserved. |
% |
% Redistribution and use in source and binary forms, with or without |
% modification, are permitted provided that the following conditions |
% are met: |
% 1. Redistributions of source code must retain the above copyright |
% notice, this list of conditions and the following disclaimer. |
% 2. Redistributions in binary form must reproduce the above copyright |
% notice, this list of conditions and the following disclaimer in the |
% documentation and/or other materials provided with the distribution. |
% 3. Neither the name of Punch Telematix nor the names of other |
% contributors may be used to endorse or promote products derived |
% from this software without specific prior written permission. |
% |
% THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED |
% WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF |
% MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. |
% IN NO EVENT SHALL PUNCH TELEMATIX OR OTHER CONTRIBUTORS BE LIABLE |
% FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR |
% CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF |
% SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR |
% BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, |
% WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE |
% OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN |
% IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. |
% ------------------------------------------------------------------------+
%
% $Id: modules.tex,v 1.1.1.1 2004/07/12 14:07:44 cvs Exp $
%
\subsection{Modules}
\subsubsection{Operation}
Modules are object code, generated by a C
compiler. By means of the support for modules in \oswald, they can be loaded when
the kernel is already running. The module system thus enables the programmer
to add functionality to the kernel at runtime.
This offers the kernel programmer a powerful tool:
\begin{itemize}
\item It allows new functionality, unknown at the moment the kernel was
started up, to be added without stopping it.
\item It allows the behavior of the kernel to be customized at a later stage.
\item It allows certain behavior or capabilities to be upgraded, without
having to statically link the code for this behavior in at the time the
kernel was compiled and linked.
\item It allows certain functionality to be removed from the kernel when it is
no longer needed, to save on space and potentially CPU usage.
\end{itemize}
Since modules are in fact object files, generated by a compiler and
optionally 'prelinked' by a linker, the module code of \oswald must perform
some extra steps to make them run, steps which are normally performed
by the linker.
These steps can be enumerated as:
\begin{enumerate}
\item Loading the module; this means translating the object code format
produced by the compiler (\oswald only supports the ELF format) into the
internal format that \oswald can work with further. This step also performs
some checks on the offered ELF data to see whether it complies with the ELF
standard.
This step of loading ELF data into the internal format is performed by calling the \txt{x\_module\_load}
function.
\item Resolving the module; this means that the names of the symbols that are either exported from
the module or imported are bound to the correct address.
This step of resolving symbols is performed by calling the \txt{x\_module\_resolve}
function.
\item The relocation step; the compiler generates code without knowing at
what memory address the code will end up in the running kernel. It therefore
generates code assuming the starting address is \txt{0x00000000} and
produces information that indicates at which places in the code the real
address must be 'patched'. This information consists of 2 parts:
\begin{enumerate}
\item A symbol that describes the address that needs to be patched into this
location, i.e. the \textbf{what} goes into the patch.
\item A relocation record that describes \textbf{how} and \textbf{where} this symbol address needs
to be patched into the code.
\end{enumerate}
This process of patching is called relocation and is performed by calling the \txt{x\_module\_relocate}
function.
\item Initialization; each module must export a function that is called by
the kernel to initialize the internal module data structures.
Only when this step has run successfully is the module ready to be
used by other parts of the kernel or by other modules.
\end{enumerate}
It is possible to change the value of a symbol after step 3, so
that module startup parametrization can be performed. To achieve this, one can
use the \txt{x\_module\_search} function to find a symbol and change its
value as needed. For initial parametrization, this must be done before step
4, the initialization.
\textbf{Sample Code}
The following is a code sample that illustrates how a module can be loaded,
resolved, relocated, initialized and how an exported function can be
searched for and called.
Note that the specifics of how the ELF data is read in from a file or from
memory is beyond the scope of the sample.
\bcode
\begin{verbatim}
x_Module Module;
x_module module = &Module;
x_status status;
int (*function)(int, int);
x_ubyte * elf_data;
/*
** The variable 'function' is a function pointer; this
** function takes two int's as arguments and returns an
** int as result. We assume that the variable 'elf_data'
** points to an in memory image of an ELF object file.
*/
status = x_module_load(module, elf_data, malloc, free);
if (status != xs_success) {
printf("Loading failed %d\n", status);
// handle failure
}
status = x_module_resolve(module);
if (status != xs_success) {
printf("Resolving failed %d\n", status);
// handle failure
}
status = x_module_relocate(module);
if (status != xs_success) {
printf("Relocation failed %d\n", status);
// handle failure
}
status = x_module_initialize(module);
if (status != xs_success) {
printf("Initialization failed %d\n", status);
// handle failure
}
/*
** Suppose that the module is known to export a function
** with the name 'do_add' that adds two integers and
** returns the result.
*/
status = x_module_search(module, "do_add", (void **) &function);
if (status == xs_success) {
printf("Address of 'do_add' is %p\n", function);
printf("do_add(10, 20) = %d\n", (*function)(10, 20));
}
else {
printf("Not found %d\n", status);
}
...
\end{verbatim}
\ecode
\subsubsection{Brief Explanation on ELF, Resolution and Relocation}
This section will explain some elements on the ELF format that can not be
found back in the documentation or will clarify some structures. An overview
of a very simple ELF object file and how it is transformed into a memory
image is shown in figure \ref{fig:elf}.
The best way to study the ELF format is to read the documentation a few
times\footnote{I know it is hard to read, but it is the real thing...} and
to use the 'objdump' command of the GNU toolset. Learn to use this command;
it's your friend.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=\textwidth]{elf}
\caption{ELF Structures and the In Memory Image.\label{fig:elf}}
\end{center}
\end{figure}
In figure \ref{fig:elf}, only a few sections relevant for the following
explanation are shown; more section types exist. For more information, we
refer to the available ELF documentation.
The following explanation may look very repetitive, but this is for the
sake of clarity. To somebody experienced in the art of ELF, it
will look childish\footnote{Eric, this means you shouldn't read further...}.
The very simple ELF object code of figure \ref{fig:elf} has 2 sections that
need to be translated to memory, i.e. the \txt{.text} section that
contains executable code and the \txt{.data} section that contains data
that is referred to from the executable code in the \txt{.text} section. The
assembly code that is represented in figure \ref{fig:elf} is the following small
but very useful function\footnote{Yes, I know that there is no newline in
the format string.}.
\bcode
\begin{verbatim}
void myfun(void) {
printf("this is a string");
}
\end{verbatim}
\ecode
In the assembly code of the \txt{.text} segment, we see that the address
of \txt{printf} is not yet filled in, since the address was not known
at compile time. Also the address of the string \txt{"this is a string"}
is not yet known at compile time. The \txt{.data} section contains the
characters that make up the string.
It is important to note that the placeholder for the string address
(relocation 1) contains the 32 bit word \txt{0x00000000}, while the
placeholder for the \txt{printf} call (relocation 2) contains the word
\txt{0xfffffffc} in little endian format. Written as a signed integer,
this second placeholder contains the decimal value $-4$.
The contents of these placeholders are values that will be used in the
relocation process. In ELF speak, they are called an \textbf{addend}. In the ELF
documentation, the addend is indicated by the character \textbf{'A'}. The
addend is a value that is added during the relocation algorithm to the end
result. Note that the placeholder value $-4$ for the call compensates
for the fact that the program counter on an x86 architecture points four
bytes further than the current instruction; if a program counter relative value
needs to be patched into the \txt{.text} section, we need to compensate for
this program counter offset, hence the $-4$ addend value.
The section address in the ELF image of an object file always starts at the
value \txt{0x00000000}
since the compiler does not know where in memory the image will go. In the
module code, the contents of the \txt{.text} and \txt{.data} will be
copied over into an allocated memory area. In this specific example, the
\txt{.text} section starts at \txt{0x4740} and the \txt{.data}
section immediately follows at address \txt{0x4758}. Note that some bytes
in the \txt{.text} memory area are wasted because of alignment
considerations. These alignment requirements are also given in the ELF
structure. Please refer to the TIS document for more information.
The next step, after allocating memory and copying over the relevant section
contents to that memory, is resolving the addresses of the symbols used in
the ELF data. In this specific example, there is 1 imported symbol, namely
\txt{printf}. Symbol number 2, which refers to \txt{printf}, has an
undefined section and an undefined type. The ELF standard does not care what
type an undefined element is; it could be an object or a function pointer.
The module code needs to resolve all these undefined symbols and attach them
to the correct address, i.e. it has to fill in the proper value for the
undefined symbols. These undefined symbols are searched for in the available
symbol tables. If such a symbol cannot be resolved, the module cannot be
used. In this specific example, the symbol value for \txt{printf} has been
resolved into the address \txt{0x00004f00}.
In the resolution step, the values for the remaining symbols also need to
be filled in. In our example, this means adding the value of the symbol to
the in-memory address of the section the symbol refers to. For symbol 1,
this means adding the value to the in-memory address of the \txt{.data}
section, and for symbol 3, adding the value to the \txt{.text} in-memory
address. The address value of a symbol is indicated by the character
\textbf{'S'} in the ELF standard.
From the binding information, we can also deduce that the symbol for
\txt{myfun} has binding 'global' and is thus exported from the module. It
is therefore a very good idea to make all possible definitions in the source
code static, such that only the required functions are exported (and
valuable memory space will be saved in the symbol tables).
After the resolution step, the relocation takes place. The example shows 2
relocation records. All relocation records from a \txt{.rel} section refer to
the same section that needs patching; in this example, there is only one
section that needs relocation and therefore there is only a single
\txt{.rel} section. In more complex ELF files, there can be a multitude of
relocation sections and symbol tables. In this specific example, the section
that needs patching is the \txt{.text} section. The offset of a relocation
record added to the memory address where the section is, yields the address
in memory that needs relocation; in the ELF text this is indicated by the
character \textbf{'P'}\footnote{Probably to indicate that this address needs
patching.}.
The relocation can now be performed. In our simple example, only 2 different
relocation types or algorithms are required:
\begin{itemize}
\item \txt{R\_386\_32} This is an absolute relocation, given by the
following algorithm:
$*P = A + S$
The first relocation is of this type and as one can see from the mnemonics,
it is used to push the address of the format string on the stack. Therefore
it is an absolute address and the addend is 0.
\item \txt{R\_386\_PC32} This is a program counter relative relocation,
given by the algorithm:
$*P = A + S - P$
This relocation type is used for the second relocation record and its result
is an argument for the \txt{call} opcode of the x86. Since this
opcode expects a relative and signed offset from the current
program counter, we need to compensate for the fact that the program counter
is already pointing 4 bytes further; therefore the $-4$ addend value. Said
otherwise, the offset is taken relative to $P$ plus 4 bytes, the address of
the instruction that follows the \txt{call}.
\end{itemize}
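To make the two algorithms concrete, here is a small worked example using the
numbers from figure \ref{fig:elf}, under the assumption (consistent with the
figure, but stated here explicitly) that the format string lies at the very
start of the \txt{.data} section. For relocation 1, the addend is $0$ and the
string symbol resolves to the start of \txt{.data}, so the patched word
becomes $A + S$, i.e. \txt{0x00000000} + \txt{0x4758} = \txt{0x4758}. For
relocation 2, with \txt{printf} resolved to \txt{0x00004f00}, the patched
word becomes $A + S - P$, i.e. $-4$ + \txt{0x4f00} $-$ $P$, where $P$ is the
in-memory address of the placeholder inside the \txt{call} instruction.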
In the example given by figure \ref{fig:elf}, note that the relocations have
NOT been performed yet in the in-memory representation, i.e. the original
addends \txt{0x00000000} and \txt{0xfffffffc} are still in the in-memory
data.
Note that relocation types are CPU specific, so relocation type 4 for an x86
CPU, for example, is not the same as relocation type 4 for an ARM processor!
In cases more complex than this example (e.g. shared object files and position
independent code), many more relocation types and sections to relocate
exist, but the principles always remain the same.
\subsubsection{Identifiers, Symbol Tables and Symbols}
An important element in the module code is the handling of symbols and
identifiers. Symbols are associations of a \textbf{name} and a \textbf{memory
address}. As indicated above, symbols play an important role in the
relocation process and in exporting functionality from the module to other
parts of the kernel or other modules.
Symbols are combined in tables, called symbol tables. The name of a symbol
is called an \textbf{identifier} in \oswald. How identifiers, symbol tables
and symbols relate to each other is explained in figure \ref{fig:symbols}.
\begin{figure}[!ht]
\begin{center}
\rotatebox{270}{\includegraphics[height=0.65\textheight]{symbols}}
\caption{Relation between identifiers, symbol tables and symbols.\label{fig:symbols}}
\end{center}
\end{figure}
Figure \ref{fig:symbols} shows how the identifier for the \txt{printf} symbol is
linked into the hashtable and points to all the symbols that have the same
identifier. If the \txt{x\_ident$\rightarrow$symbols} field is traced
through the dashed and dotted line, the first symbol encountered defines the
address of the \txt{printf} function and links further through the
\txt{x\_symbol$\rightarrow$next} field to two other symbols that refer to
this symbol to define the address value of \txt{printf}. This is indicated by
the fact that these two symbols refer back to the first, by means of the
dotted line and their \txt{x\_symbol$\rightarrow$value.defines} fields.
Also note that all the symbols refer to the identifier for \txt{printf}
through their \txt{x\_symbol$\rightarrow$ident} field.
The first symbol table, the one that contains the symbol definition of the
\txt{printf} function, has its \txt{x\_symtab$\rightarrow$module}
field pointing to \txt{NULL} indicating that this symbol table belongs to
the kernel itself and contains kernel exported function addresses.
In the following paragraphs, the structures for identifiers, symbol tables
and symbols will be explained in more detail.
%\newpage
\textbf{Identifier Structure Definition}
The structure definition of an identifier is as follows:
\bcode
\begin{verbatim}
typedef struct x_Ident * x_ident;

typedef struct x_Ident {
  x_ident next;
  x_symbol symbols;
  x_ubyte string[0];
} x_Ident;
\end{verbatim}
\ecode
The relevant fields in the identifier structure are the following:
\begin{itemize}
\item \txt{x\_ident$\rightarrow$next} All identifiers in the module code
are unique. To look up identifiers, they are kept in a hashtable and entries
with the same hash value are 'chained'. This field chains this identifier to
the next identifier with the same hash value.
\item \txt{x\_ident$\rightarrow$symbols} A symbol is defined later on.
The list of symbols that have the same identifier starts with this field.
\item \txt{x\_ident$\rightarrow$string} The identifier structure is a
variable sized structure and the end of the structure is used to store the
ASCII characters that make up the string representation of the name.
\end{itemize}
\textbf{Symbol Table Structure Definition}
The structure definition of a symbol table is as follows:
\bcode
\begin{verbatim}
typedef struct x_Symtab * x_symtab;

typedef struct x_Symtab {
  x_symtab next;
  x_module module;
  x_int num_symbols;
  x_int capacity;
  x_uword madgic;
  x_Symbol symbols[0];
} x_Symtab;
\end{verbatim}
\ecode
The relevant fields in the symtab structure are the following:
\begin{itemize}
\item \txt{x\_symtab$\rightarrow$next} Symbol tables are kept in a linked
list and this field is used to chain further to the next symbol table in the
list.
\item \txt{x\_symtab$\rightarrow$module} Symbol tables are associated
with a module. The module either exports or imports the symbols that are in
its symbol table. There is only one case where this field contains NULL, and
this is when a symbol table is associated with symbols exported by the
kernel. Note that the kernel symbol table only exports symbols.
\item \txt{x\_symtab$\rightarrow$num\_symbols} This field contains the
number of symbols that are available in the symbol table.
\item \txt{x\_symtab$\rightarrow$capacity} Some symbol tables are
allocated with a certain number of free slots, to fill in later. This field
gives the total number of slots that are available in the symbol table. The
number of free slots is found from \txt{capacity} $-$ \txt{num\_symbols}.
\item \txt{x\_symtab$\rightarrow$madgic} There is a function
\txt{x\_symbol2symtab} that can be used to find the symbol table
from a symbol reference. This field is used as a fencepost to help this
function; it indicates the start of the array of \txt{x\_Symbol} elements
in the \txt{x\_Symtab} structure.
\item \txt{x\_symtab$\rightarrow$symbols} This is the array of
\txt{x\_Symbol} structures that are part of the symbol table.
\end{itemize}
\textbf{Symbol Structure Definition}
The structure definition of a symbol is as follows:
\bcode
\begin{verbatim}
typedef struct x_Symbol * x_symbol;

typedef struct x_Symbol {
  x_symbol next;
  x_ident ident;
  x_flags flags;
  union {
    x_address address;
    x_symbol defines;
  } value;
} x_Symbol;
\end{verbatim}
\ecode
The relevant fields in the symbol structure are the following:
\begin{itemize}
\item \txt{x\_symbol$\rightarrow$next} This is the field that chains to
the next symbol with the same identifier.
\item \txt{x\_symbol$\rightarrow$ident} This is the reference back to the
identifier that is the name of the symbol.
\item \txt{x\_symbol$\rightarrow$flags} Symbols can carry different flags
to indicate e.g. which field of the \txt{value} union is valid. Also the
number of referrals to this symbol is stored in the lower bits of the
\txt{flags} field. The number of referrals only applies to symbols that
define an address and indicates how many other symbols rely on this symbol
to define the address.
The most important flags are:
\begin{enumerate}
\item \txt{SYM\_EXPORTED} This symbol defines a memory address, i.e. the
\txt{value.address} field is valid.
\item \txt{SYM\_IMPORTED} The symbol refers to another symbol that defines
the address value, i.e. the \txt{value.defines} field is valid and points
to the defining symbol.
\item \txt{SYM\_RESOLVED} The symbol has been set up properly and is bound
either to an address or to the symbol that defines the address.
\item \txt{SYM\_FUNCTION} The symbol refers to a function pointer.
\item \txt{SYM\_VARIABLE} The symbol refers to a data object.
\item \txt{SYM\_SPECIAL} The symbol refers to a special object or function
pointer like e.g. the initializer function, or other internally used
elements.
\end{enumerate}
More flags are possible for which we refer to the source code of the module
functionality.
\item \txt{x\_symbol$\rightarrow$value.address} This is the address
location that is associated with the symbol.
\item \txt{x\_symbol$\rightarrow$value.defines} When a symbol refers to
another defining symbol, this field is the reference to the symbol that
defines the address.
\end{itemize}
\subsubsection{Loading a Module}
Loading is the initial step in transforming ELF compliant data (e.g. an ELF object
file) into a module that can be used by \oswald.
A module is initially loaded by means of the following call:
\txt{x\_status x\_module\_load(x\_module module, x\_ubyte * elf\_data, x\_malloc a, x\_free f);}
The different return values that this call can produce are summarized
in table \ref{table:module_load}.
\oswald assumes that the passed pointer \txt{elf\_data} is a single chunk
of memory that contains all the object file data, and considers it read-only
data. This enables a programmer who wants to package the kernel to put
this data in read-only memory.
The ELF structures that are part of the data and that need to be modified
will be copied over into runtime-modifiable structures.
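As a hedged illustration of the kind of consistency check behind the
\txt{xs\_not\_elf} return value in table \ref{table:module_load} (a sketch of
what the ELF standard requires of the \txt{e\_ident} header bytes, written in
Python for brevity; it is not a rendering of \oswald's actual implementation):
\bcode
\begin{verbatim}
def looks_like_elf(elf_data: bytes) -> bool:
    # ELF magic number: 0x7f 'E' 'L' 'F'
    if len(elf_data) < 16 or elf_data[:4] != b"\x7fELF":
        return False
    ei_class, ei_data, ei_version = elf_data[4], elf_data[5], elf_data[6]
    return (ei_class in (1, 2)      # ELFCLASS32 or ELFCLASS64
            and ei_data in (1, 2)   # little- or big-endian
            and ei_version == 1)    # EV_CURRENT
\end{verbatim}
\ecode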
\footnotesize
\begin{longtable}{||l|p{9cm}||}
\hline
\hfill \textbf{Return Value} \hfill\null & \textbf{Meaning} \\
\hline
\endhead
\hline
\endfoot
\endlastfoot
\hline
\txt{xs\_success} &
\begin{minipage}[t]{9cm}
The loading was successful and the subsequent step of resolution can be taken for preparing
the module further. The \txt{MOD\_LOADED} flag is set after successful
return.
\end{minipage} \\
\txt{xs\_no\_mem} &
\begin{minipage}[t]{9cm}
In allocating memory for its internal structures, the function encountered a
NULL reply from the allocation function. Nothing of the potentially already allocated
memory is freed.
\end{minipage} \\
\txt{xs\_not\_elf} &
\begin{minipage}[t]{9cm}
In checking the internal consistency of the ELF file, or in checking the
version of the ELF ABI that this code supports, a discrepancy was found. No
further processing can be done on this module.
\end{minipage} \\
\hline
\multicolumn{2}{c}{} \\
\caption{Return Status for \txt{x\_module\_load}}
\label{table:module_load}
\end{longtable}
\normalsize
\subsubsection{Resolving a Module}
A module can be resolved with the following call:
\txt{x\_status x\_module\_resolve(x\_module module);}
The different return values that this call can produce are summarized
in table \ref{table:module_resolve}.
\footnotesize
\begin{longtable}{||l|p{9cm}||}
\hline
\hfill \textbf{Return Value} \hfill\null & \textbf{Meaning} \\
\hline
\endhead
\hline
\endfoot
\endlastfoot
\hline
\txt{xs\_success} &
\begin{minipage}[t]{9cm}
The module symbols have been successfully resolved and are linked up with
the proper addresses. Calling this function for a module that has been
resolved already also yields this return value. As a result the
\txt{MOD\_RESOLVED} flag is set.
\end{minipage} \\
\txt{xs\_seq\_error} &
\begin{minipage}[t]{9cm}
An error occurred because the module was not yet loaded, i.e. the
\txt{MOD\_LOADED} flag was not set for this module.
\end{minipage} \\
\txt{xs\_no\_symbol} &
\begin{minipage}[t]{9cm}
A symbol that is imported by the module could not be found back in the
existing symbol tables. Maybe additional modules need to be loaded to define
the missing symbol.
\end{minipage} \\
\hline
\multicolumn{2}{c}{} \\
\caption{Return Status for \txt{x\_module\_resolve}}
\label{table:module_resolve}
\end{longtable}
\normalsize
\subsubsection{Relocating a Module}
A module can be relocated with the following call:
\txt{x\_status x\_module\_relocate(x\_module module);}
The different return values that this call can produce are summarized
in table \ref{table:module_relocate}.
\footnotesize
\begin{longtable}{||l|p{9cm}||}
\hline
\hfill \textbf{Return Value} \hfill\null & \textbf{Meaning} \\
\hline
\endhead
\hline
\endfoot
\endlastfoot
\hline
\txt{xs\_success} &
\begin{minipage}[t]{9cm}
The module was relocated successfully; all relocation records have been
applied to the in-memory sections.
\end{minipage} \\
\hline
\multicolumn{2}{c}{} \\
\caption{Return Status for \txt{x\_module\_relocate}}
\label{table:module_relocate}
\end{longtable}
\normalsize
\subsubsection{Initializing a Module}
A module can be initialized with the following call:
\txt{x\_status x\_module\_initialize(x\_module module);}
The different return values that this call can produce are summarized
in table \ref{table:module_initialize}.
\footnotesize
\begin{longtable}{||l|p{9cm}||}
\hline
\hfill \textbf{Return Value} \hfill\null & \textbf{Meaning} \\
\hline
\endhead
\hline
\endfoot
\endlastfoot
\hline
\txt{xs\_success} &
\begin{minipage}[t]{9cm}
The module was initialized successfully.
\end{minipage} \\
\hline
\multicolumn{2}{c}{} \\
\caption{Return Status for \txt{x\_module\_initialize}}
\label{table:module_initialize}
\end{longtable}
\normalsize
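The loading, resolving, relocating and initializing calls are intended to be
used in this order. The following sketch illustrates the sequence; the helper
function \txt{prepare\_module} is hypothetical, error handling is kept
minimal, and the module is assumed to have been loaded successfully already
(i.e. the \txt{MOD\_LOADED} flag is set).
\begin{verbatim}
/* Sketch only: assumes x_module_load already returned xs_success. */

static x_status prepare_module(x_module module) {

  x_status status;

  status = x_module_resolve(module);    /* requires MOD_LOADED */
  if (status != xs_success) {
    return status;    /* e.g. xs_no_symbol: load more modules first */
  }

  status = x_module_relocate(module);
  if (status != xs_success) {
    return status;
  }

  return x_module_initialize(module);
}
\end{verbatim}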
\subsubsection{Searching for an Exported Symbol}
A certain function address can be searched for in a module with the following call:
\txt{x\_status x\_module\_search(x\_module module, const char * name, void ** address);}
The different return values that this call can produce are summarized
in table \ref{table:module_search}.
\footnotesize
\begin{longtable}{||l|p{9cm}||}
\hline
\hfill \textbf{Return Value} \hfill\null & \textbf{Meaning} \\
\hline
\endhead
\hline
\endfoot
\endlastfoot
\hline
\txt{xs\_success} &
\begin{minipage}[t]{9cm}
The symbol was found; its address has been written to the location given by
\txt{address}.
\end{minipage} \\
\hline
\multicolumn{2}{c}{} \\
\caption{Return Status for \txt{x\_module\_search}}
\label{table:module_search}
\end{longtable}
\normalsize
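As an illustration, the following sketch looks up the address of an exported
function in a module that is assumed to be fully prepared; the symbol name
\txt{greet} and the function type are made up for the example.
\begin{verbatim}
typedef void (*greet_fn)(void);   /* made-up function type */

void call_greet(x_module module) {

  void *address = NULL;

  if (x_module_search(module, "greet", &address) == xs_success) {
    greet_fn greet = (greet_fn) address;
    greet();
  }
}
\end{verbatim}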
\subsubsection{Parsing a Function Identifier for JNI}
The module code of \oswald offers a utility function that can parse the
function identifier into the appropriate components for interfacing with a
JNI system.
The JNI utility 'javah' reads Java class files that contain declarations
of native functions and generates the appropriate header file and
signatures to be used in C code. Since Java has the capability of
overloading methods and C has not, there is a translation step from
the Java method names into C style names.
For instance, the overloaded Java class methods:
\txt{My\_Peer.destroy(char b[], int i, String s)}
\txt{My\_Peer.destroy(long j, char b[], int i, String s)}
are translated into function names that are acceptable in C:
\txt{Java\_My\_1Peer\_destroy\_\_\_3BILjava\_lang\_String\_2}
\txt{Java\_My\_1Peer\_destroy\_\_J\_3BILjava\_lang\_String\_2}
That is, the Java method names are mangled.
The specifics of this mangling are outside of the scope of this
document. Any good book on JNI will go into the details of this name
mangling.
\oswald offers a utility function \txt{x\_symbol\_java} that helps in
translating the C style function names back into the appropriate Java
conventions that can be used inside a virtual machine. This function is:
\txt{x\_int x\_symbol\_java(x\_symbol symbol, unsigned char * buffer, x\_size num);}
The arguments are the \txt{symbol} that needs parsing, a \txt{buffer} and the size of
this buffer in \txt{num}. The function will return the number of
characters that are used in the buffer. This function will check for
overflow of the buffer. The buffer is filled with 3 components of the
function name:
\begin{enumerate}
\item The name of the class, in this case 'My\_Peer', as a nul terminated
character string, that begins at \txt{buffer + buffer[0]}. The length of
the class name, without the trailing nul character included, is stored in
\txt{buffer[1]}.
\item The name of the method, in this case 'destroy', as a nul terminated
character string, that begins at \txt{buffer + buffer[2]}. The length of
the method name, without the trailing nul character included, is stored in
\txt{buffer[3]}.
\item The signature of the arguments, as a nul terminated character string.
For the first example this is '([BILjava/lang/String;)',
and for the second this is '(J[BILjava/lang/String;)'. The string begins at
\txt{buffer + buffer[4]}. The length of the argument signature, without the
trailing nul character included, is stored in \txt{buffer[5]}.
\end{enumerate}
So the first 6 positions of the \txt{buffer} array of characters are used
as indexes into this array and as the lengths of the strings. This limits
the lengths and indexes to values less than 255.
Also note that the argument signature component of the buffer is written in the
descriptor format of the Java Virtual Machine specification, but without the
return type\footnote{This is no limitation, since Java methods cannot be
overloaded on their return type alone.}.
When this function is called on a symbol that does not represent a Java
native method function name, i.e. a name that doesn't begin with
'Java\_', the returned result is 0.
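Putting the buffer layout together, a caller could extract the three
components as in the following sketch; the buffer size of 256 and the
variable names are only illustrative.
\begin{verbatim}
void show_components(x_symbol symbol) {

  unsigned char buffer[256];
  x_int used = x_symbol_java(symbol, buffer, 256);

  if (used != 0) {  /* 0 means not a 'Java_...' native method name */
    char *class_name  = (char *) buffer + buffer[0];  /* "My_Peer" */
    char *method_name = (char *) buffer + buffer[2];  /* "destroy" */
    char *signature   = (char *) buffer + buffer[4];
    /* the lengths are stored in buffer[1], buffer[3] and buffer[5] */
  }
}
\end{verbatim}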
%\bibliographystyle{natbib}
%\bibliography{modules}
\documentclass{ximera}
\input{../preamble}
\title{ODEs: Foundations}
%%%%%\author{Philip T. Gressman}
\begin{document}
\begin{abstract}
We study the fundamental concepts and properties associated with ODEs.
\end{abstract}
\maketitle
\section*{(Video) Calculus: Single Variable}
\textbf{Note: For now you can begin the video at around 1:45. We will discuss linear and separable ODEs shortly. The midpoint method and Runge-Kutta beginning around 12:15 are important things to understand, but you will not be expected to compute these yourself like you will for Euler's method.}
\youtube{UISSWvg1pg0}
\section*{Online Texts}
\begin{itemize}
\item \link[OpenStax II 4.1: ODEs]{https://openstax.org/books/calculus-volume-2/pages/4-1-basics-of-differential-equations} and \link[Direction Fields and Numerical Methods]{https://openstax.org/books/calculus-volume-2/pages/4-2-direction-fields-and-numerical-methods}
\item \link[Ximera OSU: ODEs]{https://ximera.osu.edu/mooculus/calculus2/differentialEquations/titlePage} and \link[Numerical Methods]{https://ximera.osu.edu/mooculus/calculus2/numericalMethods/titlePage}
\end{itemize}
\section*{Examples}
\begin{example}
Below you will find a slope field for the ODE $y' = -xy$.
\begin{center}
\begin{image}
\includegraphics[width=4in]{images/slopeX01.png}
\end{image}
\end{center}
There are three curves shown:
\[ \begin{aligned}
\text{In Red: } \ & {\displaystyle y = 2 e^{-\frac{x^2}{2}}}, \\
\text{In Orange: } \ & {\displaystyle y = -\frac{4x}{3} e^{-\frac{x^2}{2}}}, \\
\text{In Yellow: } \ & {\displaystyle y = e^{-\frac{x^2}{2} - x - \frac{3}{2}}}.
\end{aligned} \]
Which of the three curves is a solution of the ODE $y' = -xy$? In other words, which of the three curves is correctly aligned with the slope field?
\begin{multipleChoice}
\choice[correct]{Red}
\choice{Orange}
\choice{Yellow}
\end{multipleChoice}
There are also a number of points marked on the graph: point B at $(-1.5,0)$, point C at $(-1.5,-0.65)$, point D at $(1.5,0.65)$, point E at $(1.5,0)$, and point F at $(1.5,-0.65)$. The solution of the ODE which begins at the point B will pass through the point \wordChoice{\choice{D}\choice[correct]{E}\choice{F}}. Similarly, the solution which begins at C will pass through \wordChoice{\choice{D}\choice{E}\choice[correct]{F}}.
\end{example}
\begin{example}
Can two different solutions of the same first-order ODE $y' = f(x,y)$ have graphs which cross?
\begin{multipleChoice}
\choice{Yes: There are infinitely many solutions, so they can always cross.}
\choice{Yes: You can generally have many different initial conditions, so they can be arranged to cross.}
\choice{No: There is only one solution passing through any horizontal line.}
\choice[correct]{No: If there were a point of intersection, the tangent lines would be the same so the curves \\
couldn't actually cross each other there.}
\end{multipleChoice}
\end{example}
\begin{example}
The concept behind Euler's method is that we approximate a solution of an ODE of the form $y' = f(x,y)$ with a polygonal graph. Usually each segment of the graph has the same width (referred to as the \textit{step size}). We use the function $f(x,y)$ to dictate the slope of each piece of the approximation.
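In symbols: starting from the initial point $(x_0,y_0)$ and using step size $h$, each step of Euler's method computes
\[ x_{k+1} = x_k + h, \qquad y_{k+1} = y_k + h\, f(x_k,y_k), \]
so each new point lies on the line of slope $f(x_k,y_k)$ through the previous point.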
Let $y(x)$ be the solution to the initial value problem $y' = x + \frac{\ln y}{\ln 2} - \frac{y}{2}$ with $y(0) = 1$. Use Euler's Method with step size $h = 1$ to approximate the value of $y(3)$.
\begin{itemize}
\item For this particular ODE, we have
\[ f(x,y) = \answer{ x + \frac{\ln y}{\ln 2} - \frac{y}{2}}. \]
\item Supposing that we start at the point $x = 0$, $y = 1$, the slope will be
\[ f(0,1) = \answer{-\frac{1}{2}}. \]
\item Consider a line segment beginning at the point $(0,1)$ having slope $f(0,1)$ which you just calculated.
\begin{center}
\begin{image}
\includegraphics[width=4in]{images/euler01.png}
\end{image}
\end{center}
The line segment will pass through the point
\[ (1 , \answer{\frac{1}{2}}) \]
(use your value of $f(0,1)$ to compute a numerical value of $y$ corresponding to $x = 1$).
\item Now we repeat: Take a new line segment beginning at the point you just found.
\begin{center}
\begin{image}
\includegraphics[width=4in]{images/euler02.png}
\end{image}
\end{center}
Its slope is given by
\[ f \left(1,\answer{\frac{1}{2}} \right) = \answer{-\frac{1}{4}}. \]
This new line segment passes through the point $(2,\answer{1/4})$.
\item Repeat again:
\begin{center}
\begin{image}
\includegraphics[width=4in]{images/euler03.png}
\end{image}
\end{center}
the line segment beginning at the point you just found will have slope
\[ f \left(2,\answer{\frac{1}{4}} \right) = \answer{-\frac{1}{8}} \]
and will pass through the point $(3,\answer{1/8})$. Since we have arrived at an $x$-value of $3$, we may stop. The $y$-value of this most recent point is our answer:
\[ y(3) \approx \answer{\frac{1}{8}}. \]
\end{itemize}
\end{example}
\end{document}
The \eslmod{sq} module provides \Easel's object for single biological
sequences: an \ccode{ESL\_SQ}.
Sequence objects invariably become complicated even when their
designer intends them to be simple. There's many things we want to do
with a sequence, and useful features naturally accrete over time. If a
library isn't careful to balance creeping featuritis against having an
easy way to start using the object in simple applications, then the
sequence object - possibly the most fundamental object of a
biosequence library - can become a barrier to anyone else actually
using the library! All those useful features won't matter much if you
can't figure out how to turn your sequence data into an object, or get
it back out. \Easel\ expects you to have your own preferred way of
dealing with sequence data that's not necessarily \Easel's way, so it
provides simple ways to create sequence objects from elemental (C
string) data, and simple ways to get elemental C strings back out.
This lets you minimize your exposure to \Easel's more advanced (and
complicated) capabilities if you like.
The most basic use of an \ccode{ESL\_SQ} object is to hold one
complete sequence, simply as a plain C string. A sequence may also
have a name, an accession, and a description line. This is called a
\esldef{text mode} sequence. In text mode, \Easel\ doesn't know
whether the sequence is DNA, RNA, protein, or something else; it's
just an ASCII character string. This limits some of \Easel's more
powerful abilities, such as the ability to check the sequence for
errors, or to automatically deal with degenerate residue codes; but
it's a simple mode that's easy to start using.
Alternatively, a sequence may be in \esldef{digital mode}. In digital
mode, sequences are predigested and encoded into \Easel's internal
format, which makes many sequence routines more robust, efficient, and
powerful. Digital mode requires \eslmod{alphabet} augmentation.
In addition to storing a complete sequence, an \ccode{ESL\_SQ} is
designed to be used in three other situations:
\begin{itemize}
\item to hold a \esldef{subsequence} of a larger source sequence. The
object maintains source and coordinate information necessary for
crossreferencing the subsequence's coordinate system to the original
source coordinate system.
\item to hold a \esldef{window} of a larger source sequence. This is
like a subsequence, but is more specifically intended for reading a
sequence from a file in overlapping windows. This avoids having to
suck an entire chromosome (for example) into memory at any one
time. The stored subsequence is composed of two segments, a
\esldef{previous context} that gets saved from the previous window,
and a \esldef{new window} of fresh residues. The size of both the
context and the window are configurable at the time each new window
is read.
\item to hold only \esldef{information} about a sequence, such as its
name, its length, and its position in a file, excluding the sequence
(and optional secondary structure annotation) itself. This is handy
for example when indexing a sequence file, when we'd rather not read
any (possibly prohibitively large) sequence into memory until after
we've mapped out how big it is.
\end{itemize}
To keep all this straight, the object contains a bunch of internal
bookkeeping data.
Sequence objects are growable and reusable, for efficiency in memory
allocation. If you're going to go through many different sequences
sequentially, you would typically just allocate a single
\ccode{ESL\_SQ} object and \ccode{esl\_sq\_Reuse()} it for each new
sequence, rather than creating and destroying a lot of objects.
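A minimal sketch of this pattern follows; the input condition and the
fill/process steps are placeholders, since actual sequence input is
provided by the \eslmod{sqio} module.
\begin{verbatim}
ESL_SQ *sq = esl_sq_Create();
int     have_more_seqs = TRUE;     /* placeholder input condition */

while (have_more_seqs)
  {
    /* ... fill <sq> with the next sequence (see esl_sqio) ... */
    /* ... work with <sq> ...                                  */
    esl_sq_Reuse(sq);   /* reinitialize, keeping the allocation */
    /* ... update have_more_seqs ... */
  }
esl_sq_Destroy(sq);
\end{verbatim}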
A sequence object can also store a secondary structure annotation line
for the sequence, one character per residue.
When augmented with \eslmod{msa}, an interface to the \ccode{ESL\_MSA}
multiple alignment object provides the ability to extract single
unaligned sequences from a multiple alignment.
You would often use the \eslmod{sq} module in conjunction with
\eslmod{sqio}, which provides the ability to read and write
\ccode{ESL\_SQ} objects from and to files.
Table~\ref{tbl:sq_api} lists the functions in the \eslmod{sq} API.
% Table generated by autodoc -t esl_sq.c (so don't edit here, edit esl_sq.c:)
\begin{table}[hbp]
\begin{center}
{\small
\begin{tabular}{|ll|}\hline
\apisubhead{Text version of the \ccode{ESL\_SQ} object.}\\
\hyperlink{func:esl_sq_Create()}{\ccode{esl\_sq\_Create()}} & Create a new, empty \ccode{ESL\_SQ}.\\
\hyperlink{func:esl_sq_CreateFrom()}{\ccode{esl\_sq\_CreateFrom()}} & Create a new \ccode{ESL\_SQ} from text information.\\
\hyperlink{func:esl_sq_Grow()}{\ccode{esl\_sq\_Grow()}} & Assure that a \ccode{ESL\_SQ} has space to add more residues.\\
\hyperlink{func:esl_sq_GrowTo()}{\ccode{esl\_sq\_GrowTo()}} & Grows an \ccode{ESL\_SQ} to hold a seq of at least \ccode{n} residues.\\
\hyperlink{func:esl_sq_Copy()}{\ccode{esl\_sq\_Copy()}} & Make a copy of an \ccode{ESL\_SQ}.\\
\hyperlink{func:esl_sq_Reuse()}{\ccode{esl\_sq\_Reuse()}} & Reinitialize an \ccode{ESL\_SQ} for re-use.\\
\hyperlink{func:esl_sq_Destroy()}{\ccode{esl\_sq\_Destroy()}} & Frees an \ccode{ESL\_SQ}.\\
\apisubhead{Digitized version of the \ccode{ESL\_SQ} object. (Requires \ccode{alphabet})}\\
\hyperlink{func:esl_sq_CreateDigital()}{\ccode{esl\_sq\_CreateDigital()}} & Create a new, empty \ccode{ESL\_SQ} in digital mode.\\
\hyperlink{func:esl_sq_CreateDigitalFrom()}{\ccode{esl\_sq\_CreateDigitalFrom()}} & Create a new digital \ccode{ESL\_SQ} from text info.\\
\hyperlink{func:esl_sq_Digitize()}{\ccode{esl\_sq\_Digitize()}} & Convert an \ccode{ESL\_SQ} to digital mode.\\
\hyperlink{func:esl_sq_Textize()}{\ccode{esl\_sq\_Textize()}} & Convert an \ccode{ESL\_SQ} to text mode.\\
\hyperlink{func:esl_sq_GuessAlphabet()}{\ccode{esl\_sq\_GuessAlphabet()}} & Guess alphabet type of a single sequence.\\
\apisubhead{Other functions that operate on sequences.}\\
\hyperlink{func:esl_sq_SetName()}{\ccode{esl\_sq\_SetName()}} & Format and set a name of a sequence.\\
\hyperlink{func:esl_sq_SetAccession()}{\ccode{esl\_sq\_SetAccession()}} & Format and set the accession field in a sequence.\\
\hyperlink{func:esl_sq_SetDesc()}{\ccode{esl\_sq\_SetDesc()}} & Format and set the description field in a sequence.\\
\hyperlink{func:esl_sq_SetSource()}{\ccode{esl\_sq\_SetSource()}} & Format and set the source name field in a sequence.\\
\hyperlink{func:esl_sq_CAddResidue()}{\ccode{esl\_sq\_CAddResidue()}} & Add one residue (or terminal NUL) to a text seq.\\
\hyperlink{func:esl_sq_XAddResidue()}{\ccode{esl\_sq\_XAddResidue()}} & Add one residue (or terminal sentinel) to digital seq.\\
\hyperlink{func:esl_sq_GetFromMSA()}{\ccode{esl\_sq\_GetFromMSA()}} & Get a single sequence from an MSA.\\
\hyperlink{func:esl_sq_FetchFromMSA()}{\ccode{esl\_sq\_FetchFromMSA()}} & Fetch a single sequence from an MSA.\\
\hline
\end{tabular}
}
\end{center}
\caption{The \eslmod{sq} API.}
\label{tbl:sq_api}
\end{table}
\subsection{Example of getting data in and out of an \ccodeincmd{ESL\_SQ}}
The easiest way to create a new \ccode{ESL\_SQ} object is with the
\ccode{esl\_sq\_CreateFrom()} function, which just takes character
strings for a sequence and its name (and also, optionally, an
accession, description, and/or secondary structure annotation string).
You can also build up (and/or change and manipulate) the contents of
an \ccode{ESL\_SQ} object by accessing the name, accession,
description, sequence, and structure annotation line more directly.
This code shows examples of both approaches:
\input{cexcerpts/sq_example}
A few things to notice about that code:
\begin{itemize}
\item Every sequence has a name and a sequence. If we didn't want to
add the optional accession, description, or structure annotation
line, we'd pass \ccode{NULL} for those arguments to
\ccode{esl\_sq\_CreateFrom()}.
\item An RNA secondary structure annotation line is shown here as part
of the example, but it's really sort of a more advanced
feature. It's good to know it's there (see the \eslmod{wuss} module
for more information about how \Easel\ annotates RNA structure) but
you can ignore it if you're getting started.
\item The \ccode{esl\_sq\_Set*} functions use the same syntax as C's
\ccode{*printf()} family, which gives you a flexible way to create
new sequence names, accessions, and descriptions automatically; a
one-line illustration follows this list.
\item The sequence in \ccode{sq->seq} is just a C string. (Here it's a
copy of the \ccode{testseq} string.) That has a couple of
implications. One is that it's a verbatim copy of what you provided;
\Easel\ doesn't know (or care) whether it's DNA or protein sequence,
upper case or lower case, or if it contains illegal non-sequence
characters. With a text mode sequence, that's \emph{your} problem!
For more robustness and defined biosequence alphabets, read on below
about digital mode sequences. The second implication is that, as a C
string, the \ccode{n} residues are indexed \ccode{0..sq->n-1}, not
\ccode{1..sq->n}.
\item If you're going to directly copy a sequence of length \ccode{n}
into a \ccode{sq->seq} field, note the \ccode{esl\_sq\_GrowTo()}
call, which makes sure the sequence object is allocated with enough
space for \ccode{n} residues; and don't forget to set \ccode{sq->n}.
\item The structure annotation \ccode{sq->ss} is also a C string,
indexed identically to \ccode{sq->seq}, but it's optional, and isn't
allocated by default; \ccode{esl\_sq\_GrowTo()} calls will only
reallocate for the structure annotation string after it's been
allocated at least once. Hence the \ccode{esl\_strdup} call in the
example, which duplicates (allocates and copies) the annotation into
\ccode{sq->ss}.
\end{itemize}
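As a one-line illustration of the printf-style interface mentioned above
(the format string and the \ccode{seqidx} variable are made up):
\begin{verbatim}
esl_sq_SetName(sq, "seq%d", seqidx);  /* "seq42" if seqidx is 42 */
\end{verbatim}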
To get simple character strings back out of an \ccode{ESL\_SQ} object,
you're encouraged to peek inside the object. (Yeah, I know, object
oriented design says that there should be methods for this,
independent of the object's implementation; but I balance that against
simplicity, and here, simplicity wins.) The object is defined and
documented in \ccode{esl\_sq.h}. It contains various information; the
stuff you need to know is:
\input{cexcerpts/sq_sq}
Ignore the \ccode{dsq} field for now; we're about to get to it, when
we talk about digital mode sequences.
The \ccode{ESL\_SQ} object itself doesn't particularly care about the
contents of these text fields, so long as they're C strings, and so
long as \ccode{n} is the length of the \ccode{seq} (and optional
\ccode{ss}, if it's non-\ccode{NULL}) strings. However, sequence file
formats do impose some expectations on the annotation strings, and it
would be a Good Idea to adhere to them:
\begin{sreitems} {\emcode{desc}}
\item [\emcode{name}] A sequence name is almost always expected to be
a single ``word'' (no whitespace), like \ccode{SNRPA\_HUMAN}.
\item [\emcode{acc}] An accession is also usually expected to be a
single ``word'' with no whitespace, like \ccode{P09012}. Database
accessions only make sense if you know what database they're for, so
when sequences might be from different databases, you'll sometimes
see accessions prefixed with a code indicating the source database,
as in something like \ccode{UniProt:P09012}. Again, \Easel\ itself
isn't enforcing the format of this string, so your application is
free to create its own accession/version/database format as needed.
\item [\emcode{desc}] A description line is something like \ccode{U1
small nuclear ribonucleoprotein A (U1 snRNP protein A) (U1A protein)
(U1-A).}; a one-line summary of what the sequence is. You can expect
the description line to show up in the tabular output of sequence
analysis applications, so ideally you want it to be short and sweet
(so it fits on one line with a name, accession, score, coords, and
other information from an analysis app). You also don't want the
description line to end in a newline (\verb+\n+) character, or the
description line will introduce unexpected line breaks in these
tabular output files.
\end{sreitems}
You can reach into a \ccode{ESL\_SQ} and copy or modify any of these
strings, but don't try to overwrite them with a larger string unless
You Know What You're Doing. Their memory allocations are managed by
the \ccode{ESL\_SQ} object. Instead, use the appropriate
\ccode{esl\_sq\_Set*} function to overwrite an annotation field.
The \eslmod{sq} module isn't much use by itself; it's a building block
for several other modules. For example, one of the main things you'll
want to do with sequences is to read them from a file. For examples
and documentation of sequence input, see the \eslmod{sqio} module.
\subsection{Example of using a digital \ccodeincmd{ESL\_SQ}}
What follows might make more sense if you've read about the
\eslmod{alphabet} module first. \eslmod{alphabet}'s documentation
explains how \Easel uses an internal digital biosequence ``alphabet'',
where residues are encoded as small integers, suitable for direct use
as array indices. But here's an example anyway, of creating and
accessing a digital mode sequence:
\input{cexcerpts/sq_example2}
Things to notice about this code:
\begin{itemize}
\item An \ccode{ESL\_SQ} object has a \ccode{sq->seq} if it's in text
mode, and \ccode{sq->dsq} if it's in digital mode. These two fields are
mutually exclusive; one of them is \ccode{NULL}.
\item If you looked at the contents of \ccode{sq->dsq} in either of
the objects, you'd see that each residue is encoded as a value
\ccode{0..3}, representing (for an RNA alphabet) the residues
\ccode{ACGU}.
\item That representation is defined by the digital RNA alphabet
\ccode{abc}, which was the first thing we created.
\item In digital mode, both the sequence residues and the optional
secondary structure characters are indexed \ccode{1..n}.
\item To make the digital sequence in the first sequence object, we
created a digital sequence \ccode{dsq} by encoding the
\ccode{testseq} using \ccode{esl\_abc\_CreateDsq()}; this
function allocated new memory for \ccode{dsq}, so we have to
free it. An \ccode{ESL\_DSQ *} is just a special character array;
it's not a full-fledged \Easel\ object, and so there's no
conventional \ccode{Create()},\ccode{Destroy()} function pair.
\item In the second sequence object, we used
\ccode{esl\_abc\_Digitize()} to encode the \ccode{testseq} directly
into space that the \ccode{sq2} object already had allocated, saving
us the temporary allocation of another \ccode{dsq}, because we
created it in digital mode (\ccode{esl\_sq\_CreateDigital()}) and
made it big enough to hold \ccode{n} digital residues with
\ccode{esl\_sq\_GrowTo()}. Notice that \ccode{esl\_sq\_GrowTo()} is
smart enough to know whether to grow the digital or the text mode
sequence field.
\item By convention, when using digital sequences, we usually keep
track of (and pass as arguments) both a digital sequence \ccode{dsq}
and its length \ccode{n}, and we also need to have the digital
alphabet itself \ccode{abc} available to know what the \ccode{dsq}
means; with text mode sequences, we usually just pass the string
pointer. Thus the \ccode{esl\_sq\_CreateDigitalFrom()} function
takes \ccode{abc}, \ccode{dsq}, and \ccode{n} as arguments, whereas
the counterpart text mode \ccode{esl\_sq\_CreateFrom()} only
took a C string \ccode{seq}. This is solely a convention: digital
sequences begin and end with a special sentinel character, so we
could always count the length of a \ccode{dsq} if we had to (using
\ccode{esl\_abc\_dsqlen()}, for example), much as we can use ANSI
C's \ccode{strlen()} to count the number of chars in a C string up
to the sentinel \verb+\0+ \ccode{NUL} character at the end.
\item To get the structure annotation to be indexed \ccode{1..n} for
consistency with the \ccode{dsq}, even though the annotation string
is still just an ASCII string, it's offset by one, and the leading
character is set by convention to a \verb+\0+. Therefore to access
the whole structure string (for printing, for instance), you want to
access \ccode{sq->ss+1}. This is a hack, but it's a simple one, so
long as you don't forget about the convention.
\item Because the original sequence has been encoded, you may not get
the original sequence back out when you decode the digital values as
alphabet symbols. \ccode{abc->sym[sq2->dsq[3]]}, for example, takes
the third digital residue and looks it up in the alphabet's symbol
table, returning the canonical character it's
representing. Upper/lower case distinctions are lost, for example;
digital alphabet symbol tables are uniformly upper case. This
example also shows another effect: the input \ccode{testseq}
contains T's, but since the digital alphabet was declared as RNA,
the symbol table represents those residues as U's when you access
them.
\item In that respect, a more careful example should have checked the
return status of the \ccode{esl\_abc\_CreateDsq()} and
\ccode{esl\_abc\_Digitize()} calls. These have a normal failure
mode, when the input text sequence contains one or more ASCII
characters that are unrecognized and therefore invalid in the
digital alphabet. If this had happened, these functions would have
returned \ccode{eslEINVAL} instead of \ccode{eslOK}. We can get away
without checking, however, because the functions just replace any
invalid character with an ``any'' character (representing \ccode{N}
for DNA or RNA, \ccode{X} for protein).
\end{itemize}
For more information about how digital sequence alphabets work, see
the \eslmod{alphabet} module.
\chapter{System Design}
POXVine consists of three main components.
\begin{itemize}
\item The \emph{host mapper} is responsible for mapping the virtual network entities (hosts and switches) onto the physical topology. This mapping can be done based on different heuristics, so POXVine allows you to customize the host mapper. We have developed a host mapper, \emph{MinSwitchMapper}, which tries to minimize the number of \emph{physical switches} that contain rules for the virtual topology.
\item The \emph{NetworkMapper} is an application built on top of the \emph{POX} controller which uses the \emph{virtual-to-physical} mappings to add the required OpenFlow routing rules on the Mininet switches, so that the virtual hosts can talk to one another. Another important design consideration is that the \emph{virtual network abstraction} must be preserved; that is, if a packet is to flow across a route in the virtual topology, then on the physical topology it must traverse the virtual network entities in the same order.
\item The \emph{Mininet} infrastructure is used to emulate the physical network topology and the virtual hosts which are connected to the emulated physical switches (according to the \emph{virtual-to-physical} mappings).
\end{itemize}
I explain the individual components in the coming sections.
\begin{figure}
\noindent
\makebox[\textwidth]{\includegraphics[width=17cm]{Figures/poxvine.png}}%
\caption{POXVine Architecture}
\end{figure}
\section{MinSwitchMapper}
The host mapper module is responsible for finding the \emph{virtual-to-physical} mappings of the virtual network entities, i.e.\ on which physical hosts the virtual hosts and switches are placed. This mapping can be done according to various considerations, like \emph{maximizing the number of virtual hosts, minimizing the number of switches mapped to a virtual topology, greedy host allocation, providing bandwidth guarantees, etc.} \\
All the virtual network entities are mapped to physical hosts, which are connected by the physical network topology. Consider the network graph which is formed by using only the switches and links that are required to connect all the physical hosts (we use the shortest path between two hosts in the network graph). Figure 2.2 demonstrates an example of such a graph. \\
\begin{figure}
\noindent
\makebox[\textwidth]{\includegraphics[width=17cm]{Figures/networkg.png}}%
\caption{Example of the Network Graph Heuristic. (A) shows the physical network topology. Suppose if the virtual hosts are mapped to the hosts as shown in (A). The network graph consisting of shortest paths to all the hosts are shown in (B)}
\end{figure}
We have developed \emph{MinSwitchMapper}, which minimises the diameter of the network graph connecting the hosts of the virtual topology. The basis for this heuristic is that the traffic of this tenant's hosts is confined to the \emph{smallest portion} of the physical topology. This also minimises the number of switches where rules regarding this virtual topology are installed, thus increasing the number of tenants we can accommodate in POXVine (provided the physical host capacity is sufficient).
\section{NetworkMapper}
The \emph{NetworkMapper} module is an application built on top of the POX controller. The role of the NetworkMapper is to use the \emph{virtual-to-physical mappings} generated by the host mapper module and add the required routing rules on the Mininet switches for the virtual hosts of a tenant. One important design decision is that the NetworkMapper preserves the \emph{virtual network abstraction}. Let us suppose there are two virtual hosts \emph{v1} and \emph{v2}, connected by a path of two switches \emph{vs1} and \emph{vs2}, i.e.\ $v1 \rightarrow vs1 \rightarrow vs2 \rightarrow v2$. Irrespective of the mapping, the NetworkMapper must add the rules such that traffic from $v1$ to $v2$ traverses $vs1$, $vs2$ and $v2$ in that order.
\subsection{Route Tagging}
\begin{figure}
\noindent
\makebox[\textwidth]{\includegraphics[width=12cm]{Figures/rt.png}}%
\caption{Mapping of \emph{v1, v2, vs1 and vs2}. (1),(2) and (3) depict the three different rules a packet from $v1$ to $v2$ needs at switch $s3$.}
\end{figure}
Consider the two virtual hosts \emph{v1} and \emph{v2}, connected by the following path in the virtual topology.
\begin{center}
$v1 \rightarrow vs1 \rightarrow vs2 \rightarrow v2 $
\end{center}
Consider the mapping as shown in Figure 2.3. The path taken by a packet from $v1$ to $v2$ must go to $vs1$, then $vs2$, then $v2$. At switch $s3$, we need three different rules for this packet.
\begin{enumerate}
\item The packet from $v1$ reaches $s3$ for the first time. The packet is sent out to $vs1$.
\item The packet is received from $vs1$. The packet is sent out to $s2$ and will subsequently be sent to $vs2$.
\item The packet is received after traversing $vs2$. The packet is sent out to $v2$.
\end{enumerate}
Therefore, the rules added at $s3$ cannot differentiate these three kinds of flows using just the IP headers. For this, we incorporate \emph{RouteTags} in the packet header \cite{simple}. In our case, the VLAN ID header field is used to store both the \emph{tenant ID} and the \emph{RouteTag}. Using the RouteTag, we can differentiate which part of the route the packet is in. An example of the rules on switch $s3$ is listed below.
\begin{center}
Rule 1 $\rightarrow$ \\
$Match$ : IP Src=$v1$ $|$ IP Dst=$v2$ $|$ RouteTag=1 \\
$Action$ : Output=$vs1$ $|$ RouteTag=2 \\
Rule 2 $\rightarrow$ \\
$Match$ : IP Src= $v1$ $|$ IP Dst= $v2$ $|$ RouteTag = 2 \\
$Action$ : Output=$s2$ $|$ RouteTag = 3 \\
Rule 3 $\rightarrow$ \\
$Match$ : IP Src= $v1$ $|$ IP Dst= $v2$ $|$ RouteTag = 3 \\
$Action$ : Output=$v2$
\end{center}
Thus, we identify straight paths in the network route and assign a Route Tag to each of them, so that a switch can distinguish which
part of the network route the packet is in. We will look at Route Tag calculation in the next chapter.
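As an illustrative sketch (not the actual POXVine source), Rule 1 above could be installed from a POX application roughly as follows. The IP addresses, the output port and the VLAN packing function \emph{make\_vlan} are placeholders; POXVine stores the tenant ID and the RouteTag together in the VLAN ID field as described above.
\begin{verbatim}
import pox.openflow.libopenflow_01 as of

def make_vlan(tenant_id, route_tag):
    # Hypothetical packing of tenant ID and RouteTag into the
    # 12 available VLAN ID bits.
    return (tenant_id << 6) | route_tag

def install_rule1(connection, vs1_port):
    msg = of.ofp_flow_mod()
    msg.match.dl_type = 0x0800              # IPv4
    msg.match.nw_src  = "10.0.0.1"          # v1 (placeholder)
    msg.match.nw_dst  = "10.0.0.2"          # v2 (placeholder)
    msg.match.dl_vlan = make_vlan(1, 1)     # tenant 1, RouteTag 1
    msg.actions.append(
        of.ofp_action_vlan_vid(vlan_vid=make_vlan(1, 2)))
    msg.actions.append(of.ofp_action_output(port=vs1_port))
    connection.send(msg)
\end{verbatim}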
\subsection {Switch Tunnelling}
As seen in the previous example, we know that $s3$ needs to have three different rules for the different Route Tag
packets. Let us consider switch $s2$. Adding three different rules on $s2$ is wasteful, as for $s2$ the route tag is
of no importance: it just sends packets from $s1$ to $s3$ and from $s3$ to $s1$. Inspired by \cite{simple}, we establish
switch tunnels in the network. Thus, if we divide the physical network route into \emph{segments} (where the start and end of each segment is connected to a virtual entity), then the switches in the middle of a segment do not need fine-grained rules; they only need to route the packet to the end-switch of the segment. \\
Thus, the NetworkMapper adds, on each switch, routing rules for reaching every other switch. At the start of each \emph{segment}, the rules added modify the source MAC address (a field unused by the POXVine system) to indicate the end-switch of the segment. The switches in the middle will just route the packet to that switch. Revisiting the example in Figure 2.3, switch $s1$ will have the following rule.
\begin{center}
$Match$ : IP Src=$v1$ $|$ IP Dst=$v2$ \\
$Action$ : Output=$s2$ $|$ RouteTag=1 $|$ MAC Src=$s3$\\
\end{center}
Switch $s2$ will have switch tunnel rules to switch $s1$ and $s3$.
\begin{center}
Rule 1 $\rightarrow$\\
$Match$ : MAC Src=$s3$ \\
$Action$ : Output=$s3's\ Port\ Number$ \\
Rule 2 $\rightarrow$\\
$Match$ : MAC Src=$s1$ \\
$Action$ : Output=$s1's\ Port\ Number$ \\
\end{center}
\section{Mininet Infrastructure Creator}
POXVine uses Mininet to create an \emph{emulated} physical/virtual topology as per the specifications. From the topology configurations and the virtual-to-physical mappings, a hybrid topology configuration is created for the Mininet Infrastructure Creator; it comprises all the physical topology's switches and links and, according to the virtual-to-physical mappings, the virtual hosts and switches connected to the corresponding physical switches. We do not create the physical hosts in Mininet. Future versions of POXVine could run the virtual hosts and switches on real 'physical' hosts.
\chapter{Using Python on a Macintosh \label{using}}
\sectionauthor{Bob Savage}{[email protected]}
Python on a Macintosh running Mac OS X is in principle very similar to
Python on any other \UNIX{} platform, but there are a number of additional
features such as the IDE and the Package Manager that are worth pointing out.
Python on Mac OS 9 or earlier can be quite different from Python on
\UNIX{} or Windows, but is beyond the scope of this manual, as that platform
is no longer supported, starting with Python 2.4. See
\url{http://www.cwi.nl/\textasciitilde jack/macpython} for installers
for the latest 2.3 release for Mac OS 9 and related documentation.
\section{Getting and Installing MacPython \label{getting-OSX}}
Mac OS X 10.4 comes with Python 2.3 pre-installed by Apple. However, you are
encouraged to install the most recent version of Python from the Python website
(\url{http://www.python.org}). A ``universal binary'' build of Python 2.5, which
runs natively on the Mac's new Intel and legacy PPC CPUs, is available there.
What you get after installing is a number of things:
\begin{itemize}
\item A \file{MacPython 2.5} folder in your \file{Applications} folder. In here
you find IDLE, the development environment that is a standard part of official
Python distributions; PythonLauncher, which handles double-clicking Python
scripts from the Finder; and the ``Build Applet'' tool, which allows you to
package Python scripts as standalone applications on your system.
\item A framework \file{/Library/Frameworks/Python.framework}, which includes
the Python executable and libraries. The installer adds this location to your
shell path. To uninstall MacPython, you can simply remove these three
things. A symlink to the Python executable is placed in /usr/local/bin/.
\end{itemize}
The Apple-provided build of Python is installed in
\file{/System/Library/Frameworks/Python.framework} and \file{/usr/bin/python},
respectively. You should never modify or delete these, as they are
Apple-controlled and are used by Apple- or third-party software.
IDLE includes a help menu that allows you to access Python documentation. If you
are completely new to Python you should start reading the tutorial introduction
in that document.
If you are familiar with Python on other \UNIX{} platforms you should read the
section on running Python scripts from the \UNIX{} shell.
\subsection{How to run a Python script}
Your best way to get started with Python on Mac OS X is through the IDLE
integrated development environment, see section \ref{IDE} and use the Help menu
when the IDE is running.
If you want to run Python scripts from the Terminal window command line or from
the Finder you first need an editor to create your script. Mac OS X comes with a
number of standard \UNIX{} command line editors, \program{vim} and
\program{emacs} among them. If you want a more Mac-like editor, \program{BBEdit}
or \program{TextWrangler} from Bare Bones Software (see
\url{http://www.barebones.com/products/bbedit/index.shtml}) are good choices, as
is \program{TextMate} (see \url{http://macromates.com/}). Other editors include
\program{Gvim} (\url{http://macvim.org}) and \program{Aquamacs}
(\url{http://aquamacs.org}).
To run your script from the Terminal window you must make sure that
\file{/usr/local/bin} is in your shell search path.
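For instance, with a \program{bash}-style shell you could add a line like the
following to your \file{\textasciitilde/.profile}:

\begin{verbatim}
export PATH=/usr/local/bin:$PATH
\end{verbatim}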
To run your script from the Finder you have two options:
\begin{itemize}
\item Drag it to \program{PythonLauncher}
\item Select \program{PythonLauncher} as the default application to open your
script (or any .py script) through the finder Info window and double-click it.
\program{PythonLauncher} has various preferences to control how your script is
launched. Option-dragging allows you to change these for one invocation, or
use its Preferences menu to change things globally.
\end{itemize}
\subsection{Running scripts with a GUI \label{osx-gui-scripts}}
With older versions of Python, there is one Mac OS X quirk that you need to be
aware of: programs that talk to the Aqua window manager (in other words,
anything that has a GUI) need to be run in a special way. Use \program{pythonw}
instead of \program{python} to start such scripts.
With Python 2.5, you can use either \program{python} or \program{pythonw}.
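For example, to launch a GUI script named \file{myscript.py} (a placeholder
name) from the Terminal window:

\begin{verbatim}
pythonw myscript.py
\end{verbatim}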
\subsection{Configuration}
Python on OS X honors all standard \UNIX{} environment variables such as
\envvar{PYTHONPATH}, but setting these variables for programs started from the
Finder is non-standard as the Finder does not read your \file{.profile} or
\file{.cshrc} at startup. You need to create a file \file{\textasciitilde
/.MacOSX/environment.plist}. See Apple's Technical Document QA1067 for
details.
For more information on installing Python packages in MacPython, see section
\ref{mac-package-manager}, ``Installing Additional Python Packages.''
\section{The IDE\label{IDE}}
MacPython ships with the standard IDLE development environment. A good
introduction to using IDLE can be found at
\url{http://hkn.eecs.berkeley.edu/~dyoo/python/idle_intro/index.html}.
\section{Installing Additional Python Packages \label{mac-package-manager}}
There are several methods to install additional Python packages:
\begin{itemize}
\item \url{http://pythonmac.org/packages/} contains selected compiled packages
for Python 2.5, 2.4, and 2.3.
\item Packages can be installed via the standard Python distutils mode
(\samp{python setup.py install}).
\item Many packages can also be installed via the \program{setuptools}
extension.
\end{itemize}
\section{GUI Programming on the Mac}
There are several options for building GUI applications on the Mac with Python.
\emph{PyObjC} is a Python binding to Apple's Objective-C/Cocoa framework, which
is the foundation of most modern Mac development. Information on PyObjC is
available from \url{http://pyobjc.sourceforge.net}.
The standard Python GUI toolkit is \module{Tkinter}, based on the cross-platform
Tk toolkit (\url{http://www.tcl.tk}). An Aqua-native version of Tk is bundled
with OS X by Apple, and the latest version can be downloaded and installed from
\url{http://www.activestate.com}; it can also be built from source.
\emph{wxPython} is another popular cross-platform GUI toolkit that runs natively
on Mac OS X. Packages and documentation are available from
\url{http://www.wxpython.org}.
\emph{PyQt} is another popular cross-platform GUI toolkit that runs natively on
Mac OS X. More information can be found at
\url{http://www.riverbankcomputing.co.uk/pyqt/}.
\section{Distributing Python Applications on the Mac}
The ``Build Applet'' tool that is placed in the MacPython 2.5 folder is fine for
packaging small Python scripts on your own machine to run as a standard Mac
application. This tool, however, is not robust enough to distribute Python
applications to other users.
The standard tool for deploying standalone Python applications on the Mac is
\program{py2app}. More information on installing and using py2app can be found
at \url{http://undefined.org/python/\#py2app}.
\section{Application Scripting}
Python can also be used to script other Mac applications via Apple's Open
Scripting Architecture (OSA); see
\url{http://appscript.sourceforge.net}. Appscript is a high-level, user-friendly
Apple event bridge that allows you to control scriptable Mac OS X applications
using ordinary Python scripts. Appscript makes Python a serious alternative to
Apple's own \emph{AppleScript} language for automating your Mac. A related
package, \emph{PyOSA}, is an OSA language component for the Python scripting
language, allowing Python code to be executed by any OSA-enabled application
(Script Editor, Mail, iTunes, etc.). PyOSA makes Python a full peer to
AppleScript.
\section{Other Resources}
The MacPython mailing list is an excellent support resource for Python users and
developers on the Mac:
\url{http://www.python.org/community/sigs/current/pythonmac-sig/}
Another useful resource is the MacPython wiki:
\url{http://wiki.python.org/moin/MacPython}
\documentclass[12pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{makeidx}
\usepackage{graphicx}
\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}
\usepackage[hidelinks]{hyperref}
\author{Bert Peters}
\title{Homework 1}
\begin{document}
\maketitle
\section{Scheduling conflicts}
I had a conflict that prevented me from attending the first lecture, due to a meeting for the teaching assistants for the Operating Systems course. This was a one time thing, and I do not expect any further scheduling conflicts.
\section{Components in WWW search systems}
I can discern 3 main components in WWW search systems:
\begin{itemize}
\item A retrieval system. This system is responsible for getting the information from external sources. It should distribute the requests to external sources so that no source is queried too much, while still indexing the entire source. It could also clean up the retrieved data somewhat, so that it is easier to index.
\item An indexing system, storing the retrieved data in a structured manner. It should link resources to concepts, whether it is keywords, or actual logical concepts.
\item A weighting system, determining the relative importance of the resources. This is necessary to rank the individual results in a result set. Page rank is a widely known example of this, but any network centrality measure could be used.
\end{itemize}
Arguably, a display component can be considered as well, but that is rather trivial when you already consider the above components.
\section{Implementing components}
For the retrieval system, you typically want some spider program that can visit web pages and simply follow all links on that page. It should then queue all of the links it has not visited (or queued) for retrieval and continue this process ad infinitum. Since retrieving data is not much of a CPU-intensive process, you can have an asynchronous retrieval system that performs many requests at the same time and a synchronous process that handles the responses as they come in.
For the indexing system, you need a quick way of looking up certain records related to concepts. A simple way to do this is to store your concepts and records in some SQL database and link them with a junction table, adding indices where appropriate. Really though, anything that can offer something like $O (\log n)$ complexity for lookups is nice. Anything else probably will not deal well with the volume of data.
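As a rough sketch of such an index schema, using Python's built-in
\texttt{sqlite3} module (all table and column names are illustrative):
\begin{verbatim}
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE concept (id INTEGER PRIMARY KEY, term TEXT UNIQUE);
    CREATE TABLE record  (id INTEGER PRIMARY KEY, url  TEXT UNIQUE);
    CREATE TABLE concept_record (            -- junction table
        concept_id INTEGER REFERENCES concept(id),
        record_id  INTEGER REFERENCES record(id)
    );
    CREATE INDEX ix_by_concept ON concept_record(concept_id);
""")
\end{verbatim}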
As mentioned above, the weighting system needs to determine relative importance of records. This can be done using any of the various graph centrality measures. Most centrality measures are $O(n^2)$ to compute, so approximations need to be made.
\section{Unsolved problems searching for WWW information}
\begin{description}
\item[The natural language problem.] A computer needs to know what you mean, but natural language is hard, with its ambiguities and nuances. This has been partially solved by training users for the last few decades that they need to talk some weird haikus to search engines, but that still does not solve homonyms and related issues.
\item[Spam prevention.] As long as there have been search engines, there have been websites trying to exploit flaws in the ranking algorithms. When Altavista ranked you based on how frequently a word occurred on your page, pages became filled with hidden text. Google's PageRank has introduced paid links.
\item[Unstructured data.] The web is full of ``natural'' content, but the information is generally more useful when structured. It is difficult to extract such structured data from unstructured sources.\footnote{This is actually partially solved with the introduction of data annotations based on \url{https://schema.org/}.}
\end{description}
\section{Personal search problems}
\begin{description}
\item[Controlling your search bubble] Search engines heavily personalize your search results to what they think is you. This is really great when it allows me to search for topics related to \LaTeX~and not be questioned about the results not being suitable for university computers, but in other cases it is rather dangerous, especially in political discourse.
Services like duckduckgo enable you to have no bubble, but a controllable bubble would be even better.
\item[Credibility of sources] Search engines have crawled a huge amount of sources, and not all of them are credible. Most notably, some websites are high on the search results page but offer no content whatsoever.\footnote{A notable example is \url{http://installion.co.uk}, which offers ``tutorials'' for installing things without any useful content, yet they are the top result when looking for such tutorials.} It would be nice if you could easily flag them as useless.
\item[Topical searches] Most general purpose search engines only have keywords, which works fine, but search terms often belong to multiple fields, this can be problematic.
It would be great to be able to specify ``I am currently looking for internet security things'' and then search for shellshock without finding anything about the physical phenomenon.
\end{description}
\end{document}
\section{\module{tempfile} ---
Generate temporary file names.}
\declaremodule{standard}{tempfile}
\modulesynopsis{Generate temporary file names.}
\indexii{temporary}{file name}
\indexii{temporary}{file}
This module generates temporary file names. It is not \UNIX{} specific,
but it may require some help on non-\UNIX{} systems.
Note: the module does not create temporary files, nor does it
automatically remove them when the current process exits or dies.
The module defines a single user-callable function:
\begin{funcdesc}{mktemp}{}
Return a unique temporary filename. This is an absolute pathname of a
file that does not exist at the time the call is made. No two calls
will return the same filename.
\end{funcdesc}
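For example (a minimal sketch; as noted above, creating and removing the
file is entirely the caller's responsibility, and the name shown in the
comment is only illustrative):
\begin{verbatim}
import os
import tempfile

name = tempfile.mktemp()    # e.g. '/usr/tmp/@1234.0'
f = open(name, 'w')         # the caller creates the file ...
f.write('scratch data\n')
f.close()
os.unlink(name)             # ... and removes it when done
\end{verbatim}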
The module uses two global variables that tell it how to construct a
temporary name. The caller may assign values to them; by default they
are initialized at the first call to \function{mktemp()}.
\begin{datadesc}{tempdir}
When set to a value other than \code{None}, this variable defines the
directory in which filenames returned by \function{mktemp()} reside.
The default is taken from the environment variable \envvar{TMPDIR}; if
this is not set, either \file{/usr/tmp} is used (on \UNIX{}), or the
current working directory (all other systems). No check is made to
see whether its value is valid.
\end{datadesc}
\begin{datadesc}{template}
When set to a value other than \code{None}, this variable defines the
prefix of the final component of the filenames returned by
\function{mktemp()}. A string of decimal digits is added to generate
unique filenames. The default is either \file{@\var{pid}.} where
\var{pid} is the current process ID (on \UNIX{}), or \file{tmp} (all
other systems).
\end{datadesc}
\strong{Warning:} if a \UNIX{} process uses \function{mktemp()}, then
calls \function{fork()} and both parent and child continue to use
\function{mktemp()}, the processes will generate conflicting temporary
names. To resolve this, the child process should assign \code{None}
to \code{template}, to force recomputing the default on the next call
to \function{mktemp()}.
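A minimal sketch of that workaround, using only the module attributes
documented above:
\begin{verbatim}
import os
import tempfile

pid = os.fork()
if pid == 0:
    # Child process: force recomputation of the default template on
    # the next call to mktemp(), so its names no longer collide with
    # the parent's.
    tempfile.template = None
name = tempfile.mktemp()
\end{verbatim}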
\subsection{\texttt{bingle.py}}\label{code:bingle}
\begin{verbatim}
#bingle.py: A simple two-particle collision simulation using
# ESyS-Particle
# Author: D. Weatherley
# Date: 15 May 2007
# Organisation: ESSCC, University of Queensland
# (C) All rights reserved, 2007.
#
#
#import the appropriate ESyS-Particle modules:
from esys.lsm import *
from esys.lsm.util import Vec3, BoundingBox
#instantiate a simulation object
#and initialise the neighbour search algorithm:
sim = LsmMpi(numWorkerProcesses=1, mpiDimList=[1,1,1])
sim.initNeighbourSearch(
particleType="NRotSphere",
gridSpacing=2.5,
verletDist=0.5
)
#specify the number of timesteps and timestep increment:
sim.setNumTimeSteps(10000)
sim.setTimeStepSize(0.001)
#specify the spatial domain for the simulation:
domain = BoundingBox(Vec3(-20,-20,-20), Vec3(20,20,20))
sim.setSpatialDomain(domain)
#add the first particle to the domain:
particle=NRotSphere(id=0, posn=Vec3(-5,5,-5), radius=1.0, mass=1.0)
particle.setLinearVelocity(Vec3(1.0,-1.0,1.0))
sim.createParticle(particle)
#add the second particle to the domain:
particle=NRotSphere(id=1, posn=Vec3(5,5,5), radius=1.5, mass=2.0)
particle.setLinearVelocity(Vec3(-1.0,-1.0,-1.0))
sim.createParticle(particle)
#specify the type of interactions between colliding particles:
sim.createInteractionGroup(
NRotElasticPrms(
name = "elastic_repulsion",
normalK = 10000.0,
scaling = True
)
)
#Execute the simulation:
sim.run()
\end{verbatim}
\section{General concepts using 1D DC resistivity inversion}\label{sec:dc1d}
{\em See cpp/py files in the directory \file{doc/tutorial/code/dc1d}.}
\subsection{Smooth inversion}\label{sec:dc1dsmooth}
{\em Example file \file{dc1dsmooth.cpp}.}\\
Included into the GIMLi library are several electromagnetic 1d forward operators.
For direct current resistivity there is a semi-analytical solution using infinite sums that are approximated by Ghosh filters.
The resulting function calculates the apparent resistivity of any array for a given resistivity and thickness vector.
There are two main parameterisation types:
\begin{itemize}
\item a fixed parameterisation where only the parameters are varied
\item a variable parameterisation where parameters and geometry is varied
\end{itemize}
Although for 1d problems the latter is the typical one (resistivity and thickness), we start with the first since it is more general for 2d/3d problems and actually easier.
Accordingly, in the file dc1dmodelling.h/cpp two classes called \cw{DC1dRhoModelling} and \cw{DC1dBlockModelling} are defined.
For the first, we define a thickness vector and create a mesh using the function \cw{createMesh1d}.
Then the forward operator is initialized with the data.
\begin{lstlisting}[language=C++]
RMatrix abmnr; loadMatrixCol( abmnr, dataFile ); //! read data
RVector ab2 = abmnr[0], mn2 = abmnr[1], rhoa = abmnr[2]; //! 3 columns
RVector thk( nlay-1, max(ab2) / 2 / ( nlay - 1 ) ); //! const. thickn.
DC1dRhoModelling f( thk, ab2, mn2 ); //! initialise forward operator
\end{lstlisting}
Note that the mesh generation can also be done automatically using another constructor.
However, most applications will read or create a mesh within the application and pass it to the forward operator.
By default, the transformations for data and model are identity transformations.
We initialise two logarithmic transformations for the resistivity and the apparent resistivity by
\begin{lstlisting}[language=C++]
RTransLog transRho;
RTransLog transRhoa;
\end{lstlisting}
Alternatively, we could set lower/upper bounds for the resistivity using
\begin{lstlisting}[language=C++]
RTransLogLU transRho( lowerbound, upperbound);
\end{lstlisting}
Appendix \ref{app:trans} gives an overview on available transformation functions.
Next, the inversion is initialized and a few options are set
\begin{lstlisting}[language=C++]
RInversion inv( data.rhoa(), f, verbose );
inv.setTransData( transRhoa ); //! data transform
inv.setTransModel( transRho ); //! model transform
inv.setRelativeError( errPerc / 100.0 ); //! constant relative error
\end{lstlisting}
A starting model of constant values (median apparent resistivity) is defined
\begin{lstlisting}[language=C++]
RVector model( nlay, median( data.rhoa() ) ); //! constant vector
inv.setModel( model ); //! starting model
\end{lstlisting}
Finally, the inversion is called and the model is retrieved using \lstinline|model = inv.run();|
A very important parameter is the regularisation parameter $\lambda$ that controls the strength of the smoothness constraints (which are the default constraint for any 1d/2d/3d mesh).
Whereas $w^c$ and $w^m$ are dimensionless and 1 by default, $\lambda$ has, according to eq.~(\ref{eq:min}), the reciprocal and squared unit of $m$ and can thus take completely different values for different problems\footnote{The regularisation parameter therefore has to be treated logarithmically.}.
However, since the logarithmic transform is often used, the default value of $\lambda=20$ is often a good first guess.
Other values are set by
\begin{lstlisting}
inv.setLambda( lambda ); //! set regularisation parameter
\end{lstlisting}
In order to optimise $\lambda$, the L-curve \citep{guentherruecker06,guentherdiss} can be applied to find a trade-off between data fit and model roughness by setting \lstinline|inv.setOptimizeLambda(true);|.
For synthetic data or field data with well-known errors we can also call \lstinline|model = inv.runChi1();|, which varies $\lambda$ from the starting value such that the data are fitted within noise ($\chi^2=1$).
We created a synthetic model with resistivities of 100(soil)-500(unsaturated)-20(saturated)-1000(bedrock) $\Omega$m and thicknesses of 0.5, 3.5 and 6 meters.
A Schlumberger sounding with AB/2 spacings from 1.0 to 100\,m was simulated and 3\% noise was added.
Data format of the file \file{sond1-100.dat} is the unified data format\footnote{See \url{www.resistivity.net?unidata} for a description.}.
\begin{figure}[htbp]
\includegraphics[width=\textwidth]{sond1-100-3lambdas}\\[-3ex]
~~~a\hfill ~~~b \hfill ~~~c \hfill ~
\caption{Smooth 1d resistivity inversion results for a) $\lambda=200\Rightarrow \chi^2=11.1$/rrms=10.1\%, b) $\lambda=20\Rightarrow \chi^2=1.2$/rrms=3.3\%, and c) $\lambda=2\Rightarrow \chi^2=0.6$/rrms=2.4\%, red-synthetic model, blue-estimated model}\label{fig:dc1d-3lambda}
\end{figure}
Figure~\ref{fig:dc1d-3lambda} shows the inversion results for the three different regularisation parameters 200, 20 and 2.
Whereas the first is over-smoothed, the others are much closer to reality.
The rightmost model over-fits the data ($\chi^2=0.6<1$) but is still acceptable.
The L-curve method yields a value of $\lambda=2.7$, which is too low.
However, if we apply the $\chi^2$-optimisation we obtain a value of $\lambda=15.2$, with which the data are neither over- nor under-fitted.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Block inversion}\label{sec:dc1dblock}
{\em Example file \file{dc1dblock.cpp} in the directory \file{doc/tutorial/code/dc1d}.}\\
Alternatively, we might invert for a block model with unknown layer thickness and resistivity.
We change the mesh generation accordingly and use the forward operator \lstinline|DC1dModelling|:
\begin{lstlisting}[language=C++]
DC1dModelling f( nlay, ab2, mn2 );
\end{lstlisting}
\lstinline|createMesh1DBlock| creates a block model with two regions\footnote{With a second argument \lstinline|createMesh1DBlock| can create a block model with thickness and several parameters for a multi-parameter block inversion.}.
Region 0 contains the thickness vector and region 1 contains the resistivity vector.
There is a region manager as part of the forward modelling class that administers the regions.
We first define different transformation functions for thickness and resistivity and associate them with the individual regions.
\begin{lstlisting}[language=C++]
RTransLog transThk;
RTransLogLU transRho( lbound, ubound );
RTransLog transRhoa;
f.region( 0 )->setTransModel( transThk );
f.region( 1 )->setTransModel( transRho );
\end{lstlisting}
For block discretisations, the starting model can have a great influence on the results.
We choose the median of the apparent resistivities and a constant thickness derived from the current spread as starting values.
\begin{lstlisting}[language=C++]
double paraDepth = max( ab2 ) / 3;
f.region( 0 )->setStartValue( paraDepth / nlay / 2.0 );
f.region( 1 )->setStartValue( median( rhoa ) );
\end{lstlisting}
For block inversion, a scheme after \cite{marquardt} is favourable, i.e. a local damping of the model changes without interaction between the model parameters, combined with a decreasing regularisation strength.
\begin{lstlisting}[language=C++]
inv.setMarquardtScheme( 0.9 ); //! local damping with decreasing lambda
\end{lstlisting}
The latter could also be achieved by \begin{enumerate}
\item setting the constraint type to zero (damping) by \lstinline|inv.setConstraintType(0)|
\item switching to local regularization by \lstinline|inv.setLocalRegularization(true)|
\item defining the lambda decreasing factor by \lstinline|inv.setLambdaDecrease(0.9)|
\end{enumerate}
%The choice of appropriate regularisation parameters is somewhat more complicated since the result strongly depends on the starting model and the preceding models \citep{guentherdiss}.
%The latter can be done by \lstinline|inv.setLambdaDecrease( factor );|
With the default regularization strength $\lambda=20$ we obtain a data fit slightly below the error estimate.
The model (Fig.~\ref{fig:dc1dblock-resres}a) clearly shows the four layers (blue) close to the synthetic model (red).
%Some parameters are overestimated and some are underestimated.
\begin{figure}[htbp]
\centering\includegraphics[width=0.7\textwidth]{dc1dblock-resres}\\[-3ex]
~\hfill a\hfill ~ \hfill ~~~~~b \hfill ~ \hfill ~ \hfill ~
\caption{a) Block 1d resistivity inversion result (red-synthetic model, blue-estimated model)) and b) resolution matrix}\label{fig:dc1dblock-resres}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Resolution analysis}\label{sec:dc1dresolution}
One may now be interested in the resolution properties of the individual model parameters.
The resolution matrix $\R^M$ defines the projection of the real model onto the estimated model:
\begin{equation}
\m^{est} = \R^M \m^{true} + (\I - \R^M) \m^R + \S^\dagger\D\n\quad,
\end{equation}
\citep{guentherdiss} where $\S^\dagger\D\n$ represents the generalised inverse applied to the noise.
Note that $\m^R$ changes to $\m^k$ for local regularisation schemes \citep{friedel03}.
\citet{guentherdiss} also showed that the model cell resolution (discrete point spread function) can be computed by solving an inverse sub-problem with the corresponding sensitivity distribution instead of the data misfit.
This is implemented in the inversion class by the function \lstinline|modelCellResolution( iModel )| where \lstinline|iModel| is the number of the model cell.
This approach is feasible for bigger higher-dimensional problems and avoids the computation of the whole resolution matrix.
A computation for representative model cells can thus give insight into the resolution properties of different parts of the model.
For the block model we successively compute the whole resolution matrix.
\begin{lstlisting}
RVector resolution( nModel ); //! create single resolution vector
RMatrix resM; //! create empty matrix
for ( size_t iModel = 0; iModel < nModel; iModel++ ) {
resolution = inv.modelCellResolution( iModel );
resM.push_back( resolution ); //! push back the single vector
}
save( resM, "resM" ); //! save resolution matrix
\end{lstlisting}
In Figure~\ref{fig:dc1dblock-resres} the model resolution matrix is shown and the diagonal elements are denoted.
The diagonal elements show that the resolution decreases with depth.
The first resistivity is resolved nearly perfectly, whereas the other parameters show deviations from 1.
$\rho_2$ is positively correlated with $d_2$, i.e. an increase in resistivity can be compensated by an increased thickness.
For $\rho_3$ and $d_3$ the correlation is negative.
These are the well known H- and T-equivalences of thin resistors or conductors, respectively, and show the equivalence of possible models that are able to fit the data within noise.
%Similarly, we can obtain the resolution kernels also for smooth inversion.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Structural information}\label{sec:dc1dstruct}
{\em Example file \file{dc1dsmooth-struct.cpp} in the directory \file{doc/tutorial/code/dc1d}.}\\
Assume we know the ground water table at 4\,m from a well.
Although we know nothing about the parameters, this structural information should be incorporated into the model.
We create a thickness vector of constant 0.5\,m.
The first 8 model cells are located above the water table, so the 8th boundary contains the known information.
Therefore we set a marker different from zero (default) after creating the mesh
\begin{lstlisting}
//! variant 1: set mesh (region) marker
f.mesh()->boundary( 8 ).setMarker( 1 );
\end{lstlisting}
This causes the boundary between layers 8 and 9 to be disregarded: the corresponding $w^c_8$ is zero, which allows for an arbitrary jump in the otherwise smooth model.
Figure~\ref{fig:dc1dsmooth-struct} shows the result: at 4\,m the resistivity jumps from a few hundred down to almost 10\,$\Omega$m.
\begin{figure}[htbp]
\centering\includegraphics[width=0.45\textwidth]{sond1-100-struct.pdf}
%\\[-3ex]
%~\hfill a\hfill ~ \hfill ~~~~~b \hfill ~ \hfill ~ \hfill ~
\caption{Inversion result with the ground water table at 4\,m as structural constraint.}\label{fig:dc1dsmooth-struct}
\end{figure}
Note that we can set the weight to zero also directly, either as a property of the inversion
\begin{lstlisting}
//! variant 2: application of a constraint weight vector to inversion
RVector bc( inv.constraintsCount(), 1.0 );
bc[ 6 ] = 0.0;
inv.setCWeight( bc );
\end{lstlisting}
or the (only existing) region.
\begin{lstlisting}
//! variant 3: application of a boundary control vector to region
RVector bc( f.regionManager().constraintCount(), 1.0 );
bc[ 7 ] = 0.0;
f.region( 0 )->setConstraintsWeight( bc );
\end{lstlisting}
Of course, in 2d/3d inverse problems we do not set the weight by hand.
Instead, we put an additional polygon (2d) or surface (3d) with a marker $\neq 0$ into the PLC before the mesh generation.
By doing so, arbitrary boundaries can be incorporated as known boundaries.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Regions}\label{sec:dc1dregion}
{\em Example file \file{dc1dsmooth-region.cpp} in the directory \file{doc/tutorial/code/dc1d}.}\\
In the latter sections we already used regions.
A default mesh contains a region with number 0.
A block mesh contains a region 0 for the thickness values and regions counting up from 1 for the individual parameters.
Higher dimensional meshes can be created automatically by using region markers, e.g. for specifying different geological units.
In our case we can divide the model into a part above and a part below the water level.
In 2d or 3d we would, similar to the constraints above, just put a region marker $\neq 0$ into the PLC and the mesh generator will automatically associate this attribute to all cells in the region.
Here we set the markers of the cells $\geq8$ to the region marker 1.
\begin{lstlisting}
Mesh * mesh = f.mesh();
mesh->boundary( 8 ).setMarker( 1 );
for ( size_t i = 8; i < mesh->cellCount(); i++ )
mesh->cell( i ).setMarker( 1 );
\end{lstlisting}
Now we have two regions that are decoupled automatically.
The inversion result is identical to the one in Figure~\ref{fig:dc1dsmooth-struct}.
However we can now define the properties of each region individually.
For instance, we might know the resistivities to lie between 80 and 800\,$\Omega$m above and between 10 and 1000\,$\Omega$m below.
Consequently we define two transformations and apply them to the regions.
\begin{lstlisting}
RTransLogLU transRho0( 80, 800 );
RTransLogLU transRho1( 10, 1000 );
f.region( 0 )->setTransModel( transRho0 );
f.region( 1 )->setTransModel( transRho1 );
\end{lstlisting}
Additionally we might try to improve the very smooth transition between groundwater and bedrock.
We decrease the model control (strength of smoothness) in the lower region by a factor of 10.
\begin{lstlisting}
f.region( 1 )->setModelControl( 0.1 );
\end{lstlisting}
The result is shown in Figure~\ref{fig:dc1dsmooth-region}.
The resistivity values are much better due to the added information about the valid ranges.
Furthermore, the transition zone in the lower region is clearer.
\begin{figure}[htbp]
\centering\includegraphics[width=0.45\textwidth]{sond1-100-region.pdf}
\caption{Inversion result using two regions of individual range constraint transformations and regularization strength (model control).}%
\label{fig:dc1dsmooth-region}
\end{figure}
%
% API Documentation for QSTK
% Module QSTK.csvconverter.compustat_csv_to_pkl
%
% Generated by epydoc 3.0.1
% [Mon Mar 5 00:49:20 2012]
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Module Description %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\index{QSTK \textit{(package)}!QSTK.csvconverter \textit{(package)}!QSTK.csvconverter.compustat\_csv\_to\_pkl \textit{(module)}|(}
\section{Module QSTK.csvconverter.compustat\_csv\_to\_pkl}
\label{QSTK:csvconverter:compustat_csv_to_pkl}
Created on June 1, 2011
\textbf{Author:} John Cornwell
\textbf{Contact:} [email protected]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Functions %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Functions}
\label{QSTK:csvconverter:compustat_csv_to_pkl:convert}
\index{QSTK \textit{(package)}!QSTK.csvconverter \textit{(package)}!QSTK.csvconverter.compustat\_csv\_to\_pkl \textit{(module)}!QSTK.csvconverter.compustat\_csv\_to\_pkl.convert \textit{(function)}}
\vspace{0.5ex}
\hspace{.8\funcindent}\begin{boxedminipage}{\funcwidth}
\raggedright \textbf{convert}()
\vspace{-1.5ex}
\rule{\textwidth}{0.5\fboxrule}
\setlength{\parskip}{2ex}
\setlength{\parskip}{1ex}
\end{boxedminipage}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Variables %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Variables}
\vspace{-1cm}
\hspace{\varindent}\begin{longtable}{|p{\varnamewidth}|p{\vardescrwidth}|l}
\cline{1-2}
\cline{1-2} \centering \textbf{Name} & \centering \textbf{Description}& \\
\cline{1-2}
\endhead\cline{1-2}\multicolumn{3}{r}{\small\textit{continued on next page}}\\\endfoot\cline{1-2}
\endlastfoot\raggedright \_\-\_\-p\-a\-c\-k\-a\-g\-e\-\_\-\_\- & \raggedright \textbf{Value:}
{\tt \texttt{'}\texttt{QSTK.csvconverter}\texttt{'}}&\\
\cline{1-2}
\end{longtable}
\index{QSTK \textit{(package)}!QSTK.csvconverter \textit{(package)}!QSTK.csvconverter.compustat\_csv\_to\_pkl \textit{(module)}|)}
%!TEX root = ../thesis.tex
\chapter{Dedication}
\label{ch:dedication}
\documentclass{article}
\usepackage{fullpage}
\usepackage{xspace}
\usepackage{educ-homework}
\newcommand{\UNIX}{\textsc{Unix}\xspace}
\setcounter{secnumdepth}{0}
\name{Joshua S. Allen}
\begin{document}
\LARGE
\noindent \textbf{Masters Thesis Proposal:} \\
\noindent \textbf{Distributed Processor Allocation Algorithms} \\
\Large
\noindent {by Joshua S. Allen}
\normalsize
\vspace{1em}
\section{Introduction}
% Say something about distributed systems??
A processor allocation algorithm is a way of determining the best processor
within a distributed system on which to run a process. Say, for example,
that machine $A$ has only one processor and its load average is very high.
The user of machine $A$ now wants to run a new process. However, since its
load average is so high, machine $A$ decides to give the process to another
machine. Hence, a processor allocation algorithm is invoked to determine
the best processor to which to give the process. A variation of this
approach is to allow processes to migrate dynamically even after they have
started executing.
Other than automatically offloading or migrating processes within a
distributed system, another use of processor allocation algorithms could be
in a distributed implementation of the Unix utility \texttt{top}. In the
business world, distributed processor allocation algorithms could be used to
assign tasks of a project to employees.
\section{Goal}
The goal of this masters thesis is to develop fair processor allocation in
distributed systems using a distributed algorithm. Three algorithms will be
designed and implemented using kali-scheme, a distributed implementation of
scheme48. Various information will be logged during testing of the system.
At the end of testing, some of the logged information will be plotted for
statistical analysis, and a comparison of the algorithms will be given.
\section{Criteria}
In general, distributed systems can be compared using the following
criteria \cite{Tanenbaum}. Distributed processor allocation is no
exception.
\begin{itemize}
\item Transparency
The distributed system should provide services such that the user is
unaware of where within the system the services take place.
The system should function, at least from the user's perspective, as
a single unit.
\item Flexibility
The distributed system should be designed in such a way that it is
easily changeable and customizable.
\item Reliability
The distributed system should provide consistency and should
function even in the presence of failures when possible. In other
words the distributed system should be fault tolerant. When
possible, failures should go unnoticed by the user.
\item Performance
The distributed system should provide a reasonable and consistent
response time to users and in general the system should provide a
level of performance not less than that of a stand-alone machine.
In other words, the distributed system should not penalize
performance, and when possible it should enhance it.
\item Scalability
The design of a distributed system should be such that it functions
appropriately and efficiently no matter how many machines comprise
it.
\end{itemize}
Given a collection of processors, each of which can receive external job
requests, fair processor allocation should be performed for newly incoming
jobs. In the case of dynamic process migration, fair processor allocation
should occur during the lifetime of processes. The unfairness at time $t$
can be measured by the maximum difference in the number of jobs between any two processors.
The performance of an allocation method over a time period can be measured
by the average unfairness over that time period.
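For concreteness, this can be written as follows (the notation $n_i(t)$ for
the number of jobs on processor $i$ is introduced here for illustration):
\[
U(t) = \max_{i,j} \left| n_i(t) - n_j(t) \right| , \qquad
\bar{U} = \frac{1}{T} \int_0^T U(t)\, dt ,
\]
where $\bar{U}$ is the average unfairness over the time period $[0,T]$.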
However, load alone is not enough to determine the best processor on which
to offload a process. From a user's point of view, a distributed system
should offer performance better than or equal to that of a stand-alone
machine. Hence, criteria such as the time delay between process creation and
initial execution are important. Items that would affect this include the hop count of
messages in the network, whether a binary needs to be migrated across the
network, etc.
Another important variable to check when offloading processes is the amount
of free memory. If a machine does not have enough free memory to run a new
process then it would be pointless to try running it there. Additionally,
if a machine does have enough free memory but very little of it, then
performance might be affected due to swapping. Hence, many factors other
than load need to be taken into account when determining the best machine on
which to offload a process and when comparing the performance of processor
allocation algorithms.
Three measurable statistics come to mind when measuring the
\emph{performance} of a processor allocation algorithm:
\begin{itemize}
\item Maximizing CPU utilization (comparison of load averages
over time)
\item Minimizing response time (the time elapsed from process
creation to process execution)
\item Minimizing the total time a process requires to finish
(the time elapsed from process creation to process termination).
\end{itemize}
\emph{Scalability} can be measured by comparing the results of performance
measurements of the algorithms running variable sizes of distributed
systems.
\emph{Fault Tolerance} can be determined by introducing failures into the
distributed system and assessing whether the system continues to perform
and, if so, whether it performs at the same performance level.
\emph{Flexibility} is not something that can easily be measured, and hence a
descriptive explanation of the features that make an algorithm flexible will
need to be given, such as modularity, ease of use, etc.
\emph{Transparency} also is difficult to measure, so again a descriptive
explanation of why and how an algorithm is transparent will be given.
%\section{Background}
\section{Approach}
\subsection{Algorithms}
\subsubsection{Token based Algorithm}
At startup, a logical ring is constructed, where each node in the ring
represents a processor in the distributed system. For test purposes the
coordinator will be the machine on which the user is interacting. However,
the coordinator can be elected using an election algorithm. A single packet
(the token) is then transmitted with an entry for each processor in the
system. Each entry contains the following information: load and amount
of free memory.
Each time a machine receives the token, it does the following things (sketched in code after the list):
\begin{enumerate}
\item The machine updates the token's usage table with its current load
and amount of free memory.
\item If the machine wants to offload a process, it picks the machine with
the lowest load that has enough free memory, and migrates the
process to that machine.
\item The machine forwards the token to the next machine in the logical
ring.
\end{enumerate}
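These steps can be sketched in Python (illustrative only; the actual
implementation will be in kali-scheme, and the helper names
\texttt{migrate()} and \texttt{send()} are hypothetical):
\begin{verbatim}
def on_token_received(me, token, pending):
    # 1. Update our entry with the current load and free memory.
    token[me.id] = (me.load(), me.free_memory())
    # 2. Offload pending processes to the machine with the lowest
    #    load that has enough free memory.
    for proc in pending:
        candidates = [(load, mid) for mid, (load, mem) in token.items()
                      if mem >= proc.memory_needed]
        if candidates:
            load, target = min(candidates)
            migrate(proc, target)    # hypothetical migration helper
    # 3. Forward the token to the next machine in the logical ring.
    send(me.next_in_ring, token)     # hypothetical messaging helper
\end{verbatim}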
\subsubsection{Randomized Algorithm}
Each machine maintains a table comprised of values reflecting the last known
load on each machine, initially all values are zero (neutral). When a new
job is introduced to a processor, that processor allocates the job to some
processor in the system, perhaps itself. This choice is based on the table
most of the time and made randomly the rest of the time (parameter $p$).
The table is updated in the following way: When processor $A$ chooses to
give a job to processor $B$, $A$ sends a request to $B$, which includes
$A's$ current load. $B$ updates its table using this information and
decides whether to accept the job or forward the request to another machine.
Ultimately, some machine $C$ accepts the request and obtains the job from
$A$. The only way a job could bounce around the system indefinitely is for
new jobs to be introduced to the system faster than the system can send
messages back and forth between machines. This is is highly unlikely, thus
practically there is no indefinite postponement of jobs, although
theoretically it is possible. Each machine ($A$ through $C$) updates its
table using load information passed to it in the request message, which is
appended to the request as it is forwarded to each machine. A naive update
rule is simply to replace old data with new data. However, a better
approach might be to use weighted average of the old and new data, perhaps
incorporating the age of the old data.
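The allocation choice and the weighted-average table update can be sketched
as follows (again illustrative Python rather than the kali-scheme
implementation; the weight and the parameter $p$ are free parameters):
\begin{verbatim}
import random

P = 0.1  # the parameter p: fraction of purely random choices

def choose_target(table, machines):
    if random.random() < P:
        return random.choice(machines)            # random choice
    return min(machines, key=lambda m: table[m])  # lowest known load

def update_table(table, machine, reported_load, weight=0.5):
    # Weighted average of old and new data; the naive rule would
    # simply replace the old value with the new one.
    table[machine] = weight * table[machine] \
                     + (1 - weight) * reported_load
\end{verbatim}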
\subsubsection{Centralized Algorithm}
For comparison reasons, a centralized version of the algorithm will be
implemented in addition to the distributed ones. In the centralized
algorithm, all process creation requests go through a single machine,
although they can still originate anywhere. When a process is offloaded to
a machine by the coordinator, that machine must return its load average to
the coordinator. The same is true when the process finishes. Hence, the
coordinator always knows (within time $\epsilon$, defined to be the amount of
time for a client to respond to the coordinator) the load average on every
machine in the distributed system.
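The coordinator's bookkeeping reduces to a few lines (illustrative Python;
\texttt{offload()} stands in for the actual migration mechanism):
\begin{verbatim}
loads = {}  # machine id -> last reported load average

def submit_job(job):
    # Every creation request goes through the coordinator, which
    # offloads the job to the machine with the lowest known load.
    target = min(loads, key=loads.get)
    offload(job, target)             # hypothetical migration helper

def report_load(machine, load):
    # Clients report their load when a job starts and when it ends,
    # so the coordinator's view is stale by at most epsilon.
    loads[machine] = load
\end{verbatim}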
\section{Implementation}
The three algorithms will be implemented using kali-scheme, a distributed
implementation of scheme48 \cite{kali}. Processors will be simulated
using threads in a single \UNIX process, several \UNIX processes
on the same physical machine, and several \UNIX processes on multiple
physical machines.
Initially, threads will be used to simulate processors and then several
\UNIX processes on multiple physical machines in order to perform more
realistic tests. This will be more realistic because network delays and
actual machine usage patterns can be taken into account.
\section{Current Status}
Basic working implementations of all three of the algorithms are complete.
Currently, all processors and processes in the system are simulated using
threads within a single process on a single machine. Logging and simple
graph generation are implemented so that test results can be easily
compared. All three algorithms currently have no fault tolerance and assume
static allocation at the beginning of process execution. Only a few
statistics are currently being logged such as load and amount of free
memory.
A general outline of the paper is prepared with initial background
information in place. Also available is an initial bibliography.
\section{Plan for Future Work}
All algorithms will be converted to run on multiple physical machines
instead of running as threads on a single machine. Instead of supplying a
pseudo-load average when creating a process, real load averages will be used
(perhaps thread loads). More information will be tracked for statistical
analysis, including network delays, hop counts, and the amount of time
between process creation and initial execution. For the randomized
algorithm, more update algorithms will be developed in order to provide more
optimal results. More data will be plotted for statistical analysis. All
of the algorithms will be modified to perform dynamic process migration
(movement of processes during execution among machines). Finally, the
algorithms will be altered to include fault tolerance.
In addition to programming changes made to the algorithms, more test cases
will be run on the system, varying many variables. Much analysis will be
done on the results of the test cases in order to provide comparisons of the
algorithms, including statistical analysis and commentary based upon the
work of others and my own thoughts.
Results from the current implementation of the algorithms, where processors
and processes are fully simulated, will be compared to the final results
where each processor in the system represents a physical processor. If
results differ significantly, then an analysis of how the simulation
could be changed will be provided. Other test cases will be performed in a
quasi-simulated environment where some things are simulated, such as
processes, yet other things are not, such as processors.
Finally, an analysis of efficient ways to offload (static) and migrate
(dynamic) processes in a distributed system will be given.
\bibliography{proposal}
\bibliographystyle{alpha}
\end{document}
\subsection{Resistor Calculator}
\screenshot{plugins/images/ss-resistor}{Resistor calculator}{img:resistor}
The resistor calculator is a plugin that works in 3 modes:
\subsubsection{Colour to Resistance}
In Colour to Resistance mode, use the menus to select the colours of the
bands of a resistor which you would like to know the resistance of.
\subsubsection{Resistance to Colour}
In Resistance to Colour mode, use the menus to select the unit that you
would like to use (choose from Ohms, Kiloohms, Megaohms), and use
the on-screen keyboard to input the value of the resistor that you would
like to know the colour code of. The colour codes are presented
\opt{lcd_color}{graphically and} textually.
\subsubsection{LED resistance}
LED resistance calculator is used to determine the resistor necessary to light
an LED safely at a given voltage. First, select the voltage that the LED will
use (the first option is the most common and a safe bet), and the current
that it will draw (likewise with the first option). Then, use the onscreen
keyboard to type in the supply voltage and, if selected, the custom
forward current. This function produces safe estimates, but use your own
judgement when using these output values. Power rating and displayed resistance
are rounded up to the nearest common value.
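The manual leaves the arithmetic implicit, but a calculator of this kind
typically derives the series resistance from Ohm's law and the dissipated
power from the voltage drop (the formulas below are the standard ones, not
taken from the Rockbox source):
\[
R = \frac{V_\mathrm{supply} - V_\mathrm{LED}}{I_\mathrm{forward}}, \qquad
P = \left(V_\mathrm{supply} - V_\mathrm{LED}\right) \cdot I_\mathrm{forward}.
\]
For example, driving a 2\,V LED at 20\,mA from a 5\,V supply gives
$R = 3/0.02 = 150\,\Omega$ and $P = 0.06$\,W, so a common quarter-watt
150\,\Omega resistor suffices.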
\section{\uppercase{Experimental evaluation}}\label{sec:results}
\noindent Several tests were conducted in the simulated environments presented earlier to evaluate the ability of the proposed system to find suitable constellations of sensors for maximizing the observable surface area of a given set of target objects.
In the active perception environment introduced in \cref{fig:active-perception-environment}, two tests were performed with the sensor deployment shown in \cref{fig:sensors-deployment-active-perception-environment}. In the first test, the best sensor position was estimated for observing a single target object (a green starter motor) being occluded by a human hand starting to grasp it. By visually inspecting the scene in \cref{fig:active-perception-1-sensor}, it can be seen that the system chose a very reasonable sensor position, achieving a surface coverage of 27.73\% despite the heavy occlusions present. Moreover, when expanding the number of sensors to 3 (in the second test), the system selected a sensor constellation with a good spatial distribution (shown in \cref{fig:active-perception-3-sensors}) that improved the surface coverage to 61.91\%.
Moving to the single bin picking environments, presented in \cref{fig:bin-picking-environment,fig:bin-picking-with-occlusions-environment}, four more tests were performed using the deployed sensors seen in \cref{fig:sensors-deployment-bin-picking-environments}. In the first test, the best position for a single sensor was estimated for observing the target object inside the stacking box, which had large occlusions in its surroundings but could be clearly observed from above. As can be seen in \cref{fig:bin-picking-1-sensor}, the system chose a suitable observation sensor that achieved a surface coverage of 45.10\%. When increasing the number of sensors to 5 (in the second test), the system relied on more sensor data and improved the surface coverage to 64.63\%. To make the active perception for this bin picking use case more challenging, three occluding differential gearboxes were added on top of the target object (scene shown in \cref{fig:bin-picking-with-occlusions-environment}) in order to create large occlusions that significantly reduced the number of useful sensors in the deployed populations (presented in \cref{fig:sensors-deployment-bin-picking-environments}). Analyzing the best sensor position estimated by the system (shown in \cref{fig:sensor-data-processing,fig:bin-picking-with-occlusions-1-sensor}), it can be seen that the chosen pose was very reasonable, achieving a surface coverage of 19.27\%. When increasing the number of sensors to 3, the system deployed a constellation with a good spatial distribution and managed to improve the surface coverage of the target object to 31.19\% (as can be seen in \cref{fig:bin-picking-with-occlusions-3-sensors}).
Increasing the level of complexity even further, in the final test three more target objects were added to the simulation environment (as presented in \cref{fig:multiple-bin-picking-with-occlusions-environment}) and the number of populations with different sensor types was increased to 7 (shown in \cref{fig:sensors-deployment-multiple-bin-picking-environment}). Analyzing \cref{fig:multiple-bin-picking-with-occlusions-10-sensors}, in which the system estimated a constellation of 10 sensors to observe the 4 target objects, it can be seen that the system chose 4 sensors on the front wall (which had a better observation area for the target objects in the trolley shelves), 3 on the ceiling (for retrieving sensor data for the target objects on top of the trolley) and then, for observing the remaining surface areas of the target objects, one sensor on the left wall, another on the right wall and a final one on the back wall, reaching 10 sensors in total and achieving a surface coverage of the target objects of 43.93\%.
These 7 constellations of sensors computed using \cref{alg:best-n-views} (which relied on a \gls{ransac} approach) show that the proposed system can estimate a suitable sensor configuration for maximizing the observable surface area of several target objects even in complex environments with significant occlusions. Moreover, the system managed to compute useful solutions in bounded and reasonable time (from less than a second to a few minutes, depending on the number and characteristics of the deployed sensors) for a problem that is combinatorially explosive in terms of processing time complexity.
\begin{figure}
\centering
\includegraphics[height=.2\textwidth]{best-views-estimation/active-perception/1-sensor/gazebo-corner}\hspace{4em}
\includegraphics[height=.2\textwidth]{best-views-estimation/active-perception/1-sensor/rviz-front-corner}\\
\includegraphics[height=.2\textwidth]{best-views-estimation/active-perception/1-sensor/rviz-back-corner}\hspace{2em}
\includegraphics[height=.2\textwidth]{best-views-estimation/active-perception/1-sensor/rviz-top}
\caption{Estimation of the best sensor position for the active perception environment with a 27.73\% of surface area coverage (top left showing the Gazebo color rendering and remaining images displaying the best sensor as a large red arrow, the deployed sensors as small coordinate frames and the observed sensor data as green spheres).}
\label{fig:active-perception-1-sensor}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=.2\textwidth]{best-views-estimation/active-perception/3-sensors/gazebo-front-right-corner}\hspace{4em}
\includegraphics[height=.2\textwidth]{best-views-estimation/active-perception/3-sensors/rviz-front-right}\\
\includegraphics[height=.2\textwidth]{best-views-estimation/active-perception/3-sensors/rviz-back-left}\hspace{2em}
\includegraphics[height=.2\textwidth]{best-views-estimation/active-perception/3-sensors/rviz-top}
\caption{Estimation of the 3 best sensors disposition for the active perception environment with a 61.91\% of surface area coverage.}
\label{fig:active-perception-3-sensors}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=.14\textwidth]{best-views-estimation/bin-picking/1-sensor/gazebo-top}\hspace{2em}
\includegraphics[height=.14\textwidth]{best-views-estimation/bin-picking/1-sensor/rviz-front}\\
\includegraphics[height=.2\textwidth]{best-views-estimation/bin-picking/1-sensor/rviz-top-front}\hspace{2em}
\includegraphics[height=.2\textwidth]{best-views-estimation/bin-picking/1-sensor/rviz-top}
\caption{Estimation of the best sensor position for the bin picking environment with a 45.10\% of surface area coverage.}
\label{fig:bin-picking-1-sensor}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=.19\textwidth]{best-views-estimation/bin-picking/5-sensors/gazebo-front}\hspace{2em}
\includegraphics[height=.14\textwidth]{best-views-estimation/bin-picking/5-sensors/rviz-top}\\
\includegraphics[height=.2\textwidth]{best-views-estimation/bin-picking/5-sensors/rviz-corner}\hspace{2em}
\includegraphics[height=.14\textwidth]{best-views-estimation/bin-picking/5-sensors/rviz-sensor-data}
\caption{Estimation of the 5 best sensors disposition for the bin picking environment with a 64.63\% of surface area coverage.}
\label{fig:bin-picking-5-sensors}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=.13\textwidth]{best-views-estimation/bin-picking-with-occlusions/1-sensor/gazebo-top}\hspace{2em}
\includegraphics[height=.13\textwidth]{best-views-estimation/bin-picking-with-occlusions/1-sensor/rviz-corner}\\
\includegraphics[height=.2\textwidth]{best-views-estimation/bin-picking-with-occlusions/1-sensor/rviz-front}\hspace{2em}
\includegraphics[height=.2\textwidth]{best-views-estimation/bin-picking-with-occlusions/1-sensor/rviz-top}
\caption{Estimation of the best sensor position for the bin picking with occlusions environment with a 19.27\% of surface area coverage.}
\label{fig:bin-picking-with-occlusions-1-sensor}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=.18\textwidth]{best-views-estimation/bin-picking-with-occlusions/3-sensors/gazebo-top}\hspace{2em}
\includegraphics[height=.2\textwidth]{best-views-estimation/bin-picking-with-occlusions/3-sensors/rviz-front}\\
\includegraphics[height=.21\textwidth]{best-views-estimation/bin-picking-with-occlusions/3-sensors/rviz-top-back}\hspace{2em}
\includegraphics[height=.21\textwidth]{best-views-estimation/bin-picking-with-occlusions/3-sensors/rviz-top}
\caption{Estimation of the 3 best sensors disposition for the bin picking with occlusions environment with a 31.19\% of surface area coverage.}
\label{fig:bin-picking-with-occlusions-3-sensors}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.44\textwidth]{best-views-estimation/multiple-bin-picking-with-occlusions/10-sensors/gazebo-front}\vspace{2em}
\includegraphics[width=.25\textwidth]{best-views-estimation/multiple-bin-picking-with-occlusions/10-sensors/rviz-front-corner}\vspace{2em}
\includegraphics[width=.43\textwidth]{best-views-estimation/multiple-bin-picking-with-occlusions/10-sensors/rviz-front}\vspace{2em}
\includegraphics[width=.43\textwidth]{best-views-estimation/multiple-bin-picking-with-occlusions/10-sensors/rviz-top}
\caption{Estimation of the 10 best sensors disposition for the multiple bin picking with occlusions environment with a 43.93\% of surface area coverage.}
\label{fig:multiple-bin-picking-with-occlusions-10-sensors}
\end{figure}
\chapter*{Acknowledgements}
First of all, I would like to thank my supervisor, Professor Minoru Okada. Professor Okada is kind, knowledgeable, and rigorous in his scientific attitude. I thank him for giving me the opportunity to study in Japan and for letting me pursue the wireless power transfer research that I am interested in. He has helped me continuously throughout my three years of study and life. At the same time, I would like to thank Professor Yuichi Hayashi for his research advice and guidance, which gave me a deeper understanding of the research environment. His help greatly improved my research.
I would also like to thank Associate Professor Takeshi Higashino and Assistant Professor Duong Quang Thang. Their help gave me a better understanding of wireless power transfer and wireless communication, and they helped me overcome difficulties in my professional understanding and complete this topic. I thank them very much. I would likewise like to thank Assistant Professor Chen Na for her continuous help and encouragement in my studies, which gave me a better understanding of the field of wireless communication.
I would also like to thank the members, staff, and seniors of the Network Systems Laboratory for their companionship in study and life. We studied together, played together, and built a strong friendship. I also thank the international students who helped me during my time abroad. Thank you for your kindness to me.
Finally, I would like to thank my family for supporting my studies abroad, letting me choose the field I love, and always providing me with generous financial support.
Thank you all for your kind help again.
\section{Limits of spectral clustering}
\label{ch:ulrike2004}
\textit{Limits of spectral clustering} by Ulrike Von Luxburg. \\
Cited by 102. \textit{Advances in neural information processing systems}.
\newline
\textbf{Main point} is that \begin{inparaenum}[\itshape a\upshape)]
\item normalized spectral clustering is superior to unnormalized spectral clustering.
\end{inparaenum}
\documentclass[journal]{IEEEtran}
\usepackage[pdftex]{graphicx}
\graphicspath{{img/}}
\usepackage{cite}
\usepackage{amsmath}
\DeclareMathOperator*{\argmin}{arg\,min}
\usepackage{amssymb}
\usepackage{mathtools}
\usepackage{booktabs,siunitx}
\usepackage{threeparttable}
%\usepackage{subcaption}
\usepackage{multirow}
\usepackage[caption=false,font=footnotesize,labelfont=sf,textfont=sf]{subfig}
\usepackage[hidelinks]{hyperref}
%\usepackage{endfloat}
% TODO: remove
\usepackage{xcolor}
\newcommand\TODO[1]{\textcolor{red}{TODO: #1}}
\newcommand\FIXME[1]{\textcolor{blue}{FIXME: #1}}
\begin{document}
\title{Automatic Brain Tissue Segmentation in MRI using Supervised Machine Learning}
\author{Michael~Rebsamen\textsuperscript{*},
Jan~Riedo\textsuperscript{*},
Michael~Mueller\textsuperscript{*}
\thanks{\textsuperscript{*}All authors contributed equally. Biomedical~Engineering, University~of~Bern. Authors~e-mail: [email protected], [email protected], [email protected]}}
\markboth{Biomedical Engineering, Medical Image Analysis Lab, \today}
{}%2nd title, required for the primary to appear
% The only time the second header will appear is for the odd numbered pages
% after the title page when using the twoside option.
\maketitle
\begin{abstract}
Magnetic resonance imaging (MRI) is a widespread imaging modality for the brain. Information in MR images can provide valuable markers for the diagnosis of neurodegenerative diseases or the monitoring of tumors. Many analytical methods are based on a labour-intensive volumetric segmentation of the tissue types. An automatic processing of MR images is therefore of high interest in clinical routines. We assess four supervised machine learning algorithms for the automatic segmentation of brain tissue in MR images: decision forest (DF), k-nearest neighbors (kNN), support vector machine (SVM), and stochastic gradient descent (SGD). The results yield similar accuracies but different runtime and scaling behaviours. An ensemble with the predictions of all algorithms outperforms the individual algorithms.
\end{abstract}
\begin{IEEEkeywords}
Brain tissue segmentation, MRI, machine learning, DF, kNN, SVM, SGD
\end{IEEEkeywords}
\section{Introduction}\label{s.introduction}
Segmentation of brain tissues from magnetic resonance images has many clinical applications. Clinicians gain useful information from a separation of tissue into its three main anatomical types: white matter (WM), grey matter (GM), and ventricles (VT). Voxel-based morphometric measures have been used to investigate brain disorders like Alzheimer’s disease~\cite{busatto2003voxel}, Parkinson's disease~\cite{price2004voxel}, or temporal lobe epilepsy~\cite{rummel2017personalized}. However, manual segmentation of MR images is a labour-intensive task requiring expert skills. Fully automatic approaches for brain tissue segmentation are therefore a topic of active research. A good algorithm classifies the tissue types with high accuracy across a variety of images from different patients. Such a classification is a typical task for machine learning. These algorithms tend to perform well given enough training data during the learning phase. The availability of ground-truth data in sufficient quantity and quality for supervised learning is a particular challenge when working with medical images due to privacy concerns and the costs for manual segmentation. Optimisation of the learning phase with a limited number of training data is therefore required.
Current research is evaluating a variety of discriminative machine learning algorithms on the classification task of brain segmentation. Most proposed methods use ensembles such as decision forests (DFs), with several individual machine learning algorithms combined or sequentially applied. A classic DF implementation was performed by Yaqub et al. \cite{Yaqub2014}, achieving scores similar to current benchmarks in brain segmentation. Many extensions to DF with probabilistic models have been successfully tested, such as conditional random fields \cite{Pereira2016}. Moreover, it was shown that even the simple, instance-based k-nearest neighbors (kNN) algorithm yields accurate results \cite{Anbeek2004,Cocosco2003,Warfield2000} for brain tissue classification. Furthermore, linear classifiers under convex loss functions, such as linear support vector machines (SVMs) have been successfully applied for automatic brain tumor segmentation \cite{Bauer2011}. Another approach of this type is the stochastic gradient descent (SGD), which is well established in the field of optimisation problem solvers and a standard for training artificial neural networks \cite{LeCun1998}. It has gained importance in the field of large-scale machine learning problems \cite{Bottou2010}. There is no record so far of an application of pure SGD on brain segmentation. Current development tends to bring more accurate results by incorporating 3D neighborhood information \cite{Li2011,Despotovic2013}, prior information from atlases\cite{Pohl2006,Ashburner2005}, deformable models \cite{Moreno2014}, or combinations thereof \cite{Ortiz2014}.
In this experiment, we compare the performance of four well-known machine learning algorithms for the segmentation of brain tissue in MRI data. The chosen metrics include accuracy, computational efficiency, and the amount of training data required. We use the existing medical image analysis pipeline to assess four supervised learning algorithms: decision forest (DF), k-nearest neighbors (kNN), support vector machine (SVM), and stochastic gradient descent (SGD). The algorithms were trained using seven features extracted from a set of 70 MR images and the prediction of the segmentation was evaluated on a different set of 30 MR images. All algorithms are able to classify the brain tissue types, although with varying degrees of accuracy and runtime behaviours. Unsurprisingly, the highest dice coefficient is achieved by combining the predictions (probabilities) of all four algorithms to an ensemble and by using all features. An analysis of feature importance reveals that the various features have a different influence on the algorithms.
\section{Methods}
\subsection{Dataset}
All experiments were conducted on a subset of 100 unrelated, healthy subjects from a dataset provided by the \textit{Human Connectome Project} \cite{van2013wu}. From each individual, a total of eight 3-tesla head MR images are available: T1 and T2-weighted image volumes not skull-stripped (but defaced for anonymization) and skull-stripped with a bias field correction, and both modalities once in native T1 space and once in MNI-atlas space \cite{mazziotta2001probabilistic}. The image sizes are $182\times182\times217$ voxels with a spatial resolution of 1 mm.
Ground-truth labels are automatically generated using FreeSurfer~\cite{fischl2012freesurfer}, assigning each voxel either to background (BG), WM, GM, or VT. The dataset was split into a training set with 70 images and a test set with 30 images.
\subsection{Pipeline}\label{s.pipeline}
Training and testing data are loaded sequentially, each put through the pipeline consisting of registration, pre-processing, feature extraction, and training/classification. For testing, two additional steps, namely post-processing and evaluation, are added.
The data was registered to an atlas space with a multi-modal rigid transformation, using a regular step gradient descent optimizer to establish a common coordinate system. Skull stripping and bias field correction are applied in order to have images of the brain only, with less influence of the MRI scanning characteristics. Furthermore, the preprocessing module applies a gradient anisotropic diffusion filter and z-score normalization to ensure comparable grey scale ranges among all images in the subsequent steps.
Preprocessed data is then fed into the feature extraction module, where seven features are computed. The feature matrix consists of three coordinate features corresponding to the position of the voxel in the atlas space, and an intensity and a gradient on both the T1 and T2 modalities. During feature extraction, a random mask is applied on the training data in order to randomly select a subset of the voxels available. The mask is adjustable individually for BG, WM, GM, and VT. During the learning phase, the selected algorithm is trained to classify voxels based on the given features. In the segmentation phase, the trained algorithm is used to classify all voxels in previously unseen test images. The classified testing data is then forwarded to a post-processing module where a dense conditional random field \cite{krahenbuhl2011efficient} aims to reduce noise by eliminating small isolated regions. Finally, the evaluation phase assesses the performance of the segmentation by comparing the result to ground-truth and calculating a dice coefficient (see Sec.~\ref{ch.eval}) for each class.
The medical image analysis pipeline is implemented in Python using TensorFlow~\cite{tensorflow2015-whitepaper}, scikit-learn~\cite{pedregosa2011scikit} and ITK~\cite{yoo2002engineering}. The tests were performed on a Linux cluster with 4 CPUs and 36 GB memory assigned to the jobs.
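To make the voxel-wise classification step of the pipeline concrete, the following minimal sketch illustrates how such a classifier could be trained and applied with scikit-learn. It is an illustration only, not our actual pipeline code; the synthetic arrays stand in for the seven extracted features per voxel.
\begin{verbatim}
# Illustrative voxel-wise classification (not the actual
# pipeline code); synthetic data replaces the extracted
# features f1-f7.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(10000, 7))    # voxels x features
y_train = rng.integers(0, 4, size=10000) # 0=BG,1=WM,2=GM,3=VT

clf = RandomForestClassifier(n_estimators=160,
                             max_leaf_nodes=3000)
clf.fit(X_train, y_train)                # learning phase

X_test = rng.normal(size=(500, 7))       # unseen voxels
proba = clf.predict_proba(X_test)        # per-class probas
labels = proba.argmax(axis=1)            # segmentation
\end{verbatim}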
\subsection{Decision Forest (DF)}
The decision forest is an algorithm for supervised regression and classification. A DF is an ensemble of many decision trees. A decision tree starts at the tree root and splits the data based on the feature which results in the largest information gain (IG). The splitting procedure is repeated at each child node down to the terminal nodes. The terminal nodes at the bottom are the resulting predictors. The optimisation of the information gain is described as:
\begin{equation}
\begin{split}
IG = H(S) -\sum_{i \in \{L,R\}} \frac{ \vert S ^i \vert}{\vert S \vert} H(S^i) \\
H(S) = -\sum_{y \in Y} p(y) \log p(y)\\
%\Theta_{j}^{*} = arg \max_{w, b, \xi} IG_{j}
\end{split}
\end{equation}
where the information gain is the difference between the entropy $H(S)$ before the split and the weighted entropy of the resulting child nodes $S^L$ and $S^R$ after the split. Entropy measures the impurity in a node. Building a decision tree amounts to finding the split with the highest possible information gain at each node; connecting these nodes yields the most homogeneous branches, i.e.\ those with the lowest entropy.
The depth of the trees can be regulated by the chosen number of nodes. A high number of trees improves the accuracy but increases the computational costs due to the learning of the additional trees. The bias of the trees can be reduced by choosing a higher number of nodes.
The hyperparameters were determined by a grid search on the training data, individually for WM, GM, and VT. Fig.~(\ref{f.df_gridsearch}) visualises the results of the grid search for different numbers of nodes and trees. The chosen configuration of 160 trees and 3000 nodes is therefore a trade-off between the three tissue types; it is marked with a red cross in Fig.~(\ref{f.df_gridsearch}) for each tissue type.
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{images/df_grid}
\caption{Optimisation of hyperparameters for DF with a grid search for WM, GM and VT. The red cross marks the chosen \textit{number of trees}~=~160 and \textit{maximum nodes per tree}~=~3000. Colour represents a scaled dice, individually for each subplot.}\label{f.df_gridsearch}
\end{figure}
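The following hedged sketch shows how such a grid search could be set up with scikit-learn; the parameter ranges are illustrative, and the default accuracy score replaces our per-tissue dice evaluation.
\begin{verbatim}
# Illustrative grid search over trees and nodes per tree;
# the value ranges are examples, not the exact grid used.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 7))
y = rng.integers(0, 4, size=2000)

param_grid = {
    "n_estimators": [40, 80, 160],         # trees
    "max_leaf_nodes": [1000, 3000, 5000],  # nodes per tree
}
search = GridSearchCV(RandomForestClassifier(),
                      param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
\end{verbatim}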
\subsection{k-Nearest Neighbors (kNN)}
The k-nearest neighbors algorithm does not construct a general model or learn any weights, but actually stores instances of the training data $X$ and labels $y$. A new data point $x$ is classified by a majority vote of its $k$ nearest neighbors. In case of $k=1$, this is just the label of the nearest point:
\begin{equation}
\begin{split}
& \hat{y} = y_i \\
\text{where } & i = \argmin ||X_{i,:} - x||_2^2
\end{split}
\end{equation}
Neighbors are weighted by a function that is inversely proportional to their distance, giving closer points more weight. The characteristics of kNN lead to a relatively low training time and a rather high computation time for new points, depending on the number of neighbors defined to vote. A weakness of kNN is that it treats all features equally and cannot learn that one feature is more discriminative than another.
A hyperparameter search was conducted for the $k$-value. The higher $k$, the better the overall dice, at the cost of a higher computation time. A value of $k=20$ was found to be a good compromise, trading only a little accuracy for a much lower computation time.
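A minimal sketch of this configuration with scikit-learn, assuming synthetic stand-in data, could look as follows.
\begin{verbatim}
# Sketch of kNN with k=20 and distance-based weighting
# (illustrative only; data is synthetic).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 7))
y_train = rng.integers(0, 4, size=5000)

knn = KNeighborsClassifier(n_neighbors=20,
                           weights="distance")
knn.fit(X_train, y_train)  # stores training instances
labels = knn.predict(rng.normal(size=(100, 7)))
\end{verbatim}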
\subsection{Support Vector Machine (SVM)}
Classification using support vector machines tries to find a hyperplane, separating the data in a high-dimensional feature space. Given the feature vector $x_i$ and the binary label $y_i$, the SVM solves the following optimisation problem during training:
\begin{equation}
\begin{split}
\min_{w, b, \xi} \ & \frac{1}{2}w^Tw + C\sum_{i=1}^m \xi_i \\
\ \text{ s.t. } & y_i(w^T\phi(x_i)+b) \geq 1-\xi_i, \; i = 1, \ldots, m \\
& \xi_i \geq 0, \; i = 1, \ldots, m
\end{split}
\end{equation}
where $w\in \mathbb{R}^n $ is the normal vector and $b \in \mathbb{R}$ the offset of the separating hyperplane and $\phi(x_i)$ maps $x_i$ into a higher-dimensional space.
The SVM implementation is based on libSVM \cite{chang2011libsvm}. Multiclass classification is solved with a \textit{one-against-one} approach. To output probabilities, the predictions are calibrated using \textit{Platt} scaling, which for multiclass problems requires an additional cross-validation, an expensive operation for large datasets.
Given the relatively low number of available features, we have chosen a radial basis function (RBF) kernel. A regularization term $C$ and a model capacity $\gamma$ need to be chosen. These hyperparameters were determined with an exhaustive search and cross-validated on a subset of the training data, yielding $C=500$ and $\gamma=0.00005$.
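A minimal sketch of this configuration, again with synthetic stand-in data, is shown below; \texttt{probability=True} enables the Platt scaling described above.
\begin{verbatim}
# Sketch of the RBF-kernel SVM with C=500, gamma=5e-5
# (illustrative only; data is synthetic).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 7))
y_train = rng.integers(0, 4, size=2000)

svm = SVC(kernel="rbf", C=500, gamma=5e-5,
          probability=True)   # Platt scaling
svm.fit(X_train, y_train)     # one-against-one multiclass
proba = svm.predict_proba(rng.normal(size=(100, 7)))
\end{verbatim}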
\subsection{Stochastic Gradient Descent (SGD)}
Stochastic gradient descent is widely used in machine learning for solving optimisation problems iteratively. SGD has proven to be efficient for large-scale linear predictions \cite{zhang2004solving}.
In current context, SGD learns a linear scoring function $f(x) = w^Tx + b$ with model parameters $w$ and $b$ by minimizing the training error
\begin{equation}
\argmin_{w,b} \frac{1}{m}\sum_{i=1}^mL(y_i, f(x_i)) + \alpha R(w)
\end{equation}
where $L$ is a loss function that measures misclassifications, $R$ is a regularization term penalising model complexity, and $\alpha$ is a non-negative hyperparameter.
In each iteration, a sample is randomly chosen from the training set and the model parameters are updated accordingly:
\begin{equation}
w \leftarrow w - \eta \left(\alpha \frac{\partial R(w)}{\partial w} + \frac{\partial L (w^Tx_i + b, y_i)}{\partial w} \right),
\end{equation}
where $\eta$ is the learning rate controlling the step size.
We use a smoothed hinge loss (\textit{modified\_huber}) as the loss function $L$, an $l_2$ penalty ($||w||_2$) as the regularization term $R$, and a gradually decaying learning rate. This makes SGD similar to a linear SVM. Again, the hyperparameters $\eta = 0.5$ and $\alpha = 0.01$ were determined with an exhaustive search and cross-validated on a subset of the training data.
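A minimal sketch of this configuration is given below; the \texttt{modified\_huber} loss also allows probability estimates, which the ensemble described later relies on.
\begin{verbatim}
# Sketch of SGD with smoothed hinge loss, l2 penalty,
# eta0=0.5 and alpha=0.01 (illustrative only).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(10000, 7))
y_train = rng.integers(0, 4, size=10000)

sgd = SGDClassifier(loss="modified_huber", penalty="l2",
                    alpha=0.01, eta0=0.5,
                    learning_rate="invscaling")
sgd.fit(X_train, y_train)  # supports incremental fitting
proba = sgd.predict_proba(rng.normal(size=(100, 7)))
\end{verbatim}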
\subsection{Performance Evaluation}\label{ch.eval}
The dice coefficient is a commonly used metric to compare the spatial overlap, ranging from 0 (no overlap) to 1 (perfect overlap). To evaluate the accuracy of the segmentation, a dice coefficient is calculated between the prediction (E) and ground-truth (G) for each of the three labels.
\begin{equation}
D = \frac{2|E \bigcap G|}{|E| + |G|} = \frac{2 TP}{2 TP + FP + FN}
\end{equation}
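The computation of the dice coefficient per class is straightforward; a small illustrative implementation is shown below.
\begin{verbatim}
# Per-class dice between predicted and ground-truth
# label maps (illustrative numpy implementation).
import numpy as np

def dice(pred, truth, label):
    e = (pred == label)
    g = (truth == label)
    inter = np.logical_and(e, g).sum()
    return 2.0 * inter / (e.sum() + g.sum())

pred  = np.array([1, 1, 2, 3, 0, 2])
truth = np.array([1, 2, 2, 3, 0, 0])
print([dice(pred, truth, c) for c in (1, 2, 3)])
\end{verbatim}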
\vspace{1mm}
\section{Results}
\subsection{Segmentation Accuracy}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{images/boxplot}
\caption{Distribution of dice coefficients of the 30 test images for WM, GM, and VT. All algorithms were trained with optimal hyperparameters on the full training set of 70 images.}\label{f.boxplot}
\end{figure}
\begin{table*}[t]
\renewcommand{\arraystretch}{1.2}
\newcommand\mulrow[2]{\multirow{#1}{*}{\shortstack[c]{#2}}}
\caption{Performance Comparison}
\label{tab:perf_compare}
\centering
\begin{threeparttable}
\begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}c*{6}{S[table-number-alignment=center,table-figures-decimal=2,table-auto-round]}@{}}
\toprule
Features & {Size Dataset} & {\shortstack[c]{DF}} & {\shortstack[c]{kNN}} & {\shortstack[c]{SVM}} & {\shortstack[c]{SGD}} & {\shortstack[c]{ensemble}}\\
\midrule
\mulrow{3}{All\\(f1-f7)}
& 3 & {0.85/0.81/0.62} & {0.70/0.57/0.50} & {0.83/0.80/0.61} & {0.82/0.80/0.35} & {-}\\
& 12 & {0.85/0.81/0.59} & {0.75/0.66/0.67} & {0.84/0.81/0.61} & {0.82/0.80/0.34} & {-}\\
& 70 & {0.85/0.80/0.60} & {0.79/0.76/0.72} & {0.84/0.82/0.61} & {0.82/0.80/0.34} & {0.82/0.79/0.71}\\
\midrule
\mulrow{3}{Coordinates only\\(f1-f3)}
& 3 & {0.67/0.63/0.22} & {0.70/0.55/0.41} & {0.59/0.52/0.0} & {0.17/0.23/0.00} & {-}\\
& 12 & {0.67/0.64/0.11} & {0.74/0.63/0.56} & {0.59/0.57/0.0} & {0.19/0.22/0.00} & {-}\\
& 70 & {0.67/0.64/0.16} & {0.77/0.71/0.62} & {0.60/0.58/0.31} & {0.17/0.21/0.00} & {-}\\
\midrule
\mulrow{3}{All non-coordinates \\(f4-f7)}
& 3 & {0.84/0.80/0.50} & {0.85/0.80/0.45} & {0.84/0.79/0.0} & {0.82/0.80/0.34} & {-}\\
& 12 & {0.85/0.80/0.49} & {0.85/0.81/0.45} & {0.85/0.80/0.45} & {0.82/0.80/0.33} & {-}\\
& 70 & {0.85/0.80/0.48} & {0.85/0.81/0.54} & {0.85/0.80/0.44} & {0.82/0.80/0.34} & {-}\\
\bottomrule
\end{tabular*}
\begin{tablenotes}
\item Overview of achieved accuracy for the different algorithms. Mean dice coefficients for white matter/grey matter/ventricles.
\item f1-f3: Coordinate features, f4: T1 intensity, f5: T1 gradient, f6: T2 intensity, f7: T2 gradient.
\end{tablenotes}
\end{threeparttable}
\end{table*}
All four optimized and examined algorithms are able to segment the MR images into the three tissue types. The performance measured by the dice coefficient for various configurations can be found in Tab.~(\ref{tab:perf_compare}). Boxplots with the statistical distributions for each algorithm on the whole test set and with all seven features can be seen in Fig.~(\ref{f.boxplot}).
The highest dice coefficients for white and grey matter are reached by SVM with $0.84 \pm 0.01$ (mean $\pm$ SD) for WM and $0.82 \pm 0.01$ for GM, but at the cost of a lower value of $0.61 \pm 0.05$ for VT. The highest mean value for VT is reached by kNN with $0.72$, but also with the largest standard deviation of $\pm 0.07$ and lower values for WM ($0.79 \pm 0.01$) and GM ($0.76 \pm 0.02$). Although SGD is competitive on WM ($0.82 \pm 0.01$) and GM ($0.80 \pm 0.01$), its performance on VT is the lowest ($0.34 \pm 0.07$).
The coordinate features seem to have a negative effect on the segmentation of WM and GM with kNN: it is the only algorithm that reaches significantly higher scores on those tissue types without using the coordinates, although omitting them degrades the segmentation of the ventricles.
The best dice coefficients are reached by combining the predictions from all algorithms to an ensemble.
\subsection{Training and Testing Time}
A comparison of the computation time for both training and testing is shown in Fig.~(\ref{f.runtimebarplot}), performed with the infrastructure specified at the end of Sec.~\ref{s.pipeline}. Using the whole training set of 70 MR images, kNN is the fastest to train with $215s$, whereas SGD is the fastest at prediction with only $42s$ per sample. The training time grows approximately linearly with the amount of training data for kNN and SVM. For DF and SGD, the time required for training grows significantly more slowly than the amount of training data, leading to a good scaling behaviour. For DF, kNN, and SGD, the testing time is independent of the amount of training data used; only for SVM does the testing time also grow with the size of the training set.
\vspace{-2mm}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{images/runtimes}
\caption{Time for training and testing of the algorithms with training set sizes of 3, 12 and 70 samples and using all seven features. Test time is for one sample only and includes pre-processing, prediction and post-processing.}\label{f.runtimebarplot}
\end{figure}
\vspace{-3mm}
\subsection{Feature Inspection}
Figure~(\ref{scatterplot}) shows the scatter matrix of all features. The whole dataset of 100 images was used for this evaluation. The diagonal shows histograms visualising the data distribution of each feature. The first three histograms represent the coordinate features and are normally distributed. The fourth and fifth histograms belong to the intensity features and follow a non-normal distribution. The last two histograms show the gradient features and are partially normally distributed. The part above the diagonal visualises the linear correlation between each pair of features with the associated correlation coefficient. The part below the diagonal is redundant with the upper part.
The coordinate features, with correlation coefficients between -0.11 and 0.05, are independent and do not correlate with any other feature. The T1 intensity feature has a moderate linear relationship with the T1 gradient feature and a weak one with the T2 gradient feature. The T1 gradient feature also has a moderate correlation with both T2 features. The T2 intensity and gradient features correlate very strongly with each other, with a correlation coefficient of 0.83.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{images/ScatterPlotMatrix}
\caption{Scatter matrix plot for the seven features on the full dataset of 100 samples. Each feature is compared against each other. The diagonal shows the data distribution of every feature. The correlation coefficients between two features are shown in the right upper part. f1-f3: Coordinate features, f4: T1 intensity, f5: T1 gradient, f6: T2 intensity, f7: T2 gradient.}
\label{scatterplot}
\end{figure}
Fig.~(\ref{FeatEval}) visualises the influence of each feature type on the four algorithms. The feature types are divided into three coordinate, two intensity, and two gradient features. The first column in this figure is calculated with all features combined and serves as a reference. Each remaining column belongs to a single feature type, for which the dice of the four algorithms is visualised. The rows show the dice coefficients for WM, GM, and VT.
All four algorithms achieve similar results on the intensity features. The dice coefficients for GM and WM are equal to or even higher than the results with all features combined. The dice for VT with the gradient features is between 0.0 and 0.05. With the coordinate features only, kNN reaches dice coefficients of 0.65 to 0.77 for all tissue types, whereas the scores of SGD with the coordinate features are as low as 0.22 and below.
\begin{figure}[h]
\centering
\includegraphics[width=0.49\textwidth]{images/FeatureEvaluation}
\caption{Comparison of the importance of the individual features for different algorithms. Each algorithm was trained on the full training set of 70 samples and tested on the 30 samples of the test set. The dice score of each algorithm is calculated separately for the three tissue types. W: white matter, G: grey matter, V: ventricles.}
\label{FeatEval}
\end{figure}
\subsection{Ground-Truth Validity}
A sample of a ground-truth segmentation can be seen in Fig.~(\ref{f.slice_ground_truth}). Comparing this image with the other images in Fig.~(\ref{f.slices_all}) reveals obvious misclassifications in the ground-truth: especially the center of the brain is mainly classified as background, which is undeniably wrong.
Furthermore, the region between GM and the skull is labelled as background in the ground-truth images. As we only classify WM, GM, and VT this might be correct, but anatomically there is cerebrospinal fluid (CSF) at this border, just as in the ventricles. For algorithms less dependent on the coordinates, such as SGD and kNN, this leads to VT classifications outside GM (see Figs.~\ref{f.slice_knn},~\ref{f.slice_sgd}), which is wrong in terms of ventricles but correct for CSF.
\subsection{Random Mask Optimisation}
Tuning the random mask (see Sec.~\ref{s.pipeline}) turned out to be crucial for improving the classification of the ventricles. A first approach of taking approximately the same absolute number of voxels from all voxel types turned out to be suboptimal. Better results were achieved by including roughly the same proportion of voxels from WM, GM, and VT. From the background class, however, which has the highest absolute number of voxels, approximately the same number of voxels as from the other classes was taken. The fractions of voxels taken into account are therefore 0.004 for WM and VT, 0.003 for GM, and 0.0003 for background. The exception is SGD, where a better segmentation is achieved with a fraction of 0.04 for VT, which results in approximately the same number of voxels from all types being used for training.
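A minimal sketch of this class-dependent sampling, with a synthetic label volume standing in for the ground truth, could look as follows.
\begin{verbatim}
# Sketch of the random mask: sample a class-dependent
# fraction of voxel indices for training (synthetic data).
import numpy as np

fractions = {0: 0.0003, 1: 0.004, 2: 0.003, 3: 0.004}
rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=1000000)  # per voxel

selected = []
for cls, frac in fractions.items():
    idx = np.flatnonzero(labels == cls)
    n = int(frac * idx.size)
    selected.append(rng.choice(idx, size=n, replace=False))
mask = np.concatenate(selected)  # training voxel indices
\end{verbatim}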
\begin{figure}[h!]
\centering
\subfloat[DF]{\label{f.slice_df}\includegraphics[width=0.3\linewidth]{images/slice_DF}}
\hfill
\subfloat[kNN]{\label{f.slice_knn}\includegraphics[width=0.3\linewidth]{images/slice_kNN}}
\hfill
\subfloat[SVM]{\label{f.slice_svm}\includegraphics[width=0.3\linewidth]{images/slice_SVM}}
\hfill
\subfloat[SGD]{\label{f.slice_sgd}\includegraphics[width=0.3\linewidth]{images/slice_SGD}}
\hfill
\subfloat[ensemble]{\label{f.slice_ensemble}\includegraphics[width=0.3\linewidth]{images/slice_ensemble}}
\hfill
\subfloat[ground-truth]{\label{f.slice_ground_truth}\includegraphics[width=0.3\linewidth]{images/slice_ground_truth}}
%\caption{Slices of best performances of \protect\subref{f.slice_df} DF, \protect\subref{f.slice_knn} kNN, \protect\subref{f.slice_svm} SVM, \protect\subref{f.slice_sgd} SGD, \protect\subref{f.slice_ensemble} ensemble, and \protect\subref{f.slice_ground_truth} ground-truth.}
\caption{Segmentation of optimally tuned algorithms, ensemble, and ground-truth. Size Trainingset: 70 images. All algorithms trained with all features except for kNN, which is trained only on non-coordinate features. All images are from the same head and show the same slice.}
\label{f.slices_all}
\end{figure}
\section{Discussion}
Our experiments confirm that DF is a good default choice for the segmentation of brain tissue in MR imaging data. All four algorithms reach a similar accuracy but with different runtime behaviours. DF and SGD allow incremental training, in which input data can be processed sequentially in batches of arbitrary sizes. On the other hand, kNN and SVM require all training data to be held in memory, which inherently limits their application. Large amounts of data are trained most efficiently with SGD, small amounts with SVM. This observation is consistent with the mathematical foundation of these two algorithms. The good scaling behaviour of stochastic gradient descent is based on the approximation of the gradient by randomly (stochastically) choosing samples from a large dataset, without necessarily having to consult all data points in each iteration. A support vector machine might find a solution with a limited number of samples by mapping low-dimensional features to a higher-dimensional feature space, in which the data is more likely to be separable. However, this comes with high computational costs for fitting such a complex model to a larger amount of data. The required tuning of the hyperparameters is still mainly an empirical and experience-driven task.
From Table~(\ref{tab:perf_compare}) we observe that the non-coordinate features seem to be a better discriminator for WM and GM, since using only the intensity and gradient features consistently leads to high dice coefficients for these classes. The lower score of kNN for WM and GM using all features is most likely due to its inherent inability to give the coordinate features less weight.
We have observed a rather small influence of the size of the training set, with DF, SVM, and SGD reaching a similar dice coefficient with both 3 and 70 training samples. This might be due to the limited number of features used. With additional features, the amount of training data might become more important.
The segmentation of the ventricles yielded the lowest dice coefficients across all algorithms. We attribute this partially to the ground-truth data, which is of equivocal quality, especially in the regions of the ventricles. A manual augmentation of the data by experts is required to judge how much is indeed related to this.
The dataset includes only MR images from healthy individuals. How well the segmentation generalizes for anatomical disturbances, tumors or brain diseases remains to be tested.
Finally, we have only combined the algorithms into a simple ensemble by taking the maximum probability for each voxel from the predictions. Advanced combinations, such as sequentially applying two methods or using one method on a global level and the other on a local level, might further improve the results.
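A minimal sketch of this max-probability ensemble over hypothetical per-algorithm probability maps is shown below.
\begin{verbatim}
# Sketch of the ensemble: take, per voxel, the class with
# the maximum probability over all algorithms (synthetic).
import numpy as np

rng = np.random.default_rng(0)
# 4 algorithms x 1000 voxels x 4 classes
probas = rng.random(size=(4, 1000, 4))

best = probas.max(axis=0)     # best proba per class
labels = best.argmax(axis=1)  # final label per voxel
\end{verbatim}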
\section{Conclusion}
A variety of machine learning algorithms are capable of solving the brain tissue segmentation task. To the best of our knowledge, this is the first description of a successful application of a plain SGD for brain tissue segmentation. Besides accuracy, the different runtime behaviours and the clinical application might influence the choice of a preferred algorithm. Careful tuning of hyperparameters is required to yield good results.
The major challenge remains the quality of ground-truth data for training and validation. As long as the test set is the output from another (imperfect) algorithm, any approach is just an approximation of the other mechanism.
In the current setup, the number of available features was limited to seven which is known to be on the lower bound for these algorithms. A deep learning approach that directly processes the raw input data and implicitly learns how to extract features might be a better choice in this case. Whether such a neural network outperforms the classical machine learning algorithms remains to be investigated.
\section*{Acknowledgement}
The calculations in the present work were performed on UBELIX (http://www.id.unibe.ch/hpc), the HPC cluster at the University of Bern.
\bibliographystyle{IEEEtran}
\bibliography{references}
\end{document}
\zchapter{UCC::Services}
% clash with global command
% \newcommand{\server}[1]{\emph{#1}}
%\begin{mdframed}
%\null
\begin{comment}
This Chapter provides an overview of UCC's services (as of January 2014); how to use them, what they are for, what servers are responsible for them. The full hostname for a server is \server{server.ucc.asn.au}.
Servers are usually named after fish beginning with M. This is because they are in the Machine Room, and they run Linux. The mascot for Linux is Tux, a penguin, and he likes to eat fish.
Remember that all services are maintained by UCC's members. If you are interested in learning more, or running a new service, ask someone!
\end{comment}
%\end{mdframed}
\newenvironment{uccservice}[2]
{
\section{#1}
%\begin{mdframed}
%\noindent{\bf Machine(s) Involved:} \server{#2} % Freshers don't need to know this
%\end{mdframed}
}
\begin{uccservice}{Games}{heathred}
The Heathred A. Loveday memorial games server hosts many games including: Minecraft, TF2 and Wolfenstein: Enemy Territory (ET).
Administrator access to \server{heathred} is fairly unrestricted; it is also available as a general use server. For example, its GPU has been used in the past for number crunching projects.
\end{uccservice}
\begin{uccservice}{Drinks and Snacks --- Dispense}{merlo, coke machine, snack machine}
UCC's most successful service is undoubtedly the internet-connected coke machine and the not-quite-internet-connected snack machine. These use serial communications to talk to \server{merlo}, which runs open source software written by talented members including John Hodge, Mark Tearle and David Adam.
A relay connected to \server{merlo} can be activated by door members from the snack machine to open the club's electronic door lock.
\end{uccservice}
%\begin{uccservice}{Mumble}{heathred}
%What's that? I couldn't quite hear you?
%Mumble is a thing for voice chat whilst playing games. \server{heathred} runs a surprisingly popular Mumble server.
%\end{uccservice}
\begin{uccservice}{Clubroom Music}{robotnik}
From within the clubroom, you can navigate to \url{http://robotnik} to play music over the speakers. Beware, as repeated abuse may lead to activation of the dreaded "loldongs" mode.
\end{uccservice}
\begin{uccservice}{Email}{mooneye}
UCC proudly runs its own mail server. You have an email account \texttt{<[email protected]>}.
Upon creating your account you can choose an address to forward all emails to. You can change this at any time by editing the ".forward" file in your home directory.
\begin{comment}
% Normal people just use forwarding
Alternately, you can use one of several methods to check your UCC email directly.
\begin{enumerate}
\item alpine --- Connect via SSH and run "alpine".
\item webmail --- Several options will be presented to you at \url{http://webmail.ucc.asn.au}
\item mail client (eg: Thunderbird) --- The server name is \server{secure.ucc.asn.au}. Use port 993 and IMAP. With your UCC username and password.
\end{enumerate}
\end{comment}
\end{uccservice}
\begin{uccservice}{Web Hosting}{mantis, mussel}
Members can publish their own sites! SSH to a server and edit the files in the directory "public-html". The website will appear at \url{http://username.ucc.asn.au}.
\end{uccservice}
\begin{uccservice}{Wiki Hosting}{mooneye}
UCC uses a Wiki called "MoinMoin" to store documentation on servers, events, and miscellaneous things. It is visible at \url{http://wiki.ucc.asn.au}.
\end{uccservice}
\begin{comment}
% Not useful to Freshers
\begin{uccservice}{User Logins}{mussel, mylah}
We use something called LDAP for authentication and linux accounts. SAMBA is involved for windows logins. Only one member really knows how this works, so I will move swiftly on.
\end{uccservice}
\begin{uccservice}{Network Servers}{murasoi, mooneye}
Murasoi is a wheel-only server which serves as a router for all of UCC's networks and runs the infamous "ucc-fw" firewall. Murasoi also acts as the DHCP server.
DNS is on mooneye. The magic that makes \url{http://username.ucc.asn.au} point to your website happens on mooneye.
\end{uccservice}
\begin{uccservice}{File Storage}{mylah, enron/stearns, nortel/onetel, motsugo}
With your account comes not one, but \emph{two} "home" directories for your files.
The one most commonly seen is accessible on clubroom machines. It will be named "/home/ucc/username" on clubroom linux machines. On servers however, that path leads to a different home directory; to get to your clubroom home directory (called "away") you must access "/away/ucc/username".
Home directories on the servers are considered slightly more secure than your "away" directory.
If you are using Linux, you can use the program "sshfs" to mount your home or away directories remotely. This is probably the most convenient way to upload, download and edit files. Under windows, the programs "WinSCP" or "Filezilla" are recommended.
%For interest: Not that interesting
%\server{enron} and \server{stearns} are our slowly dying SAN which stores "away". \server{mylah} mounts the SAN directly and exports the filesystem over NFS.
%\server{motsugo}'s disks contain "home" which is exported only to servers via NFS.
%The NetApp \server{nortel} and \server{onetel} store Virtual Machine (VM) images, and "/services" --- the directory that contains UCC's website, amongst other things.
\end{uccservice}
\end{comment}
\begin{uccservice}{Virtual Machine Hosting}{medico, motsugo, heathred, mylah}
Members who are particularly nice to wheel group can get their own VM hosted at UCC. %\server{medico} runs the amazing ProxMox interface and is used for all new VMs. The typical way to use this interface is from a web browser on \server{maaxen}, a VM running on \server{medico}...
%\server{heathred} is used for VMs when wheel complains that they aren't important enough to justify using all of \server{medico}'s CPU *cough* minecraft *cough*.
\end{uccservice}
% Either mentioned elsewhere or not important
\begin{uccservice}{Windows Server}{maaxen}
\server{maaxen} is our token Windows server. It can be accessed through RDP, but beware, as it only supports two simultaneous sessions. \server{maaxen} boasts a range of useful programs including Notepad and Matlab.
\end{uccservice}
\begin{uccservice}{IRC}{mussel, mantis}
Our two IRC servers are bridged with CASSA and ComSSA, computer science associations at other Universities.
\end{uccservice}
\begin{uccservice}{General Use}{motsugo}
SSH access is available to several servers, but \server{motsugo} is the best choice for general use. %It is mostly used for personal software projects, and to run members' screen sessions so they can be \emph{constantly} connected to IRC.
\end{uccservice}
%*******************************************************************************
%*********************************** First Chapter *****************************
%*******************************************************************************
\ifpdf
\graphicspath{{Chapter1/Figs/}}
\else
\graphicspath{{Chapter1/Figs/}}
\fi
\chapter{Privacy in Bitcoin} %Title of the First Chapter
\label{ch:Privacy in Bitcoin}
\section{Overview of Bitcoin} %Section - 1.1
\label{sec:1-Overview of Bitcoin}
To facilitate a better understanding of the issues surrounding anonymity in Bitcoin and digital currencies in general, this paper introduces a basic model of Bitcoin. For conciseness, only the parts of Bitcoin that relate to anonymity are presented, and the other concepts are simplified. The explanations of Bitcoin are adapted from \cite{andreas2014mastering} and \cite{narayanan2016bitcoin}.
As defined by the creator of Bitcoin, Satoshi Nakamoto, Bitcoin is a purely peer-to-peer electronic cash system that allows direct online payments without going through a financial institution \cite{Nakamoto2008}. When a payer wants to make a payment, he constructs a \kwTransaction{}{} and broadcasts it to the Bitcoin peer-to-peer network, where it can be validated by any peer (interchangeably referred to as a node) through its digital signature. A simplified data structure for a \kwTransaction{}{} is shown in Fig.~\ref{fig:bitcoin_transactions}:
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.7]{bitcoin-transactions}
\caption{Bitcoin \textit{transactions}}
\label{fig:bitcoin_transactions}
\end{center}
\end{figure}
Each \kwTransaction{}{} has a set of \kwOutput{s}, which denote public key addresses (destinations of payment) and the payment amounts. For a payment to be valid, the payer must have the funds to make the payment. The set of \kwInput{s} in a \kwTransaction{}{} denotes the source of funds used in the payment. To denote the payer’s source of funds, an \kwInput{} refers to a previous \kwOutput{} that has been paid to the public key address of the payer. Essentially, a payer is using funds that were previously paid to him/her to pay the payee. In order for the \kwTransaction{}{} to be valid, the payer must indeed own the public key address that the previous \kwOutput{} has paid to. Nodes in the Bitcoin network validate this by checking whether the digital signature signed by the payer’s private key in the \kwInput{} is valid under the public key address in the previous \kwOutput{} referred to by the \kwInput{}. If the digital signature is valid, it means that the payer knows the private key that corresponds to the public key address that the previous \kwOutput{} has paid to, and hence indeed owns the public key address. The example above shows the case where there is only one \kwInput{} and one \kwOutput{} in a transaction, but a \kwTransaction{}{} can have multiple \kwInput{s} and \kwOutput{s}. In such cases, the payer is combining sources of funds from multiple public key addresses and paying them to multiple payment addresses. At this point, it can be noted that a user can own multiple public key addresses, which is achieved by simply creating them.
In addition to proof of ownership of funds, the amounts referenced by the \kwInput{s} and the amounts stated in the \kwOutput{s} of a \kwTransaction{}{} must also be equal. For cases where there are multiple \kwInput{s} and \kwOutput{s} in a transaction, the total amount in the previous \kwOutput{s} referenced by the \kwInput{s} must equal the total amount stated by the \kwOutput{s}. Essentially, this means that the source of payment must be in the correct amount to fulfil the payment. Another point to note is that a previous \kwOutput{} referenced by an \kwInput{} must be consumed in full, which means that the funds in the previous \kwOutput{} cannot be broken down to make a payment of an exact amount. Change is given back to the payer by creating an additional \kwOutput{} in the \kwTransaction{}{} that pays back the excess amount to one of the payer’s public key addresses. This mechanism ensures that the \kwInput{} and \kwOutput{} amounts of a \kwTransaction{}{} always tally. Lastly, to prevent double spending, the previous \kwOutput{} that is referenced by an \kwInput{} must not already be referenced by the \kwInput{} of another \kwTransaction{}{}.
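To summarise these rules, the following toy sketch expresses them as code. It is a deliberately simplified model: real Bitcoin uses a script system and ECDSA signatures, which are abstracted here into a \texttt{verify\_signature} callback, and the field names are hypothetical.
\begin{verbatim}
# Toy sketch of the validation rules described above
# (simplified; field names are hypothetical).
def validate(tx, utxo_set, verify_signature):
    total_in = 0
    for inp in tx["inputs"]:
        prev = utxo_set.get(
            (inp["prev_tx"], inp["prev_index"]))
        if prev is None:     # already spent or unknown
            return False     # prevents double spending
        if not verify_signature(inp["signature"],
                                prev["address"]):
            return False     # proof of ownership
        total_in += prev["amount"]
    total_out = sum(o["amount"] for o in tx["outputs"])
    return total_in == total_out  # amounts must tally
\end{verbatim}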
Once a node has performed all the above checks and validated a transaction, it forwards the \kwTransaction{}{} to its neighbours, which continue the process. If the node that receives the \kwTransaction{}{} happens to be a miner node, it combines the \kwTransaction{}{} with other \kwTransaction{}{s} to form a block and attempts to publish the \kwBlock{} to the Blockchain through the consensus protocol. A new \kwBlock{} that passes the consensus protocol is accepted by all nodes and extends the Blockchain by pointing to the latest \kwBlock{} in the Blockchain. A simplified example illustrating the relationships between \kwBlock{s} and \kwTransaction{}{s} in the Blockchain is shown in Fig.~\ref{fig:bitcoin_blockchain}.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.65]{bitcoin-blockchain}
\caption{\textit{blocks} and \textit{transactions} in the Blockchain}
\label{fig:bitcoin_blockchain}
\end{center}
\end{figure}
As seen, the Blockchain contains nothing but chains of transfers of ownership of funds from the public key address in one \kwOutput{} to the public key address in another \kwOutput{} across different \kwBlock{s}. Those \kwOutput{s} that are not referred to by any \kwInput{} represent unspent funds and can be referenced by a new transaction.
Once a \kwTransaction{}{} has been on the Blockchain for some while, the payment is considered confirmed. The append-only property of the Blockchain prevents confirmed \kwTransaction{}{s} from being tampered with, while the public nature of the Blockchain allows anyone to check the validity of a \kwTransaction{}{}. These features ensure that Bitcoin payments are secure.
\section{Pseudonymity of Bitcoin}
\label{sec:1-Pseudonymity of Bitcoin}
\textbf{\textit{Pseudonymity}} refers to the case of a user using an identity that is not his/her real identity \cite{narayanan2016bitcoin}. Bitcoin achieves pseudonymity as payments are made with public key addresses instead of real world identities. These addresses are generated randomly, and a user can generate as many public key addresses as he wants to further hide his/her identity. Hence it is impossible to tell who a public key address belongs to given only the information of the public key address. However, pseudonymity alone does not achieve privacy in Bitcoin, as public key addresses can still be linked to real world identities given other information (\S\ref{sec:1-Ways to De-anonymise Bitcoin}). Due to the public nature of the Blockchain, once a real world identity is linked to a public key address, all the \kwTransaction{}{s} in the past, present and future using that address can be linked to that real world identity.
\section{Definition of Anonymity and Anonymity Set}
\label{sec:1-Definition of Anonymity and Anonymity Set}
To achieve privacy in Bitcoin, the anonymity property needs to be satisfied. Anonymity in the context of Bitcoin requires pseudonymity and unlinkability \cite{narayanan2016bitcoin}. \textbf{\textit{Unlinkability}} refers to the case that it is hard to:
\begin{enumerate}
\item Link different public key addresses of the same user
\item Link different \kwTransaction{}{s} made by the same user
\item Link a payer to the payee
\end{enumerate}
If the above properties are satisfied, Bitcoin and digital currency systems in general can be truly anonymous and \kwTransaction{}{s} can be fully private.
Another term that is relevant to anonymity is the \textbf{\textit{anonymity set}} \cite{narayanan2016bitcoin}. An anonymity set in Bitcoin is the set of \kwTransaction{}{s} in which an adversary cannot distinguish a particular \kwTransaction{}{} from. In other words, even if an adversary knows that a \kwTransaction{}{} is linked to a particular user, he cannot identify which \kwTransaction{}{} it is if that \kwTransaction{}{} is in its anonymity set. An anonymity set containing one \kwTransaction{}{} essentially implies no anonymity, while an anonymity set containing all the \kwTransaction{}{s} on the Blockchain implies full anonymity. Hence the size of the anonymity set in a protocol can be used to measure its level of anonymity.
\section{Ways to De-anonymise Bitcoin}
\label{sec:1-Ways to De-anonymise Bitcoin}
\subsection{Linking Payers to Payee}
\label{sec:1-Linking Payers to Payee}
Given the properties of anonymity, it can be seen that Bitcoin is not anonymous. The current Bitcoin protocol clearly violates the property that it should be hard to link a payer to the payee. The \kwInput{} of a \kwTransaction{}{} provides the link to the payer’s public key address, while the \kwOutput{} of a \kwTransaction{}{} states the payee’s public key address. For cases of direct payments without the use of an intermediary, anyone can easily tell the link between the payer and the payee since all \kwTransaction{}{s} are public on the Blockchain.
\subsection{Linking Public Key Addresses and Transactions}
\label{sec:1-Linking Public Key Addresses and Transactions}
The other two anonymity properties apply to cases where users use different public key addresses for payments. These two properties can be violated when the different public key addresses used by the same user are linked together to form a \textbf{\textit{cluster}}. If this is achieved, then the only step left is to associate a real world identity to the cluster to fully de-anonymise the user.
The first step of forming clusters of addresses can be achieved by inferring joint control of addresses through \textbf{\textit{shared spending}} and \textbf{\textit{shadow addresses}} \cite{Androulaki2013}. This technique is illustrated in Fig.~\ref{fig:bitcoin_cluster}.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.9]{bitcoin-cluster}
\caption{Clustering by inferring joint control of addresses}
\label{fig:bitcoin_cluster}
\end{center}
\end{figure}
A \kwTransaction{}{} can contain shared spending by having multiple \kwInput{s} that refer to different previous \kwOutput{s} (\S\ref{sec:1-Overview of Bitcoin}). One of the main reasons for this kind of \kwTransaction{}{} is that the payment amount is too big to be fulfilled by any single \kwOutput{} paid to the payer, so the payer has to combine funds from different \kwOutput{s} to make the payment. An adversary can make use of this spending pattern and infer that the public key addresses in the \kwOutput{s} referred to by the \kwInput{s} of a \kwTransaction{}{} belong to the same payer. In addition, a \kwTransaction{}{} often contains an \kwOutput{} that pays back the change to the payer (\S\ref{sec:1-Overview of Bitcoin}). The public key addresses in these \kwOutput{s}, referred to as shadow addresses, are in fact addresses of the payer. Hence an adversary can attempt to extract these shadow addresses from a \kwTransaction{}{}. These two techniques can be used to form clusters of public key addresses that belong to the same user. In the above example, the \kwOutput{s} in grey form a cluster, and the public key addresses in these \kwOutput{s} belong to the Payer. Once clusters have been formed, different clusters can be linked to each other transitively as long as one public key address from each of the different clusters can be linked together. An example of this technique is illustrated in Fig.~\ref{fig:bitcoin_cluster_transitive}.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.95]{bitcoin-cluster-transitive}
\caption{Linking clusters transitively}
\label{fig:bitcoin_cluster_transitive}
\end{center}
\end{figure}
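A hedged sketch of the shared-spending heuristic, using union-find to merge all input addresses of each transaction into one cluster, is shown below; the transaction representation is a simplification.
\begin{verbatim}
# Sketch of clustering by shared spending: all input
# addresses of a transaction are merged (union-find).
class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, a):
        self.parent.setdefault(a, a)
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]
            a = self.parent[a]
        return a
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def cluster(txs):  # tx = list of its input addresses
    uf = UnionFind()
    for tx in txs:
        for addr in tx[1:]:
            uf.union(tx[0], addr)
    return uf

uf = cluster([["A", "B"], ["B", "C"], ["D"]])
print(uf.find("A") == uf.find("C"))  # True: one cluster
\end{verbatim}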
\subsection{Linking Public Key Addresses to Real World Identities}
\label{sec:1-Linking Public Key Addresses to Real World Identities}
After forming clusters of public key addresses that each belong to a user, the last step is to assign real world identities to the clusters. This can be done by transacting with users who are targets of de-anonymization \cite{Meiklejohn2013}. Given that an adversary knows the real world identity of the user he is transacting with, he can learn one of the public key addresses of that user. With this one known public key address of the real world identity, the whole cluster containing that public key address can be transitively linked to the identity. Hence an adversary has the ability to attribute a list of public key addresses to a real world identity and de-anonymize the \kwTransaction{}{s} made by that identity.
\subsection{Side Channels Attacks}
\label{sec:1-Side Channels Attacks}
Side channels in Bitcoin refer to the indirect disclosure of information that can aid in the discovery of the real world identities of users. Since all \kwTransaction{}{} information such as payment amounts and timings is publicly available, an adversary can correlate \kwTransaction{}{} patterns with other publicly available information to match \kwTransaction{}{s} to real identities \cite{narayanan2016bitcoin}. For example, if activities on social media are correlated with Bitcoin \kwTransaction{}{} activity, then social media users (assuming they are using real world identities) can be linked with the \kwTransaction{}{s} that happened during the periods when they are active on social media. In addition, similar to clustering \kwTransaction{}{s} by analysing shared spending and shadow addresses, an adversary can also cluster \kwTransaction{}{s} together based on similarities in the timings and amounts of the \kwTransaction{}{s} \cite{Androulaki2013}. The bottom line is that as long as \kwTransaction{}{} details are public, there are always ways to make intelligent guesses about the real world identities behind public key addresses.
\section{Acknowledgments}
\label{sec:acknowledgments}
A. L. A. Reis was supported by a PhD scholarship from CNPq (Proc. number: 141275/2016-2).
Val{\'e}ria C.F. Barbosa was supported by fellowships from CNPq (grant 307135/2014-4)
and FAPERJ (grant 26/202.582/2019). Vanderlei C. Oliveira Jr. was supported
by fellowships from CNPq (grant 308945/2017-4) and FAPERJ (grant E-26/202.729/2018).
The authors thank CPRM for permission to use the aeromagnetic data set.
\chapter{\texorpdfstring{Typed sets and category $\mathbf{Rel}$}%
{Typed sets and category Rel}}
\section{Relational structures}
\begin{defn}
\index{relational structure}A \emph{relational structure} is a pair
consisting of a set and a tuple of relations on this set.
\end{defn}
A poset $(\mathfrak{A},\sqsubseteq)$ can be considered as a relational
structure: $(\mathfrak{A},\llbracket\sqsubseteq\rrbracket).$
A set $X$ can be considered as a relational structure with zero relations:
$(X,\llbracket\rrbracket).$
This book is not about relational structures. So I will not introduce
more examples.
Think of relational structures as a common framework for sets and posets,
as far as they are considered in this book.
We will denote $x\in(\mathfrak{A},R)$ iff $x\in\mathfrak{A}$ for
a relational structure $(\mathfrak{A},R)$.
\section{Typed elements and typed sets}
We sometimes want to differentiate between the same element of two
different sets. For example, we may want to consider the natural
number~$3$ and the rational number~$3$ as different. In order to describe
this in a formal way we consider elements of sets together with sets
themselves. For example, we can consider the pairs $(\mathbb{N},3)$
and $(\mathbb{Q},3)$.
\begin{defn}
\index{typed element}\index{element!typed}A \emph{typed element}
is a pair $(\mathfrak{A},a)$ where $\mathfrak{A}$ is a relational
structure and $a\in\mathfrak{A}$.
I denote $\type(\mathfrak{A},a)=\mathfrak{A}$ and $\GR(\mathfrak{A},a)=a$.
\end{defn}
\begin{defn}
I will denote the typed element $(\mathfrak{A},a)$ as $@^{\mathfrak{A}} a$ or just
$@a$ when $\mathfrak{A}$ is clear from context.
\end{defn}
\begin{defn}
\index{typed set}\index{set!typed}A \emph{typed set} is a typed
element equal to $(\subsets U,A)$ where $U$ is a set and $A$ is
its subset.\end{defn}
\begin{rem}
\emph{Typed sets} are an awkward formalization of type theory sets
in ZFC ($U$ is meant to express the \emph{type} of the set). This
book could be better written using type theory instead of ZFC, but
I want my book to be understandable for everyone knowing ZFC. $(\subsets U,A)$
should be understood as a set $A$ of type $U$. For example, consider
$(\subsets\mathbb{R},[0;10])$; it is the closed interval $[0;10]$
whose elements are considered as real numbers.\end{rem}
\begin{defn}
$\mathfrak{T}\mathfrak{A}=\setcond{(\mathfrak{A},a)}{a\in\mathfrak{A}}=\{\mathfrak{A}\}\times\mathfrak{A}$
for every relational structure~$\mathfrak{A}$.\end{defn}
\begin{rem}
$\mathfrak{T}\mathfrak{A}$ is the set of typed elements of~$\mathfrak{A}$.
\end{rem}
\begin{defn}
If $\mathfrak{A}$ is a poset, we introduce order on its typed elements
isomorphic to the order of the original poset: $(\mathfrak{A},a)\sqsubseteq(\mathfrak{A},b)\Leftrightarrow a\sqsubseteq b$.
\end{defn}
\begin{defn}
I will denote \emph{typed subsets} of a typed set $(\subsets U,A)$
as $\subsets(\subsets U,A)=\setcond{(\subsets U,X)}{X\in\subsets A}=\{\subsets U\}\times\subsets A$.\end{defn}
\begin{obvious}
$\subsets(\subsets U,A)$ is also a set of typed sets.
\end{obvious}
\begin{defn}
I will denote $\mathscr{T}U=\mathfrak{T}\subsets U$.\end{defn}
\begin{rem}
This means that $\mathscr{T}U$ is the set of typed subsets of a set~$U$.\end{rem}
\begin{obvious}
$\mathscr{T}U=\setcond{(\subsets U,X)}{X\in\subsets U}=\{\subsets U\}\times\subsets U=\subsets(\subsets U,U)$.
\end{obvious}
\begin{obvious}
$\mathscr{T}U$ is a complete atomistic boolean lattice. Particularly:
\begin{enumerate}
\item $\bot^{\mathscr{T}U}=(\subsets U,\emptyset)$;
\item $\top^{\mathscr{T}U}=(\subsets U,U)$;
\item $(\subsets U,A)\sqcup(\subsets U,B)=(\subsets U,A\cup B)$;
\item $(\subsets U,A)\sqcap(\subsets U,B)=(\subsets U,A\cap B)$;
\item $\bigsqcup_{A\in S}(\subsets U,A)=(\subsets U,\bigcup_{A\in S}A)$;
\item $\bigsqcap_{A\in S}(\subsets U,A)=\left(\subsets U,\begin{cases}\bigcap_{A\in S}A&\text{ if }S\ne\emptyset\\U&\text{ if }S=\emptyset\end{cases}\right)$;
\item $\overline{(\subsets U,A)}=(\subsets U,U\setminus A)$;
\item atomic elements are $(\subsets U,\{x\})$ where $x\in U$.
\end{enumerate}
Typed sets are ``better'' than regular sets in that (for example) for
a set~$U$ and a typed set~$X$ the following are defined by standard
order theory:\end{obvious}
\begin{itemize}
\item $\atoms X$;
\item $\overline{X}$;
\item $\bigsqcap^{\mathscr{T}U}\emptyset$.
\end{itemize}
For regular (``non-typed'') sets these are not defined (except for
$\atoms X$, which however needs a special definition instead of using
the standard order-theory definition of atoms).
Typed sets are convenient to use together with filters on sets
(see below), because both typed sets and filters have a set~$\subsets U$
as their type.
Another advantage of typed sets is that their binary product (as defined
below) is a $\mathbf{Rel}$-morphism. This is especially convenient
because the products of filters defined below are also morphisms of
related categories.
Well, typed sets are also quite awkward, but the proper way of doing
modern mathematics is \emph{type theory}, not ZFC, which is however
outside the scope of this book.
\section{\texorpdfstring{Category $\mathbf{Rel}$}{Category Rel}}
I remind the reader that $\mathbf{Rel}$ is the category of (small) binary relations
between sets, and $\mathbf{Set}$ is its subcategory where only monovalued
entirely defined morphisms (functions) are considered.
\begin{defn}
Order on $\mathbf{Rel}(A,B)$ is defined by the formula $f\sqsubseteq g\Leftrightarrow\GR f\subseteq\GR g$.\end{defn}
\begin{obvious}
This order is isomorphic to the natural order of subsets of the set
$A\times B$.\end{obvious}
\begin{defn}
$X\rsuprel fY\Leftrightarrow\GR X\rsuprel{\GR f}\GR Y$ and $\rsupfun fX=(\Dst f,\rsupfun{\GR f}\GR X)$
for a $\mathbf{Rel}$-morphism~$f$ and typed sets $X\in\mathscr{T}\Src f$,
$Y\in\mathscr{T}\Dst f$.
\end{defn}
\begin{defn}
For category $\mathbf{Rel}$ there is defined reverse morphism: $(A,B,F)^{-1}=(B,A,F^{-1})$.\end{defn}
\begin{obvious}
$(f^{-1})^{-1}=f$ for every $\mathbf{Rel}$-morphism~$f$.
\end{obvious}
\begin{obvious}
$\rsuprel{f^{-1}}=\rsuprel f^{-1}$ for every $\mathbf{Rel}$-morphism~$f$.
\end{obvious}
\begin{obvious}
$(g\circ f)^{-1}=f^{-1}\circ g^{-1}$ for every composable $\mathbf{Rel}$-morphisms
$f$ and~$g$.\end{obvious}
\begin{prop}
$\rsupfun{g\circ f}=\rsupfun g\circ\rsupfun f$ for every composable
$\mathbf{Rel}$-morphisms $f$ and~$g$.\end{prop}
\begin{proof}
Exercise.\end{proof}
\begin{prop}
The above definitions of monovalued morphisms of $\mathbf{Rel}$ and
of injective morphisms of $\mathbf{Set}$ coincide with how mathematicians
usually define monovalued functions (that is morphisms of $\mathbf{Set}$)
and injective functions.\end{prop}
\begin{proof}
Let $f$ be a $\mathbf{Rel}$-morphism $A\rightarrow B$.
The following are equivalent:
\begin{itemize}
\item $f$ is a monovalued relation;
\item $\forall x\in A,y_{0},y_{1}\in B:(x\mathrel fy_{0}\land x\mathrel fy_{1}\Rightarrow y_{0}=y_{1})$;
\item $\forall x\in A,y_{0},y_{1}\in B:(y_{0}\ne y_{1}\Rightarrow\lnot(x\mathrel fy_{0})\lor\lnot(x\mathrel fy_{1}))$;
\item $\forall y_{0},y_{1}\in B\forall x\in A:(y_{0}\ne y_{1}\Rightarrow\lnot(x\mathrel fy_{0})\lor\lnot(x\mathrel fy_{1}))$;
\item $\forall y_{0},y_{1}\in B:(y_{0}\ne y_{1}\Rightarrow\forall x\in A:(\lnot(x\mathrel fy_{0})\lor\lnot(x\mathrel fy_{1})))$;
\item $\forall y_{0},y_{1}\in B:(\exists x\in A:(x\mathrel fy_{0}\land x\mathrel fy_{1})\Rightarrow y_{0}=y_{1})$;
\item $\forall y_{0},y_{1}\in B:y_{0}\mathrel{(f\circ f^{-1})}y_{1}\Rightarrow y_{0}=y_{1}$;
\item $f\circ f^{-1}\sqsubseteq1_{B}$.
\end{itemize}
Let now $f$ be a $\mathbf{Set}$-morphism $A\rightarrow B$.
The following are equivalent:
\begin{itemize}
\item $f$ is an injective function;
\item $\forall y\in B,a,b\in A:(a\mathrel fy\land b\mathrel fy\Rightarrow a=b)$;
\item $\forall y\in B,a,b\in A:(a\ne b\Rightarrow\lnot(a\mathrel fy)\lor\lnot(b\mathrel fy))$;
\item $\forall a,b\in A:(a\ne b\Rightarrow\forall y\in B:(\lnot(a\mathrel fy)\lor\lnot(b\mathrel fy)))$;
\item $\forall a,b\in A:(\exists y\in B:(a\mathrel fy\land b\mathrel fy)\Rightarrow a=b)$;
\item $\forall a,b\in A:a\mathrel{(f^{-1}\circ f)}b\Rightarrow a=b$;
\item $f^{-1}\circ f\sqsubseteq1_{A}$.
\end{itemize}
\end{proof}
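The criterion $f\circ f^{-1}\sqsubseteq1_{B}$ is easy to check mechanically. The following small sketch (in Python, with relations encoded as sets of pairs; this encoding is mine and is not part of the formalism of this book) illustrates it:
\begin{verbatim}
# A binary relation is encoded as a set of (x, y) pairs.
def compose(g, f):
    # (g o f) relates x to z iff x f y and y g z for some y.
    return {(x, z) for (x, y1) in f for (y2, z) in g if y1 == y2}

def inverse(f):
    return {(y, x) for (x, y) in f}

def is_monovalued(f, B):
    # f is monovalued iff f o f^{-1} lies below the identity on B.
    return compose(f, inverse(f)) <= {(b, b) for b in B}

f = {(1, 'a'), (2, 'a')}   # a function, hence monovalued
g = {(1, 'a'), (1, 'b')}   # relates 1 to two values
print(is_monovalued(f, {'a', 'b'}))   # True
print(is_monovalued(g, {'a', 'b'}))   # False
\end{verbatim}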
\begin{prop}
For a binary relation $f$ we have:
\begin{enumerate}
\item \label{br-j}$\rsupfun f\bigcup S=\bigcup\rsupfun{\rsupfun f}S$ for
a set of sets $S$;
\item \label{br-j1}$\bigcup S\rsuprel fY\Leftrightarrow\exists X\in S:X\rsuprel fY$
for a set of sets $S$;
\item \label{br-j2}$X\rsuprel f\bigcup T\Leftrightarrow\exists Y\in T:X\rsuprel fY$
for a set of sets $T$;
\item \label{br-j12}$\bigcup S\rsuprel f\bigcup T\Leftrightarrow\exists X\in S,Y\in T:X\rsuprel fY$
for sets of sets $S$ and $T$;
\item \label{br-at-r}$X\rsuprel fY\Leftrightarrow\exists\alpha\in X,\beta\in Y:\{\alpha\}\rsuprel f\{\beta\}$
for sets $X$ and $Y$;
\item \label{br-at-f}$\rsupfun fX=\bigcup\rsupfun{\rsupfun f}\atoms X$ for a
set $X$ (where $\atoms X=\setcond{\{x\}}{x\in X}$).
\end{enumerate}
\end{prop}
\begin{proof}
~
\begin{widedisorder}
\item [{\ref{br-j}}] ~
\begin{multline*}
y\in\rsupfun f\bigcup S\Leftrightarrow\exists x\in\bigcup S:x\mathrel fy\Leftrightarrow\exists P\in S,x\in P:x\mathrel fy\Leftrightarrow\\
\exists P\in S:y\in\rsupfun fP\Leftrightarrow\exists Q\in\rsupfun{\rsupfun f}S:y\in Q\Leftrightarrow y\in\bigcup\rsupfun{\rsupfun f}S.
\end{multline*}
\item [{\ref{br-j1}}]
\begin{multline*}
\bigcup S\rsuprel fY\Leftrightarrow\exists x\in\bigcup S,y\in Y:x\mathrel fy\Leftrightarrow\\
\exists X\in S,x\in X,y\in Y:x\mathrel fy\Leftrightarrow\exists X\in S:X\rsuprel fY.
\end{multline*}
\item [{\ref{br-j2}}] By symmetry.
\item [{\ref{br-j12}}] From two previous formulas.
\item [{\ref{br-at-r}}] $X\rsuprel fY\Leftrightarrow\exists\alpha\in X,\beta\in Y:\alpha\mathrel f\beta\Leftrightarrow\exists\alpha\in X,\beta\in Y:\{\alpha\}\rsuprel f\{\beta\}$.
\item [\ref{br-at-f}] Obvious.
\end{widedisorder}
\end{proof}
\begin{cor}
For a $\mathbf{Rel}$-morphism $f$ we have:
\begin{enumerate}
\item $\rsupfun f\bigsqcup S=\bigsqcup\rsupfun{\rsupfun f}S$ for $S\in\subsets\mathscr{T}\Src f$;
\item $\bigsqcup S\rsuprel fY\Leftrightarrow\exists X\in S:X\rsuprel fY$
for $S\in\subsets\mathscr{T}\Src f$;
\item $X\rsuprel f\bigsqcup T\Leftrightarrow\exists Y\in T:X\rsuprel fY$
for $T\in\subsets\mathscr{T}\Dst f$;
\item $\bigsqcup S\rsuprel f\bigsqcup T\Leftrightarrow\exists X\in S,Y\in T:X\rsuprel fY$
for $S\in\subsets\mathscr{T}\Src f$, $T\in\subsets\mathscr{T}\Dst f$;
\item $X\rsuprel fY\Leftrightarrow\exists x\in\atoms X,y\in\atoms Y:x\rsuprel fy$
for $X\in\mathscr{T}\Src f$, $Y\in\mathscr{T}\Dst f$;
\item $\rsupfun fX=\bigsqcup\rsupfun{\rsupfun f}\atoms X$ for $X\in\mathscr{T}\Src f$.
\end{enumerate}
\end{cor}
\begin{cor}
A $\mathbf{Rel}$-morphism $f$ can be restored knowing either $\rsupfun fx$
for atoms $x\in\mathscr{T}\Src f$ or $x\rsuprel fy$ for atoms $x\in\mathscr{T}\Src f$,
$y\in\mathscr{T}\Dst f$.\end{cor}
\begin{prop}
Let $A$, $B$ be sets, $R$ be a set of binary relations.
\begin{enumerate}
\item \textbf{\label{bsr-jf}$\rsupfun{\bigcup R}X=\bigcup_{f\in R}\rsupfun fX$}
for every set $X$;
\item \label{bsr-mf}$\rsupfun{\bigcap R}\{\alpha\}=\bigcap_{f\in R}\rsupfun f\{\alpha\}$
for every $\alpha$, if $R$ is nonempty;
\item \label{bsr-jr}$X\rsuprel{\bigcup R}Y\Leftrightarrow\exists f\in R:X\rsuprel fY$
for every sets $X$, $Y$;
\item \label{bsr-mr}$\alpha\mathrel{\left(\bigcap R\right)}\beta\Leftrightarrow\forall f\in R:\alpha\mathrel f\beta$
for every $\alpha$ and $\beta$, if $R$ is nonempty.
\end{enumerate}
\end{prop}
\begin{proof}
~
\begin{widedisorder}
\item [{\ref{bsr-jf}}] ~
\begin{multline*}
y\in\rsupfun{\bigcup R}X\Leftrightarrow\exists x\in X:x\mathrel{\left(\bigcup R\right)}y\Leftrightarrow\exists x\in X,f\in R:x\mathrel fy\Leftrightarrow\\
\exists f\in R:y\in\rsupfun fX\Leftrightarrow y\in\bigcup_{f\in R}\rsupfun fX.
\end{multline*}
\item [{\ref{bsr-mf}}] ~
\[
y\in\rsupfun{\bigcap R}\{\alpha\}\Leftrightarrow\forall f\in R:\alpha\mathrel fy\Leftrightarrow\forall f\in R:y\in\rsupfun f\{\alpha\}\Leftrightarrow y\in\bigcap_{f\in R}\rsupfun f\{\alpha\}.
\]
\item [{\ref{bsr-jr}}] ~
\begin{multline*}
X\rsuprel{\bigcup R}Y\Leftrightarrow\exists x\in X,y\in Y:x\mathrel{\left(\bigcup R\right)}y\Leftrightarrow\\
\exists x\in X,y\in Y,f\in R:x\mathrel fy\Leftrightarrow\exists f\in R:X\rsuprel fY.
\end{multline*}
\item [{\ref{bsr-mr}}] Obvious.
\end{widedisorder}
\end{proof}
\begin{cor}
Let $A$, $B$ be sets, $R\in\subsets\mathbf{Rel}(A,B)$.
\begin{enumerate}
\item \textbf{$\rsupfun{\bigsqcup R}X=\bigsqcup_{f\in R}\rsupfun fX$} for
$X\in\mathscr{T}A$;
\item $\rsupfun{\bigsqcap R}x=\bigsqcap_{f\in R}\rsupfun fx$ for atomic
$x\in\mathscr{T}A$;
\item $X\rsuprel{\bigsqcup R}Y\Leftrightarrow\exists f\in R:X\rsuprel fY$
for $X\in\mathscr{T}A$, $Y\in\mathscr{T}B$;
\item $x\rsuprel{\bigsqcap R}y\Leftrightarrow\forall f\in R:x\rsuprel fy$
for every atomic $x\in\mathscr{T}A$, $y\in\mathscr{T}B$.
\end{enumerate}
\end{cor}
\begin{prop}
$X\rsuprel{g\circ f}Z\Leftrightarrow\exists\beta:(X\rsuprel f\{\beta\}\land\{\beta\}\rsuprel gZ)$
for all binary relations $f$ and~$g$ and sets $X$ and $Z$.\end{prop}
\begin{proof}
~
\begin{multline*}
X\rsuprel{g\circ f}Z\Leftrightarrow\exists x\in X,z\in Z:x\mathrel{(g\circ f)}z\Leftrightarrow\\
\exists x\in X,z\in Z,\beta:(x\mathrel f\beta\land\beta\mathrel gz)\Leftrightarrow\\
\exists\beta:(\exists x\in X:x\mathrel f\beta\land\exists z\in Z:\beta\mathrel gz)\Leftrightarrow\exists\beta:(X\rsuprel f\{\beta\}\land\{\beta\}\rsuprel gZ).
\end{multline*}
\end{proof}
\begin{cor}
$X\rsuprel{g\circ f}Z\Leftrightarrow\exists y\in\atoms^{\mathscr{T}B}:(X\rsuprel fy\land y\rsuprel gZ)$
for $f\in\mathbf{Rel}(A,B)$, $g\in\mathbf{Rel}(B,C)$ (for sets $A$,
$B$, $C$).\end{cor}
\begin{prop}
$f\circ\bigcup G=\bigcup_{g\in G}(f\circ g)$ and $\bigcup G\circ f=\bigcup_{g\in G}(g\circ f)$
for every binary relation~$f$ and set~$G$ of binary relations.\end{prop}
\begin{proof}
We will prove only $\bigcup G\circ f=\bigcup_{g\in G}(g\circ f)$,
as the other formula follows from duality. Indeed,
\begin{multline*}
(x,z)\in\bigcup G\circ f\Leftrightarrow\exists y:((x,y)\in f\land(y,z)\in\bigcup G)\Leftrightarrow\\
\exists y,g\in G:((x,y)\in f\land(y,z)\in g)\Leftrightarrow\exists g\in G:(x,z)\in g\circ f\Leftrightarrow(x,z)\in\bigcup_{g\in G}(g\circ f).
\end{multline*}
\end{proof}
\begin{cor}
Every $\mathbf{Rel}$-morphism is metacomplete and co-metacomplete.\end{cor}
\begin{prop}
\label{rel-mono}The following are equivalent for a $\mathbf{Rel}$-morphism~$f$:
\begin{enumerate}
\item \label{rel-mono-mono}$f$ is monovalued.
\item \label{rel-mmv}$f$ is metamonovalued.
\item \label{rel-wmmv}$f$ is weakly metamonovalued.
\item \label{rel-mono-atom}$\rsupfun fa$ is either atomic or least whenever
$a\in\atoms^{\mathscr{T}\Src f}$.
\item \label{rel-mono-bin}$\rsupfun{f^{-1}}(I\sqcap J)=\rsupfun{f^{-1}}I\sqcap\rsupfun{f^{-1}}J$
for every $I,J\in\mathscr{T}\Src f$.
\item \label{rel-mono-meet}$\rsupfun{f^{-1}}\bigsqcap S=\bigsqcap_{Y\in S}\rsupfun{f^{-1}}Y$
for every $S\in\subsets\mathscr{T}\Src f$.
\end{enumerate}
\end{prop}
\begin{proof}
~
\begin{description}
\item [{\ref{rel-mmv}$\Rightarrow$\ref{rel-wmmv}}] Obvious.
\item [{\ref{rel-mono-mono}$\Rightarrow$\ref{rel-mmv}}] Take $x\in\atoms^{\mathscr{T}\Src f}$;
then $\rsupfun fx\in\atoms^{\mathscr{T}\Dst f}\cup\{\bot^{\mathscr{T}\Dst f}\}$ and thus
\begin{multline*}
\rsupfun{\left(\bigsqcap G\right)\circ f}x=\rsupfun{\bigsqcap G}\rsupfun fx=\bigsqcap_{g\in G}\rsupfun g\rsupfun fx=\\
\bigsqcap_{g\in G}\rsupfun{g\circ f}x=\rsupfun{\bigsqcap_{g\in G}(g\circ f)}x;
\end{multline*}
so $\left(\bigsqcap G\right)\circ f=\bigsqcap_{g\in G}(g\circ f)$.
\item [{\ref{rel-wmmv}$\Rightarrow$\ref{rel-mono-mono}}] Take $g=\{(a,y)\}$
and $h=\{(b,y)\}$ for arbitrary $a\ne b$ and arbitrary~$y$. We
have $g\cap h=\emptyset$; thus $(g\circ f)\cap(h\circ f)=(g\cap h)\circ f=\bot$,
and so $x\mathrel fa\land x\mathrel fb$ is impossible, as otherwise we would have
$(x,y)\in(g\circ f)\cap(h\circ f)$. Thus $f$ is monovalued.
\item [{\ref{rel-mono-atom}$\Rightarrow$\ref{rel-mono-meet}}] Let $a\in\atoms^{\mathscr{T}\Src f}$,
$\rsupfun fa=b$. Then because $b\in\atoms^{\mathscr{T}\Dst f}\cup\{\bot^{\mathscr{T}\Dst f}\}$
\begin{gather*}
\bigsqcap S\sqcap b\ne\bot\Leftrightarrow\forall Y\in S:Y\sqcap b\ne\bot;\\
a\rsuprel f\bigsqcap S\Leftrightarrow\forall Y\in S:a\rsuprel fY;\\
\bigsqcap S\rsuprel{f^{-1}}a\Leftrightarrow\forall Y\in S:Y\rsuprel{f^{-1}}a;\\
a\nasymp\rsupfun{f^{-1}}\bigsqcap S\Leftrightarrow\forall Y\in S:a\nasymp\rsupfun{f^{-1}}Y;\\
a\nasymp\rsupfun{f^{-1}}\bigsqcap S\Leftrightarrow a\nasymp\bigsqcap_{Y\in S}\rsupfun{f^{-1}}Y;\\
\rsupfun{f^{-1}}\bigsqcap S=\bigsqcap_{Y\in S}\rsupfun{f^{-1}}Y.
\end{gather*}
\item [{\ref{rel-mono-meet}$\Rightarrow$\ref{rel-mono-bin}}] Obvious.
\item [{\ref{rel-mono-bin}$\Rightarrow$\ref{rel-mono-mono}}]
$\rsupfun{f^{-1}}a\sqcap\rsupfun{f^{-1}}b=\rsupfun{f^{-1}}(a\sqcap b)=\rsupfun{f^{-1}}\bot=\bot$
for every two distinct atoms $a=\{\alpha\},b=\{\beta\}\in\mathscr{T}\Dst f$.
From this
\begin{multline*}
\alpha\mathrel{(f\circ f^{-1})}\beta\Leftrightarrow\exists y\in\Dst f:(\alpha\mathrel{f^{-1}}y\land y\mathrel{f}\beta)\Leftrightarrow\\
\exists y\in\Dst f:(y\in\rsupfun{f^{-1}}a\land y\in\rsupfun{f^{-1}}b)
\end{multline*}
is impossible. Thus $f\circ f^{-1}\sqsubseteq1_{\Dst f}^{\mathbf{Rel}}$.
\item [{$\lnot$\ref{rel-mono-atom}$\Rightarrow\lnot$\ref{rel-mono-mono}}] Suppose
$\rsupfun fa\notin\atoms^{\mathscr{T}\Dst f}\cup\{\bot^{\mathscr{T}\Dst f}\}$
for some $a\in\atoms^{\mathscr{T}\Src f}$. Then there exist distinct
points $p$, $q$ such that $p,q\in\rsupfun fa$. Thus $p\mathrel{(f\circ f^{-1})}q$
and so $f\circ f^{-1}\nsqsubseteq1_{\Dst f}^{\mathbf{Rel}}$.
\end{description}
\end{proof}
\section{Product of typed sets}
\begin{defn}
Product of typed sets is defined by the formula
\[
(\subsets U,A)\times(\subsets W,B)=(U,W,A\times B).
\]
\end{defn}
\begin{prop}
Product of typed sets is a $\mathbf{Rel}$-morphism.\end{prop}
\begin{proof}
We need to prove $A\times B\subseteq U\times W$, but this is obvious.\end{proof}
\begin{obvious}
Atoms of $\mathbf{Rel}(A,B)$ are exactly the products $a\times b$ where
$a$ and $b$ are atoms of $\mathscr{T}A$ and $\mathscr{T}B$ respectively.
$\mathbf{Rel}(A,B)$ is an atomistic poset.\end{obvious}
\begin{prop}
$f\nasymp A\times B\Leftrightarrow A\rsuprel fB$ for every $\mathbf{Rel}$-morphism~$f$
and $A\in\mathscr{T}\Src f$, $B\in\mathscr{T}\Dst f$.\end{prop}
\begin{proof}
~
\begin{multline*}
A\rsuprel fB\Leftrightarrow\exists x\in\atoms A,y\in\atoms B:x\rsuprel fy\Leftrightarrow\\
\exists x\in\atoms^{\mathscr{T}\Src f},y\in\atoms^{\mathscr{T}\Dst f}:(x\times y\sqsubseteq f\land x\times y\sqsubseteq A\times B)\Leftrightarrow f\nasymp A\times B.
\end{multline*}
\end{proof}
\begin{defn}
\emph{Image} and \emph{domain} of a $\mathbf{Rel}$-morphism~$f$
are typed sets defined by the formulas
\[
\dom(U,W,f)=(\subsets U,\dom f)\quad\text{and}\quad\im(U,W,f)=(\subsets W,\im f).
\]
\end{defn}
\begin{obvious}
Image and domain of a $\mathbf{Rel}$-morphism are really typed sets.\end{obvious}
\begin{defn}
\emph{Restriction} of a $\mathbf{Rel}$-morphism to a typed set is
defined by the formula $(U,W,f)|_{(\subsets U,X)}=(U,W,f|_{X})$.\end{defn}
\begin{obvious}
Restriction of a $\mathbf{Rel}$-morphism is a $\mathbf{Rel}$-morphism.
\end{obvious}
\begin{obvious}
$f|_{A}=f\sqcap(A\times\top^{\mathscr{T}\Dst f})$ for every $\mathbf{Rel}$-morphism~$f$
and $A\in\mathscr{T}\Src f$.
\end{obvious}
\begin{obvious}
$\rsupfun fX=\rsupfun f(X\sqcap\dom f)=\im(f|_{X})$ for every $\mathbf{Rel}$-morphism~$f$
and $X\in\mathscr{T}\Src f$.
\end{obvious}
\begin{obvious}
$f\sqsubseteq A\times B\Leftrightarrow\dom f\sqsubseteq A\land\im f\sqsubseteq B$
for every $\mathbf{Rel}$-morphism~$f$ and $A\in\mathscr{T}\Src f$,
\textbf{$B\in\mathscr{T}\Dst f$}.\end{obvious}
\begin{thm}
Let $A$, $B$ be sets. If $S\in\subsets(\mathscr{T}A\times\mathscr{T}B)$
then
\[
\bigsqcap_{(A,B)\in S}(A\times B)=\bigsqcap\dom S\times\bigsqcap\im S.
\]
\end{thm}
\begin{proof}
For every atomic $x\in\mathscr{T}A$, $y\in\mathscr{T}B$ we have
\begin{multline*}
x\times y\sqsubseteq\bigsqcap_{(A,B)\in S}(A\times B)\Leftrightarrow\forall(A,B)\in S:x\times y\sqsubseteq A\times B\Leftrightarrow\\
\forall(A,B)\in S:(x\sqsubseteq A\land y\sqsubseteq B)\Leftrightarrow\forall A\in\dom S:x\sqsubseteq A\land\forall B\in\im S:y\sqsubseteq B\Leftrightarrow\\
x\sqsubseteq\bigsqcap\dom S\land y\sqsubseteq\bigsqcap\im S\Leftrightarrow x\times y\sqsubseteq\bigsqcap\dom S\times\bigsqcap\im S.
\end{multline*}
\end{proof}
\begin{obvious}
If $U$, $W$ are sets and $A\in\mathscr{T}(U)$ then $A\times{-}$ (the map
$B\mapsto A\times B$) is a complete homomorphism from the lattice
$\mathscr{T}(W)$ to the lattice $\mathbf{Rel}(U,W)$; if moreover $A\ne\bot$,
then it is an order embedding.\end{obvious}
\chapter{Foundation}
If you are already familiar with the things here,
please at least skim this chapter to verify that
we are speaking the same language.
\section{Type}
\index{type}%
\index{inhabitant}%
\(x : T\) means that the \emph{type} of \(x\) is \(T\).
We also say that \(x\) \emph{inhabits} \(T\) and that \(x\) is an \emph{inhabitant} of \(T\).
\index{function type}%
\index{type!function}%
A \emph{function type} looks like \(a \to b\).
Function application:
If \(x:A\) and \(f:A\to B\), then \(f(x):B\).
The arrow in a function type associates to the right.
The type \(a \to b \to c\) means \(a \to (b \to c)\).
Types and sets are different:
\(x \in A\) and \(x : A\) are different.
\section{Logic}
\index{iff}%
\index{if and only if}%
``Iff'' is ``if and only if''.
It is bi-implication.
\index{connectives}%
\index{logical connectives}%
Logical connectives:
\index{conjunction}%
\index{and}%
\emph{Conjunction} is \(a \wedge b\) (\(a\) and \(b\)).
\index{disjunction}%
\index{or}%
\emph{Disjunction} is \(a \vee b\) (\(a\) or \(b\)).
\index{implication}%
\emph{Implication} is \(a \implies b\) (if \(a\) then \(b\)).
\index{bi-implication}%
\emph{Bi-implication} is \(a \iff b\) (iff \(a\) then \(b\)).
\index{negation}%
\index{not}%
\emph{Negation} is \(\neg a\) (not \(a\)).
\section{Set}
\index{set}%
Practically, a \emph{set} is an unordered collection of unique things.%
\index{Russell's paradox}%
\footnote{Theoretically, a set cannot be defined this way due to Russell's paradox.}
If \(a \neq b\), then the notations \(\{a,b\}\), \(\{b,a\}\), and \(\{a,a,b\}\)
all describe the same set of \emph{two} elements.
The \emph{cardinality} of a set \(A\), written \(|A|\),
is the number of elements of \(A\).
\index{set builder notation}%
\index{notation!set builder}%
\paragraph{Set builder notation}
\( \{ M ~|~ F \} \)
describes the set of every \(M\) where \(F\) is true
where \(M\) is an expression and \(F\) is a logic formula.
For example, \(\{ x^2 ~|~ x \in \Nat \}\) is \(\{ 0^2, 1^2, 2^2, \ldots \}\).
The ellipsis (\ldots) is not an element of the set;
there are just too many things to write.
\paragraph{Set relationships}
\index{subset}%
\(A \subseteq B\)
(\(A\) is a \emph{subset} of \(B\))
iff \(\forall x (x \in A \implies x \in B)\).
\index{disjoint sets}%
\index{sets!disjoint}%
Two sets \(a\) and \(b\) are \emph{disjoint} iff \(a \cap b = \emptyset\).
\paragraph{Set operations}
\index{intersection}%
\index{intersection (set theory)}%
The \emph{intersection of \(A\) and \(B\)} is
\(A \cap B = \{ x ~|~ x \in A \wedge x \in B \}\).
\index{union}%
\index{union (set theory)}%
The \emph{union of \(A\) and \(B\)} is
\(A \cup B = \{ x ~|~ x \in A \vee x \in B \}\).
\index{subtraction (set theory)}%
\index{sets!subtraction}%
The \emph{subtraction of \(A\) and \(B\)} is
\(A - B = \{ x ~|~ x \in A \wedge x \not\in B \}\).
\index{Cartesian product}%
The \emph{Cartesian product of \(A\) and \(B\)} is \(A \times B = \{ (a,b) ~|~ a \in A, b \in B \}\).
The \emph{Cartesian product of \(A_1,\ldots,A_n\)} is
\(\prod_{k=1}^n A_k = A_1 \times \ldots \times A_n = \{ (a_1,\ldots,a_n) ~|~ a_1 \in A_1, ~\ldots, ~a_n \in A_n \}\).
\index{power set}%
\index{set!power}%
The \emph{power set} of \(A\) is \(2^A = \{ X ~|~ X \subseteq A \}\),
the set of all subsets of \(A\).
\paragraph{Closed under binary operation}
\index{closed under binary operation}%
Let \(S\) be a set and \(f\) be a \emph{binary operation} (a two-argument function).
\emph{\(S\) is closed under \(f\)} iff
\(\forall a, b \in S : f(a,b) \in S\).
\paragraph{Partitioning}
\(A_1,\ldots,A_n\) is a \emph{partitioning of \(A\)} if and only if
\(\bigcup_{k=1}^n A_k = A\) and
\(\forall i \neq j (A_i \cap A_j = \emptyset)\).
\section{Relation}
\index{relation}%
A \emph{relation} is a subset of a Cartesian product.
\index{binary relation}%
\index{relation!binary}%
\(R\) is a \emph{binary relation} between \(A\) and \(B\) iff \(R \subseteq A \times B\).
Iff \(A = B\), then \(R\) is an \emph{endorelation}.
\paragraph{Notation}
We write \(a R b\) or \(R(a,b)\) to mean \((a,b) \in R\).
We write \(\neg (a R b)\) or \(\neg R(a,b)\) to mean \((a,b) \not\in R\).
Properties of an endorelation:
\index{reflexive relation}%
\index{relation!reflexive}%
An endorelation \(R \subseteq A \times A\) is \emph{reflexive} iff
\(\forall x \in A : (x,x) \in R\),
\index{symmetric relation}%
\index{relation!symmetric}%
is \emph{symmetric} iff
\(\forall x,y: (x,y) \in R \iff (y,x) \in R\),
\index{antisymmetric relation}%
\index{relation!antisymmetric}%
is \emph{antisymmetric} iff
\(\forall x,y: (x,y) \in R \wedge (y,x) \in R \implies x = y\),
\index{transitive relation}%
\index{relation!transitive}%
is \emph{transitive} iff
\(\forall x,y,z: (x,y) \in R \wedge (y,z) \in R \implies (x,z) \in R\),
\index{total relation}%
\index{relation!total}%
is \emph{total} iff
\(\forall x \in A , y \in A : (x,y) \in R \vee (y,x) \in R\).
\index{equivalence relation}%
\index{relation!equivalence}%
An \emph{equivalence relation} is a symmetric transitive reflexive relation.
\index{partial order}%
\index{total order}%
\index{order!partial}%
\index{order!total}%
A \emph{partial order} is an antisymmetric transitive reflexive relation.
A \emph{total order} is an antisymmetric transitive total relation.
The
\index{composition}%
\index{composition!of two relations}%
\index{relation!composition}%
\emph{composition} of \(f\) and \(g\) is
\(f \circ g = \{ (a,c) ~|~ (a,b) \in g \wedge (b,c) \in f \}\).
Repeated composition: \(f^{n+1} = f \circ f^n\).
\(R\) is \emph{transitive} iff \(\forall a,b,c : a R b \wedge b R c \implies a R c\).
The \emph{inverse} of \(R\) is \(R^{-1} = \{ (b,a) ~|~ a R b \}\).
The
\index{symmetric closure}%
\emph{symmetric closure} of \(R\)
is the smallest symmetric relation that is also a superset of \(R\).
The symmetric closure of \(R\) is \(R \cup R^{-1}\).
The
\index{transitive closure}%
\emph{transitive closure} of \(R\)
is the smallest transitive relation that is also a superset of \(R\).
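For instance, the transitive closure can be computed by adding compositions until a fixed point is reached. The sketch below is in Python, with relations encoded as sets of pairs (an assumption of this illustration):
\begin{verbatim}
def compose(f, g):
    # As defined above: (a, c) is in the composition of f and g
    # iff (a, b) is in g and (b, c) is in f for some b.
    return {(a, c) for (a, b) in g for (b2, c) in f if b == b2}

def transitive_closure(r):
    closure = set(r)
    while True:
        extra = compose(closure, closure) - closure
        if not extra:
            return closure
        closure |= extra

print(transitive_closure({(1, 2), (2, 3), (3, 4)}))
# -> {(1, 2), (2, 3), (3, 4), (1, 3), (2, 4), (1, 4)}
\end{verbatim}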
\section{Function}
\index{function}%
A \emph{function} \(f\) is a relation such that
\(\forall a, b, c : (a,b) \in f \wedge (a,c) \in f \implies b = c\).
The type of a function is \(a \to b\).
If \(f\) is a function, then \(f(a) = b\) iff \((a,b) \in f\).
The type of a relation between \(a\) and \(b\) is \(a \to 2^b\).
If $(a,b) \in f$ and $(a,c) \in f$, then $b = c$.
\index{endofunction}%
The type of an \emph{endofunction} is \(a \to a\).
An endofunction is a function whose input type is equal to its output type.
\index{Iverson bracket}%
\paragraph{Iverson bracket}
\([E]\) is \(1\) iff \(E\) is true
and \(0\) iff \(E\) is false.
\index{indicator function}%
\paragraph{Indicator function}
\(1_A(x) = [x \in A]\).
\index{unnamed function}%
\paragraph{Unnamed function}
\(f(x) = x + 1\) and \(f = x \to x+1\) describe the same function.
When an expression is expected, the notation \(X \to Y\)
means a function that evaluates to \(Y\) if given \(X\).
When a type is expected, the notation \(X \to Y\) describes a function type.
Expressions like \(x \to x + 1\) are called \emph{unnamed functions}.
The
\index{image!of relation}%
\emph{\(R\)-image} of \(A\) is \(\{ y ~|~ (x,y) \in R, ~ x \in A \}\).
The
\index{preimage!of relation}%
\emph{\(R\)-preimage} of \(A\) is \(\{ x ~|~ (x,y) \in R, ~ y \in A \}\).
The
\index{image!of function}%
\emph{\(f\)-image} of \(A\) is \(\{ f(x) ~|~ x \in A \}\).
The
\index{preimage!of function}%
\emph{\(f\)-preimage} of \(A\) is \(\{ x ~|~ f(x) \in A \}\).
For every function \(f\):
if \(A \subset B\) then \(F(A) \subseteq F(B)\)
where \(F(A) = \{ f(x) ~|~ x \in A \}\).
For every function \(f\):
if \(A \subset B\) then \(G(A) \subseteq G(B)\)
where \(G(A) = \{ x ~|~ f(x) \in A \}\).
The function type operator associates to the right: \(a \to b \to c = a \to (b \to c)\).
\paragraph{Currying}
\index{curry}%
\index{currying}%
\emph{Currying} turns every function that takes \(n\) arguments
to an equivalent function that takes one argument.
Currying \(f : (a_1,\ldots,a_n) \to b\) turns it into \(f' : a_1 \to \ldots \to a_n \to b\)
such that \(f(x_1,\ldots,x_n)
= f'~x_1~\ldots~x_n
= (((f'~x_1)~x_2)\ldots)~x_n\).
If it is clear from context that \(f\) can take 3 arguments (and not one 3-tuple argument),
then we abuse the notation \(f(a,b,c)\) to mean \(((f~a)~b)~c\).
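A small sketch of currying in Python (our own illustration; Python does not curry automatically):
\begin{verbatim}
def f(x, y, z):            # an uncurried 3-argument function
    return x + y * z

def f_curried(x):          # f' : x -> y -> z -> result
    return lambda y: lambda z: x + y * z

assert f(2, 3, 4) == f_curried(2)(3)(4) == 14
\end{verbatim}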
\section{Polynomial}
A \emph{polynomial of \(x\)} is an expression of the form \(\sum_{k=0}^n a_k x^k\)
where \(n\) is the \emph{degree} of the polynomial (assuming \(a_n \neq 0\)).
A
\index{polynomial}%
\emph{polynomial function of degree \(n\)} is a function of the form
\(f(x) = \sum_{k=0}^n a_k x^k\) where each \(a_k\) is constant.
A \emph{monic} polynomial function of degree \(n\) can also be written in the form
\(f(x) = \prod_{k=1}^n (x - r_k)\) where each \(r_k\) is a constant
(possibly complex) \emph{root} of the polynomial.
\section{Limit}
Informally, \(\lim_{x \to a} f(x) = b\)
iff \(f(x)\) approaches \(b\) as \(x\) approaches \(a\).
\section{Sequence}
\index{sequence}%
A \emph{sequence} \(x\) is a list of things \(x_0, x_1, x_2\), and so on.
An
\index{arithmetic sequence}%
\index{sequence!arithmetic}%
\emph{arithmetic sequence} is a sequence of the form \(x_k = x_0 + a k\).
A
\index{geometric sequence}%
\index{sequence!geometric}%
\emph{geometric sequence} is a sequence of the form \(x_k = r^k x_0\).
\section{Series}
\index{series}%
If \(x\) is a sequence, then \(\sum_k x_k\) is a \emph{series}.
The series \emph{converges} iff it evaluates to a (finite) number.
The
\index{sequence of partial sums}%
\index{partial sum}%
\emph{sequence of partial sums} of the sequence \(x\) is
the sequence \(y\)
where \(y_n = \sum_{k=0}^n x_k\).
The series is \(\lim_{n \to \infty} y_n\).
The
\index{geometric series}%
\index{series!geometric}%
\emph{geometric series} with ratio \(r\) is \(g(r) = \sum_{k=0}^\infty r^k\).
If \(\abs{r} < 1\) then \(g(r) = \frac{1}{1-r}\).
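A standard one-line justification, added for completeness: multiplying the partial sum by \(1-r\) telescopes,
\[
(1-r)\sum_{k=0}^{n} r^k = 1 - r^{n+1},
\quad\text{hence}\quad
\sum_{k=0}^{n} r^k = \frac{1-r^{n+1}}{1-r} \to \frac{1}{1-r}
\text{ as } n \to \infty \text{ whenever } \abs{r} < 1.
\]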
\chapter{Conclusion}\label{chapter:conclusion}
In this thesis, we formally introduced the problem of item combination in destination \glspl{rs} as a \gls{moop}. Empirical research into how current state-of-the-art algorithms used for solving \glspl{op} and general \glspl{moop} can be extended to efficiently combine items in destination \glspl{rs} was performed. A comparison of choice algorithms was made, and it was decided that an evolutionary algorithm is best-suited for our problem class. The \gls{nsga}-III, a dominance-based genetic algorithm, was chosen to obtain a sequence of candidate solutions that satisfy user constraints.
Six heuristic variations of the algorithm were implemented. Evaluations conducted through an online study and a user survey showed that the variant initializing the algorithm with feasible population members only and penalizing individuals based on their distance to a feasible area received the highest rating of all variants on all defined metrics. Pure randomness in the initialization of population members showed poor results. However, incorporating composite penalization of the individuals generated by random initialization improved performance significantly.
Similarity-based initialization, as used in \parencite{cbrecsys2014} and further investigated in this work, produced poor results. This initialization technique restricts the search space such that there are a limited number of regions to combine during the actual evolutionary process. Combining items for recommendation is heavily dependent on the defined constraints. Similarity-based initialization does not take the constraints into account. Hence, during the evolutionary step in which the actual constraints determine the fitness of an individual, the limited regions in the population perform poorly. This also explains the poor performance in terms of diversity shown by the travel package algorithm developed by Wörndl and Herzog; the algorithm also used an equivalent similarity metric.
In general, this thesis has shown the potential applicability of \glspl{ea} in efficiently combining items in destination \glspl{rs}.
\section{Limitations}
The results obtained on metrics such as accuracy, satisfaction, cohesiveness, and diversity in this work are above average but leave room for improvement through further research. Higher ratings from users in terms of accuracy and satisfaction are needed. The computational time of the algorithm is crucial for the usability of a \gls{rs}. Evaluations have shown that a considerable amount of time is needed to complete the evolutionary steps. Therefore, the current implementations of this thesis are not suitable for a synchronous \gls{rs}. However, the algorithm could be implemented in an asynchronous \gls{rs}. The underlying data for this research imposed limitations on the recommended travel budgets for a region. Some regions in the database had no data regarding the recommended budget. Consequently, we excluded those regions' budget recommendations from suggested region combinations.
\section{Future Research}
Research, implementation, and evaluations from this thesis can be considered as a baseline for future work on item combination for destination \glspl{rs}. We investigated the applicability of genetic \glspl{ea} to this topic, and we introduced possibly applicable algorithms, for example \gls{aoc}. A possible direction for future work could be to investigate this algorithm type and compare its results with the results obtained in this work.
Further research into \glspl{ea} for item combination in \glspl{rs} has to be done. With this work as a baseline, investigating the trade-offs between the number of objectives, maximum generation, and population size could be a possible research focus. Finding the right balance between the three parameters is vital to improving computational speed while ensuring optimal item combinations.
Additionally, results from the user survey showed that travelers with advanced knowledge differ significantly from travelers with less knowledge about destinations. This suggests a need for further user surveys with travel experts. Wörndl and Herzog had initially conducted their survey with travel experts only. Their survey method is indeed a better approach for evaluating results from an implemented algorithm.
An improved database of regions with the needed budgets and updated scores for different activities is vital and constitutes possible future work. Enhancing the region names with ISO 3166 codes would significantly increase the real-world usability of the database. For example, it would be possible to obtain the latitude and longitude coordinates of the regions if their ISO codes were added to the database. This would improve accuracy in routing and the cohesiveness of recommended regions.
\documentclass[% Options of scrbook
%
% Preamble fold starts here, unless 'documentclass' is added to the
% g:vimtex_fold_commands option.
%
% draft,
fontsize=12pt,
%smallheadings,
headings=big,
english,
paper=a4,
twoside,
open=right,
DIV=14,
BCOR=20mm,
headinclude=false,
footinclude=false,
mpinclude=false,
pagesize,
titlepage,
parskip=half,
headsepline,
chapterprefix=false,
appendixprefix=Appendix,
appendixwithprefixline=true,
bibliography=totoc,
toc=graduated,
numbers=noenddot,
]{scrbook}
%
% Fold commands (single)
%
\hypersetup{
...,
}
\tikzset{
testing,
tested,
}
%
% Fold commands (single_opt)
%
\usepackage[
...
]{test}
\usepackage[
backend=biber,
style=numeric-comp,
maxcitenames=99,
doi=false,
url=false,
giveninits=true,
]{biblatex}
%
% Fold commands (multi)
%
\renewcommand{\marginpar}{%
\marginnote%
}
\newcommand*{\StoreCiteField}[3]{%
\begingroup
\global\let\StoreCiteField@Result\relax
\citefield{#2}[StoreCiteField]{#3}%
\endgroup
\let#1\StoreCiteField@Result
}
\newenvironment{theo}[1][]{%
\stepcounter{theo}%
\ifstrempty{#1}{%
\mdfsetup{
frametitle={%
\tikz[baseline=(current bounding box.east),outer sep=0pt]%
\node[anchor=east,rectangle,fill=blue!20] {\strut Theorem~\thetheo};%
}%
}%
}%
{% else ifstrempty:
\mdfsetup{
frametitle={
\tikz[baseline=(current bounding box.east),outer sep=0pt]%
\node[anchor=east,rectangle,fill=blue!20]{\strut Theorem~\thetheo:~#1};}%
}%
}%
\mdfsetup{
innertopmargin=10pt,
linecolor=blue!20,
linewidth=2pt,topline=true,
frametitleaboveskip=\dimexpr-\ht\strutbox\relax,
}%
\begin{mdframed}[]\relax%
}{%
\end{mdframed}%
}
\begin{document}
Hello World
%
% Fold commands (single)
%
\pgfplotstableread[col sep=semicolon,trim cells]{
x ; y ; z ; type ; reference ; comment
0.01 ; 1.00 ; nan ; type1 ; ref-unspecified ;
0.02 ; 2.00 ; nan ; type2 ; ref-unspecified ;
0.03 ; 3.00 ; nan ; type3 ; ref-unspecified ;
}{\datatable}
%
% Fold commands (single)
%
\pgfplotstableread[col sep=semicolon,trim cells]{
x ; y ; z ; type ; reference ; comment
0.01 ; 1.00 ; nan ; type1 ; ref-unspecified ;
0.02 ; 2.00 ; nan ; type2 ; ref-unspecified ;
0.03 ; 3.00 ; nan ; type3 ; ref-unspecified ;
}
\datatable
%
% Test for cmd_addplot
%
\begin{tikzpicture}
\begin{axis}
\addplot+[error bars/.cd,x dir=both,x explicit] coordinates {
(0,0) +- (0.1,0)
(0.5,1) +- (0.4,0.2)
(1,2)
(2,5) +- (1,0.1)
};
\end{axis}
\end{tikzpicture}
\section{test 1}
\begin{equation}
f(x) = 1
\label{sec:test1}
\end{equation}
\subsection{test 1.1}
\begin{equation}
f(x) = 1
\label{sec:test1-longer label}
\end{equation}
\section{test 2}
% {{{ Testing markers
Folded stuff
% }}}
Testing markers %<<:
this fold was NOT recognized by vimtex before issue #1515
%:>>
%<<:
this fold worked before issue #1515
%:>>
\subsection{test 2.1}
\subsection{test 2.2}
\end{document}
% !TeX spellcheck = en_US
\chapter{Evaluation}
\section{Qualitative Evaluation on Real-World Datasets}
\label{sec:EvaluationOnDatasets} % Now, we can use "\autoref{sec:<my-label>}" to refer to this chapter
The face segmentation network was tested on two different datasets with real-life images: first on the Caltech Occluded Faces in the Wild (COFW) dataset by \cite{cofw} and second on the parts-labeled LFW dataset of the University of Massachusetts \cite{LFW_dataset}. Both datasets are designed to present faces in real-world conditions. The COFW dataset provides 29 landmarks and a bounding box for all 507 images. The original LFW dataset contains 13'000 images of 1'680 different subjects. Each face is labeled with the name of the depicted person. This database is actually meant to test facial recognition/verification algorithms. Nevertheless, we tested the segmentation of the FCN on the 500 images of the Parts-LFW validation set. For each image, there is a ground truth segmentation, which makes it possible to measure the quality of the FCN's segmentation quantitatively.
\subsection{Evaluation on the COFW Dataset}
Since only landmarks and bounding boxes are given for the COFW dataset, the segmentation had to be evaluated qualitatively. We tried reconstructing the graphic from Nirkin's \cite{nirkin2018_faceswap} GitHub repository `face\_segmentation' [\Cref{fig:chap2:myMatrix}]. In [\Cref{fig:chap2:myMatrix_EGGER}] the same 18 images are segmented by the iterative method of Egger et al. This algorithm outputs both a set of 3DMM parameters and a segmentation. [\Cref{fig:chap2:COFW_Fits}] shows the fits of 9 images of the COFW dataset where both segmentations are used. A matrix of the other 9 fits can be found in the Appendix [\Cref{appendix:COFW}].
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{Figures/chap2/myMatrix.jpg}
\caption{18 images of the COFW Dataset overlaid with the FCN output (in red). The segmentation results are very similar to those on \href{https://github.com/YuvalNirkin/face_segmentation}{Nirkin's github page}.}
\label{fig:chap2:myMatrix}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{Figures/chap2/myMatrix_EGGER.jpg}
\caption{The same target images as in [Figure \ref{fig:chap2:myMatrix}], but this time with the (final) segmentation of the occlusion-aware method of Egger et al.\ \cite{egger_paper}. Often the eyes are not segmented, or the segmentation includes skin other than the face (e.g.\ hands).}
\label{fig:chap2:myMatrix_EGGER}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{Figures/chap2/COFW_Fits.png}
\caption{Tuples of facial images. In every tuple, the first image shows the fit with the mask of the algorithm of Egger et al.\ itself [Figure \ref{fig:chap2:myMatrix_EGGER}]. The second shows the fit with the FCN masks, which are depicted in [Figure \ref{fig:chap2:myMatrix}].}
\label{fig:chap2:COFW_Fits}
\end{figure}
\subsection{Evaluation on the Parts-LFW Dataset}
In the Parts-LFW dataset, we had a ground-truth mask for every image. The mask distinguishes between hair, skin, and background. We looped through all the provided ground-truth masks and overlaid them with the segmentations of the FCN, so that the labels of both masks can be compared. The results are quite impressive. The FCN performs very well in segmenting only pixels that belong to the face. On average over the 500 images of the Parts-LFW dataset, there are $98.5\%$ right non-segmentations (only 1.5\% false positives). On the other hand, only $85.4\%$ of all the pixels which belong to the face are segmented as face (14.6\% false negatives), which is not a good, but an acceptable result. [Figures \ref{fig:drew:sf1} and \ref{fig:drew:sf2}] depict such a face image and its mask. Unfortunately, the given masks make no distinction between the face and other parts of the body; everything is labeled as skin [Sub-figures \ref{fig:drew:sf1} and \ref{fig:drew:sf2}]. More problematic, however, is that some faces have beards. The FCN of \cite{nirkin2018_faceswap} includes facial hair in its segmentation, whereas facial hair is excluded from the provided labels. Therefore, we had to manually remove these images [Sub-figures \ref{fig:ali:sf3} and \ref{fig:ali:sf4}]. To reduce the effort, we took the Parts-LFW validation set containing 500 images. After removing the ones with a beard or mustache, we were left with 447 images. The results are summarised in [\Cref{fig:Parts-LFW}].
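The per-image rates were obtained by overlaying the two masks and counting pixels. A sketch of this computation (using NumPy; the masks are assumed to be boolean arrays of equal shape, and the variable names are ours):
\begin{verbatim}
import numpy as np

def segmentation_rates(pred, truth):
    # pred:  boolean FCN mask (True = pixel segmented as face).
    # truth: boolean ground-truth mask (True = face pixel).
    tp = np.sum(pred & truth)    # right segmentations
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    tn = np.sum(~pred & ~truth)  # right non-segmentations
    return {"right segmentations":     tp / (tp + fn),
            "false negatives":         fn / (tp + fn),
            "right non-segmentations": tn / (tn + fp),
            "false positives":         fp / (tn + fp)}
\end{verbatim}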
\begin{figure}[H]
\centering
\subbottom[A facial image with the segmentation of the FCN highlighted in red.]{\includegraphics[width=0.4\textwidth]{Figures/chap2/drew/drew_original.jpg}\label{fig:drew:sf1}}
\subbottom[The provided ground-truth segmentation of the image \ref{fig:drew:sf1} overlaid with the mask of the FCN (red).]{\includegraphics[width=0.4\textwidth]{Figures/chap2/drew/drew_segments.png}\label{fig:drew:sf2}}
\subbottom[A facial image with the segmentation of the FCN highlighted in red.]{\includegraphics[width=0.4\textwidth]{Figures/chap2/ali/ali_original.jpg}\label{fig:ali:sf3}}
\subbottom[The provided ground-truth segmentation of the image \ref{fig:ali:sf3} overlaid with the mask of the FCN (red).]{\includegraphics[width=0.4\textwidth]{Figures/chap2/ali/ali_segments.png}\label{fig:ali:sf4}}
\caption{In each row, the left-hand image is an image of the Parts-LFW dataset overlaid with the segmentation of the FCN (red), and the right-hand image shows the corresponding ground-truth segmentation overlaid with the same mask. The skin region is coloured green, hair regions are blue, and the background is black. The mixed colours (yellow and magenta) arise because the red mask of the FCN is also present in these plots.}
\label{fig:tm}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{l|l} \hline
false-positives (hair) & 7.04\%\\ \hline
false-positives (background) & 0.68\%\\ \hline
false-negatives (background) & 14.13\%\\ \hline
right-segmentations & 85.87\%\\ \hline
right non-segmentations & 99.32\% \\ \hline
\end{tabular}
\end{center}
\caption{The averages over the 447 images of the Parts-LFW evaluation set of \cite{LFW_dataset}. The FCN recognises (almost) only pixels which belong to the face (few false positives). By reducing from 500 to 447 images, we were able to lower this number even further. Unfortunately, there are many false negatives (skin pixels labeled as background).}
\label{fig:Parts-LFW}
\end{figure}
%\vspace{.5cm}
\FloatBarrier
\section{Evaluation on Synthetic Data}
\subsection{Experimental Setup}
In order to evaluate the FCN on synthetic data, we used the parametric face image generator of Kortylewski et al.\ \cite{parametric} to produce images of a random face in a given pose [\Cref{fig:syntheticData_samples}]. We extended the software so that it now renders occlusions over the face. Furthermore, we changed the parametric face image generator so that it now generates a ground-truth mask, which classifies every pixel either as part of the face or as non-face. In this experiment, the estimated mask of the FCN is compared to the ground-truth mask of the parametric face image generator.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Figures/chap2/syntheticData_samples.png}
\caption{Five examples of the synthetic face images. The same face is shown with yaw angles $-45$\textdegree, $-25$\textdegree, $0$\textdegree, $25$\textdegree, and $45$\textdegree. The type of occlusion is chosen randomly and placed at an arbitrary orientation and position.}
\label{fig:syntheticData_samples}
\end{figure}
\subsection{Data for the Experiment}
Nirkin et al.\ \cite{nirkin2018_faceswap} claim that both the face itself and the context of the face play an important role in the outcome of the segmentation. To mitigate this effect and reduce the impact of outliers, we repeat the experiment 5 times, each time with a different face and a different background image. The number of images used in a single run of the experiment varies but is similar to the number of grid cells in the following plots that visualise the experiments: 101 images were used to measure the dependence on one rotation alone (top row of [\Cref{fig:evaluation_angles}]), 441 images were used in the bottom row of [\Cref{fig:evaluation_angles}], 340 images per plot were used to plot the rotations versus the degree of occlusion [\Cref{fig:occVal40}], and 380 were used for the plots in [\Cref{fig:occVal90}].
\subsection{Dependence on the Euler Angles}
Because we are now able to create synthetic face images in any desired pose, we first want to measure the segmentation accuracy of the FCN for the rotations yaw, roll, and pitch. We evaluate each rotation by itself and every possible combination of two rotations in order to establish a hierarchy among the rotations [\Cref{fig:evaluation_angles}].\\
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Figures/evaluation_angles.png}
\caption{In the plots on the top row, we see the segmentation accuracy in percent (on the y-axis) for every single image (with face angles from $-50$\textdegree to $50$\textdegree on the x-axis). The point cloud is approximated by a quadratic function via a least squares fit (red curve / $f(x)$). The first parameter of this function determines the opening angle ($p_1$). The greater the absolute value of this parameter, the more sensitive the FCN is to the respective rotation. In the bottom row, the colours indicate the segmentation accuracy. The brighter the colour, the better the segmentation. In every plot, there is a cluster of high-accuracy segmentations centered at the origin. The rotation on whose axis the cluster has the lower variance is the more important of the two. A rotation is called `important' when a small change of this rotation leads to a failure of the FCN.}
\label{fig:evaluation_angles}
\end{figure}
From the graphs of [\Cref{fig:evaluation_angles}], we can conclude that the roll rotation is the most relevant for the FCN, that the pitch rotation is less important, and that the accuracy of the FCN is still good even with high yaw angles (yaw is the least important rotation).
\subsection{Random Boxes as Occlusions}
The 340 images for the plots in [\Cref{fig:occVal40}] consist of pictures with 20 different occlusion levels, where one rotation is in the range from $-40$\textdegree to $40$\textdegree and the other two rotations are set to $0$.\\
\\
In the given table of [\Cref{fig:angle_table}], all provided images show a face of which 20\% of the pixels are occluded by a randomly colored box. We can verify visually that, firstly, a high yaw rotation has little effect despite the occluding box. Secondly, for both $-40$\textdegree and $40$\textdegree of roll rotation, the result is not satisfying. That is interesting because no matter how big the roll rotation is, the information (the face) stays the same. Further, in the third column, the segmentation with a negative pitch angle is much worse than with a positive one. This supports our assumption that the roll rotation plays a big role, followed by the (asymmetric) pitch rotation. The yaw rotation is less important because even at large angles a big part of the face is still segmented.
% The source for this table was this post: https://stackoverflow.com/questions/2771856/centering-text-horizontally-and-vertically-in-latex
% To add padding for the cell contents: https://tex.stackexchange.com/questions/31672/column-and-row-padding-in-tables
\begin{figure}[H]
\begin{center}
\newcolumntype{C}{>{\centering\arraybackslash} m{2cm} } %# New column type
\begin{tabular}{m{1cm}|SC|SC|SC}
& yaw & roll & pitch\\ \hline
-40 & \subfloat{\includegraphics[width=0.1\textwidth]{Figures/-40_0_0_occVal_20.png}} &
\subfloat{\includegraphics[width=0.1\textwidth]{Figures/0_0_-40_occVal_20.png}} &
\subfloat{\includegraphics[width=0.1\textwidth]{Figures/0_-40_0_occVal_20.png}} \\ \hline
40 & \subfloat{\includegraphics[width=0.1\textwidth]{Figures/40_0_0_occVal_20.png}} &
\subfloat{\includegraphics[width=0.1\textwidth]{Figures/0_0_40_occVal_20.png}} &
\subfloat{\includegraphics[width=0.1\textwidth]{Figures/0_40_0_occVal_20.png}} \\
\end{tabular}
\end{center}
\caption{Based on these images, we can see that the roll rotation is the most sensitive, followed by the pitch rotation, where the segmentation works better at positive angles than at negative ones. Detection is most stable under the yaw rotation, which has the least influence on the segmentation.}
\label{fig:angle_table}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Figures/occVal_angles.png}
\caption{The colour of each grid cell indicates the accuracy of the segmentation of the FCN on a set of faces rotated by the corresponding angle and occluded with a rectangle hiding the corresponding fraction of the face (on the x-axis). The brighter the colour, the better the segmentation. In each plot, we would expect a triangle pointing to the right, meaning that the combination of a large angle and a large occlusion makes the face even harder to segment.}
\label{fig:occVal40}
\end{figure}
\pagebreak
We see that the segmentation is very sensitive to the roll rotation and that the FCN is not trained to segment faces that are tilted. Surprisingly, the yaw rotation plays a subordinate role here. The rightmost plot of [\Cref{fig:occVal40}] tells us that the sign of the pitch rotation plays a significant role, because the plot is asymmetric. Since we are not able to determine at exactly which angles the FCN begins to fail, we repeated the experiment with a larger angle range [\Cref{fig:occVal90}].
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Figures/occVal_angles_90.png}
\caption{This plot shows the outcome of an experiment similar to the one in Figure \ref{fig:occVal40}, at a lower resolution. All the angles (yaw, pitch, and roll) range from $-90$\textdegree to $90$\textdegree in steps of 10\textdegree. The scale of the ``\%-occlusion'' axis stays the same as in Figure \ref{fig:occVal40}. We can clearly see the limits of the FCN even with an occlusion of only 2\%. Particularly interesting is the sharp transition from good segmentations to bad segmentations in the two rightmost plots.}
\label{fig:occVal90}
\end{figure}
Although exact rectangles are rare as occlusions in practice, boxes are very simple to generate and give us full control over the occluded area; they are the simplest way to hide a given fraction of the face region. | {
"alphanum_fraction": 0.7737606135,
"avg_line_length": 108.1777777778,
"ext": "tex",
"hexsha": "7e529567c5fa0795046d6b2de433076c6eb0ee69",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4b5d4b0608db54f92ae473819045fd8c5f1146b4",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "eliasarnold/latex-bscThesis",
"max_forks_repo_path": "Chapters/Chapter2.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4b5d4b0608db54f92ae473819045fd8c5f1146b4",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "eliasarnold/latex-bscThesis",
"max_issues_repo_path": "Chapters/Chapter2.tex",
"max_line_length": 1459,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "4b5d4b0608db54f92ae473819045fd8c5f1146b4",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Arneli/latex-bscThesis",
"max_stars_repo_path": "Chapters/Chapter2.tex",
"max_stars_repo_stars_event_max_datetime": "2021-05-04T21:22:05.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-05-04T21:22:05.000Z",
"num_tokens": 3826,
"size": 14604
} |
\chapter{Reflection}
In this chapter, the improvements that could be made in the future are listed and described. We also reflect on how the program could be used in other contexts in order to explore new areas.
\section{Future Improvements}
The improvements mentioned in \cref{DelimitationSection} should all be included in a full version of the system. These changes should be accompanied by the changes presented in the \nameref{UsabilityTestSection}.
More changes were thought of while developing the system and writing the documentation; these changes and improvements are described in this section.
An improvement to the system would be to include a budget page where the user could manage his or her spending for the month. It should be possible to set a budget for any month, and the program will then generate a meal plan that fits the budget. The shopping list should display the total cost of the items present on the list. This change would be beneficial, as some users have applications on their smartphones or computers that help them with their budget. By handling part of the user's budget in the program, it becomes more of a complete package for the user.
Something else that was considered is the possibility to pick or exclude stores to shop in. When the user is browsing their shopping list, they should be able to get recommendations on where to buy the different products. Some of the shops might lie on the user's route home or to their children's school. This feature could be expanded to allow the user to set a maximum distance for how far they would like to travel when shopping. If they set a distance of four kilometres, the program will only look for shops within a four-kilometre radius of their home. If the needed ingredients on the shopping list cannot be found in those shops, the user should be informed of the problem and recommended other shops that lie outside the radius.
Another feature that was considered is \textit{smart shopping}. The system should look for sales and try to incorporate them into the shopping list, so the user would only buy items when they are on sale or needed soon. Normally, the shopping list would only show the items that are within the \textit{days to shop for} time limit, but if an item that is needed later comes on sale, it should also be added to the list. This goes along with the two previous improvements, as it becomes easier for the system to stay within the budget, and it allows the system to only look for sales in certain shops.
\section{Alternative Project Focus}
The focus of the project could shift to larger groups of people instead of families. A kindergarten could implement the system in order to track what the children should have for lunch. Some of the children could suffer from milk allergy or diabetes and would therefore need special diets. The program should allow the pedagogues to make appropriate servings of lunch for the children, while helping them come up with recipes for alternative dishes. This would make the administrative work easier and minimise food waste in a place with many individuals. In order to build such a system, new recipes would have to be added, as the current system is focused on dinner recipes rather than lunch recipes. The recipes should also be aimed at children and should promote healthy dishes.
The program could also be redesigned as a website for a catering company. The customers would be people ordering for their parties. They would do so by going to the website and choosing the dishes they would like to order, together with the expected guest count. The catering company would then receive an order with the list of recipes they would have to cook, together with an auto-generated shopping list and price. The price would be based on the shopping list and other factors, such as how time-consuming the dishes are to cook. This could double as an administrative program that shows the arrangements the company has with its customers, helping it save time by automatically creating shopping lists and scheduling its calendar. The food waste aspect would not be as important, as the amount of food sold would probably not be affected by using the system. However, it might help the company avoid buying unwanted ingredients, as the system would keep track of what is currently stocked. The main obstacle in building this system would be converting the entire program into a website, even though some of the current features would not be ported. The interface and navigation would need to be changed entirely, as there would be a separate customer view for the customers and an administrative view for the staff of the company. | {
"alphanum_fraction": 0.8096330275,
"avg_line_length": 266.4444444444,
"ext": "tex",
"hexsha": "b2291950b32b9a171014f2ffc4cce8d75dad8c74",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "387a7c769cdda4913b81838bc8feffc9fbcafcc8",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "amatt13/FoodPlanner-Report",
"max_forks_repo_path": "Summary/futureImprovements.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "387a7c769cdda4913b81838bc8feffc9fbcafcc8",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "amatt13/FoodPlanner-Report",
"max_issues_repo_path": "Summary/futureImprovements.tex",
"max_line_length": 1406,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "387a7c769cdda4913b81838bc8feffc9fbcafcc8",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "amatt13/FoodPlanner-Report",
"max_stars_repo_path": "Summary/futureImprovements.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 930,
"size": 4796
} |
\section{Spherical Bessel Functions}
\label{sphericalbess}
Matlab does not have built-in routines for spherical Bessel functions. Below are several variants of spherical Bessel functions and derivatives encountered in scattering problems.
%Log derivatives.
%\bgroup
%\def\arraystretch{3}
%\begin{table}[htdp]
%\caption{default}
%\begin{center}
%\begin{tabular}{|c|c|c|}
%\hline
%Function & Equation & Comments \\
%\hline
%\texttt{jinc(x)}& $\textrm{jinc}(x) = \dfrac{J_1(x)}{x}$ & $\lim_{x\rightarrow0} \textrm{jinc}(x) = \dfrac{1}{2}$ \\
%\hline
%\texttt{sbesselj(n,x)} & $j_n(x) = \sqrt{\dfrac{\pi}{2x}}J_{n+1/2}(x)$ & $\lim_{x\rightarrow0}j_n(x) = \left\{ \begin{array}{c} 1, \quad n=0 \\ 0, \quad n \ge 1\end{array} \right.$ \\
%\hline
%\end{tabular}
%\end{center}
%\label{default}
%\end{table}
%\egroup
\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\subsection{Sombrero function: $\textrm{jinc}(x)$}
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}
The $\textrm{jinc}(x)$ function, or Sombrero function, is defined as
\begin{equation}
\textrm{jinc}(x) = \dfrac{J_1(x)}{x}
\end{equation}
%
%\noindent where
%\begin{equation}
%\lim_{z\rightarrow0} \textrm{jinc}(z) = \dfrac{1}{2}
%\end{equation}
\noindent where small arguments are computed with the Taylor series
\eq{ \textrm{jinc}(x) \approx \dfrac{1}{2} - \dfrac{x^2}{16} + O(x^4)}
{\footnotesize
\VerbatimInput{\code/BesselFunctions/jinc.m}
}
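For reference, a minimal sketch of what such a routine could look like is given below (consistent with the description above; the shipped \texttt{jinc.m} may differ in detail):
{\footnotesize
\begin{verbatim}
function y = jinc(x)
% JINC  Sombrero function J1(x)./x, with a series expansion near 0.
y = zeros(size(x));
small = abs(x) < 1e-3;             % switch to the Taylor series
y(small)  = 1/2 - x(small).^2/16;  % jinc(x) ~ 1/2 - x^2/16 + O(x^4)
y(~small) = besselj(1, x(~small))./x(~small);
end
\end{verbatim}
}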
%This is slow in Matlab and we'd like to avoid the division. Start with the series representation for the Bessel function of complex argument
%\eq{J_{\nu}(z) = \left(\dfrac{z}{2}\right)^\nu \sum_{k=0}^{\infty} \dfrac{ (-1)^k \left( \dfrac{z}{2}\right)^{2k} }{ k! \Gamma(\nu + k + 1)}}
%
%Evaluating $\nu =1$, and dividing by $z$,
%\eq{\textrm{jinc}(z) = \dfrac{1}{2} \sum_{k=0}^{\infty}(-1)^k a_k }
%\eq{a_k = \dfrac{ \left( \dfrac{z}{2}\right)^{2k} }{ k! \Gamma(k + 2)} }
%
%The log transform is used to avoid multiplying and dividing by large and small numbers, facilitate fast recursive of the factorials, and take care of the limit when $z = 0$.
%\eq{a_k = e^{\ln a_k}}
%
%where, using $\Gamma(n) = (n-1)!$,
%\ea{\ln a_k &=& 2k (\ln z - \ln 2) - \ln k! - \ln (k + 1)! }
%
%Evaluating at $k+1$, separating a factor of $\ln a_k$, and shifting the index down by 1, we get the recursion for the $k$th term of the series
%\ea{\ln a_{0} &=& 0 \\
%\ln a_{k} &=& \ln a_{k-1} + 2 (\ln z - \ln 2) - \ln k - \ln (k + 1) }
%
%%\ea{\ln a_{k+1} &=& 2(k+1) (\ln z - \ln 2) - \ln (k+1)! - \ln (k + 2)! \\
%%\ &=& 2k (\ln z - \ln 2) + 2 (\ln z - \ln 2) - (\ln k! + \ln (k+1)) - ( \ln (k + 1)! + \ln (k + 2) ) \\
%%\ &=& \ln a_k + 2 (\ln z - \ln 2) - \ln (k+1) - \ln (k + 2) }
%
%The code is as follows.
\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\subsection{Spherical Bessel function: $j_n(x)$}
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}
The spherical Bessel function is
\begin{equation}
j_n(x) = \sqrt{\dfrac{\pi}{2x}}J_{n+1/2}(x)
\end{equation}
\noindent where
\begin{equation}
\lim_{x\rightarrow0}j_n(x) = \left\{ \begin{array}{c} 1 - \dfrac{x^2}{6} + O(x^4), \quad n=0 \\ 0, \quad n \ge 1\end{array} \right.
\end{equation}
The routine \texttt{sbesselj} returns the spherical Bessel function for matching arrays of $n$ and $x$ in the format of \texttt{besselj}. It uses the Taylor series near $x = 0$ for $n=0$, and substitutes 0 when $x=0$ for $n \ge 1$.
{\footnotesize
\VerbatimInput{\code/BesselFunctions/sbesselj.m}
}
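A minimal sketch consistent with this description (the actual \texttt{sbesselj.m} may differ):
{\footnotesize
\begin{verbatim}
function j = sbesselj(n, x)
% SBESSELJ  Spherical Bessel function j_n(x) = sqrt(pi/(2x))*J_{n+1/2}(x)
% for matching arrays n and x; the x = 0 limits are patched explicitly.
j = sqrt(pi./(2*x)).*besselj(n + 1/2, x);
zero = (x == 0);
j(zero & n == 0) = 1;   % j_0(0) = 1
j(zero & n >= 1) = 0;   % j_n(0) = 0 for n >= 1
end
\end{verbatim}
}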
\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\subsection{Spherical Bessel function: $j_n(x)/x$}
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}
This variant of the spherical Bessel function has $j_n(x)$ divided by $x$:
\begin{equation}
\dfrac{j_n(x)}{x} = \dfrac{1}{x^{3/2}}\sqrt{\dfrac{\pi}{2}}J_{n+1/2}(x)
\end{equation}
\noindent where
\begin{equation}
\lim_{x\rightarrow0}\dfrac{j_n(x)}{x} = \left\{ \begin{array}{c} \infty, \quad n=0 \\ \dfrac{1}{3} - \dfrac{x^2}{30} + O(x^4), \quad n = 1 \\ 0, \quad n \ge 2\end{array} \right.
\end{equation}
This is computed in the routine \texttt{sbesselj2}, which takes the same inputs as \texttt{sbesselj}. We let the routine return \texttt{Inf} for $x=0$ when $n=0$. This variant is found in electromagnetic scattering, where the monopole term ($n=0$) is usually not needed.
{\footnotesize
\VerbatimInput{\code/BesselFunctions/sbesselj2.m}
}
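A minimal sketch of such a routine (the actual \texttt{sbesselj2.m} may differ):
{\footnotesize
\begin{verbatim}
function j = sbesselj2(n, x)
% SBESSELJ2  j_n(x)./x with the x -> 0 limits handled explicitly.
j = sbesselj(n, x)./x;
zero = (x == 0);
j(zero & n == 0) = Inf;   % the monopole term diverges at the origin
j(zero & n == 1) = 1/3;   % lim_{x->0} j_1(x)/x = 1/3
j(zero & n >= 2) = 0;
end
\end{verbatim}
}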
\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\subsection{Spherical Bessel function derivative: $j_n'(x)$}
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}
The derivative of the spherical Bessel function with respect to $x$ is given by the recurrence relation
\begin{equation}
j_n'(x) = -j_{n+1}(x) + \dfrac{n}{x}j_n(x)
\end{equation}
This is computed in \texttt{sbesseljp} using \texttt{sbesselj} and \texttt{sbesselj2}, which take care of $x=0$.
{\footnotesize
\VerbatimInput{\code/BesselFunctions/sbesseljp.m}
}
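A minimal sketch of such a routine (the actual \texttt{sbesseljp.m} may differ):
{\footnotesize
\begin{verbatim}
function jp = sbesseljp(n, x)
% SBESSELJP  j_n'(x) = -j_{n+1}(x) + n*j_n(x)/x, with the j_n(x)/x
% factor delegated to sbesselj2 so that x = 0 is handled there.
jp = -sbesselj(n + 1, x) + n.*sbesselj2(n, x);
end
\end{verbatim}
}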
\clearpage
\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\subsection{Spherical Bessel function derivative: $[xj_n(x)]'$}
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}
This variant occurs often enough in scattering to have its own function, \texttt{sbesseljp2},
\ea{[xj_n(x)]' &=& j_n(x) + x j'_n(x) \\
\ &=& (1+n)j_n(x) - x j_{n+1}(x)}
This variant is found in the Mie scattering solution for spheres.
{\footnotesize
\VerbatimInput{\code/BesselFunctions/sbesseljp2.m}
}
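A minimal sketch of such a routine (the actual \texttt{sbesseljp2.m} may differ):
{\footnotesize
\begin{verbatim}
function jp2 = sbesseljp2(n, x)
% SBESSELJP2  [x*j_n(x)]' = (1+n)*j_n(x) - x*j_{n+1}(x).
jp2 = (1 + n).*sbesselj(n, x) - x.*sbesselj(n + 1, x);
end
\end{verbatim}
}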
\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\subsection{Spherical Hankel function: $h_n^{(1)}(x)$}
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}
The spherical Hankel function is given by
\begin{equation}
h_n^{(1)}(x) = \sqrt{\dfrac{\pi}{2x}}H^{(1)}_{n+1/2}(x)
\end{equation}
This is always irregular at the origin, and is computed in the routine \texttt{sbesselh}.
{\footnotesize
\VerbatimInput{\code/BesselFunctions/sbesselh.m}
}
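A minimal sketch of such a routine (the actual \texttt{sbesselh.m} may differ):
{\footnotesize
\begin{verbatim}
function h = sbesselh(n, x)
% SBESSELH  Spherical Hankel function of the first kind.
% Irregular at the origin, so no special handling of x = 0 is attempted.
h = sqrt(pi./(2*x)).*besselh(n + 1/2, 1, x);
end
\end{verbatim}
}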
\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\subsection{Spherical Hankel function derivative: ${h'}_n^{(1)}(x)$}
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}
The spherical Hankel derivative is computed in the routine \texttt{sbesselhp}
\begin{equation}
{h'}_n^{(1)}(x) = -h_{n+1}^{(1)}(x) + \dfrac{n}{x}h_n^{(1)}(x)
\end{equation}
{\footnotesize
\VerbatimInput{\code/BesselFunctions/sbesselhp.m}
}
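A minimal sketch of such a routine (the actual \texttt{sbesselhp.m} may differ):
{\footnotesize
\begin{verbatim}
function hp = sbesselhp(n, x)
% SBESSELHP  h_n'(x) = -h_{n+1}(x) + n*h_n(x)./x (first kind).
hp = -sbesselh(n + 1, x) + (n./x).*sbesselh(n, x);
end
\end{verbatim}
}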
\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\subsection{Spherical Hankel function derivative: $[xh_n^{(1)}(x)]'$}
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}
Similar to before, this variant is computed in the routine \texttt{sbesselhp2}
\ea{[xh_n^{(1)}(x)]' &=& h_n^{(1)}(x) + x {h'}_n^{(1)}(x) \\
\ &=& (1+n)h_n^{(1)}(x) - x h_{n+1}^{(1)}(x)}
This variant is found in the Mie scattering solution for spheres.
{\footnotesize
\VerbatimInput{\code/BesselFunctions/sbesselhp2.m}
}
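A minimal sketch of such a routine (the actual \texttt{sbesselhp2.m} may differ):
{\footnotesize
\begin{verbatim}
function hp2 = sbesselhp2(n, x)
% SBESSELHP2  [x*h_n(x)]' = (1+n)*h_n(x) - x*h_{n+1}(x) (first kind).
hp2 = (1 + n).*sbesselh(n, x) - x.*sbesselh(n + 1, x);
end
\end{verbatim}
}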
| {
"alphanum_fraction": 0.6732190364,
"avg_line_length": 35.6105263158,
"ext": "tex",
"hexsha": "df27eca6f53a6426fac3a4bf91a76bf9e812ff3d",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2022-02-08T19:58:04.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-08-29T13:28:44.000Z",
"max_forks_repo_head_hexsha": "caeb9540693185e000e08d826bc2ccabb6aa82bd",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "ruzakb/Waveport",
"max_forks_repo_path": "Tex/Utilities/BesselFunctions.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "caeb9540693185e000e08d826bc2ccabb6aa82bd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "ruzakb/Waveport",
"max_issues_repo_path": "Tex/Utilities/BesselFunctions.tex",
"max_line_length": 270,
"max_stars_count": 12,
"max_stars_repo_head_hexsha": "caeb9540693185e000e08d826bc2ccabb6aa82bd",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "nasa-jpl/Waveport",
"max_stars_repo_path": "Tex/Utilities/BesselFunctions.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-18T20:09:47.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-08-29T13:29:21.000Z",
"num_tokens": 2510,
"size": 6766
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% a0poster Landscape Poster
% LaTeX Template
% Version 1.0 (22/06/13)
%
% The a0poster class was created by:
% Gerlinde Kettl and Matthias Weiser ([email protected])
%
% This template has been downloaded from:
% http://www.LaTeXTemplates.com
%
% License:
% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%----------------------------------------------------------------------------------------
% PACKAGES AND OTHER DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------
\documentclass[a0,landscape]{a0poster}
\usepackage{multicol} % This is so we can have multiple columns of text side-by-side
\columnsep=100pt % This is the amount of white space between the columns in the poster
\columnseprule=3pt % This is the thickness of the black line between the columns in the poster
\usepackage[svgnames]{xcolor} % Specify colors by their 'svgnames', for a full list of all colors available see here: http://www.latextemplates.com/svgnames-colors
\usepackage{times} % Use the times font
%\usepackage{palatino} % Uncomment to use the Palatino font
\usepackage{subcaption} % for subfigures environments
\usepackage{graphicx} % Required for including images
\graphicspath{{figures/}} % Location of the graphics files
\usepackage{booktabs} % Top and bottom rules for table
\usepackage[font=small,labelfont=bf]{caption} % Required for specifying captions to tables and figures
\usepackage{amsfonts, amsmath, amsthm, amssymb} % For math fonts, symbols and environments
\usepackage{wrapfig} % Allows wrapping text around tables and figures
\begin{document}
%----------------------------------------------------------------------------------------
% POSTER HEADER
%----------------------------------------------------------------------------------------
% The header is divided into three boxes:
% The first is 55% wide and houses the title, subtitle, names and university/organization
% The second is 25% wide and houses contact information
% The third is 19% wide and houses a logo for your university/organization or a photo of you
% The widths of these boxes can be easily edited to accommodate your content as you see fit
\begin{minipage}[b]{0.55\linewidth}
\veryHuge \color{NavyBlue} \textbf{Attentive Sequence-to-Sequence Learning for \\
Diacritic Restoration of Yor{\`u}b{\'a} Language Text} \color{Black}\\ % Title
\huge \textbf{Iroro Fred \d{\`O}n\d{\`o}m\d{\`e} Orife}\\ % Author(s)
\huge Niger-Volta Language Technologies Institute\\ % University/organization
\end{minipage}
%
\begin{minipage}[b]{0.25\linewidth}
\includegraphics[width=1cm]{logo.png} % Logo or a photo of you, adjust its dimensions here
\end{minipage}
%
\begin{minipage}[b]{0.25\linewidth}
\color{DarkSlateGray}\Large \textbf{Contact Information:}\\
\texttt{github.com/niger-volta-LTI} \\
\texttt{[email protected]}\\ % Email address
\end{minipage}
%
%\begin{minipage}[b]{0.25\linewidth}
%\includegraphics[width=15cm]{logo.png} % Logo or a photo of you, adjust its dimensions here
%\end{minipage}
\vspace{1cm} % A bit of extra whitespace between the header and poster content
%----------------------------------------------------------------------------------------
\begin{multicols}{4} % This is how many columns your poster will be broken into, a poster with many figures may benefit from less columns whereas a text-heavy poster benefits from more
%----------------------------------------------------------------------------------------
% ABSTRACT
%----------------------------------------------------------------------------------------
\color{Navy} % Navy color for the abstract
\begin{abstract}
Yor{\`u}b{\'a} is a widely spoken West African language with a writing system rich in tonal and orthographic diacritics. With very few exceptions, diacritics are omitted from electronic texts, due to limited device and application support. Diacritics provide morphological information, are crucial for lexical disambiguation, pronunciation and are vital for any Yor{\`u}b{\'a} text-to-speech (TTS), automatic speech recognition (ASR) and natural language processing (NLP) tasks. Reframing Automatic Diacritic Restoration (ADR) as a machine translation task, we experiment with two different attentive Sequence-to-Sequence neural models to process undiacritized text. On our evaluation dataset, this approach produces diacritization error rates of less than 5\%. We have released pre-trained models, datasets and source-code as an open-source project to advance efforts on Yor{\`u}b{\'a} language technology.
\end{abstract}
%----------------------------------------------------------------------------------------
% INTRODUCTION
%----------------------------------------------------------------------------------------
\color{SaddleBrown} % SaddleBrown color for the introduction
\section*{Introduction}
Yor{\`u}b{\'a} is a tonal language spoken by more than 40 million people in the countries of Nigeria, Benin and Togo in West Africa. There are an additional million speakers in the African diaspora, making it the most widely spoken African language outside Africa.
On modern computing platforms, the vast majority of Yor{\`u}b{\'a} text is written in plain ASCII, without diacritics. This presents grave problems for usage of the standard orthography via electronic media, which has implications for the unambiguous pronunciation of Yor{\`u}b{\'a}'s lexical and grammatical tones by both human speakers and TTS systems. Improper handling of diacritics also degrades the performance of document retrieval via search engines and frustrates every kind of Natural Language Processing (NLP) task, notably machine translation to and from Yor{\`u}b{\'a}. Finally, correct diacritics are mandatory in reference transcripts for any Automatic Speech Recognition (ASR) task.
\begin{enumerate}
\item We propose two different NMT approaches, using soft-attention and self-attention sequence-to-sequence (seq2seq) models \cite{bahdanau2014neural, vaswani2017attention}, to rectify undiacritized Yor{\`u}b{\'a} text.
\item Datasets, pre-trained models and source code are an open-source project at \textbf{\texttt{github.com/niger-volta-LTI/yoruba-adr}}
\end{enumerate}
\color{DarkSlateGray} % DarkSlateGray color for the rest of the content
%----------------------------------------------------------------------------------------
% MATERIALS AND METHODS
%----------------------------------------------------------------------------------------
\section*{Ambiguity in Undiacritized Yor{\`u}b{\'a} text }
Automatic Diacritic Restoration (ADR), which goes by other names such as Unicodification or deASCIIfication, is a process which attempts to resolve the ambiguity present in undiacritized text. Undiacritized Yor{\`u}b{\'a} text has a high degree of ambiguity. Adegbola et al. state that for ADR the ``prevailing error factor is the number of valid alternative arrangements of the diacritical marks that can be applied to the vowels and syllabic nasals within the words''.
For our training corpus of 1M words, we quantify the ambiguity by the percentage of all words that have diacritics, 85\%; the percentage of unique non-diacritized word types that have two or more diacritized forms, 32\%, and the lexical diffusion or \emph{LexDif} metric, which conveys the average number of alternatives for each non-diacritized word, 1.47.
\begin{center}\vspace{1cm}
\begin{tabular}{lcl}
\toprule
\multicolumn{2}{c}{\textbf{Characters}} & \textbf{Examples} \\
\midrule
{\`a} {\'a} \v{a} & \textbf{a} & gb{\`a} \emph{(spread)}, gba \emph{(accept)}, gb{\'a} \emph{(hit)} \\
{\`e} {\'e} \d{e} \d{\`e} \d{\'e} & \textbf{e} & es{\'e} \emph{(cat)}, {\`e}s{\`e} \emph{(dye)}, \d{e}s\d{\`e} \emph{(foot)} \\
{\`i} {\'i} & \textbf{i} & {\`i}l{\'u} \emph{(town)}, ilu \emph{(opener)}, {\`i}l{\`u} \emph{(drum)}\\
{\`o} {\'o} \d{o} \d{\`o} \d{\'o} \v{o} & \textbf{o} & \d{o}k\d{\'o} \emph{(hoe)}, \d{\`o}k\d{\`o} \emph{(spear)}, \d{o}k\d{\`o} \emph{(vehicle)}\\
{\`u} {\'u} \v{u} & \textbf{u} & mu \emph{(drink)}, m{\`u} \emph{(sink)}, m{\'u} \emph{(sharp)} \\
\midrule
{\`n} {\'n} \={n} & \textbf{n} & {n} \emph{(I)}, {\'n} (continuous aspect marker) \\
\d{s} & \textbf{s} & {s}{\'a} \emph{(run)}, \d{s}{\'a} \emph{(fade)}, \d{s}{\`a} \emph{(choose)} \\
\bottomrule
\end{tabular}
\captionof{table}{Diacritized forms for each non-diacritic character}
\end{center}\vspace{1cm}
Further, 64\% of all unique, non-diacritized monosyllabic words possess multiple diacritized forms. When we consider the distribution of ambiguity over grammatical function, we recognize the added difficulty of tasks like the lexical disambiguation of non-diacritized Yor{\`u}b{\'a} verbs, which are predominantly monosyllabic.
\\
Finally, there are tonal changes, which are rules about how tonal diacritics on a specific word are \emph{altered} based on context.
\begin{center}\vspace{1cm}
\begin{tabular}{clll}
\toprule
\textbf{Verb} & \textbf{Phrase} & \textbf{Translation} & \textbf{Tone}\\
\midrule
t{\`a} & o t{\`a} a & he sells it & Low \\
& o ta i\d{s}u, o ta\d{s}u & he sells yams & Mid\\
\midrule
t{\`a} & o ta {\`i}w{\'e} & he sells books & Mid \\
& o t{\`a}w{\'e} & he sells books & Low\\
\bottomrule
\end{tabular}
\captionof{table}{Tonal Changes}
\end{center}
%------------------------------------------------
\section*{A Sequence-to-Sequence Approach}
Expressing ADR as a machine translation problem, we treat undiacritized and diacritized text as the source and target languages, respectively, in an NMT formulation. We experimented with two different NMT approaches:
\begin{enumerate}
\item \textbf{Soft-attention} based on the work of Bahdanau et al. \cite{bahdanau2014neural}, extends the RNN-Encoder-Decoder design with an attention mechanism that allows the decoder to observe different source words for each target word.
\item \textbf{Self-attention} aims to improve on the limitations of RNNs, i.e., high computational complexity and non-parallelizable computation. For both the encoder and decoder, the Transformer model, proposed by Vaswani et al. \cite{vaswani2017attention}, employs stacks of self-attention layers in lieu of RNNs.
\end{enumerate}
%----------------------------------------------------------------------------------------
% RESULTS
%----------------------------------------------------------------------------------------
\section*{Experiments \& Results}
We obtained a very small but fully diacritized text from the Lagos-NWU conversational speech corpus by Niekerk et al. We also created our own medium-sized corpus by web-crawling two Yor{\`u}b{\'a}-language websites.
\begin{center}
\begin{tabular}{lll}
\toprule
\textbf{\# words} & \textbf{Source URL} & \textbf{Description} \\
\midrule
24,868 & rma.nwu.ac.za & Lagos-NWU corpus \\
50,202 & theyorubablog.com & language blog\\
910,401 & bible.com & online bible website \\ \bottomrule
\bottomrule
\end{tabular}
\captionof{table}{Training data subsets}
\end{center}
To better understand the dataset split, we computed a perplexity of 575.6 for the test targets with a language model trained over the training targets. The \{source, target\} vocabularies for training and test sets have \{11857, 18979\} and \{4042, 5641\} word types respectively.
We built the soft-attention and self-attention models with the Python 3 implementation of OpenNMT, an open-source toolkit created by Klein et al. Our training hardware configuration was a standard AWS EC2 p2.xlarge instance with a NVIDIA K80 GPU, 4 vCPUs and 61GB RAM.
\begin{center}
\begin{tabular}{ccccc}
\toprule
\textbf{Attention} & \textbf{Size} & \textbf{RNN} & \textbf{Train\%} & \textbf{Test\%}\\
\midrule
soft + dot & 2L 512 & LSTM & 96.2 & 90.1 \\
soft + add & 2L 512 & LSTM & 95.9 & 90.1 \\
soft + tanh & 2L 512 & GRU & 96.2 & 89.7 \\
soft + tanh & 1L 512 & GRU & 97.8 & 89.7 \\
\midrule
self & 6L 512 & - & 98.5 & 95.4 \\
\bottomrule
\end{tabular}
\captionof{table}{Results}
\end{center}
The verbs \textbf{b{\`a}j\d{\'e}} (to spoil) or \textbf{j{\`u}l\d{o}} (to be more than) are discontinuous morphemes, or splitting verbs. In Table 5, the first example shows that the model has learnt the diacritics necessary for \textbf{l\d{o}} following a previously predicted \textbf{j{\`u}}. In the second example, we note the ambiguity of the three occurrences of the undiacritized \textbf{si}, which has two diacritized forms, \textbf{s{\`i}} and \textbf{s{\'i}}. An examination of the attention-weight matrix for this example revealed that the third instance, \textbf{s{\'i}}, attends to the previous \textbf{s{\'i}}, and that the first two attend to each other.
\begin{center}
\begin{tabular}{rl}
\toprule
\textbf{source} & emi ni oye ju awon agba lo nitori mo gba eko re\\
\textbf{target} & {\`e}mi ni {\`o}ye j{\`u} {\`a}w\d{o}n {\`a}gb{\`a} l\d{o} n{\'i}tor{\'i} mo gba \d{\`e}k\d{\'o} r\d{e} \\
\textbf{prediction} & {\`e}mi ni {\`o}ye \underline{\textbf{j{\`u}} {\`a}w\d{o}n {\`a}gb{\`a} \textbf{l\d{o}}} n{\'i}tor{\'i} mo gba \d{\`e}k\d{\'o} r\d{e}\\
\midrule
\textbf{source} & emi yoo si si oju mi si juda \\
\textbf{target} & {\`e}mi y{\'o}{\`o} s{\`i} s{\'i} oj{\'u} mi s{\'i} il{\'e} j{\'u}d{\`a} \\
\textbf{prediction} & {\`e}mi y{\'o}{\`o} s{\`i} s{\'i} oj{\'u} mi s{\'i} il{\'e} j{\'u}d{\`a} \\
\midrule
\textbf{source} & oun ko si ni ko ile ini re sile \\
\textbf{target} & {\`o}un k{\`o} s{\`i} n{\'i} \textbf{k\d{o}} il\d{\`e} {\`i}n{\'i} r\d{\`e} s{\'i}l\d{\`e}\\
\textbf{prediction} & {\`o}un k{\`o} s{\`i} n{\'i} \textbf{k\d{\'o}} il\d{\`e} {\`i}n{\'i} r\d{\`e} s{\'i}l\d{\`e} \\
\bottomrule
\end{tabular}
\captionof{table}{Example predictions}
\end{center}
\begin{center}
\includegraphics[width=0.55\linewidth]{emi_yoo_AttentionWeights}
\end{center}
\begin{center}
\includegraphics[width=0.65\linewidth]{patapata_AttentionWeights}
\captionof{figure}{Attention Weights}
\end{center}
%----------------------------------------------------------------------------------------
% CONCLUSIONS
%----------------------------------------------------------------------------------------
\color{SaddleBrown} % SaddleBrown color for the conclusions to make them stand out
\section*{Conclusions}
\begin{itemize}
\item Attention-based sequence-to-sequence learning approaches perform well on the Yor{\`u}b{\'a} diacritic restoration task.
\item Our approach minimizes the manual work needed to quickly create a high quality text corpus for TTS and MT tasks.
\item To make ADR more suitable as a preprocessing step for end-user text processing applications or any business usage, it will be necessary to augment the training corpus with more general-purpose text.
\end{itemize}
\color{DarkSlateGray} % Set the color back to DarkSlateGray for the rest of the content
%----------------------------------------------------------------------------------------
% FORTHCOMING RESEARCH
%----------------------------------------------------------------------------------------
\section*{Forthcoming Research}
Avenues for future work include evaluating previous approaches to Yor{\`u}b{\'a} ADR on the present dataset, growing the training corpus and training superior word embeddings. We see the application of ADR to OCR output of scanned books as a fruitful next step in building high quality text corpora.
%----------------------------------------------------------------------------------------
% REFERENCES
%----------------------------------------------------------------------------------------
\nocite{*} % Print all references regardless of whether they were cited in the poster or not
\bibliographystyle{plain} % Plain referencing style
\bibliography{sample} % Use the example bibliography file sample.bib
%----------------------------------------------------------------------------------------
\end{multicols}
\end{document} | {
"alphanum_fraction": 0.6306846268,
"avg_line_length": 59.0656934307,
"ext": "tex",
"hexsha": "1db26150fecd7c0447e7564f9626d651b8c4bf07",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1c7f228d75de7ed42a95aae12ea31c4d1d4b1225",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Niger-Volta-LTI/publications",
"max_forks_repo_path": "2018_Interspeech_YO_ADR/conference_poster/conference_poster_5.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1c7f228d75de7ed42a95aae12ea31c4d1d4b1225",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Niger-Volta-LTI/publications",
"max_issues_repo_path": "2018_Interspeech_YO_ADR/conference_poster/conference_poster_5.tex",
"max_line_length": 907,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "1c7f228d75de7ed42a95aae12ea31c4d1d4b1225",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Niger-Volta-LTI/publications",
"max_stars_repo_path": "2018_Interspeech_YO_ADR/conference_poster/conference_poster_5.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4491,
"size": 16184
} |
\documentclass[journal]{IEEEtran}
\ifCLASSOPTIONcompsoc
% IEEE Computer Society needs nocompress option
% requires cite.sty v4.0 or later (November 2003)
\usepackage[nocompress]{cite}
\else
% normal IEEE
\usepackage{cite}
\fi
\usepackage{fixltx2e}
\usepackage{listings}
%JSON
\lstdefinelanguage{JSON}{
basicstyle=\normalfont\ttfamily,
showstringspaces=false,
breaklines=true,
frame=lines,
morestring=[b]",
morestring=[d]'
}
\lstdefinelanguage{hresource}{
%language=html,
basicstyle=\ttfamily,
breaklines=true,
moredelim=[s][\textbf]{rel="hresource}{"},
moredelim=[s][\textbf]{class="}{"}
}
\usepackage{graphicx}
\begin{document}
%
% paper title
% can use linebreaks \\ within to get better formatting as desired
\title{MACHINE READABLE WEB SERVICE DESCRIPTIONS FOR AUTOMATED DISCOVERY AND COMPOSITION OF RESTFUL WEB SERVICES}
\author{\IEEEauthorblockN{
Dhananjay Balan\IEEEauthorrefmark{1},
Abil N George\IEEEauthorrefmark{1},
Akhil P M\IEEEauthorrefmark{1},
Deepak Krishanan\IEEEauthorrefmark{1},
and
Dr. Abdul Nizar M \IEEEauthorrefmark{2}
}\\
\IEEEauthorblockA{\IEEEauthorrefmark{1} Department of Computer Science and Engineering, College of Engineering, Trivandrum}\\
\IEEEauthorblockA{\IEEEauthorrefmark{2}Professor, Department of Computer Science and Engineering, College of Engineering, Trivandrum}
}
% use for special paper notices
%\IEEEspecialpapernotice{(Invited Paper)}
% make the title area
\maketitle
\begin{abstract}
%\boldmath
RESTful web services are popular today, and most service providers are moving to REST-based services due to their simplicity and wide adoption. Even so, the creation of mashups remains a Himalayan task for an average web developer. Many solutions have been proposed for adding semantics to RESTful web services to automate the discovery and composition of services. However, they suffer from several problems: they are biased towards SOAP-based web services, they need a separate file for machine-parsable descriptions, and more. Existing automated mashup creation services also face scalability issues. The need of the hour is a highly scalable system that reduces human intervention as far as possible in the discovery and composition of RESTful web services. In this paper, we propose a microformat-like grammar added to existing service documentation so that it doubles as a machine-readable description. The system also uses RDF descriptions at the backend in order to provide strong interlinking among resources, of which end users are not aware.
\end{abstract}
\begin{IEEEkeywords}
Semantic Web, intelligent agents, web service, semantic web service, REST, RESTful architecture, service discovery, service composition, Microformats, Poshformats, RDF
\end{IEEEkeywords}
% For peer review papers, you can put extra information on the cover
% page as needed:
% \ifCLASSOPTIONpeerreview
% \begin{center} \bfseries EDICS Category: 3-BBND \end{center}
% \fi
%
% For peerreview papers, this IEEEtran command inserts a page break and
% creates the second title. It will be ignored for other modes.
\IEEEpeerreviewmaketitle
\section{Introduction}
REST defines a set of architectural principles for designing web services that use a resource-oriented approach, in contrast to SOAP-based services, which use RPC semantics. REST is lightweight compared to SOAP, which packs a lot of complexity into its envelopes and headers. SOAP messages are formatted in XML, while RESTful services can use XML or JSON for sending and receiving data.
REST provides a set of architectural constraints that, when applied as a whole, emphasize scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security and encapsulate legacy systems. REST uses Uniform Resource Identifiers (URIs) to locate resources. Web developers are shifting to REST-based services due to their simplicity and wide adoption as a standard for creating web services. Most major companies, such as Google, Yahoo, Amazon and Facebook, now provide RESTful services.
Mashups are formed by integrating different services: they combine data or functionality from two or more sources to create new services. Early mashups were developed manually by enthusiastic programmers. As mashed-up services become part of everyday life, more sophisticated approaches are needed. This has led to the idea of automating mashup creation, so that machines can create mashups intelligently with minimal human intervention.
Earlier approaches to automated mashup creation stick to the use of an external file for the service description that only a machine can understand. This gives developers the additional overhead of maintaining machine-readable descriptions in parallel with the human-readable documentation. The microformat-like approach \cite{kopecky2008hrests} is simple and easy to use, and hence provides a low entry barrier for web developers. It reuses the existing service descriptions in HTML, doubling them as machine-readable descriptions by adding annotations that are not visible in the rendered markup.
Several mashup editors exist that can automate mashup creation, such as Yahoo Pipes \cite{pip}, Google Mashup Editor (GME) and Microsoft Popfly. Yahoo Pipes is a free online service with a drag-and-drop interface to aggregate, manipulate and mash up content from around the web. With Yahoo Pipes you can create custom RSS feeds that pull in content from a variety of sources and filter it so that you only see the most relevant news stories. However, Yahoo Pipes has several drawbacks. It cannot create a mashup between arbitrary pairs of services. It is also not scalable, since it lacks processing power: it fails to generate results when processing complex regular expressions over feeds from multiple distant locations with hundreds of posts every minute.
GME is the mashup editor service from Google. Compared to Yahoo Pipes, Google Mashup Editor wins in terms of power and flexibility: it allows coding in JavaScript, HTML, CSS, etc. alongside the interactive interface. Microsoft Popfly is the simplest mashup editor and can serve as an entry point for web developers. However, both of these services have been discontinued.
\section{Related Works}
Many solutions have been proposed for formally describing RESTful web services. These proposals approach the problem from different directions, each providing a novel way of addressing the issue at hand. Most of these solutions were member submissions to the W3C, but there is hardly any consensus on one global standard.
\subsection{Web Service Description Language (WSDL) 2.0}
WSDL 2.0\cite{chinnici2007web} is an extension of the Web Service Description Language (WSDL) that was used to describe traditional SOAP-based services. WSDL 2.0 supports describing RESTful services by mapping the HTTP methods into explicit services available at the given URLs. Every resource thus translates into four or fewer different services: GET, POST, PUT and DELETE.
The advantage of WSDL 2.0 is that it provides a unified syntax for describing both SOAP based and RESTful services. It also has very expressive descriptions where you can define the specific data type, the cardinality and other advanced parameters for each input type.
However, WSDL 2.0 requires RESTful services to be viewed and described from a different architectural platform: that of traditional RPC-like services. This forceful conversion negates many of the advantages of the RESTful philosophy. In addition, the expressiveness of the format comes at the price of losing the simplicity achieved by moving to the RESTful paradigm. These verbose files are not easy to write by hand and also impose a maintenance headache; hence, WSDL files are typically generated with the help of tools. Further, WSDL descriptions are external, XML-based files that the developer has to create and maintain separately.
\subsection{Web Application Description Language (WADL)}
Web Application Description Language (WADL)\cite{hadley2006web} is another XML based description language proposed by Sun Microsystems. Unlike WSDL, WADL is created specifically to address the description of web applications, which are usually RESTful services. WADL documents describe resources instead of services but maintain the expressive nature of WSDL.
WADL still shares some of the concerns associated with WSDL, in that it still requires an external XML file to be created and maintained by the developer. It also results in boilerplate code.
\subsection{hRESTS}
hRESTS\cite{kopecky2008hrests} is a Microformat that attempts to fortify the already existing service documentations with semantic annotations in an effort to reuse them as formal resource descriptions. The Microformat uses special purpose class names in the HTML code to annotate different aspects of the associated services. It defines the annotations service, operation, address, method, input and output. A parser can examine the DOM tree of an hRESTS fortified web page to extract the information and produce a formal description.
hRESTS is a format specifically designed for RESTful services and hence avoids a lot of unnecessary complexities associated with other solutions. It also reduces the efforts required from the developer since he no longer needs to maintain a separate description of the service.
One downside of hRESTS is that, despite being specifically designed for REST, it still adheres to RPC-like service semantics: the HTTP methods must still be explicitly mentioned as the operations involved. Moreover, instead of representing the attributes of a resource, it attempts to represent them as inputs and outputs as in traditional services. This results in a lot of unnecessary markup.
\subsection{SA-REST}
Similar to hRESTS, SA-REST\cite{gomadam2010sa} is also a semantic-annotation-based technique. Unlike hRESTS, it uses the RDFa syntax to describe services, operations and associated properties. The biggest difference between them is that SA-REST has some built-in support for semantic annotations, whereas hRESTS provides nothing more than a label for the inputs and outputs. SA-REST uses the concept of lifting and lowering schema mappings to translate the data structures in the inputs and outputs to the data structure of an ontology, the grounding schema, to facilitate data integration. It shares much the same advantages and disadvantages as the hRESTS format. In addition, since SA-REST is strictly based on RDF concepts and notations, the developer needs to be well aware of the full spectrum of concepts in RDF.
\subsection{SEREDASj}
SEREDASj\cite{lanthaler2011semantic}, a novel description method, addresses the problem of resource description from a different perspective. While the other methods resort to the RPC-like semantics of input-operation-output, SEREDASj attempts to describe resources in their native format: as resources with attributes that can be created, retrieved, updated and deleted. This helps to reduce the inherent difference between operation-oriented and resource-oriented systems. The method also emphasizes a simple approach that provides a low entry barrier for developers.
One interesting aspect of SEREDASj is that it uses JSON (JavaScript Object Notation)[15] to describe resources. JSON is an easy and popular markup technique used on the web - especially among RESTful service developers. The advantage is that the target audience is already familiar with the notation used for markup, which can reduce friction when it comes to adoption.
SEREDASj, however, addresses the documentation and description in the reverse order: the description is created in JSON first, and the documentation is then generated from it. This can increase the upgrade effort required for existing services and is not very flexible. It is still possible to embed these JSON descriptions into existing documentation, but they float as data islands, separated from the rest of the HTML page. This, again, causes some duplication of data between the documentation and the description and creates a maintenance hurdle.
\section{Service Description}
\label{sec:ServiceDescription}
Services are described using special-purpose annotations in the HTML code. These annotations are specified in the class attribute of the associated tags. This attribute is usually used for the classification of HTML elements, as a selector for JavaScript access, and to style elements via CSS. Using special-purpose annotations as class names helps us reuse what is already provided by HTML and JavaScript. The following annotations are proposed for describing resources in a RESTful API.
\subsection{General Annotations}
\begin{enumerate}
\item {\bf hresource}: This is the root annotation that marks the resource description. All other annotations are contained within an element marked with {\it class="hresource"}. A client parsing a page could treat the presence of this annotation as an indication of the existence of a resource description on the page. Unless all other annotations are encapsulated in an hresource, they will not be parsed.
\item {\bf name}: Annotates the name of the resource. This can be any human-readable name and need not have any programming significance.
\item {\bf url/uri}: Annotates the URL at which the resource is accessible. (A hypothetical fragment combining these general annotations follows this list.)
\end{enumerate}
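For instance, the header of a hypothetical book resource could combine these general annotations as follows (the resource name and URL are illustrative only):
\begin{lstlisting}[language=hresource,breaklines=true]
<div class="hresource">
  <h2 class="name">Book</h2>
  Available at: <code class="url">http://example.com/api/books</code>
  <!-- attribute, method and error annotations follow -->
</div>
\end{lstlisting}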
\subsection{Annotation Of Attributes}
\begin{enumerate}
\item {\bf attribute}: Annotates an attribute/property of the resource. All attributes of a resource should be annotated with this annotation. Specific characteristics of the attribute could be further specified by more annotations that are used together with the attribute annotation.
\item {\bf required}: Indicates a required attribute. This annotation is always used along with the attribute annotation.
\item {\bf queryable}: Indicates an attribute that may be provided in the HTTP querystring during a GET operation to filter the results. This annotation is always used along with the attribute annotation.
\item {\bf read-only}: Indicates a read-only attribute. A read-only attribute may be retrieved during a GET operation but may not be included in a POST or a PUT. This annotation is always used along with the attribute annotation.
\item {\bf write-once}: Indicates a write-once attribute that can be specified only during the create operation (POST) but not during update (PUT). This annotation is always used along with the attribute annotation.
\item {\bf guid}: Indicates if an attribute is a globally unique identifier for the resource that could be used across multiple services.
\item {\bf comment}: Provides a human-readable description of the attribute. This should be a descendant of the parent of the attribute node.
\item {\bf hresource-datatype}: Annotates the datatype of the attribute. This should be a descendant of the parent of the attribute node. For permissible types, see Table \ref{tab:data_types}.
\begin{table}
\centering
\begin{tabular}{|l|l|}
\hline
Data Type & Description \\ \hline
Integer or Int & 32-bit integer \\ \hline
float & floating point number \\ \hline
Int64 & 64-bit integer \\ \hline
Range & numeric range (see the example below) \\ \hline
Boolean or Bool & boolean value \\ \hline
Date or Time & date or time; should specify the date formatting \\ \hline
Timestamp & timestamp of the entity \\ \hline
\end{tabular}
\caption{Data types}
\label{tab:data_types}
\end{table}
{\bf e.g.}: \texttt{Range(0.0,1.0)} specifies a floating-point number between 0 and 1, while \texttt{Range(0,1)} specifies an integer between 0 and 1.
\end{enumerate}
\subsection{Annotation of Methods}
An attribute can be an input, an output, or both. A REST resource can be considered a bundle of attributes, each of which may have restrictions on its access. In practice, however, most existing REST resources are specified in terms of the inputs and outputs of the HTTP methods (GET, POST, PUT, DELETE).
\begin{enumerate}
\item {\bf method}: This is the root annotation that marks a permissible method (a hypothetical example is given after this list). It contains the following sub-annotations:
\begin{itemize}
\item {\it type}: HTTP request type. GET,POST,PUT,DELETE
\item {\it input}: (optional) input attribute name
\item {\it output}: (optional) output attribute name
\item {\it header}: (optional) attributes that should be passed as header to HTTP request
\end{itemize}
\end{enumerate}
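For instance, a GET operation on a hypothetical book resource might be annotated as follows (an illustrative fragment only; the attribute names are not prescribed by the grammar):
\begin{lstlisting}[language=hresource,breaklines=true]
<div class="method">
  <code class="type">GET</code>
  Input: <span class="input">isbn</span>
  Output: <span class="output">title</span>
  Header: <span class="header">api-key</span>
</div>
\end{lstlisting}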
\subsection{Annotation of Errors}
\begin{enumerate}
\item {\bf hresource-error}: This is the root annotation that marks the errors. It should contain two sub-annotations for each error:
\begin{itemize}
\item {\it error-code}:Specify the error code.
\item {\it comment}: Description of error.
\end{itemize}
e.g.:
\begin{lstlisting}[language=html,breaklines=true]
<li class="hresource-error">
<code class="error-code">201</code>:
<span class="comment">test failed</span>
</li>
\end{lstlisting}
\end{enumerate}
\begin{figure}[!ht]
\centering
\includegraphics[width=3.5in]{images/rel_semantic.png}
\caption{Relationship between the semantic annotations}
\label{fig:rel_semantic}
\end{figure}
\subsection{Annotations For Linking Resources}
\begin{enumerate}
\item {\it Link to Super class}: When a resource is a subclass of another resource, this link is indicated by the \texttt{rel} attribute \texttt{hresource-is-a}. This implies that wherever the superclass is accepted, the subclass is also accepted. For example, if a publisher defines a Book resource to provide a search of their catalog, they could annotate the resource to be a subclass of a more generic Book resource.
\begin{lstlisting}[language=html,breaklines=true]
<a rel="hresource-is-a" href="http://dublincore.org/book/"> Book </a>
\end{lstlisting}
If there is another service from a bookshop that is known to accept a generic book resource for a purchase process, the client could infer that the specific book resource from the catalog would also be accepted there and use it.
For this linking to work properly, we need a core set of resources that can be extended by others. Fortunately, there is already a project named Dublin Core running that has defined many commonly used resources. We could reuse these resources for our purpose and use them as the root resources.
\item {\it Link to Consumers}: When an attribute of a service is consumed by another known service, this is annotated using the \texttt{rel} attribute \texttt{hresource-consumed-by}. This enables a software agent to find out what can be done with a resource that it has already retrieved.
\begin{lstlisting}[language=html,breaklines=true]
<code class="attribute">ISBN</code>
Consumers:
<ul>
<li rel="hresource-consumed-by">
http://abc.com/buybook\#isbn
</li>
<li rel="hresource-consumed-by">
http://xyz.com/rentbook\#isbn
</li>
</ul>
\end{lstlisting}
\item {\it Link to Producers}: Similar to the link to consumers, a service can annotate a link to a producer of one of its attributes. This enables reverse traversal of resources and also makes the system more peer-to-peer: a link needs to be provided at either one of the consumers or at the producer, and an agent can identify it through link traversal. The annotation is made with the \texttt{rel} attribute \texttt{hresource-produced-by}.
The relationship between these semantic annotations is shown in Figure \ref{fig:rel_semantic}.
\end{enumerate}
\section{Example REST{\it ful} web service}
%TODO
\vspace{50pt}
Consider a RESTful API with a ...
\textbf{\textit{TODO}}\\
\vspace{50pt}
Listing \ref{lst:Sample_HTML} depicts such annotations (following Section \ref{sec:ServiceDescription}) across an HTML description of the above API.
When the above HTML page (Listing \ref{lst:Sample_HTML}) is parsed, the required API information is generated as JSON, as shown in Listing \ref{lst:Sample_JSON}.
Moreover, an HTML documentation page can also be generated from a JSON description in the specified format.
\lstinputlisting[language=hresource, frame=lines,
captionpos=b,caption=annotated HTML page, belowcaptionskip=4pt,
label=lst:Sample_HTML]{images/HTML.html}
\lstinputlisting[language=JSON,frame=lines
,captionpos=b,caption=JSON description generated from above HTML page, belowcaptionskip=4pt,
label=lst:Sample_JSON]{images/JSON.json}
\section{System implementation}
The proposed system currently addresses the composition of RESTful web services that represent resources using JavaScript Object Notation (JSON). The system expects the services to return results to API queries as JSON objects and composes them as per the user's specification. Extending the same idea can enable the composition of XML-based RESTful services. The system also includes an RDF conversion module that performs the automatic conversion from microformat annotations to RDF\cite{rdf}.
The system uses a web UI at the client side for reading user input. The requests are handled by a server program developed in node.js that accepts requests from multiple client machines and handles them asynchronously. The server program acts as a proxy and is developed to enable the system to handle high volumes of client traffic. The server does the bulk of the processing and also allows multiple cross-domain HTTP calls with ease, which would otherwise not be possible with a client-side implementation because of the same-origin policy enforced by modern web browsers.
The basic architecture of the system is illustrated in Figure \ref{fig:sys_arch}.
\begin{figure}[!ht]
\centering
\includegraphics[width=3.5in]{images/sys_arch.png}
\caption{System architecture}
\label{fig:sys_arch}
\end{figure}
The system uses a parser module to parse the DOM tree of the annotated API Documentation page and extract the information embedded in it. Based on the information extracted from the DOM tree, the web UI presented to the user is populated with a set of API operations that the system identifies and that can be composed. A workbench is presented to the user, and a drag-and-drop based user interface enables him/her to graphically describe the required composition of web services. Once the graphical design of the mash-up is complete and the user submits it, information from the design is converted into an abstract internal representation that is passed to the server. The server then invokes the required API calls asynchronously, composes them as required, and produces an HTML formatted output, which is the required mash-up. The HTML output is then served back to the client system for the user.
The interaction flow in the system is illustrated in Figure~\ref{fig:interaction_flow}.
\begin{figure*}[!ht]
\centering
\includegraphics[width=6in] {images/seq.png}
\caption{Interaction flow sequence}
\label{fig:interaction_flow}
\end{figure*}
\subsection{Server Side Implementation}
\begin{figure}[!ht]
\centering
\includegraphics[width=3.5in]{images/server_arch.png}
\caption{Server Architecture}
\label{fig:server_arch}
\end{figure}
The architecture of the server is illustrated in Figure~\ref{fig:server_arch}. The server primarily provides three services:
\begin{enumerate}
\item {\it Parsing API Documentation HTML}: The client side provides the URL of an annotated API Documentation HTML page. The server fetches the required documentation page and passes the HTML DOM to the parser module (hrestsparser). The parser generates a JSON description of the annotated resources, which is then passed to the client to populate the client UI. The parser uses XPath for the traversal of the DOM tree and is hence platform neutral.
\item {\it RDF Generation}: The client passes a URL to the server. The page is fetched by the RDF generator module and parsed by the hRESTS parser module to produce the output JSON. The produced-by, consumed-by, and is-a relations in the annotated page form a graph of resources, each of which is required for the generation of the RDF description. The RDF generator module recursively traverses the graph and fetches each of the required resources up to some arbitrary level of nesting; a sketch of this walk is given after the list.
\item {\it Request handling and mash-up generation}: The user specifies the required composition of services graphically in the client workbench. An abstract description of the required mash-up is generated at the client and is passed to the server as a JSON object. The server makes the required API invocations to fetch each of the resources to be composed into the final mash-up. The description JSON is parsed and the result resources are composed as specified, producing an HTML output which is then served to the client system.
\end{enumerate}
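The following sketch illustrates the recursive walk behind the RDF generation step. It is a simplification in Python: an in-memory dictionary stands in for the HTTP fetch plus hRESTS parse of each page, and all URLs and relation entries are hypothetical.
\begin{lstlisting}[language=Python,breaklines=true]
RESOURCES = {  # resource URL -> its parsed relations (stand-in for fetch+parse)
    "http://abc.com/book": {
        "is-a": ["http://purl.org/dc/book"],
        "consumed-by": ["http://abc.com/buybook#isbn"],
    },
    "http://abc.com/buybook#isbn": {"produced-by": ["http://abc.com/book"]},
    "http://purl.org/dc/book": {},
}

def collect(url, depth, seen=None):
    # Walk is-a / consumed-by / produced-by links up to a nesting depth.
    seen = set() if seen is None else seen
    if depth < 0 or url in seen or url not in RESOURCES:
        return seen
    seen.add(url)
    for targets in RESOURCES[url].values():
        for target in targets:
            collect(target, depth - 1, seen)
    return seen

print(sorted(collect("http://abc.com/book", depth=2)))
\end{lstlisting}
Each collected resource description would then be emitted as RDF triples by the generator module.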
\subsection{Client Side Implementation}
The client provides a simple drag-and-drop based user interface for mash-up design, which can be implemented using jsPlumb\cite{jsplumb}. The elements in the UI are populated from the contents of the API documentation passed to the server. The different API calls can simply be dragged and dropped onto the workbench, and the inputs and outputs can be piped to one another by graphical connectors.
The client uses basic data structures, which are simple JSON objects, to keep track of attribute values and mappings. This forms a logical graph of the way in which the different service calls are composed. An abstract representation of the mash-up is created by traversing this graph, which can be done using a modified depth-first search. The abstract representation is a JSON object which represents the initiation sequence for the API calls; this is then passed to the server.
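A minimal sketch of that traversal is given below (in Python; the call names and the shape of the description object are hypothetical). It linearises the composition graph with a depth-first walk so that every producer appears before its consumers in the initiation sequence.
\begin{lstlisting}[language=Python,breaklines=true]
import json

# call -> calls that consume its output (a toy composition graph)
graph = {
    "searchBook": ["buyBook"],
    "buyBook": ["formatReceipt"],
    "formatReceipt": [],
}

def initiation_sequence(graph):
    order, visited = [], set()
    def dfs(node):
        if node in visited:
            return
        visited.add(node)
        for consumer in graph[node]:
            dfs(consumer)
        order.append(node)  # post-order; reversing puts producers first
    for node in graph:
        dfs(node)
    return list(reversed(order))

print(json.dumps({"sequence": initiation_sequence(graph)}))
# {"sequence": ["searchBook", "buyBook", "formatReceipt"]}
\end{lstlisting}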
\section{Conclusion and Future work}
The web is becoming increasingly complex: thousands of services spawn millions of requests each day. This work envisioned an intelligent web framework in which a service can create interconnections between different compatible services. It opens up a multitude of possibilities, including a higher-layer manipulation of information for what it represents rather than how it is represented.
The popularization of services like ifttt\cite{ift} suggests that this kind of service is inevitable in the future of the web; if machines were able to parse the information constituted by the entire internet, they could enable applications far beyond what has been envisioned so far.
The prototype mash-up editor we created can intelligently mash services based on annotations, but it is still limited in some aspects. We hope this direction receives more exploration, since it is relatively easy for both the developer and the machine to follow. The prototype was implemented in different paradigms to verify computational comparability.
The immediate future directions in which we would like to pursue this work are:
\begin{itemize}
\item Changes to accommodate today's REST APIs. Most of the REST APIs in use today do not conform to the RESTful paradigm; they use GET heavily to get most things done, for performance reasons.
\item A tool to convert existing REST APIs. This comes with a lot of challenges: the above one for a start, and adoption is another problem.
\end{itemize}
% use section* for acknowledgement
%\section*{Acknowledgment}
%The authors would like to thank...
% references section
\bibliographystyle{plain}
\bibliography{references}
% that's all folks
\end{document}
\section{Figure example}
% ############################################################################################################################################ %
\begin{frame}{Figure example}
\begin{figure}[H]
\centering
\includegraphics[width=.5\textwidth]{fig/beam_solx.jpg}
\caption{Figure example.}
\end{figure}
\end{frame}
% ############################################################################################################################################ %
\documentclass[12pt,oneside]{scrartcl}
\usepackage[utf8]{inputenc}
\usepackage[ngerman]{babel}
\usepackage[babel,german=quotes]{csquotes}
\usepackage[T1]{fontenc}
\usepackage[a4paper, left=4cm, right=2cm, top=3cm, bottom=2.1cm]{geometry}
\usepackage[scaled=.90]{helvet}
\renewcommand{\familydefault}{\sfdefault}
% Default Nomenclature
\usepackage[intoc]{nomencl}
\setlength{\nomlabelwidth}{.20\textwidth}
\renewcommand{\nomlabel}[1]{#1 \dotfill}
\makenomenclature
% Setter and getter for nomenclature
\usepackage{expl3}
\usepackage{xparse}
\usepackage{ifthen}
\ExplSyntaxOn
\prop_new:N \g_nomenclature_props
\prop_new:N \g_nomenclature_activation_props
\prop_new:N \g_nomenclature_full
%
\NewDocumentCommand{\NCSetForce}{mmg}{
\IfNoValueTF{#3}
{\nomenclature{#1}{#2}}
{\nomenclature{#1}{#2,\space #3}}
\prop_gput:Nnn \g_nomenclature_props {#1} {#2}
\prop_gput:Nnn \g_nomenclature_activation_props {#1} {true}
}
%
\NewDocumentCommand{\NCSet}{mmg}
{
\IfNoValueTF{#3}
{\prop_gput:Nnn \g_nomenclature_full {#1} {\nomenclature{#1}{#2}}}
{\prop_gput:Nnn \g_nomenclature_full {#1} {\nomenclature{#1}{#2,\space #3}}}
\prop_gput:Nnn \g_nomenclature_props {#1} {#2}
}
%
\NewDocumentCommand{\NCGet}{m}
{\emph{
\ifthenelse{\equal{\prop_item:Nn \g_nomenclature_props {#1}}{}}
{
\textbf{ERROR:\space NOMENCLATURE\space \enquote{#1} \space NOT\space FOUND}
}{
\ifthenelse{\equal{\prop_item:Nn \g_nomenclature_activation_props {#1}}{true}}
{
#1
}{
\prop_item:Nn \g_nomenclature_full {#1}
\prop_gput:Nnn \g_nomenclature_activation_props {#1} {true}
\prop_item:Nn \g_nomenclature_props {#1}
\space(#1)
}
}
}}
% Same as NCGet but without activation check (needed for longtables)
\NewDocumentCommand{\NCGetFull}{m}
{\emph{
\ifthenelse{\equal{\prop_item:Nn \g_nomenclature_props {#1}}{}}
{
\textbf{ERROR:\space NOMENCLATURE\space \enquote{#1} \space NOT\space FOUND}
}{
\prop_item:Nn \g_nomenclature_full {#1}
\prop_gput:Nnn \g_nomenclature_activation_props {#1} {true}
\prop_item:Nn \g_nomenclature_props {#1}
\space(#1)
}
}}
\ExplSyntaxOff
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\printnomenclature
\newpage
%%%
\NCSet{NCL}{My Nomenclature}
\NCSet{AAA}{Triple A}
\NCSet{XYZ}{eX whY Zett}
\NCSet{MORE}{It's not less}{This is an additional explanation}
\NCSet{FULL}{Description to the fullest}
\NCSetForce{FORCE}{I was forced into the nomenclature}
\NCSetForce{FORCETWO}{I also was forced into the nomenclature}{At least I was fetched}
%%%
\section{NCL}
\NCGet{NCL}
\NCGet{NCL}
foo bar \NCGet{NCL} baz.
%
\section{foo}
\NCGet{foo}
\NCGet{ncl}
%
\section{AAA}
\NCGet{AAA}
\NCGet{AAA}
\NCGet{AAA}
foo bar \NCGet{AAA} baz.
%
\section{XYZ}
The nomenclature entry \emph{XYZ} is set but never used, so it's not added to the nomenclature.
%
\section{MORE}
This one fits perfectly into text but offers an extended description text in the nomenclature.
\NCGet{MORE}
%
\section{FORCE}
The term \enquote{FORCE} was added to nomenclature but it has never been fetched.
The term \NCGet{FORCETWO} was added to nomenclature and also was fetched.
%
\section{Full}
Get full: \NCGet{FULL}
Get full again: \NCGet{FULL}
Get fullest: \NCGetFull{FULL}
\end{document}
\mainsection{Run Time Subsystem}
\label{ch:vm}
This chapter describes the implementation of the
MPC Virtual Machine.
The virtual machine is driven by Bytecodes which
are produced by the MAMBA compiler (see later).
Of course you could compile Bytecodes from any compiler
if you wanted to write one with the correct backend.
The virtual machine structure resembles that of a
simple multi-core processor, and is a register-based
machine.
Each core corresponds to a separate online thread
of execution; so from now on we refer to these ``cores''
as threads.
Each thread has a separate set of registers, as
well as a stack for {\em integer} values.
To allow the saving of state, or the transfer of data
between threads there is a global memory.
This global memory (or at least the first $2^{20}$
values) is saved whenever the SCALE system gracefully
shuts down.
The loading of this saved memory into a future run of the
system is controlled by the command line arguments
passed to the \verb+Player.x+ program.
The design is deliberately kept sparse to ensure a fast, low-level
implementation, whilst more complex optimization decisions are intended
to be handled at a higher level.
\subsection{Overview}
The core of the virtual machine is a set of threads, which
execute sequences of instructions encoded in a bytecode format.
Files of bytecode instructions are referred to as \emph{tapes},
and each thread processes a single tape at a time.
Each of these execution threads has a pairwise point-to-point
communication channels with the associated threads in all
other players' runtime environments.
These communication channels, like all communication channels
in SCALE, are secured via TLS.
The threads actually have two channels to the corresponding
thread in the other parties; we call these different channels
``connections''.
For the online thread, ``connection'' zero is used for standard
opening of shared data, whereas ``connection'' one is used for
private input and output. This is to avoid conflicts between
the two (for example a \verb+PRIVATE_OUTPUT+ coming between
a \verb+STARTOPEN+ and a \verb+STOPOPEN+).
Each online thread is supported by four other threads
performing the offline phase, each again with pairwise
TLS secured point-to-point channels. Currently the offline
threads only communicate on ``connection'' zero.
In the case of Full Threshold secret sharing another set of
threads act as a factory for FHE ciphertexts. Actively secure
production of such ciphertexts is expensive, requiring complex
zero-knowledge proofs (see Section \ref{sec:fhe}). Thus
the FHE-Factory threads centralize this production in a
single location. The number of FHE Factory threads can be
controlled at run-time by the user.
In addition to bytecode files, each program to be run must
have a \emph{schedule}. This is a file detailing the execution
order of the tapes, and which tapes are to be run in parallel.
There is no limit to the number of concurrent tapes specified in a schedule,
but in practice one will be restricted by the number of cores.
The schedule file allows you to schedule concurrent threads
of execution; it also defines the maximum number of threads
a given run-time system will support, as well as
the specific bytecode sequences which are pre-loaded
into the system.
One can also programmatically control execution of new
threads using the byte-code instructions \verb+RUN_TAPE+ and \verb+JOIN_TAPE+
(see below for details).
The schedule is run by the \emph{control thread}.
This thread takes the tapes to be executed at a
given step in the schedule, passes them to the execution
threads, and waits for the threads to
finish their execution before proceeding to the next stage of
the schedule.
Communication between threads is handled by a global
\emph{main memory}, which all threads have access to.
To avoid unnecessary stalls there is no locking mechanism provided to
the memory. So if two simultaneously running threads
execute a read and a write, or two writes, to the same
memory location, then the result is undefined, since
the order in which the instructions
will be performed is not specified.
Memory comes in both clear and secret forms.
Each execution thread also has its own local clear and secret
registers, to hold temporary variables.
Unlike the main memory, the values of registers
are not assumed to be maintained
between an execution thread running one tape and
the next tape, so all passing of values
between two sequential tape executions must be done
by reading and writing to the virtual machine's main memory.
This holds even if the two consecutive bytecode
sequences run on the same ``core''.
\subsection{Bytecode instructions}
The design of the Bytecode instructions within a tape are influenced
by the RISC design strategy, coming in only a few basic
types and mostly taking between one and three
operands. The virtual machine also supports a limited form of
SIMD instructions within a thread, whereby a single instruction is used to
perform the same operation on a fixed size set of registers.
These vectorized instructions are not executed in parallel
as in traditional SIMD architectures, but exist to provide a compact way of
executing multiple instructions within a thread, saving on memory
and code size.
A complete set of byte codes and descriptions is
given in the html file in
\begin{center}
\verb+$(HOME)/Documentation/Compiler_Documentation/index.html+
\end{center}
under the class \verb+instructions+.
Each encoded instruction begins with 32 bits reserved for the opcode.
The right-most nine bits specify the instruction to be executed\footnote{The choice of nine is to enable extension of the system later, as eight is probably going
to be too small.}.
The remaining 23 bits are used for vector instructions, specifying the
size of the vector of registers being operated on.
The remainder of an instruction encoding consists of 4-byte operands, which
correspond to either indices of registers or immediate integer values.
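As an illustration of this layout (a sketch, not code from the SCALE distribution), a 32-bit instruction word could be decoded as follows:
\begin{verbatim}
# Python sketch of the opcode layout described above.
def decode_opcode(word):
    instruction = word & 0x1FF   # right-most nine bits
    vector_size = word >> 9      # remaining 23 bits
    return instruction, vector_size

# A vectorized instruction acting on 8 registers, base opcode 0x21:
word = (8 << 9) | 0x21
assert decode_opcode(word) == (0x21, 8)
\end{verbatim}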
\begin{itemize}
\item Note, vector instructions are not listed in the \verb+html+ document above.
They have the same name as standard instructions, prefixed by `V',
with the opcode created as described above.
\end{itemize}
The basic syntax used in the above html file is as follows:
\begin{itemize}
\item `c[w]': clear register, with the optional suffix `w' if the register is
written to.
\item `s[w]': secret register, as above.
\item `r[w]': regint register, as above.
\item `i' : 32-bit integer signed immediate value.
\item `int' : 64-bit integer unsigned immediate value.
\item `p' : 32-bit number representing a player index.
\item `str' : A four byte string.
\end{itemize}
Memory comes in three varieties \verb+sint+, \verb+cint+, and
\verb+regint+; denoted by \verb+S[i]+, \verb+C[i]+ and \verb+R[i]+.
\subsection{Load, Store and Memory Instructions}
Being a RISC design the main operations are load/store
operations, moving operations, and memory operations.
Each type of instruction comes in either clear data,
shared data, or integer data formats.
The integer data is pure integer arithmetic, say
for controlling loops, whereas clear data is integer
arithmetic modulo $p$.
Clear values are represented as integers
in the range $(-\frac{p-1}{2}, \dots, \frac{p-1}{2}]$.
The integer stack is again mainly intended for loop
control.
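For illustration, the map from a residue in $[0,p)$ to this centred representative is simply (a Python sketch, not the SCALE source):
\begin{verbatim}
# Centred representative of x mod p, for odd prime p.
def centred(x, p):
    return x if x <= (p - 1) // 2 else x - p

p = 101
assert centred(3, p) == 3     # small values unchanged
assert centred(100, p) == -1  # p-1 maps to -1
\end{verbatim}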
\subsubsection{Basic Load/Store/Move Instructions:}
\verb+LDI+,
\verb+LDSI+,
\verb+LDINT+,
\verb+MOVC+,
\verb+MOVS+,
\verb+MOVINT+.
\subsubsection{Loading to/from Memory:}
\verb+LDMC+,
\verb+LDMS+,
\verb+STMC+,
\verb+STMS+,
\verb+LDMCI+,
\verb+LDMSI+,
\verb+STMCI+,
\verb+STMSI+,
\verb+LDMINT+,
\verb+STMINT+,
\verb+LDMINTI+,
\verb+STMINTI+.
\subsubsection{Accessing the integer stack:}
\verb+PUSHINT+, \verb+POPINT+.
\subsubsection{Data Conversion}
To convert from mod $p$ to integer values and
back we provide the conversion routines.
\verb+CONVINT+, \verb+CONVMODP+.
These are needed as the internal mod $p$ representation
of clear data is in Montgomery representation.
\subsection{Preprocessing loading instructions}
The instructions for loading data from the preprocessing phase
are denoted \verb+TRIPLE+, \verb+SQUARE+, \verb+BIT+,
and they take as argument three, two, and one secret registers
respectively.
The associated data is loaded from the concurrently running
offline threads and loaded into the registers given as arguments.
\subsection{Open instructions}
The process of opening secret values is covered by two instructions.
The \verb+STARTOPEN+ instruction takes as input a set of $m$
shared registers, and \verb+STOPOPEN+ an associated set of $m$
clear registers, where $m$ can be an arbitrary integer.
This initiates the protocol to reveal the $m$ secret shared register values,
storing the result in the specified clear registers. The reason for
splitting this into two instructions is so that local, independent
operations may be placed between a \verb+STARTOPEN+ and \verb+STOPOPEN+,
to be executed whilst waiting for the communication to finish.
There is no limit on the number of operands to these instructions,
allowing for communication to be batched into a single pair of
instructions to save on network latency. However, note that when
the \texttt{RunOpenCheck} function in the C++ class \texttt{Open\_Protocol}
is used to check MACs/Hashes then this can stall when the network buffer fills
up, and hang indefinitely.
On our test machines this happens when opening around 10000 elements
at once, so care must be taken to avoid this when compiling or writing
bytecode (the Python compiler could automatically detect and avoid
this).
\subsection{Threading tools}
Various special instructions are provided to ease the workload when writing
programs that use multiple tapes.
\begin{itemize}
\item The \verb+LDTN+ instruction loads the current thread number into
a clear register.
\item The \verb+LDARG+ instruction loads an argument that was passed
when the current thread was called.
Thread arguments are optional and consist of a single integer,
which is specified in the schedule file that determines the execution
order of tapes, or via the instruction \verb+RUN_TAPE+.
\item The \verb+STARG+ allows the current tape to change its
existing argument.
\item To run a specified pre-loaded tape in a given thread, with
a given argument the \verb+RUN_TAPE+ command is executed.
\item To wait until a specified thread has finished one executes
the \verb+JOIN_TAPE+ function.
\end{itemize}
\subsection{Basic Arithmetic}
This is captured by the following instructions,
with different instructions being able to be operated
on clear, shared and integer types.
\verb+ADDC+,
\verb+ADDS+,
\verb+ADDM+,
\verb+ADDCI+,
\verb+ADDSI+,
\verb+ADDINT+,
\verb+SUBC+,
\verb+SUBS+,
\verb+SUBML+,
\verb+SUBMR+,
\verb+SUBCI+,
\verb+SUBSI+,
\verb+SUBCFI+,
\verb+SUBSFI+,
\verb+SUBINT+,
\verb+MULC+,
\verb+MULM+,
\verb+MULCI+,
\verb+MULSI+,
and
\verb+MULINT+.
\subsection{Advanced Arithmetic}
More elaborate algorithms can clearly be executed directly on
clear or integer values; without the need for complex
protocols. These include logical, shift and number
theoretic functions.
\verb+ANDC+,
\verb+XORC+,
\verb+ORC+,
\verb+ANDCI+,
\verb+XORCI+,
\verb+ORCI+,
\verb+NOTC+,
\verb+SHLC+,
\verb+SHRC+,
\verb+SHLCI+,
\verb+SHRCI+,
\verb+DIVC+,
\verb+DIVCI+,
\verb+DIVINT+,
\verb+MODC+,
\verb+MODCI+.
\verb+LEGENDREC+,
and
\verb+DIGESTC+.
\subsection{Debuging Output}
To enable debugging we provide simple commands to send
debugging information to the \verb+Input_Output+ class.
These bytecodes are
\begin{verbatim}
PRINTINT, PRINTMEM, PRINTREG, PRINTREGPLAIN,
PRINTCHR, PRINTSTR, PRINTCHRINT, PRINTSTRINT,
PRINTFLOATPLAIN, PRINTFIXPLAIN.
\end{verbatim}
\subsection{Data input and output}
This is entirely dealt with in the later Chapter on IO.
The associated bytecodes are
\begin{verbatim}
OUTPUT_CLEAR, INPUT_CLEAR,
OUTPUT_SHARE, INPUT_SHARE,
OUTPUT_INT, INPUT_INT,
PRIVATE_INPUT, PRIVATE_OUTPUT,
OPEN_CHAN, CLOSE_CHAN
\end{verbatim}
\subsection{Branching}
Branching is supported by the following
instructions
\verb+JMP+,
\verb+JMPNZ+,
\verb+JMPEQZ+,
\verb+EQZINT+,
\verb+LTZINT+,
\verb+LTINT+,
\verb+GTINT+,
\verb+EQINT+,
and
\verb+JMPI+.
\subsection{Other Commands}
The following byte codes are for fine tuning the machine
\begin{itemize}
\item \verb+REQBL+ this is output by the compiler to signal that
the tape requires a minimal bit length. This forces the runtime
to check the prime $p$ satisfies this constraint.
\item \verb+CRASH+ this enables the program to create a crash,
if the programmer is feeling particularly destructive.
\item \verb+RAND+ this loads a pseudo-random value into a
clear register. This is not a true random number, as all
parties output the same random number at this point.
\item \verb+RESTART+ which restarts the online runtime.
See Section \ref{sec:restart} for how this is intended to be used.
\item \verb+CLEAR_MEMORY+ which clears the current memory.
See Section \ref{sec:restart} for more details on how this is used.
\item \verb+CLEAR_REGISTERS+ which clears the registers of this processor core (i.e. thread).
See Section \ref{sec:restart} for more details on how this is used.
\item \verb+START_TIMER+ and \verb+STOP_TIMER+ are used to time different
parts of the code. There are 100 timers available in the system;
each is initialized to zero at the start of the machine running.
The operation \verb+START_TIMER+ re-initializes a specified timer,
whereas \verb+STOP_TIMER+ prints the elapsed time since the last
initialization (it does not actually reinitialise/stop the timer itself).
These are accessed from MAMBA via the functions
\verb+start_timer(n)+ and \verb+stop_timer(n)+;
a short usage sketch follows this list.
The timers use an internal class to measure elapsed time via the C runtime.
\end{itemize}
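For example, a MAMBA program might time a block of code as follows (a minimal sketch; \verb+sint+ is MAMBA's secret integer type):
\begin{verbatim}
start_timer(1)
a = sint(5) * sint(7)   # the work being timed
stop_timer(1)           # prints time since start_timer(1)
\end{verbatim}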
\documentclass[a4paper,12pt]{report}
\usepackage{amsmath,amsfonts,mathtools}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{hyperref}
\begin{document}
\title{MAT292 Abridged}
\author{Aman Bhargava}
\date{September 2019}
\maketitle
\tableofcontents
\section{Introduction}
The textbook and lectures for this course offer a great comprehensive guide for the
methods of solving ODE's. The goal here is to give a very concise overview of the things you
need to know (NTK) to answer exam questions. Unlike
some of our other courses, you don't need to be very intimately familiar with the derivations
of everything in order to solve the problems (though it certainly doesn't hurt). Think of this
as a really good cheat sheet.
\chapter{Qualitative Things and Definitions}
\section{Definitions}
\begin{enumerate}
\item \textbf{Differential Equation: } Any equation that contains a differential of dependent variable(s) with respect to any independent variable(s)
\item \textbf{Order: } The order of the highest derivative present.
\item \textbf{Autonomous: } When the definition of the $\frac{dy}{dt}$ doesn't contain $t$
\item \textbf{ODE and PDE: } Ordinary derivatives or partial derivatives.
\item \textbf{Linear Differential Equations: } an $n$th order linear ODE is of the form: $$\sum_{i=0}^{n} a_i(t)y^{(i)} = g(t)$$
\item \textbf{Homogeneous: } if $g(t) = 0$ for all $t$.
\end{enumerate}
\section{Qualitative Analytic Methods to Know}
\begin{enumerate}
\item Phase lines
\item Slope fields
\end{enumerate}
\section{Types of Equilibrium}
\begin{enumerate}
\item Asymptotic stable equilibrium
\item Unstable equilibrium
\item Semistable equilibrium
\end{enumerate}
\chapter{1st Order ODE's}
\section{Separable 1st Order ODE's}
If you can write the ODE as: $$\frac{dy}{dx} = p(x)q(y)$$
Then you can put $p(x)$ with $dx$ on one side and $q(y)$ with $dy$ on the other and
integrate them both to solve the ODE.
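For example, $\frac{dy}{dx} = xy$ separates as $\int \frac{dy}{y} = \int x\, dx$, so $\ln|y| = \frac{x^2}{2} + C$, i.e. $y = Ae^{x^2/2}$.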
\section{Method of Integrating Factors}
This is used to solve ODE's that can be put into the form
$$\frac{dy}{dt} + p(t)*y = g(t)$$
The product rule, integrated, reads $\int (f'(x)g(x) + f(x)g'(x))\, dx = f(x)g(x)$.
We can use an \textbf{integrating factor} equivalent to $e^{\int p(t) dt}$ to multiply
both sides and arrive at a form that can be integrated with ease using this reverse product
rule.
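For example, for $y' + 2y = 3$ the integrating factor is $e^{2t}$, so $(e^{2t}y)' = 3e^{2t}$, giving $e^{2t}y = \frac{3}{2}e^{2t} + C$ and hence $y = \frac{3}{2} + Ce^{-2t}$.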
\section{Exact Equations}
If the equation is of the form $$M(x, y) + N(x, y) \frac{dy}{dx} = 0$$
and $$M_y(x, y) = N_x(x, y)$$
then $\exists$ a function $f$ satisfying $$f_x(x, y) = M(x, y); f_y(x, y) = N(x, y)$$
\paragraph{The solution: } $f(x, y) = C$ where $C$ is an arbitrary constant.
\section{Modeling with First Order Equations}
These are some vague tips on how to solve these types of problems from textbook section
2.3
\begin{itemize}
\item To \textbf{create} the equation, state physical principles
\item To \textbf{solve}, solve the equation and/or find out as much as you can about the nature of the solution.
\item Try \textbf{comparing} the solution/equation to the physical phenomenon to `check' your work.
\end{itemize}
\section{Non-Linear vs. Linear DE's}
\paragraph{Theorem on Uniqueness of 1st Order Solutions}
$$y' + p(t)y = g(t)$$
There exists a unique solution $y=\Phi(t)$ for each starting point $(y_0, t_0)$ if
$p$, $g$ are continuous on the given interval.
\section{Population Dynamics with Autonomous Equations}
\paragraph{Autonomous: } $\frac{dy}{dt} = f(y)$
\subsection{Simple Exponential}
$\frac{dy}{dt} = ry$
Problem: doesn't take into account the upper bound for population/sustainability.
\subsection{Logistic equation}
$$\frac{dy}{dt} = (r-ay)y$$
Equivalent form:
$$\frac{dy}{dt} = r(1 - \frac{y}{k})y$$
$r$ is the \textit{intrinsic growth rate}.
\chapter{Systems of Two 1st Order DE's}
\section{Set Up}
Your first goal is to get the system in the form $$\frac{d \pmb{u}}{dt} = \pmb{Ku + b}$$
Where $\pmb{K}$ is a 2 by 2 matrix, $\pmb{u}$ is your vector of values you want to
predict, and $\pmb{b}$ is a $2$-long vector of constants.
\paragraph{More generally, } the equation is of the type
$$\frac{d\pmb{x}}{dt} = \pmb{P}(t)x + \pmb{g}(t)$$
Called a \textbf{first order linear system of two dimensions}. If $\pmb{g}(t) = \pmb{0} \forall t$
then it is called \textbf{homogenous}, else \textbf{non-homogenous}. We let $x$ be composed
of values $$\pmb{x} = \begin{bmatrix}
x \\
y \\
\end{bmatrix}$$
\section{Existence and Uniqueness of Solutions}
\paragraph{Theorem: } $\exists$ unique solution to $$\frac{d\pmb{x}}{dt} = \pmb{P}(t)x + \pmb{g}(t)$$
so long as the entries of $\pmb{P}(t)$ and $\pmb{g}(t)$ are continuous on the interval $I$ in question.
\subsection{Linear Autonomous Systems}
If the right side doesn't depend on $t$, it's autonomous. In this case, the autonomous
version looks (familiarly) like: $$\frac{d\pmb{x}}{dt} = \pmb{A}x + \pmb{b}$$
\paragraph{Equilibrium points} arise when $\pmb{A}x = -\pmb{b}$
\section{Solving}
\subsection{General Solution}
We start with $y' = \pmb{A}y+b$
\begin{itemize}
\item Find eigen values $\lambda$ s.t. $det(A-I\lambda) = 0$
\item Find eigen vectors $v$ s.t. $(A-I\lambda)v = 0$
\item Enter and simplify $y(t) = C_1 v_1 e^{\lambda_1 t} + C_2 v_2 e^{\lambda_2 t}$
\end{itemize}
\paragraph{Converting to a homogeneous equation: } Let $y_{eq}$ be the equilibrium value of $y$ that
can be found when $y' = 0 = Ay + b$. Writing
$$y = y_{eq} + \bar{y}$$ the shifted variable $\bar{y}$ satisfies the homogeneous system $\bar{y}' = A\bar{y}$.
\subsection{Special Case 1: Repeated Eigen Value}
Start in the same fashion as above. You will easily be able to find the eigen value and
at least one eigen vector. Then, the path diverges:
\paragraph{Case 1: Another can easily be found - } Now you find your $v_2$ and proceed.
\paragraph{Case 2: Another cannot easily be found - } You must use the following formula
to find your second vector if this is the case:
$$(A-I\lambda)v_2 = v_1$$
This is known as a ``generalized'' eigen vector.
\subsubsection{Final Form}
Your final form for this case is going to be rather different than the others (here $\lambda$ is the repeated eigen value):
$$x = C_1e^{\lambda t}v_1 + C_2e^{\lambda t}(tv_1 + v_2)$$
\subsection{Special Case 2: Two Complex Eigen Values}
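For $\lambda = a \pm bi$ with eigen vector $v = p \pm iq$, the real and imaginary parts of $e^{\lambda t}v$ give two real solutions:
$$x_1 = e^{at}(p\cos(bt) - q\sin(bt)), \quad x_2 = e^{at}(p\sin(bt) + q\cos(bt))$$
and the general solution is $x = C_1 x_1 + C_2 x_2$.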
\chapter{Numerical Methods}
$$\frac{dy}{dt} = f(t, y)$$
\section{Euler's Method}
We start with a first order ODE. Let us define a fixed step $\Delta t$.
$$y_{n+1} = y_n + \Delta t(f(t_n, y_n))$$
Global error: $|y(t_n)-y_n| \approx C\Delta t$ for some constant $C$.
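For example, with $y' = y$, $y_0 = 1$, $\Delta t = 0.1$: $y_1 = 1 + 0.1(1) = 1.1$ and $y_2 = 1.1 + 0.1(1.1) = 1.21$, versus the exact $y(0.2) = e^{0.2} \approx 1.2214$.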
\subsection{Basic Idea: Integrate The ODE}
$$\int_{t_n}^{t_{n+1}} \frac{dy}{dt} dt = \int_{t_n}^{t_{n+1}} f(t, y(t)) dt$$
Euler's method makes the following approximation.
$$\int_{t_n}^{t_{n+1}} f(t, y(t)) dt \approx \Delta t f(t_n, y_n)$$
But we can do better.
\subsubsection{Mean Value Theorem for Integrals}
If y is continuous on $[a, b]$ then $\exists c \in (a, b)$ so that
$$\frac{1}{b-a} \int_a^b g(t) dt = g(c)$$
Euler's method would just assume that $g(c)$ is at the far left hand side of the
Riemann sum, so we can improve upon this! If we can guess $c$ more accurately,
our final answer will be a lot better.
\paragraph{Since $c$} is more likely to be inside the interval $[t_n, t_{n+1}]$, we could
try the following estimations to improve upon Euler's method. We will now try \textbf{sampling}.
\section{Improved Euler Method}
Let $g(t) = f(t, y(t))$. We literally use the trapezoidal rule for this approximation.
$$y_{n+1}-y_n \approx \frac{\Delta t}{2}(f(t_n, y_n)+f(t_{n+1}, y_{n+1}))$$
Where we first predict $y_{n+1}$ with an Euler step:
$$y_{n+1} \approx y_n + \Delta t f(t_n, y_n)$$
\paragraph{Steps: }
\begin{itemize}
\item Evaluate $K_1 = f(t_n, y_n)$
\item Predict $u_{n+1} = y_n + \Delta t K_1$
\item Evaluate $K_2 = f(t_{n+1}, u_{n+1})$
\item Update $y_{n+1} = y_n + \Delta t \frac{K_1+K_2}{2}$
\end{itemize}
This method is considered \textbf{second order}, so $$|y(t_n)-y_n| \approx C(\Delta t)^2$$ (global error).
\paragraph{The expense } of a numerical method is roughly the \textbf{number of function calls to $f()$}.
Therefore, improved Euler's method comes at the cost of one more function evaluation of $f()$.
\section{Runge Kutta Method}
The modern workhorse of solving ODE's. It's 4th order, so it requires 4 function calls to $f$ per step.
\paragraph{Steps}
\begin{itemize}
\item $k_1 = f(t_n, y_n)$
\item $u_n = y_n + \frac{\Delta t}{2} k_1$ (half step)
\item $k_2 = f(t_n + \frac{\Delta t}{2}, u_n)$
\item $v_n = y_n + \frac{\Delta t}{2} k_2$
\item $k_3 = f(t_n+\frac{\Delta t}{2}, v_n)$
\item $w_n = y_n + \Delta t k_3$
\item $k_4 = f(t_{n+1}, w_n)$
\item $y_{n+1} = y_n + \Delta t(\frac{k_1}{6}+\frac{k_2}{3}+\frac{k_3}{3}+\frac{k_4}{6})$
\end{itemize}
\section{Above and Beyond 4th Order}
If we can just increase our accuracy by adding more functional evaluations, then why can't we just keep on
adding function evaluations and increasing the order?
\begin{tabular}{l|llllllllll}
\textbf{Order} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
\hline
\textbf{Min Function Evaluations} & 1 & 2 & 3 & 4 & \textbf{6} & 7 & \textbf{9} & 11 & 14 & ? \\
\end{tabular}
Answer: past 4 evaluations, it's not really worthwhile.
\chapter{Systems of First-Order Equations}
\section{Theory of First-Order Linear Systems}
\paragraph{$n\times n$ Linear System: } $$\vec{x}' = \pmb{P}(t)\vec{x} + \vec{g}(t)$$
\paragraph{Existence and Uniqueness Theorem: } If $\pmb{P},\vec{g}$ are continuous on $[a, b]$, there exists unique $\vec{x}(t)$ for the IVP.
For constant $n\times n $ matrix, we use $\pmb{A} = \pmb{P}(t)$.
\paragraph{SUPERPOSITION: } For linear systems, any linear combination of solutions is another solution.
\section{Wronskians}
$$W = W[x_1,x_2,..., x_n](t) = det(\pmb{X}(t))$$
Where $\pmb{X}(t)$ is an $n\times n$ matrix with column vectors being solutions to the problem.
\paragraph{THEOREM: } If each column vector $\vec{x}(t)$ solution is linearly independent, then $W[x_1, ..., x_n] \neq 0$ for all time in the
given interval. If $W(t) = 0$, that tells us that the solutions are not linearly independent.
\paragraph{THEOREM: } There exists at least one fundamental set of solutions for all linear systems.
\section{Fundamental Matrices}
If $\{\vec{x}_1(t), \vec{x}_2(t), ..., \vec{x}_n(t)\}$ are solutions to $\vec{x}' = \pmb{P}(t)\vec{x}$ then the \textbf{FUNDAMENTAL MATRIX} is
$$\pmb{X}(t) = [\vec{x}_1(t), ..., \vec{x}_n(t)]$$
\begin{enumerate}
\item It is invertible
\item Any solution to the IVP can be written as $\vec{x}(t) = \pmb{X}(t)\vec{c}$ where $\vec{c} \in \mathbb{R}^n$.
\item $\pmb{X}' = \pmb{P}(t)\pmb{X}$
\end{enumerate}
\section{Matrix Exponential}
Motivation: For $x' = ax$, the solution is $x = x_0 e^{at}$. What's the equivalent to $e^{at}$ for matrices?
\paragraph{Definition of Matrix Exponential: } $$e^{\pmb{A}t} = \sum_{k=0}^{\infty}\pmb{A}^k\frac{t^k}{k!}$$
That's not terribly useful, so we dive deeper. \textbf{THEOREM: }
$$e^{\pmb{A}t} = \pmb{\Phi(t)}$$
Where $\pmb{\Phi(t)} = \pmb{X}(t)\pmb{X}^{-1}(t_0)$ such that $\pmb{\Phi(t)} = \pmb{I}$ at $t = t_0$. In essence, it's a
special fundamental matrix.
\paragraph{Properties: } for $\pmb{A}, \pmb{B} \in \mathbb{R}^{n\times n}$:
\begin{enumerate}
\item $e^{A(t + \tau)} = e^{At}e^{A\tau}$
\item $Ae^{At} = e^{At}A$
\item $(e^{At})^{-1} = e^{-At}$
\item $e^{(A+B)t} = e^{At}e^{Bt}$ if $AB = BA$
\end{enumerate}
\subsection{Constructing Matrix Exponential}
Let $[\vec{x}_1(t), ..., \vec{x}_n(t)] = \pmb{X}(t)$ be a fundamental set of solutions. Then $$e^{At} = \pmb{X}(t)\pmb{X}^{-1}(t_0)$$
\paragraph{When A is non-defective } (i.e. has a complete set of eigen vectors and is diagonalizable):
$$X(t) = [e^{\lambda_1 t}V_1, \dots, e^{\lambda_n t}V_n]$$
$$e^{At} = V\,\mathrm{diag}(e^{\lambda_1 t}, e^{\lambda_2 t}, \dots, e^{\lambda_n t})\,V^{-1}, \quad V = [V_1, \dots, V_n]$$
\chapter{Second-Order Linear Equations}
These equations combine $t, y, y', y''$: $$y'' = f(t, y, y')$$ Initial conditions are specified as $y(0), y'(0)$.
\paragraph{Linearity: } If and only if it is of the form $y'' + p(t)y' + q(t)y = g(t)$.
\paragraph{Homogenous: } If $g(t) = 0$ for all $t$ in the interval.
\paragraph{Constant Coefficients: } $ay'' + by' + cy = g(t)$
\section{Dynamical System Formulation}
We can convert this into a first-order system by stating:
\begin{enumerate}
\item $x_1' = x_2$
\item $x_2' = f(t, x_1, x_2)$
\end{enumerate}
Where $x_1 = y$, and $x_2 = y'$. In vector notation:
$$\vec{x}' = \vec{f}(t, x_1, x_2) = [x_2,\ f(t, x_1, x_2)]^T$$
\section{Theory of Second Order Linear Homogenous Systems}
\paragraph{Existence: } There exists a solution for $y'' + p(t)y' + q(t)y = g(t)$ as long as $p, q, g$ are cts.
\subsection{Abel's Theorem}
Let $\vec{x}' = \pmb{P}(t)\vec{x}$. Then the Wronskian $$W(t) = c\,\exp\left(\int \mathrm{tr}(\pmb{P}(t))\, dt\right)$$
Where the trace is the sum of the diagonal entries.
\section{Linear Homogeneous Equations with Constant Coefficients}
$$ay'' + by' + cy = 0$$
$$\vec{x}' = \pmb{A}\vec{x} = \begin{bmatrix}
0 & 1 \\
\frac{-c}{a} & \frac{-b}{a} \notag
\end{bmatrix}
\vec{x}
$$
\subsection{Solution}
$$\vec{x} = \sum e^{\lambda_n t}\vec{V}_n$$
\begin{enumerate}
\item Find eigen values of $\pmb{A}$.
\item Vector $V_n$ is $$\begin{bmatrix}
1 \\
\lambda \\
\end{bmatrix}$$
\end{enumerate}
\subsection{Phase Portraits}
\begin{itemize}
\item Real + Negative $\to $ Asymptotically stable.
\item Real + Positive $\to $ Unstable.
\item Complex + Negative real part $\to $ Stable spiral in.
\item Complex + Positive real part $\to $ Unstable spiral out.
\end{itemize}
\textbf{ALWAYS CLOCKWISE}.
\section{Mechanical and Electrical Vibration}
Pretty much just what we did in physics.
\section{Method of Undetermined Coefficients}
$$y'' + y' + y = f(t)$$
This can be applied when $f(t)$ is one of the following:
\begin{enumerate}
\item $f(t) = e^{st}$: Try $y_p = ae^{st}$
\item $f(t) = \text{polynomial}^n$: Try $y_p = a_n t^n + a_{n-1}t^{n-1} + ...$
\item $f(t) = \sin t$: Try $y_p = c_1\cos(t) + c_2\sin(t)$
\item $f(t) = t\sin t$: Try $y_p = (a+bt)\cos(t) + (c+dt)\sin(t)$
\end{enumerate}
\subsection{Resonance}
The one situation that breaks the system is when the forcing term solves the homogeneous equation, e.g. $y'' - y = e^t$. We might try $y_p = ae^t$, but that will cancel out no matter what!
This is called \textbf{resonance}.
To get around this, we add in a factor of $t$: $y_p = ate^t$.
\section{Variation of Parameters}
$$y'' + B(t)y' + C(t)y = f(t)$$
First we must find two null solutions $y_1(t), y_2(t)$ that solve $y'' + by' + cy = 0$. Our final solution will be of the form:
$$y(t) = c_1(t)y_1(t) + c_2(t)y_2(t)$$
Where $c_1, c_2$ are ``varying parameters''. When we plug in the hypothesized $y(t)$, we get
$$c_1' y_1 + c_2' y_2 = 0$$
$$c_1' y_1' + c_2' y_2' = f(t)$$
From these two, we can solve for $c_1'$ and $c_2'$. Then:
$$y(t) = -y_1(t) \int_0^t \frac{y_2(\tau)f(\tau)}{W(\tau)} d\tau + y_2(t) \int_0^t \frac{y_1(\tau)f(\tau)}{W(\tau)} d\tau$$
Where $W(t) = \begin{vmatrix}
y_1 & y_2 \\
y_1' & y_2'
\end{vmatrix}$
\chapter{The Laplace Transform}
The value-add of the Laplace transform is that it lets you convert an ODE into a different form, solve that different form,
then convert back.
\section{Definition and Properties}
$$\mathcal{L}\{f(t)\} = F(S) = \int_0^{\infty} e^{-st} f(t)dt$$
\subsection{Linearity}
$$\mathcal{L}\{ c_1f_1 + c_2f_2\} = c_1\mathcal{L}\{ f_1 \} + c_2 \mathcal{L}\{ f_2 \}$$
\subsection{Exponential Order}
\paragraph{IF } $|f(t)| \leq Ke^{at}$ as $t\to\infty$ for some $K, a \in \mathbb{R}$ it is of exponential order.
\subsection{Existence of $\mathcal{L}\{ f \}$}
\paragraph{CONDITION I: } The function must be piecewise continuous.
\paragraph{CONDITION II: } The function must be of exponential order.
If these two hold, the Laplace transform is defined and approaches zero as $s\to\infty$.
\section{Properties of Laplace Transform}
\begin{enumerate}
\item If $c$ is a constant then $$\mathcal{L}\{ e^{ct}f(t) \} \to F(s-c)$$
\item Derivatives of $f(t)$:
% $$\mathcal{L}\{ f'(t) \} \to sF(s) - f(0)$$
% $$\mathcal{L}\{ f''(t) \} \to s^2 F(s) - sf(0)-f'(0)$$
$$\mathcal{L}\{ f^{(n)}(t) \} \to s^n \mathcal{L}\{ f(t) \} - s^{n-1} f(0) - \dots - sf^{(n-2)}(0) - f^{(n-1)}(0)$$
\item Multiplying by powers of $t$:
$$\mathcal{L}\{ t^n f(t) \} \to (-1)^n F^{(n)}(s)$$
$$\mathcal{L}\{ t^n \} \to \frac{n!}{s^{n+1}}$$
\end{enumerate}
\section{Inverse Laplace Transform}
$$f = \mathcal{L}^{-1}\{ F(s) \}$$
This is usually a 1-1 correspondence, so we use a lookup table.
\subsection{Partial Fractions}
\begin{enumerate}
\item Factor the denominator
\item Write in terms of sums of $\frac{X}{\text{factored term}}$
\item Solve for coefficients of $X$ terms.
\item Use linearity of $\mathcal{L}^{-1}\{\cdot\}$ to solve for $f(t)$.
\end{enumerate}
\paragraph{Rules for different types of factored denominators: }
\begin{itemize}
\item Single linear roots: $\frac{A}{s - c_1}$
\item Repeated linear roots: $\frac{A}{s - c_1} + \frac{B}{(s-c_1)^2} + ... + \frac{Z}{(s-c_1)^k}$
\item Quadratic roots: $\frac{A + Bs}{s^2 + c_1s + c_2}$
\end{itemize}
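For example, $F(s) = \frac{1}{s(s+1)} = \frac{1}{s} - \frac{1}{s+1}$, so $f(t) = \mathcal{L}^{-1}\{F(s)\} = 1 - e^{-t}$.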
\section{Solving ODE's with $\mathcal{L} \{\}$}
\paragraph{Steps: }
\begin{enumerate}
\item Convert to $F(s), Y(s)$ space with $\mathcal{L}\{\cdot\}$
\item Solve for $Y(s)$ in terms of $s$.
\item Solve for $\mathcal{L}^{-1} \{Y(s)\}$
\end{enumerate}
\subsection{Characteristic Polynomial}
Let $$ay'' + by' + cy = f(t)$$ with initial conditions $y(0)$, $y'(0)$.
If we let $Z(s) = as^2 + bs + c$:
$$Y(s) = \frac{(as + b)y(0) + ay'(0)}{Z(s)} + \frac{F(s)}{Z(s)}$$
In the general case, if $Z(s) = \sum_{n=0}^{k} a_ns^n$:
$$Y(s) = \frac{\sum_{n=1}^{k} a_n \left( s^{n-1}y(0) + s^{n-2}y'(0) + \dots + y^{(n-1)}(0) \right)}{Z(s)} + \frac{F(s)}{Z(s)}$$
\subsection{Systems of Differential Equations}
$$\vec{y}' = \pmb{A}\vec{y} + \vec{f}(t)$$ with $\vec{y}(0) = \vec{y}_0$.
We then (1) Take $\mathcal{L} \{\}$ of both sides and (2) re-arrange to find:
$$(s\pmb{I} - \pmb{A})\vec{Y}(s) = \vec{y}_0 + \vec{F}(s)$$
\begin{itemize}
\item $\vec{Y}(s)$ is just each function in $\vec{y}(t)$ run through laplace transform.
\item $\vec{F}(s)$ is just each function in $\vec{f}(t)$ run through laplace transform.
\end{itemize}
\paragraph{Therefore: }
$$\vec{Y}(s) = (s\pmb{I} - \pmb{A})^{-1} \vec{y}_0 + (s\pmb{I} - \pmb{A})^{-1}\vec{F}(s) $$
\section{Discontinuous and Periodic Functions}
\subsection{Heaviside function}
(a.k.a. unit step function) is $u(t)$:
\[ u(t) =
\begin{cases}
0 & \text{if } t < 0 \\
1 & \text{if } t \geq 0
\end{cases}
\]
\[ u_c(t) =
\begin{cases}
0 & \text{if } t < c \\
1 & \text{if } t \geq c
\end{cases}
\]
\[ u_{cd}(t) =
\begin{cases}
0 & \text{if } t < c \text{ or } t > d \\
1 & \text{if } c \leq t < d
\end{cases}
\]
\[
u_{cd} = u_c(t) - u_d(t)
\]
\paragraph{Laplace Transforms of Heaviside functions: }
$$\mathcal{L}\{ u_c(t) \} \to \frac{e^{-cs}}{s}$$
$$\mathcal{L}\{ u_{cd}(t) \} = \mathcal{L}\{ u_c(t) - u_d(t) \} \to \frac{e^{-cs} - e^{-ds}}{s}$$
\subsection{Time-Shifted Functions}
Consider $y = g(t) =
\begin{cases}
0 & \text{if } t < c \\
f(t-c) & \text{if } t \geq c
\end{cases}$. Therefore $g(t) = u_c(t)f(t-c)$
$$\therefore \mathcal{L}\{ u_c(t)f(t-c) \} \to e^{-cs}\mathcal{L}\{ f(t) \}$$
$$\therefore \mathcal{L}^{-1}\{ e^{-cs} F(s) \} \to u_c(t)f(t-c)$$
\subsection{Periodic Functions}
\paragraph{DEFINITION: } $f(t+T) = f(t) \forall t \in \mathbb{R}, T \in \mathbb{R} $
\paragraph{WINDOW FUNCTION: } $f_T(t) = f(t)[1-u_T(t)]$. It reproduces the first period and then zeros everywhere else.
The laplace transform is understandably simple $$\mathcal{L}\{f_T(t)\} \to \int_0^T e^{-st}f(t)dt$$
\paragraph{Laplace Transform of Periodic Functions: } $$F(s) = \frac{F_T(s)}{1-e^{-sT}}$$
\section{Discontinuous Forcing Functions}
$ay'' + by' + cy = g(t)$ still has the same process for solving with Laplace transforms as before, even with a discontinuous forcing function.
\subsection{Impulse Functions}
These functions show a forcing function that is zero everywhere other than in $[t, t+\epsilon]$, and is very large in the non-zero range.
$$I(\epsilon) = \int_{t_0}^{t_0+\epsilon} g(t) dt$$
After the forcing, the momentum of the system is $I$. We define $\delta_{\epsilon}(t) = \frac{u_0(t)-u_{\epsilon}(t)}{\epsilon}$ so that
$$g(t) = I_0\delta_{\epsilon}(t)$$
As $\epsilon \to 0$, we get instantaneous \textbf{unit impulse function} $\delta(t)$. The properties are as follows:
\begin{itemize}
\item $\delta(t-t_0) = \lim_{\epsilon \to 0} \delta_{\epsilon}(t-t_0)$
\item $\int_a^b f(t)\delta(t-t_0) dt = f(t_0)$
\item $\mathcal{L}\{\delta(t-t_0)\} \to e^{-st_0}$
\item $\mathcal{L}\{\delta(t)\} \to 1$
\item $\delta(t-t_0) = \frac{d}{dt} u(t-t_0)$
\end{itemize}
\section{Convolution Integrals}
\paragraph{Definition: } $$f * g = h(t) = \int_0^t f(t-\tau)g(\tau)d\tau$$
\subsection{Convolution Properties}
\begin{enumerate}
\item $f*g = g*f$
\item $f*(g_1 + g_2) = f*g_1 + f*g_2$
\item $(f*g)*h = f*(g*h)$
\item $f*0 = 0$
\end{enumerate}
\subsection{Convolution Theorem}
Let $F(s) = \mathcal{L}\{f(t)\}$, $G(s) = \mathcal{L}\{g(t)\}$. If $$H(s) = F(s)G(s) = \mathcal{L}\{h(t)\}$$
then $$h(t) = f*g = \int_0^t f(t-\tau)g(\tau) d\tau$$
\subsection{Free and Forced Response}
If we try solving $ay'' + by' + cy = g(t)$ with $y_0 = y(0)$, $y_1 = y'(0)$, we find that the Laplace transformed solution is:
$$
Y(s) = H(s)[(as+b)y_0 + ay_1]+H(s)G(s)
$$
where $H(s) = \frac{1}{Z(s)} = \frac{1}{as^2 + bs + c}$. Then, in the time domain, we get:
$$
y(t) = \mathcal{L}^{-1}\{H(s)[(as+b)y_0 + ay_1]\} + \int_0^t h(t-\tau)g(\tau)d\tau
$$
On the right, the \textbf{first term} is the solution to $ay'' + by' + cy = 0$. It is the \textbf{FREE RESPONSE}.
Meanwhile, the second term is the \textbf{forced response}.
\paragraph{In summary: } Total response = free + forced
$$Y(s) = H(s)[(as+b)y_0 + ay_1] + H(s)G(s)$$
$$y(t) = \alpha_1y_1(t) + \alpha_2y_2(t) + \int_0^t h(t-\tau)g(\tau)d\tau$$
\paragraph{Transfer function: } Function describing the ratio of the (transformed) forced response to the (transformed) input. In this case, it is $H(s)$. It has
all the characteristics of the system.
$h(t)$ is also called the \textbf{impulse response} because it is the response when an impulse is used so that $G(s) = 1$.
$y_g(t) = h(t)*g(t)$ is, therefore, the forced response in the time domain, while $Y_g(s) = H(s)G(s)$ is the forced response in the s-domain.
\paragraph{How to get system response in the time domain: }
\begin{enumerate}
\item Determine $H(s)$
\item Find $G(s)$
\item Construct $Y_g(s) = H(s)G(s)$
\item Convert to the time-domain with $\mathcal{L}^{-1}(Y_g(s))$
\end{enumerate}
\end{document}
% ****** Start of file apssamp.tex ******
%
% This file is part of the APS files in the REVTeX 4.1 distribution.
% Version 4.1r of REVTeX, August 2010
%
% Copyright (c) 2009, 2010 The American Physical Society.
%
% See the REVTeX 4 README file for restrictions and more information.
%
% TeX'ing this file requires that you have AMS-LaTeX 2.0 installed
% as well as the rest of the prerequisites for REVTeX 4.1
%
% See the REVTeX 4 README file
% It also requires running BibTeX. The commands are as follows:
%
% 1) latex apssamp.tex
% 2) bibtex apssamp
% 3) latex apssamp.tex
% 4) latex apssamp.tex
%
\documentclass[%
reprint,
%superscriptaddress,
%groupedaddress,
%unsortedaddress,
%runinaddress,
%frontmatterverbose,
%preprint,
%showpacs,preprintnumbers,
%nofootinbib,
%nobibnotes,
%bibnotes,
amsmath,amssymb,
aps,
%pra,
%prb,
%rmp,
%prstab,
%prstper,
%floatfix,
]{revtex4-1}
\usepackage{graphicx}% Include figure files
\usepackage{dcolumn}% Align table columns on decimal point
\usepackage{bm}% bold math
%\usepackage{hyperref}% add hypertext capabilities
%\usepackage[mathlines]{lineno}% Enable numbering of text and display math
%\linenumbers\relax % Commence numbering lines
%\usepackage[showframe,%Uncomment any one of the following lines to test
%%scale=0.7, marginratio={1:1, 2:3}, ignoreall,% default settings
%%text={7in,10in},centering,
%%margin=1.5in,
%%total={6.5in,8.75in}, top=1.2in, left=0.9in, includefoot,
%%height=10in,a5paper,hmargin={3cm,0.8in},
%]{geometry}
\begin{document}
\preprint{APS/123-QED}
\title{Introduction to Python}
\author {U24568 - Computational Physics}
%\email {[email protected]}
%\homepage {http://web.mit.edu/fac/www}
\affiliation {Dr. Karen Masters \\ University of Portsmouth}
\date{\today}
\date{\today}% It is always \today, today,
% but any date may be explicitly specified
\begin{abstract}
Summary of lecture notes, a record of instructions for launching python applications, and instructions for the first set of python exercises to work through.
\end{abstract}
%\pacs{Valid PACS appear here}% PACS, the Physics and Astronomy
% Classification Scheme.
%\keywords{Suggested keywords}%Use showkeys class option if keyword
%display desired
\maketitle
%\tableofcontents
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{\label{sec:intro} Introduction to Python}
In this section we gain familiarity with python (specifically python 3), including different methods to run python, by working on Physics problems. Remember you have not enrolled on a programming unit -- rather this is ``Computational {\bf Physics}'', so our emphasis here will be on using computers to solve physical problems. However a good basis in programming will provide both excellent transferable skills, and give you the skills to ensure your physics computing will go smoother (and faster) in the future. We chose to teach in python rather than the Matlab you learned in ``Introduction to Computational Physics'' because:
\begin{enumerate}
\item it gives you another language to list on you CV
\item it will give you confidence that you can pick up any language you need
\item it is a sought after language by the employers of physicists
\item it's free and open source (you don't need a license like for Matlab).
\end{enumerate}
%We will also have a short period of Maple to give you an alternative method to do computer algebra which is commonly employed by theoretical physicists, and during discussions on accuracy and speed of particularly high performance computing you will also have some exposure to the Fortran language.
%\begin{figure}
%\includegraphics[width=0.40\textwidth]{KeepCalmCodePython.png}%
%\caption{\label{}}
%\end{figure}
\section{Installing Python \label{sec:install}}
By far the simplest method to run python for scientific use is via the Anaconda distribution, which includes all the commonly used scientific packages such as Matplotlib, Scipy and Numpy, and takes care of all the relevant dependencies for you. It also comes with different ways to interact with python (see below).
\subsection{University Networked Computer: Windows}
Anaconda is already installed. To access it:
\begin{itemize}
\item Launch ``Aps Anywhere"
\item Search for ``Anaconda" (worth ``favouriting" it by clicking on the star)
\item Launch ``Anaconda" (this takes a while first time, and has a bunch of warnings, just keep and eye and click OK; it'll be quicker in future)
\item Launch Jupyter Notebook
\end{itemize}
\subsection{Your own computer/laptop}
You may wish to download Anaconda. It is free and available for Windows, OSX and Linux installation. Make sure you make a python3 environment.
\subsection{Via SciServer Computer}
This allows you to run python3 in Jupyter notebooks on any computer/device linked to the internet. It's also linked to some large astronomical databases.
\begin{itemize}
\item Navigate to {\tt http://www.sciserver.org/} and make a (free) account
\item Select ``Compute"
\item Start a ``New Container"
\item Start a python3 Notebook (from the dropdown ``New" menu at upper right).
\end{itemize}
\section{Running Python}
There are many different methods to run python programmes. In Anaconda the three ways you can work with python are directly from the command line (terminal window); using the Interactive Data Environment ``Spyder" (which is supposed to look a lot like Matlab); and using Jupyter Notebooks. The code you write will be identical; just the way you work on it and run it may differ.
\subsection{Jupyter Notebooks}
Jupyter Notebooks are a web-based application which provides a way to create and share documents containing live, updateable code. Jupyter can be used with multiple languages, but we'll focus on using it with python. Jupyter is included in the Anaconda distribution, and available online in SciServer. This is an excellent way to write code in a class as it allows you to integrate your notes with the code.
To launch Jupyter Notebook from a Portsmouth Networked Windows Machine (after you have launched Anaconda):
\begin{itemize}
\item Start Anaconda (see above), and in Anaconda Navigator Launch Jupyter Notebook. This should open a browser window from the Jupyter Server.
\item Tip: Make a folder on your account called ``Computational Physics"
\item Navigate to ``Computational Physics" in the Jupyter Browser
\item Start a New Python 3 Notebook (from the menu at upper right).
\item To end, select ``Close and halt" from the File menu
\end{itemize}
Let's try this:
\begin{enumerate}
\item Start up a Jupyter Notebook (see above, or use SciServer).
\item In the first box write a title (e.g. ``Python notes"), and some notes on what we're doing (change this to a ``Markdown" box).
\item In the second box write {\tt print("Hello World")} and run it.
\item "Save and Checkpoint" (from the File Menu). Do this frequently.
\item Now work through the beginning of the Python Programming for Physicists tutorial (Chapter 2 of Newman \cite{newman}) from Section 2.2 up to and including Exercise 2.2: ``Calculating the Altitude of a Satellite" writing the code and notes on it in a Juptyer Notebook.
\end{enumerate}
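For reference, here is a minimal sketch of the kind of calculation Exercise 2.2 asks for (the structure and variable names are my own choices, not Newman's):
\begin{verbatim}
# Altitude h of a satellite that orbits the Earth once every T seconds
from math import pi

G = 6.674e-11   # Newton's gravitational constant (m^3 kg^-1 s^-2)
M = 5.974e24    # mass of the Earth (kg)
R = 6371e3      # radius of the Earth (m)

T = float(input("Enter the orbital period T in seconds: "))
h = (G * M * T**2 / (4 * pi**2))**(1 / 3) - R
print("The altitude is", h, "metres")
\end{verbatim}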
\subsubsection{Github}
One nice feature of Jupyter Notebooks is how they enable the sharing of code (and outputs) via platforms like ``Github" ({\tt http://github.com}). I encourage you to get an account on ``Github" and start a repository for your Computational Physics work. Later in the term I will be using Github to share my own Notebooks of example code. You can find my ``Computational Physics Unit" repository at {\tt github.com/karenlmasters/ComputationalPhysicsUnit}.
\subsection{Spyder}
Spyder is the Scientific PYthon Development EnviRonment. It's one of many Integrated Development Environments for the python language, and it's distributed with the Anaconda python package. I have been told it's quite similar to the Matlab Development Environment, so this may be the most familiar way for you to run python. However, as 2nd year physicists I'd like to encourage you not to use a GUI environment like this -- my examples will all be in Jupyter Notebooks.
%\begin{enumerate}
%\item Start up Spyder (type {\tt spyder} in the terminal).
%\item Work through the beginning of the Python Programming for Physicists tutorial (Chapter 2 of Newman \cite{newman}) up to and including Exercise 2.2: "Calculating the Altitude of a Satellite" using Spyder as your IDE (instead of IDLE as suggested in the Chapter).
%\end{enumerate}
\subsection{Command Line}
For completeness I wish to mention this method of using python, which will be easiest if done under Linux (or Mac). Any python code saved in a ``code.py'' file (where ``code'' is just any name you choose) can be run this way. There are Linux text editors (e.g. {\tt nano}) which can edit these files directly (note that Jupyter Notebooks include formatting content which will not work -- you need to copy and paste just the coding parts of the Notebook to do this).
To run code at the command line you simply type:
\begin{verbatim}
> python code.py
\end{verbatim}
(where {\tt code.py} is the file name of your python code). This is most useful for code which takes a long time to run, or which you need to run with different inputs. Most experienced coders I know use this method (although Jupyter Notebooks are becoming more common).
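For example, a trivial script saved as {\tt code.py} (the file name and contents here are just an illustration) which takes an input from the command line might look like:
\begin{verbatim}
# code.py -- run as: python code.py 42
import sys

value = float(sys.argv[1])   # read the first command-line argument
print("You gave me", value, "and twice that is", 2 * value)
\end{verbatim}
This style makes it easy to re-run the same code with different inputs, as mentioned above.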
%Write a simple ``I love Physics" code (ie. a code which runs and prints ``I love Physics" to the terminal) in a text editor to run from the command line.
Tip: there are more advanced text editors available in Linux which colour-code the coding syntax. I often use a version of Emacs to write code for this reason.
\section{Python for Physics Tutorial}
Now please work through the remainder of Chapter 2 of Newman \cite{newman} using the python interface of your choice.
Tips:
\begin{itemize}
\item Save your Notebook with names you remember (e.g. NewmanChapter2)
\item Give your variables sensible names (so you can remember what they are)
\item Your code will be more efficient (and less bug-prone) if you define variables to have the correct types (integer, floating point, etc.) -- see the short example after this list.
\item Mistakes and error messages are normal in programming, and working out what they mean can be tricky (but is not impossible). Google (or your search engine of preference) will be your friend here, as will asking your peers (and your lecturers) for help.
\end{itemize}
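As a short illustration of the tips about names and types (the variable names here are arbitrary):
\begin{verbatim}
n_steps = 1000       # an integer: a count of iterations
step_size = 0.01     # a float: a physical step length in metres
total_distance = n_steps * step_size
print("Total distance:", total_distance, "m")
\end{verbatim}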
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\section{Physics Examples for your Portfolio}
%Please work through the following exercises writing commented code (as Jupyter Notebooks) to hand in for your portfolio.
%Please use all three methods we have been practicing to interact with Python (i.e. write one as a Python script in a text editor to run, do one in the Spyder IDE and one as a Jupyter Notebook. You may choose which method to run the fourth, exercise but please explain your choice. There is no correct answer, it's simply a function of programming style and personal preferences).
%2016
%1. Calculating the Altitude of a Satellite - Exercise 2.2 (Newman \citep{newman}, pg 30)
%2. Quantum potential step - Exercise 2.5 (Newman \citep{newman}, pg 36) -
%3. The semi-empirical mass formula - Exercise 2.10 (Newman \citep{newman}, pg 75)
%4. Make a user defined function to calculate binomial coefficients - Exercise 2.11 (Newman \citep{newman}, pg 82)
%\begin{figure*}
%\includegraphics[width=1.0\textwidth]{Exercise2_2.png}%
%\caption{\label{}}
%\end{figure*}
%\begin{figure*}
%\includegraphics[width=1.00\textwidth]{Exercise2_5.png}%
%\caption{\label{}}
%\end{figure*}
%\begin{figure*}
%\includegraphics[width=1.0\textwidth]{Exercise2_10.png}%
%\caption{\label{}}
%\end{figure*}
%\begin{figure*}
%\includegraphics[width=1.0\textwidth]{Exercise2_11.png}%
%\caption{\label{}}
%\end{figure*}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Place all of the references you used to write this paper in a file
% with the same name as following the \bibliography command
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bibliography{sample-paper}
\bibliographystyle{prsty}
\begin{thebibliography}{}
\bibitem{newman} Newman, M., Computational Physics - Revised and Expanded, [2013]
\bibitem{whypython} Why Python: http://lorenabarba.com/blog/why-i-push-for-python, [2014]
\bibitem{jupyter} Jupyter Notebooks: http://jupyter.org/, [2017]
\bibitem{github} GitHub: https://github.com, [2017]
%\bibitem{melissinos2003}Melissinos, A.C., Napolitano, J., Experiments in Modern
% Physics - 2nd Edition, Academic Press, [2003]
%\bibitem{bevington2003}Bevington and Robinson, Data Reduction and
% Error Analysis for the Physical Sciences - 3rd Edition, McGraw-Hill,
% [2003]
%\bibitem{pritchard1990}Professor D. Pritchard, Personal Communication
\end{thebibliography}
\end{document}
%
% ****** End of file apssamp.tex ******
"alphanum_fraction": 0.7378044154,
"avg_line_length": 51.48828125,
"ext": "tex",
"hexsha": "b2be227fd0b40ff18dd746a5e79259e8a2b1192b",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2018-10-30T13:50:28.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-09-29T15:05:58.000Z",
"max_forks_repo_head_hexsha": "fbfd89d1dd57c84b88ef5a50c69614b1db2630e6",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "karenlmasters/ComputationalPhysicsUn",
"max_forks_repo_path": "IntroductiontoPython/introduction-python.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "fbfd89d1dd57c84b88ef5a50c69614b1db2630e6",
"max_issues_repo_issues_event_max_datetime": "2017-10-30T10:57:16.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-10-25T09:08:44.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "fizisist/computational-physics-portsmouth",
"max_issues_repo_path": "IntroductiontoPython/introduction-python.tex",
"max_line_length": 631,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "fbfd89d1dd57c84b88ef5a50c69614b1db2630e6",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "fizisist/computational-physics-portsmouth",
"max_stars_repo_path": "IntroductiontoPython/introduction-python.tex",
"max_stars_repo_stars_event_max_datetime": "2021-04-14T04:48:22.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-09-29T15:05:57.000Z",
"num_tokens": 3262,
"size": 13181
} |
\subsection*{Data Collection}
A Struck Integrated Systems (SIS) 3302 DAQ module was used to collect data. This module has 16-bit ADCs and an internal 100 MHz clock which was used for all of the measurements. Preamplifier output signals were fed directly to the DAQ module without additional processing. These output signals were digitized and used to perform this analysis. Additionally, the on-board shaping amplifier was used to retrieve event energies (in ADC units).
The trigger threshold value was set so that gamma-ray pulses with energies as low as approximately 50 keV could be collected without excessive noise and without exceeding the voltage range of the DAQ. Data were collected with the aim of keeping pile-up effects negligible. The event rate was chosen to be approximately 800 Hz (triggers) for a system with a 100 MHz clock (10 ns sampling time) and a preamplifier pulse decay time of roughly 50 $\mu$s.
Before further processing, each pulse was baseline corrected. This was done by subtracting the average value of the first 15 samples from all of the samples in that pulse.
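A minimal sketch of this correction, assuming each pulse is stored as a NumPy array (the function name is illustrative):
\begin{verbatim}
import numpy as np

def baseline_correct(pulse, n_baseline=15):
    """Subtract the mean of the first n_baseline samples."""
    return pulse - np.mean(pulse[:n_baseline])
\end{verbatim}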
\subsection*{Energy Calibration}
Before taking spectral data, each channel must be calibrated. The 661.7 keV line from ${}^{137}$Cs and the 59.5 keV line from ${}^{241}$Am were used to perform a linear energy calibration.
Four calibration data sets were taken. First an ${}^{241}$Am source was placed near the outward face of detector 1. Next, the source was moved to the outward face of detector 2. This was repeated with a ${}^{137}$Cs source. This was done to ensure that all channels would have spectral data for both sources.
Two spectra were plotted for each channel, one using both ${}^{241}$Am data sets combined, and the other using both ${}^{137}$Cs data sets combined. A region of interest around each photopeak was selected by visual inspection of the relevant spectrum. These regions were each fit with the sum of a Gaussian function and a linear function (to account for background) using the built-in models in LMFIT \cite{LMFIT}. A linear energy calibration was used.
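A sketch of this kind of peak fit using LMFIT's built-in models (the data below are synthetic and the initial guesses illustrative, not the values used in the analysis):
\begin{verbatim}
import numpy as np
from lmfit.models import GaussianModel, LinearModel

# Synthetic region of interest around the 661.7 keV photopeak
energies = np.linspace(650.0, 675.0, 100)
counts = (40.0 * np.exp(-0.5 * ((energies - 661.7) / 1.0) ** 2)
          + 5.0 - 0.1 * (energies - 650.0))

model = GaussianModel(prefix='g_') + LinearModel(prefix='bg_')
params = model.make_params(g_center=661.7, g_sigma=1.0,
                           g_amplitude=100.0,
                           bg_slope=0.0, bg_intercept=5.0)
result = model.fit(counts, params, x=energies)
print(result.params['g_fwhm'].value)   # the FWHM is reported directly
\end{verbatim}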
Channels which had fewer than 50 total counts around the ${}^{137}$Cs peak or showed other strange behavior (visually observed) in their spectra were not calibrated, and were not used in any of the analysis shown. This included 28 of the 152 channels. Most of these channels correspond to strips near the edge of the detectors which have been disconnected due to high leakage currents. However, some may be recoverable with higher statistics. The channel with the abnormally high FWHM value in figure \ref{fwhm} had very few events in the cesium photopeak region and a very flat distribution, leading to poor, inaccurate fitting. Channels which are not plotted were not calibrated.
The energy resolution of each channel is determined from the FWHM of the cesium peak. This is shown in figure \ref{fwhm}. Each individual channel had a relatively low number of counts in the photopeak. The error bars shown correspond to fitting errors. Low statistics in the peaks limited this procedure (there were roughly 60 counts for a single channel). The average energy resolution was found to be 2.39 keV, with a standard deviation of 0.39 keV. The channels with high FWHM values tend to be those with a small number of counts in the peak. The variation between channels would likely decrease with better statistics. Ideally, all strips would have the same energy resolution. Realistically, however, we expect some strips, in particular those on the outer edges of the detector, to have degraded energy resolution. The DC face of detector 1 contains channels 0 through 37. Here we see a u-shaped trend in the energy resolution, potentially demonstrating the effects of higher leakage current near the edges. The DC face of detector 2 contains channels 38 through 75. Here we also see a slight u-shaped curve, though the trend is not definite. The AC face of detector 1 spans channels 76 through 113 and the AC face of detector 2 spans channels 114 to 151.
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\textwidth]{./figures/energy_res.pdf}
\caption{The energy resolution for each channel of two double-sided strip detectors. Channels with a 0 FWHM are those which were not calibrated due to low statistics, high leakage currents, or other issues. The error bars shown correspond to fitting errors.}
\label{fwhm}
\end{centering}
\end{figure}
\subsection*{Timing}
For this and all following sections and analysis, the events used were from a ${}^{137}$Cs source positioned several decimeters away from the DC face of detector 1. ${}^{137}$Cs was used to ensure events throughout the potential range of detector depth. Additionally, these signals have greater SNR, which makes it easier to properly trigger on and correlate signals.
A distribution of trigger times over all events and both detectors was plotted. For a Poisson process, the distribution of times between arrivals is exponential, peaked at zero and falling more steeply with increasing rate (the decay time constant of the exponential function is $1/\text{(mean rate)}$).
The distribution for experimental triggers does not follow this trend exactly: there is an excess of events near zero. It is found that most events occur within 20 nanoseconds of each other ($\approx$ 82\%, as seen in figure \ref{timehist}). This is probably due to several effects. For each event which causes a trigger on one strip, it is very likely that triggers will be seen on other strips. For each event, ideally we would see at least two strips trigger, at least one on each side of the detector. Due to charge sharing and image charges on neighboring strips, more than just these two strips can register an event. Additionally, triggers due to noise and random coincidences will also increase the rate of event triggers. Lastly, there is some uncertainty in the timestamps used to calculate the time between events. There is likely some variation in the trigger time recorded by the SIS modules, which will depend on the parameters of the on-board fast trapezoidal filter (used for triggering) and on event topology. The effect of this was not evaluated in this work.
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\textwidth]{./figures/time-to-next-event.pdf}
\caption{The time-to-next-event histogram.}
\label{timehist}
\end{centering}
\end{figure}
To select single-site events, we required triggers within 70 ns of each other on opposite faces of a detector crystal (see the next subsection for details on the coincidence window selection). This requirement, along with full energy deposition, provided the selection criteria for a single-site event.
For each event, each of the two pulses (from the opposite faces) was smoothed using a Savitzky-Golay filter \cite{scipy}. The last point where the smoothed signal did not exceed 50 percent of its maximum value, together with 6 neighboring points (3 to each side), was fit with a linear function. The time at which this linear function crossed 50 percent was calculated and used as $t50$. The $t50$ value of one face was subtracted from the other to find $\Delta t50$. Here the $t50$ of the electrons was subtracted from that of the holes ($t50_{cathode}- t50_{anode}$). The difference in trigger time was added to $\Delta t50$ to find the difference in signal arrival times.
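A sketch of this $t50$ extraction (the smoothing window and polynomial order here are illustrative choices, not necessarily those used in the analysis):
\begin{verbatim}
import numpy as np
from scipy.signal import savgol_filter

def find_t50(pulse, dt=10.0):
    """50%-crossing time of a pulse; dt is the sample spacing in ns."""
    smooth = savgol_filter(pulse, window_length=15, polyorder=3)
    half = 0.5 * smooth.max()
    idx = np.where(smooth <= half)[0][-1]     # last sample below 50%
    sel = slice(max(idx - 3, 0), idx + 4)     # 7 points around crossing
    t = np.arange(len(pulse)) * dt
    slope, intercept = np.polyfit(t[sel], smooth[sel], 1)
    return (half - intercept) / slope         # where the line hits 50%
\end{verbatim}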
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\textwidth]{./figures/t50_fitting.pdf}
\caption{Each raw signal (blue) is smoothed with a Savitzky-Golay filter (orange). The region around the point at which the smoothed signal exceeds half its maximum is fit with a line (green) to extract the t50 value (red point).}
\label{fit}
\end{centering}
\end{figure}
\subsection*{ADL3 Model}
A model of the detector was built using ADL3 (AGATA Detector Library v.3), which uses finite element methods to calculate fields (electric field, electric potential, weighting potential) in the given geometry \cite{adl3}. A 3D model was used, modeling 5 strips on each side. The geometry modeled is shown in figure \ref{wpot}, which depicts the weighting potential of the central strip on the DC side.
This model was used to calculate predicted signals for events taking place at various depths, centered on a strip on either face. A map of the simulated interaction depths is shown in figure \ref{positions}, where the interaction points are superimposed on the weighting potential plotted in figure \ref{wpot}. Interactions were simulated every 0.1 mm.
The difference in t50 value (the time at which the signal reaches half of its maximum) between signals on the central electrode on the AC and DC sides was found for each event. This corresponds roughly to the expected difference in trigger times. In truth, this is an optimistic assumption since effects like the preamplifier response, jitter, charge trapping, etc. are not taken into account. Using this method, the maximum difference in t50 between the two faces of the crystal is $\approx$ 62 ns. In accordance with this, a coincidence window of 70 ns was used. This slight broadening of the time window should not affect the false coincidence rate since a very small fraction of events occur with triggers between 62 and 70 ns ($<$ 0.1 $\%$). The expected difference in trigger time is plotted in figure \ref{t50depth} as a function of interaction depth. The average difference is 31 ns.
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\textwidth]{./figures/Wpot03.pdf}
\caption{The weighting potential of the central strip on the DC side of the modeled detector geometry.}
\label{wpot}
\end{centering}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\textwidth]{./figures/positions.pdf}
\caption{The simulated interaction positions, superimposed on the weighting potential shown in figure \ref{wpot}.}
\label{positions}
\end{centering}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\textwidth]{./figures/deltat50_vs_depth.pdf}
\caption{The expected difference in trigger time between the two faces as a function of interaction depth.}
\label{t50depth}
\end{centering}
\end{figure}
\subsection*{Depth Determination}
\subsubsection*{Simple Linear Fit}
The maximum $\Delta t50$ was seen to be roughly 210 ns; the minimum $\Delta t50$ was seen to be roughly -160 ns. For a rough determination of the depth, the maximum value is assumed to correspond to the anode and the minimum value to the cathode. A linear fit is applied to determine intermediate position values. This is not a fully representative treatment. It fails to take into account effects due to the non-linear weighting potential directly near the electrodes, different charge carrier mobilities, realistic charge transport, and other effects. However, for the purposes of this work, we make this crude assumption following \cite{amman}.
\subsubsection*{Linear Fit}
A different linear interpretation was used as well. Following \cite{cci21}, assuming saturation velocity for both charge carriers, one can use a linear function to relate depth ($z$) to $\Delta t50$:
\begin{equation}
z = z_0 + k \Delta t50
\end{equation}
where $z$ is the depth of the interaction, $z_0$ is a constant depth which is slightly offset from the midpoint of the detector to account for differences in electron and hole velocities, and $k$ is a proportionality factor. $z_0$ and $k$ were experimentally determined for detector 1 to be 5.2 mm and 0.04 mm/ns respectively, in \cite{cci21}. This offset of 5.2 mm was adjusted to 5.95 mm to better fit the data set.
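As a sketch, this relation can be applied directly with the constants quoted above (the function name is illustrative):
\begin{verbatim}
def depth_from_dt50(dt50_ns, z0_mm=5.95, k_mm_per_ns=0.04):
    """Interaction depth (mm) from the t50 difference (ns),
    assuming saturation velocity for both charge carriers."""
    return z0_mm + k_mm_per_ns * dt50_ns
\end{verbatim}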
\subsubsection*{Library Method}
Using the ADL3 model and simulated signals described earlier, the depth was calculated using the signal library method. Here, experimental signals were compared to simulated signals to find the simulated signal that most closely matched the measured signal, and that signal's interaction depth was assumed.
For each event, the signal on a detector face was compared to each simulated signal for that face in the library. The measured signal was shifted in time to most closely match the simulated signal to which it was being compared. This is done because the true start time of the experimental event is not known. In this way, this depth determination method is disentangled from the $\Delta t50$ of an event, and is determined purely from signal shapes.
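A sketch of this matching procedure (the residual metric, shift search range, and names are illustrative assumptions):
\begin{verbatim}
import numpy as np

def best_library_depth(measured, library, depths, max_shift=20):
    """Depth of the simulated signal that best matches the pulse.
    library: 2D array of simulated signals, one row per depth."""
    best_residual, best_depth = np.inf, None
    for sim, depth in zip(library, depths):
        for shift in range(-max_shift, max_shift + 1):
            residual = np.sum((np.roll(measured, shift) - sim) ** 2)
            if residual < best_residual:
                best_residual, best_depth = residual, depth
    return best_depth
\end{verbatim}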
% TODO: redo this plot to have labels and to not be concatenated
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\textwidth]{./figures/simulated_signals.pdf}
\caption{The signals from both faces of the detector are shown here. The positive signals correspond to the charge induced on the DC electrode, and the negative signals correspond to that induced on the AC electrode. The blue signals correspond to an event which takes place very near to the DC electrode: the DC signal rises very quickly, and the AC very slowly. The red signals correspond to an event which takes place very near the AC electrode: now the AC signal rises very quickly and the DC slowly.}
\label{signals}
\end{centering}
\end{figure}
% TODO: redo this plot to have labels
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\textwidth]{./figures/signal_shift.pdf}
\caption{Here a measured signal (green) has been shifted in time to match a simulated signal (blue). The signal after shifting is shown as a red dashed line. The measured signal matches the library signal better after shifting.}
\label{shift}
\end{centering}
\end{figure}
% ------------------------------------------------------------
% ------------------------------------------------------------
% Insert the common KETCube AppNote Defines
% ------------------------------------------------------------
% ------------------------------------------------------------
\input{resources/appNotes/defs.tex}
\title{\UWBLogo KETCube AppNote 003:\\ Voltage Measurement up to 100V DC (\vhCurrentVersion)}
\author{Author: \vhListAllAuthorsLongWithAbbrev}
\date{Version \vhCurrentVersion\ from \vhCurrentDate}
% ------------------------------------------------------------
% ------------------------------------------------------------
% Insert the common KETCube AppNote Head
% ------------------------------------------------------------
% ------------------------------------------------------------
\input{resources/appNotes/head.tex}
% ------------------------------------------------------------
% ------------------------------------------------------------
% BEGIN of the KETCube appNote Content
% ------------------------------------------------------------
% ------------------------------------------------------------
\section*{About this Document}
\input{resources/about.tex}
This document describes a simple yet powerful extension board, with only four discrete components, for voltage measurement up to 100V DC.
\setcounter{tocdepth}{1}
\tableofcontents
\clearpage
\listoffigures
\listoftables
\begin{versionhistory}
\vhEntry{03/2018}{03.03.2018}{JB}{Initial version}
\vhEntry{05/2018}{07.05.2018}{JB|KV|MU}{Text review, minor fixes}
\end{versionhistory}
% history table ... do not number
\setcounter{table}{0}
\clearpage
\pagenumbering{arabic}
\pagestyle{headings}
\clearpage
\section{Measurement Principle}
The measurement is based on a simple Voltage divider -- see Figure \ref{fig:mp:voltDiv}.
\marginlabel{\captionof{figure}{Voltage Divider}\label{fig:mp:voltDiv}}
\raisebox{-\height}{\includegraphics[width=0.3\paperwidth]{volt_divider.pdf}}
For the potential $U$, the following equation holds:
\begin{equation}
U = U_1 + U_2
\end{equation}
When $U_2$ is known, the following equation holds for $U$:
\begin{equation}
U = U_2 + \frac{U_2 \cdot R_1}{R_2}
\end{equation}
For current $I$, the following equation holds:
\begin{equation}
I = \frac{U_1}{R_1} = \frac{U_2}{R_2} = \frac{U}{R_1 + R_2}
\end{equation}
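For instance, with the component values used on the extension board ($R_1 = 1\,\mathrm{M}\Omega$, $R_2 = 24\,\mathrm{k}\Omega$ -- see the next section), the worst-case current at $U = 100$~V is
\begin{equation}
I = \frac{U}{R_1 + R_2} = \frac{100~\mathrm{V}}{1.024 \times 10^{6}~\Omega} \approx 98~\mu\mathrm{A},
\end{equation}
consistent with the loading current noted later in this document.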
% ============================
\clearpage
\section{Extension Board}
The extension board contains the voltage divider and additional components to enhance circuit protection -- see Figure \ref{fig:eb:sch}.
\subsection{Schematic}
\marginlabel{\captionof{figure}{Extension board schematic and interfacing with KETCube}\label{fig:eb:sch}}
\raisebox{-\height}{\includegraphics[width=0.4\paperwidth]{schematic.pdf}}
\begin{table*}[!ht]
\begin{tabular}{| p{2cm} | p{2cm} | p{1.5cm} |}
\hline
\rowcolor{SeaGreen3!30!} {\bf Reference} & {\bf Value} & {\bf Unit} \\
\hline
\hline
R1 & 1M & $\Omega$ \\
\hline
R2 & 24k & $\Omega$ \\
\hline
R3 & 1k & $\Omega$ \\
\hline
D1 & 1N4007 & -- \\
\hline
\end{tabular}
\addcontentsline{lot}{table}{Extension board components}
\label{tab:eb:values}
\end{table*}
\subsection{KETCube Interface}
The extension board can be connected to KETCube via mikroBUS or KETCube socket.
Connect $ADCin$ to KETCube $AN$ pin and ground to KETCube $GND$ pin.
Make sure that $Vref$ on the KETCube main board is connected to the supply voltage -- see the KETCube specification \cite{ZCU:KETCube:05-2018}.
\clearpage
\subsection{Operation}
The measured voltage should be connected to pads $A$ (the positive terminal) and $B$ (the negative -- common -- terminal) respectively. The polarity of the measured voltage is $A \rightarrow B$.
When the measured voltage source is connected correctly, a small current flows from $A$ to $B$ and the partial voltage at $ADCin$ is measured to determine $A \rightarrow B$ voltage.
When the measured voltage source is connected incorrectly ($B \rightarrow A$), no current flows through the circuit and the partial voltage at $ADCin$ is equal to 0V.
KETCube \docKCModName{ADC} uses a 12-bit ADC, which results in a measurement resolution of $\approx$ 0.025V.
\docNote{The measured voltage source is constantly loaded by current $I \leq 100 ~\mu$A}
\subsection{Protections}
\subsubsection*{Over-Current}
The resistor $R_3$ protects the MCU pin by limiting the current into its internal protection diodes, which clamp the input such that:
\begin{equation}
U_{ADCin} \leq U_{VDD} + U_{protection~diode~forward~voltage}
\end{equation}
\subsubsection*{Wrong Polarity}
When the polarity of the voltage source connected to the pads is wrong, i.e. $B \rightarrow A$, the diode $D1$ ensures that the potential at $ADCin$ will be equal to the KETCube ground.
% ============================
\clearpage
\section{Absolute Maximum Ratings}
\begin{table*}[!ht]
\hspace*{-4cm}
\begin{tabular}{| p{3.5cm} | p{1.5cm} | p{2cm} | p{2cm} | p{2cm} | p{1cm} |}
\hline
\rowcolor{SeaGreen3!30!} {\bf Parameter} & {\bf Symbol} & {\bf MIN} & {\bf TYP} & {\bf MAX} & {\bf UNIT} \\
\hline
\hline
Resolution & $U_{res}$ & -- & 0.025 & -- & V\\
\hline
\hline
Supply Voltage & VDD & 2.5\footnotemark & \multicolumn{2}{l|}{see KETCube spec. \cite{ZCU:KETCube:05-2018}} & V \\
\hline
1N4007 Forward voltage & $U_{DFW}$ & -- & 0.6 & -- & V\\
\hline
1N4007 DC blocking voltage & $U_{DCB}$ & -- & -- & 1000 & V\\
\hline
$A \rightarrow B$ voltage & $U_{A \rightarrow B}$ & $U_{DFW} + U_{res}$ & -- & 100 & V\\
\hline
$B \rightarrow A$ voltage & $U_{B \rightarrow A}$ & -- & -- & $U_{DCB}$ & V\\
\hline
$A \rightarrow B$ current & $I$ & -- & -- & 100 & $\mu$A\\
\hline
\end{tabular}
\addcontentsline{lot}{table}{Absolute Maximum Ratings}
\label{tab:spec:AMR}
\end{table*}
\footnotetext{Minimum supply voltage for full range measurement: 0V -- 100V}
% ============================
\clearpage
\section{KETCube Settings}
In KETCube terminal (see KETCube specification \cite{ZCU:KETCube:05-2018}), enable \docKCModName{ADC} and configure any module for data delivery:
\begin{docCodeExample}
\begin{verbatim}
>> enable ADC
>> enable LoRa
>> set LoRa appEUI 001122 ...
...
\end{verbatim}
\end{docCodeExample}
% ============================
\section{Measured Voltage Computation}
The KETCube ADC module measures the $U_{ADCin}$ voltage, but the quantity of interest is $U_{A \rightarrow B}$. Thus $U_{A \rightarrow B}$ must be computed from $U_{ADCin}$ by using the following equation:
\[
U_{A \rightarrow B} =
\begin{dcases}
\leq U_{DFW}, & \text{if } U_{ADCin} = 0\\
U_{DFW} + U_{ADCin} + \frac{U_{ADCin} \cdot 10^6}{24 \cdot 10^3}, & \text{otherwise}
\end{dcases}
\]
\docNote{The extension board enables measurement for $U_{A \rightarrow B} \geq U_{DFW}$ only.}
\docNote{When this extension board is used in conjunction with {\it LoRa} module, apply the above equation to the received value on the application server.}
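As a sketch of this server-side computation (the reference voltage and bit depth are assumptions based on the ratings above; the names are illustrative):
\begin{docCodeExample}
\begin{verbatim}
U_DFW = 0.6            # 1N4007 forward voltage (V), typical
R1, R2 = 1.0e6, 24e3   # divider resistors (ohm)

def u_ab_from_adc(adc_counts, vref=2.5, bits=12):
    """Reconstruct U_A->B (V) from a raw ADC reading (a sketch)."""
    u_adcin = adc_counts / (2**bits - 1) * vref
    if u_adcin == 0:
        return None    # only U_A->B <= U_DFW can be inferred
    return U_DFW + u_adcin + u_adcin * R1 / R2
\end{verbatim}
\end{docCodeExample}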
% ============================
\clearpage
\bibliographystyle{IEEEtran}
\bibliography{IEEEabrv,resources/sources}
% ============================
\input{resources/license.tex}
% File: revise.tex
% Created: Wed Oct 27 02:00 PM 2018 P
% Last Change: Wed Oct 27 02:00 PM 2018 P
%
%
% Copyright 2007, 2008, 2009 Elsevier Ltd
%
% This file is part of the 'Elsarticle Bundle'.
% ---------------------------------------------
%
% It may be distributed under the conditions of the LaTeX Project Public
% License, either version 1.2 of this license or (at your option) any
% later version. The latest version of this license is in
% http://www.latex-project.org/lppl.txt
% and version 1.2 or later is part of all distributions of LaTeX
% version 1999/12/01 or later.
%
% The list of all files belonging to the 'Elsarticle Bundle' is
% given in the file `manifest.txt'.
%
% Template article for Elsevier's document class `elsarticle'
% with numbered style bibliographic references
% SP 2008/03/01
%
%
%
% $Id: elsarticle-template-num.tex 4 2009-10-24 08:22:58Z rishi $
%
%
%\documentclass[preprint,12pt]{elsarticle}
\documentclass[answers,11pt]{exam}
% \documentclass[preprint,review,12pt]{elsarticle}
% Use the options 1p,twocolumn; 3p; 3p,twocolumn; 5p; or 5p,twocolumn
% for a journal layout:
% \documentclass[final,1p,times]{elsarticle}
% \documentclass[final,1p,times,twocolumn]{elsarticle}
% \documentclass[final,3p,times]{elsarticle}
% \documentclass[final,3p,times,twocolumn]{elsarticle}
% \documentclass[final,5p,times]{elsarticle}
% \documentclass[final,5p,times,twocolumn]{elsarticle}
% if you use PostScript figures in your article
% use the graphics package for simple commands
% \usepackage{graphics}
% or use the graphicx package for more complicated commands
\usepackage{graphicx}
% or use the epsfig package if you prefer to use the old commands
% \usepackage{epsfig}
% The amssymb package provides various useful mathematical symbols
\usepackage{amssymb}
% The amsthm package provides extended theorem environments
% \usepackage{amsthm}
\usepackage{amsmath}
% The lineno packages adds line numbers. Start line numbering with
% \begin{linenumbers}, end it with \end{linenumbers}. Or switch it on
% for the whole article with \linenumbers after \end{frontmatter}.
\usepackage{lineno}
% I like to be in control
\usepackage{placeins}
% natbib.sty is loaded by default. However, natbib options can be
% provided with \biboptions{...} command. Following options are
% valid:
% round - round parentheses are used (default)
% square - square brackets are used [option]
% curly - curly braces are used {option}
% angle - angle brackets are used <option>
% semicolon - multiple citations separated by semi-colon
% colon - same as semicolon, an earlier confusion
% comma - separated by comma
% numbers- selects numerical citations
% super - numerical citations as superscripts
% sort - sorts multiple citations according to order in ref. list
% sort&compress - like sort, but also compresses numerical citations
% compress - compresses without sorting
%
% \biboptions{comma,round}
% \biboptions{}
% Katy Huff addtions
\usepackage{xspace}
\usepackage{color}
\usepackage{multirow}
\usepackage[hyphens]{url}
\usepackage[acronym,toc]{glossaries}
\include{acros}
\makeglossaries
%\journal{Annals of Nuclear Energy}
\begin{document}
%\begin{frontmatter}
% Title, authors and addresses
% use the tnoteref command within \title for footnotes;
% use the tnotetext command for the associated footnote;
% use the fnref command within \author or \address for footnotes;
% use the fntext command for the associated footnote;
% use the corref command within \author for corresponding author footnotes;
% use the cortext command for the associated footnote;
% use the ead command for the email address,
% and the form \ead[url] for the home page:
%
% \title{Title\tnoteref{label1}}
% \tnotetext[label1]{}
% \author{Name\corref{cor1}\fnref{label2}}
% \ead{email address}
% \ead[url]{home page}
% \fntext[label2]{}
% \cortext[cor1]{}
% \address{Address\fnref{label3}}
% \fntext[label3]{}
\title{Modeling and Simulation of Online Reprocessing in the Thorium-Fueled
Molten Salt Breeder Reactor\\
\large Response to Review Comments}
\author{Andrei Rykhlevskii, Jin Whan Bae, Kathryn D. Huff}
% use optional labels to link authors explicitly to addresses:
% \author[label1,label2]{<author name>}
% \address[label1]{<address>}
% \address[label2]{<address>}
%\author[uiuc]{Kathryn Huff}
% \ead{[email protected]}
% \address[uiuc]{Department of Nuclear, Plasma, and Radiological Engineering,
% 118 Talbot Laboratory, MC 234, Universicy of Illinois at
% Urbana-Champaign, Urbana, IL 61801}
%
% \end{frontmatter}
\maketitle
\section*{Review General Response}
We would like to thank the reviewers for their detailed assessment of
this paper. Your suggestions, clarifications, and comments have resulted in
changes which certainly improved the paper.
\begin{questions}
\section*{Reviewer 1}
\question Overall, a well written article in a topic area of interest
to a number of audiences. Technically very good, well written in the
description and analysis. Well done authors!
A few minor comments/suggestions/clarifications as follows:
\begin{solution}
Thank you for these kind comments. Your suggestions have
certainly improved the paper.
\end{solution}
%---------------------------------------------------------------------
\question Abstract: Given the recent announcement about TransAtomic
Power, you may want to remove them from your list?
\begin{solution}
Thank you for your kind review. The current manuscript was submitted before
the announcement that TransAtomic is ceasing operations. It has been removed
from the list of MSR startups (indeed, the whole list was
removed based on a comment by another reviewer).
\end{solution}
%---------------------------------------------------------------------
\question Table 1, page 3: SCALE/TRITON is fast as well as thermal
reactor capable.
\begin{solution}
Thank you for the recommendation. In Table 1 ``thermal'' has been changed
to ``thermal/fast''.
\end{solution}
%---------------------------------------------------------------------
\question Page 4, Line 48: Should read ``...and refill using a single
or multiple unit cell ....''
\begin{solution}
That statement has been modified as requested.
\end{solution}
%---------------------------------------------------------------------
\question Page 4, Line 55. Also worth noting that the latest SCALE
release will have the same functionality using continuous removal (B.
R. Betzler, J. J. Powers, N. R. Brown, and B. T. Rearden, ``Molten Salt
Reactor Neutronics Tools in SCALE,'' Proc. M\&C 2017 - International
Conference on Mathematics \& Computational Methods Applied to Nuclear
Science and Engineering, Jeju, Korea, Apr. 16-20 (2017).)
\begin{solution}
Thank you for the update. This sentence has been added:
``The latest SCALE release will also have the same functionality using
truly continuous removals \cite{betzler_implementation_2017}.''
\end{solution}
%---------------------------------------------------------------------
\question Page 8, Fig 2: Appears the image is cut off at the top?
\begin{solution}
The image has been replotted.
\end{solution}
%---------------------------------------------------------------------
\question Page 13, line 209: The description of the Pa removal,
although correct, isn't quite fully correct. The reason the Pa is
removed from the core and hence flux is to then enable the Pa to decay
to U233. If it was left in the core, it would transmute further and
hence not be able to produce the U233 that is necessary for this
breeding cycle to work.
\begin{solution}
``Protactinium presents a challenge, since it has a large absorption cross
section in the thermal energy spectrum. Moreover, $^{233}$Pa left in the core
would produce $^{234}$Pa and $^{234}$U, neither of which is useful as fuel,
and would leave a smaller amount of $^{233}$Pa to decay into the fissile $^{233}$U.
Accordingly, $^{233}$Pa is continuously removed from the fuel salt into
a protactinium decay tank to allow $^{233}$Pa to decay to $^{233}$U
without the corresponding negative neutronic impact.''
\end{solution}
%---------------------------------------------------------------------
\question Table 3: ``cycle time'' is not defined in the paper. Please
add.
\begin{solution}
The ``cycle time'' definition has been added at the first
appearance in text.
\end{solution}
%---------------------------------------------------------------------
\question Page 14, line 224. The 3 day time step as the ``optimum'' for
Th fuel cycles in an MSR was first described and concluded by Powers et
al. Please add a reference to their initial work.
\begin{solution}
The reference to \cite{powers_new_2013} has been added.
\end{solution}
%---------------------------------------------------------------------
\question Page 14, line 234 onwards: Doesn't SERPENT already have an
MSR removal capability? If so, what is different about using SaltProc
with SERPENT?
\begin{solution}
It does. Indeed, much of the inspiration for SaltProc was the
lack of documentation, difficulty of use, and various
inexplicable results we acquired when we tried to use this
built-in capability. This capability has not performed well and
needs to be better documented and more robustly verified before
we can use it. We fully intend to contribute to its improvement
if it can be made to agree with expectations. Additional text
to clarify this point has been added to the solution for question
\ref{built-in}.
\end{solution}
%---------------------------------------------------------------------
\question Page 18, Figure 7: The figure is hard to interpret or see
clearly what is going on. Could an additional figure or a zoomed in
portion be added to show what the swing in k is over a much shorter
time interval? k seems to be swinging dramatically but over what time
period and how would this be controlled in reality? The graph almost
suggests that the core is unstable??
\begin{solution}
A zoomed portion for a 150 EFPD interval has been added. We also
added notes on the plot to explain the swing in the multiplication factor.
\end{solution}
%---------------------------------------------------------------------
\question Page 18, line 327: Are those elements removed every 3435
days, or is it that the entire salt core is discharged?
\begin{solution}
100\% of those elements' atoms are removed every 3435
days. Full salt discard,
as mentioned in Table 3, has not been implemented. A more detailed
explanation of this has been added:
``Additionally, the presence of rubidium, strontium, cesium, and barium in the
core are disadvantageous to reactor physics.
Overall, the effective multiplication factor gradually decreases from 1.075 to
$\approx$1.02 at equilibrium after approximately 6 years of irradiation.
In fact, SaltProc fully removes
all of these elements every 3435 days (not a small mass fraction every 3 days)
which causes the multiplication factor to jump by approximately 450
pcm, and limits using the batch approach for online reprocessing simulations.
In future versions of SaltProc this drawback will be eliminated by removing
elements with longer residence times (seminoble metals, volatile fluorides, Rb, Sr,
Cs, Ba, Eu). In that approach, chemistry models will inform separation
efficiencies for each reprocessing group and removal will optionally be spread more
evenly across the cycle time.''
\end{solution}
%---------------------------------------------------------------------
\question Page 19, Figure 8 (and same for Fig 9): y-axis in grams/kgs
or mass units would be better for the reader.
\begin{solution}
Thank you for the recommendation. Atom density was chosen for
publication-to-publication comparison (e.g. Park \emph{et al.} and
Betzler \emph{et al.}
\cite{park_whole_2015, betzler_molten_2017}). Mass units would certainly
be more understandable for the reader and will be added in a future release.
\end{solution}
%---------------------------------------------------------------------
\question Page 20, Fig 9: What are the wiggles and dips , especially
seen for Np235?
\begin{solution}
Explanation of this phenomenon has been added as follows:
``Small dips in neptunium and plutonium number density
every 16 years are caused by removing $^{237}$Np and
$^{242}$Pu (included in Processing group ``Higher
nuclides''; see Table~3),
which decay into $^{235}$Np and $^{239}$Pu,
respectively.''
\end{solution}
%---------------------------------------------------------------------
\question Page 20, line 351: It is more than just the Pu isotopes that
makes the spectrum harder? What about the other MAs etc?
\begin{solution}
Thank you for the excellent point. The corrected sentence reads thus:
``The neutron energy spectrum at equilibrium is harder
than at startup due to plutonium and other strong
absorbers accumulating in the core during reactor
operation.''
\end{solution}
%---------------------------------------------------------------------
\question Fig 12: units on y-axis?
\begin{solution}
Thanks for catching this. The units $\frac{n}{cm^2 s}$ have been added.
\end{solution}
%---------------------------------------------------------------------
\question Page 24, line 389: Should that be ``233U'' and not ``233Th''?
\begin{solution}
Yes, we meant $^{233}$U production. The typo has been fixed.
Thanks so much for catching it!
\end{solution}
%---------------------------------------------------------------------
\question Table 5: Please provide some comments on the uncertainties -
where do they come from? Also, the ``reference'' results need to state
whether ``initial'' or ``equilibrium''
\begin{solution}
In Table 5, we added information in the reference column
containing data for initial fuel salt composition. Details
about uncertainties have been added:
``Table 5 summarizes temperature effects on reactivity calculated
in this work for both initial and equilibrium fuel
compositions, compared with the original \gls{ORNL} report data
\cite{robertson_conceptual_1971}. By propagating the $k_{eff}$
statistical error provided by SERPENT2, uncertainty for each
temperature coefficient was obtained and appears in Table 5.
Other sources of uncertainty are neglected, such as cross
section measurement error and approximations inherent in the
equations of state providing both the salt and graphite density
dependence on temperature.''
\end{solution}
%---------------------------------------------------------------------
\question Page 26, line 425: ``Relatively large'' compared with what?
Perhaps results for an LWR case would be a good basis for comparison?
\begin{solution}
The sentence has been extended as follows:
The moderator temperature coefficient (MTC) is positive for the
startup composition and decreases during reactor operation
because of spectrum hardening with fuel depletion. Finally,
the total temperature coefficient of reactivity is negative for
both cases, but decreases during reactor operation due to
spectral shift. In summary, even after 20 years of operation
the total temperature coefficient of reactivity is relatively
large and negative during reactor operation (compared with a
conventional PWR, which has a temperature coefficient of about -1.71
pcm/$^\circ$F $\approx$ -3.08 pcm/K
\cite{forget_integral_2018}), despite positive MTC, and affords
excellent reactor stability and control.
\end{solution}
%---------------------------------------------------------------------
\question Page 27, section 3.8: It needs to be made more clear that
these results were calculated, and that they are taken from the code
output.
\begin{solution}
This has now been clarified in the text:
``Table~7 summarizes the six factors for both initial and
equilibrium fuel salt composition. Using SERPENT2 and SaltProc,
these factors and their statistical uncertainties have been
calculated for both initial and equilibrium fuel salt
composition (see Table~2).''
\end{solution}
%---------------------------------------------------------------------
\question Page 29, Figure 15: Similar comment to above regarding Fig 7
- the results are difficult to see and interpret with such notable
swings.
\begin{solution}
A zoomed portion for a 150 EFPD interval has been added along with
clarifying details.
\end{solution}
%---------------------------------------------------------------------
\section*{Reviewer 2}
%---------------------------------------------------------------------
\question The paper presents a python script (SaltProc) that can
complement the Serpent 2 code capabilities in terms of fuel cycle
analysis of MSRs. The paper is well written and the tool might be
useful for the nuclear engineering community. However, before
publishing the paper, I recommend some revisions.
\begin{solution}
Thanks very much for these comments. We appreciate your
detailed review, which has certainly improved the paper.
\end{solution}
%---------------------------------------------------------------------
\question \label{built-in} The most critical point of the work is that the built-in
capabilities for online reprocessing of Serpent 2 have not been used.
Their use is mentioned in the future work, but it is not clear why
these capabilities have not been used in the current work. To the
author's knowledge, they have been available in Serpent 2 since quite a
while. The authors should clarify this point at the beginning of the
paper, and not only in the ``Future work'' section. Even though the
technical work was done without using these capabilities, they should
highlight what SaltProc adds to the built-in Serpent capabilities, and
they should at least try to extrapolate on the potential advantages of
combining SaltProc and Serpent capabilities. Based on this, they
should slightly restructure the paper in order to prove the claimed
advantages over Serpent 2.
\begin{solution}
Thank you for your helpful review. We tried to use these capabilities
\cite{rykhlevskii_online_2017} but have had a number of issues which
are hard to resolve due to a lack of documentation and of
published verifications of this feature. The following
paragraph has been added to clarify why these capabilities have
not been used in current work:
``Aufiero \emph{et al.} added an undocumented feature to SERPENT2
using a similar methodology by explicitly introducing
continuous reprocessing in the system of Bateman equations and
adding effective decay and transmutation terms for each nuclide
\cite{aufiero_extended_2013}. This was employed to study the
material isotopic evolution of the
\gls{MSFR}\cite{aufiero_extended_2013}. The developed extension
directly accounts for the effects of online fuel reprocessing
on depletion calculations and features a reactivity control
algorithm. The extended version of SERPENT2 was assessed
against a dedicated version of the deterministic ERANOS-based
EQL3D procedure in \cite{ruggieri_eranos_2006,
fiorina_investigation_2013} and adopted to analyze the
\gls{MSFR} fuel salt isotopic evolution.
We employed this built-in SERPENT2 feature for a simplified
unit-cell geometry of the thermal spectrum thorium-fueled
\gls{MSBR} and found it unusable\footnote{ Some challenges in
no particular order: mass conservation is hard to achieve;
three types of mflow cards (0, 1 or 2) are indistinguishable in
purpose; an unexplained difference between CRAM and TTA
results; etc.}. Primarily, it is undocumented, and the
discussion forum for SERPENT users is the only useful source of
information at the moment. Additionally, the reactivity control
module described in Aufiero \emph{et al.} is not available in
the latest SERPENT 2.1.30 release. Third, the infinite
multiplication factor behavior for simplified unit-cell model
obtained using SERPENT2 built-in capabilities
\cite{rykhlevskii_online_2017} does not match with exist
MCNP6/Python-script results for the similar model by Jeong and
Park\footnote{ In our study k$_{\infty}$ drops from 1.05 to
1.005 during a 1200 days of depletion simulation while in Jeong
and Park work this parameter decreasing slowly from 1.065 to
1.05 for the similar time-frame.}
\cite{jeong_equilibrium_2016}. Finally, only two publications
\cite{aufiero_extended_2013, ashraf_nuclear_2018} using these
capabilities are available, reflecting the reproducibility
challenge inherent in this feature.
If these challenges can be overcome through verification
against ChemTriton/SCALE as well as this work (the
SaltProc/SERPENT2 package), we hope to employ this SERPENT2
feature for removal of fission products with shorter residence
time (e.g., Xe, Kr), since these have a strong negative impact
on core lifetime and breeding efficiency.''
\end{solution}
%---------------------------------------------------------------------
\question In addition to this major point, I would suggest a few other
revisions. Considering the scope of the journal, mentioning the names of
companies and start-ups is not appropriate. I would suggest removing
them.
\begin{solution}
The names of companies have been removed.
\end{solution}
%---------------------------------------------------------------------
\question The sentence ``Immediate advantages over traditional, solid-fueled,
reactors include near-atmospheric pressure in the primary loop,
relatively high coolant temperature, outstanding neutron economy,
improved safety parameters,'' may suggest improved neutron economy vs
solid-fuel fast reactors. This is rarely the case, especially vs
Pu-based SFRs. I would suggest reformulating.
\begin{solution}
Thank you for the exceptional recommendation. The sentence has
been split into two, as follows:
Immediate advantages over traditional commercial reactors
include near-atmospheric pressure in the primary loop,
relatively high coolant temperature, outstanding neutron
economy, and improved safety parameters. Advantages over
solid-fueled reactors in general include reduced fuel
preprocessing and the ability to continuously remove fission
products and add fissile and/or fertile elements
\cite{leblanc_molten_2010}.
\end{solution}
%---------------------------------------------------------------------
\question The sentence ``With regard to the nuclear fuel cycle, the
thorium cycle produces a reduced quantity of plutonium and minor
actinides (MAs) compared to the traditional uranium fuel cycle'' is
correct, but the fact that this is an advantage is questionable. The
pros\&cons of thorium cycle have been long debated and there is no
consensus on its advantage in terms of exposure of workers, exposure of
public, geological repository, etc. I would suggest removing the
sentence.
\begin{solution}
Thanks for your insightful comment. The sentence has been removed.
\end{solution}
%---------------------------------------------------------------------
\question ``Methods listed in references [14, 17, 24, 25, 28, 29, 30]
as well as the current work also employ a batch-wise approach''. As a
matter of fact, the work in [14] allows for continuous reprocessing
via introduction of ``reprocessing'' time constants. The work from
Aufiero (mentioned in the following paragraph) actually used the
methodology previously developed in [14] for verification purposes.
\begin{solution}
Thank you for the information. That reference has been removed from the
sentence, and the following paragraph has been modified to read:
Accounting for continuous removal or addition presents a
greater challenge since it requires adding a term to the
Bateman equations. Fiorina \emph{et al.} simulated \gls{MSFR}
depletion with continuous fuel salt reprocessing via
introducing ``reprocessing'' time constants into the ERANOS
transport code \cite{fiorina_investigation_2013}. The latest
SCALE release will also have the same functionality using truly
continuous removals \cite{betzler_implementation_2017}. A
similar approach is adopted to model true continuous feeds and
removals using the MCNP transport code listed in references
\cite{doligez_coupled_2014,heuer_towards_2014,nuttin_potential_2005}.
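For concreteness, a sketch (in our own notation, not quoted from any of
the cited codes) of the extra term such continuous removal adds to the
Bateman equation for nuclide $i$:
\begin{equation}
\frac{dN_i}{dt} = \text{(production terms)} - \lambda_i N_i
- \lambda_i^{rep} N_i, \qquad \lambda_i^{rep} \approx \frac{1}{T_{cycle,i}},
\end{equation}
where $\lambda_i$ is the physical decay constant and $T_{cycle,i}$ is the
cycle time of the processing group containing nuclide $i$.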
\end{solution}
%---------------------------------------------------------------------
\question Table 3. It is not clear what ``effective cycle times'' are.
Please clarify.
\begin{solution}
The ``cycle time'' definition has been added to its first appearance in text.
\end{solution}
%---------------------------------------------------------------------
\question The removal of fission products is made batch wise in the
described algorithms. However it is not clear how the fission products
with the longest ``effective cycle times'' are removed. Part of them at
every batch? Or all of them at the end of the ``effective cycle
times''. Please clarify. And please clarify the relation between
``effective cycle times'', batches and the average time spent by a
fission product in the reactor.
\begin{solution}
All of them at the end of the cycle time, and we agree that this
was not the best solution. Feature improvements in SaltProc are
underway to enable more realistic handling times in the
processing heuristics. To clarify this, the following paragraph has
been added:
The current version of SaltProc only allows 100\% separation
efficiency for either specific elements or groups of elements
(e.g. Processing Groups as described in
Table~\ref{tab:reprocessing_list}) at the end of the specific
cycle time. This simplification neglects the reality that
the salt spends appreciable time out of the core, in the
primary loop pipes and the heat exchanger.
This approach works well for fast-removing elements (gases,
noble metals, protactinium), which should be removed at each
depletion step. Unfortunately, for elements with longer cycle
times (e.g., rare earths should be removed every 50 days),
this simplified approach leads to oscillatory behavior of all
major parameters. In future releases of SaltProc, this drawback
will be eliminated by removing elements with longer cycle times
using a different method: only a mass fraction (calculated
separately for each reprocessing group) will be removed at each
depletion step or batch (e.g., 3 days in the current work).
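To illustrate the intended change, here is a minimal sketch
(hypothetical code, not the SaltProc interface) contrasting full
batch removal at the end of a cycle with fractional removal at
every depletion step:
\begin{verbatim}
# Hedged sketch, not the SaltProc API: accumulation of a fission
# product produced at rate p, comparing 100% removal once per
# cycle with fractional removal every depletion step.
dt, cycle, p = 3, 50, 1.0          # days, days, arbitrary mass/day
m_batch = m_frac = 0.0
for step in range(1, 101):
    m_batch += p * dt
    m_frac += p * dt
    if step % round(cycle / dt) == 0:
        m_batch = 0.0              # full removal at the nearest step
    m_frac *= 1.0 - dt / cycle     # remove a small fraction instead
# m_batch saw-tooths between 0 and roughly p*cycle, mirroring the
# reactivity oscillations; m_frac settles smoothly near p*cycle.
\end{verbatim}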
\end{solution}
%---------------------------------------------------------------------
\question \label{osc} The oscillatory behavior shown in Fig. 7 on the time scale of
months/years is hard to explain. Is this because the fission product
with longer residence time are batch-wise removed at their ``effective
cycle time''? In case, why not to remove part of them at every depletion
step?
\begin{solution}
Yes, the oscillation happened because the fission products with
longer residence times are removed only at the end of the cycle time. We
definitely will take your advice and improve the code in future
releases. The following text has been added to clarify this issue:
In fact, SaltProc fully removes all of these elements every
3435 days (not a small mass fraction every 3 days) which causes
the multiplication factor to jump by approximately 450 pcm, and
limits using the batch approach for online reprocessing
simulations. In future versions of SaltProc this drawback will
be eliminated by removing elements with longer residence times
(seminoble metals, volatile fluorides, Rb, Sr, Cs, Ba, Eu). In
that approach, chemistry models will inform separation
efficiencies for each reprocessing group and removal will
optionally be spread more evenly accross the cycle time.
\end{solution}
%---------------------------------------------------------------------
\question Can the proposed tool adjust reactivity?
\begin{solution}
No; we may add this capability someday, but we expect this
will be a challenge outside of the depletion code.
\end{solution}
%---------------------------------------------------------------------
\question ``The main physical principle underlying the reactor
temperature feedback is an expansion of material that is heated''. This
sentence would deserve some support data. Can you please calculate
separately the effect of temperature (Doppler in fuel and spectral
shift in graphite) and density?
\begin{solution}
Thank you for the excellent recommendation. The effects of
temperature and density have been calculated separately and added
to Table 5. Moreover, during these simulations we discovered a
mistake in the fuel salt and graphite density correlations and
completely recalculated the temperature coefficients of reactivity
with lower statistical error. The initial total temperature
coefficient is now closer to the reference value, and the
statistical uncertainty was reduced from 0.046 to 0.038 pcm/K.
\end{solution}
%---------------------------------------------------------------------
\question How uncertainties have been calculated in Table 5? Is this
just statistical uncertainty from Serpent calculations? Please clarify
\begin{solution}
Yes, the uncertainties were determined from the statistical error
in the SERPENT output. The following passage has been added to
clarify this point:
By propagating the $k_{eff}$ statistical error provided by
SERPENT2, the uncertainty for each temperature coefficient was
obtained and appears in Table~\ref{tab:tcoef}. Other sources
of uncertainty are neglected, such as cross section
measurement error and approximations inherent in the equations
of state providing both the salt and graphite density
dependence on temperature.
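For transparency, the propagation can be sketched as follows
(placeholder numbers, not values from the paper), using
$\rho = (k_{eff}-1)/k_{eff}$ so that
$\sigma_\rho = \sigma_k/k_{eff}^2$:
\begin{verbatim}
# Hedged sketch with placeholder values (not the paper's data).
from math import sqrt

def tcoef(k1, s1, k2, s2, dT):
    rho1, rho2 = (k1 - 1) / k1, (k2 - 1) / k2
    alpha = (rho2 - rho1) / dT * 1e5                   # pcm/K
    sigma = sqrt((s1 / k1**2)**2 + (s2 / k2**2)**2) / dT * 1e5
    return alpha, sigma

print(tcoef(1.04000, 2e-5, 1.03850, 2e-5, 100.0))
\end{verbatim}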
\end{solution}
%---------------------------------------------------------------------
\question For calculating the coefficients in table 5, are you only
changing densities, or also dimensions? Please clarify and provide a
justification.
\begin{solution}
We have changed both densities and dimensions. This passage
has been modified to read:
A new geometry input for SERPENT2, which takes into
account displacement of graphite surfaces, was created
based on this information. For calculation of
displacement, it was assumed that the interface between
the graphite reflector and vessel did not move, and
that the vessel temperature did not change. This is the
most reasonable assumption for short-term reactivity
effects because the inlet salt cools the graphite
reflector and the inner surface of the vessel.
\end{solution}
%---------------------------------------------------------------------
\question ``The fuel temperature coefficient (FTC) is negative for both
initial and equilibrium fuel compositions due to thermal Doppler
broadening of the resonance capture cross sections in the thorium.''
What is the effect of density?
\begin{solution}
This passage has been modified to illuminate the effect of density:
The fuel temperature coefficient (FTC) is negative for both
initial and equilibrium fuel compositions due to thermal
Doppler broadening of the resonance capture cross sections in
the thorium. A small positive contribution of fuel density to
reactivity increases from $+1.21$ pcm/K at reactor startup to
$+1.66$ pcm/K for the equilibrium fuel composition, which
reduces the FTC magnitude during reactor operation.
\end{solution}
%---------------------------------------------------------------------
\question ``This thorium consumption rate is in good agreement with a
recent online reprocessing study by ORNL [29].'' Please notice that in a
reactor with only Th as feed, and near equilibrium, the Th consumption
rate is exclusively determined by the reactor power and by the energy
released per fission.
\begin{solution}
Thank you for the recommendation. The following sentence has been
added:
It must be noted that for a reactor with only thorium feed,
at a near-equilibrium state, the thorium consumption rate is
determined by the reactor power, the energy released per fission,
and the neutron energy spectrum.
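As a rough back-of-the-envelope check of this statement (assumed
round numbers, not results from the paper):
\begin{verbatim}
# Hedged estimate: Th consumption set by power and energy/fission.
# Ignores the small spectrum-dependent correction mentioned above.
P = 2250e6                       # MSBR thermal power [W]
E_fis = 200e6 * 1.602e-19        # ~200 MeV per fission [J]
rate = P / E_fis                 # fissions per second
kg_per_year = rate * 232 / 6.022e23 * 1e-3 * 3.156e7
print(kg_per_year)               # on the order of 800-900 kg/year
\end{verbatim}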
\end{solution}
%---------------------------------------------------------------------
\question Are you considering the effect of the gradual poisoning of
the graphite with fission products? If not, it would be worthwhile
briefly discussing its effect.
\begin{solution}
This passage has been added:
$^{135}$Xe is a strong poison to the reactor, and some
fraction of this gas is absorbed by graphite during MSBR
operation. ORNL calculations showed that for unsealed commercial
graphite with a helium permeability of 10$^{-5}$ cm$^2$/s, the
calculated poison fraction is less than 2\% \cite{robertson_conceptual_1971}.
This parameter can be improved by using experimental graphites
or by applying sealing technology. The effect of the gradual
poisoning of the core graphite with xenon is not treated here.
\end{solution}
%---------------------------------------------------------------------
\question In the manuscript it is not always clear when the authors
refer to numerical approximations of physical situations. For instance,
the authors write ``Figure 16 demonstrates that batch-wise removal of
strong absorbers every 3 days did not necessarily leads to fluctuation
in results but rare earth elements 480 removal every 50 days causes an
approximately 600 pcm jump in reactivity.'' These 600 pcm are an effect
of a numerical approximation, but the way things are presented can be
confusing to the reader. Please try to explicitly separate physical and
numerical effects. And try to related numerical effects to physical
effects. For instance, how does these numerical ``jumps'' affect the
results? In this sense, why the batch-wise removal of strong absorbers
every 3 days was not done? And why a fraction of rare earths is not
removed every three days?
\begin{solution}
This sentence was quite unclear and we have rewritten it to
address your concerns. Effectively, these simulations exactly
reproduced the batch-wise refuelling detailed in the Robertson
design document, whereas a more realistic approach would give
smoother behavior. A description of the changes to the text
appears in solution \ref{osc}.
\end{solution}
%---------------------------------------------------------------------
\question ``The current work results show that a true equilibrium
composition cannot exist but balance between strong absorber
accumulation and new fissile material production can be achieved to
keep the reactor critical.'' Not clear. Do you mean that the equilibrium
composition cannot be achieved in a lifetime of the reactor? Please
clarify
\begin{solution}
This statement, being unclear and misleading, has been removed
completely. The intention was to communicate that for a relaxed
definition of equilibrium, it can be achieved.
\end{solution}
%---------------------------------------------------------------------
\section*{Reviewer 3}
\question This paper presents a SaltProc-Serpent2 coupled code system
to simulate the depletion and online reprocessing process by directly
changing the isotopic composition of the fuel salt. The Molten Salt
Breeder Reactor (MSBR) was analyzed to demonstrate the simulation
capability of the developed code system. However, a large number of
similar works have been done and published on this topic. In
particular, this work is similar to that done by Park et al. with the
title of ``Whole core analysis of molten salt breeder reactor with
online fuel reprocessing'' published on ``International Journal of Energy
Research''. The authors need to prove the uniqueness and innovation
within their work.
\begin{solution}
We appreciate this comment and have made clarifying
improvements to indicate the impact of this work throughout the
paper. More detail regarding these changes is given in the
specific comment responses below.
\end{solution}
%--------------------------------------------------------------------
\question What are the main differences between this work and previous
works, especially the work published by Park et al. in ``Whole core
analysis of molten salt breeder reactor with online fuel reprocessing''?
\begin{solution}
Thank you for your question. A new paragraph has been added:
The works described in \cite{park_whole_2015} and
\cite{jeong_equilibrium_2016} are most similar to the work
presented in this paper. However, a few major differences
follow: (1) Park \emph{et al.} employed MCNP6 for depletion
simulations while this work used SERPENT2; (2) the full-core
reactor geometry herein is more detailed
\cite{rykhlevskii_full-core_2017}; (3) Park \emph{et al.} and
Jeong \emph{et al.} both only considered volatile gas removal,
noble metal removal, and $^{233}$Pa separation while the
current work implemented the more detailed reprocessing scheme
specified in the conceptual \gls{MSBR} design
\cite{robertson_conceptual_1971}; (4) the $^{232}$Th neutron
capture reaction rate has been investigated to demonstrate the
advantages of the two-region core design; (5) the current work explicitly
examines the independent impacts of removing specific fission
product groups.
\end{solution}
%--------------------------------------------------------------------
\question How did the authors verify the coupled SaltProc and Serpent
code system?
\begin{solution}
We have compared a few parameters (multiplication factor, Th refill
rate, neutron energy spectrum) with Betzler \emph{et al.}
\cite{betzler_molten_2017} and mentioned this in the Results section.
We also compared the neutron energy spectrum and temperature
coefficients for the equilibrium composition with Park \emph{et al.}
\cite{park_whole_2015}. In a future SaltProc release, a suite of unit
tests will be added to make sure that results remain consistent with
our test cases.
\end{solution}
%--------------------------------------------------------------------
\question In Page 3, the title of ``Table 1'' should not only contain
the ``fast spectrum system''. The work published by Zhou and Yang et al.
with the title of ``Fuel cycle analysis of molten salt reactors based on
coupled neutronics and thermal-hydraulics calculations'' needs to be
included in Table 1 for completeness.
\begin{solution}
Thank you for your excellent recommendations. The table has been
extended as requested.
\end{solution}
%--------------------------------------------------------------------
\question In Page 5, the following sentence needs to be explained. ``We
employed this extended SERPENT 2 for a simplified unit-cell geometry of
thermal spectrum thorium-fueled MSBR and obtained results which
contradict existing MSBR depletion simulations.''
\begin{solution}
This statement has been significantly extended (see solution
\ref{built-in}).
\end{solution}
%--------------------------------------------------------------------
\question In Page 10, ``SERPENT generates the problem-dependent nuclear
data library'', how did SERPENT generate the problem-dependent nuclear
data library? What kind of nuclear data library did SERPENT generate?
\begin{solution}
Thank you for your kind review. The statement has been removed
entirely, and we emphasized that the temperature of each material is
assumed to be constant over the 60 years of operation:
The specific temperature was fixed for each material and did
not change during the reactor operation.
\end{solution}
%--------------------------------------------------------------------
\question In Page 14, ``depletioncalculations'' should be ``depletion
calculations''.
\begin{solution}
Thanks for catching this, fixed.
\end{solution}
%--------------------------------------------------------------------
\question In Page 21, it looks like the neutron spectrum is not
normalized in Figure 10. It is recommended to normalize the neutron
spectrum for comparison.
\begin{solution}
Thank you for the comment. The neutron energy spectra in Figures
10 and 11 are normalized per unit lethargy, and the area under
each curve is normalized to 1. The captions have been edited to
make this clearer.
\end{solution}
%--------------------------------------------------------------------
\question In Page 22, ``Figure 13 reflects the normalized power
distribution of the MSBR quarter core, which is the same at both the
initial and equilibrium states'' contradicts the following statement of
``The spectral shift during reactor operation results in different power
fractions at startup and equilibrium''.
\begin{solution}
Thanks for catching this. The difference between the power fractions
is very small, as can be seen from Table 4. It is impossible to see
this difference in a contour plot, which is why we kept only the
equilibrium composition in Figures 13 and 14. The paragraph has
been modified as follows:
Table~4 shows the power fraction in each zone for initial and
equilibrium fuel compositions. Figure~13 reflects the
normalized power distribution of the \gls{MSBR} quarter core
for equilibrium fuel salt composition. For both the initial
and equilibrium compositions, fission primarily occurs in the
center of the core, namely zone I. The spectral shift during
reactor operation results in slightly different power
fractions at startup and equilibrium, but most of the power is
still generated in zone I at equilibrium (Table~4).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{solution}
%--------------------------------------------------------------------
\question In Page 24, it is hard to agree with the statement that ``the
majority of 233Th is produced in zone II.''. How did the authors draw
this conclusion?
\begin{solution}
The statement has been removed entirely.
\end{solution}
%--------------------------------------------------------------------
\question In Page 24 and 25, why did the normalized power density
distribution and the 232Th neutron capture reaction rate distribution
share the same figure for both initial and equilibrium fuel salt
compositions?
\begin{solution}
Thanks for catching this; it was a typo. Figures 13 and 14 are
plotted for the equilibrium composition only.
\end{solution}
%--------------------------------------------------------------------
\question Too much background information was contained in the
abstract, which deteriorates the readability of the abstract.
\begin{solution}
The abstract has been shortened.
\end{solution}
%--------------------------------------------------------------------
\end{questions}
\bibliographystyle{unsrt}
\bibliography{../2018-msbr-reproc}
\end{document}
%
% End of file `elsarticle-template-num.tex'.
%% LyX 2.2.3 created this file. For more info, see http://www.lyx.org/.
%% Do not edit unless you really know what you are doing.
\documentclass[12pt,english]{article}
\usepackage{mathptmx}
\usepackage[T1]{fontenc}
\usepackage[latin9]{inputenc}
\usepackage{geometry}
\geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in}
\usepackage{babel}
\usepackage[authoryear]{natbib}
\usepackage[unicode=true,pdfusetitle,
bookmarks=true,bookmarksnumbered=false,bookmarksopen=false,
breaklinks=false,pdfborder={0 0 0},pdfborderstyle={},backref=false,colorlinks=false]
{hyperref}
\usepackage{breakurl}
\makeatletter
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% LyX specific LaTeX commands.
%% Because html converters don't know tabularnewline
\providecommand{\tabularnewline}{\\}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% User specified LaTeX commands.
\date{}
\makeatother
\begin{document}
\title{\textbf{Matlab for Applied Micro Empiricists: }\\
\textbf{Summer II.2 2012}}
\maketitle
\vspace{-0.85in}
\begin{center}
\begin{tabular}{ll}
Course: & Matlab for Applied Micro Empiricists (half Summer module for rising
second-years)\tabularnewline
Instructor: & Tyler Ransom\tabularnewline
Time: & Tue/Thu 10 a.m. - 12:30 p.m.; Friday 11 a.m. - 12 p.m.\tabularnewline
Location: & Social Sciences 113\tabularnewline
Office: & 2106 Campus Dr \#201A\tabularnewline
Email: & \href{mailto:[email protected]}{[email protected]}\tabularnewline
Office Hours: & By appointment\tabularnewline
\end{tabular}
\par\end{center}
\subsubsection*{About this course}
This module will build on the skills introduced in the first module
and introduce topics covered in second-year applied micro modules.
These topics include multinomial choice models, numerical integration,
Monte Carlo simulation and bootstrapping. The course will cover specific
applications of these topics in the fields of labor, education and
industrial organization.
\subsubsection*{Prerequisites}
I am going to assume that all students know the material from the
first Matlab module. This includes familiarity with Matlab basics
(most notably Matlab's functional optimizers) as well as a working
knowledge of \LaTeX.
\subsubsection*{Textbooks}
There is no formal textbook for the course, but students may find
the following resources helpful: \emph{Discrete Choice Methods with
Simulation} (Train)\footnote{This is available for free online at \url{http://elsa.berkeley.edu/books/choice2.html}},
\emph{Econometric Analysis of Cross Section and Panel Data} (Wooldridge),
\emph{Econometrics} (Hayashi), \emph{Time Series Analysis} (Hamilton),
\emph{Econometric Analysis} (Greene), and \emph{Numerical Methods
in Economics} (Judd).
\subsubsection*{Registration, Enrollment and Overall Class Grades}
In order to get credit for this course, you need to first enroll in
Econ 360 or Econ 370 in ACES. Because each module is not listed separately
in ACES, I can only determine your enrollment in this course through
your completion of problem sets and/or attendance in lectures. In
order to get credit for Econ 360 or Econ 370 in ACES, each student
must enroll in at least two modules. Grades from those two modules
will be averaged into a final grade for Econ 360/370. If you enroll
in more than two modules, your two highest grades are averaged. Because
of this favorable grading policy, you are encouraged to take more
than two modules.
\subsubsection*{Grades for this Module}
Grades will be determined by the average score from three problem
sets, due each week by 11:59 p.m. on Thursday. Late problem sets will
not be accepted. Submit problem set materials to your ``dropbox''
folder on Sakai. You are allowed to work on problem sets in groups
(no larger than 3, please), but each student must turn in his/her
own copy of the problem set. In particular, each student should avoid
copying/pasting code and instead type the code out on his/her own.
(This is the only way to learn how to program.) Put your name and
the names of those in your group at the top of your code file(s).
Each Friday morning I will post solutions at approximately 8 A.M.
We will then spend the Friday lecture time going through the code
together. Problem sets will be graded on the following scale (some
convex combination of effort and accuracy):
\medskip{}
\begin{center}
\begin{tabular}{cl}
4: & Problem set is complete and mostly correct\tabularnewline
3: & Problem set is complete with errors; or mostly complete and mostly
correct\tabularnewline
2: & Problem set is complete with many errors; or barely complete and mostly
correct\tabularnewline
1: & Problem set is barely attempted or completely incorrect\tabularnewline
0: & Problem set turned in late or not at all\tabularnewline
\end{tabular}\\
\par\end{center}
Problem set grades will be combined to an unweighted average to determine
course grade.
\subsubsection*{Schedule of Topics (subject to change)}
\begin{center}
\begin{tabular}{ccll}
\hline
Class & Date & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Topics & Lecture Notes\tabularnewline
\hline
1 & Tue 7/24 & Intro to multinomial choice & Multinomial.pdf\tabularnewline
& & Unobserved heterogeneity & MultinomialDerivations.pdf\tabularnewline
& & Fixed effects/random effects & UnobservedHeterogeneity.pdf\tabularnewline
2 & Thu 7/26 & Advanced programming tricks & TipsAndTricks.pdf\tabularnewline
& & Constrained optimization & DeltaMethod.pdf\tabularnewline
& & \emph{PS1 due by 11:59 p.m.} & \tabularnewline
3 & Fri 7/27 & Go over code for Problem Set 1 & \tabularnewline
4 & Tue 7/31 & Improving computational efficiency & ComputationalEfficiency.pdf\tabularnewline
& & Analytical gradients & LinuxLab.pdf\tabularnewline
& & mex files and compiled languages & \tabularnewline
5 & Thu 8/2 & Simulation & NumericalIntegration.pdf\tabularnewline
& & Integration & Bootstrap.pdf\tabularnewline
& & Bootstrapping/Inference & \tabularnewline
& & \emph{PS2 due by 11:59 p.m.} & \tabularnewline
6 & Fri 8/3 & Go over code for Problem Set 2 & \tabularnewline
7 & Tue 8/7 & Multi-stage estimation algorithms & IterativeAlgorithms.pdf\tabularnewline
& & Intro to Bayesian Inference & IntroBayesianInference.pdf\tabularnewline
8 & Thu 8/9 & Model fit & ModelFitCfl.pdf\tabularnewline
& & Counterfactual simulation & DurationCount.pdf\tabularnewline
& & Duration Analysis & \tabularnewline
& & Count Data Models & \tabularnewline
& & \emph{PS3 due by 11:59 p.m.} & \tabularnewline
9 & Fri 8/10 & Go over code for Problem Set 3 & \tabularnewline
\hline
\end{tabular}
\par\end{center}
\end{document}
\section{Introduction}
Content Addressable Parallel Processors (CAPPs) constitute an alternative to the standard von Neumann architecture, featuring parallel processing based on content-addressable memory. Unlike Random Access Memory (RAM), which performs an operation on a word in memory by referring to its physical address, a CAPP is able to select multiple memory locations simultaneously by pattern matching the contents of the memory being accessed. Consequently, a CAPP can perform operations like writing into and reading from multiple selected memory cells in constant time. The original intent of the CAPP design was to serve as a general-purpose computer, capable of parallel processing. Furthermore, it was hoped that, by providing native support for parallel pattern-matching and multi-write capability, the software for such machines would be relatively easy to write (at least in comparison with other parallel architectures). In practice, this did not occur, and CAPPs eventually found use mainly in the much more limited form of application-specific CAMs, primarily in computer networking devices, e.g. as fast lookup tables in network switches and routers.
The goal of our project was to expose a true CAPP capable of multi-write and parallel read, as a convenient USB peripheral, to enable renewed experimentation with this class of architecture. We therefore designed a Verilog module for a parameterized (hence, scalable) CAPP. To this module we added a USB/UART interface module which we manage using an FSM-based protocol. The combined system was implemented on a TinyFPGA-BX \cite{tinyfpga_bx} which we then control over the USB/UART using a simple Python-based driver. We believe that CAPPs have potential in graph theory, neural network caching, high-speed indexing, regex computations etc. and we hope that an open-source, expandable CAPP design which can be synthesized on an FPGA using open-source tools, can provide a basis for renewed research into the application of CAPPs to these diverse and challenging domains. Our implementation is fully open source and can be found here \cite{CAPP_FPGA}.
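As a purely software analogy of this access model (illustrative only;
not our Verilog design), the parallel match and multi-write operations
can be sketched as:
\begin{verbatim}
# Software analogy of CAM-style access (illustrative, not the HDL):
# every stored word is compared against a masked search pattern,
# and all matching locations are then written in one operation.
def search(memory, pattern, mask):
    return [i for i, w in enumerate(memory)
            if (w ^ pattern) & mask == 0]

def multi_write(memory, indices, value, mask):
    for i in indices:
        memory[i] = (memory[i] & ~mask) | (value & mask)

mem = [0b1010, 0b1110, 0b1011, 0b0010]
hits = search(mem, 0b1010, mask=0b1100)      # match top two bits '10'
multi_write(mem, hits, 0b0001, mask=0b0001)  # set LSB of all matches
\end{verbatim}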
\section{Slopes in Polar Coordinates}\label{sec:Slopes in polar coordinates}
When we describe a curve using polar coordinates, it is still a curve
in the $x$-$y$ plane. We would like to be able to compute slopes and
areas for these curves using polar coordinates.
We have seen that $x=r\cos\theta$ and $y=r\sin\theta$ describe the
relationship between polar and rectangular coordinates. If in turn we
are interested in a curve given by $r=f(\theta)$, then we can write
$x=f(\theta)\cos\theta$ and $y=f(\theta)\sin\theta$, describing $x$
and $y$ in terms of $\theta$ alone. The first of these equations
describes $\theta$ implicitly in terms of $x$, so using the chain rule
we may compute
$${dy\over dx}={dy\over d\theta}{d\theta\over dx}.$$
Since $d\theta/dx=1/(dx/d\theta)$, we can instead compute
$$
{dy\over dx}={dy/d\theta\over dx/d\theta}=
{f(\theta)\cos\theta + f'(\theta)\sin\theta\over
-f(\theta)\sin\theta + f'(\theta)\cos\theta}.
$$
\begin{example}{Horizontal and Vertical Tangent Lines}{horizontaltangentpolar}
Find the points at which the curve given by
$r=1+\cos\theta$ has a vertical or horizontal tangent line.
\end{example}
\begin{solution}
Since this function has period $2\pi$, we may restrict our attention to the
interval $[0,2\pi)$ or $(-\pi,\pi]$, as convenience dictates.
First, we compute the slope:
$$
{dy\over dx}={(1+\cos\theta)\cos\theta-\sin\theta\sin\theta\over
-(1+\cos\theta)\sin\theta-\sin\theta\cos\theta}=
{\cos\theta+\cos^2\theta-\sin^2\theta\over
-\sin\theta-2\sin\theta\cos\theta}.
$$
This fraction is zero when the numerator is zero (and the denominator
is not zero). The numerator is $\ds 2\cos^2\theta+\cos\theta-1$ so by the
quadratic formula
$$
\cos\theta={-1\pm\sqrt{1+4\cdot2}\over 4} = -1
\quad\hbox{or}\quad {1\over 2}.
$$
This means $\theta$ is $\pi$ or $\pm \pi/3$.
However, when $\theta=\pi$, the denominator is
also $0$, so we cannot conclude that the tangent line is horizontal.
Setting the denominator to zero we get
\begin{eqnarray*}
-\sin\theta-2\sin\theta\cos\theta &= 0\cr
\sin\theta(1+2\cos\theta)&=0,\cr
\end{eqnarray*}
so either $\sin\theta=0$ or $\cos\theta=-1/2$. The first is true when
$\theta$ is $0$ or $\pi$, the second when $\theta$ is $2\pi/3$ or
$4\pi/3$. However, as above, when $\theta=\pi$, the numerator is also $0$, so we
cannot conclude that the tangent line is
vertical. Figure~\ref{fig:cardioid tangents} shows points
corresponding to $\theta$ equal to $0$, $\pm\pi/3$, $2\pi/3$ and
$4\pi/3$ on the graph of the function. Note that when $\theta=\pi$ the
curve hits the origin and does not have a tangent line.
\end{solution}
\figure[H]
\centerline{\vbox{\beginpicture
\normalgraphs
%\ninepoint
\setcoordinatesystem units <12truemm,12truemm>
\setplotarea x from -1 to 2.5, y from -1.5 to 1.5
\axis left shiftedto x=0 /
\axis bottom shiftedto y=0 /
\multiput {$\bullet$} at 2 0 0.75 1.299 0.75 -1.299 -0.25 0.433 -0.25 -0.433 /
\setquadratic
\plot 2.000 0.000 1.984 0.208 1.935 0.411 1.856 0.603 1.748 0.778
1.616 0.933 1.464 1.063 1.295 1.166 1.117 1.240 0.933 1.285
0.750 1.299 0.572 1.285 0.405 1.245 0.251 1.182 0.115 1.098
-0.000 1.000 -0.094 0.891 -0.165 0.775 -0.214 0.657 -0.241 0.542
-0.250 0.433 -0.242 0.333 -0.221 0.246 -0.191 0.172 -0.155 0.112
-0.116 0.067 -0.079 0.035 -0.047 0.015 -0.021 0.005 -0.005 0.001
0.000 0.000 -0.005 -0.001 -0.021 -0.005 -0.047 -0.015 -0.079 -0.035
-0.116 -0.067 -0.155 -0.112 -0.191 -0.172 -0.221 -0.246 -0.242 -0.333
-0.250 -0.433 -0.241 -0.542 -0.214 -0.657 -0.165 -0.775 -0.094 -0.891
0.000 -1.000 0.115 -1.098 0.251 -1.182 0.405 -1.245 0.572 -1.285
0.750 -1.299 0.933 -1.285 1.117 -1.240 1.295 -1.166 1.464 -1.063
1.616 -0.933 1.748 -0.778 1.856 -0.603 1.935 -0.411 1.984 -0.208
2.000 0.000 /
\endpicture}}
\caption{{Points of vertical and horizontal tangency for $r=1+\cos\theta$.} \label{fig:cardioid tangents}}
\endfigure
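Readers with access to a computer algebra system can cross-check these
tangent locations; for instance, with the SymPy library (a
verification aid, not part of the development):
\begin{verbatim}
# Hedged SymPy cross-check of the cardioid's tangent locations.
import sympy as sp
t = sp.symbols('theta')
r = 1 + sp.cos(t)
x, y = r*sp.cos(t), r*sp.sin(t)
print(sp.solve(sp.Eq(sp.diff(y, t), 0), t))  # includes pi, +/- pi/3
print(sp.solve(sp.Eq(sp.diff(x, t), 0), t))  # includes 0, pi, +/- 2pi/3
\end{verbatim}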
We know that the second derivative $f''(x)$ is useful in describing
functions, namely, in describing concavity. We can compute $f''(x)$ in
terms of polar coordinates as well. We already know how to write
$dy/dx=y'$ in terms of $\theta$; then
$$
{d\over dx}{dy\over dx}= {dy'\over dx}={dy'\over
d\theta}{d\theta\over dx}={dy'/d\theta\over dx/d\theta}.
$$
\begin{example}{Second Derivative of Cardioid}{cardioidsecondder}
Find the second derivative for the cardioid
$r=1+\cos\theta$.
\end{example}
\begin{solution}
\begin{eqnarray*}
{d\over d\theta}{\cos\theta+\cos^2\theta-\sin^2\theta\over
-\sin\theta-2\sin\theta\cos\theta}\cdot{1\over dx/d\theta} &=\cdots=
\ds{3(1+\cos\theta)\over (\sin\theta+2\sin\theta\cos\theta)^2}
\cdot\ds{1\over-(\sin\theta+2\sin\theta\cos\theta)}\cr
&=\ds{-3(1+\cos\theta)\over(\sin\theta+2\sin\theta\cos\theta)^3}.\cr
\end{eqnarray*}
The ellipsis here represents rather a substantial amount of algebra.
We know from above that the cardioid has horizontal tangents at $\pm
\pi/3$; substituting these values into the second derivative we get
$\ds y''(\pi/3)=-\sqrt{3}/2$ and $\ds y''(-\pi/3)=\sqrt{3}/2$,
indicating concave down and concave up respectively. This agrees with
the graph of the function.
\end{solution}
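The ``substantial amount of algebra'' can likewise be verified with
SymPy (again, a verification aid only):
\begin{verbatim}
# Hedged SymPy cross-check of the cardioid's second derivative.
import sympy as sp
t = sp.symbols('theta')
r = 1 + sp.cos(t)
x, y = r*sp.cos(t), r*sp.sin(t)
yp = sp.diff(y, t) / sp.diff(x, t)                 # dy/dx
ypp = sp.simplify(sp.diff(yp, t) / sp.diff(x, t))  # d2y/dx2
print(sp.simplify(ypp.subs(t,  sp.pi/3)))          # -> -sqrt(3)/2
print(sp.simplify(ypp.subs(t, -sp.pi/3)))          # ->  sqrt(3)/2
\end{verbatim}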
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Opensolutionfile{solutions}[ex]
\section*{Exercises for \ref{sec:Slopes in polar coordinates}}
\begin{enumialphparenastyle}
%%%%%%%%%%
\begin{ex}
\noindent Compute $y'=dy/dx$ and $\ds y''=d^2y/dx^2$.
\begin{multicols}{2}
\begin{enumerate}
\item $r=\theta$
\item $r=1+\sin\theta$
\item $r=\cos\theta$
\item $r=\sin\theta$
\item $r=\sec\theta$
\item $r=\sin(2\theta)$
\end{enumerate}
\end{multicols}
\begin{sol}
\begin{enumerate}
\item $(\theta\cos\theta+\sin\theta)/(-\theta\sin\theta+\cos\theta)$,
$\ds (\theta^2+2)/(-\theta\sin\theta+\cos\theta)^3$
\item $\ds {\cos\theta+2\sin\theta\cos\theta\over
\cos^2\theta-\sin^2\theta-\sin\theta}$,
$\ds {3(1+\sin\theta)\over(\cos^2\theta-\sin^2\theta-\sin\theta)^3}$
\item $\ds (\sin^2\theta-\cos^2\theta)/(2\sin\theta\cos\theta)$,
$\ds -1/(4\sin^3\theta\cos^3\theta)$
\item $\ds {2\sin\theta\cos\theta\over\cos^2\theta-\sin^2\theta}$,
$\ds {2\over(\cos^2\theta-\sin^2\theta)^3}$
\item undefined
\item $\ds {2\sin\theta-3\sin^3\theta\over3\cos^3\theta-2\cos\theta}$,
$\ds {3\cos^4\theta-3\cos^2\theta+2\over2\cos^3\theta(3\cos^2\theta-2)^3}$
\end{enumerate}
\end{sol}
\end{ex}
%%%%%%%%%%
\begin{ex}
\noindent Sketch the curves over the interval $[0,2\pi]$ unless
otherwise stated.
\begin{multicols}{3}
\begin{enumerate}
\item $r=\sin\theta+\cos\theta$
\item $r=2+2\sin\theta$
\item $\ds r={3\over2}+\sin\theta$
\item $r= 2+\cos\theta$
\item $\ds r={1\over2}+\cos\theta$
\item $\ds r=\cos(\theta/2), 0\le\theta\le4\pi$
\item $r=\sin(\theta/3), 0\le\theta\le6\pi$
\item $\ds r=\sin^2\theta$
\item $\ds r=1+\cos^2(2\theta)$
\item $\ds r=\sin^2(3\theta)$
\item $\ds r=\tan\theta$
\item $\ds r=\sec(\theta/2), 0\le\theta\le4\pi$
\item $\ds r=1+\sec\theta$
\item $\ds r={1\over 1-\cos\theta}$
\item $\ds r={1\over 1+\sin\theta}$
\item $\ds r=\cot(2\theta)$
\item $\ds r=\pi/\theta, 0\le\theta\le\infty$
\item $\ds r=1+\pi/\theta, 0\le\theta\le\infty$
\end{enumerate}
\end{multicols}
\end{ex}
\end{enumialphparenastyle}
\section{Work Structure}
This work is divided into the following chapters:
\begin{enumerate}
\item In Chapter~\ref{cap2:theoFrame}, we describe the theoretical framework
underlying this work. This chapter covers the Semantic
Web, Information Extraction methods and how they relate to Semantic Web
technologies, Semantic Parsing applied to translating natural language to
\SPARQL, and the current state and challenges of the Question Answering over
Knowledge Graphs task.
\item In Chapter~\ref{cap3:system}, we give an overview of the proposed Question
Answering system for this work. This includes a general explanation of the
pipeline proposed to generate a \SPARQL{} query, and more specific details on how
each component is designed.
\item In Chapter~\ref{cap4:experimentalDesign}, we go into detail about the
experiments we ran in this work. We present the research questions we aimed to answer,
the baseline we compare our system with, and the metrics used to quantify the
performance of each system.
\item In Chapter~\ref{cap5:results}, we present the results derived from running the
proposed experiments. Aside from that, we include a brief discussion and analysis of
the results.
\item In Chapter~\ref{cap6:conclusions}, we summarize the conclusion of this work,
discuss its limitations and the future work regarding Question Answering over
Knowledge Graphs.
\end{enumerate}
\documentclass[11pt,fleqn,twoside]{article}
\usepackage{makeidx}
\makeindex
\usepackage{palatino} %or {times} etc
\usepackage{plain} %bibliography style
\usepackage{amsmath} %math fonts - just in case
\usepackage{amsfonts} %math fonts
\usepackage{amssymb} %math fonts
\usepackage{lastpage} %for footer page numbers
\usepackage{fancyhdr} %header and footer package
\usepackage{mmpv2}
\usepackage{url}
\usepackage[subtle]{savetrees}
% the following packages are used for citations - You only need to include one.
%
% Use the cite package if you are using the numeric style (e.g. IEEEannot).
% Use the natbib package if you are using the author-date style (e.g. authordate2annot).
% Only use one of these and comment out the other one.
\usepackage{cite}
%\usepackage{natbib}
\begin{document}
\name{Ryan Gouldsmith}
\userid{ryg1}
\projecttitle{MapMyNotes}
\projecttitlememoir{MapMyNotes} %same as the project title or abridged version for page header
\reporttitle{Outline Project Specification}
\version{1.0}
\docstatus{Release}
\modulecode{CS39440}
\degreeschemecode{G401}
\degreeschemename{Computer Science}
\supervisor{Hannah Dee} % e.g. Neil Taylor
\supervisorid{hmd1}
\wordcount{}
%optional - comment out next line to use current date for the document
%\documentdate{10th February 2014}
\mmp
\setcounter{tocdepth}{3} %set required number of level in table of contents
%==============================================================================
\section{Project description}
%==============================================================================
MapMyNotes aims to produce a web application which will allow the user to upload an image of their handwritten notes and store them in a database. It should also provide the functionality to search for a note and add them to a calendar.
People still handwrite their notes and this often leads to large amounts of paper. This can be cumbersome especially if they try to search through notes which may not be organised. There are modern applications which will allow you to take notes, such as EverNote, but people still like writing handwritten notes.
At a basic level the application will allow the user to upload an image of their handwritten notes. It will use simple OCR analysis to identify the title of the notes, the lecturer, the module code, the location and the time by interpreting the author's handwriting.
From this, the application can suggest a module code tag for the notes based on the OCR analysis, but ultimately the user tags their own notes. The notes' metadata (code, location, lecturer and time) will be stored in the database with the note itself. The user can then search for a given module code and find all associated notes for the specific user. Finally, the user must be able to insert the notes into a calendar, for archival purposes; OAuth with Google Calendar has been considered but more research needs to be conducted.
If there is sufficient time remaining, then the application will look to do full OCR recognition on the note and produce a document where the note has been converted to text. There will also be extraction of diagrams and graphs from the note, which the user will be able to drag, rotate, etc. An additional further aim is to automatically rotate the image of the note, allowing notes to be taken at oblique angles, with the system automatically making them perpendicular. Finally, the application could link to other previous notes. These are all long-term goals and I will be focusing mainly on the first part.
The methodology that the project will follow will be an adapted Extreme Programming/Scrum approach. There needs to be investigatory work on how this can be applied to this project, but during the development stage then CRC cards could be beneficial when thinking about the design stage.
%==============================================================================
\section{Proposed tasks}
%==============================================================================
The following tasks are ones which should be completed on the project:
\begin{itemize}
\item \textbf{Investigation of how to extract handwriting from images.} This task will involve investigating OCR tools to see how well handwriting data can be interpreted. The tool will need to be trained on the author's handwriting.
\item \textbf{Investigation into server configuration.} The server will need to be configured with running a web application alongside external libraries such as the OCR tool.
\item \textbf{Configuration of local work environment.} Configuration of local machines will closely match that of the server, using the same version control, either Git or SVN. Configuration of a continuous integration tool to aid project development will be required.
\item \textbf{Development}
\begin{itemize}
\item \textbf{Produce a front end application to input a note.} The front end features must allow a user to upload an image and see the image on the screen. They can add appropriate tags for the module code and they can search for the module code, producing a full list of notes based on the module code.
\item \textbf{Back end parsing the images.} The core business logic should conduct basic OCR recognition of text at the top of the notes. It can then use OpenCV's libraries to extract diagrams from the notes. Finally, the back-end module should integrate with a calendar to archive the notes so they can be found again via the date.
\end{itemize}
\item \textbf{Produce a list of constraints that the notes must follow.} As notes vary from person to person, then a constraint should be attached to the notes to ensure the notes follow a similar format. This should be a small set of rules that can be added to the application which represent a formalised structure, for instance the module code and the lecture title in caps at the top of the page.
\item \textbf{Continuous project meetings and diaries.} The project will consist of weekly meetings with the supervisor. There shall be one group meeting and another individual meeting to show and share progress. A weekly diary will be kept to monitor individual progress on the project and it will be referenced in the final report deliverable.
\item \textbf{Preparation for the demonstrations.} The project will consist of a mid project demonstration and an end demonstration. The mid project demonstration will need to be prepared to show minimal character recognition of notes and the ability to upload an image. The final project demonstration should show the ability to search for a note by tagging it and archiving it in a calendar.
\item \textbf{Investigation of how to extract diagrams and graphs from images.} There will be research conducted to identify an efficient way to identify images, graphs and diagrams from an image of a note. This could then be implemented into the system. This is considered a ``stretch goal''.
\end{itemize}
%==============================================================================
\section{Project deliverables}
%==============================================================================
The following are deliverables which are expected to be completed on the project:
\begin{itemize}
\item \textbf{The MapMyNotes software.} There should be a web application which at the minimum will take an image and save it to a database and integrate with a calendar. This should be well tested and well structured with appropriate comments where necessary along with providing any build scripts from the server.
\item \textbf{Weekly blogposts regarding progress.} There should be a weekly blog post to aid the author in analysing and reflecting on the week and any obstacles they may have overcome. This will be used and referenced in the final report.
\item \textbf{Mid-project Demonstration.} There will be a mid project demonstration which should show current progress on the project.
\item \textbf{The final report.} The report will discuss the work that has been carried out, the process that I have followed, and any libraries or frameworks which have been used throughout the project. It will discuss the project and the design undertaken, evaluating the end outcome and any changes that would be made.
\begin{itemize}
\item \textbf{A collection of user acceptance and model tests.} A series of user acceptance and integration tests should be provided for the front end application. Additionally, back end models should have appropriate logic tested.
\item \textbf{Story Card, CRC cards, burndown charts and backlog items.} As the author intends to use an Extreme Programming/Scrum methodology for their project, then providing CRC cards and story cards would be useful for the final report deliverable to show the author's design decisions.
\item \textbf{OCR training data.} The OCR tool will need to be trained to recognise the author's handwriting. Any training data which aided with this learning stage should be provided.
\item \textbf{The final demonstration.} There will be a final project demonstration which shows the full extent of the software produced. This should be considered when planning iterations.
\end{itemize}
\end{itemize}
\nocite{*} % include everything from the bibliography, irrespective of whether it has been referenced.
% the following line is included so that the bibliography is also shown in the table of contents. There is the possibility that this is added to the previous page for the bibliography. To address this, a newline is added so that it appears on the first page for the bibliography.
\newpage
\addcontentsline{toc}{section}{Initial Annotated Bibliography}
%
% example of including an annotated bibliography. The current style is an author date one. If you want to change, comment out the line and uncomment the subsequent line. You should also modify the packages included at the top (see the notes earlier in the file) and then trash your aux files and re-run.
%\bibliographystyle{authordate2annot}
\bibliographystyle{IEEEannot}
\renewcommand{\refname}{Annotated Bibliography} % if you put text into the final {} on this line, you will get an extra title, e.g. References. This isn't necessary for the outline project specification.
\bibliography{mmp} % References file
\end{document}
\section{Linear independence}
\begin{outcome}
\begin{enumerate}
\item Find the redundant vectors in a set of vectors.
\item Determine whether a set of vectors is linearly independent.
\item Find a linearly independent subset of a set of spanning vectors.
\item Write a vector as a unique linear combination of a set of
linearly independent vectors.
\end{enumerate}
\end{outcome}
% ----------------------------------------------------------------------
\subsection{Redundant vectors and linear independence}
In Example~\ref{exa:redundant-span}, we encountered three vectors
$\vect{u}$, $\vect{v}$, and $\vect{w}$ such that
$\sspan\set{\vect{u},\vect{v},\vect{w}} =
\sspan\set{\vect{u},\vect{v}}$. If this happens, then the vector
$\vect{w}$ does not contribute anything to the span of
$\set{\vect{u},\vect{v},\vect{w}}$, and we say that $\vect{w}$ is
\textbf{redundant}%
\index{redundant vector}%
\index{vector!redundant}. The following definition generalizes this
notion.
\begin{definition}{Redundant vectors, linear dependence, and linear independence}{redundant-vectors}
Consider a sequence of $k$ vectors $\vect{u}_1,\ldots,\vect{u}_k$.
We say that the vector $\vect{u}_j$ is \textbf{redundant}%
\index{redundant vector}%
\index{vector!redundant} if it can be written as a linear
combination of earlier vectors in the sequence, i.e., if
\begin{equation*}
\vect{u}_j = a_1\,\vect{u}_1 + a_2\,\vect{u}_2 + \ldots + a_{j-1}\,\vect{u}_{j-1}
\end{equation*}
for some scalars $a_1,\ldots,a_{j-1}$. We say that the sequence of
vectors $\vect{u}_1,\ldots,\vect{u}_k$ is \textbf{linearly
dependent}%
\index{linear dependence}%
\index{vector!linearly dependent} if it contains one or more
redundant vectors. Otherwise, we say that the vectors are
\textbf{linearly independent}%
\index{linear independence}%
\index{vector!linearly independent}.
\end{definition}
\begin{example}{Redundant vectors}{redundant-vectors}
Find the redundant vectors in the following sequence of vectors. Are
the vectors linearly independent?
\begin{equation*}
\vect{u}_1 = \begin{mymatrix}{c} 0 \\ 0 \\ 0 \\ 0\end{mymatrix},
\quad
\vect{u}_2 = \begin{mymatrix}{c} 1 \\ 2 \\ 2 \\ 3\end{mymatrix},
\quad
\vect{u}_3 = \begin{mymatrix}{c} 1 \\ 1 \\ 1 \\ 1\end{mymatrix},
\quad
\vect{u}_4 = \begin{mymatrix}{c} 2 \\ 3 \\ 3 \\ 4\end{mymatrix},
\quad
\vect{u}_5 = \begin{mymatrix}{c} 0 \\ 1 \\ 2 \\ 3\end{mymatrix},
\quad
\vect{u}_6 = \begin{mymatrix}{c} 3 \\ 3 \\ 2 \\ 2\end{mymatrix}.
\end{equation*}
\end{example}
\begin{solution}
\begin{itemize}
\item The vector $\vect{u}_1$ is redundant, because it is a linear
combination of earlier vectors. (Although there are no earlier
vectors, recall from Example~\ref{exa:span-empty-set} that the empty
sum of vectors is equal to the zero vector $\vect{0}$. Therefore,
$\vect{u}_1$ is indeed an (empty) linear combination of earlier
vectors.)
\item The vector $\vect{u}_2$ is not redundant, because it cannot be
written as a linear combination of $\vect{u}_1$. This is because
the system of equations
\begin{equation*}
\begin{mymatrix}{l|l}
0 & 1 \\
0 & 2 \\
0 & 2 \\
0 & 3 \\
\end{mymatrix}
\end{equation*}
has no solution.
\item The vector $\vect{u}_3$ is not redundant, because it cannot be
written as a linear combination of $\vect{u}_1$ and $\vect{u}_2$.
This is because the system of equations
\begin{equation*}
\begin{mymatrix}{ll|l}
0 & 1 & 1 \\
0 & 2 & 1 \\
0 & 2 & 1 \\
0 & 3 & 1 \\
\end{mymatrix}
\end{equation*}
has no solution.
\item The vector $\vect{u}_4$ is redundant, because $\vect{u}_4 =
\vect{u}_2 + \vect{u}_3$.
\item The vector $\vect{u}_5$ is not redundant, because it cannot be
  written as a linear combination of $\vect{u}_1$, $\vect{u}_2$,
  $\vect{u}_3$, and $\vect{u}_4$. This is because the system of equations
\begin{equation*}
\begin{mymatrix}{llll|l}
0 & 1 & 1 & 2 & 0 \\
0 & 2 & 1 & 3 & 1 \\
0 & 2 & 1 & 3 & 2 \\
0 & 3 & 1 & 4 & 3 \\
\end{mymatrix}
\end{equation*}
has no solution.
\item The vector $\vect{u}_6$ is redundant, because $\vect{u}_6 =
\vect{u}_2 + 2\vect{u}_3-\vect{u}_5$.
\end{itemize}
In summary, the vectors $\vect{u}_1$, $\vect{u}_4$, and $\vect{u}_6$
are redundant, and the vectors $\vect{u}_2$, $\vect{u}_3$, and
$\vect{u}_5$ are not. It follows that the vectors
$\vect{u}_1,\ldots,\vect{u}_6$ are linearly dependent.
\end{solution}
% ----------------------------------------------------------------------
\subsection{The casting-out algorithm}
The last example shows that it can be a lot of work to find the
redundant vectors in a sequence of $k$ vectors. Doing so in the naive way
requires us to solve up to $k$ systems of linear equations!
Fortunately, there is a much faster and easier method, the so-called
{\em casting-out algorithm}.
\begin{algorithm}{Casting-out algorithm}{casting-out}
\index{casting-out algorithm}%
\index{linear independence!casting-out algorithm}%
\index{vector!casting-out algorithm}%
\textbf{Input:} a list of $k$ vectors
$\vect{u}_1,\ldots,\vect{u}_k\in\R^n$. \smallskip
\textbf{Output:} the set of indices $j$ such that $\vect{u}_j$ is
redundant.
\smallskip
\textbf{Algorithm:} Write the vectors $\vect{u}_1,\ldots,\vect{u}_k$
as the columns of an $n\times k$-matrix, and reduce to {\ef}. Every
{\em non-pivot} column, if any, corresponds to a redundant vector.
\end{algorithm}
\begin{example}{Casting-out algorithm}{casting-out}
Use the casting-out algorithm to find the redundant vectors among
the vectors from Example~\ref{exa:redundant-vectors}.
\end{example}
\begin{solution}
Following the casting-out algorithm, we write the vectors
$\vect{u}_1,\ldots,\vect{u}_6$ as the columns of a matrix and reduce
to {\ef}.
\begin{equation*}
\begin{mymatrix}{rrrrrr}
0 & 1 & 1 & 2 & 0 & 3 \\
0 & 2 & 1 & 3 & 1 & 3 \\
0 & 2 & 1 & 3 & 2 & 2 \\
0 & 3 & 1 & 4 & 3 & 2 \\
\end{mymatrix}
\roweq\ldots\roweq
\begin{mymatrix}{rrrrrr}
0 & \circled{1} & 1 & 2 & 0 & 3 \\
0 & 0 & \circled{1} & 1 & -1 & 3 \\
0 & 0 & 0 & 0 & \circled{1} & -1 \\
0 & 0 & 0 & 0 & 0 & 0 \\
\end{mymatrix}.
\end{equation*}
The pivot columns are columns $2$, $3$, and $5$. The non-pivot
columns are columns $1$, $4$, and $6$. Therefore, the vectors
$\vect{u}_1$, $\vect{u}_4$, and $\vect{u}_6$ are redundant. Note
that this is the same answer we got in
Example~\ref{exa:redundant-vectors}.
\end{solution}
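In practice, the row reduction can be delegated to a computer. The
following is a minimal computational sketch, assuming Python with the
\texttt{sympy} library (not otherwise part of this book); the
\texttt{rref} method returns the indices of the pivot columns directly.
\begin{verbatim}
from sympy import Matrix

# Columns are the vectors u_1, ..., u_6 from the example above.
A = Matrix([[0, 1, 1, 2, 0, 3],
            [0, 2, 1, 3, 1, 3],
            [0, 2, 1, 3, 2, 2],
            [0, 3, 1, 4, 3, 2]])

_, pivots = A.rref()          # pivots == (1, 2, 4), 0-based indices
redundant = [j + 1 for j in range(A.cols) if j not in pivots]
print(redundant)              # [1, 4, 6]: u_1, u_4, u_6 are redundant
\end{verbatim}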
The above version of the casting-out algorithm only tells us which of
the vectors (if any) are redundant, but it does not give us a specific
way to write the redundant vectors as linear combinations of previous
vectors. However, we can easily get this additional information if we
reduce the matrix all the way to {\rref}. We call this version of the
algorithm the {\em extended casting-out algorithm}.
\begin{algorithm}{Extended casting-out algorithm}{extended-casting-out}
\index{extended casting-out algorithm}%
\index{casting-out algorithm!extended}%
\index{linear independence!casting-out algorithm!extended}%
\index{vector!casting-out algorithm!extended}%
\textbf{Input:} a list of $k$ vectors
$\vect{u}_1,\ldots,\vect{u}_k\in\R^n$.
\smallskip
\textbf{Output:} the set of indices $j$ such that $\vect{u}_j$ is
redundant, and a set of coefficients for writing each redundant
vector as a linear combination of previous vectors.
\smallskip
\textbf{Algorithm:} Write the vectors $\vect{u}_1,\ldots,\vect{u}_k$
as the columns of an $n\times k$-matrix, and reduce to
{\rref}. Every {\em non-pivot} column, if any, corresponds to a
redundant vector. If $\vect{u}_j$ is a redundant vector, then the
entries in the $j\th$ column of the {\rref} are coefficients for
writing $\vect{u}_j$ as a linear combination of previous
non-redundant vectors.
\end{algorithm}
\begin{example}{Extended casting-out algorithm}{extended-casting-out}
Use the casting-out algorithm to find the redundant vectors among
the vectors from Example~\ref{exa:redundant-vectors}, and write each
redundant vector as a linear combination of previous non-redundant
vectors.
\end{example}
\begin{solution}
Once again, we write the vectors $\vect{u}_1,\ldots,\vect{u}_6$ as
the columns of a matrix. This time we use the extended casting-out
algorithm, which means we reduce the matrix to {\rref} instead of
{\ef}.
\begin{equation*}
\begin{mymatrix}{rrrrrr}
0 & 1 & 1 & 2 & 0 & 3 \\
0 & 2 & 1 & 3 & 1 & 3 \\
0 & 2 & 1 & 3 & 2 & 2 \\
0 & 3 & 1 & 4 & 3 & 2 \\
\end{mymatrix}
\roweq\ldots\roweq
\begin{mymatrix}{rrrrrr}
0 & \circled{1} & 0 & 1 & 0 & 1 \\
0 & 0 & \circled{1} & 1 & 0 & 2 \\
0 & 0 & 0 & 0 & \circled{1} & -1 \\
0 & 0 & 0 & 0 & 0 & 0 \\
\end{mymatrix}.
\end{equation*}
As before, the non-pivot columns are columns $1$, $4$, and $6$, and
therefore, the vectors $\vect{u}_1$, $\vect{u}_4$, and $\vect{u}_6$
are redundant. The non-redundant vectors are $\vect{u}_2$,
$\vect{u}_3$, and $\vect{u}_5$. Moreover, the entries in the sixth
column are $1$, $2$, and $-1$. Note that this means that the sixth
column can be written as $1$ times the second column plus $2$ times
the third column plus $(-1)$ times the fifth column. The same
coefficients can be used to write $\vect{u}_6$ as a linear
combination of previous {\em non-redundant} columns, namely:
\begin{equation*}
\vect{u}_6 = 1\,\vect{u}_2 + 2\,\vect{u}_3 - 1\,\vect{u}_5.
\end{equation*}
Also, the entries in the fourth column are $1$ and $1$, which are
the coefficients for writing $\vect{u}_4$ as a linear combination of
previous non-redundant columns, namely:
\begin{equation*}
\vect{u}_4 = 1\,\vect{u}_2 + 1\,\vect{u}_3.
\end{equation*}
Finally, there are no non-zero entries in the first column. This
means that $\vect{u}_1$ is the empty linear combination
\begin{equation*}
\vect{u}_1 = \vect{0}.
\end{equation*}
\end{solution}
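Continuing the computational sketch from above (again assuming Python
with \texttt{sympy}), the coefficients of each redundant vector can be
read off from the corresponding column of the {\rref}.
\begin{verbatim}
from sympy import Matrix

A = Matrix([[0, 1, 1, 2, 0, 3],
            [0, 2, 1, 3, 1, 3],
            [0, 2, 1, 3, 2, 2],
            [0, 3, 1, 4, 3, 2]])

R, pivots = A.rref()
print(R.col(3).T)  # Matrix([[1, 1, 0, 0]]):  u_4 = u_2 + u_3
print(R.col(5).T)  # Matrix([[1, 2, -1, 0]]): u_6 = u_2 + 2u_3 - u_5
\end{verbatim}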
% ----------------------------------------------------------------------
\subsection{Alternative characterization of linear independence}
Our definition of redundant vectors depends on the order in which the
vectors are written. This is because each redundant vector must be a
linear combination of {\em earlier} vectors in the sequence. For example,
in the sequence of vectors
\begin{equation*}
\vect{u}=\begin{mymatrix}{r} 1 \\ 1 \\ 1 \end{mymatrix},\quad
\vect{v}=\begin{mymatrix}{r} 3 \\ 2 \\ 1 \end{mymatrix},\quad
\vect{w}=\begin{mymatrix}{r} 11 \\ 8 \\ 5 \end{mymatrix},
\end{equation*}
the vector $\vect{w}$ is redundant, because it is a linear combination
of earlier vectors $\vect{w} = 2\,\vect{u}+3\,\vect{v}$. Neither
$\vect{u}$ nor $\vect{v}$ are redundant. On the other hand, in the sequence
of vectors
\begin{equation*}
\vect{u}=\begin{mymatrix}{r} 1 \\ 1 \\ 1 \end{mymatrix},\quad
\vect{w}=\begin{mymatrix}{r} 11 \\ 8 \\ 5 \end{mymatrix},\quad
\vect{v}=\begin{mymatrix}{r} 3 \\ 2 \\ 1 \end{mymatrix},
\end{equation*}
$\vect{v}$ is redundant because
$\vect{v} = \frac{1}{3}\vect{w} - \frac{2}{3}\vect{u}$, but neither
$\vect{u}$ nor $\vect{w}$ are redundant. Note that none of the vectors
have changed; only the order in which they are written is
different. Yet $\vect{w}$ is the redundant vector in the first sequence,
and $\vect{v}$ is the redundant vector in the second sequence.
Because we defined linear independence in terms of the absence of
redundant vectors, you may suspect that the concept of linear
independence also depends on the order in which the vectors are
written. However, this is not the case. The following theorem gives an
alternative characterization of linear independence that is more
symmetric (it does not depend on the order of the vectors).
\begin{theorem}{Characterization of linear independence}{characterization-linear-independence}
Let $\vect{u}_1,\ldots,\vect{u}_k$ be vectors. Then
$\vect{u}_1,\ldots,\vect{u}_k$ are linearly independent%
\index{linear independence!alternative characterization}%
\index{vector!linearly independent!alternative characterization}
if and only if the homogeneous equation
\begin{equation*}
a_1\,\vect{u}_1 + \ldots + a_k\,\vect{u}_k = \vect{0}
\end{equation*}
has only the trivial solution%
\index{trivial solution}%
\index{solution!trivial}%
\index{system of linear equations!trivial solution}.
\end{theorem}
\begin{proof}
Let $A$ be the $n\times k$-matrix whose columns are
$\vect{u}_1,\ldots,\vect{u}_k$. We know from the theory of
homogeneous systems that the system
$a_1\,\vect{u}_1 + \ldots + a_k\,\vect{u}_k = \vect{0}$ has no
non-trivial solution if and only if every column of the {\ef} of $A$
is a pivot column. By the casting-out algorithm, this is the case if
and only if none of the vectors $\vect{u}_1,\ldots,\vect{u}_k$ are
redundant, i.e., if and only if the vectors are linearly independent.
\end{proof}
\begin{example}{Characterization of linear independence}{characterization-linear-independence}
Use the method of
Theorem~\ref{thm:characterization-linear-independence} to determine
whether the following vectors are linearly independent in $\R^4$.
\begin{equation*}
\vect{u}_1 = \begin{mymatrix}{r} 1 \\ 1 \\ 2 \\ 0 \end{mymatrix},\quad
\vect{u}_2 = \begin{mymatrix}{r} 0 \\ 1 \\ 1 \\ 1 \end{mymatrix},\quad
\vect{u}_3 = \begin{mymatrix}{r} 1 \\ 2 \\ 3 \\ 2 \end{mymatrix},\quad
\vect{u}_4 = \begin{mymatrix}{r} 2 \\ 3 \\ 3 \\ 1 \end{mymatrix}.
\end{equation*}
\end{example}
\begin{solution}
We must check whether the equation
\begin{equation*}
a_1 \begin{mymatrix}{r} 1 \\ 1 \\ 2 \\ 0 \end{mymatrix}
+ a_2 \begin{mymatrix}{r} 0 \\ 1 \\ 1 \\ 1 \end{mymatrix}
+ a_3 \begin{mymatrix}{r} 1 \\ 2 \\ 3 \\ 2 \end{mymatrix}
+ a_4 \begin{mymatrix}{r} 2 \\ 3 \\ 3 \\ 1 \end{mymatrix}
= \begin{mymatrix}{r} 0 \\ 0 \\ 0 \\ 0 \end{mymatrix}
\end{equation*}
has a non-trivial solution. If it does, the vectors are linearly
dependent. On the other hand, if there is only the trivial solution,
the vectors are linearly independent. We write the augmented matrix
and solve:
\begin{equation*}
\begin{mymatrix}{rrrr|r}
1 & 0 & 1 & 2 & 0 \\
1 & 1 & 2 & 3 & 0 \\
2 & 1 & 3 & 7 & 0 \\
0 & 1 & 2 & 1 & 0 \\
\end{mymatrix}
\roweq
\ldots
\roweq
\begin{mymatrix}{rrrr|r}
\circled{1} & 0 & 1 & 2 & 0 \\
0 & \circled{1} & 1 & 1 & 0 \\
0 & 0 & \circled{1} & 0 & 0 \\
0 & 0 & 0 & \circled{2} & 0 \\
\end{mymatrix}.
\end{equation*}
Since every column is a pivot column, there are no free variables;
the system of equations has a unique solution, which is
$a_1=a_2=a_3=a_4=0$, i.e., the trivial solution. Therefore, the
vectors $\vect{u}_1,\ldots,\vect{u}_4$ are linearly independent.
\end{solution}
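As an aside, this check is easy to automate. In the following sketch
(again assuming Python with \texttt{sympy}), linear independence
amounts to the matrix of column vectors having a trivial null space.
\begin{verbatim}
from sympy import Matrix

# Columns are u_1, ..., u_4 from the example above.
A = Matrix([[1, 0, 1, 2],
            [1, 1, 2, 3],
            [2, 1, 3, 3],
            [0, 1, 2, 1]])

# An empty null space basis means the homogeneous system has only
# the trivial solution, i.e., the columns are linearly independent.
print(A.nullspace() == [])    # True
\end{verbatim}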
\begin{example}{Characterization of linear independence}{characterization-linear-independence2}
Use the method of
Theorem~\ref{thm:characterization-linear-independence} to determine
whether the following vectors are linearly independent in $\R^3$.
\begin{equation*}
\vect{u}_1 = \begin{mymatrix}{r} 1 \\ 1 \\ 0 \end{mymatrix},\quad
\vect{u}_2 = \begin{mymatrix}{r} 1 \\ 3 \\ 1 \end{mymatrix},\quad
\vect{u}_3 = \begin{mymatrix}{r} 0 \\ 4 \\ 2 \end{mymatrix}.
\end{equation*}
\end{example}
\begin{solution}
As in the previous example, we must check whether the equation
$a_1\,\vect{u}_1 + a_2\,\vect{u}_2 + a_3\,\vect{u}_3 = \vect{0}$ has
a non-trivial solution. Once again, we write the augmented matrix
and solve:
\begin{equation*}
\begin{mymatrix}{rrr|r}
1 & 1 & 0 & 0 \\
1 & 3 & 4 & 0 \\
0 & 1 & 2 & 0 \\
\end{mymatrix}
\roweq
\ldots
\roweq
\begin{mymatrix}{rrr|r}
\circled{1} & 1 & 0 & 0 \\
0 & \circled{2} & 4 & 0 \\
0 & 0 & 0 & 0 \\
\end{mymatrix}.
\end{equation*}
Since column $3$ is not a pivot column, $a_3$ is a free
variable. Therefore, the system has a non-trivial solution, and the
vectors are linearly dependent.
With a small amount of extra work, we can find an actual non-trivial
solution of
$a_1\,\vect{u}_1 + a_2\,\vect{u}_2 + a_3\,\vect{u}_3 =
\vect{0}$. All we have to do is set $a_3=1$ and do a back
substitution. We find that $(a_1,a_2,a_3)=(2,-2,1)$ is a
solution. In other words,
\begin{equation*}
2\vect{u}_1 - 2\vect{u}_2 + \vect{u}_3 = \vect{0}.
\end{equation*}
We can also use this information to write $\vect{u}_3$ as a linear
combination of previous vectors, namely,
$\vect{u}_3 = -2\vect{u}_1 + 2\vect{u}_2$.
\end{solution}
The characterization of linear independence in
Theorem~\ref{thm:characterization-linear-independence} is mostly
useful for theoretical reasons. However, it can also help in solving
problems such as the following.
\begin{example}{Related sets of vectors}{related-linear-independence}
Let $\vect{u},\vect{v},\vect{w}$ be linearly independent vectors in
$\R^n$. Are the vectors $\vect{u}+\vect{v}$, $2\vect{u}+\vect{w}$,
and $\vect{v}-5\vect{w}$ linearly independent?
\end{example}
\begin{solution}
By Theorem~\ref{thm:characterization-linear-independence}, to check
whether the vectors are linearly independent, we must check whether
the equation
\begin{equation}\label{eqn:related-linear-independence1}
a(\vect{u}+\vect{v}) + b(2\vect{u}+\vect{w}) +
c(\vect{v}-5\vect{w})=\vect{0}
\end{equation}
has non-trivial solutions. If it does, the vectors are linearly
dependent; if it does not, they are linearly independent. We can
simplify the equation as follows:
\begin{equation}\label{eqn:related-linear-independence2}
(a+2b)\vect{u} + (a+c)\vect{v} + (b-5c)\vect{w}=\vect{0}.
\end{equation}
Since $\vect{u}$, $\vect{v}$, and $\vect{w}$ are linearly
independent, we know, again by
Theorem~\ref{thm:characterization-linear-independence}, that
equation {\eqref{eqn:related-linear-independence2}} only has the
trivial solution. Therefore,
\begin{eqnarray*}
a + 2b & = & 0, \\
a + c & = & 0, \\
b - 5c & = & 0.
\end{eqnarray*}
We can solve this system of three equations in three variables, and
we find that it has the unique solution $a=b=c=0$. Therefore,
$a=b=c=0$ is the only solution to equation
{\eqref{eqn:related-linear-independence1}}, which means that the
vectors $\vect{u}+\vect{v}$, $2\vect{u}+\vect{w}$, and
$\vect{v}-5\vect{w}$ are linearly independent.
\end{solution}
% ----------------------------------------------------------------------
\subsection{Properties of linear independence}
The following are some properties of linearly independent sets.
\begin{proposition}{Properties of linear independence}{properties-linear-independence}
\index{linear independence!properties}%
\index{vector!linearly independent!properties}%
\index{properties of linear independence}%
\begin{enumerate}
\item \textbf{Linear independence and reordering.} If a sequence
$\vect{u}_1,\ldots,\vect{u}_k$ of $k$ vectors is linearly
independent, then so is any reordering of the sequence (i.e.,
whether or not the vectors are linearly independent does not
depend on the order in which the vectors are written down).
\item \textbf{Linear independence of a subset.} If
$\vect{u}_1,\ldots,\vect{u}_k$ are linearly independent, then so
are $\vect{u}_1,\ldots,\vect{u}_j$ for any $j<k$.
\item \textbf{Linear independence and dimension.}
\label{properties-linear-independence-c}
Let $\vect{u}_1,\ldots,\vect{u}_k$ be a sequence of $k$ vectors in
$\R^n$. If $k>n$, then the vectors are linearly dependent (i.e.,
not linearly independent).
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}
\item This follows from
Theorem~\ref{thm:characterization-linear-independence}, because
whether or not the equation
$a_1\vect{u}_1 + \ldots + a_k\vect{u}_k = \vect{0}$ has a
non-trivial solution does not depend on the order in which the
vectors are written.
\item If one of the vectors in the sequence
$\vect{u}_1,\ldots,\vect{u}_j$ were redundant, then it would be
redundant in the longer sequence $\vect{u}_1,\ldots,\vect{u}_k$ as
well.
\item Let $A$ be the $n\times k$-matrix that has the vectors
$\vect{u}_1,\ldots,\vect{u}_k$ as its columns and suppose that
$k>n$. Then the rank of $A$ is at most $n$, so the {\ef} of $A$
has some non-pivot columns. Therefore, the system
$a_1\vect{u}_1 + \ldots + a_k\vect{u}_k = \vect{0}$ has
non-trivial solutions, and the vectors are linearly dependent by
Theorem~\ref{thm:characterization-linear-independence}.
\end{enumerate}
\end{proof}
\begin{example}{Linear dependence}{linear-dependence}
Are the following vectors linearly independent?
\begin{equation*}
\begin{mymatrix}{r}
1 \\
4
\end{mymatrix},\quad
\begin{mymatrix}{r}
2 \\
3
\end{mymatrix},\quad
\begin{mymatrix}{r}
3 \\
2
\end{mymatrix}.
\end{equation*}
\end{example}
\begin{solution}
Since these are $3$ vectors in $\R^2$, they are linearly dependent
by Proposition~\ref{prop:properties-linear-independence}. No calculation
is necessary.
\end{solution}
% ----------------------------------------------------------------------
\subsection{Linear independence and linear combinations}
In general, there is more than one way of writing a given vector as a
linear combination of some spanning vectors. For example, consider
\begin{equation*}
\vect{u}_1 = \begin{mymatrix}{r} 1 \\ 1 \\ 0 \end{mymatrix},\quad
\vect{u}_2 = \begin{mymatrix}{r} 0 \\ 1 \\ 1 \end{mymatrix},\quad
\vect{u}_3 = \begin{mymatrix}{r} 1 \\ 2 \\ 1 \end{mymatrix},\quad
\vect{v} = \begin{mymatrix}{r} 1 \\ 3 \\ 2 \end{mymatrix}.
\end{equation*}
We can write $\vect{v}$ in many different ways as a linear
combination of $\vect{u}_1,\ldots,\vect{u}_3$, for example
\begin{equation*}
\begin{array}{r@{~}c@{~}c}
\vect{v} &=& -\vect{u}_1 + 2\vect{u}_3, \\
\vect{v} &=& \vect{u}_2 + \vect{u}_3, \\
\vect{v} &=& \vect{u}_1 + 2\vect{u}_2, \\
\vect{v} &=& 2\vect{u}_1 + 3\vect{u}_2 - \vect{u}_3. \\
\end{array}
\end{equation*}
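These representations can be confirmed mechanically; a small sketch
(assuming Python with \texttt{sympy}) recovers the whole one-parameter
family of solutions at once.
\begin{verbatim}
from sympy import Matrix, linsolve, symbols

a1, a2, a3 = symbols('a1 a2 a3')
A = Matrix([[1, 0, 1],      # columns are u_1, u_2, u_3
            [1, 1, 2],
            [0, 1, 1]])
v = Matrix([1, 3, 2])

# {(1 - a3, 2 - a3, a3)}: every choice of a3 gives a way of
# writing v as a linear combination of u_1, u_2, u_3.
print(linsolve((A, v), a1, a2, a3))
\end{verbatim}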
However, when the vectors $\vect{u}_1,\ldots,\vect{u}_k$ are linearly
independent, this does not happen. In this case, the linear
combination is always unique, as the following theorem shows.
\begin{theorem}{Unique linear combination}{unique-linear-combination}
Assume $\vect{u}_1,\ldots,\vect{u}_k$ are linearly independent. Then
every vector $\vect{v}\in\sspan\set{\vect{u}_1,\ldots,\vect{u}_k}$
can be written as a linear combination of
$\vect{u}_1,\ldots,\vect{u}_k$ in a unique way.
\end{theorem}
\begin{proof}
We already know that every vector
$\vect{v}\in\sspan\set{\vect{u}_1,\ldots,\vect{u}_k}$ can be written
as a linear combination of $\vect{u}_1,\ldots,\vect{u}_k$, because
that is the definition of span. So what must be proved is the
uniqueness. Suppose, therefore, that there are two ways of writing
$\vect{v}$ as such a linear combination, i.e., that
\begin{equation*}
\begin{array}{l}
\vect{v} = a_1\,\vect{u}_1 + a_2\,\vect{u}_2 + \ldots + a_k\,\vect{u}_k\quad\mbox{and} \\
\vect{v} = b_1\,\vect{u}_1 + b_2\,\vect{u}_2 + \ldots + b_k\,\vect{u}_k. \\
\end{array}
\end{equation*}
Subtracting one equation from the other, we get
\begin{equation*}
\vect{0} = (a_1-b_1)\vect{u}_1 + (a_2-b_2)\vect{u}_2 + \ldots + (a_k-b_k)\vect{u}_k.
\end{equation*}
Since $\vect{u}_1,\ldots,\vect{u}_k$ are linearly independent, we
know by Theorem~\ref{thm:characterization-linear-independence} that
the last equation only has the trivial solution, i.e., $a_1-b_1=0$,
$a_2-b_2=0$, \ldots, $a_k-b_k=0$. It follows that $a_1=b_1$,
$a_2=b_2$, \ldots, $a_k=b_k$. We have shown that any two ways of
writing $\vect{v}$ as a linear combination of
$\vect{u}_1,\ldots,\vect{u}_k$ are equal. Therefore, there is only
one way of doing so.
\end{proof}
% ----------------------------------------------------------------------
\subsection{Removing redundant vectors}
Consider the span of some vectors $\vect{u}_1,\ldots,\vect{u}_k$. As
we just saw in the previous subsection, the span is especially nice
when the vectors $\vect{u}_1,\ldots,\vect{u}_k$ are linearly
independent, because in that case, every element $\vect{v}$ of the
span can be {\em uniquely} written in the form
$\vect{v} = a_1\,\vect{u}_1 + \ldots + a_k\,\vect{u}_k$.
But what if we have a span of some vectors
$\vect{u}_1,\ldots,\vect{u}_k$ that are not linearly independent? It
turns out that we can always find some linearly independent vectors
that span the same set. In fact, this can be done by simply removing
the redundant vectors from $\vect{u}_1,\ldots,\vect{u}_k$. This is the
subject of the following theorem.
\begin{theorem}{Removing redundant vectors}{linearly-independent-subset}
\index{redundant vector!removing}%
\index{vector!redundant!removing}%
Let $\vect{u}_1,\ldots,\vect{u}_k$ be a sequence of vectors, and
suppose that $\vect{u}_{j_1},\ldots,\vect{u}_{j_\ell}$ is the
subsequence of vectors that is obtained by removing all of the
redundant vectors. Then $\vect{u}_{j_1},\ldots,\vect{u}_{j_\ell}$
are linearly independent and
\begin{equation*}
\sspan\set{\vect{u}_{j_1},\ldots,\vect{u}_{j_\ell}}
=
\sspan\set{\vect{u}_1,\ldots,\vect{u}_k}.
\end{equation*}
\end{theorem}
\begin{proof}
Remove the redundant vectors one by one, from right to left. Each
time a redundant vector is removed, the span does not change; the
proof of this is similar to Example~\ref{exa:redundant-span}.
Moreover, the resulting sequence of vectors
$\vect{u}_{j_1},\ldots,\vect{u}_{j_\ell}$ is linearly
independent, because if any of these vectors were a linear
combination of earlier ones, then it would have been redundant in
the original sequence of vectors, and would have therefore been removed.
\end{proof}
\begin{example}{Finding a linearly independent set of spanning vectors}{linearly-independent-subset}
Find a subset of $\set{\vect{u}_1,\ldots,\vect{u}_4}$ that is
linearly independent and has the same span as
$\set{\vect{u}_1,\ldots,\vect{u}_4}$.
\begin{equation*}
\vect{u}_1 = \begin{mymatrix}{r} 1 \\ 0 \\ -2 \\ 3 \end{mymatrix},
\quad
\vect{u}_2 = \begin{mymatrix}{r} -2 \\ 0 \\ 4 \\ -6 \end{mymatrix},
\quad
\vect{u}_3 = \begin{mymatrix}{r} 1 \\ 2 \\ 2 \\ 1 \end{mymatrix},
\quad
\vect{u}_4 = \begin{mymatrix}{r} 3 \\ 4 \\ 2 \\ 5 \end{mymatrix}.
\end{equation*}
\end{example}
\begin{solution}
We use the casting-out algorithm to find the redundant vectors:
\begin{equation*}
\begin{mymatrix}{rrrr}
1 & -2 & 1 & 3 \\
0 & 0 & 2 & 4 \\
-2 & 4 & 2 & 2 \\
3 & -6 & 1 & 5 \\
\end{mymatrix}
\roweq\ldots\roweq
\begin{mymatrix}{rrrr}
\circled{1} & -2 & 1 & 3 \\
0 & 0 & \circled{2} & 4 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\end{mymatrix}.
\end{equation*}
Therefore, the redundant vectors are $\vect{u}_2$ and
$\vect{u}_4$. We remove them (``cast them out'') and are left with
$\vect{u}_1$ and $\vect{u}_3$. Therefore, by
Theorem~\ref{thm:linearly-independent-subset},
$\set{\vect{u}_1,\vect{u}_3}$ is linearly independent and
$\sspan\set{\vect{u}_1,\vect{u}_3} =
\sspan\set{\vect{u}_1,\ldots,\vect{u}_4}$.
\end{solution}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 2) Experimental Setup
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Making a conscious effort to remove "britishisms" from the text -- thus, hence, whilst, while, yet, furthermore, whereas, etc.
%Also trying to slim down on adjectives. One mans's "very" is another man's "some".
%To make diff/merge easier. I am now putting 1 "statement" (sentence) per line.
%Trying to apply the KISS principle, especially to the language used, to help out our American reviewers.
\begin{algorithm}
\SetKw{To}{to}
\SetKw{StartTimer}{start $\leftarrow$ timer}
\SetKw{StopTimer}{stop $\leftarrow$ timer}
\SetKw{Continue}{continue}
\ForEach{ packet in packet\_buffer }{
\BlankLine
\StartTimer{}\;
\BlankLine
packet.ethernet.src $\leftarrow$ 0xFFFFFF\;
packet.ethernet.dst $\leftarrow$ 0x000001\;
packet.ethernet.type $\leftarrow$ 0x0800\;
$\cdots$\\
packet.ip.size $\leftarrow$ \emph{big\_endian}(IP\_SIZE)\;
$\cdots$\\
packet.ip.protocol $\leftarrow$ 0x11\;
$\cdots$\\
packet.udp.size $\leftarrow$ \emph{big\_endian}(UDP\_SIZE)\;
$\cdots$\\
\For{ i $\leftarrow$ 0 \To{} UDP\_SIZE $\div{}$ WORD\_SIZE}{
packet.udp.data[\emph{i}] $\leftarrow$ const integer\;
}
\BlankLine
\StopTimer{}\;
\BlankLine
}
\caption{Generating packets}
\label{alg:net_gen}
\end{algorithm}
\section{Testing High Speed Networks}
\label{s:experiments}
To test the effect of packet layouts on processing speed, we would like to use a flexible, high-speed network adapter.
Some FPGA-based 100Gb/s network adapters are beginning to appear on the market~\cite{100gnic}, but they are rare and expensive.
Additionally, adapting thousands of lines of legacy network stack code to process new packet formats will be time consuming.
To work around these problems, we take a different approach.
Instead of building a network stack for a particular adapter, we use our experience in high-speed network adapter design \emph{[reference removed for blind review]} to build a generic network stack that runs directly out of system memory.
We assume that an adapter will eventually become available to place packets there.
This allows us to test the absolute speed of packet processing quickly and flexibly.
It also gives us an indication of the upper limits that software could expect to achieve.
By writing our own network stack, we free ourselves from legacy code bases and can ensure that our packet processor is flexible enough to cope with a range of different packet formats.
Our initial network stack only processes UDP packets, delivered over IP version 4 transport using Ethernet frames.
In principle any protocol or transport could be supported, but this is the simplest to implement.
By doing so, we can get a strong indication of the practical upper bounds of any reasonable network stack.
Our network stack is divided into two parts: generating and receiving packets.
Packet generation is detailed in Algorithm~\ref{alg:net_gen} and packet receiving is detailed in Algorithm~\ref{alg:net_rcv}.
For each experiment, we first run the generation phase and then the receiving phase. This way we can test both parts independently.
As with any sensible network stack, we only perform endian conversions for values that are non-constant.
For example, the Ethernet packet type field is a constant, but the IP header size is not.
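The sketch below illustrates the idea (shown in Python purely for exposition; it is not our implementation, and the field values are illustrative): constant fields are stored pre-converted to network byte order, so only variable fields pay for a conversion on each packet.
\begin{verbatim}
import struct

ETH_TYPE_IPV4 = 0x0800   # constant: can be stored pre-converted
IP_SIZE = 1500           # illustrative value only

def build_header_prefix(ip_size):
    # '!' selects network (big-endian) byte order; only the
    # non-constant ip_size is converted at generation time.
    eth_type = struct.pack('!H', ETH_TYPE_IPV4)
    ip_len = struct.pack('!H', ip_size)
    return eth_type + ip_len

print(build_header_prefix(IP_SIZE).hex())   # 080005dc
\end{verbatim}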
Our packet buffer is pinned so that swapping cannot happen and aligned to a 4kB page boundary.
Our experiments are run using a top-of-the-range Intel Core i7 Ivy Bridge CPU (4930K), with 12 threads running at 3.4GHz.
We ensure that the network-stack process is pinned to a unique CPU, with no other processes or hyperthreads interfering, and that the process is run with full real-time priority.
This environment is overly generous.
Real in-kernel network stacks have limited time to run and have to contend with interrupts and with other threads competing for resources.
\begin{algorithm}
\SetKw{Continue}{continue}
\SetKw{To}{to}
\SetKw{StartTimer}{start $\leftarrow$ timer}
\SetKw{StopTimer}{stop $\leftarrow$ timer}
\ForEach{ packet in packet\_buffer }{
\BlankLine
\StartTimer{}\;
\BlankLine
\If{ packet.ethernet.src $\neq$ 0xFFFFFF }{
\Continue{};
}
\If{ packet.ethernet.dst $\neq$ 0x000001 }{
\Continue{};
}
\If{ packet.ethernet.type $\neq$ 0x0800 }{
\Continue{};
}
$\cdots$\\
ip\_size $\leftarrow$ \emph{little\_endian}(packet.ip.size)\;
total\_ip\_bytes $\leftarrow$ total\_ip\_bytes + ip\_size\;
$\cdots$\\
\If{ packet.ip.protocol $\neq$ 0x11 }{
\Continue{}\;
}
$\cdots$\\
udp\_size $\leftarrow$ \emph{little\_endian}(packet.udp.size)\;
total\_udp\_bytes $\leftarrow$ total\_udp\_bytes + udp\_size\;
$\cdots$\\
counter $\leftarrow$ 0\;
\For{ i $\leftarrow$ 0 \To{} udp\_size $\div{}$ WORD\_SIZE}{
counter $\leftarrow$ packet.udp.data[\emph{i}];
}
\BlankLine
\StopTimer{}\;
\BlankLine
}
\caption{Receiving packets}
\label{alg:net_rcv}
\end{algorithm}
\chapter{Formulae}
{\color{red} This chapter may not be required.}
Plane wave basis $b_{\alpha}(\mathbf{r})$:
\begin{equation}
b_{\alpha}(\mathbf{r}) = \frac{1}{\sqrt{\Omega}} e^{i\mathbf{G}_{\alpha}\cdot\mathbf{r}}
\end{equation}
Lattice vectors of unit cell in reciprocal space:
\begin{equation}\label{eq:recvecs}
\mathbf{b} = 2\pi\left( a^{T} \right)^{-1}
\end{equation}
\textbf{G}-vectors:
\begin{equation}
\mathbf{G} = i \mathbf{b}_{1} + j \mathbf{b}_{2} + k \mathbf{b}_{3}
\end{equation}
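A quick numerical check of Eq.~(\ref{eq:recvecs}) and the
\textbf{G}-vector construction (a sketch assuming NumPy, with the unit
cell vectors stored as the rows of $a$ and an arbitrary cubic cell):
\begin{verbatim}
import numpy as np

a = np.diag([5.0, 5.0, 5.0])        # unit cell vectors as rows
b = 2 * np.pi * np.linalg.inv(a.T)  # reciprocal vectors as rows

# A G-vector is an integer combination of the rows of b:
i, j, k = 1, 0, 2
G = i * b[0] + j * b[1] + k * b[2]

# Verify a_i . b_j = 2 pi delta_ij:
print(np.allclose(a @ b.T, 2 * np.pi * np.eye(3)))  # True
\end{verbatim}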
Structure factor:
\begin{equation}
S_{I}(\mathbf{G}) = \sum_{I} e^{ -i\mathbf{G}\cdot\mathbf{X}_{I} }
\end{equation}
\chapter*{Acknowledgments}
Life has taught us that love does not consist in gazing at each other but in
looking outward together in the same direction.
\chapter{Ingredients}
\label{append:ingred}
\section{Hokkien Mee: Stir-fry Noodles with Shrimps and Squids}
\lipsum[51]
\section{Nasi Lemak: Rice Boiled with Coconut Milk and Accompanied with Chilli Paste and Anchovies}
\lipsum[52]
\section{Prata: A Dough Served with Curry}
\lipsum[53]
\section{Fried Carrot Cake: Stir-fry Rice Cakes with Eggs}
\lipsum[54]
\chapter{Conclusion}\label{chap:4}
\begin{quote}
``And as the ends and ultimates of all things accord in some mean and measure with their inceptions and originals, that same multiplicit concordance which leads forth growth from birth accomplishing by a retrogressive metamorphosis that minishing and ablation towards the final which is agreeable unto nature so is it with our subsolar being.'' \\
--- James Joyce, \emph{Ulysses}
\end{quote}
This thesis presented two case studies.
In the first, we uncovered the nature of skimming through the lens of AZDWM inspectors.
We also discussed potential pitfalls for the fledgling empiricist.
The second case study was the design of a crowd-sourced skimmer detection application.
In that case study, we addressed details of the Android API as well as system design.
Many of the features which made Bluetana practical in the field were not noted in the original paper.
The next two sections address the broader epistemological ground of this work.
\section{On Crowdsourced Application Design}
An individual observer cannot know everything: reality has a high degree of complexity.
Thus collaboration, realized here as the implementation of crowd-sourced applications, is necessary for knowledge.
Crowd-sourcing can solve problems that need parallelism across space or time.
It is also a rich ground for that which is non-computational.
These systems must cooperate with nontrivial-to-classify human input (chapter \ref{chap:2}).
This occurs through either the data they analyze or the system itself.
Thus, the design of crowd-sourced systems can lead to an understanding of human behavior.
\section{On Effective Data Analysis from Heterogeneous Sources}\label{sec:effective-data-analysis}
Information contained within this reality is non-homogeneous and avoids rigorous classification on account of the complexities generated via the combinatoric pairing of atomic deterministic systems that may be symbolically paralleled in logic.
A large part of contemporary research is an attempt at a general solution to this ``problem''.
The closest we have come is the approximation of the human brain.
However, approximation is not the only option.
It may be possible to build automated systems for doing the analysis presented in this thesis.
These systems' implementation may also be a crucial epistemological step in humanity's development.
They would certainly save a lot of time.
%%%
\section{Overview of \pelelm}
\pelelm\ evolves chemically reacting low Mach number flows with block-structured adaptive mesh refinement (AMR).
The code depends upon the \amrex\ library (\url{https://github.com/AMReX-Codes/amrex}) to provide
the underlying data structures, and tools to manage and operate on them across
massively parallel computing architectures. \pelelm\ also borrows heavily
from the source code and algorithmic infrastructure of the \iamr\ code (\url{https://github.com/AMReX-Codes/IAMR}).
\iamr\ implements an AMR integration for the variable-density incompressible Navier-Stokes equations.
\pelelm\ extends \iamr\ to include complex coupled models for generalized thermodynamic relationships,
multi-species transport and chemical reactions. The core algorithms in \pelelm\ (and \iamr) are described
in the following papers:
\begin{itemize}
\item {\it A conservative, thermodynamically consistent numerical approach for low Mach number
combustion. I. Single-level integration},
A.~Nonaka, J.~B.~Bell, and M.~S.~Day, Combust. Theor. Model., vol. 22, no. 1, pp. 156-184, 2018
\url{https://ccse.lbl.gov/Publications/nonaka/LMC_Pressure.pdf} \cite{LMC-P}.
\item {\it A Deferred Correction Coupling Strategy for Low Mach Number Flow with Complex Chemistry},
A.~Nonaka, J.~B.~Bell, M.~S.~Day, C.~Gilet, A.~S.~Almgren, and M.~L.~Minion,
Combust. Theory and Modelling, 16(6), 1053-1088, 2012.
\url{http://www.tandfonline.com/doi/abs/10.1080/13647830.2012.701019} \cite{LMC_SDC}
\item {\it Numerical Simulation of Laminar Reacting Flows with Complex Chemistry},
M.~S.~Day and J.~B.~Bell,
Combust. Theory Modelling 4(4) pp.535-556, 2000.
\url{http://www.tandfonline.com/doi/abs/10.1088/1364-7830/4/4/309} \cite{DayBell:2000}
\item {\it An Adaptive Projection Method for Unsteady, Low-Mach Number Combustion},
R.~B.~Pember, L.~H.~Howell, J.~B.~Bell, P.~Colella, W.~Y.~Crutchfield, W.~A.~Fiveland, and J.~P.~Jessee,
Comb. Sci. Tech., 140, pp. 123-168, 1998.
\url{http://www.tandfonline.com/doi/abs/10.1080/00102209808915770} \cite{pember-flame}
\item {\it A Conservative Adaptive Projection Method for the Variable Density Incompressible Navier-Stokes Equations},
A.~S.~Almgren, J.~B.~Bell, P.~Colella, L.~H.~Howell, and M.~L.~Welcome,
J.~Comp.~Phys., 142, pp. 1-46, 1998.
\url{http://www.sciencedirect.com/science/article/pii/S0021999198958909} \cite{IAMR}
\end{itemize}
\section{The low Mach number flow equations}
\newcommand{\etal}{{\it et al.}}
\pelelm\ solves the reacting Navier-Stokes flow equations in the \emph{low Mach number} regime~\cite{DayBell:2000,rehm1978equations,Majda:1985}. In the low Mach number regime, the characteristic fluid velocity is small compared to the sound speed, and the effect of acoustic wave propagation is unimportant to the overall dynamics of the system. Accordingly, acoustic wave propagation can be mathematically removed from the equations of motion, allowing for a numerical time step based on an advective CFL condition, and this leads to an increase in the allowable time step of order $1/M$ over an explicit, fully compressible method ($M$ is the Mach number). In this mathematical framework, the total pressure is decomposed into the sum of a spatially constant (ambient) thermodynamic pressure $P_0$ and a perturbational pressure, $\pi({\vec x})$ that drives the flow. Under suitable conditions (\cite{Majda:1985}), $\pi/P_0 = \mathcal{O} (M^2)$.
The set of conservation equations specialized to the low Mach number regime is a system of PDEs with advection, diffusion and reaction (ADR) processes that are constrained to evolve on the manifold of a spatially constant $P_0$:
\begin{eqnarray}
&&\frac{\partial (\rho \boldsymbol{u})}{\partial t} +
\nabla \cdot \left(\rho \boldsymbol{u} \boldsymbol{u} + \tau \right)
= -\nabla \pi + \rho \, \boldsymbol{F} ,
\nonumber
\\
&&\frac{\partial (\rho Y_m)}{\partial t} +
\nabla \cdot \left( \rho Y_m \boldsymbol{u} + \boldsymbol{\mathcal{F}}_{m} \right)
= \rho \, \dot{\omega}_m,
\label{eq:gen}
\\
&&\frac{ \partial (\rho h)}{ \partial t} +
\nabla \cdot \left( \rho h \boldsymbol{u} + \boldsymbol{\mathcal{Q}} \right) = 0 ,
\nonumber
\end{eqnarray}
where $\rho$ is the density, $\boldsymbol{u}$ is the velocity, $h$ is the mass-weighted enthalpy, $T$ is temperature and $Y_m$ is the mass fraction of species $m$. $\dot{\omega}_m$ is the molar production rate for species $m$, the modeling of which will be described in Section~\ref{ChemKinetics}. $\tau$ is the stress tensor, $\boldsymbol{\mathcal{Q}}$ is the heat flux and $\boldsymbol{\mathcal{F}}_m$
%$\,=\,$$- \rho \boldsymbol{\mathcal{D}}_i \nabla X_{i}$
are the species diffusion fluxes. These transport fluxes require the evaluation of transport coefficients (e.g., the viscosity $\mu$, the conductivity $\lambda$ and the diffusivity matrix $D$) which are computed using the library EGLIB~\cite{EGLIB}, as will be described in more depth in Section~\ref{DifFluxesEGLIB}. The momentum source, $\boldsymbol{F}$, is an external forcing term. For example, we have used $\boldsymbol{F}$ to implement a long-wavelength time-dependent force to establish and maintain quasi-stationary turbulence.
These evolution equations are supplemented by an equation of state for the thermodynamic pressure. For example, the ideal gas law,
\begin{eqnarray}
P_0(\rho,Y_m,T)=\frac{\rho \mathcal{R} T}{W}=\rho \mathcal{R} T \sum_m \frac{Y_m}{W_m}
\label{eq:eos}
\end{eqnarray}
can be used, although \pelelm\ will soon support other more general expressions, such as
Soave-Redlich-Kwong~\cite{Soave1972}. In (\ref{eq:eos}), $W_m$ and $W$ are the species $m$ and mean
molecular weights, respectively. To close the system
we also require a relationship between enthalpy, species and temperature. We adopt the definition used in the CHEMKIN standard,
\begin{eqnarray}
h=\sum_m Y_m h_m(T)
\label{eq:hofT}
\end{eqnarray}
where $h_m$ is the species $m$ enthalpy. Note that expressions for $h_m(T)$ (see Section~\ref{ThermoProp}) incorporate the heat of formation for each species.
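As a concrete illustration of the equation of state (\ref{eq:eos}), the
following minimal sketch (assuming NumPy; the composition and state
values are arbitrary and not taken from any \pelelm\ case) evaluates the
mean molecular weight and the corresponding density:
\begin{verbatim}
import numpy as np

Ru = 8.314462618   # J/(mol K), universal gas constant
W = np.array([2.016e-3, 32.000e-3, 28.014e-3])  # kg/mol: H2, O2, N2
Y = np.array([0.02, 0.22, 0.76])                # mass fractions

W_mix = 1.0 / np.sum(Y / W)   # mean molecular weight: 1/W = sum Y_m/W_m
T, P0 = 300.0, 101325.0       # K, Pa
rho = P0 * W_mix / (Ru * T)   # density from the ideal gas law
print(W_mix, rho)
\end{verbatim}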
Neither species diffusion nor reactions redistribute the total mass, hence we have $\sum_m \boldsymbol{\mathcal{F}}_m = 0$ and $\sum_m \dot{\omega}_m = 0$. Thus, summing the species equations and using the definition $\sum_m Y_m = 1$ we obtain the continuity equation:
\begin{eqnarray}
\frac{\partial \rho}{\partial t} + \nabla \cdot \rho \boldsymbol{u} = 0
\label{eq:cont}
\end{eqnarray}
Equations~(\ref{eq:gen}), together with the equation of state (\ref{eq:eos}), form a differential-algebraic equation (DAE) system that describes an evolution subject to a constraint. A standard approach to attacking such a system computationally is to differentiate the constraint until it can be recast as an initial value problem. Following this procedure, we set the thermodynamic pressure constant in the frame of the fluid,
\begin{eqnarray}
\frac{DP_0}{Dt} = 0
\label{eq:deos}
\end{eqnarray}
and observe that if the initial conditions satisfy the constraint, an evolution satisfying (\ref{eq:deos})
will continue to satisfy the constraint over all time. Expanding (\ref{eq:deos}) via the chain rule, and using
Eq.~\ref{eq:cont}:
\begin{eqnarray}
\nabla \cdot \boldsymbol{u} = \frac{1}{T}\frac{DT}{Dt} + W \sum_m \frac{1}{W_m} \frac{DY_m}{Dt} = S
\label{eq:veloconstr}
\end{eqnarray}
The constraint here takes the form of a condition on the divergence of the flow. Note that the actual expressions to use in (\ref{eq:veloconstr}) will depend upon the chosen models for evaluating the transport fluxes in (\ref{eq:gen}).
%%%
\subsection{Transport fluxes}
\label{sub:DifFluxes}
Expressions for the transport fluxes appearing in Eqs.~(\ref{eq:gen}) can be approximated in the Chapman-Enskog expansion as~\cite{Ern:1994multicomponent}:
\begin{eqnarray*}
&&\boldsymbol{\mathcal{F}}_{m} = \rho Y_m \boldsymbol{V_m}
\\ [2mm]
&&\tau_{i,j} = - \Big(\kappa - \frac{2}{3} \mu \Big) \delta_{i,j} \frac{\partial {u_k}}{\partial x_k} - \mu \Big(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\Big)
\\ [2mm]
&&\boldsymbol{\mathcal{Q}} = \sum_m h_m \boldsymbol{\mathcal{F}}_{m} - \lambda' \nabla T - P_0 \sum_m \theta_m \boldsymbol{d_m}
\end{eqnarray*}
where $\mu$ is the shear viscosity, $\kappa$ is the bulk viscosity, and $\lambda'$ is the partial thermal conductivity. In the \textit{full matrix diffusion model}, the vector of $m$ species diffusion velocities, $\boldsymbol{V_m}$, is given by:
\begin{eqnarray*}
\boldsymbol{V_m} = - \sum_j {D}_{m,j} \boldsymbol{d_j} - \theta_m \nabla ln(T)
\end{eqnarray*}
where ${D}_{m,j}$ is the diffusion matrix, and $\boldsymbol{\theta}$ are thermal diffusion coefficients associated with the Soret (mass concentration flux due to an energy gradient) and Dufour (the energy flux due to a mass concentration gradient) effects. The $m$ species transport driving force due to composition gradients, $\boldsymbol{d_m}$, is given by~\cite{Ern:1994multicomponent}:
\begin{eqnarray*}
\boldsymbol{d_m} = \nabla X_m + (X_m -Y_m) \frac{\nabla P_0}{P_0}
\label{dmeqs}
\end{eqnarray*}
Alternatively (as in the library EGLIB~\cite{EGLIB}) the thermal diffusion \emph{ratios} $\boldsymbol{\chi}$ may be preferred~\cite{Ern:1994multicomponent} and the diffusion velocities and energy flux recast as:
\begin{eqnarray}
\boldsymbol{V_m} = - \sum_j {D}_{m,j} ( \boldsymbol{d_j} + \chi_j \nabla ln(T))
\\
\boldsymbol{\mathcal{Q}} = \sum_m h_m \boldsymbol{\mathcal{F}}_{m} - \lambda \nabla T + P_0 \sum_m \chi_m \boldsymbol{V_m}
\end{eqnarray}
where ${D} \boldsymbol{\chi} = \boldsymbol{\theta}$.
%and $\lambda' \nabla T = \lambda \nabla T + P_0 \sum_m \theta_m \nabla ln(T)$.
As can be seen, the expressions for these fluxes rely upon several transport coefficients that need to be evaluated. However, in the present framework several effects are neglected, thus simplifying the flux evaluation, as will be seen in Section~\ref{SumUpEq}.
%%%
\section{The \pelelm\ equation set}
\label{SumUpEq}
The full diffusion model couples together the advance of all thermodynamic fields, including a dense matrix transport operator that is cumbersome to deal with computationally, while also being generally viewed as overkill for most practical combustion applications -- particularly those involving turbulent fluid dynamics. For \pelelm, we make the following simplifying assumptions:
\begin{enumerate}
\item The bulk viscosity, $\kappa$ is negligible, compared to the shear viscosity,
\item The low Mach limit implies that there are no spatial gradients in the thermodynamic pressure,
\item The \textit{mixture-averaged}\ diffusion model is assumed,
\item Finally, Dufour and Soret effects are negligible
\end{enumerate}
With these assumptions, the conservation equations take the following form:
\begin{eqnarray}
&&\frac{\partial (\rho \boldsymbol{u})}{\partial t} +
\nabla \cdot \left(\rho \boldsymbol{u} \boldsymbol{u} + \tau \right)
= -\nabla \pi + \rho \, \boldsymbol{F} ,
\nonumber
\\
&&\frac{\partial (\rho Y_m)}{\partial t} +
\nabla \cdot \left( \rho Y_m \boldsymbol{u} + \boldsymbol{\mathcal{F}}_{m} \right)
= \rho \, \dot{\omega}_m,
\label{eq:pelelm}
\\
&&\frac{ \partial (\rho h)}{ \partial t} +
\nabla \cdot \left( \rho h \boldsymbol{u} + \boldsymbol{\mathcal{Q}} \right) = 0 ,
\nonumber
\end{eqnarray}
with
\begin{eqnarray*}
&&\boldsymbol{\mathcal{F}}_{m} = \rho Y_m \boldsymbol{V_m} = - \rho D_{m,mix} \nabla X_m
\\ [2mm]
&&\tau_{i,j} = \frac{2}{3} \mu \delta_{i,j} \frac{\partial {u_k}}{\partial x_k} - \mu \Big(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\Big)
\\ [2mm]
&&\boldsymbol{\mathcal{Q}} = \sum_m h_m \boldsymbol{\mathcal{F}}_{m} - \lambda \nabla T
\end{eqnarray*}
Using these expressions, we can write an equation for $T$ that is needed in order to evaluate the right-hand side of the divergence constraint:
\begin{eqnarray}
\rho \, C_p \frac{DT}{Dt} &=& \nabla \cdot \lambda \nabla T
+ \sum_m \Big( h_m \nabla \cdot \boldsymbol{\mathcal{F}}_{m}
- \nabla \cdot h_m \boldsymbol{\mathcal{F}}_{m}
- h_m \rho \dot\omega_m \Big)
\label{eq:T}
\end{eqnarray}
where $C_p = \partial h/\partial T$ is the specific heat of the mixture at constant pressure. Equation~\ref{eq:veloconstr} then becomes:
\begin{eqnarray}
\nabla \cdot \boldsymbol{u} &=&\frac{1}{\rho \, C_p T}\Big[ \nabla \cdot \lambda \nabla T
+ \sum_m \Big( h_m \nabla \cdot \boldsymbol{\mathcal{F}}_{m} - \nabla \cdot h_m \boldsymbol{\mathcal{F}}_{m}\Big) \Big] \nonumber
\\
&&- \frac{W}{\rho} \sum_m \frac{1}{W_m} \nabla \cdot \boldsymbol{\mathcal{F}}_{m} + \frac{1}{\rho} \sum_m \Big( \frac{W}{W_m} -\frac{h_m(T)}{c_{p} T} \Big)\dot{\omega}_m
\label{eq:igldivu}
\end{eqnarray}
% We will talk about this later in the document...
% The resolution of the system of equations presented is performed in a fractional step framework which prohibits, in general, to numerically conserve both species and enthalpy while satisfying the equation of state (eos) Eq.~\ref{eq:eos}. To deal with this issue, a pressure correction term is added to the constraint S in Eq.~\ref{eq:veloconstr} to damp the system back onto the ambient eos (?? CHECK THAT FORMULA):
% \begin{eqnarray}
% \hat{S} = S + f \frac{c_{p} - R}{\Delta t c_{p} \hat{p}} (\hat{p} - P_0)
% \end{eqnarray}
% where $\hat{p}$ is computed via the eos Eq.~\ref{eq:eos}, $R = \mathcal{R}/W$ and $f$ is a damping factor ($<1$).
% \subsection{Transport coefficients and mixture rules: the Ern and Giovangigli approximations}
% \label{subs:EGLIB}
The mixture-averaged transport coefficients discussed above ($\mu$, $\lambda$ and $D_{m,mix}$) can be evaluated from transport properties of the pure species. We follow the treatment used in the EGLib library, based on the theory/approximations developed by Ern and Giovangigli~\cite{Ern:1994,Ern:2004}.
The following choices are currently implemented in \pelelm:
\begin{itemize}
\item The viscosity, $\mu$, is estimated based \textcolor{red}{FIXME}
\item The conductivity, $\lambda$, is based on an empirical mixture formula:
\begin{eqnarray}
\lambda = \frac{1}{2} (\mathcal{A}_{-1} + \mathcal{A}_{1})
\end{eqnarray}
with
\begin{eqnarray}
\mathcal{A}_{\alpha}= \Big( \sum_m X_m (\lambda_m)^{\alpha} \Big)^{1/\alpha}
\end{eqnarray}
\item The diffusion flux is approximated using the diagonal matrix $diag(\widetilde{ \Upsilon})$, where:
\begin{eqnarray}
\widetilde{ \Upsilon}_m = D_{m,mix}, \;\;\;\mbox{where} \;\;\; D_{m,mix} = \frac{1-Y_m}{ \sum_{j \neq m} X_j / \mathcal{D}_{m,j}}
\label{eq:dmix}
\end{eqnarray}
This leads to a mixture-averaged approximation that is similar to that of Hirschfelder-Curtiss~\cite{Hirschfelder:1954}:
\begin{eqnarray*}
\rho Y_m \boldsymbol{V_m} = - \rho D_{m,mix} \nabla X_m
\end{eqnarray*}
\end{itemize}
Note that with these definitions, there is no guarantee that $\sum \boldsymbol{\mathcal{F}}_{m} = 0$, as
required for mass conservation. As discussed in Section~\ref{AlgoDetails}, an arbitrary ``correction flux,'' consistent with the mixture-averaged diffusion approximation, is added in \pelelm\ to enforce conservation.
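To make these mixture rules concrete, the following minimal sketch
(assuming NumPy; all property values are made up for illustration and
are not EGLib data) evaluates the conductivity formula and
Eq.~(\ref{eq:dmix}) for a three-species mixture:
\begin{verbatim}
import numpy as np

X = np.array([0.10, 0.20, 0.70])       # mole fractions (illustrative)
Y = np.array([0.05, 0.25, 0.70])       # mass fractions (illustrative)
lam = np.array([0.100, 0.030, 0.026])  # pure-species conductivities

# Symmetric binary diffusion coefficients (made-up values, m^2/s):
D = np.array([[0.0,  7e-5, 8e-5],
              [7e-5, 0.0,  2e-5],
              [8e-5, 2e-5, 0.0]])

# lambda = (A_{-1} + A_1)/2:
lam_mix = 0.5 * (1.0 / np.sum(X / lam) + np.sum(X * lam))

# D_{m,mix} = (1 - Y_m) / sum_{j != m} X_j / D_{m,j}:
D_mix = np.empty(3)
for m in range(3):
    j = np.arange(3) != m
    D_mix[m] = (1.0 - Y[m]) / np.sum(X[j] / D[m, j])
print(lam_mix, D_mix)
\end{verbatim}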
\subsection{Pure species transport properties}
The mixture-averaged transport coefficients require expressions for the pure species binary transport coefficients. These, in turn, depend upon the forces of interaction between colliding molecules, which are complex functions of the shape and properties of each binary pair of species involved, as well as of their environment, intermolecular distance, etc. In practice, these interactions are usually described by a Lennard-Jones 6-12 potential (for non polar molecules, Stockmayer potential otherwise) that relates the evolution of the potential energy of the pair of species to their intermolecular distance. Here, the single component viscosities and binary diffusion coefficients are given by~\cite{Hirschfelder:1954}:
\begin{eqnarray}
\eta_m = \frac{5}{16} \frac{\sqrt{\pi m_m k_B T}}{\pi \sigma^2_m \Omega^{(2,2)*}},
\hspace{4mm}
\mathcal{D}_{m,j} = \frac{3}{16}\frac{\sqrt{2 \pi k^3_B T^3/m_{m,j}}}{P_0 \pi \sigma^2_{m,j} \Omega^{(1,1)*}}
\label{binary}
\end{eqnarray}
where $k_B$ is the Boltzmann constant, $\sigma_m$ is the Lennard-Jones collision diameter and $m_m (= W_m/\mathcal{A})$ is the molecular mass of species $m$, with $\mathcal{A}$ being Avogadro's number. $m_{m,j}$ is the reduced molecular mass and $\sigma_{m,j}$ is the reduced collision diameter of the $(m,j)$ pair, given by:
\begin{eqnarray}
m_{m,j} = \frac{m_m m_j }{ (m_m + m_j)},
\hspace{4mm}
\sigma_{m,j} = \frac{1}{2} \zeta^{-\frac{1}{6}}(\sigma_m + \sigma_j)
\label{redCollision}
\end{eqnarray}
where $\zeta=1$ if the partners are either both polar or both nonpolar, but in the case of a polar molecule ($p$) interacting with a nonpolar ($n$) molecule:
\begin{eqnarray*}
\zeta=1 + \frac{1}{4} \alpha^*_n (\mu^*_p)^2 \sqrt{\frac{\epsilon_p}{\epsilon_n}}
\end{eqnarray*}
with $ \alpha^*_n = \alpha_n / \sigma^3_n$ the reduced polarizability of the nonpolar molecule and $\mu^*_p = \mu_p/\sqrt{\epsilon_p \sigma^3_p}$ the reduced dipole moment of the polar molecule, expressed in function of the Lennard-Jones potential $\epsilon_p$ of the $p$ molecule.
Both quantities appearing in~\ref{binary} rely upon the evaluation of \emph{collision integrals} $\Omega^{(\cdot,\cdot)*}$, which account for inter-molecular interactions, and are usually tabulated as functions of reduced variables~\cite{Monchick:1961}:
\begin{itemize}
\item $\Omega^{(2,2)*}$ is tabulated as a function of a reduced temperature ($T^*_m $) and a reduced dipole moment ($\delta^*_m$), given by:
\begin{eqnarray*}
T^*_m = \frac{k_BT}{\epsilon_m},
\hspace{4mm}
\delta^*_m = \frac{1}{2} \frac{\mu^2_m}{\epsilon_m \sigma^3_m}
\end{eqnarray*}
%where $\epsilon_m$ is the Lennard-Jones potential well depth and $\mu_m$ is the dipole moment of species $m$.
\item $\Omega^{(1,1)*}$ is tabulated as a function of a reduced temperature ($T^*_{m,j} $) and a reduced dipole moment ($\delta^*_{m,j}$), given by:
\begin{eqnarray*}
T^*_{m,j} = \frac{k_BT}{\epsilon_{m,j}},
\hspace{4mm}
\delta^*_{m,j} = \frac{1}{2} \frac{\mu^2_{m,j}}{\epsilon_{m,j} \sigma^3_{m,j}}
\end{eqnarray*}
where the reduced collision diameter of the pair ($\sigma_{m,j}$) is given by \ref{redCollision}; and the Lennard-Jones potential $\epsilon_{m,j}$ and dipole moment $\mu_{m,j}$ of the $(m,j)$ pair are given by:
\begin{eqnarray*}
\frac{\epsilon_{m,j}}{k_B} = \zeta^2 \sqrt{\frac{\epsilon_m}{k_B} \frac{\epsilon_j}{k_B}},
\hspace{4mm}
\mu^2_{m,j} = \xi \mu_m \mu_j
\end{eqnarray*}
with $\xi = 1$ if $\zeta = 1$ and $\xi = 0$ otherwise.
\end{itemize}
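To make these formulas concrete, the following minimal Java sketch (illustrative only, and not part of \pelelm\ or EGLib) evaluates the reduced pair quantities of Eq.~(\ref{redCollision}) and the binary diffusion coefficient of Eq.~(\ref{binary}) for a nonpolar pair, with $\zeta = 1$ and the tabulated collision integral supplied as a plain argument:
\begin{verbatim}
// Illustrative evaluation of Eqs. (binary) and (redCollision) for a
// nonpolar pair (zeta = 1). Omega11 would normally be interpolated
// from the Monchick-Mason tables; here it is passed in directly.
// CGS units throughout, as is customary in the transport literature.
public class BinaryDiffusion {
    static final double KB = 1.380649e-16;   // Boltzmann constant [erg/K]
    static final double NA = 6.02214076e23;  // Avogadro's number [1/mol]

    static double diffusion(double Wm, double Wj,      // molar masses [g/mol]
                            double sigM, double sigJ,  // LJ diameters [cm]
                            double T, double P0,       // [K], [dyn/cm^2]
                            double omega11) {          // tabulated integral
        double mm = Wm / NA, mj = Wj / NA;             // molecular masses
        double mmj   = mm * mj / (mm + mj);            // reduced mass
        double sigMJ = 0.5 * (sigM + sigJ);            // reduced diameter
        return 3.0 / 16.0
               * Math.sqrt(2.0 * Math.PI * KB * KB * KB * T * T * T / mmj)
               / (P0 * Math.PI * sigMJ * sigMJ * omega11);
    }

    public static void main(String[] args) {
        // Rough N2-O2 parameters at 300 K and 1 atm, with Omega11 ~ 1.
        double d = diffusion(28.0, 32.0, 3.80e-8, 3.47e-8,
                             300.0, 1.01325e6, 1.0);
        System.out.printf("D = %.3e cm^2/s%n", d);
    }
}
\end{verbatim}
With these rough inputs the sketch returns roughly $0.2~\mathrm{cm^2/s}$, the expected order of magnitude for this pair at ambient conditions.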
The expressions for the pure species thermal conductivities are more complex. They are assumed to be composed of translational, rotational and vibrational contributions~\cite{Warnatz:}:
\begin{eqnarray*}
\lambda_m = \frac{\eta_m}{W_m} (f_{tr}C_{v,tr} + f_{rot}C_{v,rot} + f_{vib}C_{v,vib})
\end{eqnarray*}
where
\begin{eqnarray*}
&&f_{tr} = \frac{5}{2}\Big(1-\frac{2}{\pi} \frac{C_{v,rot}}{C_{v,tr}} \frac{A}{B} \Big)
\\
&&f_{rot} = \frac{\rho \mathcal{D}_{m,m}}{\eta_m} \Big( 1 + \frac{2}{\pi} \frac{A}{B} \Big)
\\
&&f_{vib} = \frac{\rho \mathcal{D}_{m,m}}{\eta_m}
\end{eqnarray*}
and
\begin{eqnarray*}
A = \frac{5}{2} - \frac{\rho \mathcal{D}_{m,m}}{\eta_m},
\hspace{4mm}
B = Z_{rot} + \frac{2}{\pi} \Big( \frac{5}{3} \frac{C_{v,rot}}{\mathcal{R}} + \frac{\rho \mathcal{D}_{m,m}}{\eta_m} \Big)
\end{eqnarray*}
The molar heat capacities $C_{v,\cdot}$ depend on the molecule shape. In the case of a linear molecule:
\begin{eqnarray*}
\frac{C_{v,tr}}{\mathcal{R}} = \frac{3}{2},
\hspace{1.5em}
\frac{C_{v,rot}}{\mathcal{R}} = 1,
\hspace{1.5em}
{C_{v,vib}} = C_v - \frac{5}{2} \mathcal{R}
\end{eqnarray*}
In the case of a nonlinear molecule, the expressions are
\begin{eqnarray*}
\frac{C_{v,tr}}{\mathcal{R}} = \frac{3}{2},
\hspace{1.5em}
\frac{C_{v,rot}}{\mathcal{R}} = \frac{3}{2},
\hspace{1.5em}
{C_{v,vib}} = C_v - 3 \mathcal{R}
\end{eqnarray*}
For single-atom molecules the thermal conductivity reduces to:
\begin{eqnarray*}
\lambda_m = \frac{\eta_m}{W_m} (f_{tr}C_{v,tr} ) = \frac{15 \, \eta_m \mathcal{R}}{4 \, W_m}
\end{eqnarray*}
Finally, $Z_{rot}$ is the rotational relaxation number, a parameter given by~\cite{Parker:}:
\begin{eqnarray*}
Z_{rot}(T) = Z_{rot} (298) \frac{F(298)}{F(T)}
\end{eqnarray*}
with
\begin{eqnarray*}
F(T) = 1 + \frac{\pi^{3/2}}{2} \sqrt{\frac{\epsilon/k_B}{T}} + \Big( \frac{\pi^2}{4} +2 \Big) \Big( \frac{\epsilon/k_B}{T} \Big) + \pi^{3/2}\Big( \frac{\epsilon/k_B}{T} \Big)^{3/2}
\end{eqnarray*}
The pure species and mixture transport properties are evaluated with EGLib functions, which are linked directly into \pelelm. EGLib requires as input polynomial fits of the logarithm of each quantity versus the logarithm of the temperature:
\begin{eqnarray*}
\ln(q_m) = \sum_{n=1}^4 a_{q,m,n} \, \ln(T)^{n-1}
\end{eqnarray*}
where $q_m$ represents $\eta_m$, $\lambda_m$ or $\mathcal{D}_{m,j}$. These fits are generated as part of a preprocessing step managed by the tool \fuego, based on the formulas (and input data) discussed above. The role of \fuego\ in preprocessing the model parameters for transport, as well as for chemical kinetics and thermodynamics, is discussed in some detail in Section~\ref{FuegoDescr}.
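As a small illustration of how such fits are used at run time, the sketch below (in Java, with made-up coefficients; real values are produced by the preprocessing step) evaluates one fitted quantity:
\begin{verbatim}
// Evaluate ln(q) as a cubic polynomial in ln(T), then exponentiate.
// The coefficients are invented for illustration only.
public class TransportFit {
    static double evalFit(double[] a, double T) {
        double lnT = Math.log(T), lnq = 0.0;
        for (int n = 0; n < 4; n++)
            lnq += a[n] * Math.pow(lnT, n);  // a1 + a2 lnT + a3 lnT^2 + ...
        return Math.exp(lnq);                // recover q itself
    }

    public static void main(String[] args) {
        double[] aEta = {-20.0, 1.2, 0.05, -0.004};  // hypothetical fit
        System.out.printf("eta(1000 K) = %.3e%n", evalFit(aEta, 1000.0));
    }
}
\end{verbatim}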
%%%
\section{Chemical kinetics and the reaction source term}
\label{ChemKinetics}
Chemistry in combustion systems involves the $N_s$ species interacting through a set of $M_r$ elementary reaction steps, expressed as
\begin{eqnarray*}
\sum_{m=1}^{N_s} \nu_{m,j}'[X_m] \rightleftharpoons \sum_{m=1}^{N_s} \nu_{m,j}''[X_m],\quad \mbox{for} \quad j \in [1,M_r]
\label{IntroKM1}
\end{eqnarray*}
where $[X_m]$ is the molar concentration of species $m$, and $\nu_{m,j}'$, $\nu_{m,j}''$ are the stoichiometric coefficients on the reactant and product sides of reaction $j$, associated with species $m$. For such a system, the rate of reaction $j$ ($R_j$) can be expressed in terms of the forward ($k_{f,j}$) and backward ($k_{r,j}$) rate coefficients,
\begin{eqnarray*}
R_{j} = k_{f,j}\prod_{m=1}^{N_s} [X_{m}]^{\nu_{m,j}'}-k_{r,j}\prod_{m=1}^{N_s} [X_{m}]^{\nu_{m,j}''}
\end{eqnarray*}
The net molar production rate of species $m$, $\dot{\omega}_m$, appearing in Eq.~\ref{eq:pelelm}, is obtained by summing the rates of creation and destruction over all reactions:
\begin{eqnarray*}
\dot{\omega}_m = \sum_{j=1}^{M_r} \nu_{m,j} R_j
\label{IntroKM3}
\end{eqnarray*}
where $\nu_{m,j} =\nu_{m,j}'' - \nu_{m,j}'$. Expressions for the reaction rate coefficients $k_{(f,r),j}$ depend on the type of reaction considered. \pelelm\ relies on the CHEMKIN Arrhenius reaction format:
\begin{eqnarray*}
k_f = A T^{\beta} \exp \left( \frac{-E_a}{RT}\right)
\end{eqnarray*}
where $A$ is the pre-exponential (frequency) factor, $\beta$ is the temperature exponent and $E_a$ is the activation energy. The CHEMKIN format additionally allows for a number of specializations of this expression to represent pressure dependencies and third-body enhancements; see the CHEMKIN Manual or the Cantera website for additional information~\cite{Kee:1989,cantera}.
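The forward rate expression translates directly into code; the following hedged Java sketch uses illustrative rate parameters (of the sort quoted for H + O$_2$ $\rightarrow$ OH + O in common hydrogen mechanisms), not values taken from any particular \pelelm\ input:
\begin{verbatim}
// Modified Arrhenius rate; A, Ea and R must use consistent units
// (here Ea in cal/mol with R = 1.9872 cal/(mol K)).
public class Arrhenius {
    static double kForward(double A, double beta, double Ea, double T) {
        final double R = 1.9872;  // cal/(mol K)
        return A * Math.pow(T, beta) * Math.exp(-Ea / (R * T));
    }

    public static void main(String[] args) {
        // Illustrative: A = 3.52e16, beta = -0.7, Ea = 17070 cal/mol
        System.out.printf("kf(1500 K) = %.3e%n",
                          kForward(3.52e16, -0.7, 17070.0, 1500.0));
    }
}
\end{verbatim}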
Most fundamental Arrhenius reactions are bidirectional, and typically only the forward rates are specified. In this case, the balance of forward and reverse rates is dictated by equilibrium thermodynamics, via the equilibrium ``constant'', $K_{c,j}$. In a low Mach system, $K_{c,j}$ is a function only of temperature and the thermodynamic properties of the reactants and products of reaction $j$,
\begin{eqnarray*}
&&k_{r,j} = \frac{k_{f,j}}{K_{c,j}(T)} \;\;\; \mbox{where} \;\;\; K_{c,j}=K_{p,j} \left( \frac{P_{0}}{RT} \right)^{\sum_{k=1}^{N_s} \nu_{k,j}}
\\
&&\mbox{and} \;\;\; K_{p,j}=\exp \left( \frac{\Delta {S_j}^{0}}{R} - \frac{\Delta {H_j}^{0}}{RT} \right)
\end{eqnarray*}
$\Delta {H_j}^{0}$ and $\Delta {S_j}^{0}$ are the changes in enthalpy and entropy across reaction $j$, and $P_0$ is the ambient thermodynamic pressure.
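A corresponding sketch for the reverse rate follows the three relations above; $\Delta {S_j}^{0}$ and $\Delta {H_j}^{0}$ would in practice be assembled from the NASA fits of Section~\ref{ThermoProp}, so the values below are placeholders:
\begin{verbatim}
// Reverse rate from the equilibrium constant; dnu = sum(nu'') - sum(nu').
// dS0 [cal/(mol K)] and dH0 [cal/mol] are illustrative inputs.
public class ReverseRate {
    static double kReverse(double kf, double dS0, double dH0,
                           int dnu, double T) {
        final double R  = 1.9872;      // cal/(mol K), for Kp
        final double Ru = 8.31446e7;   // erg/(mol K), for P0/(Ru T) in CGS
        final double P0 = 1.01325e6;   // dyn/cm^2
        double Kp = Math.exp(dS0 / R - dH0 / (R * T));
        double Kc = Kp * Math.pow(P0 / (Ru * T), dnu);
        return kf / Kc;
    }

    public static void main(String[] args) {
        System.out.printf("kr = %.3e%n",
                          kReverse(6.8e11, 4.0, -16000.0, 0, 1500.0));
    }
}
\end{verbatim}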
Species production rates are evaluated via functions that are generated as part of a preprocessing step managed by the tool \fuego\ (see Section~\ref{FuegoDescr}).
%%%
\section{Thermodynamic properties}
\label{ThermoProp}
Currently, expressions for the thermodynamic properties in \pelelm\ follow those of CHEMKIN~\cite{Kee:1989}, which assume a mixture of ideal gases. Species enthalpies and entropies are thus functions of only temperature (for perfect gases, they are independent of pressure) and are given in terms of polynomial fits to the species molar heat capacities ($C_{p,\cdot}$),
\begin{eqnarray*}
\frac{C_{p,m}(T)}{\mathcal{R}} = \sum_{k=1}^{N} a_{k,m}T^{k-1}
\end{eqnarray*}
where, in the standard CHEMKIN framework (the 7-coefficient NASA format), $N = 5$,
\begin{eqnarray}
\frac{C_{p,m}(T)}{\mathcal{R}} = a_{1,m} + a_{2,m} T + a_{3,m} T^2 + a_{4,m} T^3 + a_{5,m} T^4
\end{eqnarray}
Accordingly, the standard-state molar enthalpy of species $m$ is given by:
\begin{eqnarray}
\frac{H_{m}(T)}{\mathcal{R}T} = a_{1,m} +\frac{a_{2,m}}{2} T + \frac{a_{3,m}}{3} T^2 + \frac{a_{4,m}}{4} T^3 + \frac{ a_{5,m}}{5} T^4 + a_{6,m}/T
\end{eqnarray}
Note that the standard specifies that the heat of formation for the molecule is included in this expression.
Similarly, the standard-state molar entropy is written as:
\begin{eqnarray}
\frac{S_{m}(T)}{\mathcal{R}} = a_{1,m}\ln(T) + {a_{2,m}} T + \frac{a_{3,m}}{2} T^2 + \frac{a_{4,m}}{3} T^3 + \frac{ a_{5,m}}{4} T^4 + a_{7,m}
\end{eqnarray}
For each species, $m$, in the model the user must specify the coefficients $a_{k,m}$. All other required thermodynamic properties are then determined (see, e.g., the CHEMKIN manual for additional details~\cite{Kee:1989}). Thermodynamic properties of the species, and those of the mixture, are evaluated via functions that are generated as part of a preprocessing step managed by the tool \fuego\ (see next Section~\ref{FuegoDescr}).
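These three polynomial forms translate directly into code; the following Java sketch (with placeholder coefficients, not real species data) evaluates all three at once:
\begin{verbatim}
// Evaluate Cp/R, H/(RT) and S/R from the 7-coefficient NASA fit;
// a[0..6] correspond to a_1..a_7 above. Coefficients are placeholders.
public class Nasa7 {
    static double[] eval(double[] a, double T) {
        double cpR = a[0] + T*(a[1] + T*(a[2] + T*(a[3] + T*a[4])));
        double hRT = a[0] + T*(a[1]/2 + T*(a[2]/3 + T*(a[3]/4 + T*a[4]/5)))
                     + a[5]/T;
        double sR  = a[0]*Math.log(T)
                     + T*(a[1] + T*(a[2]/2 + T*(a[3]/3 + T*a[4]/4))) + a[6];
        return new double[]{cpR, hRT, sR};
    }

    public static void main(String[] args) {
        double[] a = {3.5, 1e-4, -1e-8, 0.0, 0.0, -1000.0, 4.0};
        double[] q = eval(a, 1000.0);
        System.out.printf("Cp/R=%.4f  H/RT=%.4f  S/R=%.4f%n",
                          q[0], q[1], q[2]);
    }
}
\end{verbatim}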
%%%
\section{\fuego\ chemistry preprocessing}
\label{FuegoDescr}
A typical model for \pelelm\ contains all the information associated with the CHEMKIN parameterization of the Arrhenius reaction set, as well as fitting coefficients for the thermodynamic relationships, and the specification of the species, including data required to compute pure-species transport properties. In the combustion community, this information is communicated for each complete model, or ``mechanism'', through multiple text files that conform to the CHEMKIN standards. The CHEMKIN driver code (or equivalent) can then be used to ingest the large number of parameters contained in these files and provide a set of functions for evaluating all the properties and rates required. Earlier versions of \pelelm\ linked to the CHEMKIN codes directly (and thereby assumed that all problems consisted of a mixture of ideal gases). However, evaluations were not very efficient because the functions stepped through generic expressions that included a large number of conditional statements and unused generality. Direct evaluation of these complex expressions allows for a much more efficient code that optimizes well with modern compilers. This is important because an appreciable fraction of \pelelm\ runtime is spent in these functions. Performance issues notwithstanding, customized evaluators will be necessary to extend \pelelm\ to a larger class of (``real'') gas models outside the CHEMKIN standard, such as SRK, that are already part of the \pelec\ code capabilities (\pelec\ shares use of \pelephysics\ for combustion model specification).
For these reasons, \pelelm\ no longer uses CHEMKIN functions directly, but instead relies on a preprocessing tool, \fuego, to generate highly efficient C code implementations of the necessary thermodynamic, transport and kinetics evaluations. The source code generated by \fuego\ is linked into the \pelelm\ executable, customizing each executable for a specific model at compile time. The implementation source code files can also be linked conveniently to post-processing analysis tools, discussed in some detail in Section~(\textcolor{red}{TBD}). The \fuego\ processing tool, and the functions necessary to interface the generated functions to \pelelm, are distributed in the auxiliary code package, \pelephysics. Included in the \pelephysics\ distribution is a broad set of models for the combustion of hydrogen, carbon monoxide, methane, heptane, $n$-dodecane, dimethyl ether, and others, as well as instructions for users to extend this set using \fuego, based on their own CHEMKIN-compliant inputs. \pelephysics\ also provides support for simpler \textit{gamma-law}\ equations of state, and simple/constant transport properties.
%%%
\section{The \pelelm\ temporal integration}
The temporal discretization in \pelelm\ combines a modified spectral deferred correction (SDC) coupling of chemistry and transport \cite{LMC_SDC} with a density-weighted approximate projection method for low Mach number flow \cite{DayBell:2000}. The projection method enforces a constrained evolution of the velocity field, and is implemented iteratively in such a way as to ensure that the update simultaneously satisfies the equation of state and discrete conservation of mass and total enthalpy. A time-explicit approach is used for advection; faster diffusion and chemistry processes are treated time-implicitly, and iteratively coupled together within the deferred corrections strategy. The integration algorithm, discussed in the following sections, is second-order accurate in space and time, and is implemented in the context of a subcycled approach for a nested hierarchy of mesh levels, where each level consists of logically rectangular patches of rectangular cells. All cells at a level have the same size, and are isotropic in all coordinates.
Due to the complexity of the \pelelm\ algorithm, it is best presented in a number of passes. Focusing first on the single-level advance, we begin with a general discussion of the SDC-based time step iteration, which is designed to couple together the various physics processes. We then describe the projection steps used to enforce the constraint in the context of this iterative update. Next, we dive a little deeper into precisely how the advance of the thermodynamic components of the state is sequenced. There are a few crucial nuances to the formulation/sequencing of the energy advection, energy diffusion, conservative corrections to the species diffusion fluxes, and of the projection, which can then be discussed in the context of the overall single-level time step. Finally, with all these aspects defined, we give an overview of the modifications necessary to support the AMR subcycling strategy.
\subsection{SDC preliminaries}
\label{AlgoDetails}
SDC methods for ODEs are introduced in Dutt et al.~\cite{Dutt:2000}.
The basic idea of SDC is to write the solution of an ODE
\begin{eqnarray}
\phi_t &=& F(t,\phi(t)), \qquad t\in[t^n,t^{n+1}];\\
\phi(t^n) &=& \phi^n,
\end{eqnarray}
as an integral,
\begin{equation}
\phi(t) = \phi^n + \int_{t^n}^{t} F(\phi)~d\tau,
\end{equation}
where we suppress explicit dependence of $F$ and $\phi$ on $t$ for notational simplicity.
Given an approximation $\phi^{(k)}(t)$ to $\phi(t)$, one can then define a residual,
\begin{equation}
E(t,\phi^{(k)}) = \phi^n + \int_{t^n}^t F(\phi^{(k)})~d\tau - \phi^{(k)}(t).\label{eq:residual}
\end{equation}
Defining the error as $\delta^{(k)}(t) = \phi(t) - \phi^{(k)}(t)$, one can then show that
\begin{equation}
\delta^{(k)}(t) = \int_{t^n}^t \left[F(\phi^{(k)}+ \delta^{(k)}) - F(\phi^{(k)})\right]d\tau + E(t,\phi^{(k)}).\label{eq:correction}
\end{equation}
In SDC algorithms, the integral in (\ref{eq:residual})
is evaluated with a higher-order quadrature rule.
By using a low-order discretization of the integral in (\ref{eq:correction}) one can construct
an iterative scheme that improves the overall order of accuracy of the approximation by one per
iteration, up to the order of accuracy of the underlying quadrature rule
used to evaluate the integral in (\ref{eq:residual}).
Specifically, if we let $\phi^{(k)}$ represent the current approximation and define
$\phi^{(k+1)} = \phi^{(k)} + \delta^{(k)}$ to be the iterative update,
then combining (\ref{eq:residual}) and (\ref{eq:correction}) results in an update equation,
\begin{equation}
\phi^{(k+1)}(t) = \phi^n + \int_{t^n}^t \left[F(\phi^{(k+1)}) - F(\phi^{(k)})\right]d\tau +
\int_{t^n}^t F(\phi^{(k)})~d\tau,\label{eq:update}
\end{equation}
where a low-order discretization (e.g., forward or backward Euler) is used for the first integral
and a higher-order quadrature is used to evaluate the second integral. For our reacting flow model,
the underlying projection methodology for the time-advancement of velocity is second-order,
so we require the use of second-order (or higher) numerical quadrature for the second integral.
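Before turning to the multi-process variant, the following self-contained Java sketch shows the bare SDC iteration (\ref{eq:update}) for the linear scalar ODE $\phi_t = \lambda\phi$, using backward Euler for the correction integral and the trapezoidal rule for the quadrature; all names are ours and the example is purely illustrative:
\begin{verbatim}
// SDC iteration for phi' = lam*phi on one step [t^n, t^n + dt].
// Each pass applies Eq. (update): backward Euler on the correction
// integral, trapezoidal quadrature on the integral of F(phi^(k)).
public class SdcSketch {
    public static void main(String[] args) {
        final double lam = -2.0, dt = 0.1, phiN = 1.0;

        double phiOld = phiN / (1.0 - dt * lam);  // backward-Euler predictor

        for (int k = 0; k < 5; k++) {
            double quad = 0.5 * dt * lam * (phiN + phiOld);  // trapezoid
            // Solve phiNew = phiN + dt*lam*(phiNew - phiOld) + quad:
            double phiNew = (phiN - dt * lam * phiOld + quad)
                            / (1.0 - dt * lam);
            System.out.printf("k=%d  phi=%.8f%n", k, phiNew);
            phiOld = phiNew;
        }
        // Fixed point: the second-order trapezoidal (Crank-Nicolson) value.
        System.out.printf("CN reference: %.8f%n",
                          phiN * (1 + 0.5 * dt * lam)
                               / (1 - 0.5 * dt * lam));
    }
}
\end{verbatim}
Each iteration gains roughly one order of accuracy, up to the second-order limit of the trapezoidal quadrature, mirroring the convergence statement above.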
\subsection{MISDC Correction Equations}
Bourlioux et al.~\cite{BLM:2003} and Layton and Minion \cite{Layton:2004}
introduce a variant of SDC, referred to as MISDC, in which $F$ is decomposed into distinct
processes, each treated separately with methods appropriate to its own time scale. Here, we write
\begin{equation}
\phi_t = F \equiv A(\phi) + D(\phi) + R(\phi),\label{eq:multi}
\end{equation}
to refer to advection, diffusion, and reaction processes.
For this construction we assume that we are given an approximate solution $\phi^{(k)}$ that
we want to improve.
Using the ideas in \cite{BLM:2003,Layton:2004}, we develop
a series of correction equations to update $\phi^{(k)}$ that uses relatively
simple second-order discretizations of $A(\phi)$ and $D(\phi)$ but a high-accuracy
treatment of $R(\phi)$. In our approach, $A(\phi^{(k)})$ is piecewise-constant over
each time step, and is evaluated using a second-order Godunov procedure
(see \cite{almgren-iamr} for full details on the Godunov procedure).
The Godunov procedure computes a time-centered
advection term at $t^{n+\myhalf}$, and incorporates an explicit diffusion source term and an
iteratively lagged reaction source term, i.e.,
\begin{equation}
A(\phi^{(k)}) \equiv A^{n+\myhalf,(k)} = A\left(\phi^n,D(\phi^n),I_R^{(k-1)}\right),
\end{equation}
where $I_R^{(k-1)}$ is the effective contribution due to reactions from the previous iteration, i.e.,
\begin{equation}
I_R^{(k-1)} = \frac{1}{\Delta t^n}\int_{t^n}^{t^{n+1}} R(\phi)~d\tau,\label{eq:IR}
\end{equation}
where $\Delta t^n = t^{n+1} - t^n$. Here $I_R^{(k-1)}$ is computed from a high-accuracy
integration of the reaction kinetics equations,
augmented with piecewise constant-in-time representation of advection and diffusion.
Details of this procedure are given below.
In the spirit of MISDC, we solve correction equations for the individual processes in
(\ref{eq:multi}) sequentially. We begin by discretizing (\ref{eq:update}), but only
including the advection and diffusion terms in the correction integral,
\begin{equation}
\phi_{\rm AD}^{(k+1)}(t) = \phi^n + \int_{t^n}^t \left[A^{(k+1)} - A^{(k)} + D^{(k+1)} - D^{(k)}\right]d\tau + \int_{t^n}^t F^{(k)}~d\tau.\label{eq:AD Correction}
\end{equation}
Thus, $\phi_{\rm AD}^{(k+1)}(t)$ represents an updated approximation of the solution after correcting the
advection and diffusion terms only. For the first integral, we use an explicit update for the advection term and a
backward Euler discretization for the diffusion term.
For the second integral, we represent $F$ in terms of $A$, $D$, and $R$ and
use the definition
of $A^{(k)}$, $D^{(k)}$, and $I_R^{(k-1)}$ to obtain
a discretization of (\ref{eq:AD Correction}) for
$\phi_{\rm AD}^{n+1,(k+1)}$:
\begin{eqnarray}
\phi_{\rm AD}^{n+1,(k+1)} &=& \phi^n + \Delta t \left[A^{(k+1)} - A^{(k)} + D_{\rm AD}^{(k+1)} - D^{n+1,(k)}\right] \nonumber \\
&&\hspace{0.5cm}+ \Delta t\left[A^{(k)} + \half\left(D^n + D^{(k)}\right) + I_R^{(k)}\right],
\end{eqnarray}
where $I_R^{(k)}$ is defined using (\ref{eq:IR}).
This equation simplifies to the following backward Euler type linear system, with the
right-hand-side consisting of known quantities:
\begin{equation}
\phi_{\rm AD}^{n+1,(k+1)} - \Delta t D_{\rm AD}^{(k+1)} = \phi^n + \Delta t \left[A^{(k+1)} + \half\left(D^n - D^{(k)}\right) + I_R^{(k)}\right].
\label{eq:AD}
\end{equation}
After computing $\phi_{\rm AD}^{n+1,(k+1)}$, we complete the update by solving a correction equation for
the reaction term. Standard MISDC approaches would formulate the reaction correction equation as
\begin{eqnarray}
{\phi}^{(k+1)}(t) = \phi^n &+& \int_{t^n}^t \left[ A^{(k+1)} - A^{(k)} + D_{\rm AD}^{(k+1)} - D^{(k)} \right]~d\tau \nonumber \\
&+& \int_{t^n}^t \left[R^{(k+1)} - R^{(k)}\right]d\tau + \int_{t^n}^t F^{(k)}~d\tau, \label{eq:stdreact}
\end{eqnarray}
and use a backward Euler type discretization for the integral of the reaction terms.
Here, to address stiffness issues with detailed chemical kinetics, we will instead
formulate the correction equation for the
reaction as an ODE, which is treated separately with an ODE integrator package.
In particular, by differentiating (\ref{eq:stdreact}) we obtain
\begin{eqnarray}
{\phi}^{(k+1)}_t &=& \left[ A^{(k+1)} - A^{(k)} + D_{\rm AD}^{(k+1)} - D^{(k)} \right]\nonumber\\
&&\hspace{-0.5cm}+ \left[R^{(k+1)} - R^{(k)}\right] + \left[A^{(k)} + \half\left(D^n + D^{(k)}\right) + R^{(k)}\right]\nonumber\\
&=& R^{(k+1)} + \underbrace{A^{(k+1)} + D_{\rm AD}^{(k+1)} + \half\left[D^n - D^{(k)}\right]}_{F_{\rm AD}^{(k+1)}}, \label{eq:MISDCint}
\end{eqnarray}
which we then advance with the ODE integrator over $\Delta t$ to obtain $\phi^{n+1,(k+1)}$.
After the integration, we can evaluate $I_R^{(k+1)}$, which is required for the next iteration
\begin{equation}
I_R^{(k+1)} = \frac{\phi^{n+1,(k+1)} - \phi^n}{\Delta t} - F_{\rm AD}^{(k+1)}.
\end{equation}
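In a zero-dimensional setting, this reaction correction, Eq.~(\ref{eq:MISDCint}), and the extraction of $I_R^{(k+1)}$ look as follows; the toy reaction rate, the constant forcing and the crude sub-stepped integrator (standing in for the stiff ODE package) are all illustrative choices:
\begin{verbatim}
// 0-D sketch of Eq. (MISDCint) and the I_R update: integrate
// phi' = R(phi) + F_AD over dt, then back out the reaction part.
public class ReactionCorrection {
    static double R(double phi) { return -100.0 * phi; }  // toy rate

    public static void main(String[] args) {
        final double dt = 1.0e-3, phiN = 1.0;
        final double fAD = 5.0;        // piecewise-constant AD forcing
        final int nsub = 1000;         // explicit substeps

        double phi = phiN;
        for (int i = 0; i < nsub; i++)
            phi += (dt / nsub) * (R(phi) + fAD);

        double iR = (phi - phiN) / dt - fAD;  // source for next iteration
        System.out.printf("phi^{n+1} = %.6f,  I_R = %.6f%n", phi, iR);
    }
}
\end{verbatim}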
Summarizing, the variant of SDC used in the single-level time-step of \pelelm\ integrates the $A$, $D$ and $R$ components of the discretization scheme in an iterative fashion, and each process incorporates a source term that is constructed using a lagged approximation of the other processes. In the case of the implicit diffusion, an additional source term arises from the SDC formulation. If the SDC iterations were allowed to fully converge, all the processes advanced implicitly would be coupled to all the others. Moreover, each process is discretized using methods that are tailored specifically to the needs of that operator. In the next section, we give more details for each of the components, including how and where the \textit{velocity projections}\ play a role.
\subsection{Data centering, $A$-$D$-$R$, and the projections}
\pelelm\ implements a finite-volume, Cartesian grid discretization approach with constant grid spacing, where
$U$, $\rho$, $\rho Y_m$, $\rho h$, and $T$ represent cell averages, and the pressure field, $\pi$, is defined on the nodes
of the grid, and is constant in time over each time step interval. There are three major steps in the algorithm:\\
{\bf Step 1}: ({\it Compute advection velocities}) Use a second-order Godunov procedure to predict a time-centered
velocity, $\uadvstar$, on cell faces using the cell-centered data (plus sources due to any auxiliary forcing) at $t^n$,
and the lagged pressure gradient from the previous time interval, which we denote as $\nabla \pi^{n-\myhalf}$.
(An iterative procedure is used to define an initial pressure profile
for the algorithm; see \cite{AlmBelColHowWel98,DayBell:2000} for details.)
The provisional field, $\uadvstar$, fails to
satisfy the divergence constraint. We apply a discrete projection by solving the elliptic equation
with a time-centered source term:
\begin{equation}
D^{{\rm FC}\rightarrow{\rm CC}}\frac{1}{\rho^n}G^{{\rm CC}\rightarrow{\rm FC}}\phi = D^{{\rm FC}\rightarrow{\rm CC}}\uadvstar - \left(\widehat S^n + \frac{\Delta t^n}{2}\frac{\widehat S^n - \widehat S^{n-1}}{\Delta t^{n-1}}\right),
\end{equation}
for $\phi$ at cell-centers, where $D^{{\rm FC}\rightarrow{\rm CC}}$ represents a cell-centered divergence of face-centered data,
and $G^{{\rm CC}\rightarrow{\rm FC}}$ represents a face-centered gradient of cell-centered data, and $\rho^n$ is computed on
cell faces using arithmetic averaging from neighboring cell centers. Also, $\widehat S$ refers to the RHS of the constraint
equation (e.g.,~\ref{eq:igldivu}), with modifications that will be discussed in Section~(\textcolor{red}{TBD}).
The solution, $\phi$, is then used to define
\begin{equation}
\uadv = \uadvstar - \frac{1}{\rho^n}G^{{\rm CC}\rightarrow{\rm FC}}\phi.
\end{equation}
After the \textit{MAC}-projection, $\uadv$ is a second-order accurate, staggered grid vector
field at $t^{n+\myhalf}$ that discretely satisfies the constraint. This field is the advection velocity used for computing
the time-explicit advective fluxes for $U$, $\rho h$, and $\rho Y_m$.\\
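A one-dimensional, periodic, constant-density caricature of this \textit{MAC}\ projection is sketched below in Java; Gauss-Seidel sweeps stand in for the real multigrid solver, $\widehat S$ is taken to be zero, and none of this code comes from \pelelm:
\begin{verbatim}
// 1-D MAC projection sketch: correct face velocities u* so that the
// discrete cell-centered divergence vanishes (S = 0 here).
public class MacProjection {
    public static void main(String[] args) {
        final int n = 32;
        final double h = 1.0 / n, rho = 1.0;
        double[] u = new double[n], phi = new double[n],
                 rhs = new double[n];

        for (int i = 0; i < n; i++)   // a non-solenoidal face field u*
            u[i] = Math.sin(2 * Math.PI * i * h)
                 + 0.3 * Math.sin(4 * Math.PI * i * h);

        for (int i = 0; i < n; i++)   // rhs = D(u*) - S at cell centers
            rhs[i] = (u[(i + 1) % n] - u[i]) / h;

        // Solve D((1/rho) G phi) = rhs with Gauss-Seidel sweeps.
        for (int sweep = 0; sweep < 5000; sweep++)
            for (int i = 0; i < n; i++)
                phi[i] = 0.5 * (phi[(i + 1) % n] + phi[(i + n - 1) % n]
                                - rho * h * h * rhs[i]);

        for (int i = 0; i < n; i++)   // u = u* - (1/rho) G phi at faces
            u[i] -= (phi[i] - phi[(i + n - 1) % n]) / (rho * h);

        double maxDiv = 0.0;
        for (int i = 0; i < n; i++)
            maxDiv = Math.max(maxDiv,
                              Math.abs((u[(i + 1) % n] - u[i]) / h));
        System.out.printf("max |D(u)| = %.3e%n", maxDiv);
    }
}
\end{verbatim}
After the correction, the discrete divergence of the face field matches the prescribed right-hand side to solver tolerance, which is precisely the property required of $\uadv$.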
{\bf Step 2}: ({\it Advance thermodynamic variables}) Integrate $(\rho Y_m,\rho h)$ over the full time step. The details of this are presented in the next subsection.\\
{\bf Step 3}: ({\it Advance the velocity}) Compute an intermediate cell-centered velocity field,
$U^{n+1,*}$ using the lagged pressure gradient, by solving
\begin{equation}
\rho^{n+\myhalf}\frac{U^{n+1,*}-U^n}{\Delta t} + \left(\uadv\cdot\nabla U\right)^{n+\myhalf} = \half\left(\nabla\cdot\tau^n + \nabla\cdot\tau^{n+1,*}\right) - \nabla\pi^{n-\myhalf} + \frac{1}{2}(F^n + F^{n+1}),
\label{eq:vel}
\end{equation}
where $\tau^{n+1,*} = \mu^{n+1}[\nabla U^{n+1,*} +(\nabla U^{n+1,*})^T - 2\mathcal{I}\widehat S^{n+1}/3]$ and
$\rho^{n+\myhalf} = (\rho^n + \rho^{n+1})/2$, and $F$ is the velocity forcing. This is a semi-implicit discretization for $U$, requiring
a linear solve that couples together all velocity components. The time-centered velocity in the advective derivative,
$U^{n+\myhalf}$, is computed in the same way
as $\uadvstar$, but also includes the viscous stress tensor evaluated at $t^n$ as a source term
in the Godunov integrator. At
this point, the intermediate velocity field $U^{n+1,*}$ does not satisfy the constraint. Hence, we apply an
approximate projection to update the pressure and to project $U^{n+1,*}$ onto the constraint surface.
In particular, we compute $\widehat S^{n+1}$ from the new-time
thermodynamic variables and an estimate of $\dot\omega_m^{n+1}$, which is evaluated
directly from the new-time thermodynamic variables. We project the new-time velocity by solving the elliptic equation,
\begin{equation}
L^{{\rm N}\rightarrow{\rm N}}\phi = D^{{\rm CC}\rightarrow{\rm N}}\left(U^{n+1,*} + \frac{\Delta t}{\rho^{n+\myhalf}}G^{{\rm N}\rightarrow{\rm CC}}\pi^{n-\myhalf}\right) - \widehat S^{n+1}
\end{equation}
for nodal values of $\phi$. Here, $L^{{\rm N}\rightarrow{\rm N}}$ represents a nodal Laplacian of nodal data, computed
using the standard bilinear finite-element approximation to $\nabla\cdot(1/\rho^{n+\myhalf})\nabla$.
Also, $D^{{\rm CC}\rightarrow{\rm N}}$ is a discrete
second-order operator that approximates the divergence at nodes from cell-centered data
and $G^{{\rm N}\rightarrow{\rm CC}}$ approximates a cell-centered gradient from nodal data. Nodal
values for $\widehat S^{n+1}$ required for this equation are obtained by interpolating the cell-centered values. Finally, we
determine the new-time cell-centered velocity field using
\begin{equation}
U^{n+1} = U^{n+1,*} - \frac{\Delta t}{\rho^{n+\myhalf}}G^{{\rm N}\rightarrow{\rm CC}}(\phi-\pi^{n-\myhalf}),
\end{equation}
and the new time-centered pressure using $\pi^{n+\myhalf} = \phi$.
Thus, there are three different types of linear solves required to advance the velocity field. The first is the \textit{MAC}\ solve in order to obtain \textit{face-centered}\ velocities used to compute advective fluxes. The second is the multi-component \textit{cell-centered}\ solver for (\ref{eq:vel}) used to obtain the provisional new-time velocities. Finally, a \textit{nodal}\ solver is used to project the provisional new-time velocities so that they satisfy the constraint.
\subsection{Thermodynamic Advance}\label{sec:Thermodynamic Advance}
Here we describe the details of {\bf Step 2} above, in
which we iteratively advance $(\rho Y_m,\rho h)$ over the full time step.
We begin by computing the diffusion
operators at $t^n$ that will be needed throughout the iteration. Specifically, we evaluate the transport coefficients
$(\lambda,C_p,\mathcal D_m,h_m)^n$ from $(Y_m,T)^n$, and the provisional diffusion
fluxes, $\widetilde{\boldsymbol{\cal F}}_m^n$. These fluxes are conservatively
corrected as discussed in Section~\textcolor{red}{TBD} to obtain ${\boldsymbol{\cal F}}_m^n$ such that $\sum {\boldsymbol{\cal F}}_m^n = 0$.
Finally, we copy the transport coefficients, diffusion fluxes and the thermodynamic state from $t^n$ as starting values for $t^{n+1}$, and initialize the reaction terms, $I_R$, from the values used in the previous step.
The following sequence is then repeated for each iteration, $k=1{:}k_{max}$:
{\bf MISDC Step 2-I:} Use a second-order Godunov integrator to predict
time-centered edge states, $(\rho Y_m,\rho h)^{n+\myhalf,(k)}$. Source terms for this prediction include
explicit diffusion forcing, $D^{n}$, and an iteration-lagged reaction term, $I_R^{(k)}$.
Since the remaining steps of the algorithm (including the diffusion and chemistry advances) will not affect the new-time density, we can already compute $\rho^{n+1,(k+1)}$, which will be needed in the trapezoidal-in-time diffusion solves.
\begin{equation}
\frac{\rho^{n+1,(k+1)} - \rho^n}{\Delta t} = A_{\rho}^{(k+1)} = \sum A_{m}^{(k+1)}
= -\sum_m\nabla\cdot\left(\uadv\rho Y_m\right)^{n+\myhalf,(k)}.
\end{equation}
In addition to predicting $\rho$ and $\rho Y_m$ to the faces to compute advective fluxes, we need $\rho h$ there
as well. We could predict $\rho h$ with the Godunov scheme; however, because $h$ contains the heat of formation, scaled to an arbitrary reference state, it is not generally monotonic through flames. Also, because the equation of state is generally nonlinear, this would often lead to numerically generated non-monotonicity in the temperature field. An analytically equivalent approach, based on the fact that temperature should be smoother and monotonic through the flame, is to instead predict temperature with the Godunov scheme to the cell faces directly. Then, with $T$, $\rho = \sum (\rho Y_m)$ and $Y_m = (\rho Y_m)/\rho$ on cell faces, we can use Eq.~\ref{eq:hofT} to define $h$ there instead of extrapolating it. We can then evaluate the advective flux divergence, $A_{h}^{(k+1)}$.
{\bf Step 2-II:} Update the transport coefficients (if necessary) with the most current cell-centered thermodynamic
state, then interpolate those values to the cell faces.
Note that from here forward, we will drop the $n$+1 superscript of the $k$ and $k$+1 iterates.
We now compute provisional, time-advanced species mass fractions, $\widetilde Y_{m,{\rm AD}}^{(k+1)}$,
by solving a backward Euler type correction equation for the Crank-Nicolson update\footnote{The provisional species diffusion fluxes are $\widetilde{\boldsymbol{\cal F}}_{m,{\rm AD}}^{(0)} = -\rho^n\mathcal D_m^n\nabla\widetilde X_{m,{\rm AD}}^{(0)}$. However, this expression couples together all of the species mass fractions in the update of each, even for the mixture-averaged model. Computationally, it is much more tractable to write this as a diagonal matrix update with a lagged correction by noting that $X_m = (W/W_m)Y_m$. Using the chain rule, $\widetilde{\boldsymbol{\cal F}}_{m,{\rm AD}}^{(0)}$ then has components proportional to $\nabla Y_m$ and $\nabla W$. The latter is lagged in the iterations, and is typically very small. In the limit of sufficient iterations, diffusion is driven by the true form of the driving force, $d_m$, but in this form, each iteration involves decoupled diagonal solves.\label{fn:X}}, obtained by following the SDC formalism leading to (\ref{eq:AD}):
\begin{equation}
\frac{\rho^{(k+1)}\widetilde Y_{m,{\rm AD}}^{(k+1)} - (\rho Y_m)^n}{\Delta t}
= A_m^{{(k+1)}} + \widetilde D_{m,AD}^{(k+1)} + \half(D_m^n - D_m^{(k)}) + I_{R,m}^{(k)}
\label{eq:pY}
\end{equation}
where
\begin{eqnarray*}
&D_m^n &= - \nabla \cdot {\boldsymbol{\cal F}}_m^n\\ [2mm]
&D_m^{(k)} &= - \nabla \cdot {\boldsymbol{\cal F}}_m^{(k)}\\ [1mm]
&\widetilde D_{m,AD}^{(k+1)} &= - \nabla \cdot \widetilde {\boldsymbol{\cal F}}_{m,AD}^{(k+1)}\\ [-1.5mm]
& &= \;\; \nabla \cdot \Big[ \rho^{(k+1)}\mathcal D_m^{(k)}\frac{W}{W_m}\nabla\widetilde Y_{m,{\rm AD}}^{(k+1)}
\; + \; \rho^{(k+1)}\mathcal D_m^{(k)}\frac{Y_m^{(k)}}{W_m} \nabla W^{(k)} \Big]
\end{eqnarray*}
By lagging the $\nabla W$ term (and $\mathcal D_m$), this equation is a scalar, time-implicit, parabolic and linear for the updated $\widetilde Y_{m,{\rm AD}}^{(k+1)}$ (and requires a linear solve). The form of this solve, from a
software perspective, is identical to that of the \textit{MAC}\ projection discussed above.
Once all the species equations are updated with (\ref{eq:pY}), compute ${\boldsymbol{\cal F}}_{m,{\rm AD}}^{(k+1)}$,
which are conservatively corrected versions of $\widetilde{\boldsymbol{\cal F}}_{m,{\rm AD}}^{(k+1)}$,
and then re-compute the updated species mass fractions, $Y_{m,{\rm AD}}^{(k+1)}$, using
\begin{eqnarray}
\frac{\rho^{(k+1)}Y_{m,{\rm AD}}^{(k+1)} - (\rho Y_m)^n}{\Delta t}
&=& A_m^{{(k+1)}} + D_{m,AD}^{(k+1)} + \half(D_m^n - D_m^{(k)}) + I_{R,m}^{(k)}
\end{eqnarray}
where
\begin{eqnarray*}
&&D_{m,AD}^{(k+1)} = - \nabla \cdot {\boldsymbol{\cal F}}_{m,{\rm AD}}^{(k+1)}
\end{eqnarray*}
Next, we compute the time-advanced enthalpy, $h_{\rm AD}^{(k+1)}$. Much as diffusion of the species densities
with a $\nabla X_m$ driving force leads to a nonlinear, coupled Crank-Nicolson update, the
enthalpy diffuses with a $\nabla T$ driving force; here too we define an alternative linearized strategy.
We begin by following the same SDC-correction formalism, (\ref{eq:AD}), used for the species, and write
the nonlinear update for $\rho h$ (noting that there is no reaction source term here):
\begin{equation}
\frac{\rho^{(k+1)} h_{{\rm AD}}^{(k+1)} - (\rho h)^n}{\Delta t}
= A_h^{(k+1)} + D_{T,AD}^{(k+1)} + H_{AD}^{(k+1)} + \half \Big( D_T^n - D_T^{(k)} + H^n - H^{(k)} \Big)
\label{eq:hup}
\end{equation}
where
\begin{eqnarray*}
&D_T^n = \nabla \cdot \lambda^n \nabla T^n \hspace{2cm}
&H^n = - \nabla \cdot \sum h_m(T^n) \; {\boldsymbol{\cal F}}_m^n\\
&D_T^{(k)} = \nabla \cdot \lambda^{(k)} \nabla T^{(k)}
&H^{(k)} = - \nabla \cdot \sum h_m(T^{(k)}) \; {\boldsymbol{\cal F}}_m^{(k)}\\
&D_{T,AD}^{(k+1)} = \nabla \cdot \lambda_{AD}^{(k+1)} \nabla T_{AD}^{(k+1)}
&H_{AD}^{(k+1)} = - \nabla \cdot \sum h_m(T_{AD}^{(k+1)}) \; {\boldsymbol{\cal F}}_{m,AD}^{(k+1)}
\end{eqnarray*}
However, since we cannot compute $h_{{\rm AD}}^{(k+1)}$ directly, we solve this iteratively based on the approximation
$h_{{\rm AD}}^{(k+1),\ell+1} \approx h_{{\rm AD}}^{(k+1),\ell} + C_{p}^{(k+1),\ell} \delta T^{\ell+1}$, with
$\delta T^{\ell+1} = T_{{\rm AD}}^{(k+1),\ell+1} - T_{{\rm AD}}^{(k+1),\ell}$, and iteration index, $\ell$ = 1:$\,\ell_{MAX}$.
Equation~(\ref{eq:hup}) is thus recast into a linear equation for $\delta T^{\ell+1}$
\begin{eqnarray}
\rho^{(k+1)} C_p^{(k+1),\ell} \delta T^{\ell +1}
&-& \Delta t \, \nabla \cdot \lambda^{(k+1),\ell} \nabla (\delta T^{\ell +1}) \nonumber \\
&=& \rho^n h^n - \rho^{(k+1)} h^{(k+1),\ell} + \Delta t \Big( A_h^{(k+1)} + D_{T,AD}^{(k+1),\ell} + H_{AD}^{(k+1),\ell} \Big) \\
&&+ \; \frac{\Delta t}{2} \Big( D_T^n - D_T^{(k)} + H^n - H^{(k)} \Big) \nonumber
\end{eqnarray}
where $H_{AD}^{(k+1),\ell} = - \nabla \cdot \sum h_m(T_{AD}^{(k+1),\ell}) \, {\boldsymbol{\cal F}}_{m,AD}^{(k+1)}$
and $D_{T,AD}^{(k+1),\ell} = \nabla \cdot \lambda_{AD}^{(k+1),\ell} \, \nabla T_{AD}^{(k+1),\ell}$.
Note that again the solve for this
Crank-Nicolson update has a form that is identical to that of
the \textit{MAC}\ projection discussed above. After each
iteration, update $T_{{\rm AD}}^{(k+1),\ell+1} = T_{{\rm AD}}^{(k+1),\ell} + \delta T^{\ell+1}$ and
re-evaluate $(C_p,\lambda,h_m)^{(k+1),\ell+1}$ using $(T_{{\rm AD}}^{(k+1),\ell+1}, Y_{m,{\rm AD}}^{(k+1)})$.
After the iterations are complete, set
\begin{equation*}
D_{T,AD}^{(k+1)} = D_{T,AD}^{(k+1),\ell_{MAX}-1} + \nabla \cdot \lambda^{(k+1),\ell_{MAX}-1} \nabla (\delta T^{\ell_{MAX}})
\end{equation*}
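A zero-dimensional analog of this $\delta T$ iteration, dropping the advection and diffusion operators entirely, reduces to a Newton-style inversion of $\rho\, h(T) = \mathrm{rhs}$; the quadratic $h(T)$ below is a stand-in for the NASA-polynomial enthalpy and every number is illustrative:
\begin{verbatim}
// 0-D delta-T iteration: h^{l+1} ~ h^l + Cp^l * dT, iterated to
// invert rho*h(T) = rhs for T. Toy h(T) with cp(T) = dh/dT.
public class EnthalpyIteration {
    static double h(double T)  { return 1000.0 * T + 0.05 * T * T; }
    static double cp(double T) { return 1000.0 + 0.1 * T; }

    public static void main(String[] args) {
        final double rho = 1.0;
        final double rhs = rho * h(1500.0);  // exact answer: T = 1500

        double T = 1000.0;                   // initial guess
        for (int l = 0; l < 5; l++) {
            double dT = (rhs / rho - h(T)) / cp(T);  // linearized step
            T += dT;
            System.out.printf("l=%d  T=%.6f  dT=%.3e%n", l, T, dT);
        }
    }
}
\end{verbatim}
Because $C_p = \partial h/\partial T$, the iteration converges quadratically in this 0-D limit; in the full scheme each pass additionally requires the linear parabolic solve described above.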
{\bf Step 2-III:}
Based on the updates above, we define an effective contribution of advection and diffusion to the
update of $\rho Y_m$ and $\rho h$:
\begin{eqnarray*}
&&Q_{m}^{(k+1)} = A_m^{(k+1)} + D_{m,AD}^{(k+1)} + \half(D_m^n - D_m^{(k)}) \\
&&Q_{h}^{(k+1)} = A_h^{(k+1)} + D_{T,AD}^{(k+1)} + \half(D_T^n - D_T^{(k)} + H^n - H^{(k)} )
\end{eqnarray*}
Integrate the ODE system for reactions over $\Delta t^n$
to advance $(\rho Y_m,\rho h)^n$ to $(\rho Y_m,\rho h)^{(k+1)}$ with a piecewise-constant source term representing
advection and diffusion:
\begin{eqnarray}
\frac{\partial(\rho Y_m)}{\partial t} &=& Q_{m}^{(k+1)} + \rho\dot\omega_m(Y_m,T),\label{eq:MISDC VODE 3}\\
\frac{\partial(\rho h)}{\partial t} &=& Q_{h}^{(k+1)}.\label{eq:MISDC VODE 4}
\end{eqnarray}
After the integration is complete, we make one final call to the equation of state
to compute $T^{(k+1)}$ from $(Y_m,h)^{(k+1)}$. We also can compute the effect of reactions
in the evolution of $\rho Y_m$ using,
\begin{equation}
I_{R,m}^{(k+1)} = \frac{(\rho Y_m)^{(k+1)} - (\rho Y_m)^n}{\Delta t} - Q_{m}^{(k+1)}.
\end{equation}
If $k+1 < k_{\rm max}$, set $k = k+1$ and return to {\bf Step 2-I}. Otherwise, the
time advancement of the thermodynamic variables is complete: set
$(\rho Y_m,\rho h)^{n+1} = (\rho Y_m,\rho h)^{(k+1)}$, and {\bf Step 2} of the algorithm is done.
\section{Adding to Front}
\frame{\tableofcontents[currentsection]}
\begin{frame}
\frametitle{Problem Statement}
\begin{center}\ttfamily
prepend([1, 2, 3, 4, 5], 0) \\[4mm]
$\downarrow$ \\[4mm]
[0, 1, 2, 3, 4, 5]
\end{center}
\end{frame}
\begin{frame}
\frametitle{Add to Front of Array}
\begin{center}
\begin{tikzpicture}
\draw[thick] (0,0) grid ++(6,1);
\draw[thick] (0,-2) grid ++(7,1);
\foreach[evaluate={int(\i+1)} as \j] \i in {0,...,5} {
\node at (\i + 0.5,0.5) {\j};
\node at (\i + 1.5,-1.5) {\j};
\draw[-latex] (\i + 0.5,0.25) -- (\i + 1.5,-1.25) node[midway,sloped,font=\tiny,above] {copy};
}
\node at (0.5,-1.5) {0};
\end{tikzpicture}
\end{center}
\vskip4mm
\structure{Algorithm}
\begin{itemize}
\item Create new array with larger size
\item Copy all elements
\item $O(n)$
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Add to Front of Linked List}
\begin{center}
\begin{tikzpicture}[link/.style={thick,-latex}]
\foreach[evaluate={(\i - 1) * 1.5} as \x] \i in {1,...,5} {
\coordinate (p\i) at (\x,0);
\llnode[position={p\i},size=0.5cm,value=\i]
}
\foreach[evaluate={(\i - 1) * 1.5} as \x] \i in {1,...,4} {
\draw[-latex] ($ (p\i) + (0.75,0.25) $) -- ++(0.75,0);
}
\coordinate (p0) at ($ (p1) + (0,-1.5) $);
\llnode[position=p0,size=0.5cm,value=0]
\draw[-latex] ($ (p0) + (0.75,0.25) $) -- ($ (p1) + (0.5,0) $);
\end{tikzpicture}
\end{center}
\vskip4mm
\structure{Algorithm}
\begin{itemize}
\item Create new node
\item Have it point to the (originally) first node
\item $O(1)$
\end{itemize}
\end{frame}
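% A hedged sketch (not part of the original deck): Java code for the two
% prepend strategies described in the preceding frames. The Node class is
% an assumption, not taken from any course material.
\begin{frame}[fragile]
\frametitle{Prepend in Code (Sketch)}
\begin{verbatim}
class Node { int value; Node next; }

// O(n): allocate a larger array and copy every element
static int[] prepend(int[] a, int value) {
    int[] b = new int[a.length + 1];
    b[0] = value;
    System.arraycopy(a, 0, b, 1, a.length);
    return b;
}

// O(1): new node points at the (originally) first node
static Node prepend(Node head, int value) {
    Node n = new Node();
    n.value = value;
    n.next = head;
    return n;
}
\end{verbatim}
\end{frame}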
\chapter{Instrumentation}
The source-code for this work will be available in the Computer Engineering Group Git repository (\verb|https://github.com/UniHD-CEG|). It includes the Clang and LLVM extensions, as well as
the utilities to analyse traces.
Instrumenting an application at compile time happens in two steps. The first step is instrumenting all
source files containing:
\begin{itemize}
\item Kernel Calls
\item Kernel and \verb|__device__| function definitions
\item Kernel \verb|__device__| function declarations
\end{itemize}
Each file needs to be parsed separately. For each file, a new file with the prefix 'augmented-' is created, which is then used for
the further build process. Compilation units pose some problems when kernels are defined and declared in \verb|.cu| files that are then included
in other files. The easiest workaround is to copy the kernel into the file that includes it, and then use only the including file for instrumentation and build.
The following snippet is an example of how to augment a kernel file.
\begin{lstlisting}[style=C]
clang++ -Xclang -load -Xclang "clang-plugin/libMemtrace-AA.so"
-Xclang -plugin -Xclang cuda-aug -Xclang -plugin-arg-cuda-aug
-Xclang -f -Xclang -plugin-arg-cuda-aug -Xclang ./augmented-kernel.cu
-I$CUDASYS/samples/common/inc -I. -I../utils
--cuda-path=$CUDASYS -std=c++11 -E application/kernel.cu
\end{lstlisting}
Next, the host and device utils are compiled, for later linking with the application.
\begin{lstlisting}[style=C]
// Host
clang++ -c -L$CUDASYS/lib64 --cuda-path=$CUDASYS
-I$CUDASYS/samples/common/inc -O1 --cuda-gpu-arch=sm_30
-I$CUDASYS/include -o hutils.o --std=c++11 -I. -I../utils
../utils/TraceUtils.cpp
// Device
clang++ -c -L$CUDASYS/lib64 --cuda-path=$CUDASYS
-I$CUDASYS/samples/common/inc -O1 --cuda-gpu-arch=sm_30
-I$CUDASYS/include -o dutils.o --std=c++11 -I. -I../utils
../utils/DeviceUtils.cu
\end{lstlisting}
Next, all the augmented kernel files are compiled at once.
\begin{lstlisting}[style=C]
clang++ -c -Xclang -load -Xclang $LLVMPLUGIN --cuda-path=$CUDASYS
-I$CUDASYS/samples/common/inc --cuda-gpu-arch=sm_30 -L$CUDASYS/lib64 -O1
-lcudart_static -m64 --std=c++11 -I. -I../utils -I<app-includes>
./augmented-kernel.cu ./augmented-kernel2.cu
\end{lstlisting}
Finally, all compiled files are linked together.
\begin{lstlisting}[style=C]
clang++ --cuda-path=$CUDASYS -I$CUDASYS/samples/common/inc --cuda-gpu-arch=sm_30 -L$CUDASYS/lib64 -O1
-lcudart -ldl -lrt -L. -m64 --std=c++11 -I. -I../utils -I<app-includes>
-o application dutils.o hutils.o augmented-kernel.o augmented-kernel2.o
\end{lstlisting}
During execution, one or more files are generated in the \verb|/tmp| folder. The files are named \verb|MemTrace-pipe-<n>|, one for
each stream.
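A minimal consumer for one of these pipes can be written in a few lines; note that the stream index (\verb|0| below) and the treatment of the data as an opaque byte stream are assumptions, since the record format is internal to the tracing utilities.
\begin{lstlisting}[style=C]
// Hypothetical Java consumer: forward raw bytes from one trace pipe
// to stdout. Pipe name and byte-stream treatment are assumptions.
import java.io.FileInputStream;
import java.io.IOException;

public class DrainPipe {
    public static void main(String[] args) throws IOException {
        try (FileInputStream in =
                 new FileInputStream("/tmp/MemTrace-pipe-0")) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) >= 0)  // blocks until writer closes
                System.out.write(buf, 0, n);
        }
        System.out.flush();
    }
}
\end{lstlisting}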
\documentclass[10pt]{article}
\usepackage{url}
\title{Reply to academic editor and reviewer concerning submission BR-Org-18-034 "Investigating soundscapes perception through acoustic scenes simulation"}
\begin{document}
\maketitle
As a preamble, we would like to thank the editor and the reviewer for their comments and suggestions. Following these comments, we made several changes to the article, which are summarized here. The next sections list our answers to each of their comments, with references to the revised manuscript where appropriate.
\section{Answer to Academic Editor}
\begin{enumerate}
\item \emph{please follow APA (American Psychological Association) style guidelines (w.r.t. in-text citations, etc.).}
$\rightarrow$ The manuscript now follows the APA citation style. The text has been modified accordingly.
\item \emph{ Please avoid the over-use of acronyms as it makes reading of your article unnecessarily difficult (examples: P@5, ni, i, etc.). This sort of lab jargon is not transparent to the wider readership of BRM. }
$\rightarrow$ Those abbreviations are now written out in full, except in some figures for the sake of clarity, in which case the acronyms used are detailed in the caption.
\item \emph{I would also like to see comparisons between your approach and other state-of-the-art soundscape analyses (e.g. TAPESTREA, see comment from the reviewer below).}
$\rightarrow$ The work of Misra et al.\ is indeed very relevant. A new section (Section 6) is now dedicated to this comparison with state-of-the-art approaches.
\item \emph{You might consider providing a link to a github site with examples of your stimuli. It is hard to imagine what these sound scenes sound like without some real examples.}
$\rightarrow$ The resulting waveforms are available for download \url{https://archive.org/details/soundSimulatedUrbanScene}. The link is now in the introduction. The software platform used for the experiment, the parametrization of the software platform for each generated scene, as well as a 2 dimensional projection of the resulting scenes are available at \url{http://soundthings.org/research/urbanSoundScape/XP2014}.
\end{enumerate}
\section{Answer to Reviewer}
\begin{enumerate}
\item \emph{The utility of this tool beyond the experiments presented here is not apparent because the flexibility of the system was not demonstrated, for example:
- SimScene itself was only used in the first experiment, and other applications of it were not considered. Design choices vary widely between researchers and domains, meaning that the specific instantiations here may not be generally useful.}
$\rightarrow$ We definitely agree. As now discussed in more detail in Section 6 of the manuscript, the proposed software tool is not designed for genericity, but we believe that the proposed design strategies and the use of recent software toolkits can serve as inspiration for designing other experimental protocols that question soundscape perception.
\item \emph{It is not clear how this tool would be expanded to consider other acoustic parameters of interest, beyond level. It might be useful to consider potential relationships to other soundscape softwares such as TAPESTREA, and how the exploration of other acoustic parameters could be made intuitive for experiment participants.}
$\rightarrow$ The work of Misra et al.\ is indeed very relevant. A new section (Section 6) is now dedicated to this comparison with state-of-the-art approaches.
\item \emph{In the last sentence of the abstract, the distinction of 'physical descriptors' vs. 'global descriptors' does not have enough context here to be informative.}
$\rightarrow$ This sentence has been rewritten to be more explicit: ``acoustic pressure levels computed for specific sound sources better characterize the appraisal than the acoustic pressure level computed over the overall soundscape.''
\item \emph{In section 2.1 (pg. 5), the citations (particularly 27-31) do not seem relevant for justifying the event/texture distinction in the simulator.}
$\rightarrow$ The references to the ASA theory are now removed. The works of McDermott, and of Nelken and de Cheveigné, are still cited as, in our humble opinion, they have some relevance to the matter discussed in this section.
\end{enumerate}
\end{document}
\title{Research Into An Existing \textit{Frogger} Implementation}
\author{
Jack Ellis \\
[email protected]\\
4262333
}
\date{}
\documentclass[12pt]{article}
\usepackage{graphicx}
\graphicspath{ {Images/} }
\usepackage{mathtools}
\usepackage{listings}
\lstset{
language=Java,
basicstyle=\ttfamily\small,
showstringspaces=false
}
\begin{document}
\maketitle
\tableofcontents
\pagebreak
\section{Introduction}
There exist online a great number of Object-Oriented (OO) implementations of the game \textit{Frogger}.
I have chosen GitHub user \verb|vitalius|' implementation (available at \verb|https://github.com/vitalius/frogger|) as the basis of my comparison, because of the number of additional features it has over and above the base ``road crossing simulator'' idea of Frogger.
In this document I will analyse and discuss these features.
\section{Experience Analysis}
\begin{figure}
\caption{A screenshot of vitalius' \textit{Frogger} implementation}
\includegraphics[width=\textwidth]{vitalius.png}
\end{figure}
The basic gameplay revolves around getting a frog from one side of a river to another, via a busy highway and a series of logs.
If the player is hit by a car or falls into the river they lose a life (starting from 5), and must restart from the beginning.
The game is in full colour with smooth animation (meaning the non-player elements that move do so smoothly as opposed to the player, who operates in steps).
Contrary to my original understanding of the game, the player can move in all four directions, meaning they can double back if their current position is unfavourable.
There is a countdown timer set to 60 seconds. If the player does not complete a level within these 60 seconds they lose a life.
The game has two main states: paused and playing; it is not a ``true'' pause state, as all it does is return the game to the initial state, as if opened anew.
\subsection{The Road}
There are 6 highway lanes, with alternating vehicle directions, and 2 types of regular vehicle (car and truck), and a special police car which traverses the width of the screen much more quickly than any other object.
Occasionally a heatwave will strike, which causes a particle effect around the player sprite, and causes said sprite to move in a random direction if the player does not move within some timeframe (this will be examined more closely in the source code analysis).
\subsection{The River}
The river stage is much more complex and demanding of the player.
Here the player must navigate a series of moving logs, which carry the player laterally.
If the player is on a log which moves beyond the bounds of the screen the player will fall into the river and lose a life.
In addition to the logs, the river contains turtles and crocodiles, which will carry the player from side to side in the same way as the logs.
However, both carry dangers; while you can move laterally on a crocodile, moving onto its head will see the player ``eaten'' and lose a life.
The turtles, being of width equal to the players' step size, cannot be moved on.
There is an environmental effect similar to the ``heatwave'' of the road section; here it is sudden gusts of wind.
This effect has some particle effects shown on screen, but more importantly it slowly moves the player in the direction of the blowing wind.
\section{Source Code Analysis}
The Java project contains 20 files; in this section I will go through each of them individually, in alphabetical order.
The game is built using the Java Instructional Game (JIG) engine, an educational tool developed in response to the fact that, at the time of its release in 2007, \textit{"most of today’s game engines rely on advanced knowledge of the C++ programming language"}\cite{jig-tutorial}
\subsection{AudioEfx.java}
This file is responsible for the audio effects in the game.
\subsubsection{public void playGameMusic(), 108-110}
This function simply loops the main game theme.
\subsubsection{public void playCompleteLevel(), 112-115}
This function first pauses the game music, then plays the "level complete" sound effect.
\subsubsection{public void playRandomAmbientSound(final long deltaMs), 117-129}
\verb|playRandomAmbientSound| plays, after a predetermined delay, an ambient sound effect.
This effect differs depending on whether the player is on road or on water, and from looking at the resource files can be either a car horn, car passing, or siren if the player is on the road, or a water splash, frog noise, or another kind of splash if they are on the river.
\subsubsection{public void update(final long deltaMs), 131-140}
\verb|update| calls \verb|playRandomAmbientSound|, passing in the current game time, and controls the audio (playing or pausing it depending upon the music's state).
\subsection{Car.java, CopCar.java, Truck.java}
I have bundled these files into one section because they are road-based extensions of \verb|MovingObject|, with largely similar file definitions.
\verb|CopCar| is the most "boring" of the three, having length of one unit, and only one possible sprite to represent it.
\verb|Car| again has length one unit, however has 3 options for representative sprites.
Finally, \verb|Truck| has length of two units, and consequently requires two \verb|CollisionObject|s.
In all cases the constructor sets the correct sprite, the position, adds the objects' \verb|CollisionObject| variable/s to the \verb|collisionObjects| variable, and ensures that the object is in fact drawn on the screen, i.e. making sure that if the objects' \verb|x| position is outside the screen boundaries, drawing is not even attempted.
\subsection{CollisionObject.java}
\verb|CollisionObject| extends a JIG class called \verb|VanillaSphere|, editing it such that its position is offset to the centre of the object to which it relates.
\subsection{Crocodile.java}
This is another extension of \verb|MovingEntity|, with a length of 3 units (as declared in the file).
Interestingly, it sets up 4 \verb|collisionObject|s for use in the \verb|collisionObjects| variable.
Beyond this, the class constructor is much the same as it is for the other \verb|MovingEntity| extensions, with the exception of some variables relating to the current and next frame, as well as the \verb|animationTime| and \verb|animationDelay|, used in the \verb|animate| function.
\subsubsection{public void animate(long deltaMs), 65-72}
This function sets the correct sprite on a per-frame basis such that the crocodile appears to be opening and closing its mouth.
It first adds the current amount of time the game has been running for to the animation time.
If that is now greater than the \verb|animationDelay| (set to 300 by default), it is reset, and the next sprite is chosen.
This occurs in sequence to give the impression of continuous animation.
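The logic can be sketched as follows; this is a hypothetical reconstruction based on the description above, not a verbatim copy of \verb|vitalius|' source, and names such as \verb|frameCount| are mine:
\begin{lstlisting}
// Hypothetical reconstruction of the frame-cycling logic.
class CrocodileAnimation {
    long animationTime  = 0;
    long animationDelay = 300;  // ms between frames, per the analysis
    int  currentFrame   = 0;
    final int frameCount = 2;   // e.g. mouth open / mouth closed

    void animate(long deltaMs) {
        animationTime += deltaMs;           // accumulate elapsed time
        if (animationTime > animationDelay) {
            animationTime = 0;              // reset the timer
            currentFrame = (currentFrame + 1) % frameCount; // next sprite
        }
    }
}
\end{lstlisting}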
\subsection{Frogger.java}
This is the main file relating to the player-controlled entity, the eponymous \verb|Frogger|.
It extends \verb|MovingEntity| with a number of variables:
\begin{itemize}
\item \verb|int MOVE_STEP|\\
Initialised to 0, this variable appears to denote the length of one player "hop", and is equal to the unit lengths of the cars, trucks, and other obstacles.
\item \verb|int ANIMATION_STEP|\\
Initialised to 4, this denotes the length of movement per frame within a hop; a hop takes 8 frames and the player moves a total of 32 pixels per hop.
\item \verb|int curAnimationFrame|\\
Initialised to 0, this is used in animation steps to determine whether or not the animation is finished.
\item \verb|int finalAnimationFrame|\\
Initialised to \verb|MOVE_STEP/ANIMATION_STEP| (so 8 by default), this is the total number of frames one animation should take.
\item \verb|long animationDelay|\\
Initialised to 10, this denotes the number of milliseconds before an animation should begin.
\item \verb|long animationBeginTime|\\
Initialised to 0, this is set to the current time at the beginning of any movement.
\item \verb|boolean isAnimating|\\
Initialised to \verb|false|, this is set to \verb|true| when the player is moving, and reset to \verb|false| when that movement has finished.
\item \verb|Vector2D dirAnimation|\\
Initialised to a new vector with values of 0, this denotes the direction the player should face when they move.
\item \verb|MovingEntity followObject|\\
Initialised to \verb|null|, this is set when the player steps on a log in the river stage, and stores the entity whose direction and velocity the Frogger should follow without input.
\item \verb|boolean isAlive|\\
Initialised to \verb|false|, this denotes whether or not the player is alive. It is set to \verb|true| at the start of a level.
\item \verb|long timeOfDeath|\\
Initialised to 0, this is used when resetting the frog with a 2-second delay.
\item \verb|int currentFrame|\\
Initialised to 0, this denotes which sprite should be used to represent the Frogger, and is updated depending on the direction the Frogger is facing.
\item \verb|int tmpFrame|\\
Initialised to 0, this is used such that the Frogger can return to the correct idle sprite after they have moved.
\item \verb|int deltaTime|\\
Initialised to 0, this is used mostly for level timing, taking one second off the game clock per second.
\item \verb|boolean cheating|\\
Initialised to \verb|false|, this disables the player's death scenarios when set, making them invulnerable.
\item \verb|boolean hw_hasMoved|\\
Initialised to \verb|false|, this is set to \verb|true| whenever the player moves, and is used by the Heatwave event.
\item \verb|Main game|\\
This is set up in the constructor.
\end{itemize}
\subsubsection{public Frogger(Main g), 72-77}
This is the class constructor, taking as an argument an instance of the main game and drawing the character to the screen.
The argument is stored as the \verb|Main game| variable mentioned earlier.
It resets the frog as well, using \verb|resetFrog()|.
\subsubsection{public void resetFrog(), 82-89}
This function sets \verb|isAlive| to \verb|true|, \verb|isAnimating| to \verb|false|, \verb|currentFrame| to \verb|0|, \verb|followObject| to \verb|null|, \verb|position| to the default starting position, and the game's level timer to its default value.
In essence it resets all of the values in the file to their initial state, except \verb|isAlive|, which is set to \verb|true| rather than its initial \verb|false|.
\subsubsection{public void move$<$Direction$>$(), 94-125}
This section will cover \verb|moveLeft()|, \verb|moveRight()|, \verb|moveUp()|, and \verb|moveDown()|, given that these are all similar enough functions that I do not believe they are worth individual sections.
In each case, the function first checks 3 things: that the player is not trying to pass the boundary of the screen, that the player is alive, and that the Frogger is not currently animating.
If all three of these things are true, it sets the player's sprite to the correct orientation, and calls \verb|move| with a \verb|Vector2D| pointing in the corresponding direction.
It also plays a small sound effect.
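As an illustration, one of these functions might be structured as in the sketch below; the bounds check, sprite index, and sound call are assumptions based on the description above.
\begin{verbatim}
// Hypothetical sketch of moveUp(); fields mirror the variable list above.
public class MoveSketch {
    static final int MOVE_STEP = 32;
    double x = 0, y = 416;                    // assumed starting position
    int currentFrame = 0;
    boolean isAlive = true, isAnimating = false;

    public void moveUp() {
        boolean insideBounds = y - MOVE_STEP >= 0;   // hop stays on screen
        if (insideBounds && isAlive && !isAnimating) {
            currentFrame = 0;                 // assumed "facing up" sprite index
            move(0, -MOVE_STEP);              // hop one step upwards
            // playSound("frog_jump");        // stubbed sound effect
        }
    }

    private void move(double dx, double dy) {
        // see the description of move(Vector2D dir) below
    }
}
\end{verbatim}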
\subsubsection{public long getTime(), 131-133}
This function returns the current system time, in milliseconds.
\subsubsection{public void move(Vector2D dir), 144-161}
This function is called in each of the \verb|move<Direction>()| function calls.
It sets \verb|followObject| to \verb|null|, eliminating any "drifting" caused by the player being on a log, and sets up a "step" to jump with \verb|curAnimationFrame| and \verb|finalAnimationFrame|.
It then sets \verb|isAnimating| and \verb|hw_hasMoved| to \verb|true|, sets the Frogger's sprite to the correct animated state (including setting \verb|tmpFrame| so the sprite can return to the correct original state), and synchronises the Frogger's collision sphere with its new position.
\subsubsection{public void updateAnimation(), 166-190}
This function first checks whether or not the Frogger is alive and currently animating.
If one of these is not true, the function synchronises the Frogger's CollisionSphere with its sprite, and returns.
\par
Otherwise, if the value of \verb|curAnimationFrame| is greater than or equal to \verb|finalAnimationFrame| it means the Frogger should have finished animating, and consequently sets the value of \verb|isAnimating| to \verb|false|, and returns the Frogger to the sprite described by \verb|tmpFrame| (the Frogger in its resting state).
\par
If neither of the above is true, and the sum of \verb|animationBeginTime| and \verb|animationDelay| is less than the current time (accessed via a \verb|getTime()| call), the animation will cycle; this is done by updating the position by the value of one step, and incrementing the \verb|curAnimationFrame| variable.
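The three branches can be summarised in the following hypothetical sketch; the position update is written with plain coordinates rather than the JIG's vector type, and \verb|animationBeginTime| is assumed to have been set when the hop started, as described in the variable list.
\begin{verbatim}
// Hypothetical sketch of updateAnimation(); fields as in the variable list.
public class UpdateAnimationSketch {
    static final int ANIMATION_STEP = 4;
    int curAnimationFrame = 0, finalAnimationFrame = 8;
    int currentFrame = 0, tmpFrame = 0;
    boolean isAlive = true, isAnimating = true;
    long animationBeginTime = 0, animationDelay = 10;
    double x, y, dirX, dirY;                  // position and hop direction

    public void updateAnimation() {
        if (!isAlive || !isAnimating) { sync(); return; }
        if (curAnimationFrame >= finalAnimationFrame) {
            isAnimating = false;              // the hop has finished
            currentFrame = tmpFrame;          // back to the resting sprite
            return;
        }
        if (animationBeginTime + animationDelay < System.currentTimeMillis()) {
            x += dirX * ANIMATION_STEP;       // advance by one animation step
            y += dirY * ANIMATION_STEP;
            curAnimationFrame++;
        }
    }

    private void sync() { /* align the collision sphere with the sprite */ }
}
\end{verbatim}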
\subsubsection{public void allignXPositionToGrid(), 195-202}
The purpose of this function is to ensure that the Frogger, upon movement, snaps back to the "grid" that makes up its range of movement.
If the Frogger is animating, or it has a \verb|followObject| set, the function does nothing.
If neither of those is the case, the Frogger's x position is rounded to the nearest multiple of 32 (the size of one "step").
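The snapping itself reduces to a single rounding operation, sketched below under the same assumptions as the previous snippets (and keeping the function's original, misspelled name).
\begin{verbatim}
// Hypothetical sketch of allignXPositionToGrid().
public void allignXPositionToGrid() {
    if (isAnimating || followObject != null) return;  // mid-hop or on a log
    x = Math.round(x / 32.0) * 32.0;                  // nearest multiple of 32
    sync();                                           // re-align the collision sphere
}
\end{verbatim}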
\subsubsection{public void updateFollow(long deltaMs), 208-213}
\verb|updateFollow| allows the Frogger to follow a log on the river by taking its velocity vector.
To do this, it first checks that the Frogger a) has a \verb|followObject| set, and b) is alive.
If both of these are true, it updates the Frogger's position using the \verb|followObject|'s velocity vector.
\subsubsection{public void follow(MovingEntity log), 219-221}
This function sets the Frogger to have a \verb|followObject| when it steps on a log.
\subsubsection{public void windReposition(Vector2D d), 228-234}
This function enforces the "random wind gust" environmental effect, by moving the Frogger to a new position described by the Frogger's current location and the argument \verb|d|.
It also sets \verb|hw_hasMoved| to true, and does all of this only if the Frogger is alive.
\subsubsection{public void randomJump(final int rDir), 240-254}
This function enforces the "heatwave" environmental effect, by moving the Frogger one "step" in a random direction described by \verb|rDir|.
\subsubsection{public void die(), 259-274}
If the Frogger is currently animating it cannot die, and as such this function does nothing if that is the case.
If the player is not cheating, the function plays a sound effect, sets \verb|followObject| to \verb|null|, \verb|isAlive| to \verb|false|, and \verb|hw_hasMoved| to \verb|true|.
It then sets the Frogger to the "dead" sprite, and decrements the number of lives the player has left.
Regardless of whether or not the player is cheating, the \verb|timeOfDeath| variable is set to the current time, and the level's timer is set to the default value.
\subsubsection{public void reach(final Goal g), 279-294}
This function is only called upon the Frogger reaching the end of the level.
If the \verb|Goal|'s \verb|isReached| variable is \verb|false|, it plays the relevant sound effect, adds 100 to the \verb|Game|'s score and adds the total score to the level timer.
If the player has landed on a bonus goal another sound effect is played, and the player gains a life.
\par
If the \verb|Goal|'s \verb|isReached| variable is \verb|true|, the Frogger's position is set to the same as the \verb|Goal|'s.
\subsubsection{public void update(final long deltaMs), 296-317}
This function deals with level timing and general animation update work.
If the player has run out of lives, it does nothing; if the player is dead it keeps them that way for two seconds.
Otherwise it updates the Frogger's animation (via an \verb|updateAnimation()| call), follow position (via \verb|updateFollow|), and sets the correct sprite frame.
Every 1000ms it decrements the game's timer by 1, and if that timer reaches 0 it kills the player.
\subsection{FroggerCollisionDetection.java}
The purpose of this class is to detect when the Frogger has collided with another object.
It has 6 declared variables:
\begin{itemize}
\item \verb|public Frogger frog|\\
The Frogger itself; this is given a value on object creation.
\item \verb|public CollisionObject frogSphere|\\
The Frogger's collision sphere; this is given a value on object creation.
\item \verb|public int river_y0|\\
The lower y-bound of the first river lane.
This is set to \verb|1*32|, or one grid step up.
\item \verb|public int river_y1|\\
The upper y-bound of the last river lane.
This is set to \verb|river_y0 + (6*32)|, 6 "lanes" up from \verb|y0|.
\item \verb|public int road_y0|\\
The lower y-bound of the first road lane.
This is set to \verb|1*32|, or one grid step up.
\item \verb|public int road_y1|\\
The upper y-bound of the last road lane.
This is set to \verb|road_y0 + (6*32)|, 6 "lanes" up from \verb|y0|.
\end{itemize}
\subsubsection{public FroggerCollisionDetection(Frogger f), 43-46}
This function instantiates an instance of \verb|FroggerCollisionDetection|, setting both the \verb|frog| and \verb|frogSphere| variable.
\subsubsection{public void testCollision(AbstractBodyLayer$<$MovingEntity$>$), 48-76}
This function tests whether or not the frog is touching any other objects on the map.
If the frog isn't alive, it returns immediately; otherwise it sets a variable denoting the Frogger's current position and declares a \verb|double| called \verb|dist2|.
If the frog is out of bounds (checked by the function \verb|public boolean isOutOfBounds()|), the player dies.
Otherwise it checks every \verb|MovingEntity| in its argument as follows: if that entity is inactive, it is skipped.
For every \verb|CollisionObject| that entity has stored, the distance between it and the centre position of the frog is calculated using Pythagoras' theorem.
If they collide, call the \verb|collide| function and return.
Finally, if the Frogger is in the river, the player dies.
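The core of this loop might look like the following hypothetical sketch; the accessor names (e.g. \verb|getRadius()|, \verb|getCollisionObjects()|) are stand-ins, since the exact JIG API is not quoted above.
\begin{verbatim}
// Hypothetical sketch of the testCollision() loop.
for (MovingEntity e : layer) {
    if (!e.isActive()) continue;              // skip inactive entities
    for (CollisionObject c : e.getCollisionObjects()) {
        double dx = c.getCenterX() - frogX;   // frog centre coordinates
        double dy = c.getCenterY() - frogY;
        double dist2 = dx * dx + dy * dy;     // squared distance (Pythagoras)
        double reach = c.getRadius() + frogSphere.getRadius();
        if (dist2 < reach * reach) {          // the spheres overlap
            collide(e, c);
            return;
        }
    }
}
if (isOnRiver()) frog.die();                  // in the water with nothing to stand on
\end{verbatim}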
\subsubsection{public boolean isOutOfBounds(), 90-97}
This function takes the centre position of the Frogger; if it is outside the bounds of the map it returns \verb|true|, else it returns \verb|false|.
\subsubsection{public boolean isOnRoad() \& public boolean isOnRiver(), 103-110 \& 116-123}
These two functions are similar enough that they can be described in the same section.
Both functions check the Frogger's position in \verb|y|, returning true if it is between the bounds described by \verb|road_y0| and \verb|road_y1|, and \verb|river_y0| and \verb|river_y1| respectively.
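As a sketch (the road version is identical up to the bound names; the accessor name is a stand-in):
\begin{verbatim}
// Hypothetical sketch of isOnRiver().
public boolean isOnRiver() {
    double y = frog.getCenterY();             // frog's centre y-coordinate
    return y > river_y0 && y < river_y1;      // inside the river band
}
\end{verbatim}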
\subsubsection{public void collide(MovingEntity m, CollisionObject s), 125-153}
This function deals with collisions between the Frogger and other \verb|MovingEntity|s on the screen, selecting the correct function for the Frogger object given by \verb|frog| to perform based on what it collides with.
It is essentially a giant switch/case statement, checking what the \verb|MovingEntity| is an instance of (a sketch follows the list):
\begin{itemize}
\item If \verb|m| is a \verb|Truck|, \verb|Car|, or \verb|CopCar|, the Frogger dies.
\item If \verb|m| is a \verb|Crocodile|, the function checks whether or not the Frogger is on its head or body.\\
If the Frogger is on its head, the Frogger dies.\\
Otherwise, the Frogger is set to be \verb|follow|ing it.
\item If \verb|m| is a set of \verb|Turtles|, the function checks whether or not those turtles are underwater.\\
If they are, the Frogger dies.\\
Otherwise, the Frogger is set to \verb|follow| the \verb|Turtles|.
\item Finally, if \verb|m| is a \verb|Goal|, the Frogger calls \verb|reach|, and the level is complete.
\end{itemize}
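Below is a hypothetical sketch of this dispatch; the head/body test on the crocodile is left abstract, since the text does not specify how it is performed.
\begin{verbatim}
// Hypothetical sketch of collide(); class names follow the files above.
public void collide(MovingEntity m, CollisionObject s) {
    if (m instanceof Truck || m instanceof Car || m instanceof CopCar) {
        frog.die();
    } else if (m instanceof Crocodile) {
        if (isHeadSphere((Crocodile) m, s)) frog.die();  // landed on the head
        else frog.follow(m);                             // ride the body
    } else if (m instanceof Turtles) {
        if (((Turtles) m).isUnderwater) frog.die();      // nothing to stand on
        else frog.follow(m);
    } else if (m instanceof Goal) {
        frog.reach((Goal) m);                            // level complete
    }
}

// Assumed helper: compares s against the crocodile's head collision sphere.
private boolean isHeadSphere(Crocodile c, CollisionObject s) { return false; }
\end{verbatim}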
\subsection{FroggerUI.java}
This class has 8 class variables:
\begin{itemize}
\item The following 5 are all variables which store the relevant sprite information for various game mechanics.
\begin{itemize}
\item \verb|List<ImageResource> heart|\\
These are the hearts at the top-left of the screen which represent the number of lives the player has left.
\item \verb|List<ImageResource> gameOver|\\
This is the "Game Over" message.
\item \verb|List<ImageResource> levelFinish|\\
This is the "Level Complete" message.
\item \verb|List<ImageResource> introTitle|\\
This is the "Splash Screen", displayed at the start of the game.
\item \verb|List<ImageResource> instructions|\\
This is the "Help Screen", accessible by pressing \verb|h| while on the splash screen.
\end{itemize}
\item \verb|FontResource font|\\
This sets the correct font for instances when the text is on a dark background.
\item \verb|FontResource fontBlack|\\
This sets the correct font for instances when the text is on a light background.
\item \verb|Main game|\\
This is an instance of the \verb|Main| class, which the UI layer can interact with to determine game states and perform functions.
\end{itemize}
\subsubsection{public FroggerUI(final Main g), 59-61}
This function instantiates an instance of \verb|FroggerUI|, and stores the provided \verb|Main| instance in the \verb|game| variable.
\subsubsection{public void render(RenderingContext rc), 64-118}
This function first renders the game's time and score at the top of the screen.
If the player has more than 0 lives, it then sets a variable \verb|dx| to 0, and using that draws all of the player's hearts at the top-left of the screen, capping it at a maximum of 10 rendered hearts (the player can have more than 10 lives, but only 10 will be drawn).
The game's level is then placed at the top of the screen.
There is then essentially a large switch/case statement that draws the relevant screen depending upon the value of the \verb|game| object's \verb|gameState| variable.
\subsubsection{public void update(long deltaMs), 120-121}
This function is empty, and simply overrides the superclass' \verb|update| function.
\subsubsection{public boolean isActive(), 123-125}
This function always returns \verb|true|; the layer is always active.
\subsubsection{public void setActivation(boolean a), 127-129}
This function does nothing, and similarly to \verb|update| only appears to override the superclass' version of it.
This is because the layer is always active, and consequently there is no need to set its activation.
\subsection{Goal.java}
This class has two variables associated with it:
\begin{itemize}
\item \verb|public boolean isReached|\\
Set to \verb|false| initially, this is set to \verb|true| if the Frogger reaches it.
\item \verb|public boolean isBonus|\\
Set to \verb|false| initially, this is set to \verb|true| if the goal is set as a bonus.
\end{itemize}
\subsubsection{public Goal(int loc) \& public Goal(Vector2D pos), 34-40 \& 42-48}
These two functions are overloaded constructors, hence they can be described in one section.
First they set the sprite to the goal one, and then they set their position.
The \verb|int| instance sets the position to the top of the screen and some number of player "steps" either side of the middle of the screen.
The \verb|Vector2D| instance sets the position to the exact one described by the argument.
In both cases the details of the \verb|Goal| being constructed are added to the \verb|collisionObjects| variable, the position is synchronised within the global \verb|collisionObjects| list, and the correct sprite frame is selected.
\subsubsection{public void reached(), 50-53}
This function sets \verb|isReached| to \verb|true|, and changes the frame of the \verb|Goal|'s sprite to one denoting success.
\subsubsection{public void setBonus(boolean b), 55-63}
If the \verb|boolean| argument is \verb|true|, the \verb|Goal| is set to be a bonus, conferring more rewards for reaching it, and changing its sprite to denote a bonus.
If the argument is \verb|false|, it unsets the bonus flag, and returns the sprite to a normal \verb|Goal| one.
\subsubsection{public void update(long deltaMs), 65-67}
This is an empty function whose purpose is solely to override the superclass' instance.
\subsection{GoalManager.java}
This class deals with generating and keeping track of all \verb|Goal|s in the game.
It has 8 variables in the class:
\begin{itemize}
\item \verb|final static int MAX_NUM_OF_GOALS|\\
Initialised to 6, this variable stores the maximum number of goals a single level can have.
\item \verb|private List<Goal> goals|\\
This keeps track of all of the \verb|Goal|s in the level.
\item \verb|private Random r|\\
This is a random number generator, used when setting \verb|Goal|s to bonus and back.
\item \verb|protected boolean showingBonus|\\
Initialised to \verb|false|, this sets whether or not there is a bonus \verb|Goal| on the screen at all.
\item \verb|private int bonusRateMs|\\
Initialised to 5000, this variable sets how long there should be between the presence of bonus \verb|Goal|s.
\item \verb|private int bonusShowMs|\\
Initialised to 5000, this variable sets how long a \verb|Goal| should be set as a bonus for.
\item \verb|private int dRMs|\\
Initialised to 0, this variable times how long it has been between bonuses being present.
\item \verb|private int dSMs|\\
Initialised to 0, this variable times how long a bonus has been present on the screen for.
\end{itemize}
\subsubsection{public GoalManager(), 49-53}
This is the class constructor; it sets the \verb|goals| variable to be a linked list of \verb|Goal| objects (initially empty), sets \verb|r| to be a new Java \verb|Random| seeded with the current time, and calls \verb|init| with the value of the first level, which is 1.
\subsubsection{public void init(final int level), 62-85}
This function first clears the \verb|goals| variable, then a switch/case statement based on the current level sets the relevant number of \verb|Goal|s to be on screen.
There is a comment above this function which states that level 1 has 2 goals, level 2 has 4, and all others have 6; however, the code shows that in fact level 1 has 2 goals and the rest have 4.
\subsubsection{public List$<$Goal$>$ get(), 91-93}
This function returns the \verb|goals| variable.
\subsubsection{public List$<$Goal$>$ getUnreached(), 99-106}
This function is similar to \verb|get|, but it returns a new list it constructs containing only those \verb|Goal|s whose \verb|isReached| flag is \verb|false|.
\subsubsection{public void doBonusCheck(), 112-118}
This function consists of two if-statements.
The first checks if a bonus is not currently being shown and that the current time between bonuses being shown is greater than the bonus rate.
If both of those are \verb|true|, \verb|dSMs| is set to 0, \verb|showingBonus| to true, and a random unreached \verb|Goal| in \verb|goals| is set to be a bonus.
The second if-statement checks if a bonus is currently being shown and that the current time it has been shown for is greater than the show rate.
If both of those are true, \verb|dRMs| is set to 0, \verb|showingBonus| to false, and each \verb|Goal| in \verb|goals| has the bonus flag set to \verb|false|.
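Put together, a hypothetical sketch of the function (the random selection from the unreached goals follows the description above):
\begin{verbatim}
// Hypothetical sketch of doBonusCheck().
public void doBonusCheck() {
    if (!showingBonus && dRMs > bonusRateMs) {
        dSMs = 0;
        showingBonus = true;
        java.util.List<Goal> unreached = getUnreached();
        if (!unreached.isEmpty()) {           // pick a random unreached Goal
            unreached.get(r.nextInt(unreached.size())).setBonus(true);
        }
    }
    if (showingBonus && dSMs > bonusShowMs) {
        dRMs = 0;
        showingBonus = false;
        for (Goal g : goals) g.setBonus(false);
    }
}
\end{verbatim}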
\subsubsection{public void update(long deltaMs), 129-133}
This function adds \verb|deltaMs| to both \verb|dRMs| and \verb|dSMs|, and then runs \verb|doBonusCheck|.
\subsection{HeatWave.java}
This class has 7 variables associated with it:
\begin{itemize}
\item \verb|final static int PERIOD|\\
Initialised to 2000, this is the time between heatwave events.
\item \verb|final static int DURATION|\\
Initialised to 1000, this is the duration of one heatwave event.
\item \verb|Random r|\\
This is a Java \verb|Random| variable, used in determining whether or not a random event happens.
\item \verb|private long timeMs|\\
This is set to 0 initially, and is checked against the \verb|PERIOD| variable when determining whether or not a heatwave should happen.
\item \verb|private long durationMs|\\
Similar to \verb|timeMs|, this is checked against the \verb|DURATION| variable when determining whether or not a heatwave should finish.
\item \verb|private long heatWaveMs|\\
This variable tracks how long a heatwave event has been occurring for.
\item \verb|public boolean isHot|\\
Initialised to \verb|false|, this determines whether or not a heatwave event is occurring.
\end{itemize}
\subsubsection{public HeatWave(), 44-49}
This is the class constructor, and sets the values of \verb|isHot|, \verb|timeMs|, and \verb|heatWaveMs|, to \verb|true|, \verb|0|, and \verb|0| respectively.
It also instantiates \verb|r| with a new \verb|Random| object.
\subsubsection{public void perform(Frogger f, final long deltaMs, final int level), 56-69}
This function enforces the effect of the heatwave itself; namely that if a heatwave begins and the player does not move within a certain time (directly proportional to the level), the Frogger will jump in a random direction.
If the Frogger is not alive, \verb|isHot| is set to \verb|false| and the function returns.
Otherwise, if it \verb|isHot| and the duration of the heatwave is less than the defined limit, and the player has not moved in that time, a random jump occurs and \verb|isHot| is set to false.
Finally, if the player has moved, \verb|isHot| is set to false.
\subsubsection{public void start(Frogger f, final int GameLevel), 77-88}
Here it is determined whether or not a heatwave event should start; if it is not currently "hot", and the time since the last heatwave check is greater than the period between heatwaves, the check time is set to 0.
Then, if the next random integer chosen by \verb|r| is less than the game's level multiplied by 10, \verb|durationMs| is set to 1, \verb|isHot| to \verb|true|, the Frogger's \verb|hw_hasMoved| variable to \verb|false|, and a heatwave-y sound effect plays.
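A hypothetical sketch of this gating logic follows; the bound of the random roll (here a percentage) is an assumption, as the text only states the comparison against the level multiplied by 10.
\begin{verbatim}
// Hypothetical sketch of HeatWave.start().
public void start(Frogger f, final int gameLevel) {
    if (!isHot && timeMs > PERIOD) {
        timeMs = 0;                            // reset the check timer
        if (r.nextInt(100) < gameLevel * 10) { // assumed: percentage roll
            durationMs = 1;
            isHot = true;
            f.hw_hasMoved = false;             // wait for the player to react
            // playSound("heatwave");          // stubbed sound effect
        }
    }
}
\end{verbatim}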
\subsubsection{public MovingEntity genParticles(Vector2D pos), 96-107}
This generates the particles that appear around the Frogger when a heatwave event is occurring.
If it is not hot, or \verb|r| returns an integer greater than 10, it does nothing; else it returns a new \verb|MovingEntity| with a particulate sprite, which is drawn to the screen.
\subsubsection{public int computeHeatValue(final long deltaMs, final int level, int curTemp), 114-125}
This function only appears in this file, at this point; it is as yet unclear whether or not it is part of some call somewhere within the JIG, however as it stands it does nothing.
Regardless, were it to be called it would increment \verb|heatWaveMs| by the value of the \verb|deltaMs| argument.
From this, every two seconds it increases the game's temperature by 1 and resets \verb|heatWaveMs| to 0, capping the temperature at 170.
\subsubsection{public void update(final long deltaMs), 127-130}
This increases both \verb|timeMs| and \verb|durationMs| by the value of the \verb|deltaMs| argument.
\subsection{LongLog.java}
This file contains only the constructor for the \verb|LongLog| class.
This sets up what is effectively 4 \verb|MovingEntities| joined together, and gives them a velocity described by the arguments passed in.
\subsection{Main.java}
The \verb|Main| class is the controller class for the entire game.
All of the functions herein are of type \verb|void|, given that they affect screen events or global/class variables.
In its declaration it creates an instance of its superclass, namely \verb|StaticScreenGame| (a class declared within the JIG engine), with the desired height and width of the game screen, and preferring to be left in a window as opposed to taking up the full screen.
It then sets the window's title to "Frogger", and loads its resources from the file \verb|resources.xml|, a file containing the names of all of the audio and image files for sprites and such.
It sets up the background of the frame based on the filename referenced herein, and applies it to the window.
Following this, the function sets up the collision detection by generating two spheres.
Finally, it generates instances of all of the classes declared in other files, and calls the member function \verb|initializeLevel|, passing in the value 1 as the argument.
\subsubsection{public void initializeLevel(int level), 131-178}
This function sets up the level for the game, first clearing the screen of all moving objects then generating the 5 river lanes and 5 road lanes.
Within this it sets the general speeds for objects within these lanes based on the variable \verb|dV|, which itself is proportional to the level the player has reached.
Once this is done, it generates the objective, called the Goal, and cycles the traffic generator 500 times to generate some cars before the game is launched.
\subsubsection{public void cycleTraffic(long deltaMs), 186-229}
\verb|cycleTraffic|, per the name, is responsible for generating and cycling traffic elements across the road lanes, and is between lines 186 and 229.
The road lanes all use the same factory method, \verb|buildVehicle|, which implies that any vehicle has an equal chance of appearing on any lane.
The river lanes, by comparison, alternate with regard to what can populate them.
All 5 lanes can have logs, however lanes 1, 3, and 5 will have short logs interspersed with turtles (generated by \verb|buildShortLogWithTurtles|), whereas lanes 2 and 4 will have longer logs interspersed with crocodiles (generated by \verb|buildLongLogWithCrocodile|).
If the wind or heatwave events are in effect, it will draw the relevant particle effects for them.
In each case for the road/river lanes, it will update them if they are present, or create and add them to the \verb|movingObjectsLayer| if they are not.
\subsubsection{public void froggerKeyboardHandler(), 234-281}
\verb|froggerKeyboardHandler|, as the name suggests, deals with keyboard inputs while the game is in progress.
It first polls the keyboard to determine whether or not a key is being pressed, and updates a series of \verb|boolean|s based on whether or not the keys associated with them have been pressed.
Before dealing with the results of those booleans it determines whether or not the user is cheating.
Here, cheating means activating/deactivating cheat mode by pressing \verb|c| or \verb|v| respectively, or skipping to level 10 by pressing the \verb|0| key.
Next it updates the \verb|keyPressed| and \verb|keyReleased| variables depending on whether or not a key has been pressed.
The boolean variable \verb|listenInput|, declared at the top of the class file (line 92), determines whether or not the game acts upon the user's input; this means that the user cannot hold down a key to make the frog move continuously, but rather that if the user wishes to move 4 squares to the right they must press the right key 4 times.
If the keys have been released \verb|listenInput| is set to \verb|true|, so that the game can listen for another key press, and \verb|keyPressed| is set to false.
Finally, if the user has pressed the escape key the game is set back to the intro state (the variable \verb|GameState| is set to 0).
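The \verb|listenInput| latch described above can be summarised in the following hypothetical sketch; the direction \verb|boolean|s are illustrative names.
\begin{verbatim}
// Hypothetical sketch of the one-hop-per-keypress latch.
if (keyPressed && listenInput) {
    listenInput = false;                      // ignore the key until released
    if (upPressed)    frog.moveUp();
    if (downPressed)  frog.moveDown();
    if (leftPressed)  frog.moveLeft();
    if (rightPressed) frog.moveRight();
}
if (keyReleased) {
    listenInput = true;                       // ready for the next press
    keyPressed = false;
}
\end{verbatim}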
\subsubsection{public void menuKeyboardHandler(), 286-317}
There is a separate handler for keyboard events while the game is in the menu state, called \verb|menuKeyboardHandler|.
Again it begins by polling the keyboard, and if the space key is not pressed it sets the class boolean \verb|space_has_been_released| to \verb|true|; if that variable is \verb|false| it will \verb|return|, and finish the function there.
If the space bar is being pressed it has a number of actions to take based upon the game's state.
If the game is over, or the user is looking at the instructions, it will return them to the game's intro state and allow them to start again.
Otherwise it will start the game again, resetting the number of lives, score, level, timer, frog position, game state, music playing, and starting the level via \verb|initializeLevel|.
If the user presses \verb|h| in the menu they will receive instructions about how the game works.
\subsubsection{public void finishLevelKeyboardHandler(), 322-329}
There is then a function to deal with keyboard inputs on the finish of a level, \verb|finishLevelKeyboardHandler|.
If the space key is pressed it will initialise a new level, one greater than the previous one, and return to the game state.
This function seems out of place, as in my gameplay testing the game automatically increments the level without the user pressing space.
\subsubsection{public void update(long deltaMs), 335-388}
\verb|update| is, I believe, run on every cycle of the game.
It is essentially one giant \verb|switch/case| statement, with cases for each game state.
In the case that the game is being played it will take the keyboard input and deal with it as described in \verb|froggerKeyboardHandler|, update the wind, heatwave, frog, audio, ui, and traffic with the current game time, and test for collisions, before applying wind or heatwave effects where appropriate.
If the player has lost a life it clears the particle layer (which is where the particle effects for wind and heat are drawn).
Otherwise, if the goal has been reached it will move to the level finish state, play a sound effect, and clear the particle layer.
If the player has run out of lives it will move to the game over state and the game will start again.
In the case that the game is over, the player is reading the instructions, or the game is on the intro menu, it will update the goal manager and cycle the traffic with the time since the game has started, and await user input.
In the case that the level has finished it will call the \verb|finishLevelKeyboardHandler| and it will perform as described above.
\subsubsection{public void render(RenderingContext rc), 394-421}
This function is responsible for rendering the contents of the screen.
Again, it is essentially one large \verb|switch/case| statement, doing different things depending upon the state of the game.
If the game is in progress (i.e. in states \verb|GAME_FINISH_LEVEL| or \verb|GAME_PLAY|), it renders the map of the game as well as the many moving objects that make up the obstacles and player sprite, and any particle effects created by the environmental effects.
If the game is in a pause state (i.e. \verb|GAME_OVER|, \verb|GAME_INSTRUCTIONS|, or \verb|GAME_INTRO|) it renders everything as before with the exception of the Frogger and particle effects.
\subsubsection{public static void main(String[] args), 423-427}
This is the function that is called on the \verb|.jar| file being run.
It creates a new \verb|Main| object, and calls \verb|run| on it.
\subsection{MovingEntity.java}
This class is the superclass for all objects on the screen that move, specifically the player, cars, and river obstacles.
It extends the \verb|Body| class from the JIG with a linked list of \verb|CollisionObject|s.
This linked list contains the entity's collision spheres (its "hitboxes"), for use in determining collisions.
\subsubsection{public void sync(Vector2D position), 64-72}
\verb|sync| updates the list of \verb|CollisionObject|s within the \verb|MovingEntity| object with the new position of the \verb|MovingEntity|.
\subsubsection{public void update(final long deltaMs), 82-90}
\verb|update| first checks if the position of the \verb|MovingEntity| is outside the bounds of the world.
Here it does this only for the X-axis, because the majority of objects only move on said axis, and the \verb|Frogger| class has an overriding implementation.
It then updates the position of the \verb|MovingEntity| based on its old position and current velocity, after which it calls \verb|sync| with that new position.
\subsection{MovingEntityFactory.java}
This class deals with the creation of all \verb|MovingEntities| in the game.
It has 12 class variables:
\begin{itemize}
\item The following constants denote the subclass of \verb|MovingEntity| to be created, and are used in conjunction with the \verb|creationRate| variable described below.
\begin{itemize}
\item \verb|public static int CAR|\\
Set to \verb|0|, this denotes the \verb|Car| class to be created.
\item \verb|public static int TRUCK|\\
Set to \verb|1|, this denotes the \verb|Truck| class to be created.
\item \verb|public static int SLOG|\\
Set to \verb|2|, this denotes the \verb|ShortLog| class to be created.
\item \verb|public static int LLOG|\\
Set to \verb|3|, this denotes the \verb|LongLog| class to be created.
\end{itemize}
\item \verb|public Vector2D position|\\
This determines the position of the \verb|Factory|.
\item \verb|public Vector2D velocity|\\
This determines the velocity of the objects produced by the \verb|Factory|.
\item \verb|public Random r|\\
A random-number generator.
\item \verb|private long updateMs|\\
This is used when creating a new \verb|MovingEntity| to ensure it happens at the correct intervals.
\item \verb|private long copCarDelay|\\
This is used similarly to the above, however it is only used when a \verb|CopCar| is to be created.
\item \verb|private long rateMs|\\
This determines the rate at which the \verb|Factory| checks whether or not to spawn a new \verb|MovingEntity|.
\item \verb|private int padding|\\
This is the minimum distance between two \verb|MovingEntity|s in a lane.
\item \verb|private int[] creationRate|\\
This is an array of integers that determine the rate at which \verb|MovingEntities| should be created depending on what type they are.
\end{itemize}
\subsubsection{public MovingEntityFactory(Vector2D pos, Vector2D v), 59-72}
The constructor, this sets the \verb|position| and \verb|velocity| variables to the values passed in as arguments.
It then sets up \verb|r| with a new random number generator, and gives values to all items in the \verb|creationRate| variable based on their lengths and the \verb|Factory|'s \verb|velocity| variable.
\subsubsection{public MovingEntity buildBasicObject(int type, int chance), 81-105}
This is the basic factory method created for each lane.
If the \verb|updateMs| variable is greater than the \verb|rateMs| variable, it has a chance described by the \verb|chance| argument to create a new \verb|MovingEntity| of type described by the \verb|type| argument, with the \verb|position| and \verb|velocity| variables set to those of the \verb|Factory|.
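A hypothetical sketch of this gate follows; the interval reset, the form of the random roll, and the subclass constructor signatures are all assumptions.
\begin{verbatim}
// Hypothetical sketch of buildBasicObject().
public MovingEntity buildBasicObject(int type, int chance) {
    if (updateMs > rateMs) {
        updateMs = 0;                          // assumed: restart the interval
        if (r.nextInt(100) < chance) {         // assumed: percentage roll
            if (type == CAR)   return new Car(position, velocity);
            if (type == TRUCK) return new Truck(position, velocity);
            if (type == SLOG)  return new ShortLog(position, velocity);
            if (type == LLOG)  return new LongLog(position, velocity);
        }
    }
    return null;                               // nothing spawned this cycle
}
\end{verbatim}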
\subsubsection{public MovingEntity buildShortLogWithTurtles(int chance), 107-112}
This function is the factory function for the lanes which have a chance of building either a \verb|ShortLog| or a series of \verb|Turtles|.
The function calls \verb|buildBasicObject| with an 80\% chance of building a \verb|ShortLog|; if it does not, and the \verb|chance| argument is less than another random number under 100, it will create a new series of turtles with a random 50/50 chance of being underwater initially.
\subsubsection{public MovingEntity buildLongLogWithCrocodile(int chance), 118-123}
Similarly to \verb|buildShortLogWithTurtles|, this first attempts to build a \verb|LongLog| using \verb|buildBasicObject|; if it does not, it has a \verb|chance|\% chance to build a \verb|Crocodile|.
\subsubsection{public MovingEntity buildVehicle(), 130-147}
This function is used to create \verb|Car|s, \verb|Truck|s, and \verb|CopCar|s.
It creates a new \verb|MovingEntity m|, with an 80\% chance of being a \verb|Car| and a 20\% chance of being a \verb|Truck|; if that \verb|MovingEntity| is created successfully, and the absolute value of its velocity (multiplied by the \verb|copCarDelay| variable) is less than the width of the world, that \verb|MovingEntity| will be replaced with a \verb|CopCar|.
Either way, the \verb|copCarDelay| variable is reset to 0, and if a \verb|CopCar| isn't spawned, \verb|m| is returned.
\subsubsection{public void update(final long deltaMs), 149-152}
This function increments \verb|updateMs| and \verb|copCarDelay| by the value of \verb|deltaMs| passed in by the argument.
\subsection{Particle.java}
This class extends \verb|MovingEntity|, and has 2 variables associated with it:
\begin{itemize}
\item \verb|private int timeExpire|\\
Initialised to 1, this is the time before a \verb|Particle| should expire.
\item \verb|private int timeAlive|\\
Initialised to 1, this is the time for which a \verb|Particle| has been alive.
\end{itemize}
\subsubsection{public Particle(String sprite, Vector2D pos, Vector2D v, int te), 58-64}
The class constructor, this creates a new \verb|MovingEntity| with a sprite, position, and velocity described by the \verb|sprite|, \verb|pos|, and \verb|v| arguments respectively.
The \verb|te| argument is passed into the \verb|timeExpire| variable, and the whole object is activated by a \verb|setActivation(true)| call.
\subsubsection{public void update(final long deltaMs), 66-75}
This updates the whole object based on the \verb|MovingEntity|'s \verb|update| function, with the added step of checking whether or not the \verb|Particle| should have expired by now; this is done by adding the \verb|deltaMs| argument to the \verb|timeAlive| variable, and checking that against \verb|timeExpire|.
If \verb|timeAlive| is greater than \verb|timeExpire|, the object is deactivated via a \verb|setActivation(false)| call.
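Since the whole lifecycle fits in a few lines, a hypothetical sketch of the method:
\begin{verbatim}
// Hypothetical sketch of Particle.update().
public void update(final long deltaMs) {
    super.update(deltaMs);                    // usual MovingEntity movement
    timeAlive += deltaMs;
    if (timeAlive > timeExpire) {
        setActivation(false);                 // the particle has expired
    }
}
\end{verbatim}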
\subsection{ShortLog.java}
This class is an extension of the \verb|MovingEntity| class, and has only one additional variable, \verb|public static int LENGTH|, which is initialised to 32*3 (96).
\subsubsection{public ShortLog(Vector2D pos, Vector2D v), 33-47}
This is the class constructor, which sets the sprite via a \verb|super| call.
It then sets the position and velocity based on the \verb|pos| and \verb|v| arguments respectively, and adds the collision spheres as offsets from the \verb|position| variable.
If the x-value of \verb|v| is less than 0 the frame is set to 1, otherwise it's set to 0.
\subsection{Turtles.java}
This class extends \verb|MovingEntity|, and has the following variables associated with it:
\begin{itemize}
\item \verb|private long underwaterTime|\\
Initialised to 0, this is the amount of time the \verb|Turtles| have been underwater for.
\item \verb|private long underwaterPeriod|\\
Initialised to 1200, this is the time for which the \verb|Turtles| should be underwater.
\item \verb|protected boolean isUnderwater|\\
Initialised to \verb|false|, this determines whether or not the \verb|Turtles| are underwater.
\item \verb|private boolean isAnimating|\\
Initialised to \verb|false|, this determines whether or not the \verb|Turtles| are animating.
\item \verb|private long localDeltaMs|\\
This stores the most recent time delta passed to \verb|update|, for use in \verb|checkAirTime|.
\item \verb|private long startAnimatingMs|\\
This is checked against \verb|timerMs| in \verb|animate| to determine when the next animation step should occur.
\item \verb|private long timerMs|\\
A running timer, incremented by \verb|localDeltaMs| on each update.
\item \verb|private long animatingPeriod|\\
Initialised to 150, this is the number of milliseconds one animation cycle should take.
\item \verb|private int aFrame|\\
Initialised to 0, this is the current animation frame.
\item \verb|private int max_aFrame|\\
Initialised to 2, this is the maximum number of animation frames allowed.
\end{itemize}
\subsubsection{public Turtles(Vector2D pos, Vector2D v, int water), 76-89}
The class constructor, this sets the sprite via a \verb|super()| call, and calls \verb|init| with the \verb|pos| and \verb|v| arguments passed through.
If the \verb|water| argument is 0, \verb|isUnderwater| is set to \verb|false|, else it is set to \verb|true|, and the frame is incremented by 2.
\subsubsection{public void init(Vector2D pos, Vector2D v), 99-114}
The \verb|position| variable is set to the \verb|pos| argument, and 2 additional collision spheres are created as offsets.
The \verb|velocity| variable is set to the value of the \verb|v| argument, and the correct sprite frame is set based on the direction the \verb|Turtles| are travelling.
\subsubsection{public void checkAirTime(), 120-126}
This first increments the \verb|underwaterTime| variable by the value of \verb|localDeltaMs|, and if it becomes greater than the value of \verb|underwaterPeriod| it goes into the other state (underwater to floating or vice-versa).
\subsubsection{public void animate(), 133-152}
If the \verb|isAnimating| flag is set to \verb|false|, this function does nothing.
Beyond that, if the \verb|timerMs| variable is greater than the \verb|startAnimatingMs| variable, it will animate one step of either returning to the surface or submerging.
If the \verb|aFrame| variable is greater than or equal to the \verb|max_aFrame| variable, it stops the animation.
\subsubsection{public void startAnimation(), 159-164}
This sets up the animation by setting \verb|isAnimating| to \verb|true|, and \verb|startAnimatingMs|, \verb|timerMs|, and \verb|aFrame| to 0.
\subsubsection{public void update(final long delta), 166-172}
This first performs the superclass' \verb|update| function, then sets \verb|localDeltaMs| to the \verb|delta| argument, which it then uses to increment the \verb|timerMs| variable.
It then performs \verb|checkAirTime()| and \verb|animate()|.
\subsection{WindGust.java}
This class has 6 variables associated with it:
\begin{itemize}
\item \verb|final static int PERIOD|\\
Initialised to 5000, this is the period between wind gust event checks in milliseconds.
\item \verb|final static int DURATION|\\
Initialised to 3000, this is the duration of wind gusts in milliseconds.
\item \verb|Random r|\\
This is a random number generator used in determining whether or not a gust actually occurs.
\item \verb|private long timeMs|\\
This tracks the time since the last wind gust check, in milliseconds.
\item \verb|private long durationMs|\\
This is used to check how long a wind gust has been occurring for.
\item \verb|private boolean isWindy|\\
A flag denoting whether or not a wind gust event is currently occurring.
\end{itemize}
\subsubsection{public WindGust(), 49-43}
This is the constructor, which initialises \verb|timeMs| to 0, \verb|isWindy| to \verb|false|, and initialises \verb|r| with a new random number generator.
\subsubsection{public void perform(Frogger f, int level, final long deltaMs), 60-72}
If the Frogger is not alive, this function sets \verb|isWindy| to \verb|false| and \verb|return|s.
Otherwise, \verb|isWindy| and \verb|durationMs| are checked, the latter against being smaller than \verb|DURATION|; if both of those are true, it repositions the Frogger randomly via \verb|windReposition|.
Otherwise it sets \verb|isWindy| to \verb|false|.
\subsubsection{public void start(final int level), 79-92}
This determines whether a new wind gust event should begin; if it isn't currently windy and the value of \verb|timeMs| is greater than the \verb|PERIOD| variable, it has a chance of starting a new wind event, setting \verb|durationMs| to 1, \verb|isWindy| to \verb|true|, and playing a sound effect.
Either way it resets \verb|timeMs| to 0.
\subsubsection{public MovingEntity genParticles(final int level), 100-115}
This function, similar to other \verb|genParticles| functions, generates the particle overlay that appears when a wind event occurs.
\subsubsection{public void update(final long deltaMs), 117-120}
This increments \verb|timeMs| and \verb|durationMs| by the value of the \verb|deltaMs| argument.
\begin{thebibliography}{0}
\bibitem{jig-tutorial}
Wallace, Scott A. and Nierman, Andrew\\
Using the Java Instructional Game Engine in the Classroom\\
December 2007\\
Available at http://dl.acm.org/citation.cfm?id
Accessed 2018-10-26
\end{thebibliography}
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% University/School Laboratory Report
% LaTeX Template
% Version 3.1 (25/3/14)
%
% This template has been downloaded from:
% http://www.LaTeXTemplates.com
%
% Original author:
% Linux and Unix Users Group at Virginia Tech Wiki
% (https://vtluug.org/wiki/Example_LaTeX_chem_lab_report)
%
% License:
% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%----------------------------------------------------------------------------------------
% PACKAGES AND DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------
\documentclass{article}
\usepackage[version=3]{mhchem} % Package for chemical equation typesetting
\usepackage{siunitx} % Provides the \SI{}{} and \si{} command for typesetting SI units
\usepackage{graphicx} % Required for the inclusion of images
\usepackage{natbib} % Required to change bibliography style to APA
\usepackage{amsmath} % Required for some math elements
\usepackage[margin=1in]{geometry}
\usepackage{hyperref}
\usepackage[british]{babel}
\selectlanguage{british}
\setlength\parindent{0pt} % Removes all indentation from paragraphs
\renewcommand{\labelenumi}{\alph{enumi}.} % Make numbering in the enumerate environment by letter rather than number (e.g. section 6)
%\usepackage{times} % Uncomment to use the Times New Roman font
%----------------------------------------------------------------------------------------
% DOCUMENT INFORMATION
%----------------------------------------------------------------------------------------
\title{Computer Vision - Lab 2 \\ Camera Calibration} % Title
\author{Davide Dravindran Pistilli} % Author name
\date{\today} % Date for the report
\begin{document}
\maketitle % Insert the title, author and date
\section{Introduction}
This lab session was performed to calibrate a camera from a given set of checkerboard images, and to remove distortion from a test image using the previously computed calibration data.
\section{Procedure}
The first step, after loading the calibration dataset, is to find all corners in each image. This is done in parallel on 4 separate threads, since each image is independent from the others. Furthermore, after locating all corners, their coordinates are refined through the use of the function \texttt{cv::cornerSubPix}.
When all images are correctly processed, the actual calibration data are computed through the function \texttt{cv::calibrateCamera}.
The program was tested with two different input configurations: \textbf{Test A} only contained the first 12 images from the dataset, while \textbf{Test B} contained all 57 of them.
This was done to get an idea about the impact of a much larger dataset on camera calibration.
\section{Results}
\textbf{Test A} provided the parameters displayed in \textit{Figure \ref{img_data_12}} and was completed in around $1.5s$.
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{images/data_12}
\caption{\footnotesize{calibration data with 12 input images.}}
\label{img_data_12}
\end{center}
\end{figure}
\textbf{Test B} provided the parameters displayed in \textit{Figure \ref{img_data_57}} and was completed in around $22s$.
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{images/data_57}
\caption{\footnotesize{calibration data with 57 input images.}}
\label{img_data_57}
\end{center}
\end{figure}
The results from distortion removal can be seen in \textit{Figures \ref{img_result_12}} and \textit{\ref{img_result_57}}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{images/result_12}
\caption{\footnotesize{distortion removal after calibration with 12 images.}}
\label{img_result_12}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{images/result_57}
\caption{\footnotesize{distortion removal after calibration with 57 images.}}
\label{img_result_57}
\end{center}
\end{figure}
The output images are almost identical, but if we look at the data, we notice that calibrating with too many images actually has a negative impact on the reprojection error. Furthermore, \textbf{Test B} was almost $15$ times slower than \textbf{Test A}.
This means that the optimal amount of images for camera calibration has to be chosen carefully in order to avoid overfitting.
\end{document}
\subsubsection{Forward Rate Agreement}
A forward rate agreement (trade type \emph{ForwardRateAgreement}) is set up using a \\
{\tt ForwardRateAgreementData} block as shown in listing \ref{lst:ForwardRateAgreementdata}. The forward rate agreement specific elements
are:
\begin{itemize}
\item StartDate: A FRA expires/settles on the startDate. \\
Allowable values: See \lstinline!Date! in Table \ref{tab:allow_stand_data}.
\item EndDate: EndDate is the date when the forward loan or deposit ends. It follows that (EndDate - StartDate) is the tenor/term of the underlying loan or deposit. \\
Allowable values: See \lstinline!Date! in Table \ref{tab:allow_stand_data}.
\item Currency: The currency of the FRA notional. \\
Allowable values: See \lstinline!Currency! in Table \ref{tab:allow_stand_data}.
\item Index: The name of the interest rate index the FRA is benchmarked against. \\
Allowable values: An alphanumeric string of the form CCY-INDEX-TERM. CCY, INDEX and TERM must be separated by dashes (-). CCY and INDEX must be among the supported currency and index combinations. TERM must be an integer followed by D, W,
M or Y. See Table \ref{tab:indices}.
\item LongShort: Specifies whether the FRA position is long (one receives the agreed rate) or short (one pays the agreed rate). \\
Allowable values: \emph{Long}, \emph{Short}.
\item Strike: The agreed forward interest rate. \\
Allowable values: Any real number. The strike rate is
expressed in decimal form, e.g. 0.05 is a rate of 5\%.
\item Notional: No accretion or amortisation, just a constant notional. \\
Allowable values: Any positive real number.
\end{itemize}
\begin{listing}[H]
%\hrule\medskip
\begin{minted}[fontsize=\footnotesize]{xml}
<ForwardRateAgreementData>
<StartDate>20161028</StartDate>
<EndDate>20351028</EndDate>
<Currency>EUR</Currency>
<Index>EUR-EURIBOR-6M</Index>
<LongShort>Long</LongShort>
<Strike>0.001</Strike>
<Notional>1000000000</Notional>
</ForwardRateAgreementData>
\end{minted}
\caption{Forward Rate Agreement Data}
\label{lst:ForwardRateAgreementdata}
\end{listing}
%*******************************************************************************
%*********************************** First Chapter *****************************
%*******************************************************************************
%!TEX root = 0.main.tex
\section{Graph Spherical Convolutions}\label{sec:Graph Spherical Convolutions}
In this chapter we first introduce DeepSphere \cite{DeepSphere}, an example of Graph Spherical Convolutional Neural Network that uses a modified version of the HKGL to perform graph convolutions and hierarchical pooling to achieve rotation equivariance, and computational efficiency. In section \ref{sec:Chapter2:pointwise convergence of the Heat Kernel Graph Laplacian on the Sphere} we prove a pointwise convergence result of the full HKGL in the case of the sphere using a regular deterministic sampling scheme. In section \ref{sec:Chapter2:How to build a good graph} we show a way of modifying the original graph of DeepSphere to improve its spectral convergence to $\Delta_{\mathbb S^2}$ while managing to contain the computational costs of graph convolutions.
\subsection{Graph Spherical Convolutional Neural Networks} \label{sec:Chapter1:DeepSphere}
Perraudin et al. \cite{DeepSphere} have proposed a Spherical CNN to process and analyze spherical sky maps, such as the Cosmic Radiation map in figure \ref{fig:cosmicradiation}. Sky images are modeled as signals on the vertices of a \textit{sparse} graph $G$ on the vertex set $V$ of the image pixels $(v_i)$, with weights
$$
w_{ij}=\exp \left(-\frac{\norm {v_i-v_j}^2}{4t}\right)
$$
where the kernel width $t$ is a parameter to optimize. In an earlier work, Belkin et al. \cite{NIPS2006_2989} proved the convergence of the eigenvectors of the graph Laplacian of the \textit{full} graph $G$ to the eigenfunctions of the Laplace Beltrami operator $\Delta_{\mathbb S^2}$, when the data is sampled from a uniform distribution on $\mathbb S^2$ (see Section \ref{sec:Chapter1:theoretical foundations}, theorem \ref{theo:spectral convergence}). For this reason, and for the intuition we presented at the beginning of Chapter 2 when introducing the mean equivariance error $\overline E_G$, we expect the construction of Perraudin et al. to work well only for images that were sampled with equal-area sampling schemes. On such sampling schemes, since the graph Laplacian eigenvectors well approximate the spherical harmonics, the graph convolution (\ref{eq:graph convolution}) well approximates the true spherical convolution (\ref{eq:convolution}).
\begin{equation}\label{eq:approx}
\mathbf f^\intercal\mathbf v_{i(\ell, m)} \approx \int_{\eta \in \mathbb S^2}f(\eta)Y_\ell^m(\eta)d\mu(\eta)=\hat f(\ell,m)
\end{equation}
The sampling scheme most used by cosmologists and astrophysicists is called HEALPix \cite{HEALPix}, and is the one implemented in DeepSphere. HEALPix is an acronym for Hierarchical Equal Area isoLatitude Pixelization of a sphere. This sampling scheme produces a subdivision of the sphere in which each pixel covers the same surface area as every other pixel. It is parametrized by a parameter $N_{side}\in\mathbb N$, and is made of $n=12N_{side}^2$ pixels. The points of this sampling lie on isolatitude rings, making it possible to implement an FFT algorithm for the discrete SHT. The minimal resolution for HEALPix is given by $N_{side}=1$ and is made of 12 pixels. Each time $N_{side}$ is doubled, each patch is divided into 4 equal area patches centered around the pixels of the new sampling (figure \ref{fig:healpix sampling}).
In chapter \ref{sec:Chapter4} of this work we'll deepen the relationship between the continuous spherical Fourier transform and the graph Fourier transform in case of non-uniform sampling measures. Perraudin et al. propose an efficient implementation of the graph convolution (\ref{eq:graph convolution}) to be implemented in each layer of their Graph Spherical Convolutional Neural Network. They propose to learn only those filters that are polynomials of degree $q$ of the eigenvalues $\lambda_i$
\begin{equation}\label{eq:deepsphere filter}
\begin{aligned}
k(\lambda) &= \sum_{j=0}^{q} \theta_{j} \lambda^{j}\\
\Omega_k \mathbf f &= \mathbf{V}\left(\sum_{j=0}^{q} \theta_{j} \mathbf{\Lambda}^{j}\right) \mathbf{V}^{\top} \mathbf{f}=\sum_{j=0}^{q} \theta_{j} \mathbf{L}^{j} \mathbf{f}
\end{aligned}
\end{equation}
Learning the filter $k$ means learning the $q+1$ coefficients $\theta_j$. In this way they solve several problems at once: first, to compute the graph convolution (\ref{eq:graph convolution}) there is no need to compute the expensive eigendecomposition of $\mathbf L$; it suffices to evaluate a polynomial of the sparse matrix $\mathbf L$. Thanks to a suitable parametrization of the polynomial $\sum_{j=0}^{q} \theta_{j} \mathbf{L}^{j}$ in terms of Chebyshev polynomials, they reduce the cost of evaluating such a filter to $\mathcal O(|E|)$. Since in a NN this filtering operation has to be computed in every forward and backward step of the training phase, this gain in efficiency is crucial and led to speedups of several orders of magnitude compared to the architecture of Cohen et al. (see Chapter 5, table \ref{tab:SHREC17_class}). The filtering operation (\ref{eq:deepsphere filter}) can also be interpreted in the vertex domain as a weighted sum over the $q$-neighborhoods of each vertex. This is due to the fact that $\mathbf L$ has the same sparsity structure as the adjacency matrix of the graph, so $(\mathbf L^q)_{ij}$ can be non-zero only if the vertices $v_i, v_j$ are connected by a path of length at most $q$.
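The following is a minimal sketch of this filtering scheme based on the Chebyshev recurrence $T_j(x)=2xT_{j-1}(x)-T_{j-2}(x)$ (our own illustration, not the DeepSphere code), assuming the Laplacian has been rescaled so that its spectrum lies in $[-1,1]$ and that the $q+1$ learned coefficients are expressed in the Chebyshev basis:
\begin{verbatim}
# Lres: rescaled (sparse) Laplacian, f: signal, theta: q+1 coefficients.
function chebyshev_filter(Lres, f, theta)
    T0, T1 = f, Lres * f
    out = theta[1] * T0
    length(theta) > 1 && (out = out + theta[2] * T1)
    for j in 3:length(theta)
        T0, T1 = T1, 2 * (Lres * T1) - T0   # one sparse mat-vec per degree
        out = out + theta[j] * T1
    end
    return out
end
\end{verbatim}
Each iteration costs one sparse matrix-vector product, which is where the $\mathcal O(|E|)$ complexity comes from.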
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figs/chapter1/healpix.jpg}
\caption{\label{fig:healpix sampling}HEALPix sampling for $N_{side}=1,2,3,4$ \cite{HEALPix}}
\end{figure}
Although theorem \ref{theo:spectral convergence} \cite{Belkin:2005:TTF:2138147.2138189} states the spectral convergence of the \textit{full} HKGL to $\Delta_{\mathbb S^2}$, the \textit{sparse} version of the HKGL of Perraudin et al. does not seem to show such convergence. In figure \ref{fig:Old spectrum1} we see the correspondence between the subspaces spanned by the graph Fourier modes of the graph Laplacian used by Perraudin et al. and the true spherical harmonics. The figure also shows the plot of the graph eigenvalues: they clearly come in groups of $(2\ell+1)$ eigenvalues corresponding to each $\ell$th degree of the spherical harmonics. We thus call $\mathbf v_{i(\ell, m)}$ the $i$th graph eigenmode corresponding to the one of degree $\ell$ and order $m$. We compute the normalized Discrete Spherical Harmonic Transform (DSHT) of each $\mathbf v_{i(\ell, m)}$ up to the degree $\ell_\text{max}$. The entry $(\ell, \kappa)$ of the matrix represented in the figure corresponds to the percentage of energy of the $\ell$th eigenspace $V_\ell = \text{span}\{\mathbf v_{i(\ell, -\ell)}, \mathbf v_{i(\ell, -\ell+1)},...,\mathbf v_{i(\ell, \ell)}\}$ contained in the $\kappa$th eigenspace of the true spherical harmonics.
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\linewidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/deepsphere_original.png}
\includegraphics[width=1\linewidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/deepsphere_original_diagonal.png}
\includegraphics[width=1\linewidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/05_figs/old_results3.png}
\end{center}
\caption{\label{fig:Old spectrum1}Alignment of the eigenvectors of the graph Laplacian of the DeepSphere graph, the starting point of this work. In the middle we plot the diagonals of the matrices on top, and on the bottom we plot its spectrum for $N_{side}=16$.}
\end{figure}
In a perfect situation this matrix would be the identity, with all the energy of the $\ell$th graph eigenspace contained in the corresponding one spanned by the true spherical harmonics. It can be seen that the eigenmodes of the graph Laplacian span almost the same subspaces as the spherical harmonics at low frequencies, but this alignment degrades at higher frequencies. Furthermore, even as the resolution of the graph improves, the low-frequency eigenspaces do not become better aligned.
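For reference, here is a sketch of how such an alignment matrix can be computed (our own illustration; the matrix $V$ of graph eigenvectors, grouped by degree as described above, and the matrix $Y$ of sampled, orthonormalized spherical harmonics in the order $(0,0), (1,-1), (1,0), (1,1), \dots$ are assumed to be given):
\begin{verbatim}
# Energy of the graph eigenspace of "degree" l across harmonic degrees k.
function alignment_matrix(V, Y, lmax)
    C = Y' * V                      # DSHT coefficients of every eigenvector
    A = zeros(lmax + 1, lmax + 1)
    for l in 0:lmax, k in 0:lmax
        cols = (l^2 + 1):(l + 1)^2  # eigenvectors grouped with degree l
        rows = (k^2 + 1):(k + 1)^2  # harmonics of degree k
        A[l + 1, k + 1] = sum(abs2, C[rows, cols])
    end
    return A ./ sum(A, dims = 2)    # rows: percentages of energy
end
\end{verbatim}
The index ranges follow from the fact that degree $\kappa$ contributes $2\kappa+1$ harmonics, so the harmonics of degree $\kappa$ occupy columns $\kappa^2+1,\dots,(\kappa+1)^2$.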
\subsection{Pointwise convergence of the Heat Kernel Graph Laplacian on the Sphere}
\label{sec:Chapter2:pointwise convergence of the Heat Kernel Graph Laplacian on the Sphere}
Here we prove a pointwise convergence result for the full graph Laplacian in the case of the sphere with a deterministic sampling scheme that is regular enough. Our proof follows the ideas presented in the proof of theorem \ref{theo:Belkin pointwise convergence}.
\vspace{0.5cm}
\begin{definition}{}(Heat Kernel Graph Laplacian operator)\\
\label{def:Heat Kernel Graph Laplacian operator}
Given a sampling $\{x_i\in\mathcal M\}_{i=0}^{n-1}$ of the manifold we define the \textbf{operator} $L_n^t$ such that
$$L_n^tf(y) := \frac{1}{n} \sum_{i=0}^{n-1} e^{-\frac{\|x_i-y\|^2}{4t}}\left(f(y)-f(x_i)\right)$$
\end{definition}
\vspace{0.5cm}
Observe that the Heat Kernel Graph Laplacian operator, restricted to the sample points $x_0, ..., x_{n-1}$, acts as the usual Heat Kernel Graph Laplacian matrix $\mathbf L_n^t$ rescaled by a factor $\frac{1}{n}$:
$$L_n^tf(x_i) = \frac{1}{n} (\mathbf L_n^t\mathbf f)_i.$$
\vspace{0.5cm}
\begin{snugshade*}
\begin{theorem}{from \cite[Belkin et al.]{Belkin:2005:TTF:2138147.2138189}}\\
\label{theo:Belkin pointwise convergence}
Let $\mathcal M$ be a $k$-dimensional compact smooth manifold embedded in some Euclidean space $\mathbb R^N$, and fix $p\in\mathcal M$. Let the data points $x_1, ..., x_n$ be sampled from a uniform distribution on the manifold $\mathcal M$. Set $t_n=n^{-\frac{1}{k+2+\alpha}}$, for any $\alpha>0$, and let $f\in\mathcal C^\infty(\mathcal M)$. Then:
$$\forall \epsilon>0\quad \mathbb{P}\left[\frac{1}{t_n(4 \pi t_n)^{k/2}}\left|L_{n}^{t_n} f(p)- L^{t_n} f(p)\right|>\epsilon\right] \xrightarrow{n\to\infty} 0$$
\end{theorem}
\end{snugshade*}
\vspace{0.5cm}
This theorem states a convergence in probability of $L_n^t$ to $L^t$, which is far weaker than the spectral convergence of theorem \ref{theo:spectral convergence}. However, we want to show that a similar result still holds in the specific case where the manifold $\mathcal M$ is the 2-sphere $\mathbb S^2$ and the points $x_1, ..., x_n$ are not sampled from a random distribution on $\mathbb S^2$ but are defined by the HEALPix sampling. To understand the differences between theorem \ref{theo:Belkin pointwise convergence} and theorem \ref{theo:pointwise convergence in the healpix case} it is useful to first review the proof of theorem \ref{theo:Belkin pointwise convergence}. For this proof we will need Hoeffding's inequality, which we recall below:
(\textit{Hoeffding's inequality})\\
Let \(X_{1}, \ldots, X_{n}\) be independent identically distributed random variables such that
\(\left|X_{i}\right| \leqslant K\). Then
\begin{equation}\label{eq:Hoeffding}
\mathbb P\left\{\left|\frac{\sum_{i} X_{i}}{n}-\mathbb{E} X_{i}\right|>\epsilon\right\}<2 \exp \left(-\frac{\epsilon^{2} n}{2 K^{2}}\right)
\end{equation}
\begin{proof}[Proof of Theorem \ref{theo:Belkin pointwise convergence}]
The first step is to observe that for any fixed $t>0$, any fixed function $f$ and any fixed point $y\in\mathcal M$, the Heat Kernel Graph Laplacian $L_n^t$ is an unbiased estimator of the functional approximation of the Laplace-Beltrami operator, $L^t$. In other words, $L_n^tf(y)$ is the empirical average of $n$ i.i.d. random variables $X_i= e^{-\frac{\|x_i-y\|^2}{4t}}\left(f(y)-f(x_i)\right)$ with expected value $L^tf(y)$. Thus,
\begin{equation}
\label{eq:expected value of heat kernel grah laplacian}
\mathbb E\, L_n^tf(y) = \mathbb E \left[\frac{1}{n}\sum_{i=1}^{n} X_i\right] = \mathbb E X_1 = L^tf(y),
\end{equation}
and by the strong law of large numbers we have that
\begin{equation}
\label{eq:convergence in probability}
\lim_{n\to\infty}L_n^tf(y) = L^tf(y).
\end{equation}
The core of the work of Belkin et al. is the proof, which we will not discuss, of the following proposition.
\begin{prop} Under the same hypotheses of theorem \ref{theo:Belkin pointwise convergence}, we have the following pointwise convergence
$$\frac{1}{t}\frac{1}{(4\pi t)^{k/2}} L^tf(p) \xrightarrow{t\to 0 } \frac{1}{\text{vol}(\mathcal M)}\triangle_{\mathcal M}f(p).$$
\label{prop:3}
\end{prop}
Thanks to Proposition \ref{prop:3} and equation (\ref{eq:convergence in probability}), a straightforward application of Hoeffding's inequality with $K=\frac{1}{t}\frac{1}{(4\pi t)^{k/2}}$ together with equation (\ref{eq:expected value of heat kernel grah laplacian}) leads to
\begin{equation}
\label{eq:hoeffding applied}
\mathbb{P}\left[\frac{1}{t(4 \pi t)^{k / 2}}\left|L_{n}^{t} f(y)- L^{t} f(y)\right|>\epsilon\right] \leq 2 e^{-1 / 2 \epsilon^{2} n t(4 \pi t)^{k / 2}}
\end{equation}
We want the right hand side of equation (\ref{eq:hoeffding applied}) to go to $0$ for $n\to\infty, t\to0$ at the same time. For this to happen, we need to find a sequence $(t_n)$ such that
$$\begin{cases}
t_n\xrightarrow{n\to\infty}0\\
2 e^{-1 / 2 \epsilon^{2} n t_n(4 \pi t_n)^{k / 2}}\xrightarrow{n\to\infty}0\\
\end{cases}$$
By fixing $t_n=n^{-\frac{1}{k+2+\alpha}}$, for any $\alpha>0$, one checks that \\$-1 / 2\, \epsilon^{2} n t_n(4 \pi t_n)^{k / 2}\xrightarrow{n\to\infty}-\infty$, so the right hand side of (\ref{eq:hoeffding applied}) vanishes, thus concluding the proof.
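Indeed, with $t_n=n^{-\frac{1}{k+2+\alpha}}$,
$$ n\, t_n(4 \pi t_n)^{k / 2} = (4\pi)^{k/2}\, n\, t_n^{\frac{k+2}{2}} = (4\pi)^{k/2}\, n^{1-\frac{k+2}{2(k+2+\alpha)}} = (4\pi)^{k/2}\, n^{\frac{k+2+2\alpha}{2(k+2+\alpha)}}\xrightarrow{n\to\infty}+\infty, $$
since the exponent is strictly positive for every $\alpha>0$.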
\end{proof}
Now observe that in order to adapt this proof to the case of the sphere with an equal-area sampling scheme, two key ingredients must be modified. First, due to the deterministic nature of the sampling scheme, we need to prove that for any fixed $t>0$, any fixed function $f$ and any point $y\in\mathbb S^2$
\begin{equation}\label{eq:limit}
\left|L_n^tf(y)-L^tf(y)\right|\xrightarrow{n\to \infty} 0,
\end{equation}
without relying on the strong law of large numbers. Once such a result is proven, we need to show that
$$\left|\frac{1}{4\pi t^2}\left(L_n^tf(x) - L^tf(x)\right)\right|\xrightarrow[n\to \infty]{t\to 0}0$$
We now define some geometric quantities that we will need. Given a sampling $x_0, ..., x_{n-1}$, define $\sigma_i$ to be the patch of the surface of the sphere corresponding to the $i$th point of the sampling, $A_i$ to be its area and $d_i$ to be the radius of the smallest ball in $\mathbb R^3$ containing the $i$th patch (see Figure \ref{fig:Geometric characteristics of a patch}). Define $d^{(n)} := \max_{i=0, ..., n-1}d_i$ and $A^{(n)}=\max_{i=0, ..., n-1}A_i$.\\
Once the limit (\ref{eq:limit}) is proven, Proposition \ref{prop:3} leads to our main result:
\vspace{1cm}
\begin{snugshade*}
\begin{theorem}
For a sampling $\mathcal P = \{x_i\in\mathbb S^2\}_{i=0}^{n-1}$ of the sphere that is equal-area and such that $d^{(n)}\leq \frac{C}{\sqrt{n}}$, and for all $f: \mathbb S^2 \rightarrow \mathbb R$ Lipschitz with respect to the Euclidean distance in $\mathbb R^3$ and all $y\in\mathbb S^2$, there exists a sequence $t_n = n^\beta$, $\beta<0$, such that the rescaled Heat Kernel Graph Laplacian operator $\frac{|\mathbb S^2|}{4\pi t_n^2}L_n^{t_n}$ converges pointwise to the Laplace-Beltrami operator on the sphere $\triangle_{\mathbb S^2}$ for $n\to\infty$:
$$ \lim_{n\to\infty}\frac{|\mathbb S^2|}{4\pi t_n^2} L_n^{t_n}f(y) = \triangle_{\mathbb S^2}f(y).$$
\label{theo:pointwise convergence in the healpix case}
\end{theorem}
\end{snugshade*}
\vspace{1cm}
\begin{minipage}{.4\textwidth}
\centering
\includegraphics[width=0.8\linewidth]{figs/chapter1/d_iA_i.jpg}
\captionof{figure}{\label{fig:Geometric characteristics of a patch}Geometric characteristics of the $i$th patch}
\end{minipage}%
\hfill
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/chapter1/Heal_Base.png}
\captionof{figure}{\label{fig:HEALPix equal areas patches}HEALPix equal areas patches for $N_{side}=1$, $N_{side}=2$}
\vspace{0.5cm}
\end{minipage}
\subsubsection{Proof of the pointwise convergence of the Heat Kernel Graph Laplacian on the Sphere for an equal-area sampling scheme}
Our first goal is to prove the following Proposition:
\vspace{0.5cm}
\begin{prop}\label{prop:1}
For an equal-area sampling $\{x_i\in\mathbb S^2\}_{i=0}^{n-1}$ of the sphere, i.e., one with $A_i=A_j\ \forall i,j$, and for all $f: \mathbb S^2 \rightarrow \mathbb R$ Lipschitz with respect to the Euclidean distance $\norm\cdot$ with Lipschitz constant $\mathcal L_f$, it holds that
$$
\left| \int_{\mathbb S^2}f({ x})\text{d}{\mu(x)} - \frac{1}{n}\sum_i f( x_i)\right|\leq \mathcal L_fd^{(n)}.
$$
Furthermore, for all $y\in\mathbb S^2$ the Heat Kernel Graph Laplacian operator $L^t_n$ converges pointwise to the functional approximation of the Laplace Beltrami operator $L^t$
$$ L_n^tf(y)\xrightarrow{n\to\infty} L^tf(y).$$
\end{prop}
\vspace{0.5cm}
\begin{proof}
Let $f:\mathbb R^3\rightarrow \mathbb R$ be Lipschitz with Lipschitz constant $\mathcal L_f$. Since the sampling is equal-area, each patch has measure $\mu(\sigma_i)=\frac1n$ (with $\mu$ the normalized surface measure), and $|f(x)-f(x_i)|\leq\mathcal L_f d^{(n)}$ for every $x\in\sigma_i$, so
$$\left| \int_{\sigma_{i}}f({ x})\text{d}{\mu(x)} - \frac{1}{n}f( x_i)\right| \leq \mathcal L_fd^{(n)}\frac{1}{n}. $$
By the triangle inequality, summing the contributions of all the $n$ patches gives
$$\left| \int_{\mathbb S^2}f({ x})\text{d}{\mu(x)} - \frac{1}{n}\sum_i f( x_i)\right| \leq \sum_i \left| \int_{\sigma_{i}}f({ x})\text{d}{\mu(x)} - \frac{1}{n}f( x_i)\right|\leq n \mathcal L_fd^{(n)}\frac{1}{n} = \mathcal L_fd^{(n)}$$
Thanks to this result, we have the following two pointwise convergences for every Lipschitz $f$:
$$\forall y\in\mathbb S^2, \quad\quad \frac{1}{n}\sum_i e^{-\frac{||x_i-y||^2}{4t}}\rightarrow \int e^{-\frac{||x-y||^2}{4t}}d\mu(x)$$
$$\forall y\in\mathbb S^2, \quad\quad \frac{1}{n}\sum_i e^{-\frac{||x_i-y||^2}{4t}}f(x_i)\rightarrow \int e^{-\frac{||x-y||^2}{4t}}f(x)d\mu(x)$$
Recalling definitions \ref{def:Heat Kernel Graph Laplacian operator} and \ref{def:Functional approximation to the Laplace-Beltrami operator} concludes the proof.
\end{proof}
\vspace{0.5cm}
We have just proved that, \textit{keeping $t$ fixed}, $L_n^tf(x)\rightarrow L^tf(x)$. Our next goal is to prove:
\vspace{0.5cm}
\begin{prop}\label{prop:2}
Given a sufficiently regular sampling, i.e., one for which $A_i=A_j \ \forall i,j$ and $d^{(n)}\leq \frac{C}{\sqrt{n}}$, there exists a sequence $t_n = n^\beta$, $\beta<0$, such that
$$
\forall f \text{ Lipschitz, } \forall x\in\mathbb S^2 \quad \left|\frac{1}{4\pi t_n^2}\left(L_n^{t_n}f(x) - L^{t_n}f(x)\right)\right|\xrightarrow{n\to \infty}0.
$$
\end{prop}
\vspace{0.5cm}
The main result of this section, theorem \ref{theo:pointwise convergence in the healpix case}, is then an immediate consequence of Propositions \ref{prop:2} and \ref{prop:3}.
\begin{proof}[Proof of Proposition \ref{prop:2}]
We define for simplicity of notation
\begin{align*}
\phi^t(x;y) &:= e^{-\frac{||x-y||^2}{4t}}\left(f(y)-f(x)\right)\\
K^t(x;y) &:= e^{-\frac{||x-y||^2}{4t}}
\end{align*}
We start by writing the following chain of inequalities
\begin{align*}
||L_n^tf-L^tf||_\infty &= \max _{y\in \mathbb S^2} \left|L_n^tf(y)-L^tf(y)\right|\\
&= \max _{y\in \mathbb S^2} \left| \frac{1}{n} \sum_{i=0}^{n-1} \phi^t(x_i; y)- \int_{\mathbb S^2} \phi^t(x;y)d\mu(x) \right|\\
&\leq \max _{y\in \mathbb S^2} \sum_{i=0}^{n-1} \left| \frac{1}{n} \phi^t(x_i; y)- \int_{\sigma_i} \phi^t(x;y)d\mu(x) \right|\\
&\leq \max _{y\in \mathbb S^2} \left[\mathcal L_{\phi^t_y}d^{(n)} \right]
\end{align*}
where $\mathcal L_{\phi^t_y}$ is the Lipschitz constant of $x \rightarrow \phi^t(x; y)$ and the last inequality uses the per-patch estimate from the proof of Proposition \ref{prop:1}. If we assume $d^{(n)}\leq \frac{C}{\sqrt{n}}$ we have that
$$||L_n^tf-L^tf||_\infty \leq \max _{y\in \mathbb S^2} \left[ \mathcal L_{\phi^t_y} \frac{C}{\sqrt{n}} \right]$$
Let us now make explicit the dependence of $\mathcal L_{\phi^t_y}$ on $t$. Writing $\phi^t(\cdot;y)=K^t(\cdot;y)\left(f(y)-f\right)$ and using $\norm{K^t(\cdot;y)}_\infty = 1$ and $\norm{\partial_x f}_\infty = \mathcal L_f$, we get
\begin{align*}
\mathcal L_{\phi^t_y} &= ||\partial_x\phi^t(\cdot;y)||_\infty\\&
\leq ||\partial_x K^t(\cdot;y)||_\infty\,||f(y)-f||_\infty + ||K^t(\cdot;y)||_\infty\,||\partial_x f||_\infty\\&
\leq 2\,\mathcal L_{K^t_y}\, ||f||_\infty + \mathcal L_f
\end{align*}
where $\mathcal L_{K^t_y}$ is the Lipschitz constant of $x\rightarrow K^t(x;y)$. We can observe that this constant does not depend on $y$: writing $r=\norm{x-y}$,
$\mathcal L_{K^t_y} = \norm{\partial_r e^{-\frac{r^2}{4t}}}_\infty = \norm{\frac{r}{2t}e^{-\frac{r^2}{4t}}}_\infty = \left. \frac{r}{2t}e^{-\frac{r^2}{4t}}\right|_{r=\sqrt{2t}}=(2et)^{-\frac{1}{2}}\propto t ^ {-\frac{1}{2}}$
So we can continue
\begin{align*}
\max _{y\in \mathbb S^2} \left[ \mathcal L_{\phi^t_y} \frac{C}{\sqrt{n}} \right]
&\leq \frac{C}{\sqrt{n}} \left( 2(2et)^{-\frac{1}{2}} \norm{f}_\infty + \mathcal L_f \right)\\
&= \frac{2C\norm{f}_\infty}{\sqrt{n}(2et)^\frac{1}{2}} + \frac{C}{\sqrt{n}}\mathcal L_f
\end{align*}
So, rescaling by a factor $\frac{1}{4\pi t^2}$, we have
\begin{align*}
\norm{\frac{1}{4\pi t^2}\left(L_n^tf-L^tf\right)}_\infty&\leq \frac{1}{4\pi t^2}\norm{\left(L_n^tf-L^tf\right)}_\infty \\
&\leq \frac{C}{4\pi}\left[\frac{2\norm{f}_\infty}{\sqrt{2e}}\frac{1}{\sqrt{n}t^\frac{5}{2}} + \frac{\mathcal L_f}{\sqrt{n}t^2}\right]
\end{align*}
We want $\begin{cases}
t \rightarrow 0\\
n \rightarrow \infty\\
\sqrt{n}t^\frac{5}{2} \rightarrow \infty\\
\sqrt{n}t^2 \rightarrow \infty
\end{cases}$ in order for $ \frac{C}{4\pi}\left[\frac{2\norm{f}_\infty}{\sqrt{2e}}\frac{1}{\sqrt{n}t^\frac{5}{2}} + \frac{\mathcal L_f}{\sqrt{n}t^2}\right] \xrightarrow[t\to 0 ]{n\to\infty}0$.
This is true if $\begin{cases}
t(n) = n^\beta, &\beta\in(-\frac{1}{5}, 0) \\
t(n) = n^\beta, &\beta\in(-\frac{1}{4}, 0)
\end{cases} \implies t(n) = n^\beta, \quad \beta\in(-\frac{1}{5}, 0)$
Indeed
$\sqrt{n}\,t^\frac{5}{2}=n^{\frac{5}{2}\beta+\frac{1}{2}}\xrightarrow{n \to \infty} \infty$ since $\frac{5}{2}\beta+\frac{1}{2}>0 \iff \beta>-\frac{1}{5}$, and
$\sqrt{n}\,t^2=n^{2\beta+\frac{1}{2}}\xrightarrow {n \to \infty} \infty$ since $2\beta+\frac{1}{2}>0 \iff \beta>-\frac{1}{4}$.
So, for $t_n=n^\beta$ with $\beta\in(-\frac{1}{5}, 0)$ we have that
$$\begin{cases}
(t_n)\xrightarrow{n\to\infty}0\\
\norm{\frac{1}{4\pi t_n^2}L_n^{t_n}f-\frac{1}{4\pi t_n^2}L^{t_n}f}_\infty \xrightarrow{n\to\infty}0
\end{cases}$$
\end{proof}
The proof of theorem \ref{theo:pointwise convergence in the healpix case} is now immediate:
\begin{proof}[Proof of Theorem \ref{theo:pointwise convergence in the healpix case}]
Thanks to Proposition \ref{prop:2} and Proposition \ref{prop:3} we conclude that $\forall y\in\mathbb S^2 $
$$\lim_{n\to\infty}\frac{1}{4\pi t_n^2} L_n^{t_n}f(y) = \lim_{n\to\infty}\frac{1}{4\pi t_n^2} L^{t_n}f(y) = \frac{1}{|\mathbb S^2|}\triangle_{\mathbb S^2}f(y) $$
\end{proof}
The proof of this result is instructive since it shows that we need to impose some regularity conditions on the sampling. If the sampling is equal-area, meaning that all the patches $\sigma_i$ have the same area (as is the case for HEALPix, see figure \ref{fig:HEALPix equal areas patches}), then we need to impose $ d^{(n)}\leq \frac{C}{\sqrt{n}}$. If the sampling is not equal-area, meaning that in general $A_i\neq A_j$, it can be shown that we need a slightly more complex condition: $\max_{i=0,...,n-1}d_iA_i\leq Cn^{-\frac{3}{2}}$.\\
In the work of Belkin et al. \cite{Belkin:2005:TTF:2138147.2138189} the sampling is drawn from a uniform random distribution on the sphere, and their proof relies heavily on the uniformity properties of that distribution. In our case the sampling is deterministic, and the fact that on a sphere there does not exist a regular sampling with more than 12 points (the vertices of an icosahedron) is indeed an obstacle that we overcome by imposing the regularity conditions above.
To conclude, we can see that the result obtained has the same form as the result obtained in \cite{Belkin:2005:TTF:2138147.2138189}. Given the kernel width $t(n)=n^\beta$, Belkin et al. proved convergence in the random case for $\beta \in (-\frac{1}{4}, 0)$, while we proved convergence in the HEALPix case for $\beta \in (-\frac{1}{5}, 0)$. This result can be interpreted as follows: to obtain pointwise convergence, the kernel width must shrink, but \textit{not too fast} relative to the resolution of the graph. In other words, the kernel width has to be reduced, but the reduction is limited by the resolution of the graph. In the next section we will see how to choose in practice a good kernel width $t$ for a given graph resolution $n$.
\begin{remark}
Pointwise convergence is just a necessary condition for spectral convergence. Theorem \ref{theo:pointwise convergence in the healpix case} does not imply convergence of eigenvalues and eigenvectors.
\end{remark}
\subsection{How to build a good graph to approximate spherical convolutions}
\label{sec:Chapter2:How to build a good graph}
The current state of the art of rotation equivariant Graph CNNs is DeepSphere \cite{DeepSphere}. However, if we measure the alignment between the eigenspaces spanned by the eigenvectors of its graph Laplacian and the ones spanned by the spherical harmonics, we see that it does not improve as $N_{side}$ increases (figure \ref{fig:deepsphere results}). We will see that the main cause of this poor behavior of the eigenspaces is the fixed number of neighbors used in the construction of the graph. In this subsection we will see that \textit{to obtain the desired spectral convergence it is necessary to increase the number of neighbors as we decrease the kernel width} $t$. We will follow in practice what we did in proving theorem \ref{theo:pointwise convergence in the healpix case}: first we build a full graph and let the number of pixels $n$ increase while keeping the kernel width $t$ fixed. After discussing the results, we look for a sequence $(t_{N_{side}})$ that yields the expected spectral convergence. Only at the end do we make the graph sparse, to limit the computational cost of graph convolutions while keeping the eigendecomposition of the graph Laplacian as close to the spherical harmonics as possible.
\subsubsection{Full graph, $n\to\infty$}\label{sec:Chapter1: n to infty}
Here we analyze what happens to the power density spectrum of the \textit{full} Heat Kernel Graph Laplacian as $n$ goes to infinity while $t$ stays fixed. In the previous section we proved (Proposition \ref{prop:1}) that for a sufficiently regular sampling, a fixed $t$, a fixed function $f$ and a fixed point $y$,
$$L_n^tf(y)\xrightarrow{n\to\infty}L^tf(y).$$
Since HEALPix is a very regular sampling of the sphere, we expect to observe (even if we did not prove it) the corresponding spectral convergence stated in theorem \ref{theo:spectral convergence}. The results obtained are in figures \ref{fig:n to infinity1}, \ref{fig:n to infinity3}.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/n.png}
\includegraphics[width=\textwidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/n_diagonal.png}
\caption{\label{fig:n to infinity1}Alignment of the eigenvectors of the HKGL with a fixed kernel width $t$}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/n_eigenvalues.png}
\includegraphics[width=0.49\textwidth]{figs/chapter1/trueeigenvalues.png}
\caption{\label{fig:n to infinity3}Left: spectrum of the HKGL with a fixed kernel width $t$. Right: true spectrum of $\Delta_{\mathbb S^2}$}
\end{figure}
In figure \ref{fig:n to infinity1} we see two things: first, that there is a frequency threshold, located approximately at degree $\ell=15$, beyond which the graph Laplacian is completely blind; second, that below this threshold we actually see the expected convergence: the alignment improves as $n$ grows.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{figs/chapter1/frequency_threshold1.png} \hfill
\includegraphics[width=0.45\textwidth]{figs/chapter1/frequency_threshold2.png}
\caption{\label{fig:n to infinity4}Frequency threshold explained.}
\end{figure}
To explain this we refer to figure \ref{fig:n to infinity4}, where we show a simplified situation in which we sample the interval $[0, 2]$ and plot the Gaussian kernel centered at the first pixel of the sampling, corresponding to the origin. In the leftmost image the pixels are correctly spaced with respect to the kernel width, in the sense that the values of the kernel evaluated at the pixels are well separated from each other. This makes the graph able to ``see'' all the pixels differently, and thus all the frequencies with wavelength around the order of magnitude of the average pixel distance are captured by the graph. In the rightmost image of figure \ref{fig:n to infinity4} there are too many pixels with respect to the kernel width: since the slope of the kernel is almost zero near the origin, the values of the kernel evaluated at the pixels close to the origin are too close to each other (in red). Because of this, any variation of a signal on the red pixels is almost invisible to the graph Laplacian. With this fixed kernel width $t$, no matter how finely we sample the interval $[0,2]$, any frequency with wavelength shorter than the radius $r\approx0.25$ becomes invisible to the graph Laplacian.\\
This phenomenon can also be seen in the spectrum represented in figure \ref{fig:n to infinity3}, where we plot the eigenvalues of the matrix $\mathbf L_n^t$: as $N_{side}$ gets bigger the eigenvalues group more and more into clusters with the same multiplicities $(2\ell+1)$ as the corresponding spherical harmonics; however, there is a frequency (corresponding approximately to degree $\ell=15$) beyond which all the eigenspaces tend to merge into one, around the eigenvalue $1$.
\subsubsection{Full graph, $t\to 0$}
In this section we fix the parameter $N_{side}$ and make the kernel width $t$ go to $0$.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/t_sensitivity}
\caption{\label{fig:t_sensitivity_eigenspaces}Alignment of the eigenspaces of the HKGL with a fixed number of points $n$ corresponding to $N_{side}=8$}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/t_sensitivity_diagonal.png}
\caption{\label{fig:t_sensitivity_diagonal}Whole trend}
\end{figure}%
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/t_sensitivity_diagonal_2.png}
\caption{\label{fig:t_sensitivity_diagonal_2}First trend: error stays low for low frequencies, and gets lower for high frequencies}
\vspace{0.5cm}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/t_sensitivity_diagonal_1.png}
\caption{\label{fig:t_sensitivity_diagonal_1}Second trend: error gets higher for both high and low frequencies}
\end{figure}%
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{figs/chapter1/t.png}
\caption{\label{fig:weights}One row of the weight matrix for different values of the kernel width $t$, plotted on the HEALPix sampling with $N_{side}=8$. Left: for too large a $t$, the HKGL cannot capture high frequencies. Right: for too small a $t$, the graph becomes almost completely disconnected. Center: a good value of $t$ makes the HKGL able to see most frequencies.}
\end{figure}
Results are in figures \ref{fig:t_sensitivity_eigenspaces}, \ref{fig:t_sensitivity_diagonal}. Starting from $t=0.32$, the error in the high frequencies starts to decrease, while the error in the low frequencies remains low (Figure \ref{fig:t_sensitivity_diagonal_2}) down to $t=0.05$. As $t$ decreases further, down to $t=0.01$, the alignment worsens in both the high and the low frequencies (Figure \ref{fig:t_sensitivity_diagonal_1}). This behavior can be explained with the same arguments used in the previous section: large values of the kernel width correspond to a very flat kernel (figure \ref{fig:weights}, left), so the graph loses the capacity to resolve high frequencies, as discussed before. At the opposite extreme, small values of the kernel width correspond to a very peaked kernel, which drives all the weights close to zero: the graph becomes less and less connected, losing every capability of distinguishing different frequencies (figure \ref{fig:weights}, right).
\subsubsection{Putting it together: full graph, $n\to\infty$ and $t\to 0$}
A grid search was used to find the optimal kernel width for different values of $N_{side}$, always in the case of a full graph, maximizing the number of graph eigenspaces whose alignment value in figure \ref{fig:optimal graph} exceeds $80\%$. The optimal values of the kernel width $t$ are shown in figure \ref{fig:t}. In DeepSphere $t$ is set to the average of the nonzero entries of the weight matrix, where the number of neighbors of each vertex is fixed between 7 and 8. We can see that DeepSphere's heuristic produces values of the same order of magnitude as the optimal one. The optimal values of $t$ follow a trend very close to a straight line in the log-log plot, showing an approximately polynomial relationship with the parameter $N_{side}$ that could be used to extrapolate values of $t$ for higher $N_{side}$.
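The extrapolation just mentioned can be done, for instance, with a least-squares fit of $\log t = \log a + b \log N_{side}$ (a minimal sketch of our own; the data would be the grid-searched optima):
\begin{verbatim}
# Fit the power law t = a * Nside^b in log-log space.
function fit_powerlaw(Ns, ts)
    X = [ones(length(Ns)) log.(Ns)]  # design matrix: log t = log a + b log N
    loga, b = X \ log.(ts)           # least-squares solution
    return exp(loga), b
end
\end{verbatim}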
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/kernelwidth.png}
\caption{\label{fig:t}Standard deviation of the Gaussian kernel in a log-log plot. A straight line indicates a polynomial relation.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/optimal_full.png}
\includegraphics[width=\textwidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/optimal_full_diagonal.png}
\caption{\label{fig:optimal graph}Alignment of eigenspaces of the optimal full graph.}
\end{figure}
In figure \ref{fig:optimal graph} we can appreciate that for a full graph, each time we double the parameter $N_{side}$, we approximately double the degree $\ell$ at which the graph eigenvectors are correctly aligned with the spherical harmonics.
\subsubsection{Reducing the number of neighbors}
As for making the graph sparse, the intuition is the following: remember that we want our graph Laplacian to approximate the operator $L^t$, which for sufficiently small $t$ approximates $\Delta_{\mathbb S^2}$
$$\frac{1}{n}\left(\sum_i e^{-\frac{||x_i-y||^2}{4t}}(f(y)-f(x_i)) \right) \approxeq \int_{\mathbb S^2} e^{-\frac{||x-y||^2}{4t}}\left(f(y)-f(x)\right)d\mu(x) \approxeq \Delta_{\mathbb S^2} f(y)$$
So far we showed how to do this optimally with a full graph; however, a full graph leads to a matrix $\mathbf L_n^t$ that is dense, and thus to a graph filtering cost of $\mathcal O(n^2)$, worse than the usual SHT cost of $\mathcal O(n^{3/2})$. Perraudin et al. \cite{DeepSphere} constructed a nearest-neighbor graph constraining the number of neighbors of each vertex to be fixed, making the graph filtering cost linear in the number of pixels. However, as we saw at the beginning of this section, this leads to a poor alignment of the graph eigenvectors with the spherical harmonics and thus to suboptimal rotation equivariance. Here we propose a different approach, based on the following intuition: making the graph sparse means deciding which weights $w_{ij}=e^{-\frac{||x_i-x_j||^2}{4t}}$ to set to zero. For the approximation to stay accurate we want to set to zero only those weights that are small enough: let us define a new \textit{epsilon graph} $G'$ by fixing a threshold $\epsilon$ on $w_{ij}$ such that
$$w'_{ij} = \begin{cases}
e^{-\frac{||x_i-x_j||^2}{4t}}\quad& \text{if } e^{-\frac{||x_i-x_j||^2}{4t}} \geq \epsilon\\
0 \quad & \text{if } e^{-\frac{||x_i-x_j||^2}{4t}} < \epsilon
\end{cases}$$
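In code, this thresholding reads as follows (a minimal sketch; W is the dense weight matrix built above and the helper names are ours):
\begin{verbatim}
using SparseArrays, Statistics

sparsify(W, thr) = sparse(W .* (W .>= thr))        # keep only w_ij >= thr
avg_neighbours(Ws) = mean(sum(Ws .> 0, dims = 2))  # as reported in the table
\end{verbatim}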
By setting $\epsilon = 0.01$ (or equivalently, thresholding the weights at $\norm{x_i-x_j}\approx 3\sigma$, where $\sigma=\sqrt{2t}$ is the standard deviation of the kernel), we obtain the number of neighbors reported in table \ref{table:NN}, and in figure \ref{fig:optimal_thresholded} we can see the usual alignment plots of the graph $G'$:
\begin{table}[h]
\centering
\begin{tabular}{ c|c}
$N_{side}$ & Number of neighbors \\
1 & 11 \\
2 & 16 \\
4 & 37 \\
8 & 43 \\
16 & 52 \\
\end{tabular}
\caption{\label{table:NN}Number of neighbors per vertex of the thresholded graph $G'$ for increasing values of $N_{side}$.}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/optimal_thresholded.png}
\includegraphics[width=\textwidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/optimal_thresholded_diagonal.png}
\caption{\label{fig:optimal_thresholded}Optimal construction \textbf{thresholded at $\epsilon=0.01$}}
\end{figure}
We see that the number of neighbors must increase as $N_{side}$ gets bigger. Again, the intuition is the following: to obtain spectral convergence (a strong type of convergence) we need increasingly global and increasingly precise information. By fitting the relationship
$$
\text{Number of neighbors} = (N_{side})^\alpha
$$
to the data in table \ref{table:NN} we obtain that $\alpha$ is close to $1/2$, meaning that the complexity of graph filtering with $G'$ can be approximated by
$$
\mathcal O(|E|) = \mathcal O(n\sqrt{N_{side}}) = \mathcal{O}(n^{5/4}).
$$
where $n$ is the number of vertices of the graph. In the exponent, this complexity sits exactly halfway between the linear complexity of DeepSphere and the $\mathcal O(n^{3/2})$ complexity of the SCNNs of Cohen and Esteves \cite{SCNN,Esteves}. In practice the number of neighbors grows very slowly with the number of pixels, keeping graph convolutions with $G'$ very efficient and fast (see section \ref{sec:Chapter5:Experimental validation}, table \ref{table:results}).
To conclude, we show in figures \ref{fig:Old spectrum}, \ref{fig:New spectrum} a comparison between the alignment of the graph Laplacian eigenvectors of the DeepSphere graph $G$, the starting point of this work, and of the graph $G'$. It can be appreciated how the alignment plots show a much better behavior of the graph Laplacian eigenvectors of $G'$, and how the spectrum of $G'$ resembles the spectrum of $\Delta_{\mathbb S^2}$ much more accurately.\\
\begin{minipage}{.5\textwidth}
\centering
\vspace{0.4cm}
\includegraphics[width=0.95\linewidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/deepsphere_original.png}
\includegraphics[width=0.95\linewidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/deepsphere_original_diagonal.png}
\includegraphics[width=0.95\linewidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/05_figs/old_results3.png}
\captionof{figure}{\label{fig:Old spectrum}Alignment of the graph Laplacian eigenvectors of the DeepSphere graph $G$, the starting point of this work, and its spectrum.}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/optimal_thresholded.png}
\includegraphics[width=0.95\linewidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/optimal_thresholded_diagonal.png}
\includegraphics[width=0.95\linewidth]{../codes/02.HeatKernelGraphLaplacian/HEALPix/06_figures/optimal_thresholded_eigenvalues.png}
\captionof{figure}{\label{fig:New spectrum}Alignment of the graph Laplacian eigenvectors of the proposed graph $G'$, and its spectrum.}
\end{minipage}
\subsubsection{Equivariance error}
So far we used the plots of the alignment of the eigenvectors with the spherical harmonics as a proxy for the quantity we are really interested in, the mean equivariance error $\overline E_G$, because they gave us more valuable insight into what was happening. Now we want to confirm that the proposed graph $G'$ leads to a smaller mean equivariance error than the original graph $G$ of DeepSphere.
In figure \ref{fig:DeepSphere equivariance error} we plot the mean equivariance error
$$\overline E = \mathbb E_{f, g}\ E(f, g)
$$ of the diffusion filter $k(\lambda_i) = \exp(-\lambda_i)$ for both graphs $G, G'$, by spherical harmonic degree $\ell$, at different resolutions. This was obtained as the empirical average over a uniform sample of rotations $g\in SO(3)$ and a uniform sample of functions $f\in V_\ell = \text{span}\{Y_\ell^m, |m|\leq \ell\}$. We recall that for HEALPix there is no sampling theorem guaranteeing the existence of an exact reconstruction operator $T^{-1}$, so to calculate this quantity we had to rotate the sampled signal in the discrete domain, introducing non-negligible interpolation errors. We can see that DeepSphere has a mean equivariance error of almost 30\% in the low frequencies, which decreases slowly for higher ones. The graph $G'$ behaves much better: the error stays low for the small frequencies and rises for the higher ones, always remaining below 5\%. For comparison we also report the results for the full HKGL, which behaves similarly to $G'$ but with an error always below 2\%.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{../codes/06.Equivariance_error/DeepSphereonHEALPix.png}
\includegraphics[width=\textwidth]{../codes/06.Equivariance_error/OptimalHKGLonHEALPix.png}
\includegraphics[width=\textwidth]{../codes/06.Equivariance_error/FullHKGLonHEALPix.png}
\caption{\label{fig:DeepSphere equivariance error}Mean equivariance error of the diffusion filter $\exp(-\Lambda)$ for $G$, $G'$ and the full HKGL, by spherical harmonic degree. Notice the difference in the scale of the y axis for DeepSphere, which reaches errors up to 30\%.}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{../codes/06.Equivariance_error/img_example_DeepSphere/DS.png}
\caption{\label{fig:DeepSphere equivariance error in practice}DeepSphere V1 equivariance error. On the left, a signal $f$. Top right, $f$ was first rotated and then filtered through a diffusion filter $k(\lambda) = \exp (-\lambda)$. Bottom right, $f$ was first filtered and then rotated. The difference in the two outcomes is evident.}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{../codes/06.Equivariance_error/img_example_thresholdedHKGL/thresholdedHKGL.png}
\caption{\label{fig:Optimal equivariance error in practice}DeepSphere V2 equivariance error. On the left, a signal $f$. Top right, $f$ was first rotated and then filtered through a diffusion filter $k(\lambda) = \exp (-\lambda)$. Bottom right, $f$ was first filtered and then rotated. No difference in the two outcomes is visible to the human eye.}
\end{figure}
In figure \ref{fig:DeepSphere equivariance error in practice} we see the visualization of the equivariance error of DeepSphere: on the left, the original sampled signal $Tf$; top right, the rotated and then filtered signal $\Omega_k T \Lambda(g) f$; bottom right, the filtered and then rotated signal $T \Lambda(g) T^{-1} \Omega_k T f$. Figure \ref{fig:Optimal equivariance error in practice} shows the same visualization for the graph $G'$, starting from the same signal $Tf$. No difference is visible to the eye for the graph $G'$, while for DeepSphere the difference is clearly visible.
\clearpage
To conclude, we report in table \ref{tab:final results}, as a final metric for evaluating the rotation equivariance of the two graphs $G$ and $G'$, the mean equivariance error for different values of $N_{side}$, computed by sampling random coefficients $\theta_\ell^m \in(0,1)$ of linear combinations of all the spherical harmonics up to degree $\ell=16$
$$f(x) = \sum_{\ell\leq 16,\ |m|\leq\ell}\theta_\ell^m Y_\ell^m(x)
$$
and by averaging on random rotations $g\in SO(3)$. Our graph shows lower errors as $N_{side}$ grows, and a much lower error than DeepSphere.
\begin{table}
\centering
\begin{tabular}{c|ccc}
Mean equivariance error $\overline{E}$& $N_{side}=4$& $N_{side}=8$&$N_{side}=16$ \\\hline
DeepSphere graph $G$ & 12.37\% & 12.03\% & 12.23\% \\
Optimal graph $G'$ & 4.57\% & 3.98 \% & 1.54\%
\end{tabular}
\caption{\label{tab:final results}Mean equivariance error of the DeepSphere graph $G$ and of the proposed graph $G'$ for different values of $N_{side}$.}
\end{table}
\subsection{Mean value theorem for integration}
Take a function \(f(x)\) continuous on \([a,b]\). From the extreme value theorem we know that:
\(\exists m \in \mathbb{R}\ \exists M\in \mathbb{R}\ \forall x\in [a,b]\ (m\leqslant f(x)\leqslant M)\)
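Integrating these bounds over \([a,b]\) gives \(m(b-a)\leqslant \int_a^b f(x)dx\leqslant M(b-a)\). Since a continuous \(f\) attains every value between \(m\) and \(M\) (intermediate value theorem), there exists \(c\in[a,b]\) such that
\(f(c)=\frac{1}{b-a}\int_a^b f(x)dx\)
which is the statement of the mean value theorem for integration (written here under the assumption, implicit above, that \(f\) is continuous on \([a,b]\)).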
\subsection{Fundamental theorem of calculus}
From the additivity of the integral over adjacent intervals we know that:
\(\int_a^{x_1}f(x)dx+\int_{x_1}^{x_1+\delta x}f(x)dx=\int_a^{x_1+\delta x}f(x)dx\)
\(\int_{x_1}^{x_1+\delta x}f(x)dx=\int_a^{x_1+\delta x}f(x)dx-\int_a^{x_1}f(x)dx\)
\subsection{Indefinite integrals}
\sec{Picard's Iterates}
\subsection{Introduction}
\begin{mdframed}[style=boxstyle, frametitle={The Setup}]
Consider the initial value problem (IVP)
\[y' = f(t, y); \;\; y(a) = b.\]
Corresponding to this, we set up the following \emph{integral equation}
\[\phi(t) = b + \int_{a}^{t} f(s, \phi(s)) ds.\]
It can be verified that a solution $\phi$ of the integral equation is a solution of the original DE as well. (And vice-versa.)\\
\end{mdframed}
Now, we describe a method to solve the integral equation.
\newpage
\begin{mdframed}[style=boxstyle, frametitle={Picard's Iteration Method}]
We recursively define a family of functions $\phi_n(t)$ for every $n \ge 0$ as follows:
\begin{align*}
\phi_0 &\equiv b,\\
\phi_{n + 1}(t) &= b + \int_{a}^{t} f(s, \phi_n(s)) ds \quad \text{for }n \ge 0.
\end{align*}
\end{mdframed}
Under suitable conditions, the sequence of functions $\left(\phi_n\right)$ converges to a function
\[\phi(t) = \lim_{n\to \infty}\phi_n(t),\]
which is a solution to the IVP.
\exercise{%
Solve the IVP:
\[y'(t) = 2t(1 + y); \;\; y(0) = 0.\]
}
\section{201409-4}
\input{problem/2/201409-4-p.tex}
\documentclass[12pt,a4paper]{article}
\usepackage[a4paper,text={16.5cm,25.2cm},centering]{geometry}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{bm}
\usepackage{graphicx}
\usepackage{microtype}
\usepackage{hyperref}
\setlength{\parindent}{0pt}
\setlength{\parskip}{1.2ex}
\hypersetup
{ pdfauthor = { Marco Fasondini },
pdftitle={ Lecture 15: Electrostatic charges in a potential well },
colorlinks=TRUE,
linkcolor=black,
citecolor=blue,
urlcolor=blue
}
\usepackage{upquote}
\usepackage{listings}
\usepackage{xcolor}
\lstset{
basicstyle=\ttfamily\footnotesize,
upquote=true,
breaklines=true,
breakindent=0pt,
keepspaces=true,
showspaces=false,
columns=fullflexible,
showtabs=false,
showstringspaces=false,
escapeinside={(*@}{@*)},
extendedchars=true,
}
\newcommand{\HLJLt}[1]{#1}
\newcommand{\HLJLw}[1]{#1}
\newcommand{\HLJLe}[1]{#1}
\newcommand{\HLJLeB}[1]{#1}
\newcommand{\HLJLo}[1]{#1}
\newcommand{\HLJLk}[1]{\textcolor[RGB]{148,91,176}{\textbf{#1}}}
\newcommand{\HLJLkc}[1]{\textcolor[RGB]{59,151,46}{\textit{#1}}}
\newcommand{\HLJLkd}[1]{\textcolor[RGB]{214,102,97}{\textit{#1}}}
\newcommand{\HLJLkn}[1]{\textcolor[RGB]{148,91,176}{\textbf{#1}}}
\newcommand{\HLJLkp}[1]{\textcolor[RGB]{148,91,176}{\textbf{#1}}}
\newcommand{\HLJLkr}[1]{\textcolor[RGB]{148,91,176}{\textbf{#1}}}
\newcommand{\HLJLkt}[1]{\textcolor[RGB]{148,91,176}{\textbf{#1}}}
\newcommand{\HLJLn}[1]{#1}
\newcommand{\HLJLna}[1]{#1}
\newcommand{\HLJLnb}[1]{#1}
\newcommand{\HLJLnbp}[1]{#1}
\newcommand{\HLJLnc}[1]{#1}
\newcommand{\HLJLncB}[1]{#1}
\newcommand{\HLJLnd}[1]{\textcolor[RGB]{214,102,97}{#1}}
\newcommand{\HLJLne}[1]{#1}
\newcommand{\HLJLneB}[1]{#1}
\newcommand{\HLJLnf}[1]{\textcolor[RGB]{66,102,213}{#1}}
\newcommand{\HLJLnfm}[1]{\textcolor[RGB]{66,102,213}{#1}}
\newcommand{\HLJLnp}[1]{#1}
\newcommand{\HLJLnl}[1]{#1}
\newcommand{\HLJLnn}[1]{#1}
\newcommand{\HLJLno}[1]{#1}
\newcommand{\HLJLnt}[1]{#1}
\newcommand{\HLJLnv}[1]{#1}
\newcommand{\HLJLnvc}[1]{#1}
\newcommand{\HLJLnvg}[1]{#1}
\newcommand{\HLJLnvi}[1]{#1}
\newcommand{\HLJLnvm}[1]{#1}
\newcommand{\HLJLl}[1]{#1}
\newcommand{\HLJLld}[1]{\textcolor[RGB]{148,91,176}{\textit{#1}}}
\newcommand{\HLJLs}[1]{\textcolor[RGB]{201,61,57}{#1}}
\newcommand{\HLJLsa}[1]{\textcolor[RGB]{201,61,57}{#1}}
\newcommand{\HLJLsb}[1]{\textcolor[RGB]{201,61,57}{#1}}
\newcommand{\HLJLsc}[1]{\textcolor[RGB]{201,61,57}{#1}}
\newcommand{\HLJLsd}[1]{\textcolor[RGB]{201,61,57}{#1}}
\newcommand{\HLJLsdB}[1]{\textcolor[RGB]{201,61,57}{#1}}
\newcommand{\HLJLsdC}[1]{\textcolor[RGB]{201,61,57}{#1}}
\newcommand{\HLJLse}[1]{\textcolor[RGB]{59,151,46}{#1}}
\newcommand{\HLJLsh}[1]{\textcolor[RGB]{201,61,57}{#1}}
\newcommand{\HLJLsi}[1]{#1}
\newcommand{\HLJLso}[1]{\textcolor[RGB]{201,61,57}{#1}}
\newcommand{\HLJLsr}[1]{\textcolor[RGB]{201,61,57}{#1}}
\newcommand{\HLJLss}[1]{\textcolor[RGB]{201,61,57}{#1}}
\newcommand{\HLJLssB}[1]{\textcolor[RGB]{201,61,57}{#1}}
\newcommand{\HLJLnB}[1]{\textcolor[RGB]{59,151,46}{#1}}
\newcommand{\HLJLnbB}[1]{\textcolor[RGB]{59,151,46}{#1}}
\newcommand{\HLJLnfB}[1]{\textcolor[RGB]{59,151,46}{#1}}
\newcommand{\HLJLnh}[1]{\textcolor[RGB]{59,151,46}{#1}}
\newcommand{\HLJLni}[1]{\textcolor[RGB]{59,151,46}{#1}}
\newcommand{\HLJLnil}[1]{\textcolor[RGB]{59,151,46}{#1}}
\newcommand{\HLJLnoB}[1]{\textcolor[RGB]{59,151,46}{#1}}
\newcommand{\HLJLoB}[1]{\textcolor[RGB]{102,102,102}{\textbf{#1}}}
\newcommand{\HLJLow}[1]{\textcolor[RGB]{102,102,102}{\textbf{#1}}}
\newcommand{\HLJLp}[1]{#1}
\newcommand{\HLJLc}[1]{\textcolor[RGB]{153,153,119}{\textit{#1}}}
\newcommand{\HLJLch}[1]{\textcolor[RGB]{153,153,119}{\textit{#1}}}
\newcommand{\HLJLcm}[1]{\textcolor[RGB]{153,153,119}{\textit{#1}}}
\newcommand{\HLJLcp}[1]{\textcolor[RGB]{153,153,119}{\textit{#1}}}
\newcommand{\HLJLcpB}[1]{\textcolor[RGB]{153,153,119}{\textit{#1}}}
\newcommand{\HLJLcs}[1]{\textcolor[RGB]{153,153,119}{\textit{#1}}}
\newcommand{\HLJLcsB}[1]{\textcolor[RGB]{153,153,119}{\textit{#1}}}
\newcommand{\HLJLg}[1]{#1}
\newcommand{\HLJLgd}[1]{#1}
\newcommand{\HLJLge}[1]{#1}
\newcommand{\HLJLgeB}[1]{#1}
\newcommand{\HLJLgh}[1]{#1}
\newcommand{\HLJLgi}[1]{#1}
\newcommand{\HLJLgo}[1]{#1}
\newcommand{\HLJLgp}[1]{#1}
\newcommand{\HLJLgs}[1]{#1}
\newcommand{\HLJLgsB}[1]{#1}
\newcommand{\HLJLgt}[1]{#1}
\def\qqand{\qquad\hbox{and}\qquad}
\def\qqfor{\qquad\hbox{for}\qquad}
\def\qqas{\qquad\hbox{as}\qquad}
\def\half{ {1 \over 2} }
\def\D{ {\rm d} }
\def\I{ {\rm i} }
\def\E{ {\rm e} }
\def\C{ {\mathbb C} }
\def\R{ {\mathbb R} }
\def\H{ {\mathbb H} }
\def\Z{ {\mathbb Z} }
\def\CC{ {\cal C} }
\def\FF{ {\cal F} }
\def\HH{ {\cal H} }
\def\LL{ {\cal L} }
\def\vc#1{ {\mathbf #1} }
\def\bbC{ {\mathbb C} }
\def\fR{ f_{\rm R} }
\def\fL{ f_{\rm L} }
\def\qqqquad{\qquad\qquad}
\def\qqwhere{\qquad\hbox{where}\qquad}
\def\Res_#1{\underset{#1}{\rm Res}\,}
\def\sech{ {\rm sech}\, }
\def\acos{ {\rm acos}\, }
\def\asin{ {\rm asin}\, }
\def\atan{ {\rm atan}\, }
\def\Ei{ {\rm Ei}\, }
\def\upepsilon{\varepsilon}
\def\Xint#1{ \mathchoice
{\XXint\displaystyle\textstyle{#1} }%
{\XXint\textstyle\scriptstyle{#1} }%
{\XXint\scriptstyle\scriptscriptstyle{#1} }%
{\XXint\scriptscriptstyle\scriptscriptstyle{#1} }%
\!\int}
\def\XXint#1#2#3{ {\setbox0=\hbox{$#1{#2#3}{\int}$}
\vcenter{\hbox{$#2#3$}}\kern-.5\wd0} }
\def\ddashint{\Xint=}
\def\dashint{\Xint-}
% \def\dashint
\def\infdashint{\dashint_{-\infty}^\infty}
\def\addtab#1={#1\;&=}
\def\ccr{\\\addtab}
\def\ip<#1>{\left\langle{#1}\right\rangle}
\def\dx{\D x}
\def\dt{\D t}
\def\dz{\D z}
\def\ds{\D s}
\def\rR{ {\rm R} }
\def\rL{ {\rm L} }
\def\norm#1{\left\| #1 \right\|}
\def\pr(#1){\left({#1}\right)}
\def\br[#1]{\left[{#1}\right]}
\def\abs#1{\left|{#1}\right|}
\def\fpr(#1){\!\pr({#1})}
\def\sopmatrix#1{ \begin{pmatrix}#1\end{pmatrix} }
\def\endash{–}
\def\emdash{—}
\def\mdblksquare{\blacksquare}
\def\lgblksquare{\blacksquare}
\def\scre{\E}
\def\mapengine#1,#2.{\mapfunction{#1}\ifx\void#2\else\mapengine #2.\fi }
\def\map[#1]{\mapengine #1,\void.}
\def\mapenginesep_#1#2,#3.{\mapfunction{#2}\ifx\void#3\else#1\mapengine #3.\fi }
\def\mapsep_#1[#2]{\mapenginesep_{#1}#2,\void.}
\def\vcbr[#1]{\pr(#1)}
\def\bvect[#1,#2]{
{
\def\dots{\cdots}
\def\mapfunction##1{\ | \ ##1}
\sopmatrix{
\,#1\map[#2]\,
}
}
}
\def\vect[#1]{
{\def\dots{\ldots}
\vcbr[{#1}]
} }
\def\vectt[#1]{
{\def\dots{\ldots}
\vect[{#1}]^{\top}
} }
\def\Vectt[#1]{
{
\def\mapfunction##1{##1 \cr}
\def\dots{\vdots}
\begin{pmatrix}
\map[#1]
\end{pmatrix}
} }
\def\addtab#1={#1\;&=}
\def\ccr{\\\addtab}
\def\questionequals{= \!\!\!\!\!\!{\scriptstyle ? \atop }\,\,\,}
\begin{document}
\textbf{Applied Complex Analysis (2021)}
\section{Lecture 15: Electrostatic charges in a potential well}
\begin{itemize}
\item[1. ] One charge in a well
\item[2. ] Two charges in a well
\item[3. ] $N$ charges in a well
\end{itemize}
We study the dynamics of many electric charges in a potential well. We restrict our attention to 1D: picture an infinitely long wire with charges on it. We will see that as the number of charges becomes large, we can determine the limiting distribution by inverting the Hilbert transform, but one for which the \emph{interval} it is posed on is solved for.
\subsection{One charge in a potential well}
Consider a point charge in a potential well $V(x) = x^2 / 2$, initially located at $\lambda_0$. The dynamics of the point charge are governed by Newton's law
\[
m { \D^2 \lambda \over \dt^2} + \gamma { \D \lambda \over \dt} = -V'(\lambda) = - \lambda
\]
Here $m$ is the mass (we will always take it to be 1) and $\gamma$ is a damping constant (think friction) which ensures that the charge eventually comes to rest.
This is a simple damped harmonic oscillator: if we are positive we move left and if we are negative we move right. Here we show the evolution as a function of time, where we plot this on top of the potential to see its effect.
\begin{lstlisting}
(*@\HLJLk{using}@*) (*@\HLJLn{DifferentialEquations}@*)(*@\HLJLp{,}@*) (*@\HLJLn{Plots}@*)(*@\HLJLp{,}@*) (*@\HLJLn{StaticArrays}@*)
(*@\HLJLn{V}@*) (*@\HLJLoB{=}@*) (*@\HLJLn{x}@*) (*@\HLJLoB{->}@*) (*@\HLJLn{x}@*)(*@\HLJLoB{{\textasciicircum}}@*)(*@\HLJLni{2}@*)(*@\HLJLoB{/}@*)(*@\HLJLni{2}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{Potential}@*)
(*@\HLJLn{Vp}@*) (*@\HLJLoB{=}@*) (*@\HLJLn{x}@*) (*@\HLJLoB{->}@*) (*@\HLJLn{x}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{Force}@*)
(*@\HLJLn{\ensuremath{\lambda}{\_}0}@*) (*@\HLJLoB{=}@*) (*@\HLJLnfB{2.3}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{initial}@*) (*@\HLJLcs{location}@*)
(*@\HLJLn{v{\_}0}@*) (*@\HLJLoB{=}@*) (*@\HLJLnfB{1.2}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{initial}@*) (*@\HLJLcs{velocity}@*)
(*@\HLJLn{\ensuremath{\gamma}}@*) (*@\HLJLoB{=}@*) (*@\HLJLnfB{1.0}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{damping}@*)
(*@\HLJLn{\ensuremath{\lambda}v}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{solve}@*)(*@\HLJLp{(}@*)(*@\HLJLnf{ODEProblem}@*)(*@\HLJLp{(((}@*)(*@\HLJLn{\ensuremath{\lambda}}@*)(*@\HLJLp{,}@*)(*@\HLJLn{v}@*)(*@\HLJLp{),}@*)(*@\HLJLn{{\_}}@*)(*@\HLJLp{,}@*)(*@\HLJLn{t}@*)(*@\HLJLp{)}@*) (*@\HLJLoB{->}@*) (*@\HLJLnf{SVector}@*)(*@\HLJLp{(}@*)(*@\HLJLn{v}@*)(*@\HLJLp{,}@*)(*@\HLJLoB{-}@*)(*@\HLJLnf{Vp}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}}@*)(*@\HLJLp{)}@*) (*@\HLJLoB{-}@*) (*@\HLJLn{\ensuremath{\gamma}}@*)(*@\HLJLoB{*}@*)(*@\HLJLn{v}@*)(*@\HLJLp{),}@*) (*@\HLJLnf{SVector}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}{\_}0}@*)(*@\HLJLp{,}@*)(*@\HLJLn{v{\_}0}@*)(*@\HLJLp{),}@*) (*@\HLJLp{(}@*)(*@\HLJLnfB{0.0}@*)(*@\HLJLp{,}@*) (*@\HLJLnfB{20.0}@*)(*@\HLJLp{));}@*) (*@\HLJLn{reltol}@*)(*@\HLJLoB{=}@*)(*@\HLJLnfB{1E-6}@*)(*@\HLJLp{);}@*)
(*@\HLJLn{tt}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{range}@*)(*@\HLJLp{(}@*)(*@\HLJLnfB{0.}@*)(*@\HLJLp{,}@*)(*@\HLJLni{20}@*)(*@\HLJLp{;}@*) (*@\HLJLn{length}@*)(*@\HLJLoB{=}@*)(*@\HLJLni{1000}@*)(*@\HLJLp{)}@*)
(*@\HLJLn{xx}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{range}@*)(*@\HLJLp{(}@*)(*@\HLJLoB{-}@*)(*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLni{2}@*)(*@\HLJLoB{*}@*)(*@\HLJLnf{last}@*)(*@\HLJLp{(}@*)(*@\HLJLn{tt}@*)(*@\HLJLp{)),}@*)(*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLni{2}@*)(*@\HLJLoB{*}@*)(*@\HLJLnf{last}@*)(*@\HLJLp{(}@*)(*@\HLJLn{tt}@*)(*@\HLJLp{));}@*) (*@\HLJLn{length}@*)(*@\HLJLoB{=}@*)(*@\HLJLni{1000}@*)(*@\HLJLp{)}@*)
(*@\HLJLnf{plot}@*)(*@\HLJLp{(}@*)(*@\HLJLn{xx}@*)(*@\HLJLp{,}@*) (*@\HLJLn{V}@*)(*@\HLJLoB{.}@*)(*@\HLJLp{(}@*)(*@\HLJLn{xx}@*)(*@\HLJLp{);}@*) (*@\HLJLn{label}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}potential"{}}@*)(*@\HLJLp{,}@*) (*@\HLJLn{xlabel}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}x"{}}@*)(*@\HLJLp{,}@*) (*@\HLJLn{ylabel}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}time}@*) (*@\HLJLs{and}@*) (*@\HLJLs{potential"{}}@*)(*@\HLJLp{)}@*)
(*@\HLJLnf{plot!}@*)(*@\HLJLp{(}@*)(*@\HLJLn{first}@*)(*@\HLJLoB{.}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}v}@*)(*@\HLJLoB{.}@*)(*@\HLJLp{(}@*)(*@\HLJLn{tt}@*)(*@\HLJLp{)),}@*) (*@\HLJLn{tt}@*)(*@\HLJLp{;}@*) (*@\HLJLn{label}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}charge"{}}@*)(*@\HLJLp{)}@*)
\end{lstlisting}
\includegraphics[width=\linewidth]{figures/Lecture15_1_1.pdf}
In the long-time limit the charge reaches an equilibrium: it no longer varies in time. That is, it reaches a point where ${\D^2 \lambda \over \dt^2} = {\D \lambda \over \dt} = 0$, which is equivalent to solving
\[
0 = - V'(\lambda) = - \lambda
\]
in other words, the minimum of the well, in this case $\lambda = 0$.
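As a quick numerical sanity check (a minimal sketch, reusing the solution object from the listing above), we can evaluate the position and velocity at the final time and confirm that both are essentially zero:
\begin{lstlisting}
# evaluate the solution from the listing above at the final time
pos, vel = (*@\ensuremath{\lambda}@*)v(20.0)
pos, vel  # both should be ~ 0 once the charge has settled at the minimum
\end{lstlisting}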
\subsection{Two charges in a potential well}
A point charge at $x_0$ in 2D creates a (Newtonian) potential field of $V_{x_0}(x) = -\log|x - x_0|$ with derivative
\[
V_{x_0}'(x) = \begin{cases} -{1 \over x - x_0} & x > x_0 \\
{1 \over x_0 - x} & x < x_0
\end{cases} = {1 \over x_0 - x}
\]
Here we are thinking of the charges as 2D but restricted to the real line: think of a 1D wire embedded in 2D space.
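As a quick check of this derivative formula (a minimal sketch; the values $x_0 = 0.3$ and $x = 1.1$ are arbitrary sample points), we can compare it against a centered finite difference:
\begin{lstlisting}
# finite-difference check of V_{x0}'(x) = 1/(x0 - x); sample points are arbitrary
x0, x, h = 0.3, 1.1, 1e-6
Vx0 = x -> -log(abs(x - x0))
(Vx0(x + h) - Vx0(x - h))/(2h), 1/(x0 - x)  # the two values should agree closely
\end{lstlisting}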
Suppose there are now two charges, $\lambda_1$ and $\lambda_2$, but no potential well. The first charge $\lambda_1$ is repelled by the field of $\lambda_2$, namely $V_{\lambda_2}(x) = -\log|x - \lambda_2|$, so that
\[
m {\D^2 \lambda_1 \over \D t^2} = - V_{\lambda_2}'(\lambda_1) = {1 \over \lambda_1 -\lambda_2}
\]
Similarly, the effect on $\lambda_2$ is
\[
m {\D^2 \lambda_2 \over \D t^2} = {1 \over \lambda_2 -\lambda_1}
\]
Unrestricted, the two charges repel each other off to infinity:
\begin{lstlisting}
(*@\HLJLn{N}@*) (*@\HLJLoB{=}@*) (*@\HLJLni{2}@*)
(*@\HLJLn{\ensuremath{\lambda}{\_}0}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{randn}@*)(*@\HLJLp{(}@*)(*@\HLJLni{2}@*)(*@\HLJLp{)}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{random}@*) (*@\HLJLcs{initial}@*) (*@\HLJLcs{location}@*)
(*@\HLJLn{v{\_}0}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{randn}@*)(*@\HLJLp{(}@*)(*@\HLJLni{2}@*)(*@\HLJLp{)}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{random}@*) (*@\HLJLcs{initial}@*) (*@\HLJLcs{velocity}@*)
(*@\HLJLn{prob}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{ODEProblem}@*)(*@\HLJLp{(}@*)(*@\HLJLk{function}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}v}@*)(*@\HLJLp{,}@*)(*@\HLJLn{{\_}}@*)(*@\HLJLp{,}@*)(*@\HLJLn{t}@*)(*@\HLJLp{)}@*)
(*@\HLJLn{\ensuremath{\lambda}}@*) (*@\HLJLoB{=}@*) (*@\HLJLn{\ensuremath{\lambda}v}@*)(*@\HLJLp{[}@*)(*@\HLJLni{1}@*)(*@\HLJLoB{:}@*)(*@\HLJLn{N}@*)(*@\HLJLp{]}@*)
(*@\HLJLn{v}@*) (*@\HLJLoB{=}@*) (*@\HLJLn{\ensuremath{\lambda}v}@*)(*@\HLJLp{[}@*)(*@\HLJLn{N}@*)(*@\HLJLoB{+}@*)(*@\HLJLni{1}@*)(*@\HLJLoB{:}@*)(*@\HLJLk{end}@*)(*@\HLJLp{]}@*)
(*@\HLJLp{[}@*)(*@\HLJLn{v}@*)(*@\HLJLp{;}@*) (*@\HLJLni{1}@*)(*@\HLJLoB{/}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}}@*)(*@\HLJLp{[}@*)(*@\HLJLni{1}@*)(*@\HLJLp{]}@*) (*@\HLJLoB{-}@*) (*@\HLJLn{\ensuremath{\lambda}}@*)(*@\HLJLp{[}@*)(*@\HLJLni{2}@*)(*@\HLJLp{]);}@*) (*@\HLJLni{1}@*)(*@\HLJLoB{/}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}}@*)(*@\HLJLp{[}@*)(*@\HLJLni{2}@*)(*@\HLJLp{]}@*) (*@\HLJLoB{-}@*) (*@\HLJLn{\ensuremath{\lambda}}@*)(*@\HLJLp{[}@*)(*@\HLJLni{1}@*)(*@\HLJLp{])]}@*)
(*@\HLJLk{end}@*)(*@\HLJLp{,}@*) (*@\HLJLp{[}@*)(*@\HLJLn{\ensuremath{\lambda}{\_}0}@*)(*@\HLJLp{;}@*) (*@\HLJLn{v{\_}0}@*)(*@\HLJLp{],}@*) (*@\HLJLp{(}@*)(*@\HLJLnfB{0.0}@*)(*@\HLJLp{,}@*) (*@\HLJLnfB{20.0}@*)(*@\HLJLp{))}@*)
(*@\HLJLn{\ensuremath{\lambda}v}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{solve}@*)(*@\HLJLp{(}@*)(*@\HLJLn{prob}@*)(*@\HLJLp{;}@*) (*@\HLJLn{reltol}@*)(*@\HLJLoB{=}@*)(*@\HLJLnfB{1E-6}@*)(*@\HLJLp{)}@*)
(*@\HLJLn{tt}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{range}@*)(*@\HLJLp{(}@*)(*@\HLJLnfB{0.}@*)(*@\HLJLp{,}@*)(*@\HLJLni{20}@*)(*@\HLJLp{;}@*) (*@\HLJLn{length}@*)(*@\HLJLoB{=}@*)(*@\HLJLni{1000}@*)(*@\HLJLp{)}@*)
(*@\HLJLn{p}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{plot}@*)(*@\HLJLp{(;}@*)(*@\HLJLn{xlabel}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}x"{}}@*)(*@\HLJLp{,}@*)(*@\HLJLn{ylabel}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}t"{}}@*)(*@\HLJLp{)}@*)
(*@\HLJLk{for}@*) (*@\HLJLn{j}@*) (*@\HLJLoB{=}@*) (*@\HLJLni{1}@*)(*@\HLJLoB{:}@*)(*@\HLJLn{N}@*)
(*@\HLJLnf{plot!}@*)(*@\HLJLp{(}@*)(*@\HLJLn{getindex}@*)(*@\HLJLoB{.}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}v}@*)(*@\HLJLoB{.}@*)(*@\HLJLp{(}@*)(*@\HLJLn{tt}@*)(*@\HLJLp{),}@*)(*@\HLJLn{j}@*)(*@\HLJLp{),}@*) (*@\HLJLn{tt}@*)(*@\HLJLp{;}@*) (*@\HLJLn{label}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}charge}@*) (*@\HLJLsi{{\$}j}@*)(*@\HLJLs{"{}}@*)(*@\HLJLp{)}@*)
(*@\HLJLk{end}@*)
(*@\HLJLn{p}@*)
\end{lstlisting}
\includegraphics[width=\linewidth]{figures/Lecture15_2_1.pdf}
Adding a potential well and damping, we obtain an equilibrium again:
\begin{align*}
{\D^2 \lambda_1 \over \D t^2} + \gamma {\D \lambda_1 \over \dt} = {1 \over \lambda_1 -\lambda_2} - V'(\lambda_1) \\
{\D^2 \lambda_2 \over \D t^2}+ \gamma {\D \lambda_2 \over \dt} = {1 \over \lambda_2 -\lambda_1} - V'(\lambda_2)
\end{align*}
which we see here
\begin{lstlisting}
(*@\HLJLn{V}@*) (*@\HLJLoB{=}@*) (*@\HLJLn{x}@*) (*@\HLJLoB{->}@*) (*@\HLJLn{x}@*)(*@\HLJLoB{{\textasciicircum}}@*)(*@\HLJLni{2}@*)(*@\HLJLoB{/}@*)(*@\HLJLni{2}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{Potential}@*)
(*@\HLJLn{Vp}@*) (*@\HLJLoB{=}@*) (*@\HLJLn{x}@*) (*@\HLJLoB{->}@*) (*@\HLJLn{x}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{Force}@*)
(*@\HLJLn{\ensuremath{\gamma}}@*) (*@\HLJLoB{=}@*) (*@\HLJLnfB{1.0}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{damping}@*)
(*@\HLJLn{prob}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{ODEProblem}@*)(*@\HLJLp{(}@*)(*@\HLJLk{function}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}v}@*)(*@\HLJLp{,}@*)(*@\HLJLn{{\_}}@*)(*@\HLJLp{,}@*)(*@\HLJLn{t}@*)(*@\HLJLp{)}@*)
(*@\HLJLn{\ensuremath{\lambda}}@*) (*@\HLJLoB{=}@*) (*@\HLJLn{\ensuremath{\lambda}v}@*)(*@\HLJLp{[}@*)(*@\HLJLni{1}@*)(*@\HLJLoB{:}@*)(*@\HLJLn{N}@*)(*@\HLJLp{]}@*)
(*@\HLJLn{v}@*) (*@\HLJLoB{=}@*) (*@\HLJLn{\ensuremath{\lambda}v}@*)(*@\HLJLp{[}@*)(*@\HLJLn{N}@*)(*@\HLJLoB{+}@*)(*@\HLJLni{1}@*)(*@\HLJLoB{:}@*)(*@\HLJLk{end}@*)(*@\HLJLp{]}@*)
(*@\HLJLp{[}@*)(*@\HLJLn{v}@*)(*@\HLJLp{;}@*) (*@\HLJLni{1}@*)(*@\HLJLoB{/}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}}@*)(*@\HLJLp{[}@*)(*@\HLJLni{1}@*)(*@\HLJLp{]}@*) (*@\HLJLoB{-}@*) (*@\HLJLn{\ensuremath{\lambda}}@*)(*@\HLJLp{[}@*)(*@\HLJLni{2}@*)(*@\HLJLp{])}@*) (*@\HLJLoB{-}@*) (*@\HLJLnf{Vp}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}}@*)(*@\HLJLp{[}@*)(*@\HLJLni{1}@*)(*@\HLJLp{])}@*) (*@\HLJLoB{-}@*) (*@\HLJLn{\ensuremath{\gamma}}@*)(*@\HLJLoB{*}@*)(*@\HLJLn{v}@*)(*@\HLJLp{[}@*)(*@\HLJLni{1}@*)(*@\HLJLp{];}@*) (*@\HLJLni{1}@*)(*@\HLJLoB{/}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}}@*)(*@\HLJLp{[}@*)(*@\HLJLni{2}@*)(*@\HLJLp{]}@*) (*@\HLJLoB{-}@*) (*@\HLJLn{\ensuremath{\lambda}}@*)(*@\HLJLp{[}@*)(*@\HLJLni{1}@*)(*@\HLJLp{])}@*) (*@\HLJLoB{-}@*) (*@\HLJLnf{Vp}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}}@*)(*@\HLJLp{[}@*)(*@\HLJLni{2}@*)(*@\HLJLp{])}@*) (*@\HLJLoB{-}@*) (*@\HLJLn{\ensuremath{\gamma}}@*)(*@\HLJLoB{*}@*)(*@\HLJLn{v}@*)(*@\HLJLp{[}@*)(*@\HLJLni{2}@*)(*@\HLJLp{]]}@*)
(*@\HLJLk{end}@*)(*@\HLJLp{,}@*) (*@\HLJLp{[}@*)(*@\HLJLn{\ensuremath{\lambda}{\_}0}@*)(*@\HLJLp{;}@*) (*@\HLJLn{v{\_}0}@*)(*@\HLJLp{],}@*) (*@\HLJLp{(}@*)(*@\HLJLnfB{0.0}@*)(*@\HLJLp{,}@*) (*@\HLJLnfB{20.0}@*)(*@\HLJLp{))}@*)
(*@\HLJLn{\ensuremath{\lambda}v}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{solve}@*)(*@\HLJLp{(}@*)(*@\HLJLn{prob}@*)(*@\HLJLp{;}@*) (*@\HLJLn{reltol}@*)(*@\HLJLoB{=}@*)(*@\HLJLnfB{1E-6}@*)(*@\HLJLp{);}@*)
(*@\HLJLn{xx}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{range}@*)(*@\HLJLp{(}@*)(*@\HLJLoB{-}@*)(*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLni{2}@*)(*@\HLJLoB{*}@*)(*@\HLJLnf{last}@*)(*@\HLJLp{(}@*)(*@\HLJLn{tt}@*)(*@\HLJLp{)),}@*)(*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLni{2}@*)(*@\HLJLoB{*}@*)(*@\HLJLnf{last}@*)(*@\HLJLp{(}@*)(*@\HLJLn{tt}@*)(*@\HLJLp{));}@*) (*@\HLJLn{length}@*)(*@\HLJLoB{=}@*)(*@\HLJLni{1000}@*)(*@\HLJLp{)}@*)
(*@\HLJLn{p}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{plot}@*)(*@\HLJLp{(}@*)(*@\HLJLn{xx}@*)(*@\HLJLp{,}@*) (*@\HLJLn{V}@*)(*@\HLJLoB{.}@*)(*@\HLJLp{(}@*)(*@\HLJLn{xx}@*)(*@\HLJLp{);}@*) (*@\HLJLn{label}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}potential"{}}@*)(*@\HLJLp{,}@*) (*@\HLJLn{xlabel}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}x"{}}@*)(*@\HLJLp{,}@*) (*@\HLJLn{ylabel}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}time}@*) (*@\HLJLs{and}@*) (*@\HLJLs{potential"{}}@*)(*@\HLJLp{)}@*)
(*@\HLJLk{for}@*) (*@\HLJLn{j}@*) (*@\HLJLoB{=}@*) (*@\HLJLni{1}@*)(*@\HLJLoB{:}@*)(*@\HLJLn{N}@*)
(*@\HLJLnf{plot!}@*)(*@\HLJLp{(}@*)(*@\HLJLn{getindex}@*)(*@\HLJLoB{.}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}v}@*)(*@\HLJLoB{.}@*)(*@\HLJLp{(}@*)(*@\HLJLn{tt}@*)(*@\HLJLp{),}@*)(*@\HLJLn{j}@*)(*@\HLJLp{),}@*) (*@\HLJLn{tt}@*)(*@\HLJLp{;}@*) (*@\HLJLn{label}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}charge}@*) (*@\HLJLsi{{\$}j}@*)(*@\HLJLs{"{}}@*)(*@\HLJLp{)}@*)
(*@\HLJLk{end}@*)
(*@\HLJLn{p}@*)
\end{lstlisting}
\includegraphics[width=\linewidth]{figures/Lecture15_3_1.pdf}
The fixed point is reached when the time derivatives vanish; it is given by
\begin{align*}
0 = {1 \over \lambda_1 -\lambda_2} - V'(\lambda_1) \\
0 = {1 \over \lambda_2 -\lambda_1} - V'(\lambda_2)
\end{align*}
For this potential we can solve for the fixed point exactly: we need to solve
\begin{align*}
\lambda_1 = {1 \over \lambda_1 -\lambda_2} \\
\lambda_2 = {1 \over \lambda_2 -\lambda_1}
\end{align*}
Adding the two equations gives $\lambda_1 + \lambda_2 = 0$. Substituting $\lambda_2 = -\lambda_1$ into the first equation gives $\lambda_1 = {1 \over 2 \lambda_1}$, and hence $\lambda_1 = \pm{1 \over \sqrt 2}$.
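We can verify this against the simulation (a minimal sketch, reusing the damped two-charge solution computed in the listing above):
\begin{lstlisting}
# positions are the first N entries of the damped solution at the final time
sort((*@\ensuremath{\lambda}@*)v(20.0)[1:N]), 1/sqrt(2)  # positions should be ~ -1/sqrt(2), +1/sqrt(2)
\end{lstlisting}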
\subsection{$N$ charges in a potential well}
We now consider $N$ charges, where each charge repels every other charge via a logarithmic potential, so we end up summing over all the other charges:
\begin{align*}
m {\D^2 \lambda_k \over \D t^2} + \gamma {\D \lambda_k \over \dt} =
\sum_{j=1 \atop j \neq k}^N {1 \over \lambda_k -\lambda_j} - V'(\lambda_k)
\end{align*}
\begin{lstlisting}
(*@\HLJLn{N}@*) (*@\HLJLoB{=}@*) (*@\HLJLni{100}@*)
(*@\HLJLn{V}@*) (*@\HLJLoB{=}@*) (*@\HLJLn{x}@*) (*@\HLJLoB{->}@*) (*@\HLJLn{x}@*)(*@\HLJLoB{{\textasciicircum}}@*)(*@\HLJLni{2}@*)(*@\HLJLoB{/}@*)(*@\HLJLni{2}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{Potential}@*)
(*@\HLJLn{Vp}@*) (*@\HLJLoB{=}@*) (*@\HLJLn{x}@*) (*@\HLJLoB{->}@*) (*@\HLJLn{x}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{Force}@*)
(*@\HLJLn{\ensuremath{\lambda}{\_}0}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{randn}@*)(*@\HLJLp{(}@*)(*@\HLJLn{N}@*)(*@\HLJLp{)}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{initial}@*) (*@\HLJLcs{location}@*)
(*@\HLJLn{v{\_}0}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{randn}@*)(*@\HLJLp{(}@*)(*@\HLJLn{N}@*)(*@\HLJLp{)}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{initial}@*) (*@\HLJLcs{velocity}@*)
(*@\HLJLn{\ensuremath{\gamma}}@*) (*@\HLJLoB{=}@*) (*@\HLJLnfB{1.0}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{damping}@*)
(*@\HLJLn{prob}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{ODEProblem}@*)(*@\HLJLp{(}@*)(*@\HLJLk{function}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}v}@*)(*@\HLJLp{,}@*)(*@\HLJLn{{\_}}@*)(*@\HLJLp{,}@*)(*@\HLJLn{t}@*)(*@\HLJLp{)}@*)
(*@\HLJLn{\ensuremath{\lambda}}@*) (*@\HLJLoB{=}@*) (*@\HLJLn{\ensuremath{\lambda}v}@*)(*@\HLJLp{[}@*)(*@\HLJLni{1}@*)(*@\HLJLoB{:}@*)(*@\HLJLn{N}@*)(*@\HLJLp{]}@*)
(*@\HLJLn{v}@*) (*@\HLJLoB{=}@*) (*@\HLJLn{\ensuremath{\lambda}v}@*)(*@\HLJLp{[}@*)(*@\HLJLn{N}@*)(*@\HLJLoB{+}@*)(*@\HLJLni{1}@*)(*@\HLJLoB{:}@*)(*@\HLJLk{end}@*)(*@\HLJLp{]}@*)
(*@\HLJLp{[}@*)(*@\HLJLn{v}@*)(*@\HLJLp{;}@*) (*@\HLJLp{[}@*)(*@\HLJLnf{sum}@*)(*@\HLJLp{(}@*)(*@\HLJLni{1}@*) (*@\HLJLoB{./}@*) (*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}}@*)(*@\HLJLp{[}@*)(*@\HLJLn{k}@*)(*@\HLJLp{]}@*) (*@\HLJLoB{.-}@*) (*@\HLJLn{\ensuremath{\lambda}}@*)(*@\HLJLp{[[}@*)(*@\HLJLni{1}@*)(*@\HLJLoB{:}@*)(*@\HLJLn{k}@*)(*@\HLJLoB{-}@*)(*@\HLJLni{1}@*)(*@\HLJLp{;}@*)(*@\HLJLn{k}@*)(*@\HLJLoB{+}@*)(*@\HLJLni{1}@*)(*@\HLJLoB{:}@*)(*@\HLJLk{end}@*)(*@\HLJLp{]]))}@*) (*@\HLJLoB{-}@*) (*@\HLJLnf{Vp}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}}@*)(*@\HLJLp{[}@*)(*@\HLJLn{k}@*)(*@\HLJLp{])}@*) (*@\HLJLoB{-}@*) (*@\HLJLn{\ensuremath{\gamma}}@*)(*@\HLJLoB{*}@*)(*@\HLJLn{v}@*)(*@\HLJLp{[}@*)(*@\HLJLn{k}@*)(*@\HLJLp{]}@*) (*@\HLJLk{for}@*) (*@\HLJLn{k}@*)(*@\HLJLoB{=}@*)(*@\HLJLni{1}@*)(*@\HLJLoB{:}@*)(*@\HLJLn{N}@*)(*@\HLJLp{]]}@*)
(*@\HLJLk{end}@*)(*@\HLJLp{,}@*) (*@\HLJLp{[}@*)(*@\HLJLn{\ensuremath{\lambda}{\_}0}@*)(*@\HLJLp{;}@*) (*@\HLJLn{v{\_}0}@*)(*@\HLJLp{],}@*) (*@\HLJLp{(}@*)(*@\HLJLnfB{0.0}@*)(*@\HLJLp{,}@*) (*@\HLJLnfB{20.0}@*)(*@\HLJLp{))}@*)
(*@\HLJLn{\ensuremath{\lambda}v}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{solve}@*)(*@\HLJLp{(}@*)(*@\HLJLn{prob}@*)(*@\HLJLp{;}@*) (*@\HLJLn{reltol}@*)(*@\HLJLoB{=}@*)(*@\HLJLnfB{1E-6}@*)(*@\HLJLp{)}@*)
(*@\HLJLn{xx}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{range}@*)(*@\HLJLp{(}@*)(*@\HLJLoB{-}@*)(*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLni{2}@*)(*@\HLJLoB{*}@*)(*@\HLJLnf{last}@*)(*@\HLJLp{(}@*)(*@\HLJLn{tt}@*)(*@\HLJLp{)),}@*)(*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLni{2}@*)(*@\HLJLoB{*}@*)(*@\HLJLnf{last}@*)(*@\HLJLp{(}@*)(*@\HLJLn{tt}@*)(*@\HLJLp{));}@*) (*@\HLJLn{length}@*)(*@\HLJLoB{=}@*)(*@\HLJLni{1000}@*)(*@\HLJLp{)}@*)
(*@\HLJLn{p}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{plot}@*)(*@\HLJLp{(}@*)(*@\HLJLn{xx}@*)(*@\HLJLp{,}@*) (*@\HLJLn{V}@*)(*@\HLJLoB{.}@*)(*@\HLJLp{(}@*)(*@\HLJLn{xx}@*)(*@\HLJLp{);}@*) (*@\HLJLn{legend}@*)(*@\HLJLoB{=}@*)(*@\HLJLkc{false}@*)(*@\HLJLp{,}@*) (*@\HLJLn{xlabel}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}x"{}}@*)(*@\HLJLp{,}@*) (*@\HLJLn{ylabel}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}time}@*) (*@\HLJLs{and}@*) (*@\HLJLs{potential"{}}@*)(*@\HLJLp{)}@*)
(*@\HLJLk{for}@*) (*@\HLJLn{j}@*) (*@\HLJLoB{=}@*) (*@\HLJLni{1}@*)(*@\HLJLoB{:}@*)(*@\HLJLn{N}@*)
(*@\HLJLnf{plot!}@*)(*@\HLJLp{(}@*)(*@\HLJLn{getindex}@*)(*@\HLJLoB{.}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}v}@*)(*@\HLJLoB{.}@*)(*@\HLJLp{(}@*)(*@\HLJLn{tt}@*)(*@\HLJLp{),}@*)(*@\HLJLn{j}@*)(*@\HLJLp{),}@*) (*@\HLJLn{tt}@*)(*@\HLJLp{;}@*) (*@\HLJLn{color}@*)(*@\HLJLoB{=:}@*)(*@\HLJLn{red}@*)(*@\HLJLp{)}@*)
(*@\HLJLk{end}@*)
(*@\HLJLn{p}@*)
\end{lstlisting}
\includegraphics[width=\linewidth]{figures/Lecture15_4_1.pdf}
As the number of charges becomes large, they spread off to infinity. In the case of $V(x) = x^2/2$, we can renormalize by dividing by $\sqrt N$ so they stay bounded: $\mu_k = {\lambda_k \over \sqrt N}$.
\begin{lstlisting}
(*@\HLJLn{p}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{plot}@*)(*@\HLJLp{(}@*)(*@\HLJLn{xx}@*)(*@\HLJLp{,}@*) (*@\HLJLn{V}@*)(*@\HLJLoB{.}@*)(*@\HLJLp{(}@*)(*@\HLJLn{xx}@*)(*@\HLJLp{);}@*) (*@\HLJLn{legend}@*)(*@\HLJLoB{=}@*)(*@\HLJLkc{false}@*)(*@\HLJLp{,}@*) (*@\HLJLn{xlabel}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}x"{}}@*)(*@\HLJLp{,}@*) (*@\HLJLn{ylabel}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}time}@*) (*@\HLJLs{and}@*) (*@\HLJLs{potential"{}}@*)(*@\HLJLp{)}@*)
(*@\HLJLk{for}@*) (*@\HLJLn{j}@*) (*@\HLJLoB{=}@*) (*@\HLJLni{1}@*)(*@\HLJLoB{:}@*)(*@\HLJLn{N}@*)
(*@\HLJLnf{plot!}@*)(*@\HLJLp{(}@*)(*@\HLJLn{getindex}@*)(*@\HLJLoB{.}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}v}@*)(*@\HLJLoB{.}@*)(*@\HLJLp{(}@*)(*@\HLJLn{tt}@*)(*@\HLJLp{)}@*)(*@\HLJLoB{./}@*)(*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLn{N}@*)(*@\HLJLp{),}@*)(*@\HLJLn{j}@*)(*@\HLJLp{),}@*) (*@\HLJLn{tt}@*)(*@\HLJLp{;}@*) (*@\HLJLn{color}@*)(*@\HLJLoB{=:}@*)(*@\HLJLn{red}@*)(*@\HLJLp{)}@*)
(*@\HLJLk{end}@*)
(*@\HLJLn{p}@*)
\end{lstlisting}
\includegraphics[width=\linewidth]{figures/Lecture15_5_1.pdf}
This raises two questions: why do the charges balance out at $\pm \sqrt 2$? And why does the histogram match the density ${\sqrt{2-x^2} \over \pi}$ so precisely:
\begin{lstlisting}
(*@\HLJLnf{histogram}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}v}@*)(*@\HLJLoB{.}@*)(*@\HLJLp{(}@*)(*@\HLJLnfB{20.0}@*)(*@\HLJLp{)[}@*)(*@\HLJLni{1}@*)(*@\HLJLoB{:}@*)(*@\HLJLn{N}@*)(*@\HLJLp{]}@*)(*@\HLJLoB{./}@*)(*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLn{N}@*)(*@\HLJLp{);}@*) (*@\HLJLn{nbins}@*)(*@\HLJLoB{=}@*)(*@\HLJLni{20}@*)(*@\HLJLp{,}@*) (*@\HLJLn{normalize}@*)(*@\HLJLoB{=}@*)(*@\HLJLkc{true}@*)(*@\HLJLp{,}@*) (*@\HLJLn{label}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}histogram}@*) (*@\HLJLs{of}@*) (*@\HLJLs{charges"{}}@*)(*@\HLJLp{)}@*)
(*@\HLJLnf{plot!}@*)(*@\HLJLp{(}@*)(*@\HLJLn{x}@*) (*@\HLJLoB{->}@*) (*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLni{2}@*)(*@\HLJLoB{-}@*)(*@\HLJLn{x}@*)(*@\HLJLoB{{\textasciicircum}}@*)(*@\HLJLni{2}@*)(*@\HLJLp{)}@*)(*@\HLJLoB{/}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\pi}}@*)(*@\HLJLp{),}@*) (*@\HLJLnf{range}@*)(*@\HLJLp{(}@*)(*@\HLJLnf{eps}@*)(*@\HLJLp{()}@*)(*@\HLJLoB{-}@*)(*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLnfB{2.0}@*)(*@\HLJLp{);}@*) (*@\HLJLn{stop}@*)(*@\HLJLoB{=}@*)(*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLni{2}@*)(*@\HLJLp{)}@*)(*@\HLJLoB{-}@*)(*@\HLJLnf{eps}@*)(*@\HLJLp{(),}@*) (*@\HLJLn{length}@*)(*@\HLJLoB{=}@*)(*@\HLJLni{100}@*)(*@\HLJLp{),}@*) (*@\HLJLn{label}@*)(*@\HLJLoB{=}@*)(*@\HLJLs{"{}semicircle"{}}@*)(*@\HLJLp{)}@*)
\end{lstlisting}
\includegraphics[width=\linewidth]{figures/Lecture15_6_1.pdf}
\subsubsection{Equilibrium distribution}
Plugging in $\lambda_k = \sqrt N \mu_k$, we get a dynamical system for $\mu_k$:
\[
m {\D^2 \mu_k \over \D t^2} + \gamma {\D \mu_k \over \D t} = {1 \over N} \sum_{j=1 \atop j \neq k}^N {1 \over \mu_k -\mu_j} - \mu_k
\]
(The choice of scaling like $\sqrt N$ was dictated by $V(x)$: if $V(x) = x^4$ it would be $N^{1/4}$.) Thus the limiting configuration of the charges is found when the change with $t$ vanishes, that is
\[
0 = {1 \over N} \sum_{j=1 \atop j \neq k}^N {1 \over \mu_k -\mu_j} - \mu_k, \qquad k = 1, \ldots, N
\]
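Rather than running the damped dynamics to stationarity, we can also iterate on these equilibrium equations directly. The following is a minimal sketch using a crude explicit-Euler fixed-point iteration (the step size and iteration count are arbitrary, untuned choices):
\begin{lstlisting}
N = 100
mu = collect(range(-1, 1; length=N))  # spread-out initial guess avoids near-collisions
h = 0.01                              # arbitrary small step size
for _ = 1:20_000
    # residual of the equilibrium equations for each charge
    F = [sum(1 ./ (mu[k] .- mu[[1:k-1; k+1:end]]))/N - mu[k] for k = 1:N]
    mu .+= h .* F
end
extrema(mu)  # ~ (-sqrt(2), sqrt(2))
\end{lstlisting}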
It is convenient to represent the point charges by Dirac delta functions:
\[
w_N(x) = {1 \over N} \sum_{k=1}^N \delta_{\mu_k}(x)
\]
normalized so that $\int w_N(x) \dx = 1$. Then
\[
{1 \over N} \sum_{j=1}^N {1 \over x -\mu_j} = \int_{-\infty}^\infty {w_N(t) \dt \over x - t}
\]
or in other words, we have
\[
\HH_{(-\infty,\infty)} w_N(\mu_k) = {V'(\mu_k) \over \pi}, \qquad k = 1, \ldots, N
\]
since
\[
\HH w_N (\mu_k) = {1 \over \pi} \lim_{\epsilon\rightarrow 0} \left(\int_{-\infty}^{\mu_k-\epsilon} + \int_{\mu_k+\epsilon}^\infty\right) {w_N(t) \over \mu_k-t} \dt = {1 \over N\pi} \sum_{j \neq k} {1 \over \mu_k - \mu_j}
\]
Formally (see a more detailed explanation below), $w_N(x)$ tends to a continuous limit as $N\rightarrow \infty$, which we have guessed from the histogram to be $w(x) = { \sqrt{2-x^2} \over \pi}$ for $-\sqrt 2 < x < \sqrt2$. We expect this limit to satisfy the same equation as $w_N$, that is
\[
\HH w(x) = {x \over \pi}
\]
for $x$ in the support of $w(x)$.
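We can test this numerically with the \texttt{hilbert} routine from SingularIntegralEquations.jl (the package loaded in the listing further below). A caveat: sign conventions for the Hilbert transform differ between references, so we assume here that the routine matches the convention of these notes; if not, the check holds up to a sign. We evaluate at the arbitrary point $x = 0.1$:
\begin{lstlisting}
using ApproxFun, SingularIntegralEquations
x = Fun(-sqrt(2) .. sqrt(2))
w = sqrt(2 - x^2)/pi
# assumes hilbert uses the same sign convention as these notes
hilbert(w, 0.1), 0.1/pi  # should agree (possibly up to sign)
\end{lstlisting}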
Why is it $[-\sqrt 2, \sqrt 2]$? Consider the problem posed on a general interval $[a,b]$ where $a$ and $b$ are unknowns. We want to choose the interval $[a,b]$ so that there exists a $w(x)$ satisfying
\begin{itemize}
\item[1. ] $w$ is bounded (Based on observation)
\item[2. ] $w(x) \geq 0$ for $a \leq x \leq b$ (Since it is a probability distribution)
\item[3. ] $\int_a^b w(x) \dx = 1$ (Since it is a probability distribution)
\item[4. ] $\HH_{[a,b]} w(x) = x/\pi$
\end{itemize}
As we saw last lecture, there exists a bounded solution to $\HH_{[-b,b]} u = x/\pi$, namely $u(x) = { \sqrt{b^2-x^2} \over \pi}$ (by the symmetry of the potential we take $a = -b$). Since $\int_{-b}^b u(x) \dx = {b^2 \over 2}$, the choice $b = \sqrt{2}$ ensures that $\int_{-b}^b u(x) \dx = 1$, hence $u(x) = w(x)$.
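We can confirm the normalization numerically with ApproxFun (a minimal sketch; recall that \texttt{sum} integrates a \texttt{Fun} over its domain):
\begin{lstlisting}
using ApproxFun
b = sqrt(2)
x = Fun(-b .. b)
sum(sqrt(b^2 - x^2)/pi)  # integral of u(x) over [-b,b], should be ~ 1.0
\end{lstlisting}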
\subsubsection{Aside: Explanation of limit of $w_N(x)$}
This is beyond the scope of the course, but the convergence of $w_N(x)$ to $w(x)$ is known as weak-* convergence. A simple version of this is that
\[
\int_c^d w_N(x) \dx \rightarrow \int_c^d w(x) \dx
\]
for every choice of interval $(c,d)$. $\int_c^d w_N(x) \dx$ is precisely the number of charges in $(c,d)$ scaled by $1/N$, which is exactly what a histogram plots.
\begin{lstlisting}
(*@\HLJLk{using}@*) (*@\HLJLn{ApproxFun}@*)(*@\HLJLp{,}@*) (*@\HLJLn{SingularIntegralEquations}@*)
(*@\HLJLn{a}@*) (*@\HLJLoB{=}@*) (*@\HLJLoB{-}@*)(*@\HLJLnfB{0.1}@*)(*@\HLJLp{;}@*) (*@\HLJLn{b}@*)(*@\HLJLoB{=}@*) (*@\HLJLnfB{0.3}@*)(*@\HLJLp{;}@*)
(*@\HLJLn{w}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{Fun}@*)(*@\HLJLp{(}@*)(*@\HLJLn{x}@*) (*@\HLJLoB{->}@*) (*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLni{2}@*)(*@\HLJLoB{-}@*)(*@\HLJLn{x}@*)(*@\HLJLoB{{\textasciicircum}}@*)(*@\HLJLni{2}@*)(*@\HLJLp{)}@*)(*@\HLJLoB{/}@*)(*@\HLJLn{\ensuremath{\pi}}@*)(*@\HLJLp{,}@*) (*@\HLJLn{a}@*) (*@\HLJLoB{..}@*) (*@\HLJLn{b}@*)(*@\HLJLp{)}@*)
(*@\HLJLnf{sum}@*)(*@\HLJLp{(}@*)(*@\HLJLn{w}@*)(*@\HLJLp{),}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{integral}@*) (*@\HLJLcs{of}@*) (*@\HLJLcs{w(x)}@*) (*@\HLJLcs{between}@*) (*@\HLJLcs{a}@*) (*@\HLJLcs{and}@*) (*@\HLJLcs{b}@*)
(*@\HLJLnf{length}@*)(*@\HLJLp{(}@*)(*@\HLJLnf{filter}@*)(*@\HLJLp{(}@*)(*@\HLJLn{\ensuremath{\lambda}}@*) (*@\HLJLoB{->}@*) (*@\HLJLn{a}@*) (*@\HLJLoB{\ensuremath{\leq}}@*) (*@\HLJLn{\ensuremath{\lambda}}@*) (*@\HLJLoB{\ensuremath{\leq}}@*) (*@\HLJLn{b}@*)(*@\HLJLp{,}@*) (*@\HLJLnf{\ensuremath{\lambda}v}@*)(*@\HLJLp{(}@*)(*@\HLJLnfB{20.0}@*)(*@\HLJLp{)[}@*)(*@\HLJLni{1}@*)(*@\HLJLoB{:}@*)(*@\HLJLn{N}@*)(*@\HLJLp{]}@*)(*@\HLJLoB{/}@*)(*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLn{N}@*)(*@\HLJLp{)))}@*)(*@\HLJLoB{/}@*)(*@\HLJLn{N}@*) (*@\HLJLcs{{\#}}@*) (*@\HLJLcs{integral}@*) (*@\HLJLcs{of}@*) (*@\HLJLcs{w{\_}N(x)}@*) (*@\HLJLcs{between}@*) (*@\HLJLcs{a}@*) (*@\HLJLcs{and}@*) (*@\HLJLcs{b}@*)
\end{lstlisting}
\begin{lstlisting}
(0.1790059168895313, 0.18)
\end{lstlisting}
Another characterization of weak-* convergence is that the Cauchy transforms converge; that is, for $z$ on any contour $\gamma$ surrounding $a$ and $b$ (now the support of $w$),
\[
\int_a^b {w_N(x) \over x - z} \dx \rightarrow \int_a^b {w(x) \over x - z} \dx
\]
uniformly in $z$ as $N \rightarrow \infty$. Here we demonstrate the convergence at a single point:
\begin{lstlisting}
(*@\HLJLn{x}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{Fun}@*)(*@\HLJLp{(}@*)(*@\HLJLoB{-}@*)(*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLni{2}@*)(*@\HLJLp{)}@*) (*@\HLJLoB{..}@*) (*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLni{2}@*)(*@\HLJLp{))}@*)
(*@\HLJLn{w}@*) (*@\HLJLoB{=}@*) (*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLni{2}@*)(*@\HLJLoB{-}@*)(*@\HLJLn{x}@*)(*@\HLJLoB{{\textasciicircum}}@*)(*@\HLJLni{2}@*)(*@\HLJLp{)}@*)(*@\HLJLoB{/}@*)(*@\HLJLn{\ensuremath{\pi}}@*)
(*@\HLJLn{z}@*) (*@\HLJLoB{=}@*) (*@\HLJLnfB{0.1}@*)(*@\HLJLoB{+}@*)(*@\HLJLnfB{0.2}@*)(*@\HLJLn{im}@*)
(*@\HLJLnf{cauchy}@*)(*@\HLJLp{(}@*)(*@\HLJLn{w}@*)(*@\HLJLp{,}@*) (*@\HLJLn{z}@*)(*@\HLJLp{),}@*) (*@\HLJLp{(}@*)(*@\HLJLnf{sum}@*)(*@\HLJLp{(}@*)(*@\HLJLni{1}@*) (*@\HLJLoB{./}@*)(*@\HLJLp{((}@*)(*@\HLJLnf{\ensuremath{\lambda}v}@*)(*@\HLJLp{(}@*)(*@\HLJLnfB{20.0}@*)(*@\HLJLp{)[}@*)(*@\HLJLni{1}@*)(*@\HLJLoB{:}@*)(*@\HLJLn{N}@*)(*@\HLJLp{]}@*)(*@\HLJLoB{/}@*)(*@\HLJLnf{sqrt}@*)(*@\HLJLp{(}@*)(*@\HLJLn{N}@*)(*@\HLJLp{))}@*) (*@\HLJLoB{.-}@*) (*@\HLJLn{z}@*)(*@\HLJLp{))}@*)(*@\HLJLoB{/}@*)(*@\HLJLn{N}@*)(*@\HLJLp{)}@*)(*@\HLJLoB{/}@*)(*@\HLJLp{(}@*)(*@\HLJLni{2}@*)(*@\HLJLn{\ensuremath{\pi}}@*)(*@\HLJLoB{*}@*)(*@\HLJLn{im}@*)(*@\HLJLp{)}@*)
\end{lstlisting}
\begin{lstlisting}
(0.1949409042727309 + 0.013681505291622225im, 0.19542062466509094 + 0.01372
4291088994856im)
\end{lstlisting}
\end{document}
\section{Axioms for propositional logic}
\hypertarget{section}{%
\section{1}\label{section}}
\bibleverse{1} In the beginning God created the heaven and the earth.
\bibleverse{2} And the earth was without form, and void; and darkness
was upon the face of the deep. And the Spirit of God moved upon the face
of the waters.
\bibleverse{3} And God said, Let there be light: and there was light.
\bibleverse{4} And God saw the light, that it was good: and God divided
the light from the darkness.\footnote{\textbf{1:4} the light from\ldots:
Heb. between the light and between the darkness} \bibleverse{5} And
God called the light Day, and the darkness he called Night. And the
evening and the morning were the first day.\footnote{\textbf{1:5} And
the evening\ldots: Heb. And the evening was, and the morning was etc.}
\bibleverse{6} And God said, Let there be a firmament in the midst of
the waters, and let it divide the waters from the waters.\footnote{\textbf{1:6}
firmament: Heb. expansion} \bibleverse{7} And God made the firmament,
and divided the waters which were under the firmament from the waters
which were above the firmament: and it was so. \bibleverse{8} And God
called the firmament Heaven. And the evening and the morning were the
second day.\footnote{\textbf{1:8} And the evening\ldots: Heb. And the
evening was, and the morning was etc.}
\bibleverse{9} And God said, Let the waters under the heaven be gathered
together unto one place, and let the dry land appear: and it was so.
\bibleverse{10} And God called the dry land Earth; and the gathering
together of the waters called he Seas: and God saw that it was good.
\bibleverse{11} And God said, Let the earth bring forth grass, the herb
yielding seed, and the fruit tree yielding fruit after his kind, whose
seed is in itself, upon the earth: and it was so.\footnote{\textbf{1:11}
grass: Heb. tender grass} \bibleverse{12} And the earth brought forth
grass, and herb yielding seed after his kind, and the tree yielding
fruit, whose seed was in itself, after his kind: and God saw that it was
good. \bibleverse{13} And the evening and the morning were the third
day.\footnote{\textbf{1:13} And the evening\ldots: Heb. And the evening
was, and the morning was etc.}
\bibleverse{14} And God said, Let there be lights in the firmament of
the heaven to divide the day from the night; and let them be for signs,
and for seasons, and for days, and years:\footnote{\textbf{1:14} the
day\ldots: Heb. between the day and between the night} \bibleverse{15}
And let them be for lights in the firmament of the heaven to give light
upon the earth: and it was so. \bibleverse{16} And God made two great
lights; the greater light to rule the day, and the lesser light to rule
the night: he made the stars also.\footnote{\textbf{1:16} to rule the
day\ldots: Heb. for the rule of the day, etc.} \bibleverse{17} And God
set them in the firmament of the heaven to give light upon the earth,
\bibleverse{18} And to rule over the day and over the night, and to
divide the light from the darkness: and God saw that it was good.
\bibleverse{19} And the evening and the morning were the fourth
day.\footnote{\textbf{1:19} And the evening\ldots: Heb. And the evening
was, and the morning was etc.}
\bibleverse{20} And God said, Let the waters bring forth abundantly the
moving creature that hath life, and fowl that may fly above the earth in
the open firmament of heaven.\textsuperscript{{[}\textbf{1:20} moving:
or, creeping{]}}{[}\textbf{1:20} life: Heb.
soul{]}\textsuperscript{{[}\textbf{1:20} fowl\ldots: Heb. let fowl
fly{]}}{[}\textbf{1:20} open\ldots: Heb. face of the firmament of
heaven{]} \bibleverse{21} And God created great whales, and every living
creature that moveth, which the waters brought forth abundantly, after
their kind, and every winged fowl after his kind: and God saw that it
was good. \bibleverse{22} And God blessed them, saying, Be fruitful, and
multiply, and fill the waters in the seas, and let fowl multiply in the
earth. \bibleverse{23} And the evening and the morning were the fifth
day.\footnote{\textbf{1:23} And the evening\ldots: Heb. And the evening
was, and the morning was etc.}
\bibleverse{24} And God said, Let the earth bring forth the living
creature after his kind, cattle, and creeping thing, and beast of the
earth after his kind: and it was so. \bibleverse{25} And God made the
beast of the earth after his kind, and cattle after their kind, and
every thing that creepeth upon the earth after his kind: and God saw
that it was good.
\bibleverse{26} And God said, Let us make man in our image, after our
likeness: and let them have dominion over the fish of the sea, and over
the fowl of the air, and over the cattle, and over all the earth, and
over every creeping thing that creepeth upon the earth. \bibleverse{27}
So God created man in his own image, in the image of God created he him;
male and female created he them. \bibleverse{28} And God blessed them,
and God said unto them, Be fruitful, and multiply, and replenish the
earth, and subdue it: and have dominion over the fish of the sea, and
over the fowl of the air, and over every living thing that moveth upon
the earth.\footnote{\textbf{1:28} moveth: Heb. creepeth}
\bibleverse{29} And God said, Behold, I have given you every herb
bearing seed, which is upon the face of all the earth, and every tree,
in the which is the fruit of a tree yielding seed; to you it shall be
for meat.\textsuperscript{{[}\textbf{1:29} bearing\ldots: Heb. seeding
seed{]}}{[}\textbf{1:29} yielding\ldots: Heb. seeding seed{]}
\bibleverse{30} And to every beast of the earth, and to every fowl of
the air, and to every thing that creepeth upon the earth, wherein there
is life, I have given every green herb for meat: and it was
so.\footnote{\textbf{1:30} life: Heb. a living soul} \bibleverse{31} And
God saw every thing that he had made, and, behold, it was very good. And
the evening and the morning were the sixth day.\footnote{\textbf{1:31}
And the evening\ldots: Heb. And the evening was, and the morning was
etc.}
\hypertarget{section-1}{%
\section{2}\label{section-1}}
\bibleverse{1} Thus the heavens and the earth were finished, and all the
host of them. \bibleverse{2} And on the seventh day God ended his work
which he had made; and he rested on the seventh day from all his work
which he had made. \bibleverse{3} And God blessed the seventh day, and
sanctified it: because that in it he had rested from all his work which
God created and made.\footnote{\textbf{2:3} created\ldots: Heb. created
to make}
\bibleverse{4} These are the generations of the heavens and of the earth
when they were created, in the day that the LORD God made the earth and
the heavens, \bibleverse{5} And every plant of the field before it was
in the earth, and every herb of the field before it grew: for the LORD
God had not caused it to rain upon the earth, and there was not a man to
till the ground. \bibleverse{6} But there went up a mist from the earth,
and watered the whole face of the ground.\footnote{\textbf{2:6}
there\ldots: or, a mist which went up from, etc.} \bibleverse{7} And
the LORD God formed man of the dust of the ground, and breathed into his
nostrils the breath of life; and man became a living soul.\footnote{\textbf{2:7}
of the dust\ldots: Heb. dust of the ground}
\bibleverse{8} And the LORD God planted a garden eastward in Eden; and
there he put the man whom he had formed. \bibleverse{9} And out of the
ground made the LORD God to grow every tree that is pleasant to the
sight, and good for food; the tree of life also in the midst of the
garden, and the tree of knowledge of good and evil. \bibleverse{10} And
a river went out of Eden to water the garden; and from thence it was
parted, and became into four heads. \bibleverse{11} The name of the
first is Pison: that is it which compasseth the whole land of Havilah,
where there is gold; \bibleverse{12} And the gold of that land is good:
there is bdellium and the onyx stone. \bibleverse{13} And the name of
the second river is Gihon: the same is it that compasseth the whole land
of Ethiopia.\footnote{\textbf{2:13} Ethiopia: Heb. Cush} \bibleverse{14}
And the name of the third river is Hiddekel: that is it which goeth
toward the east of Assyria. And the fourth river is
Euphrates.\footnote{\textbf{2:14} toward\ldots: or, eastward to Assyria}
\bibleverse{15} And the LORD God took the man, and put him into the
garden of Eden to dress it and to keep it.\footnote{\textbf{2:15} the
man: or, Adam}
\bibleverse{16} And the LORD God commanded the man, saying, Of every
tree of the garden thou mayest freely eat:\footnote{\textbf{2:16}
thou\ldots: Heb. eating thou shalt eat} \bibleverse{17} But of the
tree of the knowledge of good and evil, thou shalt not eat of it: for in
the day that thou eatest thereof thou shalt surely die.\footnote{\textbf{2:17}
thou shalt surely\ldots: Heb. dying thou shalt die}
\bibleverse{18} And the LORD God said, It is not good that the man
should be alone; I will make him an help meet for him.\footnote{\textbf{2:18}
meet\ldots: Heb. as before him} \bibleverse{19} And out of the ground
the LORD God formed every beast of the field, and every fowl of the air;
and brought them unto Adam to see what he would call them: and
whatsoever Adam called every living creature, that was the name
thereof.\footnote{\textbf{2:19} Adam: or, the man} \bibleverse{20} And
Adam gave names to all cattle, and to the fowl of the air, and to every
beast of the field; but for Adam there was not found an help meet for
him.\footnote{\textbf{2:20} gave: Heb. called}
\bibleverse{21} And the LORD God caused a deep sleep to fall upon Adam,
and he slept: and he took one of his ribs, and closed up the flesh
instead thereof; \bibleverse{22} And the rib, which the LORD God had
taken from man, made he a woman, and brought her unto the
man.\footnote{\textbf{2:22} made: Heb. builded} \bibleverse{23} And Adam
said, This is now bone of my bones, and flesh of my flesh: she shall be
called Woman, because she was taken out of
Man.\textsuperscript{{[}\textbf{2:23} Woman: Heb.
Isha{]}}{[}\textbf{2:23} Man: Heb. Ish{]} \bibleverse{24} Therefore
shall a man leave his father and his mother, and shall cleave unto his
wife: and they shall be one flesh. \bibleverse{25} And they were both
naked, the man and his wife, and were not ashamed.
\hypertarget{section-2}{%
\section{3}\label{section-2}}
\bibleverse{1} Now the serpent was more subtil than any beast of the
field which the LORD God had made. And he said unto the woman, Yea, hath
God said, Ye shall not eat of every tree of the garden?\footnote{\textbf{3:1}
Yea\ldots: Heb. Yea, because, etc.} \bibleverse{2} And the woman said
unto the serpent, We may eat of the fruit of the trees of the garden:
\bibleverse{3} But of the fruit of the tree which is in the midst of the
garden, God hath said, Ye shall not eat of it, neither shall ye touch
it, lest ye die. \bibleverse{4} And the serpent said unto the woman, Ye
shall not surely die: \bibleverse{5} For God doth know that in the day
ye eat thereof, then your eyes shall be opened, and ye shall be as gods,
knowing good and evil.
\bibleverse{6} And when the woman saw that the tree was good for food,
and that it was pleasant to the eyes, and a tree to be desired to make
one wise, she took of the fruit thereof, and did eat, and gave also unto
her husband with her; and he did eat.\footnote{\textbf{3:6} pleasant:
Heb. a desire} \bibleverse{7} And the eyes of them both were opened,
and they knew that they were naked; and they sewed fig leaves together,
and made themselves aprons.\footnote{\textbf{3:7} aprons: or, things to
gird about} \bibleverse{8} And they heard the voice of the LORD God
walking in the garden in the cool of the day: and Adam and his wife hid
themselves from the presence of the LORD God amongst the trees of the
garden.\footnote{\textbf{3:8} cool: Heb. wind}
\bibleverse{9} And the LORD God called unto Adam, and said unto him,
Where art thou? \bibleverse{10} And he said, I heard thy voice in the
garden, and I was afraid, because I was naked; and I hid myself.
\bibleverse{11} And he said, Who told thee that thou wast naked? Hast
thou eaten of the tree, whereof I commanded thee that thou shouldest not
eat? \bibleverse{12} And the man said, The woman whom thou gavest to be
with me, she gave me of the tree, and I did eat. \bibleverse{13} And the
LORD God said unto the woman, What is this that thou hast done? And the
woman said, The serpent beguiled me, and I did eat.
\bibleverse{14} And the LORD God said unto the serpent, Because thou
hast done this, thou art cursed above all cattle, and above every beast
of the field; upon thy belly shalt thou go, and dust shalt thou eat all
the days of thy life: \bibleverse{15} And I will put enmity between thee
and the woman, and between thy seed and her seed; it shall bruise thy
head, and thou shalt bruise his heel. \bibleverse{16} Unto the woman he
said, I will greatly multiply thy sorrow and thy conception; in sorrow
thou shalt bring forth children; and thy desire shall be to thy husband,
and he shall rule over thee.\footnote{\textbf{3:16} to thy\ldots: or,
subject to thy husband}
\bibleverse{17} And unto Adam he said, Because thou hast hearkened unto
the voice of thy wife, and hast eaten of the tree, of which I commanded
thee, saying, Thou shalt not eat of it: cursed is the ground for thy
sake; in sorrow shalt thou eat of it all the days of thy life;
\bibleverse{18} Thorns also and thistles shall it bring forth to thee;
and thou shalt eat the herb of the field;\footnote{\textbf{3:18}
bring\ldots: Heb. cause to bud} \bibleverse{19} In the sweat of thy
face shalt thou eat bread, till thou return unto the ground; for out of
it wast thou taken: for dust thou art, and unto dust shalt thou return.
\bibleverse{20} And Adam called his wife's name Eve; because she was the
mother of all living.\footnote{\textbf{3:20} Eve: Heb. Chavah: that is
Living} \bibleverse{21} Unto Adam also and to his wife did the LORD
God make coats of skins, and clothed them.
\bibleverse{22} And the LORD God said, Behold, the man is become as one
of us, to know good and evil: and now, lest he put forth his hand, and
take also of the tree of life, and eat, and live for ever:
\bibleverse{23} Therefore the LORD God sent him forth from the garden of
Eden, to till the ground from whence he was taken. \bibleverse{24} So he
drove out the man; and he placed at the east of the garden of Eden
Cherubims, and a flaming sword which turned every way, to keep the way
of the tree of life.
\hypertarget{section-3}{%
\section{4}\label{section-3}}
\bibleverse{1} And Adam knew Eve his wife; and she conceived, and bare
Cain, and said, I have gotten a man from the LORD.\footnote{\textbf{4:1}
Cain: that is, Gotten, or, Acquired} \bibleverse{2} And she again bare
his brother Abel. And Abel was a keeper of sheep, but Cain was a tiller
of the ground.\textsuperscript{{[}\textbf{4:2} Abel: Heb.
Hebel{]}}{[}\textbf{4:2} a keeper: Heb. a feeder{]}
\bibleverse{3} And in process of time it came to pass, that Cain brought
of the fruit of the ground an offering unto the LORD.\footnote{\textbf{4:3}
in process\ldots: Heb. at the end of days} \bibleverse{4} And Abel, he
also brought of the firstlings of his flock and of the fat thereof. And
the LORD had respect unto Abel and to his offering:\footnote{\textbf{4:4}
flock: Heb. sheep, or, goats} \bibleverse{5} But unto Cain and to his
offering he had not respect. And Cain was very wroth, and his
countenance fell.
\bibleverse{6} And the LORD said unto Cain, Why art thou wroth? and why
is thy countenance fallen? \bibleverse{7} If thou doest well, shalt thou
not be accepted? and if thou doest not well, sin lieth at the door. And
unto thee shall be his desire, and thou shalt rule over
him.\textsuperscript{{[}\textbf{4:7} be accepted: or, have the
excellency{]}}{[}\textbf{4:7} unto\ldots: or, subject unto thee{]}
\bibleverse{8} And Cain talked with Abel his brother: and it came to
pass, when they were in the field, that Cain rose up against Abel his
brother, and slew him.
\bibleverse{9} And the LORD said unto Cain, Where is Abel thy brother?
And he said, I know not: Am I my brother's keeper? \bibleverse{10} And
he said, What hast thou done? the voice of thy brother's blood crieth
unto me from the ground.\footnote{\textbf{4:10} blood: Heb. bloods}
\bibleverse{11} And now art thou cursed from the earth, which hath
opened her mouth to receive thy brother's blood from thy hand;
\bibleverse{12} When thou tillest the ground, it shall not henceforth
yield unto thee her strength; a fugitive and a vagabond shalt thou be in
the earth.
\bibleverse{13} And Cain said unto the LORD, My punishment is greater
than I can bear.\footnote{\textbf{4:13} My\ldots: or, Mine iniquity is
greater than that it may be forgiven} \bibleverse{14} Behold, thou
hast driven me out this day from the face of the earth; and from thy
face shall I be hid; and I shall be a fugitive and a vagabond in the
earth; and it shall come to pass, that every one that findeth me shall
slay me. \bibleverse{15} And the LORD said unto him, Therefore whosoever
slayeth Cain, vengeance shall be taken on him sevenfold. And the LORD
set a mark upon Cain, lest any finding him should kill him.
\bibleverse{16} And Cain went out from the presence of the LORD, and
dwelt in the land of Nod, on the east of Eden. \bibleverse{17} And Cain
knew his wife; and she conceived, and bare Enoch: and he builded a city,
and called the name of the city, after the name of his son,
Enoch.\footnote{\textbf{4:17} Enoch: Heb. Chanoch} \bibleverse{18} And
unto Enoch was born Irad: and Irad begat Mehujael: and Mehujael begat
Methusael: and Methusael begat Lamech.\footnote{\textbf{4:18} Lamech:
Heb. Lemech}
\bibleverse{19} And Lamech took unto him two wives: the name of the one
was Adah, and the name of the other Zillah. \bibleverse{20} And Adah
bare Jabal: he was the father of such as dwell in tents, and of such as
have cattle. \bibleverse{21} And his brother's name was Jubal: he was
the father of all such as handle the harp and organ. \bibleverse{22} And
Zillah, she also bare Tubal-cain, an instructer of every artificer in
brass and iron: and the sister of Tubal-cain was Naamah.\footnote{\textbf{4:22}
instructer: Heb. whetter}
\bibleverse{23} And Lamech said unto his wives, Adah and Zillah, Hear my
voice; ye wives of Lamech, hearken unto my speech: for I have slain a
man to my wounding, and a young man to my
hurt.\textsuperscript{{[}\textbf{4:23} I have\ldots: or, I would slay a
man in my wound, etc.{]}}{[}\textbf{4:23} to my hurt: or, in my hurt{]}
\bibleverse{24} If Cain shall be avenged sevenfold, truly Lamech seventy
and sevenfold.
\bibleverse{25} And Adam knew his wife again; and she bare a son, and
called his name Seth: For God, said she, hath appointed me another seed
instead of Abel, whom Cain slew.\footnote{\textbf{4:25} Seth: Heb.
Sheth: that is Appointed, or, Put} \bibleverse{26} And to Seth, to him
also there was born a son; and he called his name Enos: then began men
to call upon the name of the LORD.\textsuperscript{{[}\textbf{4:26}
Enos: Heb. Enosh{]}}{[}\textbf{4:26} to call\ldots: or, to call
themselves by the name of the Lord{]}
\hypertarget{section-4}{%
\section{5}\label{section-4}}
\bibleverse{1} This is the book of the generations of Adam. In the day
that God created man, in the likeness of God made he him; \bibleverse{2}
Male and female created he them; and blessed them, and called their name
Adam, in the day when they were created.
\bibleverse{3} And Adam lived an hundred and thirty years, and begat a
son in his own likeness, after his image; and called his name Seth:
\bibleverse{4} And the days of Adam after he had begotten Seth were
eight hundred years: and he begat sons and daughters: \bibleverse{5} And
all the days that Adam lived were nine hundred and thirty years: and he
died.
\bibleverse{6} And Seth lived an hundred and five years, and begat
Enos:\footnote{\textbf{5:6} Enos: Heb. Enosh} \bibleverse{7} And Seth
lived after he begat Enos eight hundred and seven years, and begat sons
and daughters: \bibleverse{8} And all the days of Seth were nine hundred
and twelve years: and he died.
\bibleverse{9} And Enos lived ninety years, and begat Cainan:\footnote{\textbf{5:9}
Cainan: Heb. Kenan} \bibleverse{10} And Enos lived after he begat
Cainan eight hundred and fifteen years, and begat sons and daughters:
\bibleverse{11} And all the days of Enos were nine hundred and five
years: and he died.
\bibleverse{12} And Cainan lived seventy years, and begat
Mahalaleel:\footnote{\textbf{5:12} Mahalaleel: Gr. Maleleel}
\bibleverse{13} And Cainan lived after he begat Mahalaleel eight hundred
and forty years, and begat sons and daughters: \bibleverse{14} And all
the days of Cainan were nine hundred and ten years: and he died.
\bibleverse{15} And Mahalaleel lived sixty and five years, and begat
Jared:\footnote{\textbf{5:15} Jared: Heb. Jered} \bibleverse{16} And
Mahalaleel lived after he begat Jared eight hundred and thirty years,
and begat sons and daughters: \bibleverse{17} And all the days of
Mahalaleel were eight hundred ninety and five years: and he died.
\bibleverse{18} And Jared lived an hundred sixty and two years, and he
begat Enoch: \bibleverse{19} And Jared lived after he begat Enoch eight
hundred years, and begat sons and daughters: \bibleverse{20} And all the
days of Jared were nine hundred sixty and two years: and he died.
\bibleverse{21} And Enoch lived sixty and five years, and begat
Methuselah:\footnote{\textbf{5:21} Methuselah: Gr. Mathusala}
\bibleverse{22} And Enoch walked with God after he begat Methuselah
three hundred years, and begat sons and daughters: \bibleverse{23} And
all the days of Enoch were three hundred sixty and five years:
\bibleverse{24} And Enoch walked with God: and he was not; for God took
him.
\bibleverse{25} And Methuselah lived an hundred eighty and seven years,
and begat Lamech: \bibleverse{26} And Methuselah lived after he begat
Lamech seven hundred eighty and two years, and begat sons and
daughters:\footnote{\textbf{5:26} Lamech: Heb. Lemech} \bibleverse{27}
And all the days of Methuselah were nine hundred sixty and nine years:
and he died.
\bibleverse{28} And Lamech lived an hundred eighty and two years, and
begat a son: \bibleverse{29} And he called his name Noah, saying, This
same shall comfort us concerning our work and toil of our hands, because
of the ground which the LORD hath cursed.\footnote{\textbf{5:29} Noah:
Gr. Noe: that is Rest, or, Comfort} \bibleverse{30} And Lamech lived
after he begat Noah five hundred ninety and five years, and begat sons
and daughters: \bibleverse{31} And all the days of Lamech were seven
hundred seventy and seven years: and he died. \bibleverse{32} And Noah
was five hundred years old: and Noah begat Shem, Ham, and Japheth.
\hypertarget{section-5}{%
\section{6}\label{section-5}}
\bibleverse{1} And it came to pass, when men began to multiply on the
face of the earth, and daughters were born unto them, \bibleverse{2}
That the sons of God saw the daughters of men that they were fair; and
they took them wives of all which they chose. \bibleverse{3} And the
LORD said, My spirit shall not always strive with man, for that he also
is flesh: yet his days shall be an hundred and twenty years.
\bibleverse{4} There were giants in the earth in those days; and also
after that, when the sons of God came in unto the daughters of men, and
they bare children to them, the same became mighty men which were of
old, men of renown.
\bibleverse{5} And GOD saw that the wickedness of man was great in the
earth, and that every imagination of the thoughts of his heart was only
evil continually.\textsuperscript{{[}\textbf{6:5} every\ldots: or, the
whole imagination: the Hebrew word signifieth not only the imagination,
but also the purposes and desires{]}}{[}\textbf{6:5} continually: Heb.
every day{]}
\bibleverse{6} And it repented the LORD that he had made man on the
earth, and it grieved him at his heart. \bibleverse{7} And the LORD
said, I will destroy man whom I have created from the face of the earth;
both man, and beast, and the creeping thing, and the fowls of the air;
for it repenteth me that I have made them.\footnote{\textbf{6:7}
both\ldots: Heb. from man unto beast}
\bibleverse{8} But Noah found grace in the eyes of the LORD.
\bibleverse{9} These are the generations of Noah: Noah was a just man
and perfect in his generations, and Noah walked with God.\footnote{\textbf{6:9}
perfect: or, upright} \bibleverse{10} And Noah begat three sons, Shem,
Ham, and Japheth.
\bibleverse{11} The earth also was corrupt before God, and the earth was
filled with violence. \bibleverse{12} And God looked upon the earth,
and, behold, it was corrupt; for all flesh had corrupted his way upon
the earth.
\bibleverse{13} And God said unto Noah, The end of all flesh is come
before me; for the earth is filled with violence through them; and,
behold, I will destroy them with the earth.\footnote{\textbf{6:13} with
the earth: or, from the earth}
\bibleverse{14} Make thee an ark of gopher wood; rooms shalt thou make
in the ark, and shalt pitch it within and without with pitch.\footnote{\textbf{6:14}
rooms: Heb. nests} \bibleverse{15} And this is the fashion which thou
shalt make it of: The length of the ark shall be three hundred cubits,
the breadth of it fifty cubits, and the height of it thirty cubits.
\bibleverse{16} A window shalt thou make to the ark, and in a cubit
shalt thou finish it above; and the door of the ark shalt thou set in
the side thereof; with lower, second, and third stories shalt thou make
it. \bibleverse{17} And, behold, I, even I, do bring a flood of waters
upon the earth, to destroy all flesh, wherein is the breath of life,
from under heaven; and every thing that is in the earth shall die.
\bibleverse{18} But with thee will I establish my covenant; and thou
shalt come into the ark, thou, and thy sons, and thy wife, and thy sons'
wives with thee. \bibleverse{19} And of every living thing of all flesh,
two of every sort shalt thou bring into the ark, to keep them alive with
thee; they shall be male and female. \bibleverse{20} Of fowls after
their kind, and of cattle after their kind, of every creeping thing of
the earth after his kind, two of every sort shall come unto thee, to
keep them alive. \bibleverse{21} And take thou unto thee of all food
that is eaten, and thou shalt gather it to thee; and it shall be for
food for thee, and for them. \bibleverse{22} Thus did Noah; according to
all that God commanded him, so did he.
\hypertarget{section-6}{%
\section{7}\label{section-6}}
\bibleverse{1} And the LORD said unto Noah, Come thou and all thy house
into the ark; for thee have I seen righteous before me in this
generation. \bibleverse{2} Of every clean beast thou shalt take to thee
by sevens, the male and his female: and of beasts that are not clean by
two, the male and his female.\footnote{\textbf{7:2} by sevens: Heb.
seven seven} \bibleverse{3} Of fowls also of the air by sevens, the
male and the female; to keep seed alive upon the face of all the
earth.\footnote{\textbf{7:3} by sevens: Heb. seven seven} \bibleverse{4}
For yet seven days, and I will cause it to rain upon the earth forty
days and forty nights; and every living substance that I have made will
I destroy from off the face of the earth.\footnote{\textbf{7:4} destroy:
Heb. blot out}
\bibleverse{5} And Noah did according unto all that the LORD commanded
him. \bibleverse{6} And Noah was six hundred years old when the flood of
waters was upon the earth.
\bibleverse{7} And Noah went in, and his sons, and his wife, and his
sons' wives with him, into the ark, because of the waters of the flood.
\bibleverse{8} Of clean beasts, and of beasts that are not clean, and of
fowls, and of every thing that creepeth upon the earth, \bibleverse{9}
There went in two and two unto Noah into the ark, the male and the
female, as God had commanded Noah. \bibleverse{10} And it came to pass
after seven days, that the waters of the flood were upon the
earth.\footnote{\textbf{7:10} after\ldots: or, on the seventh day}
\bibleverse{11} In the six hundredth year of Noah's life, in the second
month, the seventeenth day of the month, the same day were all the
fountains of the great deep broken up, and the windows of heaven were
opened.\footnote{\textbf{7:11} windows: or, floodgates} \bibleverse{12}
And the rain was upon the earth forty days and forty nights.
\bibleverse{13} In the selfsame day entered Noah, and Shem, and Ham, and
Japheth, the sons of Noah, and Noah's wife, and the three wives of his
sons with them, into the ark; \bibleverse{14} They, and every beast
after his kind, and all the cattle after their kind, and every creeping
thing that creepeth upon the earth after his kind, and every fowl after
his kind, every bird of every sort.\footnote{\textbf{7:14} sort: Heb.
wing} \bibleverse{15} And they went in unto Noah into the ark, two and
two of all flesh, wherein is the breath of life. \bibleverse{16} And
they that went in, went in male and female of all flesh, as God had
commanded him: and the LORD shut him in.
\bibleverse{17} And the flood was forty days upon the earth; and the
waters increased, and bare up the ark, and it was lift up above the
earth. \bibleverse{18} And the waters prevailed, and were increased
greatly upon the earth; and the ark went upon the face of the waters.
\bibleverse{19} And the waters prevailed exceedingly upon the earth; and
all the high hills, that were under the whole heaven, were covered.
\bibleverse{20} Fifteen cubits upward did the waters prevail; and the
mountains were covered.
\bibleverse{21} And all flesh died that moved upon the earth, both of
fowl, and of cattle, and of beast, and of every creeping thing that
creepeth upon the earth, and every man: \bibleverse{22} All in whose
nostrils was the breath of life, of all that was in the dry land,
died.\footnote{\textbf{7:22} the breath\ldots: Heb. the breath of the
spirit of life} \bibleverse{23} And every living substance was
destroyed which was upon the face of the ground, both man, and cattle,
and the creeping things, and the fowl of the heaven; and they were
destroyed from the earth: and Noah only remained alive, and they that
were with him in the ark. \bibleverse{24} And the waters prevailed upon
the earth an hundred and fifty days.
\hypertarget{section-7}{%
\section{8}\label{section-7}}
\bibleverse{1} And God remembered Noah, and every living thing, and all
the cattle that was with him in the ark: and God made a wind to pass
over the earth, and the waters asswaged; \bibleverse{2} The fountains
also of the deep and the windows of heaven were stopped, and the rain
from heaven was restrained; \bibleverse{3} And the waters returned from
off the earth continually: and after the end of the hundred and fifty
days the waters were abated.\footnote{\textbf{8:3} continually: Heb. in
going and returning}
\bibleverse{4} And the ark rested in the seventh month, on the
seventeenth day of the month, upon the mountains of Ararat.
\bibleverse{5} And the waters decreased continually until the tenth
month: in the tenth month, on the first day of the month, were the tops
of the mountains seen.\footnote{\textbf{8:5} decreased\ldots: Heb. were
in going and decreasing}
\bibleverse{6} And it came to pass at the end of forty days, that Noah
opened the window of the ark which he had made: \bibleverse{7} And he
sent forth a raven, which went forth to and fro, until the waters were
dried up from off the earth.\footnote{\textbf{8:7} to\ldots: Heb. in
going forth and returning} \bibleverse{8} Also he sent forth a dove
from him, to see if the waters were abated from off the face of the
ground; \bibleverse{9} But the dove found no rest for the sole of her
foot, and she returned unto him into the ark, for the waters were on the
face of the whole earth: then he put forth his hand, and took her, and
pulled her in unto him into the ark.\footnote{\textbf{8:9} pulled\ldots:
Heb. caused her to come} \bibleverse{10} And he stayed yet other seven
days; and again he sent forth the dove out of the ark; \bibleverse{11}
And the dove came in to him in the evening; and, lo, in her mouth was an
olive leaf pluckt off: so Noah knew that the waters were abated from off
the earth. \bibleverse{12} And he stayed yet other seven days; and sent
forth the dove; which returned not again unto him any more.
\bibleverse{13} And it came to pass in the six hundredth and first year,
in the first month, the first day of the month, the waters were dried up
from off the earth: and Noah removed the covering of the ark, and
looked, and, behold, the face of the ground was dry. \bibleverse{14} And
in the second month, on the seven and twentieth day of the month, was
the earth dried.
\bibleverse{15} And God spake unto Noah, saying, \bibleverse{16} Go
forth of the ark, thou, and thy wife, and thy sons, and thy sons' wives
with thee. \bibleverse{17} Bring forth with thee every living thing that
is with thee, of all flesh, both of fowl, and of cattle, and of every
creeping thing that creepeth upon the earth; that they may breed
abundantly in the earth, and be fruitful, and multiply upon the earth.
\bibleverse{18} And Noah went forth, and his sons, and his wife, and his
sons' wives with him: \bibleverse{19} Every beast, every creeping thing,
and every fowl, and whatsoever creepeth upon the earth, after their
kinds, went forth out of the ark.\footnote{\textbf{8:19} kinds: Heb.
families}
\bibleverse{20} And Noah builded an altar unto the LORD; and took of
every clean beast, and of every clean fowl, and offered burnt offerings
on the altar. \bibleverse{21} And the LORD smelled a sweet savour; and
the LORD said in his heart, I will not again curse the ground any more
for man's sake; for the imagination of man's heart is evil from his
youth; neither will I again smite any more every thing living, as I have
done.\footnote{\textbf{8:21} a sweet\ldots: Heb. a savour of rest or,
satisfaction}\footnote{\textbf{8:21} for the imagination: or, through
the imagination} \bibleverse{22} While the earth remaineth,
seedtime and harvest, and cold and heat, and summer and winter, and day
and night shall not cease.\footnote{\textbf{8:22} While\ldots: Heb. As
yet all the days of the earth}
\hypertarget{section-8}{%
\section{9}\label{section-8}}
\bibleverse{1} And God blessed Noah and his sons, and said unto them, Be
fruitful, and multiply, and replenish the earth. \bibleverse{2} And the
fear of you and the dread of you shall be upon every beast of the earth,
and upon every fowl of the air, upon all that moveth upon the earth, and
upon all the fishes of the sea; into your hand are they delivered.
\bibleverse{3} Every moving thing that liveth shall be meat for you;
even as the green herb have I given you all things. \bibleverse{4} But
flesh with the life thereof, which is the blood thereof, shall ye not
eat. \bibleverse{5} And surely your blood of your lives will I require;
at the hand of every beast will I require it, and at the hand of man; at
the hand of every man's brother will I require the life of man.
\bibleverse{6} Whoso sheddeth man's blood, by man shall his blood be
shed: for in the image of God made he man. \bibleverse{7} And you, be ye
fruitful, and multiply; bring forth abundantly in the earth, and
multiply therein.
\bibleverse{8} And God spake unto Noah, and to his sons with him,
saying, \bibleverse{9} And I, behold, I establish my covenant with you,
and with your seed after you; \bibleverse{10} And with every living
creature that is with you, of the fowl, of the cattle, and of every
beast of the earth with you; from all that go out of the ark, to every
beast of the earth. \bibleverse{11} And I will establish my covenant
with you; neither shall all flesh be cut off any more by the waters of a
flood; neither shall there any more be a flood to destroy the earth.
\bibleverse{12} And God said, This is the token of the covenant which I
make between me and you and every living creature that is with you, for
perpetual generations: \bibleverse{13} I do set my bow in the cloud, and
it shall be for a token of a covenant between me and the earth.
\bibleverse{14} And it shall come to pass, when I bring a cloud over the
earth, that the bow shall be seen in the cloud: \bibleverse{15} And I
will remember my covenant, which is between me and you and every living
creature of all flesh; and the waters shall no more become a flood to
destroy all flesh. \bibleverse{16} And the bow shall be in the cloud;
and I will look upon it, that I may remember the everlasting covenant
between God and every living creature of all flesh that is upon the
earth. \bibleverse{17} And God said unto Noah, This is the token of the
covenant, which I have established between me and all flesh that is upon
the earth.
\bibleverse{18} And the sons of Noah, that went forth of the ark, were
Shem, and Ham, and Japheth: and Ham is the father of Canaan.\footnote{\textbf{9:18}
Canaan: Heb. Chenaan} \bibleverse{19} These are the three sons of
Noah: and of them was the whole earth overspread. \bibleverse{20} And
Noah began to be an husbandman, and he planted a vineyard:
\bibleverse{21} And he drank of the wine, and was drunken; and he was
uncovered within his tent. \bibleverse{22} And Ham, the father of
Canaan, saw the nakedness of his father, and told his two brethren
without. \bibleverse{23} And Shem and Japheth took a garment, and laid
it upon both their shoulders, and went backward, and covered the
nakedness of their father; and their faces were backward, and they saw
not their father's nakedness.
\bibleverse{24} And Noah awoke from his wine, and knew what his younger
son had done unto him. \bibleverse{25} And he said, Cursed be Canaan; a
servant of servants shall he be unto his brethren. \bibleverse{26} And
he said, Blessed be the LORD God of Shem; and Canaan shall be his
servant.\footnote{\textbf{9:26} his servant: or, servant to them}
\bibleverse{27} God shall enlarge Japheth, and he shall dwell in the
tents of Shem; and Canaan shall be his servant.\footnote{\textbf{9:27}
enlarge: or, persuade}
\bibleverse{28} And Noah lived after the flood three hundred and fifty
years. \bibleverse{29} And all the days of Noah were nine hundred and
fifty years: and he died.
\hypertarget{section-9}{%
\section{10}\label{section-9}}
\bibleverse{1} Now these are the generations of the sons of Noah, Shem,
Ham, and Japheth: and unto them were sons born after the flood.
\bibleverse{2} The sons of Japheth; Gomer, and Magog, and Madai, and
Javan, and Tubal, and Meshech, and Tiras. \bibleverse{3} And the sons of
Gomer; Ashkenaz, and Riphath, and Togarmah. \bibleverse{4} And the sons
of Javan; Elishah, and Tarshish, Kittim, and Dodanim.\footnote{\textbf{10:4}
Dodanim: or, as some read it, Rodanim} \bibleverse{5} By these were
the isles of the Gentiles divided in their lands; every one after his
tongue, after their families, in their nations.
\bibleverse{6} And the sons of Ham; Cush, and Mizraim, and Phut, and
Canaan. \bibleverse{7} And the sons of Cush; Seba, and Havilah, and
Sabtah, and Raamah, and Sabtecha: and the sons of Raamah; Sheba, and
Dedan. \bibleverse{8} And Cush begat Nimrod: he began to be a mighty one
in the earth. \bibleverse{9} He was a mighty hunter before the LORD:
wherefore it is said, Even as Nimrod the mighty hunter before the LORD.
\bibleverse{10} And the beginning of his kingdom was Babel, and Erech,
and Accad, and Calneh, in the land of Shinar.\footnote{\textbf{10:10}
Babel: Gr. Babylon} \bibleverse{11} Out of that land went forth
Asshur, and builded Nineveh, and the city Rehoboth, and
Calah,\footnote{\textbf{10:11} went\ldots: or, he went out into
Assyria}\footnote{\textbf{10:11} the city\ldots: or, the streets of the
city} \bibleverse{12} And Resen between Nineveh and Calah: the same is
a great city. \bibleverse{13} And Mizraim begat Ludim, and Anamim, and
Lehabim, and Naphtuhim, \bibleverse{14} And Pathrusim, and Casluhim,
(out of whom came Philistim,) and Caphtorim.
\bibleverse{15} And Canaan begat Sidon his firstborn, and
Heth,\footnote{\textbf{10:15} Sidon: Heb. Tzidon} \bibleverse{16} And
the Jebusite, and the Amorite, and the Girgasite, \bibleverse{17} And
the Hivite, and the Arkite, and the Sinite, \bibleverse{18} And the
Arvadite, and the Zemarite, and the Hamathite: and afterward were the
families of the Canaanites spread abroad. \bibleverse{19} And the border
of the Canaanites was from Sidon, as thou comest to Gerar, unto Gaza; as
thou goest, unto Sodom, and Gomorrah, and Admah, and Zeboim, even unto
Lasha.\footnote{\textbf{10:19} Gaza: Heb. Azzah} \bibleverse{20} These
are the sons of Ham, after their families, after their tongues, in their
countries, and in their nations.
\bibleverse{21} Unto Shem also, the father of all the children of Eber,
the brother of Japheth the elder, even to him were children born.
\bibleverse{22} The children of Shem; Elam, and Asshur, and Arphaxad,
and Lud, and Aram.\footnote{\textbf{10:22} Arphaxad: Heb. Arpachshad}
\bibleverse{23} And the children of Aram; Uz, and Hul, and Gether, and
Mash. \bibleverse{24} And Arphaxad begat Salah; and Salah begat
Eber.\footnote{\textbf{10:24} Salah: Heb. Shelah} \bibleverse{25} And
unto Eber were born two sons: the name of one was Peleg; for in his days
was the earth divided; and his brother's name was Joktan.\footnote{\textbf{10:25}
Peleg: that is Division} \bibleverse{26} And Joktan begat Almodad, and
Sheleph, and Hazarmaveth, and Jerah, \bibleverse{27} And Hadoram, and
Uzal, and Diklah, \bibleverse{28} And Obal, and Abimael, and Sheba,
\bibleverse{29} And Ophir, and Havilah, and Jobab: all these were the
sons of Joktan. \bibleverse{30} And their dwelling was from Mesha, as
thou goest unto Sephar a mount of the east. \bibleverse{31} These are
the sons of Shem, after their families, after their tongues, in their
lands, after their nations. \bibleverse{32} These are the families of
the sons of Noah, after their generations, in their nations: and by
these were the nations divided in the earth after the flood.
\hypertarget{section-10}{%
\section{11}\label{section-10}}
\bibleverse{1} And the whole earth was of one language, and of one
speech.\footnote{\textbf{11:1} language: Heb.
lip.}\footnote{\textbf{11:1} speech: Heb. words} \bibleverse{2} And it
came to pass, as they journeyed from the east, that they found a plain
in the land of Shinar; and they dwelt there.\footnote{\textbf{11:2}
from\ldots: or, eastward} \bibleverse{3} And they said one to another,
Go to, let us make brick, and burn them throughly. And they had brick
for stone, and slime had they for
morter.\footnote{\textbf{11:3} they said\ldots: Heb. a man said to his
neighbour}\footnote{\textbf{11:3} burn\ldots: Heb. burn them to a
burning} \bibleverse{4} And they said, Go to, let us build us a city
and a tower, whose top may reach unto heaven; and let us make us a name,
lest we be scattered abroad upon the face of the whole earth.
\bibleverse{5} And the LORD came down to see the city and the tower,
which the children of men builded. \bibleverse{6} And the LORD said,
Behold, the people is one, and they have all one language; and this they
begin to do: and now nothing will be restrained from them, which they
have imagined to do. \bibleverse{7} Go to, let us go down, and there
confound their language, that they may not understand one another's
speech. \bibleverse{8} So the LORD scattered them abroad from thence
upon the face of all the earth: and they left off to build the city.
\bibleverse{9} Therefore is the name of it called Babel; because the
LORD did there confound the language of all the earth: and from thence
did the LORD scatter them abroad upon the face of all the
earth.\footnote{\textbf{11:9} Babel: that is, Confusion}
\bibleverse{10} These are the generations of Shem: Shem was an hundred
years old, and begat Arphaxad two years after the flood: \bibleverse{11}
And Shem lived after he begat Arphaxad five hundred years, and begat
sons and daughters. \bibleverse{12} And Arphaxad lived five and thirty
years, and begat Salah: \bibleverse{13} And Arphaxad lived after he
begat Salah four hundred and three years, and begat sons and daughters.
\bibleverse{14} And Salah lived thirty years, and begat Eber:
\bibleverse{15} And Salah lived after he begat Eber four hundred and
three years, and begat sons and daughters. \bibleverse{16} And Eber
lived four and thirty years, and begat Peleg:\footnote{\textbf{11:16}
Peleg: Gr. Phalec} \bibleverse{17} And Eber lived after he begat Peleg
four hundred and thirty years, and begat sons and daughters.
\bibleverse{18} And Peleg lived thirty years, and begat Reu:
\bibleverse{19} And Peleg lived after he begat Reu two hundred and nine
years, and begat sons and daughters. \bibleverse{20} And Reu lived two
and thirty years, and begat Serug:\footnote{\textbf{11:20} Serug: Gr.
Saruch} \bibleverse{21} And Reu lived after he begat Serug two hundred
and seven years, and begat sons and daughters. \bibleverse{22} And Serug
lived thirty years, and begat Nahor: \bibleverse{23} And Serug lived
after he begat Nahor two hundred years, and begat sons and daughters.
\bibleverse{24} And Nahor lived nine and twenty years, and begat
Terah:\footnote{\textbf{11:24} Terah: Gr. Thara} \bibleverse{25} And
Nahor lived after he begat Terah an hundred and nineteen years, and
begat sons and daughters. \bibleverse{26} And Terah lived seventy years,
and begat Abram, Nahor, and Haran.
\bibleverse{27} Now these are the generations of Terah: Terah begat
Abram, Nahor, and Haran; and Haran begat Lot. \bibleverse{28} And Haran
died before his father Terah in the land of his nativity, in Ur of the
Chaldees. \bibleverse{29} And Abram and Nahor took them wives: the name
of Abram's wife was Sarai; and the name of Nahor's wife, Milcah, the
daughter of Haran, the father of Milcah, and the father of Iscah.
\bibleverse{30} But Sarai was barren; she had no child. \bibleverse{31}
And Terah took Abram his son, and Lot the son of Haran his son's son,
and Sarai his daughter in law, his son Abram's wife; and they went forth
with them from Ur of the Chaldees, to go into the land of Canaan; and
they came unto Haran, and dwelt there. \bibleverse{32} And the days of
Terah were two hundred and five years: and Terah died in Haran.
\hypertarget{section-11}{%
\section{12}\label{section-11}}
\bibleverse{1} Now the LORD had said unto Abram, Get thee out of thy
country, and from thy kindred, and from thy father's house, unto a land
that I will shew thee: \bibleverse{2} And I will make of thee a great
nation, and I will bless thee, and make thy name great; and thou shalt
be a blessing: \bibleverse{3} And I will bless them that bless thee, and
curse him that curseth thee: and in thee shall all families of the earth
be blessed.
\bibleverse{4} So Abram departed, as the LORD had spoken unto him; and
Lot went with him: and Abram was seventy and five years old when he
departed out of Haran. \bibleverse{5} And Abram took Sarai his wife, and
Lot his brother's son, and all their substance that they had gathered,
and the souls that they had gotten in Haran; and they went forth to go
into the land of Canaan; and into the land of Canaan they came.
\bibleverse{6} And Abram passed through the land unto the place of
Sichem, unto the plain of Moreh. And the Canaanite was then in the
land.\footnote{\textbf{12:6} plain: Heb. plains} \bibleverse{7} And the
LORD appeared unto Abram, and said, Unto thy seed will I give this land:
and there builded he an altar unto the LORD, who appeared unto him.
\bibleverse{8} And he removed from thence unto a mountain on the east of
Beth-el, and pitched his tent, having Beth-el on the west, and Hai on
the east: and there he builded an altar unto the LORD, and called upon
the name of the LORD. \bibleverse{9} And Abram journeyed, going on still
toward the south.\footnote{\textbf{12:9} going\ldots: Heb. in going and
journeying}
\bibleverse{10} And there was a famine in the land: and Abram went down
into Egypt to sojourn there; for the famine was grievous in the land.
\bibleverse{11} And it came to pass, when he was come near to enter into
Egypt, that he said unto Sarai his wife, Behold now, I know that thou
art a fair woman to look upon: \bibleverse{12} Therefore it shall come
to pass, when the Egyptians shall see thee, that they shall say, This is
his wife: and they will kill me, but they will save thee alive.
\bibleverse{13} Say, I pray thee, thou art my sister: that it may be
well with me for thy sake; and my soul shall live because of thee.
\bibleverse{14} And it came to pass, that, when Abram was come into
Egypt, the Egyptians beheld the woman that she was very fair.
\bibleverse{15} The princes also of Pharaoh saw her, and commended her
before Pharaoh: and the woman was taken into Pharaoh's house.
\bibleverse{16} And he entreated Abram well for her sake: and he had
sheep, and oxen, and he asses, and menservants, and maidservants, and
she asses, and camels. \bibleverse{17} And the LORD plagued Pharaoh and
his house with great plagues because of Sarai Abram's wife.
\bibleverse{18} And Pharaoh called Abram, and said, What is this that
thou hast done unto me? why didst thou not tell me that she was thy
wife? \bibleverse{19} Why saidst thou, She is my sister? so I might have
taken her to me to wife: now therefore behold thy wife, take her, and go
thy way. \bibleverse{20} And Pharaoh commanded his men concerning him:
and they sent him away, and his wife, and all that he had.
\hypertarget{section-12}{%
\section{13}\label{section-12}}
\bibleverse{1} And Abram went up out of Egypt, he, and his wife, and all
that he had, and Lot with him, into the south. \bibleverse{2} And Abram
was very rich in cattle, in silver, and in gold. \bibleverse{3} And he
went on his journeys from the south even to Beth-el, unto the place
where his tent had been at the beginning, between Beth-el and Hai;
\bibleverse{4} Unto the place of the altar, which he had made there at
the first: and there Abram called on the name of the LORD.
\bibleverse{5} And Lot also, which went with Abram, had flocks, and
herds, and tents. \bibleverse{6} And the land was not able to bear them,
that they might dwell together: for their substance was great, so that
they could not dwell together. \bibleverse{7} And there was a strife
between the herdmen of Abram's cattle and the herdmen of Lot's cattle:
and the Canaanite and the Perizzite dwelled then in the land.
\bibleverse{8} And Abram said unto Lot, Let there be no strife, I pray
thee, between me and thee, and between my herdmen and thy herdmen; for
we be brethren.\footnote{\textbf{13:8} brethren: Heb. men brethren}
\bibleverse{9} Is not the whole land before thee? separate thyself, I
pray thee, from me: if thou wilt take the left hand, then I will go to
the right; or if thou depart to the right hand, then I will go to the
left.
\bibleverse{10} And Lot lifted up his eyes, and beheld all the plain of
Jordan, that it was well watered every where, before the LORD destroyed
Sodom and Gomorrah, even as the garden of the LORD, like the land of
Egypt, as thou comest unto Zoar. \bibleverse{11} Then Lot chose him all
the plain of Jordan; and Lot journeyed east: and they separated
themselves the one from the other. \bibleverse{12} Abram dwelled in the
land of Canaan, and Lot dwelled in the cities of the plain, and pitched
his tent toward Sodom. \bibleverse{13} But the men of Sodom were wicked
and sinners before the LORD exceedingly.
\bibleverse{14} And the LORD said unto Abram, after that Lot was
separated from him, Lift up now thine eyes, and look from the place
where thou art northward, and southward, and eastward, and westward:
\bibleverse{15} For all the land which thou seest, to thee will I give
it, and to thy seed for ever. \bibleverse{16} And I will make thy seed
as the dust of the earth: so that if a man can number the dust of the
earth, then shall thy seed also be numbered. \bibleverse{17} Arise, walk
through the land in the length of it and in the breadth of it; for I
will give it unto thee. \bibleverse{18} Then Abram removed his tent, and
came and dwelt in the plain of Mamre, which is in Hebron, and built
there an altar unto the LORD.\footnote{\textbf{13:18} plain: Heb. plains}
\hypertarget{section-13}{%
\section{14}\label{section-13}}
\bibleverse{1} And it came to pass in the days of Amraphel king of
Shinar, Arioch king of Ellasar, Chedorlaomer king of Elam, and Tidal
king of nations; \bibleverse{2} That these made war with Bera king of
Sodom, and with Birsha king of Gomorrah, Shinab king of Admah, and
Shemeber king of Zeboiim, and the king of Bela, which is Zoar.
\bibleverse{3} All these were joined together in the vale of Siddim,
which is the salt sea. \bibleverse{4} Twelve years they served
Chedorlaomer, and in the thirteenth year they rebelled. \bibleverse{5}
And in the fourteenth year came Chedorlaomer, and the kings that were
with him, and smote the Rephaims in Ashteroth Karnaim, and the Zuzims in
Ham, and the Emims in Shaveh Kiriathaim,\footnote{\textbf{14:5}
Shaveh\ldots: or, The plain of Kiriathaim} \bibleverse{6} And the
Horites in their mount Seir, unto El-paran, which is by the
wilderness.\footnote{\textbf{14:6} El-paran: or, The plain of Paran}
\bibleverse{7} And they returned, and came to En-mishpat, which is
Kadesh, and smote all the country of the Amalekites, and also the
Amorites, that dwelt in Hazezon-tamar. \bibleverse{8} And there went out
the king of Sodom, and the king of Gomorrah, and the king of Admah, and
the king of Zeboiim, and the king of Bela (the same is Zoar;) and they
joined battle with them in the vale of Siddim; \bibleverse{9} With
Chedorlaomer the king of Elam, and with Tidal king of nations, and
Amraphel king of Shinar, and Arioch king of Ellasar; four kings with
five. \bibleverse{10} And the vale of Siddim was full of slimepits; and
the kings of Sodom and Gomorrah fled, and fell there; and they that
remained fled to the mountain. \bibleverse{11} And they took all the
goods of Sodom and Gomorrah, and all their victuals, and went their way.
\bibleverse{12} And they took Lot, Abram's brother's son, who dwelt in
Sodom, and his goods, and departed.
\bibleverse{13} And there came one that had escaped, and told Abram the
Hebrew; for he dwelt in the plain of Mamre the Amorite, brother of
Eshcol, and brother of Aner: and these were confederate with
Abram.\footnote{\textbf{14:13} plain: Heb. plains} \bibleverse{14} And
when Abram heard that his brother was taken captive, he armed his
trained servants, born in his own house, three hundred and eighteen, and
pursued them unto Dan.\footnote{\textbf{14:14} armed: or, led
forth}\footnote{\textbf{14:14} trained: or, instructed} \bibleverse{15}
And he divided himself against them, he and his servants, by night, and
smote them, and pursued them unto Hobah, which is on the left hand of
Damascus. \bibleverse{16} And he brought back all the goods, and also
brought again his brother Lot, and his goods, and the women also, and
the people.
\bibleverse{17} And the king of Sodom went out to meet him after his
return from the slaughter of Chedorlaomer, and of the kings that were
with him, at the valley of Shaveh, which is the king's dale.
\bibleverse{18} And Melchizedek king of Salem brought forth bread and
wine: and he was the priest of the most high God. \bibleverse{19} And he
blessed him, and said, Blessed be Abram of the most high God, possessor
of heaven and earth: \bibleverse{20} And blessed be the most high God,
which hath delivered thine enemies into thy hand. And he gave him tithes
of all.
\bibleverse{21} And the king of Sodom said unto Abram, Give me the
persons, and take the goods to thyself.\footnote{\textbf{14:21} persons:
Heb. souls} \bibleverse{22} And Abram said to the king of Sodom, I
have lift up mine hand unto the LORD, the most high God, the possessor
of heaven and earth, \bibleverse{23} That I will not take from a thread
even to a shoelatchet, and that I will not take any thing that is thine,
lest thou shouldest say, I have made Abram rich: \bibleverse{24} Save
only that which the young men have eaten, and the portion of the men
which went with me, Aner, Eshcol, and Mamre; let them take their
portion.
\hypertarget{section-14}{%
\section{15}\label{section-14}}
\bibleverse{1} After these things the word of the LORD came unto Abram
in a vision, saying, Fear not, Abram: I am thy shield, and thy exceeding
great reward.
\bibleverse{2} And Abram said, Lord GOD, what wilt thou give me, seeing
I go childless, and the steward of my house is this Eliezer of Damascus?
\bibleverse{3} And Abram said, Behold, to me thou hast given no seed:
and, lo, one born in my house is mine heir. \bibleverse{4} And, behold,
the word of the LORD came unto him, saying, This shall not be thine
heir; but he that shall come forth out of thine own bowels shall be
thine heir. \bibleverse{5} And he brought him forth abroad, and said,
Look now toward heaven, and tell the stars, if thou be able to number
them: and he said unto him, So shall thy seed be. \bibleverse{6} And he
believed in the LORD; and he counted it to him for righteousness.
\bibleverse{7} And he said unto him, I am the LORD that brought thee out
of Ur of the Chaldees, to give thee this land to inherit it.
\bibleverse{8} And he said, Lord GOD, whereby shall I know that I shall
inherit it? \bibleverse{9} And he said unto him, Take me an heifer of
three years old, and a she goat of three years old, and a ram of three
years old, and a turtledove, and a young pigeon. \bibleverse{10} And he
took unto him all these, and divided them in the midst, and laid each
piece one against another: but the birds divided he not. \bibleverse{11}
And when the fowls came down upon the carcases, Abram drove them away.
\bibleverse{12} And when the sun was going down, a deep sleep fell upon
Abram; and, lo, an horror of great darkness fell upon him.
\bibleverse{13} And he said unto Abram, Know of a surety that thy seed
shall be a stranger in a land that is not theirs, and shall serve them;
and they shall afflict them four hundred years; \bibleverse{14} And also
that nation, whom they shall serve, will I judge: and afterward shall
they come out with great substance. \bibleverse{15} And thou shalt go to
thy fathers in peace; thou shalt be buried in a good old age.
\bibleverse{16} But in the fourth generation they shall come hither
again: for the iniquity of the Amorites is not yet full.
\bibleverse{17} And it came to pass, that, when the sun went down, and
it was dark, behold a smoking furnace, and a burning lamp that passed
between those pieces.\footnote{\textbf{15:17} a burning\ldots: Heb. a
lamp of fire} \bibleverse{18} In the same day the LORD made a covenant
with Abram, saying, Unto thy seed have I given this land, from the river
of Egypt unto the great river, the river Euphrates: \bibleverse{19} The
Kenites, and the Kenizzites, and the Kadmonites, \bibleverse{20} And the
Hittites, and the Perizzites, and the Rephaims, \bibleverse{21} And the
Amorites, and the Canaanites, and the Girgashites, and the Jebusites.
\hypertarget{section-15}{%
\section{16}\label{section-15}}
\bibleverse{1} Now Sarai Abram's wife bare him no children: and she had
an handmaid, an Egyptian, whose name was Hagar. \bibleverse{2} And Sarai
said unto Abram, Behold now, the LORD hath restrained me from bearing: I
pray thee, go in unto my maid; it may be that I may obtain children by
her. And Abram hearkened to the voice of Sarai.\footnote{\textbf{16:2}
obtain\ldots: Heb. be built by her} \bibleverse{3} And Sarai Abram's
wife took Hagar her maid the Egyptian, after Abram had dwelt ten years
in the land of Canaan, and gave her to her husband Abram to be his wife.
\bibleverse{4} And he went in unto Hagar, and she conceived: and when
she saw that she had conceived, her mistress was despised in her eyes.
\bibleverse{5} And Sarai said unto Abram, My wrong be upon thee: I have
given my maid into thy bosom; and when she saw that she had conceived, I
was despised in her eyes: the LORD judge between me and thee.
\bibleverse{6} But Abram said unto Sarai, Behold, thy maid is in thy
hand; do to her as it pleaseth thee. And when Sarai dealt hardly with
her, she fled from her face.\footnote{\textbf{16:6} as\ldots: Heb. that
which is good in thine eyes}\footnote{\textbf{16:6} dealt\ldots: Heb.
afflicted her}
\bibleverse{7} And the angel of the LORD found her by a fountain of
water in the wilderness, by the fountain in the way to Shur.
\bibleverse{8} And he said, Hagar, Sarai's maid, whence camest thou? and
whither wilt thou go? And she said, I flee from the face of my mistress
Sarai. \bibleverse{9} And the angel of the LORD said unto her, Return to
thy mistress, and submit thyself under her hands.
\bibleverse{10} And the angel of the LORD said unto her, I will multiply
thy seed exceedingly, that it shall not be numbered for multitude.
\bibleverse{11} And the angel of the LORD said unto her, Behold, thou
art with child, and shalt bear a son, and shalt call his name Ishmael;
because the LORD hath heard thy affliction.\footnote{\textbf{16:11}
Ishmael: that is, God shall hear} \bibleverse{12} And he will be a
wild man; his hand will be against every man, and every man's hand
against him; and he shall dwell in the presence of all his brethren.
\bibleverse{13} And she called the name of the LORD that spake unto her,
Thou God seest me: for she said, Have I also here looked after him that
seeth me? \bibleverse{14} Wherefore the well was called Beer-lahai-roi;
behold, it is between Kadesh and Bered.\footnote{\textbf{16:14}
Beer-lahai-roi: that is, The well of him that liveth and seeth me}
\bibleverse{15} And Hagar bare Abram a son: and Abram called his son's
name, which Hagar bare, Ishmael. \bibleverse{16} And Abram was fourscore
and six years old, when Hagar bare Ishmael to Abram.
\hypertarget{section-16}{%
\section{17}\label{section-16}}
\bibleverse{1} And when Abram was ninety years old and nine, the LORD
appeared to Abram, and said unto him, I am the Almighty God; walk before
me, and be thou perfect.\footnote{\textbf{17:1} perfect: or, upright,
or, sincere} \bibleverse{2} And I will make my covenant between me and
thee, and will multiply thee exceedingly. \bibleverse{3} And Abram fell
on his face: and God talked with him, saying,
\bibleverse{4} As for me, behold, my covenant is with thee, and thou
shalt be a father of many nations.\footnote{\textbf{17:4} many\ldots:
Heb. multitude of nations} \bibleverse{5} Neither shall thy name any
more be called Abram, but thy name shall be Abraham; for a father of
many nations have I made thee.\footnote{\textbf{17:5} Abraham: that is,
Father of a great multitude} \bibleverse{6} And I will make thee
exceeding fruitful, and I will make nations of thee, and kings shall
come out of thee.
\bibleverse{7} And I will establish my covenant between me and thee and
thy seed after thee in their generations for an everlasting covenant, to
be a God unto thee, and to thy seed after thee. \bibleverse{8} And I
will give unto thee, and to thy seed after thee, the land wherein thou
art a stranger, all the land of Canaan, for an everlasting possession;
and I will be their God.\footnote{\textbf{17:8} wherein\ldots: Heb. of
thy sojournings}
\bibleverse{9} And God said unto Abraham, Thou shalt keep my covenant
therefore, thou, and thy seed after thee in their generations.
\bibleverse{10} This is my covenant, which ye shall keep, between me and
you and thy seed after thee; Every man child among you shall be
circumcised. \bibleverse{11} And ye shall circumcise the flesh of your
foreskin; and it shall be a token of the covenant betwixt me and you.
\bibleverse{12} And he that is eight days old shall be circumcised among
you, every man child in your generations, he that is born in the house,
or bought with money of any stranger, which is not of thy
seed.\footnote{\textbf{17:12} he that is eight\ldots: Heb. a son of
eight days} \bibleverse{13} He that is born in thy house, and he that
is bought with thy money, must needs be circumcised: and my covenant
shall be in your flesh for an everlasting covenant. \bibleverse{14} And
the uncircumcised man child whose flesh of his foreskin is not
circumcised, that soul shall be cut off from his people; he hath broken
my covenant.
\bibleverse{15} And God said unto Abraham, As for Sarai thy wife, thou
shalt not call her name Sarai, but Sarah shall her name be.\footnote{\textbf{17:15}
Sarah: that is Princess} \bibleverse{16} And I will bless her, and
give thee a son also of her: yea, I will bless her, and she shall be a
mother of nations; kings of people shall be of her.\footnote{\textbf{17:16}
she\ldots: Heb. she shall become nations} \bibleverse{17} Then Abraham
fell upon his face, and laughed, and said in his heart, Shall a child be
born unto him that is an hundred years old? and shall Sarah, that is
ninety years old, bear? \bibleverse{18} And Abraham said unto God, O
that Ishmael might live before thee! \bibleverse{19} And God said, Sarah
thy wife shall bear thee a son indeed; and thou shalt call his name
Isaac: and I will establish my covenant with him for an everlasting
covenant, and with his seed after him. \bibleverse{20} And as for
Ishmael, I have heard thee: Behold, I have blessed him, and will make
him fruitful, and will multiply him exceedingly; twelve princes shall he
beget, and I will make him a great nation. \bibleverse{21} But my
covenant will I establish with Isaac, which Sarah shall bear unto thee
at this set time in the next year. \bibleverse{22} And he left off
talking with him, and God went up from Abraham.
\bibleverse{23} And Abraham took Ishmael his son, and all that were born
in his house, and all that were bought with his money, every male among
the men of Abraham's house; and circumcised the flesh of their foreskin
in the selfsame day, as God had said unto him. \bibleverse{24} And
Abraham was ninety years old and nine, when he was circumcised in the
flesh of his foreskin. \bibleverse{25} And Ishmael his son was thirteen
years old, when he was circumcised in the flesh of his foreskin.
\bibleverse{26} In the selfsame day was Abraham circumcised, and Ishmael
his son. \bibleverse{27} And all the men of his house, born in the
house, and bought with money of the stranger, were circumcised with him.
\hypertarget{section-17}{%
\section{18}\label{section-17}}
\bibleverse{1} And the LORD appeared unto him in the plains of Mamre:
and he sat in the tent door in the heat of the day;\footnote{\textbf{18:1}
plain: Heb. plains} \bibleverse{2} And he lift up his eyes and looked,
and, lo, three men stood by him: and when he saw them, he ran to meet
them from the tent door, and bowed himself toward the ground,
\bibleverse{3} And said, My Lord, if now I have found favour in thy
sight, pass not away, I pray thee, from thy servant: \bibleverse{4} Let
a little water, I pray you, be fetched, and wash your feet, and rest
yourselves under the tree: \bibleverse{5} And I will fetch a morsel of
bread, and comfort ye your hearts; after that ye shall pass on: for
therefore are ye come to your servant. And they said, So do, as thou
hast said.\footnote{\textbf{18:5} comfort: Heb.
stay}\footnote{\textbf{18:5} are\ldots: Heb. you have passed}
\bibleverse{6} And Abraham hastened into the tent unto Sarah, and said,
Make ready quickly three measures of fine meal, knead it, and make cakes
upon the hearth.\footnote{\textbf{18:6} Make ready\ldots: Heb. Hasten}
\bibleverse{7} And Abraham ran unto the herd, and fetcht a calf tender
and good, and gave it unto a young man; and he hasted to dress it.
\bibleverse{8} And he took butter, and milk, and the calf which he had
dressed, and set it before them; and he stood by them under the tree,
and they did eat.
\bibleverse{9} And they said unto him, Where is Sarah thy wife? And he
said, Behold, in the tent. \bibleverse{10} And he said, I will certainly
return unto thee according to the time of life; and, lo, Sarah thy wife
shall have a son. And Sarah heard it in the tent door, which was behind
him. \bibleverse{11} Now Abraham and Sarah were old and well stricken in
age; and it ceased to be with Sarah after the manner of women.
\bibleverse{12} Therefore Sarah laughed within herself, saying, After I
am waxed old shall I have pleasure, my lord being old also?
\bibleverse{13} And the LORD said unto Abraham, Wherefore did Sarah
laugh, saying, Shall I of a surety bear a child, which am old?
\bibleverse{14} Is any thing too hard for the LORD? At the time
appointed I will return unto thee, according to the time of life, and
Sarah shall have a son. \bibleverse{15} Then Sarah denied, saying, I
laughed not; for she was afraid. And he said, Nay; but thou didst laugh.
\bibleverse{16} And the men rose up from thence, and looked toward
Sodom: and Abraham went with them to bring them on the way.
\bibleverse{17} And the LORD said, Shall I hide from Abraham that thing
which I do; \bibleverse{18} Seeing that Abraham shall surely become a
great and mighty nation, and all the nations of the earth shall be
blessed in him? \bibleverse{19} For I know him, that he will command his
children and his household after him, and they shall keep the way of the
LORD, to do justice and judgment; that the LORD may bring upon Abraham
that which he hath spoken of him. \bibleverse{20} And the LORD said,
Because the cry of Sodom and Gomorrah is great, and because their sin is
very grievous; \bibleverse{21} I will go down now, and see whether they
have done altogether according to the cry of it, which is come unto me;
and if not, I will know. \bibleverse{22} And the men turned their faces
from thence, and went toward Sodom: but Abraham stood yet before the
LORD.
\bibleverse{23} And Abraham drew near, and said, Wilt thou also destroy
the righteous with the wicked? \bibleverse{24} Peradventure there be
fifty righteous within the city: wilt thou also destroy and not spare
the place for the fifty righteous that are therein? \bibleverse{25} That
be far from thee to do after this manner, to slay the righteous with the
wicked: and that the righteous should be as the wicked, that be far from
thee: Shall not the Judge of all the earth do right? \bibleverse{26} And
the LORD said, If I find in Sodom fifty righteous within the city, then
I will spare all the place for their sakes. \bibleverse{27} And Abraham
answered and said, Behold now, I have taken upon me to speak unto the
Lord, which am but dust and ashes: \bibleverse{28} Peradventure there
shall lack five of the fifty righteous: wilt thou destroy all the city
for lack of five? And he said, If I find there forty and five, I will
not destroy it. \bibleverse{29} And he spake unto him yet again, and
said, Peradventure there shall be forty found there. And he said, I will
not do it for forty's sake. \bibleverse{30} And he said unto him, Oh let
not the Lord be angry, and I will speak: Peradventure there shall thirty
be found there. And he said, I will not do it, if I find thirty there.
\bibleverse{31} And he said, Behold now, I have taken upon me to speak
unto the Lord: Peradventure there shall be twenty found there. And he
said, I will not destroy it for twenty's sake. \bibleverse{32} And he
said, Oh let not the Lord be angry, and I will speak yet but this once:
Peradventure ten shall be found there. And he said, I will not destroy
it for ten's sake. \bibleverse{33} And the LORD went his way, as soon as
he had left communing with Abraham: and Abraham returned unto his place.
\hypertarget{section-18}{%
\section{19}\label{section-18}}
\bibleverse{1} And there came two angels to Sodom at even; and Lot sat
in the gate of Sodom: and Lot seeing them rose up to meet them; and he
bowed himself with his face toward the ground; \bibleverse{2} And he
said, Behold now, my lords, turn in, I pray you, into your servant's
house, and tarry all night, and wash your feet, and ye shall rise up
early, and go on your ways. And they said, Nay; but we will abide in the
street all night. \bibleverse{3} And he pressed upon them greatly; and
they turned in unto him, and entered into his house; and he made them a
feast, and did bake unleavened bread, and they did eat.
\bibleverse{4} But before they lay down, the men of the city, even the
men of Sodom, compassed the house round, both old and young, all the
people from every quarter: \bibleverse{5} And they called unto Lot, and
said unto him, Where are the men which came in to thee this night? bring
them out unto us, that we may know them. \bibleverse{6} And Lot went out
at the door unto them, and shut the door after him, \bibleverse{7} And
said, I pray you, brethren, do not so wickedly. \bibleverse{8} Behold
now, I have two daughters which have not known man; let me, I pray you,
bring them out unto you, and do ye to them as is good in your eyes: only
unto these men do nothing; for therefore came they under the shadow of
my roof. \bibleverse{9} And they said, Stand back. And they said again,
This one fellow came in to sojourn, and he will needs be a judge: now
will we deal worse with thee, than with them. And they pressed sore upon
the man, even Lot, and came near to break the door. \bibleverse{10} But
the men put forth their hand, and pulled Lot into the house to them, and
shut to the door. \bibleverse{11} And they smote the men that were at
the door of the house with blindness, both small and great: so that they
wearied themselves to find the door.
\bibleverse{12} And the men said unto Lot, Hast thou here any besides?
son in law, and thy sons, and thy daughters, and whatsoever thou hast in
the city, bring them out of this place: \bibleverse{13} For we will
destroy this place, because the cry of them is waxen great before the
face of the LORD; and the LORD hath sent us to destroy it.
\bibleverse{14} And Lot went out, and spake unto his sons in law, which
married his daughters, and said, Up, get you out of this place; for the
LORD will destroy this city. But he seemed as one that mocked unto his
sons in law.
\bibleverse{15} And when the morning arose, then the angels hastened
Lot, saying, Arise, take thy wife, and thy two daughters, which are
here; lest thou be consumed in the iniquity of the
city.\footnote{\textbf{19:15} are here: Heb. are
found}\footnote{\textbf{19:15} iniquity: or, punishment} \bibleverse{16}
And while he lingered, the men laid hold upon his hand, and upon the
hand of his wife, and upon the hand of his two daughters; the LORD being
merciful unto him: and they brought him forth, and set him without the
city.
\bibleverse{17} And it came to pass, when they had brought them forth
abroad, that he said, Escape for thy life; look not behind thee, neither
stay thou in all the plain; escape to the mountain, lest thou be
consumed. \bibleverse{18} And Lot said unto them, Oh, not so, my Lord:
\bibleverse{19} Behold now, thy servant hath found grace in thy sight,
and thou hast magnified thy mercy, which thou hast shewed unto me in
saving my life; and I cannot escape to the mountain, lest some evil take
me, and I die: \bibleverse{20} Behold now, this city is near to flee
unto, and it is a little one: Oh, let me escape thither, (is it not a
little one?) and my soul shall live. \bibleverse{21} And he said unto
him, See, I have accepted thee concerning this thing also, that I will
not overthrow this city, for the which thou hast spoken.\footnote{\textbf{19:21}
thee: Heb. thy face} \bibleverse{22} Haste thee, escape thither; for I
cannot do any thing till thou be come thither. Therefore the name of the
city was called Zoar.\footnote{\textbf{19:22} Zoar: that is, Little}
\bibleverse{23} The sun was risen upon the earth when Lot entered into
Zoar.\footnote{\textbf{19:23} risen: Heb. gone forth}
\bibleverse{24} Then the LORD rained upon Sodom and upon Gomorrah
brimstone and fire from the LORD out of heaven; \bibleverse{25} And he
overthrew those cities, and all the plain, and all the inhabitants of
the cities, and that which grew upon the ground.
\bibleverse{26} But his wife looked back from behind him, and she became
a pillar of salt.
\bibleverse{27} And Abraham gat up early in the morning to the place
where he stood before the LORD: \bibleverse{28} And he looked toward
Sodom and Gomorrah, and toward all the land of the plain, and beheld,
and, lo, the smoke of the country went up as the smoke of a furnace.
\bibleverse{29} And it came to pass, when God destroyed the cities of
the plain, that God remembered Abraham, and sent Lot out of the midst of
the overthrow, when he overthrew the cities in the which Lot dwelt.
\bibleverse{30} And Lot went up out of Zoar, and dwelt in the mountain,
and his two daughters with him; for he feared to dwell in Zoar: and he
dwelt in a cave, he and his two daughters. \bibleverse{31} And the
firstborn said unto the younger, Our father is old, and there is not a
man in the earth to come in unto us after the manner of all the earth:
\bibleverse{32} Come, let us make our father drink wine, and we will lie
with him, that we may preserve seed of our father. \bibleverse{33} And
they made their father drink wine that night: and the firstborn went in,
and lay with her father; and he perceived not when she lay down, nor
when she arose. \bibleverse{34} And it came to pass on the morrow, that
the firstborn said unto the younger, Behold, I lay yesternight with my
father: let us make him drink wine this night also; and go thou in, and
lie with him, that we may preserve seed of our father. \bibleverse{35}
And they made their father drink wine that night also: and the younger
arose, and lay with him; and he perceived not when she lay down, nor
when she arose. \bibleverse{36} Thus were both the daughters of Lot with
child by their father. \bibleverse{37} And the firstborn bare a son, and
called his name Moab: the same is the father of the Moabites unto this
day. \bibleverse{38} And the younger, she also bare a son, and called
his name Ben-ammi: the same is the father of the children of Ammon unto
this day.
\hypertarget{section-19}{%
\section{20}\label{section-19}}
\bibleverse{1} And Abraham journeyed from thence toward the south
country, and dwelled between Kadesh and Shur, and sojourned in Gerar.
\bibleverse{2} And Abraham said of Sarah his wife, She is my sister: and
Abimelech king of Gerar sent, and took Sarah.
\bibleverse{3} But God came to Abimelech in a dream by night, and said
to him, Behold, thou art but a dead man, for the woman which thou hast
taken; for she is a man's wife.\footnote{\textbf{20:3} a man's\ldots:
Heb. married to an husband} \bibleverse{4} But Abimelech had not come
near her: and he said, Lord, wilt thou slay also a righteous nation?
\bibleverse{5} Said he not unto me, She is my sister? and she, even she
herself said, He is my brother: in the integrity of my heart and
innocency of my hands have I done this.\footnote{\textbf{20:5}
integrity: or, simplicity, or, sincerity} \bibleverse{6} And God said
unto him in a dream, Yea, I know that thou didst this in the integrity
of thy heart; for I also withheld thee from sinning against me:
therefore suffered I thee not to touch her. \bibleverse{7} Now therefore
restore the man his wife; for he is a prophet, and he shall pray for
thee, and thou shalt live: and if thou restore her not, know thou that
thou shalt surely die, thou, and all that are thine.
\bibleverse{8} Therefore Abimelech rose early in the morning, and called
all his servants, and told all these things in their ears: and the men
were sore afraid. \bibleverse{9} Then Abimelech called Abraham, and said
unto him, What hast thou done unto us? and what have I offended thee,
that thou hast brought on me and on my kingdom a great sin? thou hast
done deeds unto me that ought not to be done. \bibleverse{10} And
Abimelech said unto Abraham, What sawest thou, that thou hast done this
thing? \bibleverse{11} And Abraham said, Because I thought, Surely the
fear of God is not in this place; and they will slay me for my wife's
sake. \bibleverse{12} And yet indeed she is my sister; she is the
daughter of my father, but not the daughter of my mother; and she became
my wife. \bibleverse{13} And it came to pass, when God caused me to
wander from my father's house, that I said unto her, This is thy
kindness which thou shalt shew unto me; at every place whither we shall
come, say of me, He is my brother.
\bibleverse{14} And Abimelech took sheep, and oxen, and menservants, and
womenservants, and gave them unto Abraham, and restored him Sarah his
wife. \bibleverse{15} And Abimelech said, Behold, my land is before
thee: dwell where it pleaseth thee.\footnote{\textbf{20:15} where\ldots:
Heb. as is good in thine eyes} \bibleverse{16} And unto Sarah he said,
Behold, I have given thy brother a thousand pieces of silver: behold, he
is to thee a covering of the eyes, unto all that are with thee, and with
all other: thus she was reproved.
\bibleverse{17} So Abraham prayed unto God: and God healed Abimelech,
and his wife, and his maidservants; and they bare children.
\bibleverse{18} For the LORD had fast closed up all the wombs of the
house of Abimelech, because of Sarah Abraham's wife.
\hypertarget{section-20}{%
\section{21}\label{section-20}}
\bibleverse{1} And the LORD visited Sarah as he had said, and the LORD
did unto Sarah as he had spoken. \bibleverse{2} For Sarah conceived, and
bare Abraham a son in his old age, at the set time of which God had
spoken to him. \bibleverse{3} And Abraham called the name of his son
that was born unto him, whom Sarah bare to him, Isaac. \bibleverse{4}
And Abraham circumcised his son Isaac being eight days old, as God had
commanded him. \bibleverse{5} And Abraham was an hundred years old, when
his son Isaac was born unto him.
\bibleverse{6} And Sarah said, God hath made me to laugh, so that all
that hear will laugh with me. \bibleverse{7} And she said, Who would
have said unto Abraham, that Sarah should have given children suck? for
I have born him a son in his old age. \bibleverse{8} And the child grew,
and was weaned: and Abraham made a great feast the same day that Isaac
was weaned.
\bibleverse{9} And Sarah saw the son of Hagar the Egyptian, which she
had born unto Abraham, mocking. \bibleverse{10} Wherefore she said unto
Abraham, Cast out this bondwoman and her son: for the son of this
bondwoman shall not be heir with my son, even with Isaac.
\bibleverse{11} And the thing was very grievous in Abraham's sight
because of his son.
\bibleverse{12} And God said unto Abraham, Let it not be grievous in thy
sight because of the lad, and because of thy bondwoman; in all that
Sarah hath said unto thee, hearken unto her voice; for in Isaac shall
thy seed be called. \bibleverse{13} And also of the son of the bondwoman
will I make a nation, because he is thy seed.
\bibleverse{14} And Abraham rose up early in the morning, and took
bread, and a bottle of water, and gave it unto Hagar, putting it on her
shoulder, and the child, and sent her away: and she departed, and
wandered in the wilderness of Beer-sheba. \bibleverse{15} And the water
was spent in the bottle, and she cast the child under one of the shrubs.
\bibleverse{16} And she went, and sat her down over against him a good
way off, as it were a bowshot: for she said, Let me not see the death of
the child. And she sat over against him, and lift up her voice, and
wept. \bibleverse{17} And God heard the voice of the lad; and the angel
of God called to Hagar out of heaven, and said unto her, What aileth
thee, Hagar? fear not; for God hath heard the voice of the lad where he
is. \bibleverse{18} Arise, lift up the lad, and hold him in thine hand;
for I will make him a great nation. \bibleverse{19} And God opened her
eyes, and she saw a well of water; and she went, and filled the bottle
with water, and gave the lad drink. \bibleverse{20} And God was with the
lad; and he grew, and dwelt in the wilderness, and became an archer.
\bibleverse{21} And he dwelt in the wilderness of Paran: and his mother
took him a wife out of the land of Egypt.
\bibleverse{22} And it came to pass at that time, that Abimelech and
Phichol the chief captain of his host spake unto Abraham, saying, God is
with thee in all that thou doest: \bibleverse{23} Now therefore swear
unto me here by God that thou wilt not deal falsely with me, nor with my
son, nor with my son's son: but according to the kindness that I have
done unto thee, thou shalt do unto me, and to the land wherein thou hast
sojourned.\footnote{\textbf{21:23} that thou\ldots: Heb. if thou shalt
lie unto me} \bibleverse{24} And Abraham said, I will swear.
\bibleverse{25} And Abraham reproved Abimelech because of a well of
water, which Abimelech's servants had violently taken away.
\bibleverse{26} And Abimelech said, I wot not who hath done this thing:
neither didst thou tell me, neither yet heard I of it, but to day.
\bibleverse{27} And Abraham took sheep and oxen, and gave them unto
Abimelech; and both of them made a covenant. \bibleverse{28} And Abraham
set seven ewe lambs of the flock by themselves. \bibleverse{29} And
Abimelech said unto Abraham, What mean these seven ewe lambs which thou
hast set by themselves? \bibleverse{30} And he said, For these seven ewe
lambs shalt thou take of my hand, that they may be a witness unto me,
that I have digged this well. \bibleverse{31} Wherefore he called that
place Beer-sheba; because there they sware both of them.\footnote{\textbf{21:31}
Beer-sheba: that is, The well of the oath} \bibleverse{32} Thus they
made a covenant at Beer-sheba: then Abimelech rose up, and Phichol the
chief captain of his host, and they returned into the land of the
Philistines.
\bibleverse{33} And Abraham planted a grove in Beer-sheba, and called
there on the name of the LORD, the everlasting God.\footnote{\textbf{21:33}
grove: or, tree} \bibleverse{34} And Abraham sojourned in the
Philistines' land many days.
\hypertarget{section-21}{%
\section{22}\label{section-21}}
\bibleverse{1} And it came to pass after these things, that God did
tempt Abraham, and said unto him, Abraham: and he said, Behold, here I
am.\footnote{\textbf{22:1} Behold\ldots: Heb. Behold me} \bibleverse{2}
And he said, Take now thy son, thine only son Isaac, whom thou lovest,
and get thee into the land of Moriah; and offer him there for a burnt
offering upon one of the mountains which I will tell thee of.
\bibleverse{3} And Abraham rose up early in the morning, and saddled his
ass, and took two of his young men with him, and Isaac his son, and
clave the wood for the burnt offering, and rose up, and went unto the
place of which God had told him. \bibleverse{4} Then on the third day
Abraham lifted up his eyes, and saw the place afar off. \bibleverse{5}
And Abraham said unto his young men, Abide ye here with the ass; and I
and the lad will go yonder and worship, and come again to you.
\bibleverse{6} And Abraham took the wood of the burnt offering, and laid
it upon Isaac his son; and he took the fire in his hand, and a knife;
and they went both of them together. \bibleverse{7} And Isaac spake unto
Abraham his father, and said, My father: and he said, Here am I, my son.
And he said, Behold the fire and the wood: but where is the lamb for a
burnt offering?\footnote{\textbf{22:7} lamb: or, kid} \bibleverse{8} And
Abraham said, My son, God will provide himself a lamb for a burnt
offering: so they went both of them together. \bibleverse{9} And they
came to the place which God had told him of; and Abraham built an altar
there, and laid the wood in order, and bound Isaac his son, and laid him
on the altar upon the wood. \bibleverse{10} And Abraham stretched forth
his hand, and took the knife to slay his son.
\bibleverse{11} And the angel of the LORD called unto him out of heaven,
and said, Abraham, Abraham: and he said, Here am I. \bibleverse{12} And
he said, Lay not thine hand upon the lad, neither do thou any thing unto
him: for now I know that thou fearest God, seeing thou hast not withheld
thy son, thine only son from me. \bibleverse{13} And Abraham lifted up
his eyes, and looked, and behold behind him a ram caught in a thicket by
his horns: and Abraham went and took the ram, and offered him up for a
burnt offering in the stead of his son. \bibleverse{14} And Abraham
called the name of that place Jehovah-jireh: as it is said to this day,
In the mount of the LORD it shall be seen.\footnote{\textbf{22:14}
Jehovah-jireh: that is, The Lord will see, or, provide}
\bibleverse{15} And the angel of the LORD called unto Abraham out of
heaven the second time, \bibleverse{16} And said, By myself have I
sworn, saith the LORD, for because thou hast done this thing, and hast
not withheld thy son, thine only son: \bibleverse{17} That in blessing I
will bless thee, and in multiplying I will multiply thy seed as the
stars of the heaven, and as the sand which is upon the sea shore; and
thy seed shall possess the gate of his enemies;\footnote{\textbf{22:17}
shore: Heb. lip} \bibleverse{18} And in thy seed shall all the nations
of the earth be blessed; because thou hast obeyed my voice.
\bibleverse{19} So Abraham returned unto his young men, and they rose up
and went together to Beer-sheba; and Abraham dwelt at Beer-sheba.
\bibleverse{20} And it came to pass after these things, that it was told
Abraham, saying, Behold, Milcah, she hath also born children unto thy
brother Nahor; \bibleverse{21} Huz his firstborn, and Buz his brother,
and Kemuel the father of Aram, \bibleverse{22} And Chesed, and Hazo, and
Pildash, and Jidlaph, and Bethuel. \bibleverse{23} And Bethuel begat
Rebekah: these eight Milcah did bear to Nahor, Abraham's
brother.\footnote{\textbf{22:23} Rebekah: Gr. Rebecca} \bibleverse{24}
And his concubine, whose name was Reumah, she bare also Tebah, and
Gaham, and Thahash, and Maachah.
\hypertarget{section-22}{%
\section{23}\label{section-22}}
\bibleverse{1} And Sarah was an hundred and seven and twenty years old:
these were the years of the life of Sarah. \bibleverse{2} And Sarah died
in Kirjath-arba; the same is Hebron in the land of Canaan: and Abraham
came to mourn for Sarah, and to weep for her.
\bibleverse{3} And Abraham stood up from before his dead, and spake unto
the sons of Heth, saying, \bibleverse{4} I am a stranger and a sojourner
with you: give me a possession of a buryingplace with you, that I may
bury my dead out of my sight. \bibleverse{5} And the children of Heth
answered Abraham, saying unto him, \bibleverse{6} Hear us, my lord: thou
art a mighty prince among us: in the choice of our sepulchres bury thy
dead; none of us shall withhold from thee his sepulchre, but that thou
mayest bury thy dead.\footnote{\textbf{23:6} a mighty\ldots: Heb. a
prince of God} \bibleverse{7} And Abraham stood up, and bowed himself
to the people of the land, even to the children of Heth. \bibleverse{8}
And he communed with them, saying, If it be your mind that I should bury
my dead out of my sight; hear me, and intreat for me to Ephron the son
of Zohar, \bibleverse{9} That he may give me the cave of Machpelah,
which he hath, which is in the end of his field; for as much money as it
is worth he shall give it me for a possession of a buryingplace amongst
you.\footnote{\textbf{23:9} as much\ldots: Heb. full money}
\bibleverse{10} And Ephron dwelt among the children of Heth: and Ephron
the Hittite answered Abraham in the audience of the children of Heth,
even of all that went in at the gate of his city, saying,\footnote{\textbf{23:10}
audience: Heb. ears} \bibleverse{11} Nay, my lord, hear me: the field
give I thee, and the cave that is therein, I give it thee; in the
presence of the sons of my people give I it thee: bury thy dead.
\bibleverse{12} And Abraham bowed down himself before the people of the
land. \bibleverse{13} And he spake unto Ephron in the audience of the
people of the land, saying, But if thou wilt give it, I pray thee, hear
me: I will give thee money for the field; take it of me, and I will bury
my dead there. \bibleverse{14} And Ephron answered Abraham, saying unto
him, \bibleverse{15} My lord, hearken unto me: the land is worth four
hundred shekels of silver; what is that betwixt me and thee? bury
therefore thy dead.
\bibleverse{16} And Abraham hearkened unto Ephron; and Abraham weighed
to Ephron the silver, which he had named in the audience of the sons of
Heth, four hundred shekels of silver, current money with the merchant.
\bibleverse{17} And the field of Ephron, which was in Machpelah, which
was before Mamre, the field, and the cave which was therein, and all the
trees that were in the field, that were in all the borders round about,
were made sure \bibleverse{18} Unto Abraham for a possession in the
presence of the children of Heth, before all that went in at the gate of
his city. \bibleverse{19} And after this, Abraham buried Sarah his wife
in the cave of the field of Machpelah before Mamre: the same is Hebron
in the land of Canaan. \bibleverse{20} And the field, and the cave that
is therein, were made sure unto Abraham for a possession of a
buryingplace by the sons of Heth.
\hypertarget{section-23}{%
\section{24}\label{section-23}}
\bibleverse{1} And Abraham was old, and well stricken in age: and the
LORD had blessed Abraham in all things.\footnote{\textbf{24:1}
well\ldots: Heb. gone into days} \bibleverse{2} And Abraham said unto
his eldest servant of his house, that ruled over all that he had, Put, I
pray thee, thy hand under my thigh: \bibleverse{3} And I will make thee
swear by the LORD, the God of heaven, and the God of the earth, that
thou shalt not take a wife unto my son of the daughters of the
Canaanites, among whom I dwell: \bibleverse{4} But thou shalt go unto my
country, and to my kindred, and take a wife unto my son Isaac.
\bibleverse{5} And the servant said unto him, Peradventure the woman
will not be willing to follow me unto this land: must I needs bring thy
son again unto the land from whence thou camest? \bibleverse{6} And
Abraham said unto him, Beware thou that thou bring not my son thither
again.
\bibleverse{7} The LORD God of heaven, which took me from my father's
house, and from the land of my kindred, and which spake unto me, and
that sware unto me, saying, Unto thy seed will I give this land; he
shall send his angel before thee, and thou shalt take a wife unto my son
from thence. \bibleverse{8} And if the woman will not be willing to
follow thee, then thou shalt be clear from this my oath: only bring not
my son thither again. \bibleverse{9} And the servant put his hand under
the thigh of Abraham his master, and sware to him concerning that
matter.
\bibleverse{10} And the servant took ten camels of the camels of his
master, and departed; for all the goods of his master were in his hand:
and he arose, and went to Mesopotamia, unto the city of
Nahor.\footnote{\textbf{24:10} for: or, and} \bibleverse{11} And he made
his camels to kneel down without the city by a well of water at the time
of the evening, even the time that women go out to draw
water.\footnote{\textbf{24:11} that\ldots: Heb. that women who draw
water go forth} \bibleverse{12} And he said, O LORD God of my master
Abraham, I pray thee, send me good speed this day, and shew kindness
unto my master Abraham. \bibleverse{13} Behold, I stand here by the well
of water; and the daughters of the men of the city come out to draw
water: \bibleverse{14} And let it come to pass, that the damsel to whom
I shall say, Let down thy pitcher, I pray thee, that I may drink; and
she shall say, Drink, and I will give thy camels drink also: let the
same be she that thou hast appointed for thy servant Isaac; and thereby
shall I know that thou hast shewed kindness unto my master.
\bibleverse{15} And it came to pass, before he had done speaking, that,
behold, Rebekah came out, who was born to Bethuel, son of Milcah, the
wife of Nahor, Abraham's brother, with her pitcher upon her shoulder.
\bibleverse{16} And the damsel was very fair to look upon, a virgin,
neither had any man known her: and she went down to the well, and filled
her pitcher, and came up.\footnote{\textbf{24:16} very\ldots: Heb. good
of countenance} \bibleverse{17} And the servant ran to meet her, and
said, Let me, I pray thee, drink a little water of thy pitcher.
\bibleverse{18} And she said, Drink, my lord: and she hasted, and let
down her pitcher upon her hand, and gave him drink. \bibleverse{19} And
when she had done giving him drink, she said, I will draw water for thy
camels also, until they have done drinking. \bibleverse{20} And she
hasted, and emptied her pitcher into the trough, and ran again unto the
well to draw water, and drew for all his camels. \bibleverse{21} And the
man wondering at her held his peace, to wit whether the LORD had made
his journey prosperous or not. \bibleverse{22} And it came to pass, as
the camels had done drinking, that the man took a golden earring of half
a shekel weight, and two bracelets for her hands of ten shekels weight
of gold;\footnote{\textbf{24:22} earring: or, jewel for the forehead}
\bibleverse{23} And said, Whose daughter art thou? tell me, I pray thee:
is there room in thy father's house for us to lodge in? \bibleverse{24}
And she said unto him, I am the daughter of Bethuel the son of Milcah,
which she bare unto Nahor. \bibleverse{25} She said moreover unto him,
We have both straw and provender enough, and room to lodge in.
\bibleverse{26} And the man bowed down his head, and worshipped the
LORD. \bibleverse{27} And he said, Blessed be the LORD God of my master
Abraham, who hath not left destitute my master of his mercy and his
truth: I being in the way, the LORD led me to the house of my master's
brethren. \bibleverse{28} And the damsel ran, and told them of her
mother's house these things.
\bibleverse{29} And Rebekah had a brother, and his name was Laban: and
Laban ran out unto the man, unto the well. \bibleverse{30} And it came
to pass, when he saw the earring and bracelets upon his sister's hands,
and when he heard the words of Rebekah his sister, saying, Thus spake
the man unto me; that he came unto the man; and, behold, he stood by the
camels at the well. \bibleverse{31} And he said, Come in, thou blessed
of the LORD; wherefore standest thou without? for I have prepared the
house, and room for the camels.
\bibleverse{32} And the man came into the house: and he ungirded his
camels, and gave straw and provender for the camels, and water to wash
his feet, and the men's feet that were with him. \bibleverse{33} And
there was set meat before him to eat: but he said, I will not eat, until
I have told mine errand. And he said, Speak on. \bibleverse{34} And he
said, I am Abraham's servant. \bibleverse{35} And the LORD hath blessed
my master greatly; and he is become great: and he hath given him flocks,
and herds, and silver, and gold, and menservants, and maidservants, and
camels, and asses. \bibleverse{36} And Sarah my master's wife bare a son
to my master when she was old: and unto him hath he given all that he
hath. \bibleverse{37} And my master made me swear, saying, Thou shalt
not take a wife to my son of the daughters of the Canaanites, in whose
land I dwell: \bibleverse{38} But thou shalt go unto my father's house,
and to my kindred, and take a wife unto my son. \bibleverse{39} And I
said unto my master, Peradventure the woman will not follow me.
\bibleverse{40} And he said unto me, The LORD, before whom I walk, will
send his angel with thee, and prosper thy way; and thou shalt take a
wife for my son of my kindred, and of my father's house: \bibleverse{41}
Then shalt thou be clear from this my oath, when thou comest to my
kindred; and if they give not thee one, thou shalt be clear from my
oath. \bibleverse{42} And I came this day unto the well, and said, O
LORD God of my master Abraham, if now thou do prosper my way which I go:
\bibleverse{43} Behold, I stand by the well of water; and it shall come
to pass, that when the virgin cometh forth to draw water, and I say to
her, Give me, I pray thee, a little water of thy pitcher to drink;
\bibleverse{44} And she say to me, Both drink thou, and I will also draw
for thy camels: let the same be the woman whom the LORD hath appointed
out for my master's son. \bibleverse{45} And before I had done speaking
in mine heart, behold, Rebekah came forth with her pitcher on her
shoulder; and she went down unto the well, and drew water: and I said
unto her, Let me drink, I pray thee. \bibleverse{46} And she made haste,
and let down her pitcher from her shoulder, and said, Drink, and I will
give thy camels drink also: so I drank, and she made the camels drink
also. \bibleverse{47} And I asked her, and said, Whose daughter art
thou? And she said, The daughter of Bethuel, Nahor's son, whom Milcah
bare unto him: and I put the earring upon her face, and the bracelets
upon her hands. \bibleverse{48} And I bowed down my head, and worshipped
the LORD, and blessed the LORD God of my master Abraham, which had led
me in the right way to take my master's brother's daughter unto his son.
\bibleverse{49} And now if ye will deal kindly and truly with my master,
tell me: and if not, tell me; that I may turn to the right hand, or to
the left. \bibleverse{50} Then Laban and Bethuel answered and said, The
thing proceedeth from the LORD: we cannot speak unto thee bad or good.
\bibleverse{51} Behold, Rebekah is before thee, take her, and go, and
let her be thy master's son's wife, as the LORD hath spoken.
\bibleverse{52} And it came to pass, that, when Abraham's servant heard
their words, he worshipped the LORD, bowing himself to the earth.
\bibleverse{53} And the servant brought forth jewels of silver, and
jewels of gold, and raiment, and gave them to Rebekah: he gave also to
her brother and to her mother precious things.\footnote{\textbf{24:53}
jewels: Heb. vessels}
\bibleverse{54} And they did eat and drink, he and the men that were
with him, and tarried all night; and they rose up in the morning, and he
said, Send me away unto my master. \bibleverse{55} And her brother and
her mother said, Let the damsel abide with us a few days, at the least
ten; after that she shall go.\footnote{\textbf{24:55} a few\ldots: or, a
full year, or ten months} \bibleverse{56} And he said unto them,
Hinder me not, seeing the LORD hath prospered my way; send me away that
I may go to my master. \bibleverse{57} And they said, We will call the
damsel, and enquire at her mouth. \bibleverse{58} And they called
Rebekah, and said unto her, Wilt thou go with this man? And she said, I
will go. \bibleverse{59} And they sent away Rebekah their sister, and
her nurse, and Abraham's servant, and his men. \bibleverse{60} And they
blessed Rebekah, and said unto her, Thou art our sister, be thou the
mother of thousands of millions, and let thy seed possess the gate of
those which hate them.
\bibleverse{61} And Rebekah arose, and her damsels, and they rode upon
the camels, and followed the man: and the servant took Rebekah, and went
his way.
\bibleverse{62} And Isaac came from the way of the well Lahai-roi; for
he dwelt in the south country. \bibleverse{63} And Isaac went out to
meditate in the field at the eventide: and he lifted up his eyes, and
saw, and, behold, the camels were coming.\footnote{\textbf{24:63} to
meditate: or, to pray} \bibleverse{64} And Rebekah lifted up her eyes,
and when she saw Isaac, she lighted off the camel. \bibleverse{65} For
she had said unto the servant, What man is this that walketh in the
field to meet us? And the servant had said, It is my master: therefore
she took a vail, and covered herself. \bibleverse{66} And the servant
told Isaac all things that he had done. \bibleverse{67} And Isaac
brought her into his mother Sarah's tent, and took Rebekah, and she
became his wife; and he loved her: and Isaac was comforted after his
mother's death.
\hypertarget{section-24}{%
\section{25}\label{section-24}}
\bibleverse{1} Then again Abraham took a wife, and her name was Keturah.
\bibleverse{2} And she bare him Zimran, and Jokshan, and Medan, and
Midian, and Ishbak, and Shuah. \bibleverse{3} And Jokshan begat Sheba,
and Dedan. And the sons of Dedan were Asshurim, and Letushim, and
Leummim. \bibleverse{4} And the sons of Midian; Ephah, and Epher, and
Hanoch, and Abidah, and Eldaah. All these were the children of Keturah.
\bibleverse{5} And Abraham gave all that he had unto Isaac.
\bibleverse{6} But unto the sons of the concubines, which Abraham had,
Abraham gave gifts, and sent them away from Isaac his son, while he yet
lived, eastward, unto the east country. \bibleverse{7} And these are the
days of the years of Abraham's life which he lived, an hundred
threescore and fifteen years. \bibleverse{8} Then Abraham gave up the
ghost, and died in a good old age, an old man, and full of years; and
was gathered to his people. \bibleverse{9} And his sons Isaac and
Ishmael buried him in the cave of Machpelah, in the field of Ephron the
son of Zohar the Hittite, which is before Mamre; \bibleverse{10} The
field which Abraham purchased of the sons of Heth: there was Abraham
buried, and Sarah his wife.
\bibleverse{11} And it came to pass after the death of Abraham, that God
blessed his son Isaac; and Isaac dwelt by the well Lahai-roi.
\bibleverse{12} Now these are the generations of Ishmael, Abraham's son,
whom Hagar the Egyptian, Sarah's handmaid, bare unto Abraham:
\bibleverse{13} And these are the names of the sons of Ishmael, by their
names, according to their generations: the firstborn of Ishmael,
Nebajoth; and Kedar, and Adbeel, and Mibsam, \bibleverse{14} And Mishma,
and Dumah, and Massa, \bibleverse{15} Hadar, and Tema, Jetur, Naphish,
and Kedemah:\footnote{\textbf{25:15} Hadar: or, Hadad} \bibleverse{16}
These are the sons of Ishmael, and these are their names, by their
towns, and by their castles; twelve princes according to their nations.
\bibleverse{17} And these are the years of the life of Ishmael, an
hundred and thirty and seven years: and he gave up the ghost and died;
and was gathered unto his people. \bibleverse{18} And they dwelt from
Havilah unto Shur, that is before Egypt, as thou goest toward Assyria:
and he died in the presence of all his brethren.\footnote{\textbf{25:18}
died: Heb. fell}
\bibleverse{19} And these are the generations of Isaac, Abraham's son:
Abraham begat Isaac: \bibleverse{20} And Isaac was forty years old when
he took Rebekah to wife, the daughter of Bethuel the Syrian of
Padan-aram, the sister to Laban the Syrian. \bibleverse{21} And Isaac
intreated the LORD for his wife, because she was barren: and the LORD
was intreated of him, and Rebekah his wife conceived. \bibleverse{22}
And the children struggled together within her; and she said, If it be
so, why am I thus? And she went to enquire of the LORD. \bibleverse{23}
And the LORD said unto her, Two nations are in thy womb, and two manner
of people shall be separated from thy bowels; and the one people shall
be stronger than the other people; and the elder shall serve the
younger.
\bibleverse{24} And when her days to be delivered were fulfilled,
behold, there were twins in her womb. \bibleverse{25} And the first came
out red, all over like an hairy garment; and they called his name Esau.
\bibleverse{26} And after that came his brother out, and his hand took
hold on Esau's heel; and his name was called Jacob: and Isaac was
threescore years old when she bare them. \bibleverse{27} And the boys
grew: and Esau was a cunning hunter, a man of the field; and Jacob was a
plain man, dwelling in tents. \bibleverse{28} And Isaac loved Esau,
because he did eat of his venison: but Rebekah loved Jacob.\footnote{\textbf{25:28}
he\ldots: Heb. venison was in his mouth}
\bibleverse{29} And Jacob sod pottage: and Esau came from the field, and
he was faint: \bibleverse{30} And Esau said to Jacob, Feed me, I pray
thee, with that same red pottage; for I am faint: therefore was his name
called Edom.\footnote{\textbf{25:30} with\ldots: Heb. with that red,
with that red pottage}\footnote{\textbf{25:30} Edom: that is, Red}
\bibleverse{31} And Jacob said, Sell me this day thy birthright.
\bibleverse{32} And Esau said, Behold, I am at the point to die: and
what profit shall this birthright do to me?\footnote{\textbf{25:32}
at\ldots: Heb. going to die} \bibleverse{33} And Jacob said, Swear to
me this day; and he sware unto him: and he sold his birthright unto
Jacob. \bibleverse{34} Then Jacob gave Esau bread and pottage of
lentiles; and he did eat and drink, and rose up, and went his way: thus
Esau despised his birthright.
\hypertarget{section-25}{%
\section{26}\label{section-25}}
\bibleverse{1} And there was a famine in the land, beside the first
famine that was in the days of Abraham. And Isaac went unto Abimelech
king of the Philistines unto Gerar. \bibleverse{2} And the LORD appeared
unto him, and said, Go not down into Egypt; dwell in the land which I
shall tell thee of: \bibleverse{3} Sojourn in this land, and I will be
with thee, and will bless thee; for unto thee, and unto thy seed, I will
give all these countries, and I will perform the oath which I sware unto
Abraham thy father; \bibleverse{4} And I will make thy seed to multiply
as the stars of heaven, and will give unto thy seed all these countries;
and in thy seed shall all the nations of the earth be blessed;
\bibleverse{5} Because that Abraham obeyed my voice, and kept my charge,
my commandments, my statutes, and my laws.
\bibleverse{6} And Isaac dwelt in Gerar: \bibleverse{7} And the men of
the place asked him of his wife; and he said, She is my sister: for he
feared to say, She is my wife; lest, said he, the men of the place
should kill me for Rebekah; because she was fair to look upon.
\bibleverse{8} And it came to pass, when he had been there a long time,
that Abimelech king of the Philistines looked out at a window, and saw,
and, behold, Isaac was sporting with Rebekah his wife. \bibleverse{9}
And Abimelech called Isaac, and said, Behold, of a surety she is thy
wife: and how saidst thou, She is my sister? And Isaac said unto him,
Because I said, Lest I die for her. \bibleverse{10} And Abimelech said,
What is this thou hast done unto us? one of the people might lightly
have lien with thy wife, and thou shouldest have brought guiltiness upon
us. \bibleverse{11} And Abimelech charged all his people, saying, He
that toucheth this man or his wife shall surely be put to death.
\bibleverse{12} Then Isaac sowed in that land, and received in the same
year an hundredfold: and the LORD blessed him.\footnote{\textbf{26:12}
received: Heb. found} \bibleverse{13} And the man waxed great, and
went forward, and grew until he became very great:\footnote{\textbf{26:13}
went\ldots: Heb. went going} \bibleverse{14} For he had possession of
flocks, and possession of herds, and great store of servants: and the
Philistines envied him.\footnote{\textbf{26:14} servants: or, husbandry}
\bibleverse{15} For all the wells which his father's servants had digged
in the days of Abraham his father, the Philistines had stopped them, and
filled them with earth. \bibleverse{16} And Abimelech said unto Isaac,
Go from us; for thou art much mightier than we.
\bibleverse{17} And Isaac departed thence, and pitched his tent in the
valley of Gerar, and dwelt there. \bibleverse{18} And Isaac digged again
the wells of water, which they had digged in the days of Abraham his
father; for the Philistines had stopped them after the death of Abraham:
and he called their names after the names by which his father had called
them. \bibleverse{19} And Isaac's servants digged in the valley, and
found there a well of springing water.\footnote{\textbf{26:19}
springing: Heb. living} \bibleverse{20} And the herdmen of Gerar did
strive with Isaac's herdmen, saying, The water is ours: and he called
the name of the well Esek; because they strove with him.\footnote{\textbf{26:20}
Esek: that is, Contention} \bibleverse{21} And they digged another
well, and strove for that also: and he called the name of it
Sitnah.\footnote{\textbf{26:21} Sitnah: that is, Hatred} \bibleverse{22}
And he removed from thence, and digged another well; and for that they
strove not: and he called the name of it Rehoboth; and he said, For now
the LORD hath made room for us, and we shall be fruitful in the
land.\footnote{\textbf{26:22} Rehoboth: that is, Room} \bibleverse{23}
And he went up from thence to Beer-sheba. \bibleverse{24} And the LORD
appeared unto him the same night, and said, I am the God of Abraham thy
father: fear not, for I am with thee, and will bless thee, and multiply
thy seed for my servant Abraham's sake. \bibleverse{25} And he builded
an altar there, and called upon the name of the LORD, and pitched his
tent there: and there Isaac's servants digged a well.
\bibleverse{26} Then Abimelech went to him from Gerar, and Ahuzzath one
of his friends, and Phichol the chief captain of his army.
\bibleverse{27} And Isaac said unto them, Wherefore come ye to me,
seeing ye hate me, and have sent me away from you? \bibleverse{28} And
they said, We saw certainly that the LORD was with thee: and we said,
Let there be now an oath betwixt us, even betwixt us and thee, and let
us make a covenant with thee;\footnote{\textbf{26:28} We saw\ldots: Heb.
Seeing we saw} \bibleverse{29} That thou wilt do us no hurt, as we
have not touched thee, and as we have done unto thee nothing but good,
and have sent thee away in peace: thou art now the blessed of the
LORD.\footnote{\textbf{26:29} That\ldots: Heb. If thou shalt}
\bibleverse{30} And he made them a feast, and they did eat and drink.
\bibleverse{31} And they rose up betimes in the morning, and sware one
to another: and Isaac sent them away, and they departed from him in
peace. \bibleverse{32} And it came to pass the same day, that Isaac's
servants came, and told him concerning the well which they had digged,
and said unto him, We have found water. \bibleverse{33} And he called it
Shebah: therefore the name of the city is Beer-sheba unto this
day.\footnote{\textbf{26:33} Shebah: That is, an oath}\footnote{\textbf{26:33}
Beer-sheba: that is, the well of the oath}
\bibleverse{34} And Esau was forty years old when he took to wife Judith
the daughter of Beeri the Hittite, and Bashemath the daughter of Elon
the Hittite: \bibleverse{35} Which were a grief of mind unto Isaac and
to Rebekah.\footnote{\textbf{26:35} a grief\ldots: Heb. bitterness of
spirit}
\hypertarget{section-26}{%
\section{27}\label{section-26}}
\bibleverse{1} And it came to pass, that when Isaac was old, and his
eyes were dim, so that he could not see, he called Esau his eldest son,
and said unto him, My son: and he said unto him, Behold, here am I.
\bibleverse{2} And he said, Behold now, I am old, I know not the day of
my death: \bibleverse{3} Now therefore take, I pray thee, thy weapons,
thy quiver and thy bow, and go out to the field, and take me some
venison;\footnote{\textbf{27:3} take: Heb. hunt} \bibleverse{4} And make
me savoury meat, such as I love, and bring it to me, that I may eat;
that my soul may bless thee before I die. \bibleverse{5} And Rebekah
heard when Isaac spake to Esau his son. And Esau went to the field to
hunt for venison, and to bring it.
\bibleverse{6} And Rebekah spake unto Jacob her son, saying, Behold, I
heard thy father speak unto Esau thy brother, saying, \bibleverse{7}
Bring me venison, and make me savoury meat, that I may eat, and bless
thee before the LORD before my death. \bibleverse{8} Now therefore, my
son, obey my voice according to that which I command thee.
\bibleverse{9} Go now to the flock, and fetch me from thence two good
kids of the goats; and I will make them savoury meat for thy father,
such as he loveth: \bibleverse{10} And thou shalt bring it to thy
father, that he may eat, and that he may bless thee before his death.
\bibleverse{11} And Jacob said to Rebekah his mother, Behold, Esau my
brother is a hairy man, and I am a smooth man: \bibleverse{12} My father
peradventure will feel me, and I shall seem to him as a deceiver; and I
shall bring a curse upon me, and not a blessing. \bibleverse{13} And his
mother said unto him, Upon me be thy curse, my son: only obey my voice,
and go fetch me them. \bibleverse{14} And he went, and fetched, and
brought them to his mother: and his mother made savoury meat, such as
his father loved. \bibleverse{15} And Rebekah took goodly raiment of her
eldest son Esau, which were with her in the house, and put them upon
Jacob her younger son:\footnote{\textbf{27:15} goodly: Heb. desirable}
\bibleverse{16} And she put the skins of the kids of the goats upon his
hands, and upon the smooth of his neck: \bibleverse{17} And she gave the
savoury meat and the bread, which she had prepared, into the hand of her
son Jacob.
\bibleverse{18} And he came unto his father, and said, My father: and he
said, Here am I; who art thou, my son? \bibleverse{19} And Jacob said
unto his father, I am Esau thy firstborn; I have done according as thou
badest me: arise, I pray thee, sit and eat of my venison, that thy soul
may bless me. \bibleverse{20} And Isaac said unto his son, How is it
that thou hast found it so quickly, my son? And he said, Because the
LORD thy God brought it to me.\footnote{\textbf{27:20} to me: Heb.
before me} \bibleverse{21} And Isaac said unto Jacob, Come near, I
pray thee, that I may feel thee, my son, whether thou be my very son
Esau or not. \bibleverse{22} And Jacob went near unto Isaac his father;
and he felt him, and said, The voice is Jacob's voice, but the hands are
the hands of Esau. \bibleverse{23} And he discerned him not, because his
hands were hairy, as his brother Esau's hands: so he blessed him.
\bibleverse{24} And he said, Art thou my very son Esau? And he said, I
am. \bibleverse{25} And he said, Bring it near to me, and I will eat of
my son's venison, that my soul may bless thee. And he brought it near to
him, and he did eat: and he brought him wine, and he drank.
\bibleverse{26} And his father Isaac said unto him, Come near now, and
kiss me, my son. \bibleverse{27} And he came near, and kissed him: and
he smelled the smell of his raiment, and blessed him, and said, See, the
smell of my son is as the smell of a field which the LORD hath blessed:
\bibleverse{28} Therefore God give thee of the dew of heaven, and the
fatness of the earth, and plenty of corn and wine: \bibleverse{29} Let
people serve thee, and nations bow down to thee: be lord over thy
brethren, and let thy mother's sons bow down to thee: cursed be every
one that curseth thee, and blessed be he that blesseth thee.
\bibleverse{30} And it came to pass, as soon as Isaac had made an end of
blessing Jacob, and Jacob was yet scarce gone out from the presence of
Isaac his father, that Esau his brother came in from his hunting.
\bibleverse{31} And he also had made savoury meat, and brought it unto
his father, and said unto his father, Let my father arise, and eat of
his son's venison, that thy soul may bless me. \bibleverse{32} And Isaac
his father said unto him, Who art thou? And he said, I am thy son, thy
firstborn Esau. \bibleverse{33} And Isaac trembled very exceedingly, and
said, Who? where is he that hath taken venison, and brought it me, and I
have eaten of all before thou camest, and have blessed him? yea, and he
shall be blessed.\footnote{\textbf{27:33} trembled\ldots: Heb. trembled
with a great trembling greatly}\footnote{\textbf{27:33} taken: Heb.
hunted} \bibleverse{34} And when Esau heard the words of his father,
he cried with a great and exceeding bitter cry, and said unto his
father, Bless me, even me also, O my father. \bibleverse{35} And he
said, Thy brother came with subtilty, and hath taken away thy blessing.
\bibleverse{36} And he said, Is not he rightly named Jacob? for he hath
supplanted me these two times: he took away my birthright; and, behold,
now he hath taken away my blessing. And he said, Hast thou not reserved
a blessing for me?\footnote{\textbf{27:36} Jacob: that is, A supplanter}
\bibleverse{37} And Isaac answered and said unto Esau, Behold, I have
made him thy lord, and all his brethren have I given to him for
servants; and with corn and wine have I sustained him: and what shall I
do now unto thee, my son?\footnote{\textbf{27:37} sustained: or,
supported} \bibleverse{38} And Esau said unto his father, Hast thou
but one blessing, my father? bless me, even me also, O my father. And
Esau lifted up his voice, and wept. \bibleverse{39} And Isaac his father
answered and said unto him, Behold, thy dwelling shall be the fatness of
the earth, and of the dew of heaven from above;\footnote{\textbf{27:39}
the fatness: or, of the fatness} \bibleverse{40} And by thy sword
shalt thou live, and shalt serve thy brother; and it shall come to pass
when thou shalt have the dominion, that thou shalt break his yoke from
off thy neck.
\bibleverse{41} And Esau hated Jacob because of the blessing wherewith
his father blessed him: and Esau said in his heart, The days of mourning
for my father are at hand; then will I slay my brother Jacob.
\bibleverse{42} And these words of Esau her elder son were told to
Rebekah: and she sent and called Jacob her younger son, and said unto
him, Behold, thy brother Esau, as touching thee, doth comfort himself,
purposing to kill thee. \bibleverse{43} Now therefore, my son, obey my
voice; and arise, flee thou to Laban my brother to Haran;
\bibleverse{44} And tarry with him a few days, until thy brother's fury
turn away; \bibleverse{45} Until thy brother's anger turn away from
thee, and he forget that which thou hast done to him: then I will send,
and fetch thee from thence: why should I be deprived also of you both in
one day? \bibleverse{46} And Rebekah said to Isaac, I am weary of my
life because of the daughters of Heth: if Jacob take a wife of the
daughters of Heth, such as these which are of the daughters of the land,
what good shall my life do me?
\hypertarget{section-27}{%
\section{28}\label{section-27}}
\bibleverse{1} And Isaac called Jacob, and blessed him, and charged him,
and said unto him, Thou shalt not take a wife of the daughters of
Canaan. \bibleverse{2} Arise, go to Padan-aram, to the house of Bethuel
thy mother's father; and take thee a wife from thence of the daughters
of Laban thy mother's brother. \bibleverse{3} And God Almighty bless
thee, and make thee fruitful, and multiply thee, that thou mayest be a
multitude of people;\footnote{\textbf{28:3} a multitude\ldots: Heb. an
assembly of people} \bibleverse{4} And give thee the blessing of
Abraham, to thee, and to thy seed with thee; that thou mayest inherit
the land wherein thou art a stranger, which God gave unto
Abraham.\footnote{\textbf{28:4} wherein\ldots: Heb. of thy sojournings}
\bibleverse{5} And Isaac sent away Jacob: and he went to Padan-aram unto
Laban, son of Bethuel the Syrian, the brother of Rebekah, Jacob's and
Esau's mother.
\bibleverse{6} When Esau saw that Isaac had blessed Jacob, and sent him
away to Padan-aram, to take him a wife from thence; and that as he
blessed him he gave him a charge, saying, Thou shalt not take a wife of
the daughters of Canaan; \bibleverse{7} And that Jacob obeyed his father
and his mother, and was gone to Padan-aram; \bibleverse{8} And Esau
seeing that the daughters of Canaan pleased not Isaac his
father;\footnote{\textbf{28:8} pleased\ldots: Heb. were evil in the
eyes, etc} \bibleverse{9} Then went Esau unto Ishmael, and took unto
the wives which he had Mahalath the daughter of Ishmael Abraham's son,
the sister of Nebajoth, to be his wife.\footnote{\textbf{28:9} Mahalath:
or, Bashemath}
\bibleverse{10} And Jacob went out from Beer-sheba, and went toward
Haran.\footnote{\textbf{28:10} Haran: Gr. Charran} \bibleverse{11} And
he lighted upon a certain place, and tarried there all night, because
the sun was set; and he took of the stones of that place, and put them
for his pillows, and lay down in that place to sleep. \bibleverse{12}
And he dreamed, and behold a ladder set up on the earth, and the top of
it reached to heaven: and behold the angels of God ascending and
descending on it. \bibleverse{13} And, behold, the LORD stood above it,
and said, I am the LORD God of Abraham thy father, and the God of Isaac:
the land whereon thou liest, to thee will I give it, and to thy seed;
\bibleverse{14} And thy seed shall be as the dust of the earth, and thou
shalt spread abroad to the west, and to the east, and to the north, and
to the south: and in thee and in thy seed shall all the families of the
earth be blessed.\footnote{\textbf{28:14} spread\ldots: Heb. break forth}
\bibleverse{15} And, behold, I am with thee, and will keep thee in all
places whither thou goest, and will bring thee again into this land; for
I will not leave thee, until I have done that which I have spoken to
thee of.
\bibleverse{16} And Jacob awaked out of his sleep, and he said, Surely
the LORD is in this place; and I knew it not. \bibleverse{17} And he was
afraid, and said, How dreadful is this place! this is none other but the
house of God, and this is the gate of heaven. \bibleverse{18} And Jacob
rose up early in the morning, and took the stone that he had put for his
pillows, and set it up for a pillar, and poured oil upon the top of it.
\bibleverse{19} And he called the name of that place Beth-el: but the
name of that city was called Luz at the first.\footnote{\textbf{28:19}
Beth-el: that is, The house of God} \bibleverse{20} And Jacob vowed a
vow, saying, If God will be with me, and will keep me in this way that I
go, and will give me bread to eat, and raiment to put on,
\bibleverse{21} So that I come again to my father's house in peace; then
shall the LORD be my God: \bibleverse{22} And this stone, which I have
set for a pillar, shall be God's house: and of all that thou shalt give
me I will surely give the tenth unto thee.
\hypertarget{section-28}{%
\section{29}\label{section-28}}
\bibleverse{1} Then Jacob went on his journey, and came into the land of
the people of the east.\footnote{\textbf{29:1} went\ldots: Heb. lift up
his feet}\footnote{\textbf{29:1} people: Heb. children}
\bibleverse{2} And he looked, and behold a well in the field, and, lo,
there were three flocks of sheep lying by it; for out of that well they
watered the flocks: and a great stone was upon the well's mouth.
\bibleverse{3} And thither were all the flocks gathered: and they rolled
the stone from the well's mouth, and watered the sheep, and put the
stone again upon the well's mouth in his place. \bibleverse{4} And Jacob
said unto them, My brethren, whence be ye? And they said, Of Haran are
we. \bibleverse{5} And he said unto them, Know ye Laban the son of
Nahor? And they said, We know him. \bibleverse{6} And he said unto them,
Is he well? And they said, He is well: and, behold, Rachel his daughter
cometh with the sheep.\footnote{\textbf{29:6} Is he\ldots: Heb. Is there
peace to him?} \bibleverse{7} And he said, Lo, it is yet high day,
neither is it time that the cattle should be gathered together: water ye
the sheep, and go and feed them.\footnote{\textbf{29:7} it is\ldots:
Heb. yet the day is great} \bibleverse{8} And they said, We cannot,
until all the flocks be gathered together, and till they roll the stone
from the well's mouth; then we water the sheep.
\bibleverse{9} And while he yet spake with them, Rachel came with her
father's sheep: for she kept them. \bibleverse{10} And it came to pass,
when Jacob saw Rachel the daughter of Laban his mother's brother, and
the sheep of Laban his mother's brother, that Jacob went near, and
rolled the stone from the well's mouth, and watered the flock of Laban
his mother's brother. \bibleverse{11} And Jacob kissed Rachel, and
lifted up his voice, and wept. \bibleverse{12} And Jacob told Rachel
that he was her father's brother, and that he was Rebekah's son: and she
ran and told her father. \bibleverse{13} And it came to pass, when Laban
heard the tidings of Jacob his sister's son, that he ran to meet him,
and embraced him, and kissed him, and brought him to his house. And he
told Laban all these things.\footnote{\textbf{29:13} tidings: Heb.
hearing} \bibleverse{14} And Laban said to him, Surely thou art my
bone and my flesh. And he abode with him the space of a
month.\footnote{\textbf{29:14} the space\ldots: Heb. a month of days}
\bibleverse{15} And Laban said unto Jacob, Because thou art my brother,
shouldest thou therefore serve me for nought? tell me, what shall thy
wages be? \bibleverse{16} And Laban had two daughters: the name of the
elder was Leah, and the name of the younger was Rachel. \bibleverse{17}
Leah was tender eyed; but Rachel was beautiful and well favoured.
\bibleverse{18} And Jacob loved Rachel; and said, I will serve thee
seven years for Rachel thy younger daughter. \bibleverse{19} And Laban
said, It is better that I give her to thee, than that I should give her
to another man: abide with me. \bibleverse{20} And Jacob served seven
years for Rachel; and they seemed unto him but a few days, for the love
he had to her.
\bibleverse{21} And Jacob said unto Laban, Give me my wife, for my days
are fulfilled, that I may go in unto her. \bibleverse{22} And Laban
gathered together all the men of the place, and made a feast.
\bibleverse{23} And it came to pass in the evening, that he took Leah
his daughter, and brought her to him; and he went in unto her.
\bibleverse{24} And Laban gave unto his daughter Leah Zilpah his maid
for an handmaid. \bibleverse{25} And it came to pass, that in the
morning, behold, it was Leah: and he said to Laban, What is this thou
hast done unto me? did not I serve with thee for Rachel? wherefore then
hast thou beguiled me? \bibleverse{26} And Laban said, It must not be so
done in our country, to give the younger before the
firstborn.\footnote{\textbf{29:26} country: Heb. place} \bibleverse{27}
Fulfil her week, and we will give thee this also for the service which
thou shalt serve with me yet seven other years. \bibleverse{28} And
Jacob did so, and fulfilled her week: and he gave him Rachel his
daughter to wife also. \bibleverse{29} And Laban gave to Rachel his
daughter Bilhah his handmaid to be her maid. \bibleverse{30} And he went
in also unto Rachel, and he loved also Rachel more than Leah, and served
with him yet seven other years.
\bibleverse{31} And when the LORD saw that Leah was hated, he opened her
womb: but Rachel was barren. \bibleverse{32} And Leah conceived, and
bare a son, and she called his name Reuben: for she said, Surely the
LORD hath looked upon my affliction; now therefore my husband will love
me.\footnote{\textbf{29:32} Reuben: that is, See a son} \bibleverse{33}
And she conceived again, and bare a son; and said, Because the LORD hath
heard that I was hated, he hath therefore given me this son also: and
she called his name Simeon.\footnote{\textbf{29:33} Simeon: that is,
Hearing} \bibleverse{34} And she conceived again, and bare a son; and
said, Now this time will my husband be joined unto me, because I have
born him three sons: therefore was his name called Levi.\footnote{\textbf{29:34}
Levi: that is, Joined} \bibleverse{35} And she conceived again, and
bare a son: and she said, Now will I praise the LORD: therefore she
called his name Judah; and left
bearing.\footnote{\textbf{29:35} Judah: that is, Praise}\footnote{\textbf{29:35}
left\ldots: Heb. stood from bearing}
\hypertarget{section-29}{%
\section{30}\label{section-29}}
\bibleverse{1} And when Rachel saw that she bare Jacob no children,
Rachel envied her sister; and said unto Jacob, Give me children, or else
I die. \bibleverse{2} And Jacob's anger was kindled against Rachel: and
he said, Am I in God's stead, who hath withheld from thee the fruit of
the womb? \bibleverse{3} And she said, Behold my maid Bilhah, go in unto
her; and she shall bear upon my knees, that I may also have children by
her.\footnote{\textbf{30:3} have\ldots: Heb. be built by her}
\bibleverse{4} And she gave him Bilhah her handmaid to wife: and Jacob
went in unto her. \bibleverse{5} And Bilhah conceived, and bare Jacob a
son. \bibleverse{6} And Rachel said, God hath judged me, and hath also
heard my voice, and hath given me a son: therefore called she his name
Dan.\footnote{\textbf{30:6} Dan: that is, Judging} \bibleverse{7} And
Bilhah Rachel's maid conceived again, and bare Jacob a second son.
\bibleverse{8} And Rachel said, With great wrestlings have I wrestled
with my sister, and I have prevailed: and she called his name
Naphtali.\footnote{\textbf{30:8} great\ldots: Heb. wrestlings of
God}\footnote{\textbf{30:8} Naphtali: that is, My wrestling: Gr.
Nephthalim} \bibleverse{9} When Leah saw that she had left bearing,
she took Zilpah her maid, and gave her Jacob to wife. \bibleverse{10}
And Zilpah Leah's maid bare Jacob a son. \bibleverse{11} And Leah said,
A troop cometh: and she called his name Gad.\footnote{\textbf{30:11}
Gad: that is, A troop, or, company} \bibleverse{12} And Zilpah Leah's
maid bare Jacob a second son. \bibleverse{13} And Leah said, Happy am I,
for the daughters will call me blessed: and she called his name
Asher.\footnote{\textbf{30:13} Happy\ldots: Heb. In my
happiness}\footnote{\textbf{30:13} Asher: that is, Happy}
\bibleverse{14} And Reuben went in the days of wheat harvest, and found
mandrakes in the field, and brought them unto his mother Leah. Then
Rachel said to Leah, Give me, I pray thee, of thy son's mandrakes.
\bibleverse{15} And she said unto her, Is it a small matter that thou
hast taken my husband? and wouldest thou take away my son's mandrakes
also? And Rachel said, Therefore he shall lie with thee to night for thy
son's mandrakes. \bibleverse{16} And Jacob came out of the field in the
evening, and Leah went out to meet him, and said, Thou must come in unto
me; for surely I have hired thee with my son's mandrakes. And he lay
with her that night. \bibleverse{17} And God hearkened unto Leah, and
she conceived, and bare Jacob the fifth son. \bibleverse{18} And Leah
said, God hath given me my hire, because I have given my maiden to my
husband: and she called his name Issachar.\footnote{\textbf{30:18}
Issachar: that is, An hire} \bibleverse{19} And Leah conceived again,
and bare Jacob the sixth son. \bibleverse{20} And Leah said, God hath
endued me with a good dowry; now will my husband dwell with me, because
I have born him six sons: and she called his name Zebulun.\footnote{\textbf{30:20}
Zebulun: that is, Dwelling: Gr. Zabulon} \bibleverse{21} And
afterwards she bare a daughter, and called her name Dinah.\footnote{\textbf{30:21}
Dinah: that is, Judgment}
\bibleverse{22} And God remembered Rachel, and God hearkened to her, and
opened her womb. \bibleverse{23} And she conceived, and bare a son; and
said, God hath taken away my reproach: \bibleverse{24} And she called
his name Joseph; and said, The LORD shall add to me another
son.\footnote{\textbf{30:24} Joseph: that is, Adding}
\bibleverse{25} And it came to pass, when Rachel had born Joseph, that
Jacob said unto Laban, Send me away, that I may go unto mine own place,
and to my country. \bibleverse{26} Give me my wives and my children, for
whom I have served thee, and let me go: for thou knowest my service
which I have done thee. \bibleverse{27} And Laban said unto him, I pray
thee, if I have found favour in thine eyes, tarry: for I have learned by
experience that the LORD hath blessed me for thy sake. \bibleverse{28}
And he said, Appoint me thy wages, and I will give it. \bibleverse{29}
And he said unto him, Thou knowest how I have served thee, and how thy
cattle was with me. \bibleverse{30} For it was little which thou hadst
before I came, and it is now increased unto a multitude; and the LORD
hath blessed thee since my coming: and now when shall I provide for mine
own house also?\footnote{\textbf{30:30} increased: Heb. broken
forth}\footnote{\textbf{30:30} since\ldots: Heb. at my foot}
\bibleverse{31} And he said, What shall I give thee? And Jacob said,
Thou shalt not give me any thing: if thou wilt do this thing for me, I
will again feed and keep thy flock: \bibleverse{32} I will pass through
all thy flock to day, removing from thence all the speckled and spotted
cattle, and all the brown cattle among the sheep, and the spotted and
speckled among the goats: and of such shall be my hire. \bibleverse{33}
So shall my righteousness answer for me in time to come, when it shall
come for my hire before thy face: every one that is not speckled and
spotted among the goats, and brown among the sheep, that shall be
counted stolen with me.\footnote{\textbf{30:33} in time\ldots: Heb. to
morrow} \bibleverse{34} And Laban said, Behold, I would it might be
according to thy word. \bibleverse{35} And he removed that day the he
goats that were ringstraked and spotted, and all the she goats that were
speckled and spotted, and every one that had some white in it, and all
the brown among the sheep, and gave them into the hand of his sons.
\bibleverse{36} And he set three days' journey betwixt himself and
Jacob: and Jacob fed the rest of Laban's flocks.
\bibleverse{37} And Jacob took him rods of green poplar, and of the
hazel and chesnut tree; and pilled white strakes in them, and made the
white appear which was in the rods. \bibleverse{38} And he set the rods
which he had pilled before the flocks in the gutters in the watering
troughs when the flocks came to drink, that they should conceive when
they came to drink. \bibleverse{39} And the flocks conceived before the
rods, and brought forth cattle ringstraked, speckled, and spotted.
\bibleverse{40} And Jacob did separate the lambs, and set the faces of
the flocks toward the ringstraked, and all the brown in the flock of
Laban; and he put his own flocks by themselves, and put them not unto
Laban's cattle. \bibleverse{41} And it came to pass, whensoever the
stronger cattle did conceive, that Jacob laid the rods before the eyes
of the cattle in the gutters, that they might conceive among the rods.
\bibleverse{42} But when the cattle were feeble, he put them not in: so
the feebler were Laban's, and the stronger Jacob's. \bibleverse{43} And
the man increased exceedingly, and had much cattle, and maidservants,
and menservants, and camels, and asses.
\hypertarget{section-30}{%
\section{31}\label{section-30}}
\bibleverse{1} And he heard the words of Laban's sons, saying, Jacob
hath taken away all that was our father's; and of that which was our
father's hath he gotten all this glory. \bibleverse{2} And Jacob beheld
the countenance of Laban, and, behold, it was not toward him as
before.\footnote{\textbf{31:2} as before: Heb. as yesterday and the day
before} \bibleverse{3} And the LORD said unto Jacob, Return unto the
land of thy fathers, and to thy kindred; and I will be with thee.
\bibleverse{4} And Jacob sent and called Rachel and Leah to the field
unto his flock, \bibleverse{5} And said unto them, I see your father's
countenance, that it is not toward me as before; but the God of my
father hath been with me. \bibleverse{6} And ye know that with all my
power I have served your father. \bibleverse{7} And your father hath
deceived me, and changed my wages ten times; but God suffered him not to
hurt me. \bibleverse{8} If he said thus, The speckled shall be thy
wages; then all the cattle bare speckled: and if he said thus, The
ringstraked shall be thy hire; then bare all the cattle ringstraked.
\bibleverse{9} Thus God hath taken away the cattle of your father, and
given them to me. \bibleverse{10} And it came to pass at the time that
the cattle conceived, that I lifted up mine eyes, and saw in a dream,
and, behold, the rams which leaped upon the cattle were ringstraked,
speckled, and grisled.\footnote{\textbf{31:10} rams: or, he goats}
\bibleverse{11} And the angel of God spake unto me in a dream, saying,
Jacob: And I said, Here am I. \bibleverse{12} And he said, Lift up now
thine eyes, and see, all the rams which leap upon the cattle are
ringstraked, speckled, and grisled: for I have seen all that Laban doeth
unto thee. \bibleverse{13} I am the God of Beth-el, where thou
anointedst the pillar, and where thou vowedst a vow unto me: now arise,
get thee out from this land, and return unto the land of thy kindred.
\bibleverse{14} And Rachel and Leah answered and said unto him, Is there
yet any portion or inheritance for us in our father's house?
\bibleverse{15} Are we not counted of him strangers? for he hath sold
us, and hath quite devoured also our money. \bibleverse{16} For all the
riches which God hath taken from our father, that is ours, and our
children's: now then, whatsoever God hath said unto thee, do.
\bibleverse{17} Then Jacob rose up, and set his sons and his wives upon
camels; \bibleverse{18} And he carried away all his cattle, and all his
goods which he had gotten, the cattle of his getting, which he had
gotten in Padan-aram, for to go to Isaac his father in the land of
Canaan. \bibleverse{19} And Laban went to shear his sheep: and Rachel
had stolen the images that were her father's.\footnote{\textbf{31:19}
images: Heb. teraphim} \bibleverse{20} And Jacob stole away unawares
to Laban the Syrian, in that he told him not that he fled.\footnote{\textbf{31:20}
unawares\ldots: Heb. the heart of Laban} \bibleverse{21} So he fled
with all that he had; and he rose up, and passed over the river, and set
his face toward the mount Gilead. \bibleverse{22} And it was told Laban
on the third day that Jacob was fled. \bibleverse{23} And he took his
brethren with him, and pursued after him seven days' journey; and they
overtook him in the mount Gilead. \bibleverse{24} And God came to Laban
the Syrian in a dream by night, and said unto him, Take heed that thou
speak not to Jacob either good or bad.\footnote{\textbf{31:24}
either\ldots: Heb. from good to bad}
\bibleverse{25} Then Laban overtook Jacob. Now Jacob had pitched his
tent in the mount: and Laban with his brethren pitched in the mount of
Gilead. \bibleverse{26} And Laban said to Jacob, What hast thou done,
that thou hast stolen away unawares to me, and carried away my
daughters, as captives taken with the sword? \bibleverse{27} Wherefore
didst thou flee away secretly, and steal away from me; and didst not
tell me, that I might have sent thee away with mirth, and with songs,
with tabret, and with harp?\footnote{\textbf{31:27} steal\ldots: Heb.
hast stolen me} \bibleverse{28} And hast not suffered me to kiss my
sons and my daughters? thou hast now done foolishly in so doing.
\bibleverse{29} It is in the power of my hand to do you hurt: but the
God of your father spake unto me yesternight, saying, Take thou heed
that thou speak not to Jacob either good or bad. \bibleverse{30} And
now, though thou wouldest needs be gone, because thou sore longedst
after thy father's house, yet wherefore hast thou stolen my gods?
\bibleverse{31} And Jacob answered and said to Laban, Because I was
afraid: for I said, Peradventure thou wouldest take by force thy
daughters from me. \bibleverse{32} With whomsoever thou findest thy
gods, let him not live: before our brethren discern thou what is thine
with me, and take it to thee. For Jacob knew not that Rachel had stolen
them. \bibleverse{33} And Laban went into Jacob's tent, and into Leah's
tent, and into the two maidservants' tents; but he found them not. Then
went he out of Leah's tent, and entered into Rachel's tent.
\bibleverse{34} Now Rachel had taken the images, and put them in the
camel's furniture, and sat upon them. And Laban searched all the tent,
but found them not.\footnote{\textbf{31:34} searched: Heb. felt}
\bibleverse{35} And she said to her father, Let it not displease my lord
that I cannot rise up before thee; for the custom of women is upon me.
And he searched, but found not the images.
\bibleverse{36} And Jacob was wroth, and chode with Laban: and Jacob
answered and said to Laban, What is my trespass? what is my sin, that
thou hast so hotly pursued after me? \bibleverse{37} Whereas thou hast
searched all my stuff, what hast thou found of all thy household stuff?
set it here before my brethren and thy brethren, that they may judge
betwixt us both.\footnote{\textbf{31:37} searched: Heb. felt}
\bibleverse{38} This twenty years have I been with thee; thy ewes and
thy she goats have not cast their young, and the rams of thy flock have
I not eaten. \bibleverse{39} That which was torn of beasts I brought not
unto thee; I bare the loss of it; of my hand didst thou require it,
whether stolen by day, or stolen by night. \bibleverse{40} Thus I was;
in the day the drought consumed me, and the frost by night; and my sleep
departed from mine eyes. \bibleverse{41} Thus have I been twenty years
in thy house; I served thee fourteen years for thy two daughters, and
six years for thy cattle: and thou hast changed my wages ten times.
\bibleverse{42} Except the God of my father, the God of Abraham, and the
fear of Isaac, had been with me, surely thou hadst sent me away now
empty. God hath seen mine affliction and the labour of my hands, and
rebuked thee yesternight.
\bibleverse{43} And Laban answered and said unto Jacob, These daughters
are my daughters, and these children are my children, and these cattle
are my cattle, and all that thou seest is mine: and what can I do this
day unto these my daughters, or unto their children which they have
born? \bibleverse{44} Now therefore come thou, let us make a covenant, I
and thou; and let it be for a witness between me and thee.
\bibleverse{45} And Jacob took a stone, and set it up for a pillar.
\bibleverse{46} And Jacob said unto his brethren, Gather stones; and
they took stones, and made an heap: and they did eat there upon the
heap. \bibleverse{47} And Laban called it Jegar-sahadutha: but Jacob
called it Galeed.\footnote{\textbf{31:47} Jegar-sahadutha: that is, The
heap of witness, Chaldee}\footnote{\textbf{31:47} Galeed: that is, The
heap of witness, Heb.} \bibleverse{48} And Laban said, This
heap is a witness between me and thee this day. Therefore was the name
of it called Galeed; \bibleverse{49} And Mizpah; for he said, The LORD
watch between me and thee, when we are absent one from
another.\footnote{\textbf{31:49} Mizpah: that is, A beacon, or,
watchtower} \bibleverse{50} If thou shalt afflict my daughters, or if
thou shalt take other wives beside my daughters, no man is with us; see,
God is witness betwixt me and thee. \bibleverse{51} And Laban said to
Jacob, Behold this heap, and behold this pillar, which I have cast
betwixt me and thee; \bibleverse{52} This heap be witness, and this
pillar be witness, that I will not pass over this heap to thee, and that
thou shalt not pass over this heap and this pillar unto me, for harm.
\bibleverse{53} The God of Abraham, and the God of Nahor, the God of
their father, judge betwixt us. And Jacob sware by the fear of his
father Isaac. \bibleverse{54} Then Jacob offered sacrifice upon the
mount, and called his brethren to eat bread: and they did eat bread, and
tarried all night in the mount.\footnote{\textbf{31:54} offered\ldots:
or, killed beasts} \bibleverse{55} And early in the morning Laban rose
up, and kissed his sons and his daughters, and blessed them: and Laban
departed, and returned unto his place.
\hypertarget{section-31}{%
\section{32}\label{section-31}}
\bibleverse{1} And Jacob went on his way, and the angels of God met him.
\bibleverse{2} And when Jacob saw them, he said, This is God's host: and
he called the name of that place Mahanaim.\footnote{\textbf{32:2}
Mahanaim: that is, Two hosts, or, camps}
\bibleverse{3} And Jacob sent messengers before him to Esau his brother
unto the land of Seir, the country of Edom.\footnote{\textbf{32:3}
country: Heb. field} \bibleverse{4} And he commanded them, saying,
Thus shall ye speak unto my lord Esau; Thy servant Jacob saith thus, I
have sojourned with Laban, and stayed there until now: \bibleverse{5}
And I have oxen, and asses, flocks, and menservants, and womenservants:
and I have sent to tell my lord, that I may find grace in thy sight.
\bibleverse{6} And the messengers returned to Jacob, saying, We came to
thy brother Esau, and also he cometh to meet thee, and four hundred men
with him. \bibleverse{7} Then Jacob was greatly afraid and distressed:
and he divided the people that was with him, and the flocks, and herds,
and the camels, into two bands; \bibleverse{8} And said, If Esau come to
the one company, and smite it, then the other company which is left
shall escape.
\bibleverse{9} And Jacob said, O God of my father Abraham, and God of my
father Isaac, the LORD which saidst unto me, Return unto thy country,
and to thy kindred, and I will deal well with thee: \bibleverse{10} I am
not worthy of the least of all the mercies, and of all the truth, which
thou hast shewed unto thy servant; for with my staff I passed over this
Jordan; and now I am become two bands.\footnote{\textbf{32:10} I am
not\ldots: Heb. I am less than all} \bibleverse{11} Deliver me, I pray
thee, from the hand of my brother, from the hand of Esau: for I fear
him, lest he will come and smite me, and the mother with the
children.\footnote{\textbf{32:11} with: Heb. upon} \bibleverse{12} And
thou saidst, I will surely do thee good, and make thy seed as the sand
of the sea, which cannot be numbered for multitude.
\bibleverse{13} And he lodged there that same night; and took of that
which came to his hand a present for Esau his brother; \bibleverse{14}
Two hundred she goats, and twenty he goats, two hundred ewes, and twenty
rams, \bibleverse{15} Thirty milch camels with their colts, forty kine,
and ten bulls, twenty she asses, and ten foals. \bibleverse{16} And he
delivered them into the hand of his servants, every drove by themselves;
and said unto his servants, Pass over before me, and put a space betwixt
drove and drove. \bibleverse{17} And he commanded the foremost, saying,
When Esau my brother meeteth thee, and asketh thee, saying, Whose art
thou? and whither goest thou? and whose are these before thee?
\bibleverse{18} Then thou shalt say, They be thy servant Jacob's; it is
a present sent unto my lord Esau: and, behold, also he is behind us.
\bibleverse{19} And so commanded he the second, and the third, and all
that followed the droves, saying, On this manner shall ye speak unto
Esau, when ye find him. \bibleverse{20} And say ye moreover, Behold, thy
servant Jacob is behind us. For he said, I will appease him with the
present that goeth before me, and afterward I will see his face;
peradventure he will accept of me.\footnote{\textbf{32:20} of me: Heb.
my face} \bibleverse{21} So went the present over before him: and
himself lodged that night in the company. \bibleverse{22} And he rose up
that night, and took his two wives, and his two womenservants, and his
eleven sons, and passed over the ford Jabbok. \bibleverse{23} And he
took them, and sent them over the brook, and sent over that he
had.\footnote{\textbf{32:23} sent them: Heb. caused to pass}
\bibleverse{24} And Jacob was left alone; and there wrestled a man with
him until the breaking of the day.\footnote{\textbf{32:24}
breaking\ldots: Heb. ascending of the morning} \bibleverse{25} And
when he saw that he prevailed not against him, he touched the hollow of
his thigh; and the hollow of Jacob's thigh was out of joint, as he
wrestled with him. \bibleverse{26} And he said, Let me go, for the day
breaketh. And he said, I will not let thee go, except thou bless me.
\bibleverse{27} And he said unto him, What is thy name? And he said,
Jacob. \bibleverse{28} And he said, Thy name shall be called no more
Jacob, but Israel: for as a prince hast thou power with God and with
men, and hast prevailed.\footnote{\textbf{32:28} Israel: that is, A
prince of God} \bibleverse{29} And Jacob asked him, and said, Tell me,
I pray thee, thy name. And he said, Wherefore is it that thou dost ask
after my name? And he blessed him there. \bibleverse{30} And Jacob
called the name of the place Peniel: for I have seen God face to face,
and my life is preserved.\footnote{\textbf{32:30} Peniel: that is, The
face of God} \bibleverse{31} And as he passed over Penuel the sun rose
upon him, and he halted upon his thigh. \bibleverse{32} Therefore the
children of Israel eat not of the sinew which shrank, which is upon the
hollow of the thigh, unto this day: because he touched the hollow of
Jacob's thigh in the sinew that shrank.
\hypertarget{section-32}{%
\section{33}\label{section-32}}
\bibleverse{1} And Jacob lifted up his eyes, and looked, and, behold,
Esau came, and with him four hundred men. And he divided the children
unto Leah, and unto Rachel, and unto the two handmaids. \bibleverse{2}
And he put the handmaids and their children foremost, and Leah and her
children after, and Rachel and Joseph hindermost. \bibleverse{3} And he
passed over before them, and bowed himself to the ground seven times,
until he came near to his brother. \bibleverse{4} And Esau ran to meet
him, and embraced him, and fell on his neck, and kissed him: and they
wept.
\bibleverse{5} And he lifted up his eyes, and saw the women and the
children; and said, Who are those with thee? And he said, The children
which God hath graciously given thy servant.\footnote{\textbf{33:5}
with\ldots: Heb. to thee} \bibleverse{6} Then the handmaidens came
near, they and their children, and they bowed themselves. \bibleverse{7}
And Leah also with her children came near, and bowed themselves: and
after came Joseph near and Rachel, and they bowed themselves.
\bibleverse{8} And he said, What meanest thou by all this drove which I
met? And he said, These are to find grace in the sight of my
lord.\footnote{\textbf{33:8} What\ldots: Heb. What is all this band to
thee?} \bibleverse{9} And Esau said, I have enough, my brother; keep
that thou hast unto thyself.\footnote{\textbf{33:9} keep\ldots: Heb. be
that to thee that is thine} \bibleverse{10} And Jacob said, Nay, I
pray thee, if now I have found grace in thy sight, then receive my
present at my hand: for therefore I have seen thy face, as though I had
seen the face of God, and thou wast pleased with me. \bibleverse{11}
Take, I pray thee, my blessing that is brought to thee; because God hath
dealt graciously with me, and because I have enough. And he urged him,
and he took it.\footnote{\textbf{33:11} enough: Heb. all things}
\bibleverse{12} And he said, Let us take our journey, and let us go, and
I will go before thee. \bibleverse{13} And he said unto him, My lord
knoweth that the children are tender, and the flocks and herds with
young are with me: and if men should overdrive them one day, all the
flock will die. \bibleverse{14} Let my lord, I pray thee, pass over
before his servant: and I will lead on softly, according as the cattle
that goeth before me and the children be able to endure, until I come
unto my lord unto Seir.\footnote{\textbf{33:14} according\ldots: Heb.
according to the foot of the work, etc., and according to the foot of
the children} \bibleverse{15} And Esau said, Let me now leave with
thee some of the folk that are with me. And he said, What needeth it?
let me find grace in the sight of my
lord.\footnote{\textbf{33:15} leave: Heb. set, or,
place}\footnote{\textbf{33:15} What\ldots: Heb. Wherefore is this?}
\bibleverse{16} So Esau returned that day on his way unto Seir.
\bibleverse{17} And Jacob journeyed to Succoth, and built him an house,
and made booths for his cattle: therefore the name of the place is
called Succoth.\footnote{\textbf{33:17} Succoth: that is, Booths}
\bibleverse{18} And Jacob came to Shalem, a city of Shechem, which is in
the land of Canaan, when he came from Padan-aram; and pitched his tent
before the city.\footnote{\textbf{33:18} Shechem: Gr. Sychem}
\bibleverse{19} And he bought a parcel of a field, where he had spread
his tent, at the hand of the children of Hamor, Shechem's father, for an
hundred pieces of money.\footnote{\textbf{33:19} Hamor: Gr.
Emmor}\footnote{\textbf{33:19} pieces\ldots: or, lambs} \bibleverse{20}
And he erected there an altar, and called it El-elohe-Israel.\footnote{\textbf{33:20}
El-elohe-Israel: that is, God the God of Israel}
\hypertarget{section-33}{%
\section{34}\label{section-33}}
\bibleverse{1} And Dinah the daughter of Leah, which she bare unto
Jacob, went out to see the daughters of the land. \bibleverse{2} And
when Shechem the son of Hamor the Hivite, prince of the country, saw
her, he took her, and lay with her, and defiled her.\footnote{\textbf{34:2}
defiled\ldots: Heb. humbled her} \bibleverse{3} And his soul clave
unto Dinah the daughter of Jacob, and he loved the damsel, and spake
kindly unto the damsel.\footnote{\textbf{34:3} kindly\ldots: Heb. to the
heart of the damsel} \bibleverse{4} And Shechem spake unto his father
Hamor, saying, Get me this damsel to wife. \bibleverse{5} And Jacob
heard that he had defiled Dinah his daughter: now his sons were with his
cattle in the field: and Jacob held his peace until they were come.
\bibleverse{6} And Hamor the father of Shechem went out unto Jacob to
commune with him. \bibleverse{7} And the sons of Jacob came out of the
field when they heard it: and the men were grieved, and they were very
wroth, because he had wrought folly in Israel in lying with Jacob's
daughter; which thing ought not to be done. \bibleverse{8} And Hamor
communed with them, saying, The soul of my son Shechem longeth for your
daughter: I pray you give her him to wife. \bibleverse{9} And make ye
marriages with us, and give your daughters unto us, and take our
daughters unto you. \bibleverse{10} And ye shall dwell with us: and the
land shall be before you; dwell and trade ye therein, and get you
possessions therein. \bibleverse{11} And Shechem said unto her father
and unto her brethren, Let me find grace in your eyes, and what ye shall
say unto me I will give. \bibleverse{12} Ask me never so much dowry and
gift, and I will give according as ye shall say unto me: but give me the
damsel to wife. \bibleverse{13} And the sons of Jacob answered Shechem
and Hamor his father deceitfully, and said, because he had defiled Dinah
their sister: \bibleverse{14} And they said unto them, We cannot do this
thing, to give our sister to one that is uncircumcised; for that were a
reproach unto us: \bibleverse{15} But in this will we consent unto you:
If ye will be as we be, that every male of you be circumcised;
\bibleverse{16} Then will we give our daughters unto you, and we will
take your daughters to us, and we will dwell with you, and we will
become one people. \bibleverse{17} But if ye will not hearken unto us,
to be circumcised; then will we take our daughter, and we will be gone.
\bibleverse{18} And their words pleased Hamor, and Shechem Hamor's son.
\bibleverse{19} And the young man deferred not to do the thing, because
he had delight in Jacob's daughter: and he was more honourable than all
the house of his father.
\bibleverse{20} And Hamor and Shechem his son came unto the gate of
their city, and communed with the men of their city, saying,
\bibleverse{21} These men are peaceable with us; therefore let them
dwell in the land, and trade therein; for the land, behold, it is large
enough for them; let us take their daughters to us for wives, and let us
give them our daughters. \bibleverse{22} Only herein will the men
consent unto us for to dwell with us, to be one people, if every male
among us be circumcised, as they are circumcised. \bibleverse{23} Shall
not their cattle and their substance and every beast of theirs be ours?
only let us consent unto them, and they will dwell with us.
\bibleverse{24} And unto Hamor and unto Shechem his son hearkened all
that went out of the gate of his city; and every male was circumcised,
all that went out of the gate of his city.
\bibleverse{25} And it came to pass on the third day, when they were
sore, that two of the sons of Jacob, Simeon and Levi, Dinah's brethren,
took each man his sword, and came upon the city boldly, and slew all the
males. \bibleverse{26} And they slew Hamor and Shechem his son with the
edge of the sword, and took Dinah out of Shechem's house, and went
out.\footnote{\textbf{34:26} edge: Heb. mouth} \bibleverse{27} The sons
of Jacob came upon the slain, and spoiled the city, because they had
defiled their sister. \bibleverse{28} They took their sheep, and their
oxen, and their asses, and that which was in the city, and that which
was in the field, \bibleverse{29} And all their wealth, and all their
little ones, and their wives took they captive, and spoiled even all
that was in the house. \bibleverse{30} And Jacob said to Simeon and
Levi, Ye have troubled me to make me to stink among the inhabitants of
the land, among the Canaanites and the Perizzites: and I being few in
number, they shall gather themselves together against me, and slay me;
and I shall be destroyed, I and my house. \bibleverse{31} And they said,
Should he deal with our sister as with an harlot?
\hypertarget{section-34}{%
\section{35}\label{section-34}}
\bibleverse{1} And God said unto Jacob, Arise, go up to Beth-el, and
dwell there: and make there an altar unto God, that appeared unto thee
when thou fleddest from the face of Esau thy brother. \bibleverse{2}
Then Jacob said unto his household, and to all that were with him, Put
away the strange gods that are among you, and be clean, and change your
garments: \bibleverse{3} And let us arise, and go up to Beth-el; and I
will make there an altar unto God, who answered me in the day of my
distress, and was with me in the way which I went. \bibleverse{4} And
they gave unto Jacob all the strange gods which were in their hand, and
all their earrings which were in their ears; and Jacob hid them under
the oak which was by Shechem. \bibleverse{5} And they journeyed: and the
terror of God was upon the cities that were round about them, and they
did not pursue after the sons of Jacob.
\bibleverse{6} So Jacob came to Luz, which is in the land of Canaan,
that is, Beth-el, he and all the people that were with him.
\bibleverse{7} And he built there an altar, and called the place
El-beth-el: because there God appeared unto him, when he fled from the
face of his brother.\footnote{\textbf{35:7} El-beth-el: that is, The God
of Beth-el} \bibleverse{8} But Deborah Rebekah's nurse died, and she
was buried beneath Beth-el under an oak: and the name of it was called
Allon-bachuth.\footnote{\textbf{35:8} Allon-bachuth: that is, The oak of
weeping}
\bibleverse{9} And God appeared unto Jacob again, when he came out of
Padan-aram, and blessed him. \bibleverse{10} And God said unto him, Thy
name is Jacob: thy name shall not be called any more Jacob, but Israel
shall be thy name: and he called his name Israel. \bibleverse{11} And
God said unto him, I am God Almighty: be fruitful and multiply; a nation
and a company of nations shall be of thee, and kings shall come out of
thy loins; \bibleverse{12} And the land which I gave Abraham and Isaac,
to thee I will give it, and to thy seed after thee will I give the land.
\bibleverse{13} And God went up from him in the place where he talked
with him. \bibleverse{14} And Jacob set up a pillar in the place where
he talked with him, even a pillar of stone: and he poured a drink
offering thereon, and he poured oil thereon. \bibleverse{15} And Jacob
called the name of the place where God spake with him, Beth-el.
\bibleverse{16} And they journeyed from Beth-el; and there was but a
little way to come to Ephrath: and Rachel travailed, and she had hard
labour.\footnote{\textbf{35:16} a little\ldots: Heb. a little piece of
ground} \bibleverse{17} And it came to pass, when she was in hard
labour, that the midwife said unto her, Fear not; thou shalt have this
son also. \bibleverse{18} And it came to pass, as her soul was in
departing, (for she died) that she called his name Ben-oni: but his
father called him Benjamin.\footnote{\textbf{35:18} Ben-oni: that is,
The son of my sorrow}\footnote{\textbf{35:18} Benjamin: that is, The son
of the right hand} \bibleverse{19} And Rachel died, and was
buried in the way to Ephrath, which is Beth-lehem. \bibleverse{20} And
Jacob set a pillar upon her grave: that is the pillar of Rachel's grave
unto this day.
\bibleverse{21} And Israel journeyed, and spread his tent beyond the
tower of Edar. \bibleverse{22} And it came to pass, when Israel dwelt in
that land, that Reuben went and lay with Bilhah his father's concubine:
and Israel heard it. Now the sons of Jacob were twelve: \bibleverse{23}
The sons of Leah; Reuben, Jacob's firstborn, and Simeon, and Levi, and
Judah, and Issachar, and Zebulun: \bibleverse{24} The sons of Rachel;
Joseph, and Benjamin: \bibleverse{25} And the sons of Bilhah, Rachel's
handmaid; Dan, and Naphtali: \bibleverse{26} And the sons of Zilpah,
Leah's handmaid; Gad, and Asher: these are the sons of Jacob, which were
born to him in Padan-aram.
\bibleverse{27} And Jacob came unto Isaac his father unto Mamre, unto
the city of Arbah, which is Hebron, where Abraham and Isaac sojourned.
\bibleverse{28} And the days of Isaac were an hundred and fourscore
years. \bibleverse{29} And Isaac gave up the ghost, and died, and was
gathered unto his people, being old and full of days: and his sons Esau
and Jacob buried him.
\hypertarget{section-35}{%
\section{36}\label{section-35}}
\bibleverse{1} Now these are the generations of Esau, who is Edom.
\bibleverse{2} Esau took his wives of the daughters of Canaan; Adah the
daughter of Elon the Hittite, and Aholibamah the daughter of Anah the
daughter of Zibeon the Hivite; \bibleverse{3} And Bashemath Ishmael's
daughter, sister of Nebajoth.\footnote{\textbf{36:3} Bashemath: or,
Mahalath} \bibleverse{4} And Adah bare to Esau Eliphaz; and Bashemath
bare Reuel; \bibleverse{5} And Aholibamah bare Jeush, and Jaalam, and
Korah: these are the sons of Esau, which were born unto him in the land
of Canaan. \bibleverse{6} And Esau took his wives, and his sons, and his
daughters, and all the persons of his house, and his cattle, and all his
beasts, and all his substance, which he had got in the land of Canaan;
and went into the country from the face of his brother Jacob.\footnote{\textbf{36:6}
persons: Heb. souls} \bibleverse{7} For their riches were more than
that they might dwell together; and the land wherein they were strangers
could not bear them because of their cattle. \bibleverse{8} Thus dwelt
Esau in mount Seir: Esau is Edom.
\bibleverse{9} And these are the generations of Esau the father of the
Edomites in mount Seir:\footnote{\textbf{36:9} the Edomites: Heb. Edom}
\bibleverse{10} These are the names of Esau's sons; Eliphaz the son of
Adah the wife of Esau, Reuel the son of Bashemath the wife of Esau.
\bibleverse{11} And the sons of Eliphaz were Teman, Omar, Zepho, and
Gatam, and Kenaz.\footnote{\textbf{36:11} Zepho: or, Zephi}
\bibleverse{12} And Timna was concubine to Eliphaz Esau's son; and she
bare to Eliphaz Amalek: these were the sons of Adah Esau's wife.
\bibleverse{13} And these are the sons of Reuel; Nahath, and Zerah,
Shammah, and Mizzah: these were the sons of Bashemath Esau's wife.
\bibleverse{14} And these were the sons of Aholibamah, the daughter of
Anah the daughter of Zibeon, Esau's wife: and she bare to Esau Jeush,
and Jaalam, and Korah.
\bibleverse{15} These were dukes of the sons of Esau: the sons of
Eliphaz the firstborn son of Esau; duke Teman, duke Omar, duke Zepho,
duke Kenaz, \bibleverse{16} Duke Korah, duke Gatam, and duke Amalek:
these are the dukes that came of Eliphaz in the land of Edom; these were
the sons of Adah.
\bibleverse{17} And these are the sons of Reuel Esau's son; duke Nahath,
duke Zerah, duke Shammah, duke Mizzah: these are the dukes that came of
Reuel in the land of Edom; these are the sons of Bashemath Esau's wife.
\bibleverse{18} And these are the sons of Aholibamah Esau's wife; duke
Jeush, duke Jaalam, duke Korah: these were the dukes that came of
Aholibamah the daughter of Anah, Esau's wife. \bibleverse{19} These are
the sons of Esau, who is Edom, and these are their dukes.
\bibleverse{20} These are the sons of Seir the Horite, who inhabited the
land; Lotan, and Shobal, and Zibeon, and Anah, \bibleverse{21} And
Dishon, and Ezer, and Dishan: these are the dukes of the Horites, the
children of Seir in the land of Edom. \bibleverse{22} And the children
of Lotan were Hori and Hemam; and Lotan's sister was Timna.\footnote{\textbf{36:22}
Hemam: or, Homam} \bibleverse{23} And the children of Shobal were
these; Alvan, and Manahath, and Ebal, Shepho, and
Onam.\footnote{\textbf{36:23} Alvan: or,
Alian}\footnote{\textbf{36:23} Shepho: or, Shephi} \bibleverse{24} And
these are the children of Zibeon; both Ajah, and Anah: this was that
Anah that found the mules in the wilderness, as he fed the asses of
Zibeon his father. \bibleverse{25} And the children of Anah were these;
Dishon, and Aholibamah the daughter of Anah. \bibleverse{26} And these
are the children of Dishon; Hemdan, and Eshban, and Ithran, and
Cheran.\footnote{\textbf{36:26} Hemdan: or, Amram} \bibleverse{27} The
children of Ezer are these; Bilhan, and Zaavan, and Akan.\footnote{\textbf{36:27}
Akan: or, Jakan} \bibleverse{28} The children of Dishan are these; Uz,
and Aran. \bibleverse{29} These are the dukes that came of the Horites;
duke Lotan, duke Shobal, duke Zibeon, duke Anah, \bibleverse{30} Duke
Dishon, duke Ezer, duke Dishan: these are the dukes that came of Hori,
among their dukes in the land of Seir.
\bibleverse{31} And these are the kings that reigned in the land of
Edom, before there reigned any king over the children of Israel.
\bibleverse{32} And Bela the son of Beor reigned in Edom: and the name
of his city was Dinhabah. \bibleverse{33} And Bela died, and Jobab the
son of Zerah of Bozrah reigned in his stead. \bibleverse{34} And Jobab
died, and Husham of the land of Temani reigned in his stead.
\bibleverse{35} And Husham died, and Hadad the son of Bedad, who smote
Midian in the field of Moab, reigned in his stead: and the name of his
city was Avith. \bibleverse{36} And Hadad died, and Samlah of Masrekah
reigned in his stead. \bibleverse{37} And Samlah died, and Saul of
Rehoboth by the river reigned in his stead. \bibleverse{38} And Saul
died, and Baal-hanan the son of Achbor reigned in his stead.
\bibleverse{39} And Baal-hanan the son of Achbor died, and Hadar reigned
in his stead: and the name of his city was Pau; and his wife's name was
Mehetabel, the daughter of Matred, the daughter of Mezahab.\footnote{\textbf{36:39}
Hadar, Pau: or, Hadad, Pai: after his death was an Aristocracy}
\bibleverse{40} And these are the names of the dukes that came of Esau,
according to their families, after their places, by their names; duke
Timnah, duke Alvah, duke Jetheth,\footnote{\textbf{36:40} Alvah: or,
Aliah} \bibleverse{41} Duke Aholibamah, duke Elah, duke Pinon,
\bibleverse{42} Duke Kenaz, duke Teman, duke Mibzar, \bibleverse{43}
Duke Magdiel, duke Iram: these be the dukes of Edom, according to their
habitations in the land of their possession: he is Esau the father of
the Edomites.\footnote{\textbf{36:43} the Edomites: Heb. Edom}
\hypertarget{section-36}{%
\section{37}\label{section-36}}
\bibleverse{1} And Jacob dwelt in the land wherein his father was a
stranger, in the land of Canaan.\footnote{\textbf{37:1} wherein\ldots:
Heb. of his father's sojournings} \bibleverse{2} These are the
generations of Jacob. Joseph, being seventeen years old, was feeding the
flock with his brethren; and the lad was with the sons of Bilhah, and
with the sons of Zilpah, his father's wives: and Joseph brought unto his
father their evil report. \bibleverse{3} Now Israel loved Joseph more
than all his children, because he was the son of his old age: and he
made him a coat of many colours.\footnote{\textbf{37:3} colours: or,
pieces} \bibleverse{4} And when his brethren saw that their father
loved him more than all his brethren, they hated him, and could not
speak peaceably unto him.
\bibleverse{5} And Joseph dreamed a dream, and he told it his brethren:
and they hated him yet the more. \bibleverse{6} And he said unto them,
Hear, I pray you, this dream which I have dreamed: \bibleverse{7} For,
behold, we were binding sheaves in the field, and, lo, my sheaf arose,
and also stood upright; and, behold, your sheaves stood round about, and
made obeisance to my sheaf. \bibleverse{8} And his brethren said to him,
Shalt thou indeed reign over us? or shalt thou indeed have dominion over
us? And they hated him yet the more for his dreams, and for his words.
\bibleverse{9} And he dreamed yet another dream, and told it his
brethren, and said, Behold, I have dreamed a dream more; and, behold,
the sun and the moon and the eleven stars made obeisance to me.
\bibleverse{10} And he told it to his father, and to his brethren: and
his father rebuked him, and said unto him, What is this dream that thou
hast dreamed? Shall I and thy mother and thy brethren indeed come to bow
down ourselves to thee to the earth? \bibleverse{11} And his brethren
envied him; but his father observed the saying.
\bibleverse{12} And his brethren went to feed their father's flock in
Shechem. \bibleverse{13} And Israel said unto Joseph, Do not thy
brethren feed the flock in Shechem? come, and I will send thee unto
them. And he said to him, Here am I. \bibleverse{14} And he said to him,
Go, I pray thee, see whether it be well with thy brethren, and well with
the flocks; and bring me word again. So he sent him out of the vale of
Hebron, and he came to Shechem.\footnote{\textbf{37:14} see\ldots: Heb.
see the peace of thy brethren, etc.}
\bibleverse{15} And a certain man found him, and, behold, he was
wandering in the field: and the man asked him, saying, What seekest
thou? \bibleverse{16} And he said, I seek my brethren: tell me, I pray
thee, where they feed their flocks. \bibleverse{17} And the man said,
They are departed hence; for I heard them say, Let us go to Dothan. And
Joseph went after his brethren, and found them in Dothan.
\bibleverse{18} And when they saw him afar off, even before he came near
unto them, they conspired against him to slay him. \bibleverse{19} And
they said one to another, Behold, this dreamer cometh.\footnote{\textbf{37:19}
dreamer: Heb. master of dreams} \bibleverse{20} Come now therefore,
and let us slay him, and cast him into some pit, and we will say, Some
evil beast hath devoured him: and we shall see what will become of his
dreams. \bibleverse{21} And Reuben heard it, and he delivered him out of
their hands; and said, Let us not kill him. \bibleverse{22} And Reuben
said unto them, Shed no blood, but cast him into this pit that is in the
wilderness, and lay no hand upon him; that he might rid him out of their
hands, to deliver him to his father again.
\bibleverse{23} And it came to pass, when Joseph was come unto his
brethren, that they stript Joseph out of his coat, his coat of many
colours that was on him;\footnote{\textbf{37:23} colours: or, pieces}
\bibleverse{24} And they took him, and cast him into a pit: and the pit
was empty, there was no water in it. \bibleverse{25} And they sat down
to eat bread: and they lifted up their eyes and looked, and, behold, a
company of Ishmeelites came from Gilead with their camels bearing
spicery and balm and myrrh, going to carry it down to Egypt.
\bibleverse{26} And Judah said unto his brethren, What profit is it if
we slay our brother, and conceal his blood? \bibleverse{27} Come, and
let us sell him to the Ishmeelites, and let not our hand be upon him;
for he is our brother and our flesh. And his brethren were
content.\footnote{\textbf{37:27} were\ldots: Heb. hearkened}
\bibleverse{28} Then there passed by Midianites merchantmen; and they
drew and lifted up Joseph out of the pit, and sold Joseph to the
Ishmeelites for twenty pieces of silver: and they brought Joseph into
Egypt.
\bibleverse{29} And Reuben returned unto the pit; and, behold, Joseph
was not in the pit; and he rent his clothes. \bibleverse{30} And he
returned unto his brethren, and said, The child is not; and I, whither
shall I go?
\bibleverse{31} And they took Joseph's coat, and killed a kid of the
goats, and dipped the coat in the blood; \bibleverse{32} And they sent
the coat of many colours, and they brought it to their father; and said,
This have we found: know now whether it be thy son's coat or no.
\bibleverse{33} And he knew it, and said, It is my son's coat; an evil
beast hath devoured him; Joseph is without doubt rent in pieces.
\bibleverse{34} And Jacob rent his clothes, and put sackcloth upon his
loins, and mourned for his son many days. \bibleverse{35} And all his
sons and all his daughters rose up to comfort him; but he refused to be
comforted; and he said, For I will go down into the grave unto my son
mourning. Thus his father wept for him. \bibleverse{36} And the
Midianites sold him into Egypt unto Potiphar, an officer of Pharaoh's,
and captain of the guard.\footnote{\textbf{37:36} officer: Heb. eunuch:
but the word doth signify not only eunuchs, but also chamberlains,
courtiers, and officers}\footnote{\textbf{37:36} captain\ldots: or,
chief marshal: Heb. chief of the slaughter men, or executioners}
\hypertarget{section-37}{%
\section{38}\label{section-37}}
\bibleverse{1} And it came to pass at that time, that Judah went down
from his brethren, and turned in to a certain Adullamite, whose name was
Hirah. \bibleverse{2} And Judah saw there a daughter of a certain
Canaanite, whose name was Shuah; and he took her, and went in unto her.
\bibleverse{3} And she conceived, and bare a son; and he called his name
Er. \bibleverse{4} And she conceived again, and bare a son; and she
called his name Onan. \bibleverse{5} And she yet again conceived, and
bare a son; and called his name Shelah: and he was at Chezib, when she
bare him. \bibleverse{6} And Judah took a wife for Er his firstborn,
whose name was Tamar. \bibleverse{7} And Er, Judah's firstborn, was
wicked in the sight of the LORD; and the LORD slew him. \bibleverse{8}
And Judah said unto Onan, Go in unto thy brother's wife, and marry her,
and raise up seed to thy brother. \bibleverse{9} And Onan knew that the
seed should not be his; and it came to pass, when he went in unto his
brother's wife, that he spilled it on the ground, lest that he should
give seed to his brother. \bibleverse{10} And the thing which he did
displeased the LORD: wherefore he slew him also.\footnote{\textbf{38:10}
displeased\ldots: Heb. was evil in the eyes of the Lord}
\bibleverse{11} Then said Judah to Tamar his daughter in law, Remain a
widow at thy father's house, till Shelah my son be grown: for he said,
Lest peradventure he die also, as his brethren did. And Tamar went and
dwelt in her father's house.
\bibleverse{12} And in process of time the daughter of Shuah Judah's
wife died; and Judah was comforted, and went up unto his sheepshearers
to Timnath, he and his friend Hirah the Adullamite.\footnote{\textbf{38:12}
in process\ldots: Heb. the days were multiplied} \bibleverse{13} And
it was told Tamar, saying, Behold thy father in law goeth up to Timnath
to shear his sheep. \bibleverse{14} And she put her widow's garments off
from her, and covered her with a vail, and wrapped herself, and sat in
an open place, which is by the way to Timnath; for she saw that Shelah
was grown, and she was not given unto him to wife.\footnote{\textbf{38:14}
an open\ldots: Heb. the door of eyes, or, of Enajim} \bibleverse{15}
When Judah saw her, he thought her to be an harlot; because she had
covered her face. \bibleverse{16} And he turned unto her by the way, and
said, Go to, I pray thee, let me come in unto thee; (for he knew not
that she was his daughter in law.) And she said, What wilt thou give me,
that thou mayest come in unto me? \bibleverse{17} And he said, I will
send thee a kid from the flock. And she said, Wilt thou give me a
pledge, till thou send it?\footnote{\textbf{38:17} a kid: Heb. a kid of
the goats} \bibleverse{18} And he said, What pledge shall I give thee?
And she said, Thy signet, and thy bracelets, and thy staff that is in
thine hand. And he gave it her, and came in unto her, and she conceived
by him. \bibleverse{19} And she arose, and went away, and laid by her
vail from her, and put on the garments of her widowhood. \bibleverse{20}
And Judah sent the kid by the hand of his friend the Adullamite, to
receive his pledge from the woman's hand: but he found her not.
\bibleverse{21} Then he asked the men of that place, saying, Where is
the harlot, that was openly by the way side? And they said, There was no
harlot in this place.\footnote{\textbf{38:21} openly: or, in Enajim}
\bibleverse{22} And he returned to Judah, and said, I cannot find her;
and also the men of the place said, that there was no harlot in this
place. \bibleverse{23} And Judah said, Let her take it to her, lest we
be shamed: behold, I sent this kid, and thou hast not found
her.\footnote{\textbf{38:23} be shamed: Heb. become a contempt}
\bibleverse{24} And it came to pass about three months after, that it
was told Judah, saying, Tamar thy daughter in law hath played the
harlot; and also, behold, she is with child by whoredom. And Judah said,
Bring her forth, and let her be burnt. \bibleverse{25} When she was
brought forth, she sent to her father in law, saying, By the man, whose
these are, am I with child: and she said, Discern, I pray thee, whose
are these, the signet, and bracelets, and staff. \bibleverse{26} And
Judah acknowledged them, and said, She hath been more righteous than I;
because that I gave her not to Shelah my son. And he knew her again no
more.
\bibleverse{27} And it came to pass in the time of her travail, that,
behold, twins were in her womb. \bibleverse{28} And it came to pass,
when she travailed, that the one put out his hand: and the midwife took
and bound upon his hand a scarlet thread, saying, This came out first.
\bibleverse{29} And it came to pass, as he drew back his hand, that,
behold, his brother came out: and she said, How hast thou broken forth?
this breach be upon thee: therefore his name was called
Pharez.\footnote{\textbf{38:29} How hast\ldots: or, Wherefore hast thou
made this breach against thee?}\footnote{\textbf{38:29} Pharez: that is,
A breach} \bibleverse{30} And afterward came out his brother,
that had the scarlet thread upon his hand: and his name was called
Zarah.
\hypertarget{section-38}{%
\section{39}\label{section-38}}
\bibleverse{1} And Joseph was brought down to Egypt; and Potiphar, an
officer of Pharaoh, captain of the guard, an Egyptian, bought him of the
hands of the Ishmeelites, which had brought him down thither.
\bibleverse{2} And the LORD was with Joseph, and he was a prosperous
man; and he was in the house of his master the Egyptian. \bibleverse{3}
And his master saw that the LORD was with him, and that the LORD made
all that he did to prosper in his hand. \bibleverse{4} And Joseph found
grace in his sight, and he served him: and he made him overseer over his
house, and all that he had he put into his hand. \bibleverse{5} And it
came to pass from the time that he had made him overseer in his house,
and over all that he had, that the LORD blessed the Egyptian's house for
Joseph's sake; and the blessing of the LORD was upon all that he had in
the house, and in the field. \bibleverse{6} And he left all that he had
in Joseph's hand; and he knew not ought he had, save the bread which he
did eat. And Joseph was a goodly person, and well favoured.
\bibleverse{7} And it came to pass after these things, that his master's
wife cast her eyes upon Joseph; and she said, Lie with me.
\bibleverse{8} But he refused, and said unto his master's wife, Behold,
my master wotteth not what is with me in the house, and he hath
committed all that he hath to my hand; \bibleverse{9} There is none
greater in this house than I; neither hath he kept back any thing from
me but thee, because thou art his wife: how then can I do this great
wickedness, and sin against God? \bibleverse{10} And it came to pass, as
she spake to Joseph day by day, that he hearkened not unto her, to lie
by her, or to be with her. \bibleverse{11} And it came to pass about
this time, that Joseph went into the house to do his business; and there
was none of the men of the house there within. \bibleverse{12} And she
caught him by his garment, saying, Lie with me: and he left his garment
in her hand, and fled, and got him out.
\bibleverse{13} And it came to pass, when she saw that he had left his
garment in her hand, and was fled forth, \bibleverse{14} That she called
unto the men of her house, and spake unto them, saying, See, he hath
brought in an Hebrew unto us to mock us; he came in unto me to lie with
me, and I cried with a loud voice:\footnote{\textbf{39:14} loud: Heb.
great} \bibleverse{15} And it came to pass, when he heard that I
lifted up my voice and cried, that he left his garment with me, and
fled, and got him out. \bibleverse{16} And she laid up his garment by
her, until his lord came home. \bibleverse{17} And she spake unto him
according to these words, saying, The Hebrew servant, which thou hast
brought unto us, came in unto me to mock me: \bibleverse{18} And it came
to pass, as I lifted up my voice and cried, that he left his garment
with me, and fled out.
\bibleverse{19} And it came to pass, when his master heard the words of
his wife, which she spake unto him, saying, After this manner did thy
servant to me; that his wrath was kindled. \bibleverse{20} And Joseph's
master took him, and put him into the prison, a place where the king's
prisoners were bound: and he was there in the prison.
\bibleverse{21} But the LORD was with Joseph, and shewed him mercy, and
gave him favour in the sight of the keeper of the prison.\footnote{\textbf{39:21}
shewed\ldots: Heb. extended kindness unto him} \bibleverse{22} And the
keeper of the prison committed to Joseph's hand all the prisoners that
were in the prison; and whatsoever they did there, he was the doer of
it. \bibleverse{23} The keeper of the prison looked not to any thing
that was under his hand; because the LORD was with him, and that which
he did, the LORD made it to prosper.
\hypertarget{section-39}{%
\section{40}\label{section-39}}
\bibleverse{1} And it came to pass after these things, that the butler
of the king of Egypt and his baker had offended their lord the king of
Egypt. \bibleverse{2} And Pharaoh was wroth against two of his officers,
against the chief of the butlers, and against the chief of the bakers.
\bibleverse{3} And he put them in ward in the house of the captain of
the guard, into the prison, the place where Joseph was bound.
\bibleverse{4} And the captain of the guard charged Joseph with them,
and he served them: and they continued a season in ward.
\bibleverse{5} And they dreamed a dream both of them, each man his dream
in one night, each man according to the interpretation of his dream, the
butler and the baker of the king of Egypt, which were bound in the
prison. \bibleverse{6} And Joseph came in unto them in the morning, and
looked upon them, and, behold, they were sad. \bibleverse{7} And he
asked Pharaoh's officers that were with him in the ward of his lord's
house, saying, Wherefore look ye so sadly to day?\footnote{\textbf{40:7}
look\ldots: Heb. are your faces evil?} \bibleverse{8} And they said
unto him, We have dreamed a dream, and there is no interpreter of it.
And Joseph said unto them, Do not interpretations belong to God? tell me
them, I pray you. \bibleverse{9} And the chief butler told his dream to
Joseph, and said to him, In my dream, behold, a vine was before me;
\bibleverse{10} And in the vine were three branches: and it was as
though it budded, and her blossoms shot forth; and the clusters thereof
brought forth ripe grapes: \bibleverse{11} And Pharaoh's cup was in my
hand: and I took the grapes, and pressed them into Pharaoh's cup, and I
gave the cup into Pharaoh's hand. \bibleverse{12} And Joseph said unto
him, This is the interpretation of it: The three branches are three
days: \bibleverse{13} Yet within three days shall Pharaoh lift up thine
head, and restore thee unto thy place: and thou shalt deliver Pharaoh's
cup into his hand, after the former manner when thou wast his
butler.\footnote{\textbf{40:13} lift\ldots: or, reckon} \bibleverse{14}
But think on me when it shall be well with thee, and shew kindness, I
pray thee, unto me, and make mention of me unto Pharaoh, and bring me
out of this house:\footnote{\textbf{40:14} think\ldots: Heb. remember me
with thee} \bibleverse{15} For indeed I was stolen away out of the
land of the Hebrews: and here also have I done nothing that they should
put me into the dungeon. \bibleverse{16} When the chief baker saw that
the interpretation was good, he said unto Joseph, I also was in my
dream, and, behold, I had three white baskets on my head:\footnote{\textbf{40:16}
white: or, full of holes} \bibleverse{17} And in the uppermost basket
there was of all manner of bakemeats for Pharaoh; and the birds did eat
them out of the basket upon my head.\footnote{\textbf{40:17}
bakemeats\ldots: Heb. meat of Pharaoh, the work of a baker, or, cook}
\bibleverse{18} And Joseph answered and said, This is the interpretation
thereof: The three baskets are three days: \bibleverse{19} Yet within
three days shall Pharaoh lift up thy head from off thee, and shall hang
thee on a tree; and the birds shall eat thy flesh from off
thee.\footnote{\textbf{40:19} lift\ldots: or, reckon thee, and take thy
office from thee}
\bibleverse{20} And it came to pass the third day, which was Pharaoh's
birthday, that he made a feast unto all his servants: and he lifted up
the head of the chief butler and of the chief baker among his
servants.\footnote{\textbf{40:20} lifted\ldots: or, reckoned}
\bibleverse{21} And he restored the chief butler unto his butlership
again; and he gave the cup into Pharaoh's hand: \bibleverse{22} But he
hanged the chief baker: as Joseph had interpreted to them.
\bibleverse{23} Yet did not the chief butler remember Joseph, but forgat
him.
\hypertarget{section-40}{%
\section{41}\label{section-40}}
\bibleverse{1} And it came to pass at the end of two full years, that
Pharaoh dreamed: and, behold, he stood by the river. \bibleverse{2} And,
behold, there came up out of the river seven well favoured kine and
fatfleshed; and they fed in a meadow. \bibleverse{3} And, behold, seven
other kine came up after them out of the river, ill favoured and
leanfleshed; and stood by the other kine upon the brink of the river.
\bibleverse{4} And the ill favoured and leanfleshed kine did eat up the
seven well favoured and fat kine. So Pharaoh awoke. \bibleverse{5} And
he slept and dreamed the second time: and, behold, seven ears of corn
came up upon one stalk, rank and good.\footnote{\textbf{41:5} rank: Heb.
fat} \bibleverse{6} And, behold, seven thin ears and blasted with the
east wind sprung up after them. \bibleverse{7} And the seven thin ears
devoured the seven rank and full ears. And Pharaoh awoke, and, behold,
it was a dream. \bibleverse{8} And it came to pass in the morning that
his spirit was troubled; and he sent and called for all the magicians of
Egypt, and all the wise men thereof: and Pharaoh told them his dream;
but there was none that could interpret them unto Pharaoh.
\bibleverse{9} Then spake the chief butler unto Pharaoh, saying, I do
remember my faults this day: \bibleverse{10} Pharaoh was wroth with his
servants, and put me in ward in the captain of the guard's house, both
me and the chief baker: \bibleverse{11} And we dreamed a dream in one
night, I and he; we dreamed each man according to the interpretation of
his dream. \bibleverse{12} And there was there with us a young man, an
Hebrew, servant to the captain of the guard; and we told him, and he
interpreted to us our dreams; to each man according to his dream he did
interpret. \bibleverse{13} And it came to pass, as he interpreted to us,
so it was; me he restored unto mine office, and him he hanged.
\bibleverse{14} Then Pharaoh sent and called Joseph, and they brought
him hastily out of the dungeon: and he shaved himself, and changed his
raiment, and came in unto Pharaoh.\footnote{\textbf{41:14}
brought\ldots: Heb. made him run} \bibleverse{15} And Pharaoh said
unto Joseph, I have dreamed a dream, and there is none that can
interpret it: and I have heard say of thee, that thou canst understand a
dream to interpret it.\footnote{\textbf{41:15} thou\ldots: or, when thou
hearest a dream thou canst interpret it} \bibleverse{16} And Joseph
answered Pharaoh, saying, It is not in me: God shall give Pharaoh an
answer of peace.
\bibleverse{17} And Pharaoh said unto Joseph, In my dream, behold, I
stood upon the bank of the river: \bibleverse{18} And, behold, there
came up out of the river seven kine, fatfleshed and well favoured; and
they fed in a meadow: \bibleverse{19} And, behold, seven other kine came
up after them, poor and very ill favoured and leanfleshed, such as I
never saw in all the land of Egypt for badness: \bibleverse{20} And the
lean and the ill favoured kine did eat up the first seven fat kine:
\bibleverse{21} And when they had eaten them up, it could not be known
that they had eaten them; but they were still ill favoured, as at the
beginning. So I awoke.\footnote{\textbf{41:21} eaten\ldots: Heb. come to
the inward parts of them} \bibleverse{22} And I saw in my dream, and,
behold, seven ears came up in one stalk, full and good: \bibleverse{23}
And, behold, seven ears, withered, thin, and blasted with the east wind,
sprung up after them:\footnote{\textbf{41:23} withered: or, small}
\bibleverse{24} And the thin ears devoured the seven good ears: and I
told this unto the magicians; but there was none that could declare it
to me.
\bibleverse{25} And Joseph said unto Pharaoh, The dream of Pharaoh is
one: God hath shewed Pharaoh what he is about to do. \bibleverse{26} The
seven good kine are seven years; and the seven good ears are seven
years: the dream is one. \bibleverse{27} And the seven thin and ill
favoured kine that came up after them are seven years; and the seven
empty ears blasted with the east wind shall be seven years of famine.
\bibleverse{28} This is the thing which I have spoken unto Pharaoh: What
God is about to do he sheweth unto Pharaoh. \bibleverse{29} Behold,
there come seven years of great plenty throughout all the land of Egypt:
\bibleverse{30} And there shall arise after them seven years of famine;
and all the plenty shall be forgotten in the land of Egypt; and the
famine shall consume the land; \bibleverse{31} And the plenty shall not
be known in the land by reason of that famine following; for it shall be
very grievous.\footnote{\textbf{41:31} grievous: Heb. heavy}
\bibleverse{32} And for that the dream was doubled unto Pharaoh twice;
it is because the thing is established by God, and God will shortly
bring it to pass.\footnote{\textbf{41:32} established\ldots: or,
prepared of God}
\bibleverse{33} Now therefore let Pharaoh look out a man discreet and
wise, and set him over the land of Egypt. \bibleverse{34} Let Pharaoh do
this, and let him appoint officers over the land, and take up the fifth
part of the land of Egypt in the seven plenteous years.\footnote{\textbf{41:34}
officers: or, overseers} \bibleverse{35} And let them gather all the
food of those good years that come, and lay up corn under the hand of
Pharaoh, and let them keep food in the cities. \bibleverse{36} And that
food shall be for store to the land against the seven years of famine,
which shall be in the land of Egypt; that the land perish not through
the famine.\footnote{\textbf{41:36} perish\ldots: Heb. be not cut off}
\bibleverse{37} And the thing was good in the eyes of Pharaoh, and in
the eyes of all his servants. \bibleverse{38} And Pharaoh said unto his
servants, Can we find such a one as this is, a man in whom the Spirit of
God is? \bibleverse{39} And Pharaoh said unto Joseph, Forasmuch as God
hath shewed thee all this, there is none so discreet and wise as thou
art: \bibleverse{40} Thou shalt be over my house, and according unto thy
word shall all my people be ruled: only in the throne will I be greater
than thou.\footnote{\textbf{41:40} be ruled: Heb. be armed, or, kiss}
\bibleverse{41} And Pharaoh said unto Joseph, See, I have set thee over
all the land of Egypt. \bibleverse{42} And Pharaoh took off his ring
from his hand, and put it upon Joseph's hand, and arrayed him in
vestures of fine linen, and put a gold chain about his neck;\footnote{\textbf{41:42}
fine\ldots: or, silk} \bibleverse{43} And he made him to ride in the
second chariot which he had; and they cried before him, Bow the knee:
and he made him ruler over all the land of Egypt.\footnote{\textbf{41:43}
Bow\ldots: or, Tender father: Heb. Abrech} \bibleverse{44} And Pharaoh
said unto Joseph, I am Pharaoh, and without thee shall no man lift up
his hand or foot in all the land of Egypt. \bibleverse{45} And Pharaoh
called Joseph's name Zaphnath-paaneah; and he gave him to wife Asenath
the daughter of Poti-pherah priest of On. And Joseph went out over all
the land of Egypt.\footnote{\textbf{41:45} Zaphnath-paaneah: which in
the Coptic signifies, A revealer of secrets, or, The man to whom secrets
are revealed}\footnote{\textbf{41:45} priest: or, prince}
\bibleverse{46} And Joseph was thirty years old when he stood before
Pharaoh king of Egypt. And Joseph went out from the presence of Pharaoh,
and went throughout all the land of Egypt. \bibleverse{47} And in the
seven plenteous years the earth brought forth by handfuls.
\bibleverse{48} And he gathered up all the food of the seven years,
which were in the land of Egypt, and laid up the food in the cities: the
food of the field, which was round about every city, laid he up in the
same. \bibleverse{49} And Joseph gathered corn as the sand of the sea,
very much, until he left numbering; for it was without number.
\bibleverse{50} And unto Joseph were born two sons before the years of
famine came, which Asenath the daughter of Poti-pherah priest of On bare
unto him.\footnote{\textbf{41:50} priest: or, prince} \bibleverse{51}
And Joseph called the name of the firstborn Manasseh: For God, said he,
hath made me forget all my toil, and all my father's house.\footnote{\textbf{41:51}
Manasseh: that is, Forgetting} \bibleverse{52} And the name of the
second called he Ephraim: For God hath caused me to be fruitful in the
land of my affliction.\footnote{\textbf{41:52} Ephraim: that is,
Fruitful}
\bibleverse{53} And the seven years of plenteousness, that was in the
land of Egypt, were ended. \bibleverse{54} And the seven years of dearth
began to come, according as Joseph had said: and the dearth was in all
lands; but in all the land of Egypt there was bread. \bibleverse{55} And
when all the land of Egypt was famished, the people cried to Pharaoh for
bread: and Pharaoh said unto all the Egyptians, Go unto Joseph; what he
saith to you, do. \bibleverse{56} And the famine was over all the face
of the earth: and Joseph opened all the storehouses, and sold unto the
Egyptians; and the famine waxed sore in the land of Egypt.\footnote{\textbf{41:56}
all the storehouses: Heb. all wherein was} \bibleverse{57} And all
countries came into Egypt to Joseph for to buy corn; because that the
famine was so sore in all lands.
\hypertarget{section-41}{%
\section{42}\label{section-41}}
\bibleverse{1} Now when Jacob saw that there was corn in Egypt, Jacob
said unto his sons, Why do ye look one upon another? \bibleverse{2} And
he said, Behold, I have heard that there is corn in Egypt: get you down
thither, and buy for us from thence; that we may live, and not die.
\bibleverse{3} And Joseph's ten brethren went down to buy corn in Egypt.
\bibleverse{4} But Benjamin, Joseph's brother, Jacob sent not with his
brethren; for he said, Lest peradventure mischief befall him.
\bibleverse{5} And the sons of Israel came to buy corn among those that
came: for the famine was in the land of Canaan. \bibleverse{6} And
Joseph was the governor over the land, and he it was that sold to all
the people of the land: and Joseph's brethren came, and bowed down
themselves before him with their faces to the earth.
\bibleverse{7} And Joseph saw his brethren, and he knew them, but made
himself strange unto them, and spake roughly unto them; and he said unto
them, Whence come ye? And they said, From the land of Canaan to buy
food.\footnote{\textbf{42:7} roughly\ldots: Heb. hard things with them}
\bibleverse{8} And Joseph knew his brethren, but they knew not him.
\bibleverse{9} And Joseph remembered the dreams which he dreamed of
them, and said unto them, Ye are spies; to see the nakedness of the land
ye are come. \bibleverse{10} And they said unto him, Nay, my lord, but
to buy food are thy servants come. \bibleverse{11} We are all one man's
sons; we are true men, thy servants are no spies. \bibleverse{12} And he
said unto them, Nay, but to see the nakedness of the land ye are come.
\bibleverse{13} And they said, Thy servants are twelve brethren, the
sons of one man in the land of Canaan; and, behold, the youngest is this
day with our father, and one is not. \bibleverse{14} And Joseph said
unto them, That is it that I spake unto you, saying, Ye are spies:
\bibleverse{15} Hereby ye shall be proved: By the life of Pharaoh ye
shall not go forth hence, except your youngest brother come hither.
\bibleverse{16} Send one of you, and let him fetch your brother, and ye
shall be kept in prison, that your words may be proved, whether there be
any truth in you: or else by the life of Pharaoh surely ye are
spies.\footnote{\textbf{42:16} kept\ldots: Heb. bound} \bibleverse{17}
And he put them all together into ward three days.\footnote{\textbf{42:17}
put: Heb. gathered} \bibleverse{18} And Joseph said unto them the
third day, This do, and live; for I fear God: \bibleverse{19} If ye be
true men, let one of your brethren be bound in the house of your prison:
go ye, carry corn for the famine of your houses: \bibleverse{20} But
bring your youngest brother unto me; so shall your words be verified,
and ye shall not die. And they did so.
\bibleverse{21} And they said one to another, We are verily guilty
concerning our brother, in that we saw the anguish of his soul, when he
besought us, and we would not hear; therefore is this distress come upon
us. \bibleverse{22} And Reuben answered them, saying, Spake I not unto
you, saying, Do not sin against the child; and ye would not hear?
therefore, behold, also his blood is required. \bibleverse{23} And they
knew not that Joseph understood them; for he spake unto them by an
interpreter.\footnote{\textbf{42:23} he spake\ldots: Heb. an interpreter
was between them} \bibleverse{24} And he turned himself about from
them, and wept; and returned to them again, and communed with them, and
took from them Simeon, and bound him before their eyes.
\bibleverse{25} Then Joseph commanded to fill their sacks with corn, and
to restore every man's money into his sack, and to give them provision
for the way: and thus did he unto them. \bibleverse{26} And they laded
their asses with the corn, and departed thence. \bibleverse{27} And as
one of them opened his sack to give his ass provender in the inn, he
espied his money; for, behold, it was in his sack's mouth.
\bibleverse{28} And he said unto his brethren, My money is restored;
and, lo, it is even in my sack: and their heart failed them, and they
were afraid, saying one to another, What is this that God hath done unto
us?\footnote{\textbf{42:28} failed\ldots: Heb. went forth}
\bibleverse{29} And they came unto Jacob their father unto the land of
Canaan, and told him all that befell unto them; saying, \bibleverse{30}
The man, who is the lord of the land, spake roughly to us, and took us
for spies of the country.\footnote{\textbf{42:30} roughly\ldots: Heb.
with us hard things} \bibleverse{31} And we said unto him, We are true
men; we are no spies: \bibleverse{32} We be twelve brethren, sons of our
father; one is not, and the youngest is this day with our father in the
land of Canaan. \bibleverse{33} And the man, the lord of the country,
said unto us, Hereby shall I know that ye are true men; leave one of
your brethren here with me, and take food for the famine of your
households, and be gone: \bibleverse{34} And bring your youngest brother
unto me: then shall I know that ye are no spies, but that ye are true
men: so will I deliver you your brother, and ye shall traffick in the
land.
\bibleverse{35} And it came to pass as they emptied their sacks, that,
behold, every man's bundle of money was in his sack: and when both they
and their father saw the bundles of money, they were afraid.
\bibleverse{36} And Jacob their father said unto them, Me have ye
bereaved of my children: Joseph is not, and Simeon is not, and ye will
take Benjamin away: all these things are against me. \bibleverse{37} And
Reuben spake unto his father, saying, Slay my two sons, if I bring him
not to thee: deliver him into my hand, and I will bring him to thee
again. \bibleverse{38} And he said, My son shall not go down with you;
for his brother is dead, and he is left alone: if mischief befall him by
the way in the which ye go, then shall ye bring down my gray hairs with
sorrow to the grave.
\hypertarget{section-42}{%
\section{43}\label{section-42}}
\bibleverse{1} And the famine was sore in the land. \bibleverse{2} And
it came to pass, when they had eaten up the corn which they had brought
out of Egypt, their father said unto them, Go again, buy us a little
food. \bibleverse{3} And Judah spake unto him, saying, The man did
solemnly protest unto us, saying, Ye shall not see my face, except your
brother be with you.\footnote{\textbf{43:3} did\ldots: Heb. protesting
protested} \bibleverse{4} If thou wilt send our brother with us, we
will go down and buy thee food: \bibleverse{5} But if thou wilt not send
him, we will not go down: for the man said unto us, Ye shall not see my
face, except your brother be with you. \bibleverse{6} And Israel said,
Wherefore dealt ye so ill with me, as to tell the man whether ye had yet
a brother? \bibleverse{7} And they said, The man asked us straitly of
our state, and of our kindred, saying, Is your father yet alive? have ye
another brother? and we told him according to the tenor of these words:
could we certainly know that he would say, Bring your brother
down?\footnote{\textbf{43:7} asked\ldots: Heb. asking asked
us}\footnote{\textbf{43:7} tenor: Heb. mouth}\footnote{\textbf{43:7}
could\ldots: Heb. knowing could we know} \bibleverse{8} And Judah said
unto Israel his father, Send the lad with me, and we will arise and go;
that we may live, and not die, both we, and thou, and also our little
ones. \bibleverse{9} I will be surety for him; of my hand shalt thou
require him: if I bring him not unto thee, and set him before thee, then
let me bear the blame for ever: \bibleverse{10} For except we had
lingered, surely now we had returned this second time.\footnote{\textbf{43:10}
this: or, twice by this}
\bibleverse{11} And their father Israel said unto them, If it must be so
now, do this; take of the best fruits in the land in your vessels, and
carry down the man a present, a little balm, and a little honey, spices,
and myrrh, nuts, and almonds: \bibleverse{12} And take double money in
your hand; and the money that was brought again in the mouth of your
sacks, carry it again in your hand; peradventure it was an oversight:
\bibleverse{13} Take also your brother, and arise, go again unto the
man: \bibleverse{14} And God Almighty give you mercy before the man,
that he may send away your other brother, and Benjamin. If I be bereaved
of my children, I am bereaved.\footnote{\textbf{43:14} If\ldots: or, And
I, as I have been, etc.}
\bibleverse{15} And the men took that present, and they took double
money in their hand, and Benjamin; and rose up, and went down to Egypt,
and stood before Joseph. \bibleverse{16} And when Joseph saw Benjamin
with them, he said to the ruler of his house, Bring these men home, and
slay, and make ready; for these men shall dine with me at
noon.\footnote{\textbf{43:16} slay: Heb. kill a
killing}\footnote{\textbf{43:16} dine: Heb. eat} \bibleverse{17} And the
man did as Joseph bade; and the man brought the men into Joseph's house.
\bibleverse{18} And the men were afraid, because they were brought into
Joseph's house; and they said, Because of the money that was returned in
our sacks at the first time are we brought in; that he may seek occasion
against us, and fall upon us, and take us for bondmen, and our
asses.\footnote{\textbf{43:18} seek\ldots: Heb. roll himself upon us}
\bibleverse{19} And they came near to the steward of Joseph's house, and
they communed with him at the door of the house, \bibleverse{20} And
said, O sir, we came indeed down at the first time to buy
food:\footnote{\textbf{43:20} we\ldots: Heb. coming down we came down}
\bibleverse{21} And it came to pass, when we came to the inn, that we
opened our sacks, and, behold, every man's money was in the mouth of his
sack, our money in full weight: and we have brought it again in our
hand. \bibleverse{22} And other money have we brought down in our hands
to buy food: we cannot tell who put our money in our sacks.
\bibleverse{23} And he said, Peace be to you, fear not: your God, and
the God of your father, hath given you treasure in your sacks: I had
your money. And he brought Simeon out unto them.\footnote{\textbf{43:23}
I had\ldots: Heb. your money came to me} \bibleverse{24} And the man
brought the men into Joseph's house, and gave them water, and they
washed their feet; and he gave their asses provender. \bibleverse{25}
And they made ready the present against Joseph came at noon: for they
heard that they should eat bread there.
\bibleverse{26} And when Joseph came home, they brought him the present
which was in their hand into the house, and bowed themselves to him to
the earth. \bibleverse{27} And he asked them of their welfare, and said,
Is your father well, the old man of whom ye spake? Is he yet
alive?\footnote{\textbf{43:27} welfare: Heb.
peace}\footnote{\textbf{43:27} Is your\ldots: Heb. Is there peace to
your father?} \bibleverse{28} And they answered, Thy servant our father is
in good health, he is yet alive. And they bowed down their heads, and
made obeisance. \bibleverse{29} And he lifted up his eyes, and saw his
brother Benjamin, his mother's son, and said, Is this your younger
brother, of whom ye spake unto me? And he said, God be gracious unto
thee, my son. \bibleverse{30} And Joseph made haste; for his bowels did
yearn upon his brother: and he sought where to weep; and he entered into
his chamber, and wept there. \bibleverse{31} And he washed his face, and
went out, and refrained himself, and said, Set on bread. \bibleverse{32}
And they set on for him by himself, and for them by themselves, and for
the Egyptians, which did eat with him, by themselves: because the
Egyptians might not eat bread with the Hebrews; for that is an
abomination unto the Egyptians. \bibleverse{33} And they sat before him,
the firstborn according to his birthright, and the youngest according to
his youth: and the men marvelled one at another. \bibleverse{34} And he
took and sent messes unto them from before him: but Benjamin's mess was
five times so much as any of theirs. And they drank, and were merry with
him.\footnote{\textbf{43:34} were\ldots: Heb. drank largely}
\hypertarget{section-43}{%
\section{44}\label{section-43}}
\bibleverse{1} And he commanded the steward of his house, saying, Fill
the men's sacks with food, as much as they can carry, and put every
man's money in his sack's mouth.\footnote{\textbf{44:1} the
steward\ldots: Heb. him that was over his house} \bibleverse{2} And
put my cup, the silver cup, in the sack's mouth of the youngest, and his
corn money. And he did according to the word that Joseph had spoken.
\bibleverse{3} As soon as the morning was light, the men were sent away,
they and their asses. \bibleverse{4} And when they were gone out of the
city, and not yet far off, Joseph said unto his steward, Up, follow
after the men; and when thou dost overtake them, say unto them,
Wherefore have ye rewarded evil for good? \bibleverse{5} Is not this it
in which my lord drinketh, and whereby indeed he divineth? ye have done
evil in so doing.\footnote{\textbf{44:5} divineth: or, maketh trial?}
\bibleverse{6} And he overtook them, and he spake unto them these same
words. \bibleverse{7} And they said unto him, Wherefore saith my lord
these words? God forbid that thy servants should do according to this
thing: \bibleverse{8} Behold, the money, which we found in our sacks'
mouths, we brought again unto thee out of the land of Canaan: how then
should we steal out of thy lord's house silver or gold? \bibleverse{9}
With whomsoever of thy servants it be found, both let him die, and we
also will be my lord's bondmen. \bibleverse{10} And he said, Now also
let it be according unto your words: he with whom it is found shall be
my servant; and ye shall be blameless. \bibleverse{11} Then they
speedily took down every man his sack to the ground, and opened every
man his sack. \bibleverse{12} And he searched, and began at the eldest,
and left at the youngest: and the cup was found in Benjamin's sack.
\bibleverse{13} Then they rent their clothes, and laded every man his
ass, and returned to the city.
\bibleverse{14} And Judah and his brethren came to Joseph's house; for
he was yet there: and they fell before him on the ground.
\bibleverse{15} And Joseph said unto them, What deed is this that ye
have done? wot ye not that such a man as I can certainly
divine?\footnote{\textbf{44:15} divine: or, make trial?} \bibleverse{16}
And Judah said, What shall we say unto my lord? what shall we speak? or
how shall we clear ourselves? God hath found out the iniquity of thy
servants: behold, we are my lord's servants, both we, and he also with
whom the cup is found. \bibleverse{17} And he said, God forbid that I
should do so: but the man in whose hand the cup is found, he shall be my
servant; and as for you, get you up in peace unto your father.
\bibleverse{18} Then Judah came near unto him, and said, Oh my lord, let
thy servant, I pray thee, speak a word in my lord's ears, and let not
thine anger burn against thy servant: for thou art even as Pharaoh.
\bibleverse{19} My lord asked his servants, saying, Have ye a father, or
a brother? \bibleverse{20} And we said unto my lord, We have a father,
an old man, and a child of his old age, a little one; and his brother is
dead, and he alone is left of his mother, and his father loveth him.
\bibleverse{21} And thou saidst unto thy servants, Bring him down unto
me, that I may set mine eyes upon him. \bibleverse{22} And we said unto
my lord, The lad cannot leave his father: for if he should leave his
father, his father would die. \bibleverse{23} And thou saidst unto thy
servants, Except your youngest brother come down with you, ye shall see
my face no more. \bibleverse{24} And it came to pass when we came up
unto thy servant my father, we told him the words of my lord.
\bibleverse{25} And our father said, Go again, and buy us a little food.
\bibleverse{26} And we said, We cannot go down: if our youngest brother
be with us, then will we go down: for we may not see the man's face,
except our youngest brother be with us. \bibleverse{27} And thy servant
my father said unto us, Ye know that my wife bare me two sons:
\bibleverse{28} And the one went out from me, and I said, Surely he is
torn in pieces; and I saw him not since: \bibleverse{29} And if ye take
this also from me, and mischief befall him, ye shall bring down my gray
hairs with sorrow to the grave. \bibleverse{30} Now therefore when I
come to thy servant my father, and the lad be not with us; seeing that
his life is bound up in the lad's life; \bibleverse{31} It shall come to
pass, when he seeth that the lad is not with us, that he will die: and
thy servants shall bring down the gray hairs of thy servant our father
with sorrow to the grave. \bibleverse{32} For thy servant became surety
for the lad unto my father, saying, If I bring him not unto thee, then I
shall bear the blame to my father for ever. \bibleverse{33} Now
therefore, I pray thee, let thy servant abide instead of the lad a
bondman to my lord; and let the lad go up with his brethren.
\bibleverse{34} For how shall I go up to my father, and the lad be not
with me? lest peradventure I see the evil that shall come on my
father.\footnote{\textbf{44:34} come\ldots: Heb. find my father}
\hypertarget{section-44}{%
\section{45}\label{section-44}}
\bibleverse{1} Then Joseph could not refrain himself before all them
that stood by him; and he cried, Cause every man to go out from me. And
there stood no man with him, while Joseph made himself known unto his
brethren. \bibleverse{2} And he wept aloud: and the Egyptians and the
house of Pharaoh heard.\footnote{\textbf{45:2} wept\ldots: Heb. gave
forth his voice in weeping} \bibleverse{3} And Joseph said unto his
brethren, I am Joseph; doth my father yet live? And his brethren could
not answer him; for they were troubled at his presence.\footnote{\textbf{45:3}
troubled: or, terrified} \bibleverse{4} And Joseph said unto his
brethren, Come near to me, I pray you. And they came near. And he said,
I am Joseph your brother, whom ye sold into Egypt. \bibleverse{5} Now
therefore be not grieved, nor angry with yourselves, that ye sold me
hither: for God did send me before you to preserve life.\footnote{\textbf{45:5}
nor\ldots: Heb. neither let there be anger in your eyes}
\bibleverse{6} For these two years hath the famine been in the land: and
yet there are five years, in the which there shall neither be earing nor
harvest. \bibleverse{7} And God sent me before you to preserve you a
posterity in the earth, and to save your lives by a great
deliverance.\footnote{\textbf{45:7} to preserve\ldots: Heb. to put for
you a remnant} \bibleverse{8} So now it was not you that sent me
hither, but God: and he hath made me a father to Pharaoh, and lord of
all his house, and a ruler throughout all the land of Egypt.
\bibleverse{9} Haste ye, and go up to my father, and say unto him, Thus
saith thy son Joseph, God hath made me lord of all Egypt: come down unto
me, tarry not: \bibleverse{10} And thou shalt dwell in the land of
Goshen, and thou shalt be near unto me, thou, and thy children, and thy
children's children, and thy flocks, and thy herds, and all that thou
hast: \bibleverse{11} And there will I nourish thee; for yet there are
five years of famine; lest thou, and thy household, and all that thou
hast, come to poverty. \bibleverse{12} And, behold, your eyes see, and
the eyes of my brother Benjamin, that it is my mouth that speaketh unto
you. \bibleverse{13} And ye shall tell my father of all my glory in
Egypt, and of all that ye have seen; and ye shall haste and bring down
my father hither. \bibleverse{14} And he fell upon his brother
Benjamin's neck, and wept; and Benjamin wept upon his neck.
\bibleverse{15} Moreover he kissed all his brethren, and wept upon them:
and after that his brethren talked with him.
\bibleverse{16} And the fame thereof was heard in Pharaoh's house,
saying, Joseph's brethren are come: and it pleased Pharaoh well, and his
servants.\footnote{\textbf{45:16} pleased\ldots: Heb. was good in the
eyes of Pharaoh} \bibleverse{17} And Pharaoh said unto Joseph, Say
unto thy brethren, This do ye; lade your beasts, and go, get you unto
the land of Canaan; \bibleverse{18} And take your father and your
households, and come unto me: and I will give you the good of the land
of Egypt, and ye shall eat the fat of the land. \bibleverse{19} Now thou
art commanded, this do ye; take you wagons out of the land of Egypt for
your little ones, and for your wives, and bring your father, and come.
\bibleverse{20} Also regard not your stuff; for the good of all the land
of Egypt is yours.\footnote{\textbf{45:20} regard\ldots: Heb. let not
your eye spare, etc.} \bibleverse{21} And the children of Israel did
so: and Joseph gave them wagons, according to the commandment of
Pharaoh, and gave them provision for the way.\footnote{\textbf{45:21}
commandment: Heb. mouth} \bibleverse{22} To all of them he gave each
man changes of raiment; but to Benjamin he gave three hundred pieces of
silver, and five changes of raiment. \bibleverse{23} And to his father
he sent after this manner; ten asses laden with the good things of
Egypt, and ten she asses laden with corn and bread and meat for his
father by the way.\footnote{\textbf{45:23} laden\ldots: Heb. carrying}
\bibleverse{24} So he sent his brethren away, and they departed: and he
said unto them, See that ye fall not out by the way.
\bibleverse{25} And they went up out of Egypt, and came into the land of
Canaan unto Jacob their father, \bibleverse{26} And told him, saying,
Joseph is yet alive, and he is governor over all the land of Egypt. And
Jacob's heart fainted, for he believed them not.\footnote{\textbf{45:26}
Jacob's: Heb. his} \bibleverse{27} And they told him all the words of
Joseph, which he had said unto them: and when he saw the wagons which
Joseph had sent to carry him, the spirit of Jacob their father revived:
\bibleverse{28} And Israel said, It is enough; Joseph my son is yet
alive: I will go and see him before I die.
\hypertarget{section-45}{%
\section{46}\label{section-45}}
\bibleverse{1} And Israel took his journey with all that he had, and
came to Beer-sheba, and offered sacrifices unto the God of his father
Isaac. \bibleverse{2} And God spake unto Israel in the visions of the
night, and said, Jacob, Jacob. And he said, Here am I. \bibleverse{3}
And he said, I am God, the God of thy father: fear not to go down into
Egypt; for I will there make of thee a great nation: \bibleverse{4} I
will go down with thee into Egypt; and I will also surely bring thee up
again: and Joseph shall put his hand upon thine eyes.
\bibleverse{5} And Jacob rose up from Beer-sheba: and the sons of Israel
carried Jacob their father, and their little ones, and their wives, in
the wagons which Pharaoh had sent to carry him. \bibleverse{6} And they
took their cattle, and their goods, which they had gotten in the land of
Canaan, and came into Egypt, Jacob, and all his seed with him:
\bibleverse{7} His sons, and his sons' sons with him, his daughters, and
his sons' daughters, and all his seed brought he with him into Egypt.
\bibleverse{8} And these are the names of the children of Israel, which
came into Egypt, Jacob and his sons: Reuben, Jacob's firstborn.
\bibleverse{9} And the sons of Reuben; Hanoch, and Phallu, and Hezron,
and Carmi.
\bibleverse{10} And the sons of Simeon; Jemuel, and Jamin, and Ohad, and
Jachin, and Zohar, and Shaul the son of a Canaanitish
woman.\footnote{\textbf{46:10} Jemuel: or,
Nemuel}\footnote{\textbf{46:10} Jachin: or, Jarib}\footnote{\textbf{46:10}
Zohar: or, Zerah}
\bibleverse{11} And the sons of Levi; Gershon, Kohath, and
Merari.\footnote{\textbf{46:11} Gershon: or, Gershom}
\bibleverse{12} And the sons of Judah; Er, and Onan, and Shelah, and
Pharez, and Zerah: but Er and Onan died in the land of Canaan. And the
sons of Pharez were Hezron and Hamul.
\bibleverse{13} And the sons of Issachar; Tola, and Phuvah, and Job, and
Shimron.\footnote{\textbf{46:13} Phuvah, and Job: or, Puah, and Jashub}
\bibleverse{14} And the sons of Zebulun; Sered, and Elon, and Jahleel.
\bibleverse{15} These be the sons of Leah, which she bare unto Jacob in
Padan-aram, with his daughter Dinah: all the souls of his sons and his
daughters were thirty and three.
\bibleverse{16} And the sons of Gad; Ziphion, and Haggi, Shuni, and
Ezbon, Eri, and Arodi, and Areli.\footnote{\textbf{46:16} Ziphion: or,
Zephon}\footnote{\textbf{46:16} Ezbon: or, Ozni}\footnote{\textbf{46:16}
Arodi: or, Arod}
\bibleverse{17} And the sons of Asher; Jimnah, and Ishuah, and Isui, and
Beriah, and Serah their sister: and the sons of Beriah; Heber, and
Malchiel. \bibleverse{18} These are the sons of Zilpah, whom Laban gave
to Leah his daughter, and these she bare unto Jacob, even sixteen souls.
\bibleverse{19} The sons of Rachel Jacob's wife; Joseph, and Benjamin.
\bibleverse{20} And unto Joseph in the land of Egypt were born Manasseh
and Ephraim, which Asenath the daughter of Poti-pherah priest of On bare
unto him.\footnote{\textbf{46:20} priest: or, prince}
\bibleverse{21} And the sons of Benjamin were Belah, and Becher, and
Ashbel, Gera, and Naaman, Ehi, and Rosh, Muppim, and Huppim, and
Ard.\footnote{\textbf{46:21} Ehi: or,
Ahiram}\footnote{\textbf{46:21} Muppim: or, Shupham or,
Shuppim}\footnote{\textbf{46:21} Huppim: or, Hupham} \bibleverse{22}
These are the sons of Rachel, which were born to Jacob: all the souls
were fourteen.
\bibleverse{23} And the sons of Dan; Hushim.\footnote{\textbf{46:23}
Hushim: or, Shuham}
\bibleverse{24} And the sons of Naphtali; Jahzeel, and Guni, and Jezer,
and Shillem. \bibleverse{25} These are the sons of Bilhah, which Laban
gave unto Rachel his daughter, and she bare these unto Jacob: all the
souls were seven. \bibleverse{26} All the souls that came with Jacob
into Egypt, which came out of his loins, besides Jacob's sons' wives,
all the souls were threescore and six;\footnote{\textbf{46:26} loins:
Heb. thigh} \bibleverse{27} And the sons of Joseph, which were born
him in Egypt, were two souls: all the souls of the house of Jacob, which
came into Egypt, were threescore and ten.
\bibleverse{28} And he sent Judah before him unto Joseph, to direct his
face unto Goshen; and they came into the land of Goshen. \bibleverse{29}
And Joseph made ready his chariot, and went up to meet Israel his
father, to Goshen, and presented himself unto him; and he fell on his
neck, and wept on his neck a good while. \bibleverse{30} And Israel said
unto Joseph, Now let me die, since I have seen thy face, because thou
art yet alive. \bibleverse{31} And Joseph said unto his brethren, and
unto his father's house, I will go up, and shew Pharaoh, and say unto
him, My brethren, and my father's house, which were in the land of
Canaan, are come unto me; \bibleverse{32} And the men are shepherds, for
their trade hath been to feed cattle; and they have brought their
flocks, and their herds, and all that they have.\footnote{\textbf{46:32}
their trade\ldots: Heb. they are men of cattle} \bibleverse{33} And it
shall come to pass, when Pharaoh shall call you, and shall say, What is
your occupation? \bibleverse{34} That ye shall say, Thy servants' trade
hath been about cattle from our youth even until now, both we, and also
our fathers: that ye may dwell in the land of Goshen; for every shepherd
is an abomination unto the Egyptians.
\hypertarget{section-46}{%
\section{47}\label{section-46}}
\bibleverse{1} Then Joseph came and told Pharaoh, and said, My father
and my brethren, and their flocks, and their herds, and all that they
have, are come out of the land of Canaan; and, behold, they are in the
land of Goshen. \bibleverse{2} And he took some of his brethren, even
five men, and presented them unto Pharaoh. \bibleverse{3} And Pharaoh
said unto his brethren, What is your occupation? And they said unto
Pharaoh, Thy servants are shepherds, both we, and also our fathers.
\bibleverse{4} They said moreover unto Pharaoh, For to sojourn in the
land are we come; for thy servants have no pasture for their flocks; for
the famine is sore in the land of Canaan: now therefore, we pray thee,
let thy servants dwell in the land of Goshen. \bibleverse{5} And Pharaoh
spake unto Joseph, saying, Thy father and thy brethren are come unto
thee: \bibleverse{6} The land of Egypt is before thee; in the best of
the land make thy father and brethren to dwell; in the land of Goshen
let them dwell: and if thou knowest any men of activity among them, then
make them rulers over my cattle. \bibleverse{7} And Joseph brought in
Jacob his father, and set him before Pharaoh: and Jacob blessed Pharaoh.
\bibleverse{8} And Pharaoh said unto Jacob, How old art thou?\footnote{\textbf{47:8}
How\ldots: Heb. How many are the days of the years of thy life?}
\bibleverse{9} And Jacob said unto Pharaoh, The days of the years of my
pilgrimage are an hundred and thirty years: few and evil have the days
of the years of my life been, and have not attained unto the days of the
years of the life of my fathers in the days of their pilgrimage.
\bibleverse{10} And Jacob blessed Pharaoh, and went out from before
Pharaoh.
\bibleverse{11} And Joseph placed his father and his brethren, and gave
them a possession in the land of Egypt, in the best of the land, in the
land of Rameses, as Pharaoh had commanded. \bibleverse{12} And Joseph
nourished his father, and his brethren, and all his father's household,
with bread, according to their families.\footnote{\textbf{47:12}
according\ldots: or, as a little child is nourished: Heb. according to
the little ones}
\bibleverse{13} And there was no bread in all the land; for the famine
was very sore, so that the land of Egypt and all the land of Canaan
fainted by reason of the famine. \bibleverse{14} And Joseph gathered up
all the money that was found in the land of Egypt, and in the land of
Canaan, for the corn which they bought: and Joseph brought the money
into Pharaoh's house. \bibleverse{15} And when money failed in the land
of Egypt, and in the land of Canaan, all the Egyptians came unto Joseph,
and said, Give us bread: for why should we die in thy presence? for the
money faileth. \bibleverse{16} And Joseph said, Give your cattle; and I
will give you for your cattle, if money fail. \bibleverse{17} And they
brought their cattle unto Joseph: and Joseph gave them bread in exchange
for horses, and for the flocks, and for the cattle of the herds, and for
the asses: and he fed them with bread for all their cattle for that
year.\footnote{\textbf{47:17} fed\ldots: Heb. led them} \bibleverse{18}
When that year was ended, they came unto him the second year, and said
unto him, We will not hide it from my lord, how that our money is spent;
my lord also hath our herds of cattle; there is not ought left in the
sight of my lord, but our bodies, and our lands: \bibleverse{19}
Wherefore shall we die before thine eyes, both we and our land? buy us
and our land for bread, and we and our land will be servants unto
Pharaoh: and give us seed, that we may live, and not die, that the land
be not desolate. \bibleverse{20} And Joseph bought all the land of Egypt
for Pharaoh; for the Egyptians sold every man his field, because the
famine prevailed over them: so the land became Pharaoh's.
\bibleverse{21} And as for the people, he removed them to cities from
one end of the borders of Egypt even to the other end thereof.
\bibleverse{22} Only the land of the priests bought he not; for the
priests had a portion assigned them of Pharaoh, and did eat their
portion which Pharaoh gave them: wherefore they sold not their
lands.\footnote{\textbf{47:22} priests: or, princes} \bibleverse{23}
Then Joseph said unto the people, Behold, I have bought you this day and
your land for Pharaoh: lo, here is seed for you, and ye shall sow the
land. \bibleverse{24} And it shall come to pass in the increase, that ye
shall give the fifth part unto Pharaoh, and four parts shall be your
own, for seed of the field, and for your food, and for them of your
households, and for food for your little ones. \bibleverse{25} And they
said, Thou hast saved our lives: let us find grace in the sight of my
lord, and we will be Pharaoh's servants. \bibleverse{26} And Joseph made
it a law over the land of Egypt unto this day, that Pharaoh should have
the fifth part; except the land of the priests only, which became not
Pharaoh's.\footnote{\textbf{47:26} priests: or, princes}
\bibleverse{27} And Israel dwelt in the land of Egypt, in the country of
Goshen; and they had possessions therein, and grew, and multiplied
exceedingly. \bibleverse{28} And Jacob lived in the land of Egypt
seventeen years: so the whole age of Jacob was an hundred forty and
seven years.\footnote{\textbf{47:28} the whole\ldots: Heb. the days of
the years of his life} \bibleverse{29} And the time drew nigh that
Israel must die: and he called his son Joseph, and said unto him, If now
I have found grace in thy sight, put, I pray thee, thy hand under my
thigh, and deal kindly and truly with me; bury me not, I pray thee, in
Egypt: \bibleverse{30} But I will lie with my fathers, and thou shalt
carry me out of Egypt, and bury me in their buryingplace. And he said, I
will do as thou hast said. \bibleverse{31} And he said, Swear unto me.
And he sware unto him. And Israel bowed himself upon the bed's head.
\hypertarget{section-47}{%
\section{48}\label{section-47}}
\bibleverse{1} And it came to pass after these things, that one told
Joseph, Behold, thy father is sick: and he took with him his two sons,
Manasseh and Ephraim. \bibleverse{2} And one told Jacob, and said,
Behold, thy son Joseph cometh unto thee: and Israel strengthened
himself, and sat upon the bed. \bibleverse{3} And Jacob said unto
Joseph, God Almighty appeared unto me at Luz in the land of Canaan, and
blessed me, \bibleverse{4} And said unto me, Behold, I will make thee
fruitful, and multiply thee, and I will make of thee a multitude of
people; and will give this land to thy seed after thee for an
everlasting possession.
\bibleverse{5} And now thy two sons, Ephraim and Manasseh, which were
born unto thee in the land of Egypt before I came unto thee into Egypt,
are mine; as Reuben and Simeon, they shall be mine. \bibleverse{6} And
thy issue, which thou begettest after them, shall be thine, and shall be
called after the name of their brethren in their inheritance.
\bibleverse{7} And as for me, when I came from Padan, Rachel died by me
in the land of Canaan in the way, when yet there was but a little way to
come unto Ephrath: and I buried her there in the way of Ephrath; the
same is Beth-lehem.
\bibleverse{8} And Israel beheld Joseph's sons, and said, Who are these?
\bibleverse{9} And Joseph said unto his father, They are my sons, whom
God hath given me in this place. And he said, Bring them, I pray thee,
unto me, and I will bless them. \bibleverse{10} Now the eyes of Israel
were dim for age, so that he could not see. And he brought them near
unto him; and he kissed them, and embraced them.\footnote{\textbf{48:10}
dim: Heb. heavy} \bibleverse{11} And Israel said unto Joseph, I had
not thought to see thy face: and, lo, God hath shewed me also thy seed.
\bibleverse{12} And Joseph brought them out from between his knees, and
he bowed himself with his face to the earth. \bibleverse{13} And Joseph
took them both, Ephraim in his right hand toward Israel's left hand, and
Manasseh in his left hand toward Israel's right hand, and brought them
near unto him. \bibleverse{14} And Israel stretched out his right hand,
and laid it upon Ephraim's head, who was the younger, and his left hand
upon Manasseh's head, guiding his hands wittingly; for Manasseh was the
firstborn.
\bibleverse{15} And he blessed Joseph, and said, God, before whom my
fathers Abraham and Isaac did walk, the God which fed me all my life
long unto this day, \bibleverse{16} The Angel which redeemed me from all
evil, bless the lads; and let my name be named on them, and the name of
my fathers Abraham and Isaac; and let them grow into a multitude in the
midst of the earth.\footnote{\textbf{48:16} grow: Heb. as fishes do
increase} \bibleverse{17} And when Joseph saw that his father laid his
right hand upon the head of Ephraim, it displeased him: and he held up
his father's hand, to remove it from Ephraim's head unto Manasseh's
head.\footnote{\textbf{48:17} displeased\ldots: Heb. was evil in his eyes}
\bibleverse{18} And Joseph said unto his father, Not so, my father: for
this is the firstborn; put thy right hand upon his head. \bibleverse{19}
And his father refused, and said, I know it, my son, I know it: he also
shall become a people, and he also shall be great: but truly his younger
brother shall be greater than he, and his seed shall become a multitude
of nations.\footnote{\textbf{48:19} multitude: Heb. fulness}
\bibleverse{20} And he blessed them that day, saying, In thee shall
Israel bless, saying, God make thee as Ephraim and as Manasseh: and he
set Ephraim before Manasseh. \bibleverse{21} And Israel said unto
Joseph, Behold, I die: but God shall be with you, and bring you again
unto the land of your fathers. \bibleverse{22} Moreover I have given to
thee one portion above thy brethren, which I took out of the hand of the
Amorite with my sword and with my bow.
\hypertarget{section-48}{%
\section{49}\label{section-48}}
\bibleverse{1} And Jacob called unto his sons, and said, Gather
yourselves together, that I may tell you that which shall befall you in
the last days. \bibleverse{2} Gather yourselves together, and hear, ye
sons of Jacob; and hearken unto Israel your father.
\bibleverse{3} Reuben, thou art my firstborn, my might, and the
beginning of my strength, the excellency of dignity, and the excellency
of power: \bibleverse{4} Unstable as water, thou shalt not excel;
because thou wentest up to thy father's bed; then defiledst thou it: he
went up to my couch.\footnote{\textbf{49:4} thou shalt\ldots: Heb. do
not thou excel}\footnote{\textbf{49:4} he went\ldots: or, my couch is
gone}
\bibleverse{5} Simeon and Levi are brethren; instruments of cruelty are
in their habitations.\footnote{\textbf{49:5} instruments\ldots: or,
their swords are weapons of violence} \bibleverse{6} O my soul, come
not thou into their secret; unto their assembly, mine honour, be not
thou united: for in their anger they slew a man, and in their selfwill
they digged down a wall.\footnote{\textbf{49:6} digged\ldots: or,
houghed oxen} \bibleverse{7} Cursed be their anger, for it was fierce;
and their wrath, for it was cruel: I will divide them in Jacob, and
scatter them in Israel.
\bibleverse{8} Judah, thou art he whom thy brethren shall praise: thy
hand shall be in the neck of thine enemies; thy father's children shall
bow down before thee. \bibleverse{9} Judah is a lion's whelp: from the
prey, my son, thou art gone up: he stooped down, he couched as a lion,
and as an old lion; who shall rouse him up? \bibleverse{10} The sceptre
shall not depart from Judah, nor a lawgiver from between his feet, until
Shiloh come; and unto him shall the gathering of the people be.
\bibleverse{11} Binding his foal unto the vine, and his ass's colt unto
the choice vine; he washed his garments in wine, and his clothes in the
blood of grapes: \bibleverse{12} His eyes shall be red with wine, and
his teeth white with milk.
\bibleverse{13} Zebulun shall dwell at the haven of the sea; and he
shall be for an haven of ships; and his border shall be unto Zidon.
\bibleverse{14} Issachar is a strong ass couching down between two
burdens: \bibleverse{15} And he saw that rest was good, and the land
that it was pleasant; and bowed his shoulder to bear, and became a
servant unto tribute.
\bibleverse{16} Dan shall judge his people, as one of the tribes of
Israel. \bibleverse{17} Dan shall be a serpent by the way, an adder in
the path, that biteth the horse heels, so that his rider shall fall
backward.\footnote{\textbf{49:17} an adder: Heb. an arrow-snake}
\bibleverse{18} I have waited for thy salvation, O LORD.
\bibleverse{19} Gad, a troop shall overcome him: but he shall overcome
at the last.
\bibleverse{20} Out of Asher his bread shall be fat, and he shall yield
royal dainties.
\bibleverse{21} Naphtali is a hind let loose: he giveth goodly words.
\bibleverse{22} Joseph is a fruitful bough, even a fruitful bough by a
well; whose branches run over the wall:\footnote{\textbf{49:22}
branches: Heb. daughters} \bibleverse{23} The archers have sorely
grieved him, and shot at him, and hated him: \bibleverse{24} But his bow
abode in strength, and the arms of his hands were made strong by the
hands of the mighty God of Jacob; (from thence is the shepherd, the
stone of Israel:) \bibleverse{25} Even by the God of thy father, who
shall help thee; and by the Almighty, who shall bless thee with
blessings of heaven above, blessings of the deep that lieth under,
blessings of the breasts, and of the womb: \bibleverse{26} The blessings
of thy father have prevailed above the blessings of my progenitors unto
the utmost bound of the everlasting hills: they shall be on the head of
Joseph, and on the crown of the head of him that was separate from his
brethren.
\bibleverse{27} Benjamin shall ravin as a wolf: in the morning he shall
devour the prey, and at night he shall divide the spoil.
\bibleverse{28} All these are the twelve tribes of Israel: and this is
it that their father spake unto them, and blessed them; every one
according to his blessing he blessed them. \bibleverse{29} And he
charged them, and said unto them, I am to be gathered unto my people:
bury me with my fathers in the cave that is in the field of Ephron the
Hittite, \bibleverse{30} In the cave that is in the field of Machpelah,
which is before Mamre, in the land of Canaan, which Abraham bought with
the field of Ephron the Hittite for a possession of a buryingplace.
\bibleverse{31} There they buried Abraham and Sarah his wife; there they
buried Isaac and Rebekah his wife; and there I buried Leah.
\bibleverse{32} The purchase of the field and of the cave that is
therein was from the children of Heth. \bibleverse{33} And when Jacob
had made an end of commanding his sons, he gathered up his feet into the
bed, and yielded up the ghost, and was gathered unto his people.
\hypertarget{section-49}{%
\section{50}\label{section-49}}
\bibleverse{1} And Joseph fell upon his father's face, and wept upon
him, and kissed him. \bibleverse{2} And Joseph commanded his servants
the physicians to embalm his father: and the physicians embalmed Israel.
\bibleverse{3} And forty days were fulfilled for him; for so are
fulfilled the days of those which are embalmed: and the Egyptians
mourned for him threescore and ten days.\footnote{\textbf{50:3} mourned:
Heb. wept} \bibleverse{4} And when the days of his mourning were past,
Joseph spake unto the house of Pharaoh, saying, If now I have found
grace in your eyes, speak, I pray you, in the ears of Pharaoh, saying,
\bibleverse{5} My father made me swear, saying, Lo, I die: in my grave
which I have digged for me in the land of Canaan, there shalt thou bury
me. Now therefore let me go up, I pray thee, and bury my father, and I
will come again. \bibleverse{6} And Pharaoh said, Go up, and bury thy
father, according as he made thee swear.
\bibleverse{7} And Joseph went up to bury his father: and with him went
up all the servants of Pharaoh, the elders of his house, and all the
elders of the land of Egypt, \bibleverse{8} And all the house of Joseph,
and his brethren, and his father's house: only their little ones, and
their flocks, and their herds, they left in the land of Goshen.
\bibleverse{9} And there went up with him both chariots and horsemen:
and it was a very great company. \bibleverse{10} And they came to the
threshingfloor of Atad, which is beyond Jordan, and there they mourned
with a great and very sore lamentation: and he made a mourning for his
father seven days. \bibleverse{11} And when the inhabitants of the land,
the Canaanites, saw the mourning in the floor of Atad, they said, This
is a grievous mourning to the Egyptians: wherefore the name of it was
called Abel-mizraim, which is beyond Jordan.\footnote{\textbf{50:11}
Abel-mizraim: that is, The mourning of the Egyptians} \bibleverse{12}
And his sons did unto him according as he commanded them:
\bibleverse{13} For his sons carried him into the land of Canaan, and
buried him in the cave of the field of Machpelah, which Abraham bought
with the field for a possession of a buryingplace of Ephron the Hittite,
before Mamre.
\bibleverse{14} And Joseph returned into Egypt, he, and his brethren,
and all that went up with him to bury his father, after he had buried
his father.
\bibleverse{15} And when Joseph's brethren saw that their father was
dead, they said, Joseph will peradventure hate us, and will certainly
requite us all the evil which we did unto him. \bibleverse{16} And they
sent a messenger unto Joseph, saying, Thy father did command before he
died, saying,\footnote{\textbf{50:16} sent: Heb. charged}
\bibleverse{17} So shall ye say unto Joseph, Forgive, I pray thee now,
the trespass of thy brethren, and their sin; for they did unto thee
evil: and now, we pray thee, forgive the trespass of the servants of the
God of thy father. And Joseph wept when they spake unto him.
\bibleverse{18} And his brethren also went and fell down before his
face; and they said, Behold, we be thy servants. \bibleverse{19} And
Joseph said unto them, Fear not: for am I in the place of God?
\bibleverse{20} But as for you, ye thought evil against me; but God
meant it unto good, to bring to pass, as it is this day, to save much
people alive. \bibleverse{21} Now therefore fear ye not: I will nourish
you, and your little ones. And he comforted them, and spake kindly unto
them.\footnote{\textbf{50:21} kindly\ldots: Heb. to their hearts}
\bibleverse{22} And Joseph dwelt in Egypt, he, and his father's house:
and Joseph lived an hundred and ten years. \bibleverse{23} And Joseph
saw Ephraim's children of the third generation: the children also of
Machir the son of Manasseh were brought up upon Joseph's knees.\footnote{\textbf{50:23}
brought\ldots: Heb. born} \bibleverse{24} And Joseph said unto his
brethren, I die: and God will surely visit you, and bring you out of
this land unto the land which he sware to Abraham, to Isaac, and to
Jacob. \bibleverse{25} And Joseph took an oath of the children of
Israel, saying, God will surely visit you, and ye shall carry up my
bones from hence. \bibleverse{26} So Joseph died, being an hundred and
ten years old: and they embalmed him, and he was put in a coffin in
Egypt.
| {
"alphanum_fraction": 0.7615539291,
"avg_line_length": 60.6098577734,
"ext": "tex",
"hexsha": "8f38932039c02ebaa7dd8b5218151428e5c00afa",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "039ab9b18364ecade1d56695cb77c40ee62b1317",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "bibliadelpueblo/BibliaLibre",
"max_forks_repo_path": "Bibles/English.KingJames/out/tex/01-Genesis.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "039ab9b18364ecade1d56695cb77c40ee62b1317",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "bibliadelpueblo/BibliaLibre",
"max_issues_repo_path": "Bibles/English.KingJames/out/tex/01-Genesis.tex",
"max_line_length": 85,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "039ab9b18364ecade1d56695cb77c40ee62b1317",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "bibliadelpueblo/BibliaLibre",
"max_stars_repo_path": "Bibles/English.KingJames/out/tex/01-Genesis.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 71597,
"size": 247167
} |
\documentclass[DM,lsstdraft,authoryear,toc]{lsstdoc}
% lsstdoc documentation: https://lsst-texmf.lsst.io/lsstdoc.html
% Package imports go here.
\usepackage{longtable}
\usepackage{booktabs}
\usepackage[export]{adjustbox}
\usepackage{arydshln}
\usepackage{hyperref}
\usepackage{array}
\usepackage{xcolor}
\usepackage{caption}
% Local commands go here.
% Make \_ breakable: print an underscore, then permit a line break after it.
\renewcommand{\_}{%
\textunderscore\nobreak\hspace{0pt}%
}
% To add a short-form title:
% \title[Short title]{Title}
\title{Data Management Detailed Product Tree}
% Optional subtitle
% \setDocSubtitle{A subtitle}
\setcounter{tocdepth}{5}
\author{DMLT}
% Optional: name of the document's curator
\setDocCurator{G. Comoretto}
\setDocRef{DMTN-104}
\date{\today}
\setDocAbstract{%
This document collects in one place all DM products and their characterization.
}
% Change history defined here.
% Order: oldest first.
% Fields: VERSION, DATE, DESCRIPTION, OWNER NAME.
% See LPM-51 for version number policy.
\setDocChangeRecord{%
\addtohist{1}{YYYY-MM-DD}{Unreleased.}{Gabriele Comoretto}
}
\begin{document}
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
% Create the title page.
% Table of contents is added automatically with the "toc" class option.
\maketitle
%switch to \maketitle if you want the title page and toc
% ADD CONTENT HERE ... a file per section can be good for editing
\input{body}
\newpage
\appendix
% Include all the relevant bib files.
% https://lsst-texmf.lsst.io/lsstdoc.html#bibliographies
\section{References} \label{sec:bib}
\bibliography{lsst,lsst-dm,refs_ads,refs,books,local}
%Make sure lsst-texmf/bin/generateAcronyms.py is in your path
\section{Acronyms used in this document}\label{sec:acronyms}
\input{acronyms.tex}
\end{document}
| {
"alphanum_fraction": 0.7594579334,
"avg_line_length": 23.3026315789,
"ext": "tex",
"hexsha": "7945e4e93c09d28bdb057526386b7c5c46d060c1",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "02f074beb4cb9eb1e08dc5e5f8446f7df2e33a4d",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "lsst-dm/DMTN-104",
"max_forks_repo_path": "DMTN-104.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "02f074beb4cb9eb1e08dc5e5f8446f7df2e33a4d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "lsst-dm/DMTN-104",
"max_issues_repo_path": "DMTN-104.tex",
"max_line_length": 79,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "02f074beb4cb9eb1e08dc5e5f8446f7df2e33a4d",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "lsst-dm/DMTN-104",
"max_stars_repo_path": "DMTN-104.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 530,
"size": 1771
} |
\documentclass[a4paper,oneside]{article}
\usepackage[left=30mm,right=30mm,top=30mm,bottom=45mm]{geometry}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{pdfpages}
\usepackage{tikz}
\usepackage[sfdefault,light]{FiraSans}
\usepackage{FiraMono}
\usepackage{a3pages}
\usepackage{pdflscape}
\usepackage[%
bookmarksnumbered=true,
colorlinks=true,
linkcolor=cyan!50!blue,
citecolor=violet,
urlcolor=purple,
]{hyperref}
\newcommand\code[1]{\texttt{#1}}
\title{Including PDF Documents In \LaTeX}
\author{Raphael Frey\\[2mm]\small%
\href{https://github.com/alpenwasser/TeX/tree/master/include-pdfs}
{\nolinkurl{https://github.com/alpenwasser/TeX}}}
\begin{document}
\maketitle
\tableofcontents
\section{Overview}
Including PDF documents can be done in various ways; three of them are
presented here, along with their advantages and drawbacks.
\begin{itemize}
\item
They can be included as regular images with the \code{\textbackslash
includegraphics} command. This has the advantage of allowing
convenient scaling and integration in the text, and as such is
well-suited for PDF documents which are indeed actual pictures.
However, it does not conveniently allow including pages from a PDF
document as if they were pages of the main document, or scaling
them to truly use the full page.
\item
One may use the \code{pdfpages} package. This allows inserting pages
from PDF files more or less as if they were part of the main document.
\item
Lastly, one may use Ti\emph{k}Z to overlay the included PDF on the
page. Compared to the \code{pdfpages} package, this allows
annotations and keeps the existing headers and footers of the main
document visible (if the included PDF does not cover them). This may
or may not be desirable. However, this approach only works for single
pages.
\end{itemize}
% -------------------------------------------------------------------------- %
\clearpage
\section{Including as Image}
\label{sec:includegraphics}
% -------------------------------------------------------------------------- %
Pretty straightforward, see Figure \ref{fig:includegraphics}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.75\textwidth]{includes/a4.pdf}
\caption{%
This PDF file has been included with a \code{\textbackslash
includegraphics} command.}
\label{fig:includegraphics}
\end{figure}
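For reference, the figure above embeds the page with a single
\code{\textbackslash includegraphics} command:
\begin{verbatim}
\includegraphics[width=0.75\textwidth]{includes/a4.pdf}
\end{verbatim}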
% -------------------------------------------------------------------------- %
\clearpage
\section{pdfpages}
\label{sec:pdfpages}
% -------------------------------------------------------------------------- %
For the full range of available package options, of which
there are many, consult the manual, which is available at
\href{http://ctan.org/pkg/pdfpages}{\nolinkurl{http://ctan.org/pkg/pdfpages}}.
The following command includes the first page of a multi-page
document; the included page appears as the next page of this document.
\begin{verbatim}
\includepdf[pages={1}]{includes/lipsum.pdf}
\end{verbatim}
\includepdf[pages={1}]{includes/lipsum.pdf}
The next two pages include all pages of an external PDF in a 2 by 2 grid. A
frame is put around each page. The command used is:
\begin{verbatim}
\includepdf[frame,nup=2x2,pages=-]{includes/lipsum.pdf}
\end{verbatim}
Note that the main document's page numbering counts the included PDF
pages, even though the page numbers printed on them do not match (they
come from the included document, obviously).
\includepdf[frame,nup=2x2,pages=-]{includes/lipsum.pdf}
Including a landscape document in a portrait main document can be done in two
ways. Either tell \code{pdfpages} that the included page is landscape:
\begin{verbatim}
\includepdf[landscape]{includes/a4.pdf}
\end{verbatim}
This will result in the PDF page in the main document being rotated to fit the
included landscape page.
\includepdf[landscape]{includes/a4.pdf}
Alternatively, we can rotate the included page, which will result in the main
document's page still being in portrait mode:
\begin{verbatim}
\includepdf[angle=90]{includes/a4.pdf}
\end{verbatim}
\includepdf[angle=90]{includes/a4.pdf}
Lastly, we can also include landscape A3 documents like so:
\begin{verbatim}
\begin{a3pages}
\includepdf{includes/a3.pdf}
\end{a3pages}
\end{verbatim}
This uses the \code{a3pages} package by me, available at
\href{https://github.com/alpenwasser/TeX/tree/master/A3Pages}
{\nolinkurl{https://github.com/alpenwasser/TeX/tree/master/A3Pages}}
\begin{a3pages}
\includepdf{includes/a3.pdf}
\end{a3pages}
% -------------------------------------------------------------------------- %
\clearpage
\section{Ti\emph{k}Z}
\label{sec:tikz}
% -------------------------------------------------------------------------- %
Lastly, we're going to include PDF documents by using Ti\emph{k}Z. We put a
\code{clearpage} command before and after the \code{tikzpicture} environment
so that the PDF is put on a separate page.
Notice that the page number of the main document is still visible below the
included \code{tikzpicture}. The same would go for any headers and other page
decoration. This may be desirable or not; up to you.
\emph{Note:} The use of \code{pdfpagewidth} might cause problems with certain
\TeX{} engines. In those cases, simply replace it with a manually defined
length, like \code{297mm} or whatever is appropriate.
\begin{verbatim}
\clearpage
\begin{tikzpicture}[overlay,remember picture]
\node at (current page.center)
{\includegraphics[angle=90,width=0.95\pdfpagewidth]{includes/a4.pdf}};
\end{tikzpicture}
\clearpage
\end{verbatim}
\clearpage
\begin{tikzpicture}[overlay,remember picture]
\node at (current page.center)
{\includegraphics[angle=90,width=0.95\pdfpagewidth]{includes/a4.pdf}};
\end{tikzpicture}
\clearpage
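If \code{pdfpagewidth} does give your engine trouble, the substitution
mentioned above is straightforward. A minimal sketch, assuming an A4
portrait page (the length name \code{mypagewidth} is purely
illustrative):
\begin{verbatim}
\newlength{\mypagewidth}
\setlength{\mypagewidth}{210mm} % width of an A4 portrait page
\clearpage
\begin{tikzpicture}[overlay,remember picture]
    \node at (current page.center)
        {\includegraphics[angle=90,width=0.95\mypagewidth]{includes/a4.pdf}};
\end{tikzpicture}
\clearpage
\end{verbatim}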
The next page is produced with the help of the \code{pdflscape} package:
\begin{verbatim}
% In preamble:
\usepackage{pdflscape}
% In document:
\begin{landscape}
\begin{tikzpicture}[overlay,remember picture]
\node[xshift=178mm,yshift=-28mm] at (current page.center)
{\includegraphics[width=0.95\pdfpageheight]{includes/a4.pdf}};
\end{tikzpicture}
\end{landscape}
\end{verbatim}
Note the \code{xshift} and \code{yshift} arguments to the \code{node}. The
\code{pdflscape} package seems to interfere with the page nodes of
Ti\emph{k}Z. I haven't been able to find a combination of anchors and
nodes which correctly places the included PDF on the center of the page
automatically.
\begin{landscape}
\begin{tikzpicture}[overlay,remember picture]
\node[xshift=178mm,yshift=-28mm] at (current page.center)
{\includegraphics[width=0.95\pdfpageheight]{includes/a4.pdf}};
\end{tikzpicture}
\end{landscape}
Lastly, we shall include an A3 landscape document as above, but with the
Ti\emph{k}Z technique:
\begin{verbatim}
\begin{a3pages}
\begin{tikzpicture}[overlay,remember picture]
\node at (current page.center)
{\includegraphics[width=\pdfpagewidth]{includes/a3.pdf}};
\end{tikzpicture}
\end{a3pages}
\end{verbatim}
Again, note that the page number (\emph{19}) is still present.
\begin{a3pages}
\begin{tikzpicture}[overlay,remember picture]
\node at (current page.center)
{\includegraphics[width=\pdfpagewidth]{includes/a3.pdf}};
\end{tikzpicture}
\end{a3pages}
\end{document}
\documentclass[a4paper,twoside]{book}
\usepackage{graphicx}
\usepackage[pdftex]{hyperref}
\usepackage{fancyvrb}
\usepackage{makeidx}
\makeindex
\hypersetup{a4paper=true,
plainpages=false,
colorlinks=true,
pdfpagemode=UseOutlines, % FullScreen, UseNone, UseOutlines, UseThumbs
bookmarksopen=true,
bookmarksopenlevel=2,
linkcolor=blue,
urlcolor=blue,
breaklinks=true,
pdfstartview=FitH, % Fit, FitH, FitV, FitBH
pagebackref=true,
pdfhighlight=/I,
baseurl={http://www.rizauddin.com/},
pdftitle={Document processing with LyX and SGML},
pdfsubject={LaTeX},
pdfauthor={Copyright \textcopyright 2004-2018, Rizauddin Saian},
pdfkeywords={LaTeX}
}
\newenvironment{dedication}
{
\cleardoublepage
\thispagestyle{empty}
\vspace*{\stretch{1}}
\hfill\begin{minipage}[t]{0.66\textwidth}
\raggedright
}%
{
\end{minipage}
\vspace*{\stretch{3}}
\clearpage
}
%%%%%%%%%%%%%
\title{A Simple \LaTeX\ Tutorial}
\author{Rizauddin Saian \\
Faculty of Computer \& Mathematical Sciences \\
Universiti Teknologi MARA, Perlis}
%\date{July 11, 2018}
\renewcommand{\chaptername}{Tutorial}
\begin{document}
\maketitle
\pagenumbering{roman}
\frontmatter
% ... other frontmatter
\begin{dedication}
To my wife and daughters.
\end{dedication}
\tableofcontents
\chapter{Preface}
\section*{How to Contact Us}
Please address comments and questions concerning this book to:\\
\begin{quote}
Rizauddin Saian\\
Faculty of Computer \& Mathematical Sciences \\
Universiti Teknologi MARA, Perlis\\
Email: [email protected]
\end{quote}
\section*{Acknowledgements}
A big thank you to my family and all my students.
\listoffigures
\mainmatter
\pagenumbering{arabic}
\chapter{Your First \LaTeX{} Document}
\label{chap:firstLaTeX}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{img/tut1}
\caption{Your first \LaTeX document.}
\label{fig:tut1}
\end{figure}
\index{article} \index{a4paper} \index{12pt}
\begin{Verbatim}[frame=single]
\documentclass[a4paper,12pt]{article}
\begin{document}
Producing a simple \LaTeX{} document is very easy. You just
need to type in the text. To create a new paragraph,
just leave one or more blank lines.
\LaTeX{} will take care of the indentation of each paragraph.
By default, the first paragraph will not be indented in
a new section. The rest of the paragraphs will be indented
automatically. Take note that if no chapter or section is
defined, all paragraphs will be indented.
To produce a single quote, use the ` symbol (usually located
under the `Esc' key on the keyboard), and close it with
the ' symbol (usually located beside the `Enter' key
on the keyboard).
For double quotes, use those keys twice. For
example, ``the quoted text''.
\LaTeX{} can produce dashes of various lengths.
There are three types of dashes: `hyphens', `en-dashes'
and `em-dashes'. Hyphens are obtained in \LaTeX{} by
typing -, en-dashes by typing -- and em-dashes by
typing ---.
\end{document}
\end{Verbatim}
\noindent\textbf{Note:}
\begin{description}
\item[en-dashes]\index{en-dash} to specify a range of numbers, for example, ``on pages 12--20''
\item[em-dashes]\index{em-dash} used for punctuating, for example, ``\LaTeX{} is a good text
processing software---trust me.''
\end{description}
\chapter{Title Page}
\label{chap:titlePage}
\begin{figure}[h]
\centering
\includegraphics[width=0.75\linewidth]{img/tut2}
\caption{The title page.}
\label{fig:tut2}
\end{figure}
\index{title} \index{author} \index{article} \index{date} \index{today} \index{maketitle}
\begin{Verbatim}[frame=single]
\documentclass{article}
\title{My First \LaTeX{} Document}
\author{
Rizauddin bin Saian \thanks{[email protected]} \\
Faculty of Computer \& Mathematical Sciences \\
Universiti Teknologi MARA, Perlis
\and
Zeti Zuryani binti Mohd Zakuan \thanks{[email protected]} \\
Faculty of Law \\
Universiti Teknologi MARA Perlis
}
\date{\today}
\begin{document}
\maketitle
Put your contents here!
\end{document}
\end{Verbatim}
\chapter{List}
\label{chap:list}
\begin{figure}[h]
\centering
\includegraphics[width=0.6\linewidth]{img/tut3}
\caption{List.}
\label{fig:tut3}
\end{figure}
\index{enumerate} \index{item} \index{itemize}
\index{list}
\begin{Verbatim}[frame=single]
\documentclass{article}
\begin{document}
This is an ordered list:
\begin{enumerate}
\item goat
\item cat
\end{enumerate}
\noindent This is an unordered list:
\begin{itemize}
\item fuel
\item price
\end{itemize}
\noindent Nested ordered list:
\begin{enumerate}
\item This is an ordered list:
\begin{enumerate}
\item goat
\item cat
\end{enumerate}
\item This is an unordered list:
\begin{itemize}
\item fuel
\item price
\end{itemize}
\end{enumerate}
\noindent Nested unordered list:
\begin{itemize}
\item This is an ordered list:
\begin{enumerate}
\item goat
\item cat
\end{enumerate}
\item This is an unordered list:
\begin{itemize}
\item fuel
\item price
\end{itemize}
\end{itemize}
\begin{description}
\item[ordered list] numbered list
\item[unordered list] bulleted list
\item without any title
\end{description}
\end{document}
\end{Verbatim}
\chapter{Table}
\label{chap:table}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{img/tut4}
\caption{Table.}
\label{fig:tut4}
\end{figure}
\index{table} \index{center} \index{tabular}
\index{hline} \index{ref} \index{hline}
\begin{Verbatim}[frame=single]
\documentclass{article}
\begin{document}
This text is before the table.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|} \hline
Year & Cost (RM) \\
\hline \hline
2000 & 1.30 \\
2001 & 0.20 \\
2002 & 1.00 \\
2003 & 1.50 \\
2004 & 1.70 \\
2005 & 1.80 \\
2008 & 2.70 \\
\hline
\end{tabular}
\end{center}
\caption {Sample Data}
\label{tab:TheCost}
\end{table}
According to Table~{\ref{tab:TheCost}}, $\dots$.
\end{document}
\end{Verbatim}
\section{Spanning the table}
\label{sec:spanningTheTable}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{img/tut4-1}
\caption{Spanning the table.}
\label{fig:tut4-1}
\end{figure}
\index{multicolumn} \index{tabular} \index{table} \index{center} \index{tabular} \index{label} \index{caption} \index{hline} \index{cline}
\begin{Verbatim}[frame=single]
\documentclass{article}
\begin{document}
Table~{\ref{tab:ngdata}} shows how to have headings
that don't span all of the columns.
\begin{table}[ht!]
\begin{center}
\begin{tabular}{|c|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1.5cm}|} \hline
&\multicolumn{5}{|c|}{Times (seconds)} \\ \cline{2-6}
&\multicolumn{4}{|c|}{Ball one}& Ball two \\ \cline{2-5}
&\multicolumn{4}{|c|}{Technique} &\\ \cline{2-5}
$i$ & $g_A$ & $g_B$ & $d_A$ & $d_B$ & \\ \hline \hline
1 & & & & & \\
2 & & & & & \\
3 & & & & & \\
4 & & & & & \\
5 & & & & & \\
\hline \hline
average& & & & & \\
\hline
$g$ & & & & & \\
\hline
\end{tabular}
\end{center}
\caption {Timing data}
\label{tab:ngdata}
\end{table}
\end{document}
\end{Verbatim}
\section{The Table of Tables}
\label{sec:theTableOfTables}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{img/tut4-2}
\caption{The table of tables.}
\label{fig:tut4-2}
\end{figure}
\index{listoftables} \index{table} \index{center} \index{tabular} \index{label} \index{caption} \index{hline}
\begin{Verbatim}[frame=single]
\documentclass{article}
\begin{document}
\listoftables
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|} \hline
Year & Cost (RM) \\
\hline \hline
2000 & 1.30 \\
2001 & 0.20 \\
2002 & 1.00 \\
\hline
\end{tabular}
\end{center}
\caption {Sample Data 1}
\label{tab:TheCost}
\end{table}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|} \hline
Year & Cost (RM) \\
\hline \hline
2000 & 1.30 \\
\hline
\end{tabular}
\end{center}
\caption {Sample Data 2}
\label{tab:TheCost2}
\end{table}
\end{document}
\end{Verbatim}
\chapter{Figures}
\label{chap:figures}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{img/tut5}
\caption{Figures.}
\label{fig:tut5}
\end{figure}
\index{graphicx} \index{includegraphics} \index{textwidth} \index{caption} \index{label} \index{figure} \index{centering}
\begin{Verbatim}[frame=single]
\documentclass{article}
\usepackage{graphicx}
\begin{document}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6\textwidth]{ten}
\caption{RM10}
\label{fig:car}
\end{figure}
Figure~{\ref{fig:car}} shows a RM10 note.
\end{document}
\end{Verbatim}
\noindent You can also scale the picture using [scale=0.8]
instead of [width=0.6\textbackslash textwidth].
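For instance, a minimal variation of the example above:
\begin{Verbatim}[frame=single]
\includegraphics[scale=0.8]{ten}
\end{Verbatim}
This prints the image at 80\% of its natural size instead of fixing its width.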
\section{Table of Figures}
\label{sec:tableOfFigures}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{img/tut5-1}
\caption{Table of figures.}
\label{fig:tut5-1}
\end{figure}
\begin{Verbatim}[frame=single]
\documentclass{article}
\usepackage{graphicx}
\begin{document}
\listoffigures
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{ten}
\caption{RM10}
\label{fig:ten}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{five}
\caption{RM5}
\label{fig:five}
\end{figure}
\end{document}
\end{Verbatim}
\chapter{Document Styles}
\label{chap:documentStyles}
\section{Article document style}
\label{sec:articleDocumentStyle}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{img/tut6-1}
\caption{Article document style.}
\label{fig:tut6-1}
\end{figure}
\index{article} \index{section} \index{subsection}
\begin{Verbatim}[frame=single]
\documentclass{article}
\title{My First \LaTeX{} document}
\author{Your Name \thanks{Your Institution}}
\date{\today}
\begin{document}
\maketitle
\section{Introduction}
This is a section.
\subsection{Second level of section}
This is the second level of section.
\subsubsection{Third level of section}
This is the third level of section.
Is it necessary to have more subsection?
\end{document}
\end{Verbatim}
\section{Report document style}
\label{reportDocumentStyle}
\begin{figure}[ht!]
\centering
\begin{tabular}{c|c}
\includegraphics[width=0.3\linewidth]{img/tut6-2a} &
\includegraphics[width=0.3\linewidth]{img/tut6-2b} \\
\end{tabular}
\caption{Report document style.}
\label{fig:tut6-2a}
\end{figure}
\index{report} \index{chapter} \index{section} \index{subsection}
\begin{Verbatim}[frame=single]
\documentclass{report}
\title{My First \LaTeX{} Report}
\author{Your Name \thanks{Your Institution}}
\date{\today}
\begin{document}
\maketitle
\chapter{Introduction}
A chapter
\section{First level of section}
This is a section.
\subsection{Second level of section}
This is the second level of section.
\subsubsection{Third level of section}
This is the third level of section.
Is it necessary to have more subsection?
\end{document}
\end{Verbatim}
\section{Book document style}
\label{sec:bookDocumentStyle}
\begin{figure}[ht!]
\centering
\begin{tabular}{c|c}
\includegraphics[width=0.3\linewidth]{img/tut6-3a} &
\includegraphics[width=0.3\linewidth]{img/tut6-3b} \\
\hline \\
\includegraphics[width=0.3\linewidth]{img/tut6-3c}
\end{tabular}
\caption{Book document style.}
\label{fig:tut6-3a}
\end{figure}
\index{book} \index{part} \index{chapter} \index{section} \index{subsection} \index{subsubsection}
\begin{Verbatim}[frame=single]
\documentclass{book}
\title{My First \LaTeX{} Book}
\author{Your Name \thanks{Your Institution}}
\date{\today}
\begin{document}
\maketitle
\part{For The Beginners}
\chapter{Introduction}
\section{Introduction}
This is a section.
\subsection{Second level of section}
This is the second level of section.
\subsubsection{Third level of section}
This is the third level of section.
Is it necessary to have more subsection?
\end{document}
\end{Verbatim}
\chapter{Table of Contents}
\label{chap:tableOfContents}
To generate a table of contents, insert ``\textbackslash tableofcontents''
after the ``\textbackslash maketitle'' command.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{img/tut7}
\caption{The table of contents.}
\label{fig:tut7}
\end{figure}
\index{tableofcontents}
\begin{Verbatim}[frame=single]
\documentclass{article}
\title{My First \LaTeX{} document}
\author{Your Name \thanks{Your Institution}}
\date{\today}
\begin{document}
\maketitle
\tableofcontents
\section{Introduction}
This is a section.
\subsection{Second level of section}
This is the second level of section.
\subsubsection{Third level of section}
This is the third level of section.
Is it necessary to have more subsection?
\end{document}
\end{Verbatim}
\chapter{Abstract}
\label{chap:abstract}
\begin{figure}[h]
\centering
\includegraphics[width=0.75\linewidth]{img/tut8}
\caption{The abstract.}
\label{fig:tut8}
\end{figure}
\index{textbf} \index{abstract} \index{thanks} \index{today} \index{date}
\begin{Verbatim}[frame=single]
\documentclass{article}
\title{My First \LaTeX{} document}
\author{Your Name \thanks{Your Institution}}
\date{\today}
\begin{document}
\maketitle
\begin{abstract}
Note that the \textbf{abstract} environment in \LaTeX\
is defined for
reports and articles (but not for books) so
that it gets typeset
differently from other sections.
\end{abstract}
\tableofcontents
\section{Introduction}
This is a section.
\subsection{Second level of section}
This is the second level of section.
\subsubsection{Third level of section}
This is the third level of section.
Is it necessary to have more subsection?
\end{document}
\end{Verbatim}
\chapter{Bibliography}
\label{chap:bibliography}
\section{Plain format}
\label{sec:plainformat}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{img/tut9-1}
\caption{Plain format.}
\label{fig:tut9-1}
\end{figure}
\index{cite} \index{plain}
\begin{Verbatim}[frame=single]
\documentclass{article}
\begin{document}
According to \cite{lamport1994latex}, \dots.
Bla bla bla is bla bla bla \cite{griffiths1997learning,
goossens1997latex}.
Hey, I can add a page number too \cite[p.~32]{lamport1994latex}.
\bibliographystyle{plain}
\bibliography{ref}
\end{document}
\end{Verbatim}
\begin{Verbatim}[frame=single]
%ref.bib
%random articles
@book{lamport1994latex,
title={{\LaTeX}: A document preparation system: User's guide
and reference manual},
author={Lamport, Leslie},
year={1994},
publisher={Addison-Wesley}
}
@book{goossens1997latex,
title={The {\LaTeX} graphics companion: Illustrating documents
with {\TeX} and PostScript},
author={Goossens, Michel and Rahtz,
Sebastian PQ and Rahtz, Sebastian and Mittelbach, Frank},
volume={1},
year={1997},
publisher={Addison-Wesley Professional}
}
@book{griffiths1997learning,
title={Learning {\LaTeX}},
author={Griffiths, David F and Higham, Desmond J},
year={1997},
publisher={SIAM}
}
\end{Verbatim}
\section{APA Format}
\label{sec:APAformat}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{img/tut9-2}
\caption{APA format.}
\label{fig:tut9-2}
\end{figure}
\index{citeA} \index{cite} \index{apacite}
\begin{Verbatim}[frame=single]
\documentclass{article}
\usepackage{apacite}
\begin{document}
According to \citeA{lamport1994latex}, \dots.
Bla bla bla is bla bla bla \cite{griffiths1997learning,
goossens1997latex}.
Hey, I can add a page number too \cite[p.~32]{lamport1994latex}.
\bibliographystyle{apacite}
\bibliography{ref}
\end{document}
\end{Verbatim}
\section{IEEE Format}
\label{sec:IEEEformat}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{img/tut9-3}
\caption{IEEE Format.}
\label{fig:tut9-3}
\end{figure}
\index{cite} \index{IEEEtranN} \index{natbib}
\begin{Verbatim}[frame=single]
\documentclass{article}
\usepackage[numbers]{natbib}
\begin{document}
According to \cite{lamport1994latex}, \dots.
Bla bla bla is bla bla bla
\cite{griffiths1997learning, goossens1997latex}.
Can add a page number too \cite[p.~32]{lamport1994latex}.
\bibliographystyle{IEEEtranN}
\bibliography{ref}
\end{document}
\end{Verbatim}
%\clearpage
\cleardoublepage % for book class
\addcontentsline{toc}{chapter}{Index}
\printindex
\backmatter
\end{document}
\documentclass{scrartcl}
\usepackage{libertine}
\usepackage{minted}
\usepackage[includepaths={./, ../ly/}, relative]{lyluatex}
\begin{document}
\section*{Print LilyPond code}
\setluaoption{ly}{verbatim}{true}
\setluaoption{ly}{intertext}{The “intertext” option can be used to print some text between the code and the score}
The \texttt{verbatim} option causes the LilyPond code to be printed before the score. The current issue is that the contents of \texttt{\textbackslash lilypond} and of the \texttt{lilypond} environment are passed as one line.
\lilypond[intertext=]{ c d e d }
\hrule
\begin{lilypond}[intertext=This one will have another content]
{
c d e d
}
\end{lilypond}
\hrule
\medskip
\renewcommand{\lyIntertext}[1]{
\textcolor{magenta}{#1}
\bigskip
}
The contents of included \emph{files} already work properly:
\setluaoption{ly}{intertext}{The formatting of the intertext can be adjusted by renewing the command “lyIntertext”}
\lysetverbenv{\begin{minted}{TeX}}{\end{minted}}
\lilypondfile[intertext={Yet another content}]{Sphinx/Sphinx}
\setluaoption{ly}{verbatim}{false}
\setluaoption{ly}{intertext}{}
\end{document}
\section{Windows}
\subsection{Introduction}
This section describes the multiple-window interface of the \OCS{} editor. This design principle was chosen in order
to extend the flexibility of the editor, especially on multi-screen setups and in environments providing advanced
window-management features, for instance the multiple desktops commonly found on many open-source desktop environments.
However, a single large screen is enough to see the advantages of this concept.
The OpenCS window interface is easy to describe and understand. In fact, we decided to minimize the use of many window concepts
commonly applied in other applications. For instance, dialog windows are really hard to find in the \OCS. You are free to try,
though.
Because of this, and because we expect the user to be familiar with other windowed applications, this section is mostly
focused on practical ways of organizing work with the \OCS.
\subsection{Basics}
After starting \OCS{} and choosing content files to use, an editor window should show up. It probably does not look surprising:
there is a menubar at the top, and there is a~large empty area. That is it: a brand new \OCS{} window contains only a menubar
and a statusbar. In order to make it a little bit more useful you probably want to enable some panels\footnote{Also known as widgets.}.
You are free to do so; just try to explore the menubar.
You have probably found the way to enable and disable some interesting tables, but those will be described later. For now, let's
just focus on the windows themselves.
\paragraph{Creating new windows}
is easy! Just visit the View menu and use the ``New View'' item. Suddenly, out of the blue, a new window will show up. As you would expect,
it is also blank, and you are free to add any of the \OCS{} panels.
\paragraph{Closing opened window}
is also easy! Simply click the close button in the window decoration. We suspect that you knew that already, but it is better to be sure.
Closing the last \OCS{} window will also terminate the application session.
\paragraph{Multi-everything}
is the main foundation of the \OCS{} interface. You are free to create as many windows as you want, free to populate them with
any panels you may want, and free to move everything as you wish -- even if it makes no sense at all. If you just got a crazy idea and
wonder whether you can have one hundred \OCS{} windows showing panels of the same type, you most likely can.
The principle behind this design decision is easy to see for users of the editor made by \BS{}, but maybe not so clear for users who are
just about to begin their wonderful journey of modding.
\subsection{Advanced}
So why? Why is it designed in such a manner? The answer is frankly simple: because it is effective. When creating a mod, you often
have to work with just one table. For instance, you may be balancing weapon damage and other statistics. It makes sense
to have all the space for just that one table. More often, you are required to work with two and switch between them from time to time.
All major graphical environments commonly present in operating systems come with a window-switcher feature, that is, a keyboard shortcut
to change the active window. It is very effective and fast when you have only two windows, each holding only one table. Sometimes you have to work
with two at a time, and with a third from time to time. Here, you can have one window holding two tables, and a second holding just one.
OpenCS is designed to simply make sense and not slow its users down. It is as simple as possible (but not simpler), and uses one
flexible approach in all cases.
There is no point in digging deeper into the windows of \OCS. Let's explore panels, starting with tables.
%We should write some tips and tricks here.
"alphanum_fraction": 0.7768725361,
"avg_line_length": 70.462962963,
"ext": "tex",
"hexsha": "dd319e6c001497b92f015210f9c18dbccc7985fd",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5fdd264d0704e33b44b1ccf17ab4fb721f362e34",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "Bodillium/openmw",
"max_forks_repo_path": "manual/opencs/windows.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5fdd264d0704e33b44b1ccf17ab4fb721f362e34",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "Bodillium/openmw",
"max_issues_repo_path": "manual/opencs/windows.tex",
"max_line_length": 136,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5fdd264d0704e33b44b1ccf17ab4fb721f362e34",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "Bodillium/openmw",
"max_stars_repo_path": "manual/opencs/windows.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 821,
"size": 3805
} |
\chapter{Examples with Weights}
\label{chapt:exampleswithweights}
\section{Introduction}
This chapter, definitely work in progress, presents examples that
use weights. New examples will be added as they become available.
\section{Weighted Edit Distance for Spelling Correction}
The following script, based on an example by M\r{a}ns Huldén, illustrates how weighted \fst{}s
can be used to make a spelling corrector that returns a set of possible
corrections ranked by likelihood, or even just the best (most likely)
correction, for a misspelled word.
The first phenomenon that we need to model is the degree of dissimilarity between an
observed misspelled word and a correctly spelled possible correction. For example, the misspelled
word \emph{accomodation} is intuitively very similar to the correct word
\emph{accommodation}, differing in only one missing letter. Similarly,
\emph{panick}
is very similar to \emph{panic}, differing in having one extra letter. And
\emph{advertize} is similar to the correctly spelled \emph{advertise}, differing only
in one changed letter. Increasing in dissimilarity, \emph{camoflague} differs from
\emph{camouflage} in two letters, \emph{garentee} also differs from
\emph{guarantee} in
two letters, and \emph{bizness} differs from \emph{business} in three letters.
We can quantify the amount of dissimilarity between two words as the \emph{edit distance},
a count of the number of editing changes, or, perhaps a more sophisticated measure
of the amount of work, required to change one word into the
other. The changes could include adding a letter, deleting a letter, or changing
one letter into another letter.
Weighted Kleene \fst{}s have weights that are ``costs'' under the Tropical Semiring, and
the edit distance can be formalized in Kleene as the cost of
transducing one word into another, assigning a cost to deleting a
letter, a cost to adding a letter and a cost to changing one letter into
another letter. In the Tropical Semiring, the costs associated with multiple changes are simply
added together to yield the total cost. Mapping a letter to itself is free, involving no cost,
i.e., a neutral cost of 0.0. Of
course, with
enough editing changes, any word can be changed to any other word,
but intuitively we want to concentrate on the possible corrections that
involve the lowest cost, the fewest edits. These intuitions will be encoded in what is known
as an \emph{error model} or \emph{edit model}.
Another phenomenon that needs to be modeled is the fact that some words are
simply more common than others in any reasonably sized corpus. Looking at a misspelled word
in isolation,
if a set of possible corrections having the same edit distance includes a more common word and
some less
common words, the more common word is more likely to be the best correction. For
example, the misspelled \emph{frm} differs in only one letter from the possible
corrections \emph{farm}, \emph{firm} and \emph{from}, but \emph{from} is far more common than
the alternatives and is---considered in isolation---more likely to be the intended
word. These intuitions will be encoded in a unigram \emph{language model}.
We will build our weighted edit-modeling transducer from four main parts. The first
part is the identity mapping:
\begin{Verbatim}
$ident = . <0.0> ; // map any symbol to itself
\end{Verbatim}
\noindent
Recall that . (the dot, period or full stop) denotes any symbol or the transducer that maps any
symbol to itself. Such identity mappings are free, having no cost in
our spell-checking example, so we assign the identity mapping a weight of 0.0. Of
The second part is
\begin{Verbatim}
$changecost = <7.0> ;
$delete = .:"" $changecost ; // map a symbol
// to the empty string
\end{Verbatim}
\noindent which denotes the transducer that maps any symbol, denoted as . (dot),
downward to the empty string, denoted \texttt{""}. We arbitrarily assign the
weight of 7.0 to such a deletion, a value that we can adjust later.
The third part is
\begin{Verbatim}
$insert = "":. $changecost ; // map the empty string
// to a symbol
\end{Verbatim}
\noindent which denotes the transducer that maps the empty string downward to any
symbol, inserting a symbol where there was none before. Such mappings are technically known as
epentheses. We assign such
insertions the weight of 7.0.
Finally, the fourth part is
\begin{Verbatim}
$swap = ( .:. - . ) $changecost ;
\end{Verbatim}
\noindent
Recall that \texttt{.:.} denotes the transducer that maps any symbol to any
symbol, including itself. Then \texttt{(~.:.~- .~)} denotes the transducer
that maps any symbol to any other symbol, so \emph{not} including identity mappings. We
assign such letter swaps the cost of 7.0.
Armed with these definitions, our first draft of the model for transducing between a
misspelled word and a properly spelled word, the \emph{edit model}, is a
sequence of zero or more identity mappings, deletions, insertions or swaps,
in Kleene terms:
\begin{Verbatim}
$editModel = ( $ident | $delete | $insert | $swap )* ;
\end{Verbatim}
We also need a unigram \emph{language model} that encodes the language (i.e., the
set) of properly spelled words that will serve as possible corrections.
Initially, such a model can be constructed
as a simple unweighted union of words,
\begin{Verbatim}
$langModel = a | back | bake | bee | carrot | deer | eye ;
\end{Verbatim}
\noindent
extending it eventually to hundreds of thousands or even millions of words. For the purposes
of this example, we can simply search the web for posted wordlists, such as the
``Simpsons Frequency Dictionary,''\footnote{\url{http://pastebin.com/anKcMdvk}} which contains the 5000 most frequent words
found in open subtitles from \emph{The Simpsons} television series. If we download this list, we can, with a
little editing,\footnote{There are some apparently extraneous punctuation characters
in this list, and for Kleene regular expressions, some vertical bars (``pipes''), digits and punctuation
characters need to be removed or literalized using double quotes or backslashes.} convert it into a useful test model.
\begin{Verbatim}
$langModel = the | you | i | a | to | and | of | it ... ;
// the full list has 5000 words, some needing editing
// to compile correctly in Kleene
\end{Verbatim}
\noindent
Then if the misspelled word is \emph{frm}, the language of possible corrections,
\verb!$corr!, is computed as
\begin{Verbatim}
$corr = frm _o_ $editModel _o_ $langModel ;
// FstType: vector, Semiring: standard, 12992 states,
// 56693 arcs, 2330246 paths, Transducer, Weighted, Closed Sigma
\end{Verbatim}
\noindent
When I run this experiment, using the bare Simpsons wordlist, with my edits, Kleene
informs me that the result \texttt{\$corr} contains 2330246 paths, representing
the fact that, with enough edits, \emph{frm} can be transduced into any of the
5000 words in the language model, in multiple ways.
At this point, we can start to use the weights (costs) to focus on the more
likely corrections, given our edit model and language model. In the OpenFst
library, the function that prunes an \fsm{} to contain only the best paths (i.e.,
lowest cost paths, in the Tropical Semiring) is called ShortestPath. That operation
and terminology are exposed in the Kleene function \verb!$^shortestPath($fst, #num=1)!. For
example, we can limit \verb!$corr! to the best five corrections using
\begin{Verbatim}
$corr = $^shortestPath($^lowerside(frm
_o_
$editModel
_o_
$langModel),
5) ;
print $corr ;
fry : 7.0
from : 7.0
firm : 7.0
farm : 7.0
arm : 7.0
\end{Verbatim}
\noindent
Each of the five possible corrections has a weight of 7.0, indicating (in our
current model that assigns a weight of 7.0 to each change) that just one
edit was needed to change \emph{frm} into the correctly
spelled word.
If we ask ShortestPath to return just one result,
\begin{Verbatim}
$corr = $^shortestPath($^lowerside(frm
_o_
$editModel
_o_
$langModel),
1) ;
print $corr ;
arm : 7.0
\end{Verbatim}
\noindent
it arbitrarily gives us \emph{arm}, which is as close to \emph{frm} as the
other possibilities, given our edit model,
but intuitively is not the most likely correction.
What's missing in this experiment so far is a modeling of the fact that some
words, in the language model, occur more frequently than others. And when a
misspelled word is considered in isolation, possible corrections that are more
likely should have precedence over those that are less likely. We
know intuitively that \emph{from}, a common English function word, occurs more often than the
alternatives \emph{farm}, \emph{firm}, \emph{fry} and \emph{arm}, and we can confirm this by looking at actual frequency counts. Again, we will
model the likelihood of individual words in the language model using cost weights in the Tropical Semiring.
The Simpsons word list, in fact, gives us some additional information that allows
us to at least approximate the probability (and therefore the cost) of each
correct word. The top of the list looks like this:
\begin{Verbatim}
Rank Word Frequency
1 the (107946)
2 you (98068)
3 i (91502)
4 a (79241)
5 to (70362)
6 and (47916)
7 of (42175)
8 it (36497)
9 in (32503)
10 my (32254)
11 that (32083)
12 is (31533)
13 this (29902)
14 me (28208)
...
\end{Verbatim}
\noindent
The ``Frequency'' column shows the count of occurrences of each word from the
corpus. For example, P(the), the probability of a word being \emph{the}, i.e., the probability of
selecting a word from the corpus at random, and having it be \emph{the}, is
computed as the count of occurrences of \emph{the}, here 107946, divided by the
total number of tokens in the corpus, i.e.\@
\begin{Verbatim}
P(the) = 107946 / TotalTokens
\end{Verbatim}
\noindent
Disappointingly, the Simpsons wordlist doesn't tell us the TotalTokens,
so we are left to do some informed guessing. In the Brown Corpus, the
most common word in English,
\emph{the}, is said to account for almost 7\% of the tokens. For now, let's accept
this number as being valid for the Simpsons corpus as well. If 107946 instances of \emph{the}
represented 7\% of the corpus, then the total number of tokens would be a little over one and a half
million.
\begin{Verbatim}
107946 / TotalTokens = .07
TotalTokens = 1542085
\end{Verbatim}
\noindent
For our example, let's use the round number of 1,500,000. The probability of \emph{the}, and some
other common words, can then be estimated as
\begin{Verbatim}
P(the) = 107946 / 1,500,000 = 0.07196
P(you) = 98068 / 1,500,000 = 0.06538
P(i) = 91502 / 1,500,000 = 0.06100
P(a) = 79241 / 1,500,000 = 0.05283
\end{Verbatim}
\noindent
Note that as the frequency of occurrence decreases, the probability value also decreases.
As usual, if the probability of an event, such as the appearance of the word \emph{the} in
running English text, is p,
then the cost, C(the), is computed as $-$log(p).
\begin{Verbatim}
C(the) = -log(107946 / 1,500,000) = 2.63159
C(you) = -log(98068 / 1,500,000) = 2.72756
C(i) = -log(91502 / 1,500,000) = 2.79686
C(a) = -log(79241 / 1,500,000) = 2.94073
\end{Verbatim}
\noindent
Note that as the frequencies/probabilities decrease, the costs \emph{in}crease.
High probabilities correspond to low costs, and low probabilities correspond to
high costs.
These costs can now be included in an improved \emph{weighted} language model. If we tediously precompute
the costs, we can simply list the cost for each word, using the usual
angle-bracket syntax. Here is the Kleene syntax:
\begin{Verbatim}
$langModel =
the <2.63159>
| you <2.72756>
| i <2.79686>
| a <2.940731>
... ;
\end{Verbatim}
\noindent
Alternatively, we can let Kleene convert the probabilities into costs using the
\verb!#^prob2c()! function, ``probability to cost,'' which is pre-defined as
\begin{Verbatim}
#^prob2c(#prob) {
return -#^log(#prob) ;
}
\end{Verbatim}
\noindent
The definition of the improved, weighted language model might then look like this, with
\verb!#TotalTokens! defined as 1500000.0, a floating-point number, so that
when it is used in division, the result is also a float.
\begin{Verbatim}
#TotalTokens = 1500000.0 ;
$langModel =
the <#^prob2c(107946 / #TotalTokens)>
| you <#^prob2c(98068 / #TotalTokens)>
| i <#^prob2c(91502 / #TotalTokens)>
| a <#^prob2c(79241 / #TotalTokens)>
... ;
// and so on for all 5000 words in the model
\end{Verbatim}
\noindent
We could easily adjust the value of \verb!#TotalTokens! if we ever get more
precise information about the size of the corpus.
With the improved language model, now weighted, the best correction for \emph{frm} selected by
\verb!$^shortestPath()! is now \emph{from}, which intuitively seems right.
\begin{Verbatim}
$corr = $^shortestPath($^lowerside(frm
_o_
$editModel
_o_
$langModel)) ;
print $corr ;
from : 12.295898
\end{Verbatim}
\noindent
And we can see the \emph{n} best possible corrections by passing an optional numerical
argument to \verb!$^shortestPath()!, e.g.\@ 10:
\begin{Verbatim}
$corr = $^shortestPath($^lowerside(frm
_o_
$editModel
_o_
$langModel),
10) ;
print $corr ;
from : 12.295898
from : 15.744141
arm : 15.818359
farm : 16.557617
fry : 17.483398
firm : 17.483398
for : 18.069336
i'm : 18.161133
are : 18.52832
him : 19.44336
\end{Verbatim}
\noindent
In the Tropical Semiring, as in golf, the lower scores are the better scores,
and \verb!$^shortestPath()! chooses the path(s) with the lowest cost(s).
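As a quick sanity check (the decomposition below is our own, not Kleene
output): in the Tropical Semiring, the weight of a path through the
composition is the sum of the weights along it, so if the best path from
\emph{frm} to \emph{from} uses a single insertion, its score is the edit cost
plus the unigram cost of the word:
\begin{displaymath}
C(\textit{frm} \rightarrow \textit{from}) =
\underbrace{7.0}_{\mbox{\scriptsize one insertion}} +
\underbrace{-\log P(\textit{from})}_{\approx\, 5.2959}
\approx 12.2959 ,
\end{displaymath}
which matches the printed weight 12.295898 and implies that
$P(\textit{from}) \approx e^{-5.2959} \approx 0.005$ in our language model.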
After the weighted language model is defined, the rest of the final script is shown below,
including the definition of the convenience
functions \verb!$^correct()!, which returns an \fsm{},
and \verb!^correctp()!, which simply prints out the results. Both functions have an
optional numerical second argument, default 1, which controls how many paths are
retained in the shortest-path result, and an optional numerical third argument,
default 7.0, which assigns the
weight for each editing change.
\begin{Verbatim}
$ident = . ; // identity mapping, no change
// Three kinds of change
$delete = .:"" ; // map any symbol to the empty string
$insert = "":. ; // map the empty string to any symbol
$swap = .:. - . ; // map one symbol to another symbol
// (not to itself)
// Higher change weights (costs) make changes more costly,
// favoring corrections that look as much as possible
// like the misspelled word. Lower change weights will
// tend to favor "corrections" that are very high-frequency
// words, no matter how high the edit distance between
// the observed misspelled word and the proposed corrections.
$^editModel(#changecost) {
return ( $ident <0.0> |
$delete <#changecost> |
$insert <#changecost> |
$swap <#changecost>
)* ;
}
$^correct($word, #num=1, #changecost = 7.0) {
return $^shortestPath($^lowerside($word
_o_
$^editModel(#changecost)
_o_
$langModel),
#num) ;
}
^correctp($word, #num=1, #changecost=7.0) {
print "\n" ;
print $^toString(#num) " correction(s) for " $word ;
print $^shortestPath($^lowerside($word
_o_
$^editModel(#changecost)
_o_
$langModel),
#num) ;
}
// end of script
\end{Verbatim}
Once the script is loaded, one can simply call
\begin{alltt}
print \verb!$^correct!(\emph{misspelledword}) ;
print \verb!$^correct!(\emph{misspelledword}, 5) ;
print \verb!$^correct!(\emph{misspelledword}, 5, 8.0) ;
\end{alltt}
\noindent
or just
\begin{alltt}
\verb!^correctp!(\emph{misspelledword}) ;
\verb!^correctp!(\emph{misspelledword}, 5) ;
\verb!^correctp!(\emph{misspelledword}, 5, 8.0) ;
\end{alltt}
\noindent
The full script, \verb!weightededitdistance.kl!, is available from the
download page at \url{www.kleene-lang.org}.
% !TEX root = ../../../proposal.tex
\label{sec:gpu}
\GPUTable
The most computationally expensive part of our general DROWN attack is breaking the 40-bit symmetric key. We wanted to find the platform that would have the best tradeoff of cost and speed for the attack, so we performed some preliminary experiments comparing performance of symmetric key breaking on CPUs, GPUs, and FPGAs. These experiments used a na\"{\i}ve version of the attack using the OpenSSL implementation of MD5 and RC2.
The CPU machine contained four Intel Xeon E7-4820 CPUs with a total of 32 cores (64 concurrent threads). The GPU system was equipped with a ZOTAC GeForce GTX TITAN and an Intel Xeon E5-1620 host CPU\@. The FPGA setup consisted of 64 Spartan-6 LX150 FPGAs.
We benchmarked the performance of the CPU and GPU implementations over a large corpus of randomly generated keys, and then extrapolated to the full attack.
For the FPGAs, we tested the functionality in simulation and estimated the actual runtime by theoretically filling the FPGA up to 90\% with the design, including communication.
Table~\ref{perf_comparison} compares the three platforms.
While the FPGA implementation was the fastest in our test setup, the speed-to-cost ratio of GPUs was the most promising. Therefore, we decided to focus on optimizing the attack on the GPU platform.
We developed several optimizations:
\paragraph{Generating key candidates on GPUs.} Our na\"{\i}ve implementation generated key candidates on the CPUs. For each \hashcomputation, a key candidate was transmitted to the GPU, and the GPU responded with the key validity. The bottleneck in this approach was the PCI-E bus. Even newer boards with PCI-E 3.0 or PCI-E 4.0 are too slow to handle the large amount of data required to keep the GPUs busy. We solved this problem by generating the key candidates directly on the GPUs.
\paragraph{Generating memory blocks of keys.}
Our hash computation kernel had to access different candidate keys from the GPU memory. Accessing global memory is typically a slow operation and we needed to keep memory access as minimal as possible. Ideally we would be able to access the candidate keys on a register level or from a constant memory block, which is almost as fast as a register. However, there are not enough registers or constant memory available to store all the key values.
We decided to divide each key value into two parts $k_H$ and $k_L$, where $|k_H|=1$ byte and $|k_L|=4$ bytes. We stored all possible $2^8$ $k_H$ values in the constant read-only memory, and all possible $2^{32}$ $k_L$ values in the global memory.
Next we used an in-kernel loop. We loaded the four bytes of $k_L$ from the slow global memory and stored them in registers. Inside the inner loop we iterated through the first byte $k_H$ by accessing the fast constant memory. The resulting key candidate was computed as $k=k_H||k_L$.
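As a back-of-the-envelope check (our own arithmetic, not from the implementation), the 40-bit keyspace factors exactly into these two parts:
\begin{displaymath}
2^{40} = 2^{8} \cdot 2^{32} \qquad (k_H\mbox{: constant memory},\ k_L\mbox{: global memory}).
\end{displaymath}
Only $2^{8} = 256$ one-byte values of $k_H$ have to fit into the small constant memory, and each slow global-memory load of $k_L$ is amortized over the 256 fast inner-loop iterations.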
\paragraph{Using 32-bit data types.}
Although modern GPUs support several data types ranging in size from 8 to 64 bits, many instructions are designed for 32-bit data types. This fits the design of MD5 perfectly, because it uses 32-bit data types. RC2, however, uses both 8-bit and 16-bit data types, which are not suitable for 32-bit instruction sets. This forced us to rewrite the original RC2 algorithm to use 32-bit instructions.
% to prevent casting from different datatypes, which would typically involve an additional bitfield extraction call.
\paragraph{Avoiding loop branches.} Our kernel has to concatenate several inputs to generate the \texttt{server\_write\_key} needed for the encryption as described in Section~\ref{sec:ssl2}. Using loops to move this data generates branches because there is always an if() inside a for() loop. To avoid these branches, which always slow down a GPU implementation, we manually shifted the input bytes into the 32-bit registers for MD5. This was possible since the \hashcomputation inputs,
$(mk_{clear} || mk_{secret} || ``0" || r_c || r_s)$,
have constant length.
\paragraph{Optimizing MD5 computation.} Our MD5 inputs have known input length and block structure, allowing us to use the so-called zero-based optimizations. Given the known input length (49 bytes) and the fact that MD5 uses zero padding, in our case the MD5 input block included four 0x00 bytes. These \hex{00} bytes are read four times per MD5 computation which allowed us to drop in total 16 ADD operations per MD5 computation. In addition, we applied the Initial-step optimizations used in the Hashcat implementation~\cite{hashcat-talk}.
\paragraph{Skipping the second encryption block.} The input of the brute-force computation is a 16-byte client challenge $r_c$ and the resulting ciphertext from the \texttt{ServerVerify} message, which is computed with an RC2 cipher. As RC2 is an 8-byte block cipher, the RC2 input is split into two blocks and two RC2 encryptions are performed. In our verification algorithm, we skipped the second decryption step as soon as we saw that a key candidate did not decrypt the first plaintext block correctly. This resulted in a speedup of about a factor of 1.5.
\paragraph{RC2 permutation table in constant memory.}
The RC2 algorithm uses a 256-byte permutation table which is constant for all RC2 computations. Hence, this table is a good candidate to be put into the constant memory, which is nearly as fast as registers and makes it easy to address the table elements. When finally using the values, we copied them into the even faster shared memory. Although this copy operation has to be repeated, it still led to a speedup of approximately a factor of 2.
\paragraph{RC2 key setup without keysize checks.}
The key used for RC2 encryption is generated using MD5, thus the key size is always 128 bits. Therefore, we do not have to check for the input key size, and can simply skip the size verification branch completely.
\documentclass[./butidigress.tex]{subfiles}
\begin{document}
\chapter{Introduction}\label{chap:intro}
\epi{I have but one lamp by which my feet are guided, and that is the lamp of experience. I know of no way of judging the future but by the past.}{\attrib{Patrick Henry}{Speech at the Virginia Convention}{1775}}
\newpage
\setcounter{footnote}{0}
% Why am I writing this?
\lettrine{T}{his} document is an attempt to lay out the personal philosophy that I have developed over the course of my life so far.
While realizing that I have only lived 23 years, quite a short time in the scheme of things, I believe that documenting my thoughts will serve me to reflect on from whence I came and to meditate on where I am going.
I intend this to be some manner of living document, updated over time with my new insights.
This introduction exists to define some terms and as exposition regarding the development of my philosophy.
We start our journey with a discussion of ethics vs.\ morals and follow that up with a (mostly) chronological account of my journey to this point.
There'll be some housekeeping type stuff after that, like a discussion on this thing's title and a breakdown of all the different kinds of margin notes.
Just before getting into the meat of this thing, we'll do the classic textbook breakdown of each chapter.
\unnumsection{Ethics}\label{sec:ethics}
Personal philosophy is a bit abstract; I would, to a certain degree, prefer the term \emph{ethical code}.
What does that mean exactly?
I assume that we all have a passing familiarity with the term ethics and the term morals, but little understanding on their relationship and my specific usage may be slightly opaque.
Therefore, I think a precise definition of the term ethical (or ethics) and the term morality (or morals) would be appropriate here, I am using a definition set forth in \book{Objectivity}, by Lorraine Daston and Peter Galison (2007).\autocite{objectivity}
\begin{description}
\item [ethics] normative codes of conduct that are bound up with a way of being in the world, an ethos in the sense of the habitual disposition of an individual or group
\item [morals] specific normative rules that may be upheld or transgressed and to which one may be held to account
\end{description}
I really doubt that these are very many people's working definitions of these terms.
It doesn't really matter in this case, because I'm just defining these terms to mean those definitions in this specific context.
Once you're done reading this (this is a sentence which will likely only ever be read by me) you can go back to whatever normal definition you have, but for now use these.
I have no real moral code (by the above definition), besides, like, the law,\ftnote{Which is, again by the above definition, a moral, and not an ethical, code.} I guess; I'm saying that I don't believe in God or religion, not that I don't care about morality.
I don't believe in an afterlife; my ethics are founded purely on choice, or they try to be, and I don't think there will be any reward for following them or a punishment for breaking them.
Removing incentive, I have found (along with many behavioral psychologists and those who realize the obvious), tends to make one less likely to do a given thing.
The question that comes to mind is: why continue to follow?
The answer is in two parts, neither of which are likely to mollify current skeptics.
First, this list is, at least in part, descriptive, it's how I have lived my life (which has worked out pretty well for me so far).
The second part is really a rejection of the question; living this way has made my life better, more fulfilling.
\unnumsection{Development}\label{sec:development}
\epi{Constant and frequent questioning is the first key to wisdom\ldots\ For through doubting we are led to inquire, and by inquiry we perceive the truth.}{\attrib{Peter Abelard}{Sic et Non}{\~{}1121}}
I started college pretty goddamn depressed; for about two years, I thought about my suicide more like a historical event than a potential future choice.
Missing out on what amounts to about two years of social development in such an important time is hard.
This document got started near the end of my \nth{5} year of college; I am now without suicidal thoughts (for all intents and purposes---I'm not gonna say I'm perfect\ftnote{Or that I never feel depressed or shitty, which is an important note, i.e., one's not a complete failure if one feels bad, one just needs to keep fighting.}).
I'm not sure when I got the idea to start writing.
For sure I got the idea sometime in the spring semester of my \nth{5} year at Northeastern, I just remember that I started writing just after getting the idea.
When I first decided to write this, I figured it might be beneficial or focusing to write down how I want to ideally live my life.
After talking to people about my task, I decided to modify my purpose.
The original goal remains: the \emph{codification of a lifestyle}; a new goal enters: a reflection of how I got from \emph{preordained death} to being a healthy(er) human being.
What this document has become for me is an accounting of how I survived, one might call it an existential \book{Robinson Crusoe} if one wanted to be full of oneself.
The tenets enumerated within these pages represent an attempt at codification of life changes I made in order to, in the most literal sense, save my own life.
They are a product of years of hard work; I introspected and I studied and I made friends and I didn't do it alone.
There is something, quite a lot actually, to be said for getting help if you need it; it may be hard, both on a stigma level and on a \say{social interactions are hard} level, but the reward can be really worth it.
Don't ever be too proud to get help, it's not even a matter of pride.
The stigmatization of mental health is dangerous and buying into it can be deadly.
Learning that there isn't anything wrong with me was groundbreaking and is to a large degree the reason that these guidelines exist.\ftnote{\href{https://www.hftd.org/}{it's okay not to be okay}}
When I say that \say{nothing's wrong with me,} I don't mean like, I'm perfect or that having a mental illness and just ignoring it is fine.
First, the phrase \say{\href{https://www.hftd.org/}{it's okay not to be okay}} refers to an organization called \textit{Hope for the Day,} which has a mission of reducing the stigma surrounding mental health.
So really, what it means is that I shouldn't feel lesser just because my brain doesn't function \say{normally,}\ftnote{Assuming a \say{ground state} can lead to some problematic stuff, but the word works for our purposes here.} my brain (and by extension and more importantly, me) isn't worse than everyone else's, it's (I'm) just different.
Every human being is unique.\foottodo{Kinda get to the next sentence (the transition one).}
Now that we know \emph{why} this got made, we'll embark on the journey of \emph{how} this got made.
\unnumsubsection{Narrative Interlude}\label{subsec:narrative}\margindate{5}{11}{18}
\epi{Morality is not properly the doctrine of how we may make ourselves happy, but how we may make ourselves worthy of happiness.}{\attrib{Immanuel Kant}{Critique of Practical Reason}{1788}}
This is the part where I'll go through some years of college and explain what happened in them.
(Also the part where I hope that my dumbass college years weren't just some derivative bullshit.\ftnote{Also the part where I hope I don't swear too much\ldots actually probably not the only part.})
A pretty constant fear I had through college was that I would be mistaken for just another frat bro, cause that has never been what I want to project to the world.
(And I get that I'm making a generalization about those kinds of people, but, c'mon, I'm still working through some stuff here.)
I attended, as previously probably mentioned, Northeastern University in Boston, Massachusetts.
It was not my first choice, or really even a destination that I had considered before I got in.
My reason for applying? My friend Henry texted me (or talked to me at water polo practice) and said (approximately), \say{Yo, apply to Northeastern, they don't need an essay.}
Northeastern was the first college that accepted me, the first of four, and they gave me some cash and seemed to actually want me.
(The others were state schools, two didn't give me a major that I wanted and I didn't like the other all that much when I visited.)
So I essentially went in completely blind and ended up staying for five years.\ftnote{The majority of people at Northeastern do five years b/c co-op---which is, by the way, a pretty awesome program (with a few caveats, for example, the complete and merciless destruction of school spirit and difficulty in maintaining friendships).}
Basically, you can split that ish into two parts: the first two years and the last three; the former were frankly pretty garbage, the latter, probably the best years of my life.
So we'll tackle the first two years; this is going to be \lips\ not that dope, but hopefully cathartic.
\hspace{2em}\vdots
Lol jk, we'll actually go over some of the people that I met while at Northeastern, just as a primer (and also for some emotional comments\ftnote{If anyone I went to Costa Rica with is reading this, it may feel familiar.}).
(Tryna omit last names here maybe?)
(And also, I feel like I'm gonna be pretty shitty at explaining why I love these people so much, but to be fair, I'm pretty drunk.
I do love them though.)
Damn, wrote that last paragraph when I was a lot drunker than I thought I was.\ftnote{(So I'll use this spot to talk about something that occurred to me while writing this. It's that books are, I get that I'm reusing a word here, atomic. As recollection holds, all books I have read, read like they were written in one long ass sitting. I can't recall a book that talked about, or referenced, the time that passed during the writing process, or the different states of mind of different writing sessions.)}
\entryskip
Sober Michael signing back on.
Expounding on one's friends without being gushy (not that there's anything wrong with that necessarily) or overly revelatory re: unexpressed feelings is obviously difficult.
I consider my core friend group to be the people with whom I went to Costa Rica in Spring 2018 plus one or two people.
The order in which I mention people is not intended to be meaningful; it's simply the order that made narrative sense.
Freshman year of college, within the first week or two of classes, I took the elevator down a couple floors to a party in a room below me; it was there that I would meet some lifelong (I assume) friends.
This room\ftnote{Technically it was a suite, but who's really keeping score?} was the residence of the first two people I'll mention, Brock and Vishal; Brock I knew from water polo and Vishal I knew from Brock.
Another teammate of mine, Will, introduced me to his roommate Milan; in them I found my future housemates and two more great friends.
\entryskip
\textbox{Alright, so full disclosure time.
Strong memories of this party, frankly, don't really exist; to be fair, it was five years ago and had there been underage drinking, it would have been heavy, had it happened.
So there may be some conflation of different parties in the same place.
Basically, Brock and Vishal's suite is like, a metaphor for the early days of school.
Like, first months.}
Here too, I met Averie, of whom I have an uncharacteristically vivid memory: her finding me at the party once every, like, fifteen minutes and making sure that I remembered her name.
So this is the part where I drop the whole party metaphor thing, because it's tiresome and it feels a bit disingenuous.
Instead I'll just run through the remaining people, starting with Carly.\ftnote{Shoutout, b/c you're probably the one reading this.}
Carly is a person about whom it is hard to say too much; a person with whom I actually can have an interesting philosophical conversation without feeling like a goddamn charlatan.\ftnote{But like I still do a bit \lips\ but that's my own thing.}\foottodo{Man, just kill yourself with this whole couple of paragraphs.}
Naty is probably the nicest human being I have had the pleasure to meet; again, a person with whom I can discuss so many topics, from engineering stuff to \tvshow{The Office} (see page~\pageref{chap:sincerity} for more on that).
Mary is, I believe (though I have had trouble expressing this in the past), impressive in a particular way.
There is no possible expression here that won't be reductive or condescending or patronizing.
But, her ability to like, sound and act like a normal human being while also being maybe the smartest person I know is just plain ridiculous.
The dynamic brother-sister duo of Dre and Bella are last, but a value-based ordinal position doesn't really make sense.\ftnote{As they say.}
All I'll say about Dre is that he's fascinating.
Talking movies with Bella is always a delight, and she is one of the few people I know (in real life) whose movie opinions and analysis I really respect.\ftnote{Although I swear to God she said that \movie{Three Billboards} was based on a true story.}
\entryskip
\ldots\ Getting back to point, starting with the first two years of school.
In reading about all those friends, one might be inclined to assume some pleasant things about early college, and yes, the first bit was good.
Got good grades, made new friends; but the second semester, the spring semester, was not as good.
\todobrak{insert college downturn and recovery here}
\unnumsection{Influences}\label{sec:influences}\margindate{5}{10}{18}
The previously written introduction to this section was an exposition on the key influences and insights David Foster Wallace lent me.
Using language and tone that seem chillingly crass looking back, I derided those I felt \say{misappropriated} his philosophy, those fucking douchebags.
The pedestal upon which I stood was, apparently, at an elevation high enough to asphyxiate any semblance of rational thought.
With the rise of the \#MeToo movement and the general social change around 2017--18, the subject of David Foster Wallace's deeds has come to occupy some space within the public consciousness once again.
I do not in any way, shape, or form want to sound like I begrudge any of the women---especially Mary Karr---coming forward and reporting sexual assault or other vile behavior from men; the following few paragraphs are not intended to read like I'm complaining that because people are speaking out I've been inconvenienced.
Having to rewrite is literally nothing compared to such personal trauma.
Ideally my explanation will sound less like an anti-woman screed and more like bargaining for my own morality.
You may recall in the disclaimer (page~\pageref{chap:disclaimer}) my mention of the fundamental connection between art and artist, that one cannot be interpreted without the other.
This follows from a personal definition of art, which is reality mediated through the human experience---or something to that effect.
What I'm trying to say is that a human who produces some work of art, independent of medium, has left in said work an imprint of their essence, a trace of their true, unconscious mind.
(Art is commonly seen as an expression of the unconscious mind.
Basically that's how unintentional symbolism can happen.)
Assuming this view on art is sound (and I'd imagine a great many people would have problems here), one inexorably comes to the conclusion that art has a morality, reflective of the artist's.
Essentially: one can---frankly one should---judge the quality/morality of art based on its creator.
Like, I won't ever again watch \movie{American Beauty} or a countless number of other movies (\movie{American Beauty} just always comes to mind).
(This line of thought could segue into a conversation about whether or not Weinstein produced films are okay, but that will be left for another time.\ftnote{Or I might get into it later on in this section\lips})
Part of my certainty stems from the atomicity of a particular piece of art.
No work of art is truly an island; collaboration is important and unavoidable, but most art has a specific, principal author.\foottodo{There should be more here on why my philosophy is okay, if only to allay my own fears}
Another influence for the items listed here is Albert Camus, specifically absurdism.
I don't embrace absurdism wholeheartedly; there were some strikingly problematic elements in \book{The Myth of Sisyphus}. What I do take from Camus' philosophy is the desperate, futile struggle against the void.
This fight, that unwinnable yet cosmically noble struggle against the unknowable, appeals to me.
Not all of my influences are now deceased well-known acclaimed authors; the next influence I'll talk about is actually a duo: Freddie Wong and Matthew Arnold.
(The acclaimed authors bit isn't intended as a slight, it was just funny, so shut the fuck up or something.)
Sincerity is one of the core virtues I'll be exploring later on, and my fascination with the concept stems in large part from these two.
I've been watching \textit{freddiew} videos, I assume, since I first went on YouTube; a lot of his early work is pretty ubiquitous on the World Wide Web.
Wong and Arnold also produced \tvshow{Video Game High School}, which I enjoyed immensely, and their Hulu series was a good spot of fun as well.
Their podcast, \podcast{Story Break}, is almost certainly my favorite podcast,\ftnote{Although I really like \podcast{Pod Save America}, that's kinda its own thing.} but their true contribution to my life comes in introducing me to sincerity.\ftnote{I don't mean sincerity the concept, obviously I know what that is. They pointed out sincerity in media, most memorably \movie{Speed Racer}.}
I'll not get too deep into it here, as there is a whole chapter later on dedicated to it, but from them I realized that sincerity was the reason I enjoyed certain works.
The first example of this is \movie{Speed Racer}, my favorite movie (well, tied for first\ftnote{With \movie{Short Term 12} which---you know what, we'll get into it later.}), but I also see it in their works.
One of my first influences, another group of Internet\ftnote{This is the wrong usage of the term, should technically be World Wide Web.} pioneers, Rooster Teeth, has guided me towards a path that I hope will involve more creativity than the normal engineering life.\margindate{5}{1}{18}
I don't remember the most recent time that I \emph{wasn't} a Rooster Teeth fan.
(Checking my profile, it says I joined in 2008; I feel like this may have been after I started watching.)
I've been through a whole mess of development with Red vs. Blue and Achievement Hunter and Funhaus as a backdrop.
Burnie Burns is one of my heroes; I've learned much from him about not being afraid to fail (a lesson I haven't fully assimilated).
Film Crit Hulk is the pseudonym of an Internet film critic, a man who is, in my estimation, one of the sharpest literary and pop-culture minds around.\foottodo{expand; he's thoughtful and emotional and just plain great (also I learned what semiotics was from him)}
\unnumsection{The Title}\label{sec:thetitle}\margindate{5}{2}{18}
Digression is one of the main features of my personal discourse.
It comes up a little less in casual conversation (mainly because I'm trying to end the conversation ASAP), but it is \emph{for sure} a key feature of my writing.
So get ready, cause this whole document is gonna be chock-full of random detours from the stated subjects.
\iffalse
\unnumsection{Margin Notes Notes}\margindate{6}{5}{18}
At this point, you may have seen the little multicolored notes that appear throughout the book.
The color and font style of each represents a different kind of note.
There are four types of note: commentary, todo, thought, and date.
\begin{itemize}
\item {\commstyle\textcolor{\commcolor}{A commentary note represents some personal notes on the section}}
\item {\todostyle\textcolor{\todocolor}{Todo notes are tasks I have yet to complete}}
\item {\thoughtstyle\textcolor{\thoughtcolor}{Thought notes contain thoughts, related or unrelated to the subject at hand, that I had while writing}}
\item {\textcolor{\datecolor}{Date notes contain the date/time I started the current section}}
\end{itemize}
\fi
\unnumsection{Obligatory Chapter Rundown}\label{sec:chapterrundown}\margindate{5}{10}{18}
As is standard in any textbook-type work\ftnote{I hope this book is less dry and more narrative-ish than most textbooks} I will now embark on an explanatory tour of each chapter, covering their subjects in brief and providing some useful nuggets.
The chapters in this book are not monolithic; they intertwine with and support each other.
Because of this interdependence, the chapters will pretty consistently reference others and even give points that may be relevant to other concepts.\foottodo{rewrite this section, it feels a bit forced}
The chapters, as I could decide no other appropriate order, are alphabetical (excluding \say{the} in a single chapter).\margindate{5}{14}{18}
As it so happens, our journey kicks off with maybe my most treasured trait.
Empathy (Chapter~\ref{chap:empathy}), the ability to understand the feelings of another person, is, I claim, the result of conscious choice; a choice that must be made relentlessly. The empathetic path is not easy, but it is rewarding.
Recognizing that a situation will not be changed is an important part of maturing and, honestly, is great for saving energy to focus on what really matters.
Immutability (Chapter~\ref{chap:immutable}) is the next covered topic and is a more down-to-Earth, everyday idea, especially compared to how I approach confidence in anything in \say{Knowledge}.
I also get into a corollary: effecting change when one can.
I consider, on a certain level, the pursuit of knowledge to be my life's work.
This particular obsession is covered in Chapter~\ref{chap:knowledge}.
I'll cover my ideas regarding certainty, or its impossibility, and my general quest for greater understanding.
This chapter is a bit of a catch-all for things I find interesting, but I try to keep it relevant.
Media, e.g., TV shows and movies, are important to my life and have done a great deal in shaping how I view the world and how I choose to act.
Sincerity (Chapter~\ref{chap:sincerity}) is an important quality to me, in people, but also in the media I consume.
I talk about artists (filmmakers and writers) who, and works of art which, exemplify the concept of sincerity.
The Void (Chapter~\ref{chap:thevoid}) refers to an extremely high level component of my philosophy, which can be viewed as an offshoot of absurdism.
\end{document} | {
"alphanum_fraction": 0.7847785129,
"avg_line_length": 97.9527896996,
"ext": "tex",
"hexsha": "1917acff0eaa23f9aa40284152ae93a6070b8744",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5c0b2a25bb7dcc0c5d3035f6e69b9d0816ef9a96",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "mvwicky/ButIDigress",
"max_forks_repo_path": "introduction.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5c0b2a25bb7dcc0c5d3035f6e69b9d0816ef9a96",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "mvwicky/ButIDigress",
"max_issues_repo_path": "introduction.tex",
"max_line_length": 506,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5c0b2a25bb7dcc0c5d3035f6e69b9d0816ef9a96",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "mvwicky/ButIDigress",
"max_stars_repo_path": "introduction.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 5257,
"size": 22823
} |
\documentclass[main]{subfiles}
\begin{document}
\chapter{Test}
The test works if the heading starts with ``Kapitola 1''.
\end{document}
| {
"alphanum_fraction": 0.7463768116,
"avg_line_length": 19.7142857143,
"ext": "tex",
"hexsha": "82ee22f6b75da584d61edce47f145482f2a20e7f",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-02-08T15:57:30.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-09-25T10:43:49.000Z",
"max_forks_repo_head_hexsha": "6b9cc4cdc47c73faadc8c847e29c01ac372c4b19",
"max_forks_repo_licenses": [
"LPPL-1.3c"
],
"max_forks_repo_name": "mrpiggi/subfiles",
"max_forks_repo_path": "tests/issue5-classoptions/sub.tex",
"max_issues_count": 28,
"max_issues_repo_head_hexsha": "6b9cc4cdc47c73faadc8c847e29c01ac372c4b19",
"max_issues_repo_issues_event_max_datetime": "2021-07-14T19:02:27.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-11-05T19:10:32.000Z",
"max_issues_repo_licenses": [
"LPPL-1.3c"
],
"max_issues_repo_name": "mrpiggi/subfiles",
"max_issues_repo_path": "tests/issue5-classoptions/sub.tex",
"max_line_length": 58,
"max_stars_count": 12,
"max_stars_repo_head_hexsha": "6b9cc4cdc47c73faadc8c847e29c01ac372c4b19",
"max_stars_repo_licenses": [
"LPPL-1.3c"
],
"max_stars_repo_name": "mrpiggi/subfiles",
"max_stars_repo_path": "tests/issue5-classoptions/sub.tex",
"max_stars_repo_stars_event_max_datetime": "2021-09-13T17:24:33.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-02-02T05:29:35.000Z",
"num_tokens": 42,
"size": 138
} |
\chapter{Preliminaries}\label{chap:preliminaries}
\section{General Notation}
We'll refer to \(\reals\) for the reals, \(\utri\) represents
the unit triangle (or unit simplex) in \(\reals^2\):
\(\utri = \left\{(s, t) \mid 0 \leq s, t, s + t \leq 1\right\}\).
When dealing with sequences with multiple indices, e.g.
\(s_{m, n} = m + n\), we'll use bold symbols to represent
a multi-index: \(\bm{i} = (m, n)\). We'll use \(\left|\bm{i}\right|\) to
represent the sum of the components in a multi-index.
The binomial coefficient
\(\binom{n}{k}\) is equal to \(\frac{n!}{k! (n - k)!}\) and the trinomial
coefficient \(\binom{n}{i, j, k}\) is equal to \(\frac{n!}{i! j! k!}\)
(where \(i + j + k = n\)). The notation \(\delta_{ij}\) represents the
Kronecker delta, a value which is \(1\) when \(i = j\) and \(0\)
otherwise.
\section{Floating Point and Forward Error Analysis}
We assume all floating point operations obey
\begin{equation}
a \star b = \fl{a \circ b} = (a \circ b)(1 + \delta_1) =
(a \circ b) / (1 + \delta_2)
\end{equation}
where \(\star \in \left\{\oplus, \ominus, \otimes, \oslash\right\}\), \(\circ
\in \left\{+, -, \times, \div\right\}\) and \(\left|\delta_1\right|,
\left|\delta_2\right| \leq \mach\). The symbol \(\mach\) is the unit round-off
and \(\star\) is a floating point operation, e.g.
\(a \oplus b = \fl{a + b}\). (For IEEE-754 floating point double precision,
\(\mach = 2^{-53}\).) We denote the computed result of
\(\alpha \in \reals\) in floating point arithmetic by
\(\widehat{\alpha}\) or \(\fl{\alpha}\) and use \(\floats\) as the set of
all floating point numbers (see \cite{Higham2002} for more details).
Following \cite{Higham2002}, we will use the following classic properties in
error analysis.
\begin{enumerate}
\item If \(\delta_i \leq \mach\), \(\rho_i = \pm 1\), then
\(\prod_{i = 1}^n (1 + \delta_i)^{\rho_i} = 1 + \theta_n\),
\item \(\left|\theta_n\right| \leq \gamma_n \coloneqq
n \mach / (1 - n \mach)\),
\item \((1 + \theta_k)(1 + \theta_j) = 1 + \theta_{k + j}\),
\item \(\gamma_k + \gamma_j + \gamma_k \gamma_j \leq \gamma_{k + j}
\Longleftrightarrow (1 + \gamma_k)(1 + \gamma_j) \leq 1 + \gamma_{k + j}\),
\item \((1 + \mach)^j \leq 1 / (1 - j \mach) \Longleftrightarrow
(1 + \mach)^j - 1 \leq \gamma_j\).
\end{enumerate}
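As a small worked example (ours, not drawn from \cite{Higham2002}), consider the computed
sum \(\widehat{s} = (a \oplus b) \oplus c\). Applying the floating point model twice gives
\begin{equation}
\widehat{s} = \left[(a + b)(1 + \delta_1) + c\right](1 + \delta_2)
= (a + b)(1 + \theta_2) + c(1 + \theta_1),
\end{equation}
so that \(\left|\widehat{s} - (a + b + c)\right| \leq
\gamma_2 \left(\left|a + b\right| + \left|c\right|\right)\).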
\section{B\'{e}zier Curves}
A \emph{B\'{e}zier curve} is a mapping from the unit interval
that is determined by a set of control points
\(\left\{\bm{p}_j\right\}_{j = 0}^n \subset \reals^d\).
For a parameter \(s \in \left[0, 1\right]\), there is a corresponding
point on the curve:
\begin{equation}
b(s) = \sum_{j = 0}^n \binom{n}{j} (1 - s)^{n - j} s^j \bm{p}_j \in
\reals^d.
\end{equation}
This is a combination of the control points weighted by
each Bernstein basis function
\(B_{j, n}(s) = \binom{n}{j} (1 - s)^{n - j} s^j\).
Due to the binomial expansion
\(1 = (s + (1 - s))^n = \sum_{j = 0}^n B_{j, n}(s)\),
a Bernstein basis function is in
\(\left[0, 1\right]\) when \(s\) is as well. Due to this fact, the
curve must be contained in the convex hull of its control points.
\subsection{de Casteljau Algorithm}
Next, we recall\footnote{We have used slightly non-standard notation for the
terms produced by the de Casteljau algorithm: we start the superscript at
\(n\) and count down to \(0\) as is typically done when describing Horner's
algorithm. For example, we use \(b_j^{(n - 2)}\) instead of
\(b_j^{(2)}\).} the de Casteljau algorithm:
\begin{breakablealgorithm}
\caption{\textit{de Casteljau algorithm for polynomial evaluation.}}
\label{alg:de-casteljau}
\begin{algorithmic}
\Function{\(\mathtt{result} = \mathtt{DeCasteljau}\)}{$b, s$}
\State \(n = \texttt{length}(b) - 1\)
\State \(\widehat{r} = 1 \ominus s\)
\\
\For{\(j = 0, \ldots, n\)}
\State \(\widehat{b}_j^{(n)} = b_j\)
\EndFor
\\
\For{\(k = n - 1, \ldots, 0\)}
\For{\(j = 0, \ldots, k\)}
\State \(\widehat{b}_j^{(k)} = \left(
\widehat{r} \otimes \widehat{b}_j^{(k + 1)}\right) \oplus
\left(s \otimes \widehat{b}_{j + 1}^{(k + 1)}\right)\)
\EndFor
\EndFor
\\
\State \(\mathtt{result} = \widehat{b}_0^{(0)}\)
\EndFunction
\end{algorithmic}
\end{breakablealgorithm}
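For concreteness, here is a minimal Python sketch of Algorithm~\ref{alg:de-casteljau}
(an illustration only, not the implementation referenced in
Appendix~\ref{chap:appendix-algo}):
\begin{verbatim}
def de_casteljau(b, s):
    """Evaluate p(s) = sum_j b_j B_{j, n}(s) from Bernstein coefficients b."""
    n = len(b) - 1
    r = 1.0 - s                     # corresponds to 1 (-) s in the algorithm
    work = list(b)                  # work[j] holds b_j^{(k)} during the reduction
    for k in range(n - 1, -1, -1):
        for j in range(k + 1):
            work[j] = r * work[j] + s * work[j + 1]
    return work[0]

# Example: p(s) = (1 - s)^2 + 2 s^2 has Bernstein coefficients [1, 0, 2],
# so p(1/2) = 1/4 + 1/2 = 3/4 (exact here, since all operands are dyadic).
assert de_casteljau([1.0, 0.0, 2.0], 0.5) == 0.75
\end{verbatim}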
\begin{theorem}[\cite{Mainar1999}, Corollary 3.2]
If \(p(s) = \sum_{j = 0}^n b_j B_{j, n}(s)\) and \(\mathtt{DeCasteljau}(p, s)\)
is the value computed by the de Casteljau algorithm then\footnote{In the
original paper the factor on \(\widetilde{p}(s)\) is \(\gamma_{2n}\),
but the authors did not consider round-off when computing
\(1 \ominus s\).}
\begin{equation}
\left|p(s) - \mathtt{DeCasteljau}(p, s)\right| \leq \gamma_{3n}
\sum_{j = 0}^n \left|b_j\right| B_{j, n}(s).
\end{equation}
\end{theorem}
The relative condition number of the evaluation of \(p(s) = \sum_{j = 0}^n
b_j B_{j, n}(s)\) in Bernstein form used in this work is (see
\cite{Mainar1999, Farouki1987}):
\begin{equation}
\cond{p, s} = \frac{\widetilde{p}(s)}{\left|p(s)\right|},
\end{equation}
where
\(\widetilde{p}(s) \coloneqq \sum_{j = 0}^n \left|b_j\right| B_{j, n}(s)\).
To be able to express the algorithm in matrix form, we define
the vectors
\begin{equation}
b^{(k)} = \left[\begin{array}{c c c} b_0^{(k)} & \cdots &
b_k^{(k)}\end{array}\right]^T, \quad
\widehat{b}^{(k)} = \left[\begin{array}{c c c} \widehat{b}_0^{(k)} & \cdots &
\widehat{b}_k^{(k)}\end{array}\right]^T
\end{equation}
and the reduction matrices:
\begin{equation}
U_k = U_k(s) = \left[\begin{array}{c c c c c c}
1 - s & s & 0 & \cdots & \cdots & 0 \\
0 & 1 - s & s & \ddots & & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\
\vdots & & \ddots & \ddots & \ddots & 0 \\
0 & \cdots & \cdots & 0 & 1 - s & s
\end{array}\right] \in \reals^{k \times (k + 1)}.
\end{equation}
With this, we can express (\cite{Mainar1999}) the de Casteljau algorithm as
\begin{equation}\label{eq:matrix-de-casteljau}
b^{(k)} = U_{k + 1} b^{(k + 1)}
\Longrightarrow b^{(0)} = U_1 \cdots U_n b^{(n)}.
\end{equation}
In general, for a sequence \(v_0, \ldots, v_n\) we'll refer to \(v\)
as the vector containing all of the values:
\(v = \left[\begin{array}{c c c} v_0 & \cdots &
v_n\end{array}\right]^T.\)
\section{B\'{e}zier Triangles}
A \emph{B\'{e}zier triangle} (\cite[Chapter~17]{Farin2001}) is a
mapping from the unit triangle
\(\utri\) and is determined by a control net
\(\left\{\bm{p}_{i, j, k}\right\}_{i + j + k = n} \subset \reals^d\).
A B\'{e}zier triangle is a particular kind of B\'{e}zier surface, i.e. one
in which there are two Cartesian or three barycentric input parameters.
Often the term B\'{e}zier surface is used to refer to a tensor product or
rectangular patch.
For \((s, t) \in \utri\) we can define barycentric weights
\(\lambda_1 = 1 - s - t, \lambda_2 = s, \lambda_3 = t\) so that
\begin{equation}
1 = \left(\lambda_1 + \lambda_2 + \lambda_3\right)^n =
\sum_{\substack{i + j + k = n \\ i, j, k \geq 0}} \binom{n}{i, j, k}
\lambda_1^i \lambda_2^j \lambda_3^k.
\end{equation}
Using this we can similarly define a (triangular) Bernstein basis
\begin{equation}
B_{i, j, k}(s, t) = \binom{n}{i, j, k} (1 - s - t)^i s^j t^k
= \binom{n}{i, j, k} \lambda_1^i \lambda_2^j \lambda_3^k
\end{equation}
that is in \(\left[0, 1\right]\) when \((s, t)\) is in \(\utri\).
Using this, we define points on the B\'{e}zier triangle as a
convex combination of the control net:
\begin{equation}
b(s, t) = \sum_{i + j + k = n} \binom{n}{i, j, k}
\lambda_1^i \lambda_2^j \lambda_3^k
\bm{p}_{i, j, k} \in \reals^d.
\end{equation}
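This formula is directly computable from the control net; the following is a minimal
Python sketch (an illustration under our notation, not a reference implementation):
\begin{verbatim}
from math import factorial

def bezier_triangle(control_net, n, s, t):
    """control_net maps multi-indices (i, j, k) with i + j + k = n to points."""
    l1, l2, l3 = 1.0 - s - t, s, t            # barycentric weights
    point = None
    for (i, j, k), p in control_net.items():
        c = factorial(n) // (factorial(i) * factorial(j) * factorial(k))
        w = c * l1**i * l2**j * l3**k          # trinomial-weighted basis value
        term = [w * coord for coord in p]
        point = term if point is None else [a + b for a, b in zip(point, term)]
    return point

# Example: the degree 1 (linear) triangle with b(s, t) = (s, t).
net = {(1, 0, 0): (0.0, 0.0), (0, 1, 0): (1.0, 0.0), (0, 0, 1): (0.0, 1.0)}
assert bezier_triangle(net, 1, 0.25, 0.5) == [0.25, 0.5]
\end{verbatim}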
\begin{figure}
\includegraphics{../images/preliminaries/main_figure01.pdf}
\centering
\captionsetup{width=.75\linewidth}
\caption{Cubic B\'{e}zier triangle}
\label{fig:cubic-bezier-example}
\end{figure}
\noindent Rather than being defined by the control net, a B\'{e}zier triangle can
also be uniquely determined by the image of a standard lattice of
points in \(\utri\): \(b\left(j/n, k/n\right) = \bm{n}_{i, j, k}\);
we'll refer to these as \emph{standard nodes}.
Figure~\ref{fig:cubic-bezier-example} shows these standard nodes for
a cubic triangle in \(\reals^2\). To see the correspondence,
when \(n = 1\) the standard nodes \emph{are} the control net
\begin{equation}
b(s, t) = \lambda_1 \bm{n}_{1, 0, 0} +
\lambda_2 \bm{n}_{0, 1, 0} + \lambda_3 \bm{n}_{0, 0, 1}
\end{equation}
and when \(n = 2\)
\begin{multline}
b(s, t) = \lambda_1\left(2 \lambda_1 - 1\right) \bm{n}_{2, 0, 0} +
\lambda_2\left(2 \lambda_2 - 1\right) \bm{n}_{0, 2, 0} +
\lambda_3\left(2 \lambda_3 - 1\right) \bm{n}_{0, 0, 2} + \\
4 \lambda_1 \lambda_2 \bm{n}_{1, 1, 0} +
4 \lambda_2 \lambda_3 \bm{n}_{0, 1, 1} +
4 \lambda_3 \lambda_1 \bm{n}_{1, 0, 1}.
\end{multline}
However, it's worth noting that the transformation between
the control net and the standard nodes has a condition
number that grows exponentially with \(n\) (see \cite{Farouki1991}, which
is related but does not directly show this).
This may make working with
higher degree triangles prohibitively unstable.
A \emph{valid} B\'{e}zier triangle is one which is
diffeomorphic to \(\utri\), i.e. \(b(s, t)\) is bijective and has
an everywhere invertible Jacobian. We must also have the orientation
preserved, i.e. the Jacobian must have positive determinant. For example, in
Figure~\ref{fig:inverted-element}, the image of \(\utri\) under
the map \(b(s, t) = \left[\begin{array}{c c} (1 - s - t)^2 + s^2 & s^2 + t^2
\end{array}\right]^T\) is not valid because the Jacobian is zero along
the curve \(s^2 - st - t^2 - s + t = 0\) (the dashed line). Elements that
are not valid are called \emph{inverted} because they have regions with
``negative area''. For the example, the image \(b\left(\utri\right)\)
leaves the boundary determined by the edge curves: \(b(r, 0)\),
\(b(1 - r, r)\) and \(b(0, 1 - r)\) when \(r \in \left[0, 1\right]\).
This region outside the boundary is traced twice, once with
a positive Jacobian and once with a negative Jacobian.
\begin{figure}
\includegraphics{../images/preliminaries/inverted_element.pdf}
\centering
\captionsetup{width=.75\linewidth}
\caption{The B\'{e}zier triangle given by \(b(s, t) = \left[
(1 - s - t)^2 + s^2 \; \; s^2 + t^2 \right]^T\) produces an
inverted element. It traces the same region twice, once with
a positive Jacobian (the middle column) and once with a negative
Jacobian (the right column).}
\label{fig:inverted-element}
\end{figure}
\section{Curved Elements}\label{sec:curved-elements}
We define a curved mesh element \(\mathcal{T}\) of degree \(p\)
to be a B\'{e}zier triangle in \(\reals^2\) of the same degree.
We refer to the component functions of \(b(s, t)\) (the map that
gives \(\mathcal{T} = b\left(\utri\right)\)) as \(x(s, t)\) and \(y(s, t)\).
This fits a typical definition (\cite[Chapter~12]{FEM-ClaesJohnson})
of a curved element, but gives a special meaning to the mapping from
the reference triangle. Interpreting elements as B\'{e}zier triangles
has been used for Lagrangian methods where
mesh adaptivity is needed (e.g. \cite{CardozeMOP04}). Typically curved
elements only have one curved side (\cite{McLeod1972}) since they are used
to resolve geometric features of a boundary. See also
\cite{Zlmal1973, Zlmal1974}.
B\'{e}zier curves and triangles have a number of mathematical properties
(e.g. the convex hull property) that lead to elegant geometric
descriptions and algorithms.
Note that a B\'{e}zier triangle can be
determined from many different sources of data (for example the control net
or the standard nodes). The choice of this data may be changed to suit the
underlying physical problem without changing the actual mapping. Conversely,
the data can be fixed (e.g. as the control net) to avoid costly basis
conversion; once fixed, the equations of motion and other PDE terms can
be recast relative to the new basis (for an example, see \cite{Persson2009},
where the domain varies with time but the problem is reduced to
solving a transformed conservation law in a fixed reference configuration).
\subsection{Shape Functions}\label{subsec:shape-functions}
When defining shape functions (i.e. a basis with geometric meaning) on a
curved element there are (at least) two choices. When the degree of the
shape functions is the same as the degree of the function being
represented on the B\'{e}zier triangle,
we say the element \(\mathcal{T}\) is \emph{isoparametric}.
For the multi-index
\(\bm{i} = (i, j , k)\), we define \(\bm{u}_{\bm{i}} =
\left(j/n, k/n\right)\) and the corresponding standard node
\(\bm{n}_{\bm{i}} = b\left(\bm{u}_{\bm{i}}\right)\).
Given these points, two choices for shape functions present
themselves:
\begin{itemize}
\itemsep 0em
\item \emph{Pre-Image Basis}:
\(\phi_{\bm{j}}\left(\bm{n}_{\bm{i}}\right) =
\widehat{\phi}_{\bm{j}}\left(\bm{u}_{\bm{i}}\right) =
\widehat{\phi}_{\bm{j}}\left(b^{-1}\left(
\bm{n}_{\bm{i}}\right)\right)\)
where \(\widehat{\phi}_{\bm{j}}\) is a canonical basis function
on \(\utri\), i.e.
\(\widehat{\phi}_{\bm{j}}\) a degree \(p\) bivariate polynomial and
\(\widehat{\phi}_{\bm{j}}\left(\bm{u}_{\bm{i}}\right) =
\delta_{\bm{i} \bm{j}}\)
\item \emph{Global Coordinates Basis}:
\(\phi_{\bm{j}}\left(\bm{n}_{\bm{i}}\right) =
\delta_{\bm{i} \bm{j}}\), i.e. a canonical basis function
on the standard nodes \(\left\{\bm{n}_{\bm{i}}\right\}\).
\end{itemize}
\noindent For example, consider a quadratic B\'{e}zier triangle:
\begin{gather}
b(s, t) = \left[ \begin{array}{c c}
4 (s t + s + t) & 4 (s t + t + 1)
\end{array}\right]^T \\
\Longrightarrow
\left[ \begin{array}{c c c c c c}
\bm{n}_{2, 0, 0} &
\bm{n}_{1, 1, 0} &
\bm{n}_{0, 2, 0} &
\bm{n}_{1, 0, 1} &
\bm{n}_{0, 1, 1} &
\bm{n}_{0, 0, 2}
\end{array}\right] = \left[ \begin{array}{c c c c c c}
0 & 2 & 4 & 2 & 5 & 4 \\
4 & 4 & 4 & 6 & 7 & 8
\end{array}\right].
\end{gather}
In the \emph{Global Coordinates Basis}, we have
\begin{equation}
\phi^{G}_{0, 1, 1}(x, y) = \frac{(y - 4) (x - y + 4)}{6}.
\end{equation}
For the \emph{Pre-Image Basis}, we need the inverse
and the canonical basis
\begin{equation}
b^{-1}(x, y) = \left[ \begin{array}{c c}
\frac{x - y + 4}{4} & \frac{y - 4}{x - y + 8}
\end{array}\right] \quad \text{and} \quad
\widehat{\phi}_{0, 1, 1}(s, t) = 4 s t
\end{equation}
and together they give
\begin{equation}
\phi^{P}_{0, 1, 1}(x, y) = \frac{(y - 4) (x - y + 4)}{x - y + 8}.
\end{equation}
In general \(\phi_{\bm{j}}^P\) may not even be a rational bivariate
function; due to composition with \(b^{-1}\) we can only guarantee that
it is algebraic (i.e. it can be defined as the zero set of polynomials).
\subsection{Curved Polygons}\label{subsec:curved-polygons}
\begin{figure}
\includegraphics{../images/preliminaries/main_figure26.pdf}
\centering
\captionsetup{width=.75\linewidth}
\caption{Intersection of B\'{e}zier triangles form a curved polygon.}
\label{fig:bezier-triangle-intersect}
\end{figure}
When intersecting two curved elements, the resulting surface(s) will
be defined by the boundary, alternating between edges of each
element.
For example, in Figure~\ref{fig:bezier-triangle-intersect}, a
``curved quadrilateral'' is formed when two B\'{e}zier triangles
\(\mathcal{T}_0\) and \(\mathcal{T}_1\) are intersected.
A \emph{curved polygon} is defined by a collection of B\'{e}zier curves
in \(\reals^2\) that determine the boundary. In order to be
a valid polygon, none of the boundary curves may cross, the
ends of consecutive edge curves must meet and the curves must be right-hand
oriented. For our example in
Figure~\ref{fig:bezier-triangle-intersect}, the triangles
have boundaries formed by three B\'{e}zier curves:
\(\partial \mathcal{T}_0 = b_{0, 0} \cup b_{0, 1} \cup b_{0, 2}\) and
\(\partial \mathcal{T}_1 = b_{1, 0} \cup b_{1, 1} \cup b_{1, 2}\).
The intersection \(\mathcal{P}\) is defined by four boundary
curves: \(\partial \mathcal{P} =
C_1 \cup C_2 \cup C_3 \cup C_4\). Each boundary
curve is itself a B\'{e}zier curve\footnote{A specialization of a
B\'{e}zier curve \(b\left(\left[a_1, a_2\right]\right)\)
is also a B\'{e}zier curve.}:
\(C_1 = b_{0, 0}\left(\left[0, 1/8\right]\right)\),
\(C_2 = b_{1, 2}\left(\left[7/8, 1\right]\right)\),
\(C_3 = b_{1, 0}\left(\left[0, 1/7\right]\right)\) and
\(C_4 = b_{0, 2}\left(\left[6/7, 1\right]\right)\).
Though an intersection can be described in terms of the B\'{e}zier triangles,
the structure of the control net will be lost. The region will not, in general,
be describable by a mapping from a simple space like
\(\utri\).
\section{Error-Free Transformation}
An error-free transformation is a computational method where both
the computed result and the round-off error are returned. It
is considered ``free'' of error if the round-off can be represented
exactly as an element or elements of \(\floats\).
The error-free transformations used in this work are
the \texttt{TwoSum} algorithm by Knuth (\cite{Knuth1997}) and
\texttt{TwoProd} algorithm by Dekker (\cite{Dekker1971}, Section 5),
respectively.
\begin{theorem}[\cite{Ogita2005}, Theorem 3.4]\label{thm:eft}
For \(a, b \in \floats\) and \(P, \pi, S, \sigma \in \floats\),
\texttt{TwoSum} and \texttt{TwoProd} satisfy
\begin{alignat}{4}
\left[S, \sigma\right] &= \mathtt{TwoSum}(a, b), & \, S &= \fl{a + b},
S + \sigma &= a + b, \sigma &\leq \mach \left|S\right|,
& \, \sigma &\leq \mach \left|a + b\right| \\
\left[P, \pi\right] &= \mathtt{TwoProd}(a, b),
& \, P &= \fl{a \times b}, P + \pi &= a \times b,
\pi &\leq \mach \left|P\right|,
& \, \pi &\leq \mach \left|a \times b\right|.
\end{alignat}
The letters \(\sigma\) and \(\pi\) are used to indicate that the
errors came from sum and product, respectively. See
Appendix~\ref{chap:appendix-algo} for implementation details.
\end{theorem}
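For concreteness, here are minimal Python sketches of the two transformations
(illustrative only, assuming round-to-nearest IEEE-754 double precision arithmetic;
Appendix~\ref{chap:appendix-algo} has the implementations actually used):
\begin{verbatim}
def two_sum(a, b):
    """Knuth TwoSum: S = fl(a + b) and S + sigma = a + b exactly."""
    s = a + b
    bb = s - a
    sigma = (a - (s - bb)) + (b - bb)
    return s, sigma

def split(a):
    """Dekker splitting: a = hi + lo with each half fitting in 26 bits."""
    c = 134217729.0 * a             # the constant is 2**27 + 1
    hi = c - (c - a)
    return hi, a - hi

def two_prod(a, b):
    """Dekker TwoProd: P = fl(a * b) and P + pi = a * b exactly."""
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    pi = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, pi
\end{verbatim}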
| {
"alphanum_fraction": 0.6546814807,
"avg_line_length": 43.3680387409,
"ext": "tex",
"hexsha": "6ea6e93673262a6e524fadc6b2adbc8d31217d3f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "732c75b4258e6f41b2dafb2929f0e3dbd380239b",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "dhermes/phd-thesis",
"max_forks_repo_path": "doc/preliminaries.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "732c75b4258e6f41b2dafb2929f0e3dbd380239b",
"max_issues_repo_issues_event_max_datetime": "2019-11-16T16:43:00.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-08-21T05:57:22.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "dhermes/phd-thesis",
"max_issues_repo_path": "doc/preliminaries.tex",
"max_line_length": 79,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "732c75b4258e6f41b2dafb2929f0e3dbd380239b",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "dhermes/phd-thesis",
"max_stars_repo_path": "doc/preliminaries.tex",
"max_stars_repo_stars_event_max_datetime": "2020-12-13T01:38:19.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-08-24T15:36:28.000Z",
"num_tokens": 6405,
"size": 17911
} |
\documentclass{article}
\usepackage{minted}
\title{NGS Tutorial}
\usepackage{amsmath}
\usepackage{natbib}
\usepackage{mathpazo}
%\usepackage{enumerate}
\usepackage{soul}
\usepackage{cases}
\setlength{\parindent}{0cm}
\definecolor{bg}{rgb}{0.95,0.95,0.95}
\author{Roland Krause$^1$ \\[1em]Luxembourg Centre for Systems Biomedicine (LCSB),\\ Unversity of Luxembourg\\
\texttt{$^1$[email protected]}}
\usepackage{hyperref}
\begin{document}
\maketitle
\tableofcontents
\section{Introduction}
Exome sequencing is a cost-effective way of sequencing
that reduces the genome to its coding part by microarray hybridisation.
In this tutorial, we will map sequences using \verb+bwa+ and learn the usual steps for quality improvement.
In order to speed up the tutorial, only chromosome 22 is taken into account.
First you will need to extract the sequences from a given .bam file.
\subsection{Set up}
Create a directory \verb+ngs+ for all next-generation sequencing tutorials.
\begin{minted}{bash}
mkdir ngs
cd ngs
\end{minted}
All source data is kept in the directory \verb+/Users/roland.krause/Public/isb101/+.
For your convenience, create a variable holding the path to the resources.
\begin{minted}{bash}
RESOURCE="/Users/roland.krause/Public/isb101/"
\end{minted}
Note: Not all commands are given in full in this tutorial.
You might need to use the commands you have learned previously.
The programs \verb+samtools+ and \verb+bwa+ should be available from your path.
Question: How do you find out where a program is installed?
% which
\section{Extracting reads from an existing BAM file}
\subsection{Copy the BAM file to your local folder}
This file has already been processed. We use it as the source of our data.
The BAM file is called \verb+daughter.Improved.bam+
Create a {\em soft link} to the file.
\begin{minted}{bash}
ln -s $RESOURCE/daughter.Improved.bam .
\end{minted}
% $
Questions:
\begin{enumerate}
\item Why don't we copy the file?
\item Check the properties of the file using options of the \verb+ls+ command.
\item What happens if you would delete the link in your directory?
\item What happens if you delete the file in \verb|$RESOURCE| ?
\end{enumerate}
\subsection{Index the BAM file}
\begin{minted}{bash}
samtools index daughter.Improved.bam
\end{minted}
This will take a few seconds.
Questions
\begin{enumerate}
\item What did the command do?
\item What is an {\em index}?
\end{enumerate}
\subsection{Visualize alignments with samtools tview}
\begin{minted}{bash}
samtools tview \
daughter.Improved.bam \
human_g1k_v37_Ensembl_MT_66.fasta
\end{minted}
\subsection{Extract chromosome 22 from the example BAM}
Slice chromosome 22 and save a piece in SAM format
\begin{minted}{bash}
samtools view daughter.Improved.bam 22 \
> daughter.Improved.22.sam
\end{minted}
\subsection{Convert SAM to FASTQ using PICARD}
Create a soft link to the picard-tools directory (!) in \verb+$RESOURCE+ in your local \verb+ngs+ directory.
\begin{minted}{bash}
ln -s $RESOURCE/picard-tools/ picard-tools
\end{minted}
Then, run the converter as follows:
\begin{minted}{bash}
java -jar picard-tools/SamToFastq.jar \
I=daughter.Improved.22.sam \
F=daughter.22.1.fq \
F2=daughter.22.2.fq \
VALIDATION_STRINGENCY=SILENT
\end{minted}
Inspect the output files and recapitulate the FASTQ format.
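For reference, a FASTQ record spans four lines: the read identifier, the sequence, a separator, and the per-base quality string. The record below is a generic illustration, not taken from the tutorial data.
\begin{minted}{text}
@SEQ_ID
GATTTGGGGTTCAAAGCAGTATCGATCAAATAGTAAATCCATTTGTTCAACTCACAGTTT
+
!''*((((***+))%%%++)(%%%%).1***-+*''))**55CCF>>>>>>CCCCCCC65
\end{minted}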
\section{Performing quality control of the sequenced reads}
FastQC is a toolkit for quality control of sequencing reads. You will probably not be
able to run this unless you are working from a Linux computer.
If you are working from a Mac, you need to have X11 or XQuartz installed.
To work on a remote machine, log in via ssh with -X for X11 support.
\mint{bash}+ssh -X [email protected]+
The following command opens the FastQC GUI:
\begin{minted}{bash}
perl FastQC/fastqc
\end{minted}
Load the new .sam file.
We will discuss this together on screen.
\section{Mapping}
\subsection{Indexing the reference }
The following command would have to be used; this step is skipped as it takes too much time. The results can be found
in the \verb+$RESOURCE+ directory.
\begin{minted}{bash}
# bwa index -a bwtsw human_g1k_v37_Ensembl_MT_66.fasta
\end{minted}
\subsection{Perform alignment with Burrows-Wheeler Transform}
In this main section of the mapping we will first align all reads and subsequently
prepare the alignment for filtering and clean-up.
We will build indices for further processing.
%%% -n 7 is used for a more sensitive alignment
%%% -q 15 is used for trimming sequences by quality (default=0=switched off)
Modify the path to the reference genome in the command line.
Question: Why would a link not work in this case?
\begin{minted}{bash}
bwa mem -M $RESOURCE/human_g1k_v37_Ensembl_MT_66.fasta \
daughter.22.1.fq daughter.22.2.fq \
> daughter.22.sam
\end{minted}
% bwa mem -t 4 -M $RESOURCE/human_g1k_v37_Ensembl_MT_66.fasta daughter.22.1.fq daughter.22.2.fq > daughter.22.sam
\subsection{Convert SAM to BAM}
\begin{minted}{bash}
samtools view -bS daughter.22.sam \
> daughter.22.bam
\end{minted}
\subsection{Sort BAM}
The suffix bam is automatically attached. This is for compatibility with PICARD and GATK.
\begin{minted}{bash}
samtools sort daughter.22.bam daughter.22.sorted
\end{minted}
% java7 -Xmx4g -XX:ParallelGCThreads=4 -Djava.io.tmpdir=./ -jar picard-tools//SortSam.jar INPUT= daughter.22.1.fq.bam OUTPUT= daughter.22.1.fq.sorted.bam SORT_ORDER=coordinate VALIDATION_STRINGENCY=LENIENT
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Mark duplicate reads}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Create a temporary folder and run picard tools.
Copy picard tools from the \verb+RESOURCE+ folder.
\begin{minted}{bash}
mkdir tmp
java -Djava.io.tmpdir=tmp -jar picard-tools/MarkDuplicates.jar \
I=daughter.22.sorted.bam \
O=daughter.22.sorted.marked.bam \
METRICS_FILE=daughter.22.sorted.marked.metrics \
VALIDATION_STRINGENCY=LENIENT
\end{minted}
%%% useful links
\url{http://picard.sourceforge.net/command-line-overview.shtml}
%MarkDuplicates
\url{http://sourceforge.net/apps/mediawiki/picard/index.php?title=Main_Page}
%Q:_How_does_MarkDuplicates_work.3F
\subsection{Add read-group}
This step is only necessary for data generated around 2012.
\begin{minted}{bash}
java -jar picard-tools/AddOrReplaceReadGroups.jar \
INPUT=daughter.22.sorted.marked.bam \
OUTPUT=daughter.22.prepared.bam \
RGID=group1 RGLB= lib1 RGPL=illumina RGPU=unit1 RGSM=sample1
java -jar picard-tools/BuildBamIndex.jar INPUT=daughter.22.prepared.bam
\end{minted}
\section{Quality improvements}
Index the file with picard (Step A).
% rm human_g1k_v37_Ensembl_MT_66.dict
\begin{minted}{bash}
java -Xmx4g -jar picard-tools/CreateSequenceDictionary.jar \
R=human_g1k_v37_Ensembl_MT_66.fasta \
O=human_g1k_v37_Ensembl_MT_66.dict
\end{minted}
Create an index on the reference sequence using samtools.
\begin{minted}{bash}
samtools faidx human_g1k_v37_Ensembl_MT_66.fasta
\end{minted}
\subsection{Realignment}
Create an index using GATK.
%os.system("java -Xmx4g -jar %s/GenomeAnalysisTK.jar -T RealignerTargetCreator -R human_g1k_v37_Ense
%mbl_MT_66.fasta -o %s.bam.list -I %s.addrg_reads.bam --fix_misencoded_quality_scores "%(gatk, read_
%1, read_1))
Note that we use \verb+java7+ for running GATK, as it is required by the latest version.
\begin{minted}{bash}
java7 -Xmx4g -jar GenomeAnalysisTK.jar \
-T RealignerTargetCreator \
-R human_g1k_v37_Ensembl_MT_66.fasta \
-o daughter.bam.list \
-I daughter.22.prepared.bam
\end{minted}
% --fix_misencoded_quality_scores
%java7 -Xmx4g -jar GenomeAnalysisTK.jar -T RealignerTargetCreator -R human_g1k_v37_Ensembl_MT_66.fasta -o daughter.22.1.fq.bam.list -I daughter.22.1.fq.addrg_reads.bam --fix_misencoded_quality_scores
\begin{minted}{bash}
java7 -Xmx4g -Djava.io.tmpdir=./tmp/ -jar GenomeAnalysisTK.jar \
-T IndelRealigner \
-I daughter.22.prepared.bam \
-R human_g1k_v37_Ensembl_MT_66.fasta \
-targetIntervals daughter.bam.list -o daughter.22.real.bam
\end{minted}
% --fix_misencoded_quality_scores
%os.system("java -Xmx4g -Djava.io.tmpdir=./ -jar %s/GenomeAnalysisTK.jar -I %s.addrg_reads.bam -R human_g1k_v37_Ensembl_MT_66.fasta -T IndelRealigner -targetIntervals %s.bam.list -o %s.marked.realign ed.bam --fix_misencoded_quality_scores"%(gatk, read_1, read_1, read_1))
\subsection{BQSR (Base Quality Score Recalibration)}
\begin{minted}{bash}
java7 -Xmx4g -jar GenomeAnalysisTK.jar \
-T BaseRecalibrator \
-I daughter.22.real.bam \
-R human_g1k_v37_Ensembl_MT_66.fasta \
-knownSites dbsnp_135.b37.vcf \
-o recal_data.table
\end{minted}
\subsection{Fix the mate pairs}
%os.system("java -Djava.io.tmpdir=./flx-auswerter -jar %s/FixMateInformation.jar INPUT=%s.marked.realigned.bam OUTPUT=%s_bam.marked.realigned.fixed.bam SO=coordinate VALIDATION_STRINGENCY=LENIENT CREATE_INDEX=true "%(picard, read_1, read_1)
\begin{minted}{bash}
java7 -Djava.io.tmpdir=./tmp -jar picard-tools/FixMateInformation.jar \
    INPUT=daughter.22.real.bam \
    OUTPUT=daughter.22.real.fixed.bam \
    SO=coordinate VALIDATION_STRINGENCY=LENIENT CREATE_INDEX=true
\end{minted}
%%%%%%%%%%%%%%%%%%%%
\section{Variant calling}
%%%%%%%%%%%%%%%%%%%%
The final step is variant calling with the two tools most often used, \verb+samtools+ and \verb+GATK+.
\subsection{Samtools mpileup}
\begin{minted}{bash}
samtools mpileup \
-S -E -g -Q 13 -q 20 \
-f human_g1k_v37_Ensembl_MT_66.fasta \
daughter.22.real.bam | \
bcftools \
view -vc - > daughter.22.mpileup.vcf
\end{minted}
Note: not working at the moment.
\subsection{GATK Unified Genotyper}
\begin{minted}{bash}
java7 -Djava.io.tmpdir=tmp -jar GenomeAnalysisTK.jar \
-l INFO \
-T UnifiedGenotyper \
-R human_g1k_v37_Ensembl_MT_66.fasta \
-I daughter.22.real.bam \
-stand_call_conf 30.0 \
-stand_emit_conf 10.0 \
--genotype_likelihoods_model BOTH \
--min_base_quality_score 13 \
--max_alternate_alleles 3 \
-A MappingQualityRankSumTest \
-A AlleleBalance \
-A BaseCounts \
-A ChromosomeCounts \
-A QualByDepth \
-A ReadPosRankSumTest \
-A MappingQualityZeroBySample \
-A HaplotypeScore \
-A LowMQ \
-A RMSMappingQuality \
-A BaseQualityRankSumTest \
-L 22 \
-o daughter.22.gatk.vcf
\end{minted}
Questions:
\begin{enumerate}
\item How many variants are called?
\item Do both callers come up with the same variants?
\item Inspect a case for indels and SNPs and check those variants using
\verb+samtools tview+. Take screenshots.
\end{enumerate}
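One way to approach the first question (a sketch, assuming the VCF files produced above exist) is to count the non-header records of each VCF:
\begin{minted}{bash}
grep -vc "^#" daughter.22.gatk.vcf
grep -vc "^#" daughter.22.mpileup.vcf
\end{minted}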
Next steps: Annotation and comparison of samples.
Acknowledgement: Holger Thiele, Kamel Jabbari (CCG Cologne), Patrick May, Dheeraj Bobbili (LCSB)
\end{document}
| {
"alphanum_fraction": 0.7603686636,
"avg_line_length": 31,
"ext": "tex",
"hexsha": "8229f212e9161ef17cb734c1f480ba93155ded1d",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "40be2fa62a6ca986e4ed9f1833382b2c10478039",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "rolandkrause/isb101",
"max_forks_repo_path": "NGS/previous year/NGSII_tutorial.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "40be2fa62a6ca986e4ed9f1833382b2c10478039",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "rolandkrause/isb101",
"max_issues_repo_path": "NGS/previous year/NGSII_tutorial.tex",
"max_line_length": 271,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "40be2fa62a6ca986e4ed9f1833382b2c10478039",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "rolandkrause/isb101",
"max_stars_repo_path": "NGS/previous year/NGSII_tutorial.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3097,
"size": 10633
} |
\chapter{Introduction}
The adoption of the cloud computing paradigm has changed the perception of server infrastructure for next-generation applications. The cloud computing model has catalyzed a change from the traditional model of deploying applications on dedicated servers with limited room to scale to a new model of deploying applications on a shared pool of computing resources with (theoretically) unlimited scalability\cite{unlimited_scale}.
%The adoption of cloud computing paradigm has changed the way we look at server infrastructure for next-generation applications. The cloud computing model has catalyzed a change from the traditional model of deploying applications on dedicated servers with limited room to scale to a new model of deploying applications on a shared pool of computing resources with (theoretically) unlimited scalability \cite{unlimited_scale}.
The technology backbone behind cloud computing is \emph{virtualization}. Virtualization is the process of creating multiple isolated operating environments that share the same physical server resources. Virtualization addresses the problem of under-utilization of hardware resources observed in the dedicated server model by running multiple isolated applications simultaneously on a physical machine, also referred to as the ``host''. The isolated operating environments are referred to as virtual machines (VMs) \cite{5346711}, or containers \cite{lxc}. Each virtual machine or container provides an abstracted hardware interface to its applications and utilizes computational resources as needed (or available) from the host.
Virtualization gained widespread adoption because of the many benefits it offers. First, virtual machines can be statefully migrated from one physical machine to another in the event of a hardware failure \cite{live_migration}. This capability can also be used to dynamically reposition virtual machines in such a way that virtual machines that frequently work together are virtualized on the same physical server, improving their performance. Virtual machines can also be automatically migrated to achieve a pre-defined goal, such as power efficiency or load balancing \cite{auto_vm_migration}. Second, virtual machines can be administered more flexibly than physical servers, as they are usually represented as files, making them easier to clone, snapshot, and migrate. Virtual machines can also be dynamically started or stopped without affecting other virtual machines on the host. Finally, virtualization enables an additional layer of control between the applications and the hardware, providing more options to effectively monitor and manage hardware resources.
Virtualization can be implemented in multiple ways, such as creating virtual devices that simulate the required hardware, which are in turn mapped to physical resources on the host, or by restricting the physical resources available to each virtual machine using resource management policies provided by the host. The strengths and weaknesses exhibited by the virtualization platforms that implement these strategies vary based on their design objectives and implementation. These variations makes it important to understand the operation of each virtualization platform before choosing a platform to virtualize a given application infrastructure.
%The technology backbone behind the idea of cloud computing is virtualization. Virtualization is the process of creating virtual devices that simulate the hardware which are in turn mapped to the physical resources on an underlying server. Virtualization primarily addresses the problem of under-utilization of computing resources in the dedicated server model. Virtualization maximizes the utilization of the hardware investment by running an increased number of isolated applications simultaneously on a physical server or "\textit{host}". The isolated applications are run in an operating environment referred as \textit{Virtual Machines (VM)} \cite{5346711} or \textit{Containers} \cite{lxc}. The VMs / containers abstract the hardware interfaces required by the applications they run and utilize as much computational resources as they need (or are available). Virtualization opens the door for several added benefits:
%
% \textbf{Improved Fault-tolerance} :- Virtual machines can be statefully migrated to other physical machines in the event of a hardware failure \cite{live_migration}. The fail-over can also be automatically triggered in some cluster-aware virtualization platforms \cite{auto_vm_migration}. This facility effectively enables continuous availability for critical applications, which would require application-level awareness to achieve in the dedicated server model.
%
% \textbf{Operational Flexibility} :- Virtual Machines and their associated virtual devices are usually represented as a few files, which make them easy to clone, snapshot, and migrate to other physical machines. Also, individual VMs can be dynamically started or stopped without causing any outage to other VMs or the host.
%
% \textbf{New Avenues for Tuning and Customization} :- Since virtualization platforms introduce an additional layer of control between applications and hardware, they enable more options to customize the environment for individual VMs. That also provide more points of control to administer the resources accessible to individual virtual machines.
%
% \textbf{Granular Monitoring} :- The physical server that hosts VMs provides deeper insights and visibility into the performance, capacity, utilization and the overall health of the individual VMs. Analysis of the monitoring data facilitates resource management and control.
%
%The benefits offered by the virtualization platforms form the core of the cloud computing ecosystem. Commercial cloud infrastructure providers have built their solutions around these benefits and offer "pay as you go" services where end-users are billed only for the resources used by the virtual machines or containers. Cloud providers pass on isolation and granular control options for the virtualization platform to end-users with a flexible interface, letting users keep complete control of their operating systems, storage environment, and networking setup without worrying about the underlying physical infrastructure. Large scale cloud computing platforms that lease their servers to virtualize applications of several end-users over a large pool of shared resources are exemplified by Amazon Elastic Compute Cloud (EC2) [cite], RackSpace [cite], and Heroku [cite].They differ by their choice of the virtualization platform. Enterprises and academic institutions also run a private cloud platform and have driven the development of open source cloud computing toolkits like OpenStack [cite], CloudStack [cite] and OpenShift [cite].
\section{Motivation}
The Intelligent River\textsuperscript{\textregistered} is a large-scale environmental monitoring system being built by researchers at Clemson University \cite{ir}. The effort aims to deploy a large and highly distributed wireless sensor network in the Savannah River Basin. The back-end infrastructure uses a real-time distributed middleware system to receive, analyze, and visualize the sensor observation streams. The time sensitivity of the observation streams, along with the anticipated scale of the Savannah deployment demands that the middleware system be flexible, fault-tolerant, and scalable. A key factor in architecting such a dynamic system is virtualization.
Given the availability of multiple open source virtualization platforms, such as KVM \cite{kvm}, Xen \cite{xen}, and Linux Containers \cite{lxc}, the choice of virtualization platform is not obvious. We must identify the most effective platform to virtualize the Intelligent River\textsuperscript{\textregistered} middleware system. This important choice requires a detailed comparative analysis of the available virtualization platforms. Prior work on the analysis of virtualization platforms for High Performance Computing (HPC) applications \cite{younge2011analysis} claims that KVM performs better than Xen based on the HPCC benchmarks \cite{hpcc}. On the other hand, other prior work claims that OpenVZ \cite{openvz}, a virtualization platform based on Linux Containers, performs the best, while Xen performed significantly better than KVM, clearly contradicting prior claims. These (and other) contradictions in prior work indicate that comparing virtualization platforms based only on standard benchmarks is not sufficient to arrive at a definitive optimality decision. Several other factors must be considered. This motivated us to perform: (i) an operational analysis of the virtualization platforms, (ii) a quantitative analysis of virtualization overhead, (iii) a workload-specific benchmark analysis, and (iv) a qualitative analysis of operational flexibility.
\section{Contributions}
This thesis builds on prior work comparing open source virtualization platforms, but goes beyond the standard benchmarks to provide a more detailed comparison. The contributions of this thesis include (i) a detailed discussion of the operation of KVM, Xen, and Linux Containers, (ii) a quantitative comparison of the virtualization platforms based on the overhead they impose in virtualizing CPU, memory, network bandwidth, and disk access, (iii) a workload-specific benchmark analysis covering pybench, phpbench, nginx, the Apache benchmark, pgbench, and others, (iv) a comparative analysis of performance specific to the Intelligent River\textsuperscript{\textregistered} middleware components, including RabbitMQ, MongoDB, and Apache Jena TDB, (v) a qualitative discussion of the operational flexibility offered by the virtualization platforms, and (vi) a discussion of strategies that can be adopted to enforce resource entitlement with respect to CPU, memory, network bandwidth, and disk access.
\section{Top Level}
At the top level, the system manages the interaction with the user and
adds the user's input, namely \textit{Axiom}, \textit{Definition}, \textit{Fixpoint}, and \textit{Inductive Definition},
to the context.\par
It is also at this level that the inductive rules are built.
\subsection{Acceptance of Command}
\subsubsection{Axiom}
After parsing and checking, the command has the form
$$
\tt Ax\ \ \it name\ term,
$$
where {\it term} denotes the type of this axiom.
Since it is an axiom, we do not have to (and sometimes cannot) build the corresponding term.\par
We therefore record the term as {\it Nothing} and put the binding into the context.
\subsubsection{Definition}
After parsing and checking, the command has the form
$$
\tt Def\ \ {\it name\ term2\ term1},
$$
where {\it term2} is the type of {\it term1}.\par
We simply bind the name, term, and type together and put the binding into the context.
\subsubsection{Fixpoint}
After parsing and checking, the command has the form
$$
\tt Fix\ {\it name }\ \left(\lambda {\it f}:{\it term1},\ {\it term2}\right),
$$
where {\it term1} is the type of {\it term2}, and {\it term2} is a recursive function that refers to itself as {\it f}.\par
Since this recursive function has passed all of the type checks and safety checks, we can use it without worrying about
termination.
Moreover, whether a definition is a {\it Fixpoint} does not influence reduction, because reduction
always finds the term in the context according to its index.\par
We therefore simply remove the {\it Fixpoint} mark and put the definition into the context.
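To make this bookkeeping concrete, here is a minimal Haskell sketch of how the three commands above could extend the context. The types and names (\mintinline{haskell}|Command|, \mintinline{haskell}|Binding|, \mintinline{haskell}|addCommand|) are hypothetical simplifications, not the actual MiniProver definitions.
\begin{center}
\begin{minted}{haskell}
-- A simplified term type; the real AST is much richer.
data Term = TmVar String
          | TmProd String Term Term
          | TmLambda String Term Term

-- Hypothetical command forms after parsing and checking.
data Command = Ax  String Term        -- name and type only
             | Def String Term Term   -- name, type, term
             | Fix String Term Term   -- name, type, recursive body

-- A context entry; axioms carry no term, hence the Maybe.
data Binding = Binding String Term (Maybe Term)

type Context = [Binding]

addCommand :: Command -> Context -> Context
addCommand (Ax  n ty)      ctx = Binding n ty Nothing     : ctx
addCommand (Def n ty tm)   ctx = Binding n ty (Just tm)   : ctx
-- the Fixpoint mark is simply dropped; the body is stored as-is
addCommand (Fix n ty body) ctx = Binding n ty (Just body) : ctx
\end{minted}
\end{center}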
\subsubsection{Inductive Definition}
After parsing and checking, the command has the form
$$
\tt Ind\ \ {\it name\ p\ term2\ term1\ constructors},
$$
where {\it p} is the number of parameters of the inductive type, {\it term1} is the type of the inductive definition,
and {\it term2} is the corresponding term.\par
Apart from the ordinary bookkeeping, we also need to add the inductive rule, which is the type-theoretic counterpart of mathematical
induction. Since the proof of a claim is a term of a certain type, the induction rule is a term encoding the induction scheme.\par
For example,
\begin{center}
\begin{minted}{coq}
Inductive nat : Type :=
| O : nat
| S : nat -> nat.
(* build inductive rule*)
fun (P:nat -> Type)(f:P O)(f0:forall (n:nat), P n -> P (S n)) =>
fix F (n:nat) : P n :=
match n as n0 in nat return (P n0) with
| O => f
| S n0 => f0 n0 (F n0)
end
:
forall (P:nat -> Type), P O ->
(forall (n:nat), P n -> P (S n)) ->
forall (n:nat), P n
\end{minted}
\end{center}
The intuition here is that for a proposition \mintinline{coq}|P|,
\begin{itemize}
\item it is true on \mintinline{coq}|O|;
\item if it is true on \mintinline{coq}|n|, then it is true on \mintinline{coq}|S n|.
\end{itemize}
Then it is true for all terms of type \mintinline{coq}|nat|, which is exactly
mathematical induction.\par
Basically, to build such a term, we first build weakened assumptions for all constructors,
like \mintinline{coq}|f| and
\mintinline{coq}|f0| above. From these, the final term, which establishes the proposition for all terms of the inductive type,
is built, like \mintinline{coq}|F| above.\par
Every occurrence of the inductive type in a constructor demands a proof of the proposition,
which explains why, for the constructor \mintinline{coq}|S : nat -> nat|, which depends on a \mintinline{coq}|nat| term,
the weakened assumption is \mintinline{coq}|f0 : forall (n:nat), P n -> P (S n)|.\par
The reason the inductive rule of \mintinline{coq}|nat| requires a {\it Fixpoint} is that some of its constructors
rely on a term of type \mintinline{coq}|nat|. Here is a case that does not need a recursive function.
\begin{center}
\begin{minted}{coq}
Inductive eq (T : Type) (x : T) : T -> Type :=
| eq_refl : eq T x x.
(* build inductive rule*)
fun (T:Type) (x:T) (P:forall (t:T) (_:eq T x t), Type)
(f:P x (eq_refl T x)) (t:T) (e:eq T x t) =>
match e as e0 in eq _ _ a0 return (P a0 e0) with
| eq_refl _ _ => f
end
:
forall (T:Type) (x:T) (P:forall (t:T) (_:eq T x t), Type)
(f:P x (eq_refl T x)) (t:T) (e:eq T x t),
P t e
\end{minted}
\end{center}
Unfortunately, we have to admit that, owing to limited references, time, and energy,
the induction rule in our system is not complete.
An inductive definition acceptable to our system must satisfy the following:\par
Assume the inductive type is $\tt A_1\to A_2\to \cdots \to A_n$; then each $\tt A_k$ must be
one of the following
\begin{itemize}
\item An ordinary term, like \mintinline{coq}|Type|, \mintinline{coq}|T|.
\item An application, like \mintinline{coq}|P n|.
\item An inductive type, like \mintinline{coq}|nat|, \mintinline{coq}|eq T x y|.
\end{itemize}
Other forms, such as the product type \mintinline{coq}|U -> V|, are not supported.
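A minimal sketch of how this restriction might be checked, assuming a simplified term type whose constructor names follow the ASTs shown later in this section (the function name \mintinline{haskell}|allowedComponent| is hypothetical):
\begin{center}
\begin{minted}{haskell}
-- Simplified term shapes for one component of an inductive type.
data Term = TmVar String            -- an ordinary term
          | TmAppl [Term]           -- an application, like P n
          | TmIndType String [Term] -- an inductive type, like nat
          | TmProd String Term Term -- a product type, like U -> V

-- Variables, applications, and inductive types are accepted;
-- product types are rejected.
allowedComponent :: Term -> Bool
allowedComponent (TmProd _ _ _) = False
allowedComponent _              = True
\end{minted}
\end{center}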
\subsection{Requests to Environment}
\subsubsection{\tt Print \sl ident}
This command displays on the screen information about the declared or defined object referred to by {\sl ident}.
\subsubsection{\tt Check \sl term}
This command displays the type of {\sl term}.
\subsection{Top Loop}
The main workflow of the top-level loop is as follows:
\subsubsection*{Reading raw input}
The MiniProver reads the user's input until a dot (\textquotesingle .\textquotesingle); any further input on the same line is discarded.
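A minimal Haskell sketch of this read loop (the function name \mintinline{haskell}|readCommand| is hypothetical, and the real reader may differ in detail):
\begin{center}
\begin{minted}{haskell}
-- Read lines until one contains a dot; everything after the
-- first dot on that line is discarded.
readCommand :: IO String
readCommand = go ""
  where
    go acc = do
      line <- getLine
      case break (== '.') line of
        (before, '.' : _) -> return (acc ++ before ++ ".")
        _                 -> go (acc ++ line ++ "\n")
\end{minted}
\end{center}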
\subsubsection*{Parsing}
The raw input is parsed into an AST that still carries names; the nameless representation is built later.
For example, the raw input
\begin{center}
\begin{minipage}{0.6\textwidth}
\begin{minted}{coq}
(* raw input *)
Fixpoint plus (n:nat) (m:nat) : nat :=
match n as n0 in nat return nat with
| O => m
| S n0 => S (plus n0 m)
end.
\end{minted}
\end{minipage}
\end{center}
will be parsed as the AST
\begin{center}
\begin{minipage}{0.9\textwidth}
\begin{minted}{haskell}
Fix "plus"
( TmFix (-1)
( TmLambda "plus"
( TmProd "n" ( TmVar "nat" )
( TmProd "m" ( TmVar "nat" ) ( TmVar "nat" )))
( TmLambda "n" ( TmVar "nat" )
( TmLambda "m" ( TmVar "nat" )
( TmMatch (-1) ( TmVar "n" ) "n0" [ "nat" ]
( TmVar "nat" )
[ Equation [ "O" ] ( TmVar "m" )
, Equation [ "S", "n0" ]
( TmAppl
[ TmLambda "n" ( TmVar "nat" )
( TmAppl [ TmVar "S", TmVar "n" ])
, TmAppl [ TmVar "plus", TmVar "n0", TmVar "m" ]])])))))
\end{minted}
\end{minipage}
\end{center}
\subsubsection*{Duplicate global name checking}
After parsing, we know the name introduced by the command, and this name must not be the same as that of
any object already defined or declared in the environment.
\subsubsection*{Name checking}
Before building the nameless representation, we check that there are no unbound names in the AST.
\subsubsection*{Nameless representation building}
If all names are bound, we can build the nameless representation. A variable that points to
a type constructor or a term constructor is unfolded into its functional representation.
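The core of this step is the standard computation of de Bruijn indices. Here is a minimal sketch over a bare lambda calculus (hypothetical simplified types; the real pass also handles products, matches, fixpoints, and the constructor unfolding mentioned above):
\begin{center}
\begin{minted}{haskell}
import Data.List (elemIndex)

-- Named and nameless terms; the name is kept in the nameless
-- form for printing, as in TmRel "n" 1 below.
data Named    = NVar String | NLambda String Named | NAppl Named Named
data Nameless = Rel String Int
              | Lambda String Nameless
              | Appl Nameless Nameless

-- env lists bound names, innermost binder first; an index is the
-- distance to its binder. Nothing signals an unbound name.
toNameless :: [String] -> Named -> Maybe Nameless
toNameless env (NVar x)      = Rel x <$> elemIndex x env
toNameless env (NLambda x b) = Lambda x <$> toNameless (x : env) b
toNameless env (NAppl f a)   = Appl <$> toNameless env f
                                    <*> toNameless env a
\end{minted}
\end{center}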
Here is an example, the nameless AST will be built for the previous AST:
\begin{center}
\begin{minipage}{0.9\textwidth}
\begin{minted}{haskell}
Fix "plus"
( TmFix (-1)
( TmLambda "plus"
( TmProd "n" ( TmIndType "nat" [])
( TmProd "m" ( TmIndType "nat" []) ( TmIndType "nat" [])))
( TmLambda "n" ( TmIndType "nat" [])
( TmLambda "m" ( TmIndType "nat" [])
( TmMatch 0 ( TmRel "n" 1 ) "n0" [ "nat" ] ( TmIndType "nat" [])
[ Equation [ "O" ] ( TmRel "m" 0 )
, Equation [ "S", "n0" ]
( TmAppl
[ TmLambda "n" ( TmIndType "nat" [])
( TmConstr "S" [ TmRel "n" 0 ])
, TmAppl
[ TmRel "plus" 3
, TmRel "n0" 0
, TmRel "m" 1 ]])])))))
\end{minted}
\end{minipage}
\end{center}
\subsubsection*{Positivity Checking (Inductive Definition Only)}
For an inductive definition, once the nameless representation has been built, positivity
can be checked.
\subsubsection*{Termination Checking}
All subterms with fixpoint definitions are checked for termination. After checking,
annotations recording the indices of the decreasing variables are added to the AST.
\subsubsection*{Type checking}
Before actually processing the command, the top level checks that it is well typed.
\subsubsection*{Processing the command}
The definitions and declarations are processed as described above; an assertion leads to the proof editing mode.
\cleardoublepage
\def\pageLang{pas}
\section{Custom Data Types in Pascal} % (fold)
\label{sec:custom_types_in_pas}
\input{topics/type-decl/pascal/pas-small-db}
\input{topics/type-decl/pascal/pas-type-decl}
\input{topics/type-decl/pascal/pas-struct-decl}
\input{topics/type-decl/pascal/pas-enum-decl}
% section more_data_types_in_pas (end)
\documentclass[12pt,a4]{article}
\usepackage{graphicx,subfigure,a4wide,noparindent}
\title{Designing complex Gabor filters}
\author{Bill Christmas}
\begin{document}
\maketitle
\section{Introduction}
In order to detect texture in scenes, the human visual system uses a set of bandpass filters that vary in orientation and centre frequency. A Gabor filter bank is a set of regularly spaced filters that very approximately mimic this behaviour. Typical filters might have point spread functions such as:
\begin{center}
\includegraphics{psf} \hspace{2cm} \includegraphics{psf45}
\end{center}
\section{Design principles}
Consider first a single prototype filter, for example the left-hand example above. Such a filter has a point spread function of the form:
\[ g(x,y) = {1 \over 2 \pi \sigma_r\sigma_\theta}\;
\exp \left\{ -{1\over2} \left[{\left(x\over\sigma_r\right)^2+\left(y \over \sigma_\theta\right)^2}\right]\right\} \;
\cos\left(2\pi U x\right) \]
The exponential term defines a Gaussian-shaped envelope of elliptical shape, and the cosine term defines the centre frequency. The Gaussian envelope shape is used to get the best compromise of minimising the filter width in both spatial and frequency domains. The constants are chosen so that the filter has unity gain at zero frequency, with parameters $\sigma_r, \sigma_\theta$ that express the filter width as a ``standard deviation'' in units of pixels.
The problem with such a p.s.f.\ is that the response ripples in a similar manner to the cosine term in the p.s.f. A convenient way to deal with this is instead to use a complex p.s.f.:
\[ g(x,y) = {1 \over 2 \pi \sigma_r\sigma_\theta}\;
\exp \left\{ -{1\over2} \left[{\left(x\over\sigma_r\right)^2+\left(y \over \sigma_\theta\right)^2}\right]\right\} \;
\exp\left(i2\pi U x\right) \]
and to deal with the resulting complex filtered image by taking its modulus. This gives a final response whose shape is influenced by the Gaussian envelope of the p.s.f.\ instead of the p.s.f.\ itself.
The Fourier transform of the p.s.f.\ is given by:
\begin{equation}
G(u,v) = \exp \left\{ -2\pi^2\left[ \sigma_r^2 (u-U)^2 +\sigma_\theta^2 v^2 \right] \right\}\label{eq:G}
\end{equation}
--- i.e.\ an elliptical shape whose centre is offset on the $u$ axis by the centre frequency $U$.
The frequency coordinate system can then be rotated and scaled to create an array of filters of $N$ different orientations and $M$ different scales, using the substitution:
\[ {u \choose v} = s^m \left( \begin{array}{rr}\cos \pi{n\over N} & \sin
\pi{n\over N} \\ -\sin \pi{n\over N} & \cos \pi{n\over N}
\end{array}\right) {u' \choose v'} ,\; m = 0, 1, \ldots, M-1,\; n = 0, 1,
\ldots N-1,\; s > 1 \] where $u', v'$ are the coordinates of a general filter.
The scale ratio parameter $s$ thus defines both the ratio of the centre
frequencies of filters of adjacent scales, and also their widths.
If we are using these filters for digital image processing, we also need to determine the frequency scales, to ensure that the same proportion of the spectrum is covered in both dimensions. In the above analysis, the frequencies $u, v$ are in the continuous domain, and have units of cycles per unit length. We can arbitrarily choose our unit length to be a pixel width, $p$ (assuming square pixels), in which case the frequency units are $1/p$. In the sampled domain, however, the frequency units are related to the image dimensions (which are generally different from each other). If the image width and height are $L_w$ and $L_h$ pixels respectively, the units of the frequency samples are $1/(L_w p)$, $1/(L_h p)$. Hence if the sampled frequencies are $k_u, k_v$, then
\[ {u' \choose v'} = {{k_u \over L_w} \choose {k_v \over L_h}} \]
The result is a set of filters that covers one half of the complex frequency plane, shown in Fig.~\ref{fig:spectrum}. (We do not populate the lower half of the plane because the extra filters would have the same response as the existing ones.) The set of filters can be made a little more compact by offsetting every other scale by $1\over 2$ of the angular separation, shown in Fig.~\ref{fig:shifted} (here the middle of the 3 scales). Note however that this will reduce the number of filters that explicitly detect perceptually important vertical lines.
\begin{figure}[t]\centering
\subfigure[no offset]{\includegraphics{maskfig}\label{fig:spectrum}}
\hspace{2em}
\subfigure[with alternate scales offset]{\includegraphics{offsetfig}\label{fig:shifted}}
\caption{Spectral representation of filters: $M=3$, $N=6$, $s=2$\label{fig:spectra}}
\end{figure}
\section{Setting the scale parameters}
The filter scale parameters are set as follows. Note that $\sigma_\theta$, $\sigma_r$ are the filter scales in the {\em spatial} domain.
\paragraph{$\mathbf{\sigma_\theta}$:}
This represents the filter scale in pixels in the tangential direction of the highest frequency filters. It is more or less inversely proportional to the tangential filter separation. Half-way between the centres of two filters adjacent in the tangential direction, the filter responses are equal ($=\delta$, say where $\delta < 1$). The distance between the two filters is given by (Fig.~\ref{fig:circles}):
\def\Sinpbnn{\sin{\left(\pi\over2N\right)}}
\[ 2r = 2U \Sinpbnn \]
Since $\delta$ is the response of a filter at a distance $U\sin(\pi/2N)$ from its centre in what is approximately the tangential direction, and $u$ is constant in the tangential direction ($u = U$ for the highest frequency filters), then from (\ref{eq:G}):
\[ \delta = \exp \left[ -2\pi^2 \left(\sigma_\theta U \Sinpbnn\right)^2 \right] \]
\[ \rightarrow\quad \sigma_\theta =
\sqrt{-\ln\delta \over 2\pi^2}\; {1 \over U\Sinpbnn}
\]
Using $\delta = {1\over2}$ gives a reasonably uniform overall coverage of the spectrum in the tangential direction, in which case:
\[ \sigma_\theta \approx {0.19 \over U\Sinpbnn} \]
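As a quick worked example with assumed (purely illustrative) values $N = 6$ and $U = 0.25$ cycles per pixel:
\[ \sigma_\theta \approx {0.19 \over 0.25 \sin\left(\pi\over12\right)}
\approx {0.19 \over 0.25 \times 0.259} \approx 2.9 \hbox{ pixels} \]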
\paragraph{$\mathbf{\sigma_r}$, no offsetting:}
$\sigma_r$ represents the filter scale in the radial direction of the highest frequency filters (the outer semi-ring in Fig.~\ref{fig:spectra}). If we do not offset alternate scales (Fig.~\ref{fig:spectrum}), $\sigma_r$ and $\sigma_\theta$ are independent, so $\sigma_r$ depends only on the radial filter separation, which is a function of the scale ratio $s$. The responses $\delta$ of the two outermost radially adjacent filters are equal when
\[ \exp\left[ -2\pi^2 \sigma_r^2 (u-U)^2\right]
= \exp\left[ -2\pi^2 \sigma_r^2 \left(su-U\right)^2\right] = \delta \]
Selecting appropriate roots:
\begin{eqnarray}
\sigma_r &=& \sqrt{-\ln{\delta}\over 2\pi^2} \;{s+1\over U(s-1)} \label{eq:sigma_r}\\
&=& {s+1\over s-1}\; \Sinpbnn\; \sigma_\theta \nonumber
\end{eqnarray}
For $\delta={1\over2}$:
\[ \sigma_r \approx 0.19 {s+1\over U(s-1)} \]
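Continuing the illustrative values above ($U = 0.25$) with $s = 2$:
\[ \sigma_r \approx 0.19\, {2+1 \over 0.25\,(2-1)} \approx 2.3 \hbox{ pixels} \]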
\paragraph{$\mathbf{\sigma_r}$, with offsetting:}
When alternate scales are offset by one half of the filter width (Fig.~\ref{fig:shifted}) the situation is more complicated. Consider first the case when the filters are circular, i.e.\
\begin{equation}
\label{eq:circ}
\sigma_r = \sigma_\theta
\end{equation}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{circles}
\caption{Geometry for offset circular Gabor filters; $N = 3$}
\label{fig:circles}
\end{figure}
From basic geometrical properties (Fig.~\ref{fig:circles}), we can see that
\[ \alpha + \beta = {N+1\over 2N}\pi, \quad \sin\alpha = {s_c\over 1+s_c}, \quad \cos\beta = {1\over 1+s_c} \]
where $N$ is the number of filters at the same scale, and $s=s_c$ is the scale ratio required to generate circular filters.
Defining $\tau = \sin\left({N+1\over 2N}\pi\right)$, we can solve for $s_c$:
\[ s_c = {1+\tau-\tau^2 \pm \sqrt{(1+2\tau)(1-\tau^2)}\over \tau^2} \]
The positive root is the required one; the negative one corresponds to $1\over s$. For $N=4$, $s \approx 2$.
In the general case, for a desired value of $s$ we can use (\ref{eq:sigma_r}), (\ref{eq:circ}) to determine an appropriate value for $\sigma_r$:
\[ \sigma_r = {s+1 \over s-1}\: {s_c-1 \over s_c+1}\: \sigma_\theta \]
\section{Speeding up the filtering}
A common complaint about Gabor filters is that the inverse FFT that has to be performed for each of the filters makes the system too slow to be useful. The basic method performs a 2D inverse FFT over the whole filtered image spectrum. However, most of this spectrum has almost no energy for a given filter. If instead we shift the coordinates of the spectrum to the centre frequency of the filter, and crop this spectrum so that we only include regions where the filter has a significant response, the FFT will be correspondingly smaller and hence faster. Because of the shift of frequency coordinates, the output image is multiplied by a phase term. Since we take the modulus of the output image, this phase term is removed. The output image is also smaller: it is the same size as the cropped spectrum. However, if needed, it can be reinterpolated up to the original image size.
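As a rough, purely illustrative estimate of the saving: cropping a $1024 \times 1024$ spectrum down to $128 \times 128$ reduces the inverse FFT work from roughly $2.1\times10^7$ to about $2.3\times10^5$ operations (using the $WH\log_2 WH$ estimate), i.e.\ a speed-up of nearly two orders of magnitude per filter.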
\end{document}
% !TEX program = xelatex
\documentclass[10pt, compress,notheorems]{beamer}
\usetheme{m}
\usepackage{booktabs}
\usepackage[scale=2]{ccicons}
\usepackage{minted}
\usepackage{amsmath}
\usepackage{bm}
\usepackage{hyperref}
\usepackage{url}
\usepackage{graphicx}
\usepackage{multirow}
\usepackage{multicol}
\usepackage{amsfonts}
\usepackage{pgf,tikz}
\usepackage{tgpagella}
\usepackage{centernot}
\usepackage{caption}
\usepackage{xcolor}
\usepackage{graphicx}
\usepackage{xspace}
\newcommand{\themename}{\textbf{\textsc{metropolis}}\xspace}
\usepgfplotslibrary{dateplot}
\usemintedstyle{trac}
\title[Text Analysis]{Text Analysis}
\subtitle{}
\date{}
\author{
\href{mailto:[email protected]}{Christopher Gandrud}
}
\institute{SG1022, City University London}
\begin{document}
\maketitle
\frame{
\frametitle{Aims}
\begin{itemize}
\item What is text analysis and why use it?
\item Human vs. machine coding
\item The general process
\item Specific issues with machine coding
\item Pros and Cons
\end{itemize}
}
\frame{
\frametitle{Simple definition of text analysis}
\begin{center}
``When we perform textual analysis on a text, we make an {\large{educated guess}} at some of the {\large{most likely interpretations}} that might be made of that text.'' {\small{(McKee 2003, 1)}}
\end{center}
}
\section{Why use text analysis?}
\frame{
\frametitle{You}
\begin{center}
{\large{You \emph{use} and \emph{contribute} to text analysis}} {\large{every day.}}
\end{center}
}
\frame{
\begin{center}
\includegraphics[scale=0.45]{img/google_search_kanye.png}
\end{center}
}
\frame{
\begin{center}
\includegraphics[scale=0.45]{img/google_search_labour.png}
\end{center}
}
\frame{
\frametitle{You}
\begin{center}
{\large{(Some of you) are building a data set that will be used for text analysis}} {\LARGE{right now}}.
\end{center}
}
%\frame{
% \begin{center}
% \includegraphics[scale=0.45]{img/whats_app.png}
% \end{center}
% {\tiny{Source: http://www.buzzfeed.com/shayanroy/blocking-you-now#.obE7eXDgAP}}
%}
\frame{
\begin{center}
\includegraphics[scale=0.4]{img/facebook.png}
\end{center}
{\tiny{Source: http://www.buzzfeed.com/jessicamisener/stupidest-things-ever-said-on-facebook#.elRz03arDM}}
}
\frame{
\frametitle{Social Science}
\begin{center}
People are {\emph{interacting more through texts}} and so are creating {\large{ever more}} (machine-accessible) texts.
\vspace{1cm}
{\large{Massive new source of data}} for social science analysis.
\end{center}
}
\frame{
\frametitle{Text analysis in social science (examples)}
We may have research questions where we conducted a survey with an {\large{open-ended question}}.
\vspace{1cm}
We need some {\large{systematic way}} to {\large{understand}} these texts and {\large{make comparisons}} across survey respondents.
}
\frame{
\frametitle{Text analysis in social science (examples)}
We may have research questions where we want to interview a group of people that are {\large{hard to access}}, but who produce many texts.
\vspace{1cm}
For example, in an ideal world we may want to survey {\large{world leaders}} for their preferences on handling Syrian refugees. We may want to see how these preferences change over time.
\vspace{1cm}
World leaders don't give many interviews (especially not multiple interviews on the same topic over time), but they--often filtered through a press office--do create many texts.
}
\frame{
\frametitle{German Chancellory Press Release Sept. 2015}
\begin{center}
\includegraphics[scale=0.35]{img/sept_chancellor.pdf}
\end{center}
}
\frame{
\frametitle{Cologne New Years Eve Assaults}
\begin{center}
\includegraphics[scale=0.35]{img/sd_deutsche.png}
\end{center}
{\tiny{Source: http://www.sueddeutsche.de/panorama/ermittlungen-zu-den-uebergriffen-in-koeln-vor-allem-marokkaner-fallen-auf-1.2814336}}
}
\frame{
\frametitle{Cologne New Years Eve Assaults}
\begin{center}
\includegraphics[scale=0.35]{img/sd_english.png}
\end{center}
{\tiny{Source: http://www.sueddeutsche.de/panorama/ermittlungen-zu-den-uebergriffen-in-koeln-vor-allem-marokkaner-fallen-auf-1.2814336 via Google Translate}}
}
\frame{
\frametitle{German Chancellory Press Release January 2015}
\begin{center}
\includegraphics[scale=0.35]{img/jan_chancellor.pdf}
\end{center}
}
\frame{
\frametitle{Text analysis in social science (examples)}
We may have research questions about units that are {\large{not able to answer a questionnaire}}, but that produce texts.
\vspace{1cm}
E.g. International organisations, political parties, neighbourhood groups.
}
\frame{
\frametitle{Comparative Party Manifestos Project}
{\small{Left-Right Position of UK Parties Based on their Party Manifestos (\emph{rile} negative values = more left-wing)}}
\begin{center}
\includegraphics[scale=0.35]{img/uk_party_lr.pdf}
\end{center}
{\tiny{Source: https://visuals.manifesto-project.wzb.eu/mpdb-shiny/cmp\_dashboard/}}
}
\frame{
\frametitle{Text analysis in social science (examples)}
We may have research questions about {\large{how actors communicate}} to achieve goals.
\vspace{1cm}
For example, what topics do monetary policy bureaucrats talk about more when there is a financial crisis?
}
\frame{
\frametitle{Topics of US Federal Reserve Governor Speeches}
\begin{center}
{\small{(y-axis = proportion of speech discussing topic)}}
\end{center}
\begin{center}
\includegraphics[scale=0.32]{img/TopicBasic.pdf}
\end{center}
{\tiny{Source: Young and Gandrud (2016)}}
}
\frame{
\frametitle{Text analysis in social science (examples)}
We may have research questions about widely held beliefs across time, for which a survey would be {\large{too costly}} or even impossible to run.
\vspace{1cm}
For example, if we wanted to study monthly perceptions of financial market stress across 180 countries.
}
\frame{
\frametitle{Real-Time Perceptions of Financial Market Stress}
\begin{center}
{\small{(y-axis = level of financial stress)}}
\includegraphics[scale=0.5]{img/uk_compare.pdf}
\end{center}
{\tiny{Source: Gandrud and Hallerberg (2016)}}
}
\frame{
\frametitle{Real-Time Perceptions of Financial Market Stress}
\begin{center}
{\small{(y-axis = level of financial stress)}}
\includegraphics[scale=0.16]{img/perceptions_compare.png}
\end{center}
{\tiny{Source: Gandrud and Hallerberg (2016)}}
}
\frame{
\frametitle{Text analysis in social science (examples)}
\begin{center}
Or we want to study perceptions of world leaders over time\ldots
\end{center}
}
\frame{
\frametitle{GDELT Global Leaders Press Coverage}
\begin{center}
{\small{(y-axis = larger values indicate more positive press)}}
\end{center}
\begin{center}
\includegraphics[scale=0.35]{img/gdelt_leaders.png}
\end{center}
{\tiny{Source: http://data.gdeltproject.org/worldleadersindex/GDELT\_Leaders\_Index-2016-02-05.pdf}}
}
\section{Defining text analysis}
\frame{
\begin{center}
``When we perform textual analysis on a text, we make an {\large{educated guess}} at some of the {\large{most likely interpretations}} that might be made of that text.'' {\small{(McKee 2003, 1)}}
\end{center}
}
\frame{
\frametitle{Define content analysis}
\begin{center}
``{\large{Content Analysis}} is a research technique for making {\large{replicable and valid inferences}} from texts (or other meaningful matter [e.g. videos, audio]) to the {\large{contexts}} of their use.'' {\small{(Krippendorff 2013, 24)}}
\end{center}
}
\frame{
\frametitle{Replicable}
\begin{center}
\textcolor{gray}{``Content Analysis is a research technique for making {\large{replicable}} and valid inferences from texts (or other meaningful matter [e.g. videos, audio]) to the contexts of their use.''}
\end{center}
{\large{Replicable}}: different researchers, working independently of each other, should get the same results when applying the same technique.
\vspace{0.5cm}
Replicable results are more reliable.
}
\frame{
\frametitle{Valid}
\begin{center}
\textcolor{gray}{``Content Analysis is a research technique for making replicable and {\large{valid}} inferences from texts (or other meaningful matter [e.g. videos, audio]) to the contexts of their use.''}
\end{center}
{\large{Valid}}: research is open to careful scrutiny and your claims can be upheld given independently available evidence.
}
\frame{
\frametitle{Texts}
\begin{center}
\textcolor{gray}{``Content Analysis is a research technique for making replicable and valid inferences from {\large{texts}} (or other meaningful matter [e.g. videos, audio]) to the contexts of their use.''}
\end{center}
{\large{Texts}}: something that is produced by someone to have meaning for someone else.
\vspace{0.5cm}
E.g. newspaper articles, treaties, transcripts, tweets, maps, advertisements, press releases, movies, party manifestos.
\vspace{0.5cm}
In this course we focus exclusively on written texts composed of words.
}
\frame{
\frametitle{Texts in contexts}
\begin{enumerate}
\item<1-> Texts have no objective, reader-independent qualities. Meaning (data) arises from someone reading the text, often expecting others' understanding.
\item<2-> Texts do not have single meanings.
\item<3-> Meanings invoked by texts need not be shared.
\item<4-> Texts have meanings relative to particular contexts.
\item<5-> Content analysts infer answers to particular research questions from their texts. Their inferences are merely more systematic, explicitly informed, and verifiable \ldots than what ordinary readers do.
\end{enumerate}
}
\frame{
\frametitle{Common data output}
Text analysis can create data that is:
\vspace{0.5cm}
\begin{itemize}
\item {\large{Discrete}}: e.g. the main topics of a text.
\vspace{0.5cm}
\item {\large{Continuous}}: e.g. proportion of a document dedicated to a specific word or words, scale (negative to positive, left-right).
\end{itemize}
\vspace{0.5cm}
The choice largely depends on your {\large{research question}}.
}
\section{Human and Machine Coding}
\frame{
\frametitle{Human vs. Machine coding}
You can analyse texts either by relying exclusively on human coders or also rely on machine-assistance.
}
\frame{
\frametitle{Human vs. Machine coding}
Note: you should {\large{never exclusively rely on machine coding}}. At a minimum, you need to check the validity of your machine assigned codes.
\vspace{0.5cm}
{\large{Face validity:}} Do the machine assigned codes make sense in relation to the context?
}
\frame{
\frametitle{Human vs. Machine coding}
Machine coding has the advantage of being much more {\large{efficient}} for large numbers of texts.
\begin{itemize}
\item For example, it would basically be impossible for GDELT to create a daily updated index of world leader press coverage with human coders.
\end{itemize}
\vspace{1cm}
Machine coding is often more easily {\large{reproducible}} and update-able.
}
\frame{
\frametitle{Human and Machine coding similarities}
\begin{center}
Regardless of whether you use human or machine coding, the general text analysis {\large{process is the same}}.
\end{center}
}
\section{General Text Analysis Steps}
\frame{
\frametitle{Text analysis steps}
\begin{enumerate}
\item {\large{Define}} the population of texts you are interested in (e.g. press releases by a particular organisation, open-ended survey responses).
\vspace{0.5cm}
\item Gather your {\large{sample of texts}}
\vspace{0.5cm}
\item {\large{Develop}} a coding scheme and {\large{classify}} your texts.
\vspace{0.5cm}
\item Establish the {\large{reliability and validity}} of your classifications.
\end{enumerate}
{\tiny{Modified from: http://psc.dss.ucdavis.edu/sommerb/sommerdemo/content/doing.htm}}
}
\frame{
\frametitle{Population}
At least two items to consider when defining your population of texts:
\vspace{0.5cm}
\begin{itemize}
\item Should be relevant for your research question.
\vspace{0.5cm}
\item Texts should be accessible.
\end{itemize}
}
\frame{
\frametitle{Gather your sample}
As with all data gathering, how you {\large{sample}} your texts can {\large{greatly affect your results}}.
\vspace{0.5cm}
For example, if you want to code press attitudes towards immigrants, but only gather articles from \emph{The Guardian}, you will get very different results than if you only sample \emph{The Daily Mail}.
\vspace{1cm}
We will discuss sampling in more detail in Week 7.
}
\frame{
\frametitle{Advanced: web scraping}
New tools for {\large{automatically gathering--web scraping--}}large numbers of texts from the internet.
\vspace{1cm}
Can make the data collection process {\large{dramatically faster}} and {\large{more reproducible}}.
\vspace{1cm}
We do not cover these tools in this course.
}
\frame{
\frametitle{Gather your sample}
In order to enhance reproducibility, when you gather your sample (your {\large{Corpus}}) it should be {\large{well-organised}} and {\large{electronically available}}.
}
\frame{
\frametitle{Develop a coding scheme(1)}
Always consider {\large{reliability}} when developing your coding scheme.
\begin{itemize}
\item Will another coder make the same choices given only the information in your coding scheme?
\end{itemize}
\vspace{1cm}
So, always {\large{fully document}} your coding scheme and {\large{explain your rationale}}.
}
\frame{
\frametitle{Develop a coding scheme (2)}
Determine if you want to create a {\large{discrete}} (e.g. main topic of the text) or {\large{continuous}} coding scheme (e.g. attitude scale).
\vspace{1cm}
This decision should be based on {\large{relevance to your research question}}.
\vspace{1cm}
{\large{Skim}} a sub-sample of the texts to {\large{make a list}} of possible topics or words that would indicate a particular attitude, etc.
}
\frame{
\frametitle{Develop a coding scheme (3)}
From this initial list, create {\large{operational definitions}} of your topic categories or scale.
\begin{itemize}
\item In order to enable replication, make these definitions as {\large{clear and specific}} as possible.
\vspace{0.5cm}
\item Check that your definitions are {\large{comprehensive}}. Do they cover as many topics, words related to attitudes as possible?
\vspace{0.5cm}
\item Make sure that your definitions are {\large{mutually exclusive}}, i.e. there is {\large{no overlap}}.
\end{itemize}
}
\frame{
\begin{center}
Now, {\large{apply}} your coding scheme.
\end{center}
}
\frame{
\frametitle{Check reliability}
You should always have {\large{at least one other rater}} independently apply your coding scheme.
\vspace{0.5cm}
Then check the {\large{level of agreement}}. Ideally, different coders will give the same codes to the same texts based on the same coding scheme.
\begin{itemize}
\item This is known as high {\large{inter-rater reliability}}
\item Simple tests for this would be a correlation coefficient (for continuous codes) or $\chi^2$ tests (for discrete codes). Note: a {\large{lack of independence}} between the two raters' scores {\large{is what you want}}.
\end{itemize}
}
\frame{
\frametitle{Check reliability}
If there is considerable disagreement between raters, you need to {\large{re-evaluate your coding scheme}} and possibly {\large{recode your corpus}}.
}
\frame{
\frametitle{Check machine coded reliability}
If you used machine coding, then you should {\large{select a random sub-sample}} of the texts and check to see if the machine codes match {\large{your intended coding scheme}}.
}
\section{Special issues with machine coding}
\frame{
\frametitle{Special issues: machine coding}
There are many different advanced techniques for machine coding:
\begin{center}
\includegraphics[scale=0.35]{img/grimmer_stewart.png}
\end{center}
{\tiny{Grimmer and Stewart (2013, 268)}}
}
\frame{
\frametitle{This course}
In this course we will focus on:
\begin{itemize}
\item The text {\large{preprocessing step}}.
\vspace{0.5cm}
\item {\large{Simple word frequency}} methods of text analysis.
\end{itemize}
}
\frame{
\frametitle{Preprocessing}
Regardless of the type of machine coding you use, you need to {\large{preprocess your texts}}.
\vspace{1cm}
This can include\ldots
}
\frame{
\frametitle{Preprocessing}
Removing unnecessary white space (spacing between words), punctuation, capitalisation, numbers, etc.
}
\frame{
\frametitle{Preprocessing}
Removing {\large{stopwords}}: function words that do not convey meaning like ``a'' and ``the''.
}
\frame{
\frametitle{Preprocessing}
{\large{Stem}} your words: remove word endings to reduce the total number of unique words (a sketch follows on the next slide).
\begin{itemize}
\item For example: \emph{family}, \emph{families}, \emph{families'}, and \emph{familial} are all changed to their stem: \emph{famili}.
\item Stemming is related to the linguistic concept of \emph{lemmatization}.
\end{itemize}
}
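\begin{frame}[fragile]
\frametitle{Preprocessing: a sketch}
A minimal sketch of these steps in Python with NLTK (assumes each text is a string; the details will vary by project):
\begin{verbatim}
import re
from nltk.corpus import stopwords  # nltk.download('stopwords')
from nltk.stem import SnowballStemmer

stops = set(stopwords.words('english'))
stemmer = SnowballStemmer('english')

def preprocess(text):
    text = text.lower()                    # capitalisation
    text = re.sub(r'[^a-z\s]', ' ', text)  # punctuation, numbers
    tokens = text.split()                  # trims white space
    tokens = [t for t in tokens if t not in stops]
    return [stemmer.stem(t) for t in tokens]

# preprocess("The families are happy.") -> ['famili', 'happi']
\end{verbatim}
\end{frame}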
\frame{
\begin{center}
Note: each preprocessing decision affects your results and so should be {\large{fully justified}}.
\end{center}
}
\frame{
\begin{center}
Once you have preprocessed your data, then you can have your computer code the corpus.
\end{center}
}
\frame{
\frametitle{Word frequency}
In this course, we are going to focus on {\large{word frequency}} methods.
\vspace{0.5cm}
Note, you should focus on the {\large{relative word frequency}}. Most simply:
\begin{equation}
\mathrm{Relative\:Frequency} = \frac{\mathrm{Freq.\:of\:word\:in\:text}}{\mathrm{Total\:words\:in\:text}}
\end{equation}
\vspace{1cm}
This corrects for words being more frequent simply because a text is longer. (A sketch follows on the next slide.)
}
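\begin{frame}[fragile]
\frametitle{Word frequency: a sketch}
The relative frequency formula above in Python, applied to a preprocessed text (the word stems are illustrative):
\begin{verbatim}
from collections import Counter

tokens = ['refuge', 'crime', 'refuge', 'border',
          'crime', 'refuge']
counts = Counter(tokens)
rel_freq = {w: n / len(tokens) for w, n in counts.items()}
# rel_freq['refuge'] == 3/6 == 0.5
\end{verbatim}
\end{frame}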
\frame{
\frametitle{Word frequency}
Word frequency is an efficient way to help you {\large{reproducibly summarise}} many texts,
\begin{center}
\emph{but\ldots}
\end{center}
word frequencies {\large{have no inherent meaning}}.
\vspace{0.5cm}
You still need to code what the frequency of a word means, relative to the context and your research question.
}
\frame{
\frametitle{Simple example: January 2016 Press Release}
\begin{center}
\includegraphics[scale=0.3]{img/jan_chancellor.pdf}
\end{center}
Refugees discussed in terms of the topic \ldots
}
\frame{
\frametitle{Simple example: January 2016 Press Release}
\begin{center}
\includegraphics[scale=0.3]{img/jan_chancellor.pdf}
\end{center}
Refugees discussed in terms of the topic \emph{criminality}.
}
\section{Pros and Cons}
\frame{
\frametitle{Some Pros and Cons of Text Analysis}
\begin{table}
{\footnotesize{
\begin{tabular}{p{4.5cm} p{4.5cm}}
{\large{Pros}} & {\large{Cons}} \\
\hline\hline \\[0.1cm]
Texts are a massive new source of social data & Results can be misleading if we don't appreciate the context within which the speech acts take place. \\[0.3cm]
Social interactions are increasingly happening via texts & Purely descriptive; more work is needed to understand \emph{why} \\[0.3cm]
Useful for tracking changes over time & Sampling bias (including if writers delete texts) can be a major challenge \\[0.3cm]
Can be an inexpensive way to gather social data & \\[0.1cm]
\hline
\end{tabular}
}}
\end{table}
{\tiny{Partially from http://psc.dss.ucdavis.edu/sommerb/sommerdemo/content/strengths.htm}}
}
\end{document}
\section{Poisson Processes}
\label{sec:Poisson-Processes}
The Poisson process is the most widely used model for arrivals into a system because it is often a realistic representation of natural arrival phenomena and, since it possesses the Markov property, it is analytically tractable.
It models well natural processes in which we observe the aggregation of a large number of independent entities\footnote{This is true thanks to the Limiting Theorem \cite{ross2014introduction}}.
The Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space if these events occur with a known average rate and independently of the time since the last event. If inter-arrival times are Exponentially distributed, then the number of arrivals within a given time is Poisson distributed.
\begin{definition}[Poisson Distribution]
\label{def:Poisson-Distribution}
A random variable $N(t)$ is Poisson distributed with rate\footnote{if $N(t) \sim Poisson(\lambda)$, $\lambda$ is called \textit{rate} because $\expected{N(t)}=\lambda t$.} $\lambda$ ($N(t) \sim Poisson(\lambda)$) if $N(t)$ has the following probability mass function:
\begin{equation}
\label{eqn:Poisson-PMF}
f(k,t) = \frac{(\lambda t)^{k} e^{-\lambda t}}{k!}
\end{equation}
\end{definition}
The p.m.f.\ of $N(t) \sim Poisson(\lambda)$ is shown in \Cref{fig:Poisson-PMF}.
\begin{figure}[tp]
\centering
\includegraphics{fig/poisson-pdf}
\caption{Poisson p.m.f.}
\label{fig:Poisson-PMF}
\end{figure}
From \Cref{def:Poisson-Distribution} it follows that the cumulative distribution function of $N(t) \sim Poisson(\lambda)$ is
\begin{equation}
\label{eqn:Poisson-CDF}
F(k,t) = \sum_{x=0}^{k} f(x,t)
\end{equation}
The Poisson distribution has mean
\begin{equation}
\label{eqn:Poisson-Mean}
\expected{N(t)} = \lambda t
\end{equation}
and variance
\begin{equation}
\label{eqn:Poisson-Variance}
\variance{N(t)} = \lambda t
\end{equation}
\begin{definition}[Poisson Process]
\label{def:poisson-process-statistical}
A Poisson process with rate $\lambda$ is a sequence of events such that
(i) $N(0)=0$,
(ii) it has independent increments, and
(iii) $N(s+t)-N(s) \sim Poisson(\lambda t)$ for all $s, t \geq 0$.
\end{definition}
Notice that point (iii) in \Cref{def:poisson-process-statistical} implies stationary increments.
Furthermore, the assumption of independent and stationary increments implies that, at any point in time, the process statistically restarts itself; that is, the process from any point onward is independent of all previously occurred (independent increments) and has the same distribution of the original process (stationary increments).
\begin{definition}[Poisson Process]
\label{def:poisson-process-practical}
A Poisson process with rate $\lambda$ is a sequence of events such that
(i) the interarrival times are Exponential r.v. with rate $\lambda$, and
(ii) $N(0)=0$.
\end{definition}
It can be shown that a process satisfying \Cref{def:poisson-process-practical} also has stationary and independent increments, so the two definitions are equivalent.
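The equivalence can also be checked numerically. The following sketch (in Python/NumPy; the rate, horizon and number of runs are arbitrary) generates arrivals with Exponential interarrival times as in \Cref{def:poisson-process-practical} and compares the empirical mean and variance of $N(t)$ with $\lambda t$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
lam, t, runs = 2.0, 5.0, 10_000

def count_arrivals(t):
    # count arrivals in [0, t] with Exponential(lam) interarrivals
    total, n = 0.0, 0
    while True:
        total += rng.exponential(1 / lam)
        if total > t:
            return n
        n += 1

counts = np.array([count_arrivals(t) for _ in range(runs)])
print(counts.mean())  # ~ lam * t = 10
print(counts.var())   # ~ lam * t = 10
\end{verbatim}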
Here are some useful properties of the Poisson distribution.
\begin{theorem}[Poisson Merging]
\label{thm:Poisson-Merging}
Given two independent Poisson processes with rates $\lambda_{1}$ and $\lambda_{2}$, the merged process is a Poisson process with rate $(\lambda_{1} + \lambda_{2})$.
\end{theorem}
\begin{theorem}[Poisson Splitting]
\label{thm:Poisson-Splitting}
Given a Poisson process with rate $\lambda$, whose events are partitioned in class-A with probability $p$ and class-B with probability $(1-p)$, the class-A process is a Poisson process with rate $p \lambda$ and the class-B process is a Poisson process with rate $(1-p) \lambda$, and these processes are independent.
\end{theorem}
\begin{theorem}[Poisson Uniformity - Single Event]
\label{thm:Poisson-Uniformity-Single-Event}
Given that one event of a Poisson process has occurred by time $t$, that event is equally likely to have occurred anywhere in $[0,t]$.
\end{theorem}
\begin{theorem}[Poisson Uniformity - Multiple Events]
\label{thm:Poisson-Uniformity-Multiple-Events}
If $k$ events of a Poisson process occur by time $t$, then the $k$ events are distributed independently and uniformly in $[0,t]$.
\end{theorem}
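The merging and splitting properties can likewise be illustrated numerically (a Python/NumPy sketch; the rates, horizon and split probability are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
lam1, lam2, t = 1.5, 2.5, 1000.0

# two independent Poisson processes as sorted arrival times
a1 = np.cumsum(rng.exponential(1/lam1, size=int(3*lam1*t)))
a2 = np.cumsum(rng.exponential(1/lam2, size=int(3*lam2*t)))
merged = np.sort(np.concatenate([a1[a1 < t], a2[a2 < t]]))
print(len(merged) / t)   # ~ lam1 + lam2 = 4.0

# split the merged process into class A with probability p
p = 0.3
is_A = rng.random(len(merged)) < p
print(is_A.sum() / t)    # ~ p * (lam1 + lam2) = 1.2
\end{verbatim}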
"alphanum_fraction": 0.7601327329,
"avg_line_length": 47.404494382,
"ext": "tex",
"hexsha": "b3c00694e9d41fdb4156d1aaf0588753d9583549",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2018-02-17T13:30:49.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-02-17T13:30:49.000Z",
"max_forks_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "gmarciani/research",
"max_forks_repo_path": "performance-modeling/sec/poisson-processes.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "gmarciani/research",
"max_issues_repo_path": "performance-modeling/sec/poisson-processes.tex",
"max_line_length": 399,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "gmarciani/research",
"max_stars_repo_path": "performance-modeling/sec/poisson-processes.tex",
"max_stars_repo_stars_event_max_datetime": "2018-07-20T12:54:12.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-07-27T13:31:43.000Z",
"num_tokens": 1150,
"size": 4219
} |
\section{Related Works}
\paragraph{Programming language for robots and embedded systems.}
Many dialects of C, Giotto (LabVIEW), Lustre/Esterel, Stateflow, ROS, etc.
\paragraph{Dynamic systems.}
Bond graphs,
Modelica,
Simulink,
Wolfram's modelling tools
%most of these stuff have also been hybridized
\paragraph{Verification of hybrid systems.}
Hybrid Automata~\cite{DBLP:conf/lics/Henzinger96}\\
Verified integration of ODEs: Taylor models [Berz \& Makino]\\
Flow*\\
the group doing formal specification of CPS at Vanderbilt\\
\paragraph{Languages for CSG.}
OpenSCAD, coffeescad, pretty much any 3D CAD program (usually hidden by the GUI).
\paragraph{Kinetics and dynamics of robots.}
wheeled robots [Campion et al.] \\
some generic method ...
\documentclass[11pt]{article}
\usepackage{myreport}
\usepackage[american]{babel}
\usepackage[backend=biber, sorting=none]{biblatex}
\addbibresource{report.bib}
\begin{document}
\title{\vspace{-2.5em} Excitation and Detection Efficiency Under Light-Sheet
Illumination \vspace{-1.0em}} \author{Talon Chandler}
\date{\vspace{-1em}June 21, 2017\\ (Updated: June 26, 2017)\vspace{-1em}}
\maketitle
\section{Introduction}
In these notes I will calculate the excitation and detection efficiency of a
single fluorophore under polarized scanned light-sheet illumination. I will
start by finding the transverse and longitudinal fields at all positions in a
Gaussian beam using reasonable approximations. Next I will calculate the
excitation efficiency of a single fluorophore as the beam is scanned across the
fluorophore. I will combine these results with the results from previous notes
to write the complete forward model. Finally I will discuss the approximations
we may need to simplify reconstruction.
\section{Transverse and Longitudinal Fields In A Scanned Gaussian Beam}
The complex spatial electric field of a paraxial Gaussian beam propagating along the
$\mh{z}$ axis is \cite{nov}
\begin{align}
\mb{E}(x,y,z) &= \mb{A}\frac{w_0}{w(z)}\text{exp}\left\{{-\frac{(x^2+y^2)}{w^2(z)}}+i\left[kz - \eta(z) + \frac{k(x^2+y^2)}{2R(z)}\right]\right\}\label{eq:gauss}
\end{align}
where
\begin{alignat}{2}
\mb{A} &= \cos\phi_{\text{pol}}\mh{x} + \sin\phi_{\text{pol}}\mh{y}\qquad &&\text{is the input Jones vector in 3D},\\
w_0 &\approx \frac{2n}{k(\text{NA})}\qquad &&\text{is the beam waist radius,}\\
z_0 &= \frac{kw_0^2}{2} &&\text{is the Rayleigh range,}\\
k &= \frac{2\pi n}{\lambda}\qquad &&\text{is the wave number,}\\
w(z) &= w_0\sqrt{1+\frac{z^2}{z_0^2}}\qquad &&\text{is the beam radius,}\\
R(z) &= z\left(1+\frac{z_0^2}{z^2}\right)\qquad &&\text{is the wavefront radius,}\\
\eta(z) &= \text{arctan}\left(\frac{z}{z_0}\right)\qquad &&\text{is the phase correction.}
\end{alignat}
Equation \ref{eq:gauss} uses the paraxial approximation, so it can only be used
for beams with a waist that is significantly larger than the reduced wavelength
($w_0 \gg \lambda/n$). Notice that under the paraxial approximation the beam is uniformly polarized in the transverse plane.
We would like to calculate the longitudinal component of a Gaussian beam when
the beam waist approaches the reduced wavelength ($w_0 > \lambda/n$). One
approach is to numerically evaluate the Richards-Wolf diffraction integral
\cite{richards, biobeam}. This approach is time consuming and too
accurate for our needs---we only want to model weak longitudinal
fields. Instead, I will follow Novotny et al.\ \cite{nov} and use a longitudinal
correction to equation \ref{eq:gauss}.
As written, equation \ref{eq:gauss} doesn't satisfy Gauss' law
($\nabla \cdot \mb{E} = 0$), so we will add a longitudinal field to correct
it. If the input beam is polarized along the $\mh{x}$ axis
($\phi_{\text{pol}}=0$), then we can rearrange Gauss' law to relate the
longitudinal and transverse fields with
\begin{align}
E_z(x, y, z) = - \int \left[\frac{\partial}{\partial x} E_x(x,y,z)\right]dz.
\end{align}
Carnicer et al. \cite{carnicer} worked through this integral using the angular
spectrum representation and found that
\begin{align}
E_z(x,y,z) = -i\frac{2x}{kw_0^2}E_x(x,y,z) = -i\frac{x}{z_0}E_x(x,y,z)\label{eq:longit}.
\end{align}
Equation \ref{eq:longit} means that:
\begin{itemize}
\item There is no longitudinal polarization in the $\mh{y}-\mh{z}$ plane
because $E_z(0,y,z) = 0$.
\item There are longitudinal field lobes on both sides of the optical axis
along the transverse polarization direction.
\item The factor of $i$ means that the longitudinal fields are $90^{\circ}$ out
of phase with the transverse fields which means that the total field is
elliptically polarized off axis.
\item The longitudinal field strength is proportional to
$\lambda/w_0^2$---highly focused beams have the strongest longitudinal fields.
\end{itemize}
If the input is polarized along the $\mh{x}$ axis, the corrected 3D Jones vector is
\begin{align}
\mb{A}(x,y,z) = \mh{x}- i\frac{x}{z_0}\mh{z}.
\end{align}
If the input polarization is arbitrary then the corrected 3D Jones vector is
\begin{align}
\mb{A}(x,y,z) = \cos\phi_{\text{pol}}\mh{x} + \sin\phi_{\text{pol}}\mh{y} - i\frac{x\cos\phi_{\text{pol}} + y\sin\phi_{\text{pol}}}{z_0}\mh{z}.
\end{align}
If the beam is scanned along the $\mh{y}$ axis with velocity $v$ then the time
dependent 3D Jones vector is
\begin{align}
\mb{A}(x,y,z,t) = \cos\phi_{\text{pol}}\mh{x} + \sin\phi_{\text{pol}}\mh{y} - i\frac{x\cos\phi_{\text{pol}} + (y - vt)\sin\phi_{\text{pol}}}{z_0}\mh{z}\label{eq:scanned_j}
\end{align}
and the time-dependent electric field is
\begin{align}
\mb{E}(x,y,z,t) &= \mb{A}(x,y,z,t)\frac{w_0}{w(z)}\text{exp}\left\{{-\frac{(x^2+(y-vt)^2)}{w^2(z)}}+i\left[kz - \eta(z) + \frac{k(x^2+(y-vt)^2)}{2R(z)}\right]\right\}\label{eq:scanned_e}.
\end{align}
We will use equations \ref{eq:scanned_j} and \ref{eq:scanned_e} to calculate
the excitation efficiency of a single fluorophore. Notice that as the Rayleigh
range $z_0$ increases the longitudinal electric field decreases, so we can ignore the
longitudinal component when the beam is weakly focused.
\section{Scanned Beam Excitation Efficiency}
We define the excitation efficiency of a fluorophore as the fraction of
incident power that excites the fluorophore. If a fluorophore with absorption
dipole moment $\bs{\mu}_{\text{abs}}$ is placed in a time-independent complex
electric field $\mb{E}$, then the excitation efficiency is given by
\begin{align}
\eta_{\text{exc}} = \frac{|\bs{\mu}_{\text{abs}} \cdot \mb{E}(x,y,z)|^2}{|\mb{E}(x,y,z) |^2}.
\end{align}
If the fluorophore is placed in the path of a scanned laser then the electric
field becomes time dependent. If the laser is scanned quickly we would need to
consider the coherence of the electric field, but we will only consider slow
scanning here to simplify the calculation. Specifically, we require that
$v \ll w_0/\tau_c$---the scan velocity must be much less than the beam width
divided by the coherence time. In this case the excitation efficiency is
\begin{align}
\eta_{\text{exc}} = \frac{\intinf|\bs{\mu}_{\text{abs}} \cdot \mb{E}(x,y,z,t)|^2dt}{\intinf|\mb{E}(x,y,z,t)|^2dt}\label{eq:excitationx}
\end{align}
We plug equation \ref{eq:scanned_e} into equation \ref{eq:excitationx}, express
$\bs{\mu}_{\text{abs}}$ in spherical coordinates
\begin{align}
\bs{\mu}_{\text{abs}} = \cos\Phi\sin\Theta\mh{x} + \sin\Phi\sin\Theta\mh{y} + \cos\Theta\mh{z},
\end{align}
and evaluate the integrals (see Appendix for details) to write the
excitation efficiency as
\begin{align}
\eta_{\text{exc}} = \frac{\sin^2\Theta\cos^2(\Phi-\phi_{\text{pol}}) + \cos^2\Theta\frac{x^2\cos^2\phi_{\text{pol}} + \frac{1}{4}w^2(z)\sin^2\phi_{\text{pol}}}{z_0^2}}{1+\frac{x^2\cos^2\phi_{\text{pol}} + \frac{1}{4}w^2(z)\sin^2\phi_{\text{pol}}}{z_0^2}}.\label{eq:strong}
\end{align}
If the beam is weakly focused then we can ignore the longitudinal excitation and
the excitation efficiency simplifies to
\begin{align}
\eta_{\text{exc}} \approx \sin^2\Theta\cos^2(\Phi-\phi_{\text{pol}}).\label{eq:weak_approx}
\end{align}
\textbf{How good is the approximation in equation \ref{eq:weak_approx}?}
For longitudinal fluorophores equation \ref{eq:weak_approx} is a very bad
approximation---it predicts that a longitudinal fluorophore will not be excited
at all. We can't use a percentage error because the approximation completely
ignores the excitation of longitudinal dipoles.
Instead, we can compare the size of signals from longitudinal and transverse
fluorophores. If the signal from longitudinal fluorophores is less than the
signal from noise and background, then we can ignore the signal from
longitudinal fluorophores.
\begin{align}
\text{Excitation Ratio} &= \frac{\text{Max Longitudinal Excitation}}{\text{Max Transverse Excitation}}\\
&= \frac{\eta_{\text{exc}}(\Theta = 0)}{\eta_{\text{exc}}(\Theta = \pi/2, \Phi = \phi_{\text{pol}})}\\
&= \frac{x^2\cos^2\phi_{\text{pol}} + \frac{1}{4}w^2(z)\sin^2\phi_{\text{pol}}}{z_0^2}\label{eq:ratio}
\end{align}
At first glance equation \ref{eq:ratio} looks bleak---the excitation ratio grows
without bound in the $x$ direction which means that the longitudinal excitation
becomes a larger fraction of the total excitation as we move away from the plane
of the light-sheet. Fortunately, we only care about regions of the beam with a
high intensity, so we look at the intensity-weighted excitation ratio instead.
\begin{align}
\text{Intensity-Weighted Excitation Ratio} &= \frac{w_0}{w(z)}e^{-\frac{2x^2}{w^2(z)}}\frac{x^2\cos^2\phi_{\text{pol}} + \frac{1}{4}w^2(z)\sin^2\phi_{\text{pol}}}{z_0^2}\label{eq:intratio}
\end{align}
We can interpret the intensity-weighted excitation ratio as the fraction of the
maximum signal (created by a transverse fluorophore at the origin) that we
ignore by ignoring longitudinal excitation. Figure \ref{fig:error} shows that
the intensity-weighted excitation ratio is \textless 2\% for the imaging parameters used
in Wu et al.\ \cite{wu2013}.
\fig{../figures/error.pdf}{1.0}{Intensity-weighted excitation ratio as a
function of position using the parameters in Wu et al.\ \cite{wu2013}:
$w_0 = 1.2\ \mu$m, $z_0 = 9\ \mu$m, $\lambda = 488$ nm, FOV$\approx$ 80$\times$80
$\mu$m${}^2$. The maximum values are at the edge of the FOV and are \textless 2\%.}{error}
As a rough heuristic, if the fraction of the signal from noise and background is
greater than the intensity-weighted excitation ratio we can justifiably ignore
the longitudinal component.
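As a sanity check, the maximum of equation \ref{eq:intratio} over the FOV can be evaluated numerically. The following sketch (Python/NumPy; the grid resolution is arbitrary) uses the parameters listed in Figure \ref{fig:error}:
\begin{verbatim}
import numpy as np

w0, z0, fov = 1.2, 9.0, 80.0                # um
x = np.linspace(-fov/2, fov/2, 801)
z = np.linspace(-fov/2, fov/2, 801)
X, Z = np.meshgrid(x, z)
wz = w0 * np.sqrt(1 + (Z / z0)**2)

for phi in (0.0, np.pi/2):                  # x- and y-polarized input
    long2 = (X*np.cos(phi))**2 + 0.25*(wz*np.sin(phi))**2
    ratio = (w0/wz) * np.exp(-2*X**2/wz**2) * long2 / z0**2
    print(ratio.max())   # ~0.015 and ~0.02: at most about 2%
\end{verbatim}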
Figure \ref{fig:max-error} shows the maximum intensity-weighted excitation ratio for an
80$\times$80 $\mu$m${}^2$ FOV as a function of $w_0$. The maximum
intensity-weighted excitation ratio decreases as $w_0$ increases because wider
beams have smaller longitudinal components.
\fig{../figures/max-error.pdf}{0.5}{Maximum intensity-weighted excitation ratio (see equation \ref{eq:intratio}) for an 80$\times$80 $\mu$m${}^2$ FOV with $\lambda = 488$ nm.}{max-error}
\section{Detection Efficiency}
If we detect fluorescence in wide-field mode with an orthogonal arm and no
polarizer then the detection efficiency is
\begin{align}
\eta_{\text{det}} = 2A + 2B\sin^2\Theta'
\end{align}
where
\begin{align}
A &= \frac{1}{4} - \frac{3}{8}\cos\alpha + \frac{1}{8}\cos^3\alpha,\\
B &= \frac{3}{16}\cos\alpha - \frac{3}{16}\cos^3\alpha,
\end{align}
$\alpha = \text{arcsin(NA}/n)$ is the detection cone half angle, and $\Theta'$
is the angle between the dipole axis and the detection optical axis. See the
2017-06-09 notes for the relationship between $\Theta'$ and $\Theta,\Phi$. See
\cite{fourkas} and the 2017-04-25 notes for the derivation of the detection
efficiencies and additional expression for detection arms that use a polarizer.
\section{Orientation Forward Model}
The detected intensity is proportional to the product of the excitation and
detection efficiencies. Using a weakly focused excitation beam and an
unpolarized detection arm gives us the following model
\begin{align}
I_{\phi_{\text{pol}}} &= I_{\text{tot}}\sin^2\Theta\cos^2(\Phi - \phi_{\text{pol}})(2A+2B\sin^2\Theta')\label{eq:forward}
\end{align}
where $I_{\text{tot}}$ is the intensity we would collect if we had an excitation and detection efficiency of 1.
\section{Discussion}
Equation \ref{eq:strong} shows that for strongly focused beams the excitation
efficiency is a function of position. This couples the orientation and location
of the fluorophore and complicates our reconstruction. For now we'll use only
weakly focused beams so that we can ignore the longitudinal component.
To ignore the longitudinal component, we require that the fraction of the signal
from noise and background is greater than the intensity weighted excitation
ratio (\textless 2\% with current imaging parameters). We'll need to be careful about
longitudinal excitation if we want to use beams that are more strongly focused.
Under the weak-focusing approximation the orientation and position of
fluorophores are decoupled. This will allow us to split the reconstruction into
two steps: (1) estimate the position of the fluorophores using unpolarized frames
(or the sum of orthogonally polarized frames) with established reconstruction
techniques, then (2) estimate the orientation of the fluorophores using polarized
frames and equation \ref{eq:forward}.
Note that we are working in a different regime than Agrawal et al.\ \cite{agrawal}. They
are considering imaging systems with better resolution than ours, so the
position and orientation are coupled and must be estimated together. At our
resolution, the position and orientation are decoupled so we can estimate them
separately.
\section{References}
\setlength\biblabelsep{0.025\textwidth}
\printbibliography[heading=none]
\pagebreak
\section{Appendix}
We will evaluate the following integrals to find the excitation efficiency
\begin{align}
\eta_{\text{exc}}(x,y,z) = \frac{\intinf|\bs{\mu}_{\text{abs}} \cdot \mb{E}(x,y,z,t)|^2dt}{\intinf|\mb{E}(x,y,z,t)|^2dt}\label{eq:excitation}
\end{align}
where
\begin{align}
\bs{\mu}_{\text{abs}} &= \cos\Phi\sin\Theta\mh{x} + \sin\Phi\sin\Theta\mh{y} + \cos\Theta\mh{z}\\
\mb{E}(x,y,z,t) &= \mb{A}(x,y,z,t)\frac{w_0}{w(z)}\text{exp}\left\{{-\frac{(x^2+(y-vt)^2)}{w^2(z)}}+i\left[kz - \eta(z) + \frac{k(x^2+(y-vt)^2)}{2R(z)}\right]\right\}\\
\mb{A}(x,y,z,t) &= \cos\phi_{\text{pol}}\mh{x} + \sin\phi_{\text{pol}}\mh{y} - i\frac{x\cos\phi_{\text{pol}} + (y - vt)\sin\phi_{\text{pol}}}{z_0}\mh{z}.\label{eq:scanned_jones}
\end{align}
We'll need the following facts
\begin{align}
\intinf e^{-ax^2}dx &= \sqrt{\frac{\pi}{a}}\\
\intinf xe^{-ax^2}dx &= 0 \label{eq:odd}\\
\intinf x^2e^{-ax^2}dx &= \frac{1}{2a}\sqrt{\frac{\pi}{a}}.
\end{align}
The numerator is
\begin{align}
= &\intinf|\bs{\mu}_{\text{abs}} \cdot \mb{E}(x,y,z,t)|^2dt\\
\begin{split}
= &\intinf\Bigg |\left[\cos\Phi\sin\Theta\cos\phi_{\text{pol}} + \sin\Phi\sin\Theta\sin\phi_{\text{pol}} - i\cos\Theta\frac{x\cos\phi_{\text{pol}} + (y - vt)\sin\phi_{\text{pol}}}{z_0}\right]\\
&\ \ \ \ \ \ \ \ \frac{w_0}{w(z)}\text{exp}\left\{{-\frac{(x^2+(y-vt)^2)}{w^2(z)}}+i\left[kz - \eta(z) + \frac{k(x^2+(y-vt)^2)}{2R(z)}\right]\right\}\Bigg |^2dt.
\end{split}\\
\intertext{After changing variables $y' = y - vt$ and moving constants outside the integral we get}
= &\frac{w_0^2}{w^2(z)}e^{-\frac{2x^2}{w^2(z)}}\intinf\left|\left[\cos\Phi\sin\Theta\cos\phi_{\text{pol}} + \sin\Phi\sin\Theta\sin\phi_{\text{pol}} - i\cos\Theta\frac{x\cos\phi_{\text{pol}} + y'\sin\phi_{\text{pol}}}{z_0}\right]\right|^2e^{-\frac{2y'^2}{w^2(z)}}dy'.
\intertext{After expanding the square brackets we get}
= &\frac{w_0^2}{w^2(z)}e^{-\frac{2x^2}{w^2(z)}}\intinf \Bigg [(\cos\Phi\sin\Theta\cos\phi_{\text{pol}} + \sin\Phi\sin\Theta\sin\phi_{\text{pol}})^2 +\cos^2\Theta\frac{(x\cos\phi_{\text{pol}} +y'\sin\phi_{\text{pol}})^2}{z_0^2} \Bigg ]e^{-\frac{2y'^2}{w^2(z)}}dy'\\
= &\frac{w_0^2}{w(z)}e^{-\frac{2x^2}{w^2(z)}}\sqrt{\frac{\pi}{2}}\Bigg [\sin^2\Theta\cos^2(\Phi - \phi_{\text{pol}}) + \cos^2\Theta\frac{x^2\cos^2\phi_{\text{pol}} + \frac{1}{4}w^2(z)\sin^2\phi_{\text{pol}}}{z_0^2} \Bigg ].
\end{align}
The denominator is
\begin{align}
= &\intinf|\mb{E}(x,y,z,t)|^2dt\\
= &\frac{w_0^2}{w^2(z)}e^{-\frac{2x^2}{w^2(z)}}\intinf\left|\left[\cos\phi_{\text{pol}}\mh{x} + \sin\phi_{\text{pol}}\mh{y} - i\frac{x\cos\phi_{\text{pol}} + y'\sin\phi_{\text{pol}}}{z_0}\mh{z}\right]\right|^2e^{-\frac{2y'^2}{w^2(z)}}dy'\\
= &\frac{w_0^2}{w^2(z)}e^{-\frac{2x^2}{w^2(z)}}\intinf\left[1 + \frac{(x\cos\phi_{\text{pol}} + y'\sin\phi_{\text{pol}})^2}{z^2_0}\right]e^{-\frac{2y'^2}{w^2(z)}}dy'\\
= &\frac{w_0^2}{w(z)}e^{-\frac{2x^2}{w^2(z)}}\sqrt{\frac{\pi}{2}}\left[1 + \frac{x^2\cos^2\phi_{\text{pol}} + \frac{1}{4}w^2(z)\sin^2\phi_{\text{pol}}}{z^2_0}\right].
\end{align}
The final excitation efficiency is
\begin{align*}
\eta_{\text{exc}}(x,y,z) = \frac{\sin^2\Theta\cos^2(\Phi - \phi_{\text{\text{pol}}}) + \cos^2\Theta\frac{x^2\cos^2\phi_{\text{pol}} + \frac{1}{4}w^2(z)\sin^2\phi_{\text{pol}}}{z_0^2}}{1 + \frac{x^2\cos^2\phi_{\text{pol}} + \frac{1}{4}w^2(z)\sin^2\phi_{\text{pol}}}{z^2_0}}.
\end{align*}
\end{document}
\subsection{Proposed approach}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The main purpose of this work is to investigate the feasibility of deep learning approaches for delamination identification in CFRP materials, utilizing only frames of the full wavefield propagation of the guided waves.
Accordingly, two deep learning models based on time-data sequences were developed.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure} [!h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=.2\textheight]{Fully_ConvLSTM2d_MODEL_updated.png}
\caption{Convolutional LSTM model}
\label{fig:convlstm_model}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=.2\textheight]{RNN_LSTM_MODEL_updated.png}
\caption{Time distributed AE model}
\label{fig:AE_convlstm}
\end{subfigure}
\caption{The architecture of the proposed deep learning models.}
\label{fig:proposed_models}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The developed models presented in Fig.~\ref{fig:proposed_models} follow a many-to-one sequence prediction scheme: they take \(n\) frames representing the propagation of the waves through time and their interaction with the damage, extract the damage features, and finally predict the delamination location, shape and size in a single output image.
The first proposed model, presented in Fig.~\ref{fig:convlstm_model},
consists of three ConvLSTM layers with filter sizes of 10, 5, and 10 respectively.
The kernel size of the ConvLSTM layers was set to (\(3\times3\)) with a stride of 1, and padding was set to ``same'', which keeps the output dimensions equal to the input dimensions when the stride is 1.
Furthermore, a \(\tanh\) (hyperbolic tangent) activation function was used in the ConvLSTM layers, which outputs values in a range between \(-1\) and \(1\).
Moreover, batch normalization~\cite{Santurkar2018} was applied after each ConvLSTM layer.
The final layer is a simple 2D convolutional layer followed by a sigmoid activation function, which outputs values in a range between \(0\) and \(1\) to indicate the delamination probability.
In this model, we applied binary cross-entropy as the objective function.
Moreover, the Adadelta~\cite{zeiler2012adadelta} optimization technique was utilised, which performs back-propagation through time (BPTT)~\cite{goodfellow2016deep}.
Since the sigmoid activation function produces probability values between \(0\) and \(1\), a threshold value must be chosen to classify the output into damaged and undamaged classes.
Accordingly, the threshold value was set to \(0.5\).
The reason for choosing this value for the sigmoid activation function is explained in our previous research work~\cite{ijjeh2021full}.
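The following is a minimal sketch of this architecture in TensorFlow/Keras; the number of frames and the frame size are illustrative assumptions, not necessarily the values used in the experiments.
\begin{verbatim}
from tensorflow.keras import layers, models

n_frames, h, w = 24, 256, 256   # hypothetical input dimensions

model = models.Sequential([
    # three ConvLSTM2D layers: 10, 5 and 10 filters, 3x3 kernels,
    # stride 1, "same" padding, tanh activations
    layers.ConvLSTM2D(10, (3, 3), padding='same', activation='tanh',
                      return_sequences=True,
                      input_shape=(n_frames, h, w, 1)),
    layers.BatchNormalization(),
    layers.ConvLSTM2D(5, (3, 3), padding='same', activation='tanh',
                      return_sequences=True),
    layers.BatchNormalization(),
    # many-to-one: only the last state is passed on
    layers.ConvLSTM2D(10, (3, 3), padding='same', activation='tanh',
                      return_sequences=False),
    layers.BatchNormalization(),
    # final 2D convolution with a sigmoid giving per-pixel
    # delamination probabilities (thresholded at 0.5)
    layers.Conv2D(1, (3, 3), padding='same', activation='sigmoid'),
])
model.compile(optimizer='adadelta', loss='binary_crossentropy')
\end{verbatim}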
In the second model, presented in Fig.~\ref{fig:AE_convlstm}, we applied an autoencoder (AE), a well-known technique for feature extraction.
The idea of an AE is to compress the input data in the encoding process and then learn how to reconstruct it from the reduced encoded representation (latent space) into a representation that is as close to the original input as possible.
Accordingly, an AE reduces data dimensions by learning how to discard the noise in the data.
In this model, we investigated the use of an AE to process a sequence of input frames in order to perform an image segmentation operation.
Accordingly, a Time Distributed layer, presented in Fig.~\ref{fig:TD}, was introduced into the model; it feeds the input frames into the AE one at a time, keeping the frames independent of one another.
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{Time_ditributed_layer.png}
\caption{Flow of input frames using Time distributed layer}
\label{fig:TD}
\end{figure}
The AE consists of three parts: the encoder, the bottleneck, and the decoder.
The encoder is responsible for learning how to reduce the input dimensions and compress the input data into an encoded representation.
In Fig.~\ref{fig:AE_convlstm}, the encoder part consists of four levels of downsampling.
The purpose of having different scale levels is to extract feature maps from the input image at different scales.
Every level of the encoder consists of two 2D convolution operations followed by batch normalization, after which dropout is applied.
Furthermore, at the end of each level a max-pooling operation is applied to reduce the dimensionality of the inputs.
The bottleneck, presented in Fig.~\ref{fig:AE_convlstm}, holds the lowest-dimensional representation of the input data; it consists of two 2D convolution operations followed by batch normalization.
The decoder part, presented in Fig.~\ref{fig:AE_convlstm}, is responsible for learning how to restore the original dimensions of the input.
The decoder part consists of two 2D convolutional operations followed by batch normalization and dropout, and an upsampling operation is applied at the end of each decoder level to retrieve the dimensions of its inputs.
Moreover, to enhance the learning performance of the decoder, skip connections linking each encoder level with the corresponding decoder level were added.
The output sequences of the decoder part are fed into a ConvLSTM2D layer that is utilized to learn long-term spatiotemporal features.
Finally, a 2D convolution operation is applied to the output of the ConvLSTM2D layer, followed by a sigmoid activation function.
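A minimal sketch of this second architecture in TensorFlow/Keras follows; for brevity only two of the four downsampling levels are shown, and the filter counts, dropout rate and input dimensions are illustrative assumptions.
\begin{verbatim}
from tensorflow.keras import layers, models

n_frames, h, w = 24, 256, 256
frames = layers.Input(shape=(n_frames, h, w, 1))

def level(x, filters):
    # two time-distributed 2D convolutions + batch norm + dropout
    x = layers.TimeDistributed(layers.Conv2D(
        filters, 3, padding='same', activation='relu'))(x)
    x = layers.TimeDistributed(layers.Conv2D(
        filters, 3, padding='same', activation='relu'))(x)
    x = layers.BatchNormalization()(x)
    return layers.Dropout(0.2)(x)

e1 = level(frames, 16)                                 # encoder
p1 = layers.TimeDistributed(layers.MaxPooling2D())(e1)
e2 = level(p1, 32)
p2 = layers.TimeDistributed(layers.MaxPooling2D())(e2)

b = level(p2, 64)                                      # bottleneck

u2 = layers.TimeDistributed(layers.UpSampling2D())(b)  # decoder
d2 = level(layers.Concatenate()([u2, e2]), 32)         # skip link
u1 = layers.TimeDistributed(layers.UpSampling2D())(d2)
d1 = level(layers.Concatenate()([u1, e1]), 16)         # skip link

# ConvLSTM2D aggregates the decoded sequence (many-to-one), then a
# 2D convolution with a sigmoid gives the damage-probability map.
agg = layers.ConvLSTM2D(10, 3, padding='same',
                        return_sequences=False)(d1)
out = layers.Conv2D(1, 3, padding='same',
                    activation='sigmoid')(agg)

model = models.Model(frames, out)
model.compile(optimizer='adadelta', loss='binary_crossentropy')
\end{verbatim}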
"alphanum_fraction": 0.7675929267,
"avg_line_length": 87.9682539683,
"ext": "tex",
"hexsha": "76ebbec34a279c3cb0e29cb3adf06b79f60f49a7",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "9fb0ad6d5e6d94531c34778a66127e5913a3830c",
"max_forks_repo_licenses": [
"RSA-MD"
],
"max_forks_repo_name": "IFFM-PAS-MISD/aidd",
"max_forks_repo_path": "reports/journal_papers/ConvLSTM Paper/Model_architecture.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9fb0ad6d5e6d94531c34778a66127e5913a3830c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"RSA-MD"
],
"max_issues_repo_name": "IFFM-PAS-MISD/aidd",
"max_issues_repo_path": "reports/journal_papers/ConvLSTM Paper/Model_architecture.tex",
"max_line_length": 372,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "9fb0ad6d5e6d94531c34778a66127e5913a3830c",
"max_stars_repo_licenses": [
"RSA-MD"
],
"max_stars_repo_name": "IFFM-PAS-MISD/aidd",
"max_stars_repo_path": "reports/journal_papers/ConvLSTM Paper/Model_architecture.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-03T05:36:07.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-03-03T05:36:07.000Z",
"num_tokens": 1252,
"size": 5542
} |
\documentclass[11pt]{article}
\usepackage{format}
\newtheorem{theorem}{Theorem}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{float}
\usepackage[toc,page]{appendix}
\usepackage[font=small,skip=0pt]{caption}
\usepackage{url}
\usepackage[numbers]{natbib}
\begin{document}
\begin{center}
\underline{\huge A Report on Non-Local Means Denoising}
\end{center}
\section{A description of the non-local means denoising algorithm}
\cite{schmid_cvpr_2005} Non-local means denoising uses samples from all around the image, unlike conventional denoising, which only looks at the area immediately around the given pixel when estimating its true colour. It does this because patterns and shapes are repeated within images, meaning that there will likely be an area somewhere else in the image that looks very similar to the patch around the pixel being corrected. By finding these areas and averaging the pixels in similar areas, the noise is reduced as the random noise converges around the true value.\\
\\
Non-local means therefore looks at many patches throughout the image and compares the similarity of each with the patch around the pixel being denoised. Each comparison yields a weight for the patch examined, and these weights (along with the colour of the pixel at the centre of each patch) are then used in the calculation of the colour of the pixel being denoised.
\section{Various implementations of the algorithm and their efficiency}
\subsection{Pixelwise}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.3\linewidth]{drawing}
\end{center}
\caption{Visualisation of pixelwise denoising}
\end{figure}
\cite{buades_non-local_2011}Taking an image u and a pixel in it that you want to denoise, p, you first need to decide on a patch size, given by r, as the dimensions of the patch (blue) are $(2r+1)\times(2r+1)$. You then look at all the other pixels, $q\in Q$; as it is computationally intensive to do these calculations, specifying a research zone (red) makes the processing faster because fewer comparisons have to be done. For each pixel q, you take its patch of the same size as the patch of p and compare each pixel in the patch of q with the corresponding pixel in the patch of p. This gives the similarity between the patch around p and the patch around q, and q is assigned a weighting to describe it. A weighted average of the centre-pixel colours, using these weightings, then provides a more accurate estimate of the colour of p.
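The following is a minimal sketch of this procedure in Python/NumPy for a greyscale image (here f is the patch half-width and r the research-zone half-width, matching the notation of the patchwise section below; this is illustrative, not an optimised implementation):
\begin{verbatim}
import numpy as np

def nl_means_pixel(u, i, j, f=3, r=10, h=5.0):
    """Estimate the denoised value of pixel (i, j) of image u."""
    pad = f + r
    up = np.pad(u.astype(float), pad, mode='reflect')
    i, j = i + pad, j + pad
    P = up[i-f:i+f+1, j-f:j+f+1]              # patch around p
    num = den = 0.0
    for qi in range(i - r, i + r + 1):        # research zone
        for qj in range(j - r, j + r + 1):
            Q = up[qi-f:qi+f+1, qj-f:qj+f+1]  # patch around q
            d2 = np.mean((P - Q) ** 2)        # patch (dis)similarity
            wgt = np.exp(-d2 / h**2)          # weight of pixel q
            num += wgt * up[qi, qj]
            den += wgt
    return num / den
\end{verbatim}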
\subsection{Patchwise}
\cite{buades_non-local_2011} The main way in which patchwise differs from pixelwise is in the formulation of the weighting, as you can see below
\begin{figure}[H]
\centering
\begin{subfigure}{.45\textwidth}
\centering
$$C(p)=\sum_{q \in B(p, r)} w(p, q)$$
\caption{Pixelwise}
\label{fig:pixelwise}
\end{subfigure}%
\begin{subfigure}{.45\textwidth}
\centering
$$
C=\sum_{Q=Q(q, f) \in B(p, r)} w(B, Q)
$$
\caption{Patchwise}
\label{fig:patchwise}
\end{subfigure}
\end{figure}
By calculating weights for whole patches instead of individual pixels, we make one calculation per patch, therefore not needing to do $(2f+1)^2$ calculations per pixel, providing a large increase in performance. The overall quality of the two methods is the same, and so the patchwise method is preferred as it gives an improvement in speed with no drawbacks.
\section{The strengths and limitations of non-local means compared to other denoising algorithms}
\subsection{Method noise}
\cite{buades_review_2005}\textbf{Definition (method noise)}. Let u be a (not necessarily noisy) image and $D_h$ a denoising operator depending on h. Then we define the method noise of u as the image difference
$$n(D_h,u)=u-D_h(u)$$
This method noise should be as similar to white noise as possible. The image below is sourced from \cite{buades_review_2005}
\begin{center}
\includegraphics[scale=0.4]{method_noise}
\end{center}
From left to right and from top to bottom: original image,Gaussian convolution, mean curvature motion, total variation, Tadmor–Nezzar–Vese iterated total variation, Osher et al. total variation, neighborhood filter, soft TIWT, hard TIWT, DCT empirical Wiener filter, and the NL-means algorithm.\\
\\
You can see that the NL means algorithm is closest to white noise, as it is very difficult to make out the original image from the method noise, and so is the best in this area
\subsection{Mean square error}
\cite{machine_2018}The mean square error measures the average squared difference between the estimated values and what is estimated. In images this acts as a measure of how far from the true image the denoised image is. These results are taken from \cite{buades_review_2005}
\begin{center}
\includegraphics[scale=0.7]{"mean"}
\end{center}
Here it can be seen that the NL-means algorithm gives images that are closest to the true image, and so performs best for image denoising under this measurement.
\newpage
\section{The influence of the algorithmic parameters on the output}
In the following images I am changing the values of h, the template window size and the search window size, from a standard setting of h=5, template window size=7 and search window size=21. I will adjust each one in turn to show the differences yielded by changing them.
\begin{figure}[H]
\centering
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{h2}
\caption{h=2}
\label{fig:h2}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{h10}
\caption{h=10}
\label{fig:h10}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{template2}
\caption{template width=2}
\label{fig:template2}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{template15}
\caption{template width=10}
\label{fig:template15}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{search10}
\caption{search window size=10}
\label{fig:search10}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{search30}
\caption{search window size=30}
\label{fig:search30}
\end{subfigure}
\caption{The effects of adjusting the value of the various parameters}
\label{fig:parameters}
\end{figure}
By adjusting the value of h you get a large change in the amount of smoothing, although a large amount of noise is still present. Increasing the value of h increases the PSNR from 28.60 to 29.66.\\
\\
The effects of adjusting the template width are much more subtle than those of adjusting h; it can be noticed in the overhead wires that a larger template width has reduced the detail. An increase in the template width yields a small reduction in the PSNR from 28.68 to 28.51.\\
\\
The effects of changing the search window are also very subtle, and again can only be noticed fully in the overhead wires. An increase in the search window yields a marginal increase in the PSNR from 28.51 to 28.52.
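Such a parameter sweep can be reproduced with OpenCV's implementation of the algorithm, whose parameters match those above (a sketch; the file name is a placeholder, and note that OpenCV recommends odd window sizes):
\begin{verbatim}
import cv2

noisy = cv2.imread('noisy.png')

for h, template, search in [(2, 7, 21), (10, 7, 21),
                            (5, 2, 21), (5, 10, 21),
                            (5, 7, 10), (5, 7, 30)]:
    out = cv2.fastNlMeansDenoisingColored(noisy, None,
                                          h, h, template, search)
    cv2.imwrite('out_h%d_t%d_s%d.png' % (h, template, search), out)
\end{verbatim}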
\section{Modifications and extensions of the algorithm that have been proposed in the literature}
\subsection{Testing stationarity}
\cite{buades_review_2005} One proposed modification is one to test stationarity. The original algorithm relies on the following conditional expectation theorem:
\begin{theorem}
(conditional expectation theorem)\\
Let $Z_j=\{X_j,Y_j\}$ for $j=1,2,...$ be a strictly stationary and mixing process. For $i\in I$, let $X$ and $Y$ be distributed as $X_i$ and $Y_i$. Let J be a compact subset $J\subset \mathbb{R}^p$ such that
$$\inf\{f_X(x);x\in J\}>0$$
\end{theorem}
However, this is not true everywhere: each image may contain exceptional, non-repeated structures, which would be blurred out by the algorithm, so the algorithm should have a detection phase and give special treatment to nonstationary points. In order to use this strategy a good estimate of the mean and variance at every pixel is needed; fortunately, the non-local means algorithm converges to the conditional mean, and the variance can be calculated using $EX^2-(EX)^2$.
\newpage
\subsection{Multiscale version}
Another improvement is one to speed up the algorithm; this has been proposed using a multiscale version.
\begin{enumerate}
\item Zoom out the image $u_0$ by a factor of 2. This gives the new image $u_1$
\item Apply the NL means algorithm to $u_1$, so that with each pixel of $u_1$, a list of windows centered in $(i_1,j_1)...(i_k,j_k)$ is associated
\item For each pixel of $u_0$, $(2i+r,2j+s)$ with $r,s\in \{0,1\}$, we apply the NL means algorithm. However instead of comparing with all the windows in the search zone, we just compare with the 9 neighbouring windows of each pixel
\item This procedure can be applied in a pyramid fashion
\end{enumerate}
\section{Applications of the original algorithm and its extensions}
\subsection{Medical Imaging}
\cite{zhang_applications_2017} It has been proposed that non-local means can be used in X-ray imaging, allowing for a reduction of noise in the scans and making them easier to interpret. In CT scans a higher dose can be given to obtain a clearer image, but this is more dangerous; by applying the NL-means algorithm a lower dose can be given for the same clarity. This application benefits from the improvement stated above to test stationarity, as the noise and streak artifacts are non-stationary. The original algorithm was also not good at removing the streak artifacts in low-flux CT images resulting from photon starvation. However, by applying one-dimensional nonlinear diffusion in the stationary wavelet domain before applying the non-local means algorithm, these could be reduced.
\subsection{Video Denoising}
\cite{ali_recursive_2017} NLM can also be applied to video denoising, where it can be adapted to improve the denoising by using data from sequential frames. In the implementation proposed in the paper, the current input frame and the prior output frame are used to form the current output frame. The measurements made in the paper fail to show that this algorithm is an improvement over current algorithms; however, the algorithm does have much better subjective visual performance.
\bibliographystyle{unsrtnat}
\bibliography{bibliography}
\newpage
\begin{appendices}
Image \ref{fig:h2}\\
\includegraphics[width=0.8\linewidth]{h2}\\
Image \ref{fig:h10}\\
\includegraphics[width=0.8\linewidth]{h10}
\newpage
Image \ref{fig:template2}\\
\includegraphics[width=0.8\linewidth]{template2}\\
Image \ref{fig:template15}\\
\includegraphics[width=0.8\linewidth]{template15}
\newpage
Image \ref{fig:search10}\\
\includegraphics[width=0.8\linewidth]{search10}\\
Image \ref{fig:search30}\\
\includegraphics[width=0.8\linewidth]{search30}\\
\end{appendices}
\end{document}
"alphanum_fraction": 0.7777472779,
"avg_line_length": 64.2882352941,
"ext": "tex",
"hexsha": "cc7b9daca63066d27889d4e5711caa24221b9701",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5222974280e292dfbe63d72eba1cad80a15f0c19",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "samrobbins85/SM-IP-Coursework",
"max_forks_repo_path": "Essay/Image Processing Assignment.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5222974280e292dfbe63d72eba1cad80a15f0c19",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "samrobbins85/SM-IP-Coursework",
"max_issues_repo_path": "Essay/Image Processing Assignment.tex",
"max_line_length": 868,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5222974280e292dfbe63d72eba1cad80a15f0c19",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "samrobbins85/SM-IP-Coursework",
"max_stars_repo_path": "Essay/Image Processing Assignment.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2856,
"size": 10929
} |
\documentclass[12pt,twoside,notitlepage]{report}
\usepackage{a4}
\usepackage{verbatim}
\usepackage{graphicx}
\usepackage{listings}
\raggedbottom % try to avoid widows and orphans
\setlength{\topskip}{1\topskip plus 7\baselineskip}
\sloppy
\clubpenalty1000%
\widowpenalty1000%
\addtolength{\oddsidemargin}{6mm} % adjust margins
\addtolength{\evensidemargin}{-8mm}
\renewcommand{\baselinestretch}{1.1} % adjust line spacing to make
% more readable
\setlength{\parskip}{1.3ex plus 0.2ex minus 0.2ex}
\setlength{\parindent}{0pt}
\setcounter{secnumdepth}{5}
\setcounter{tocdepth}{5}
\lstset{
basicstyle=\small,
language={[Sharp]C},
tabsize=4,
numbers=left,
frame=none,
frameround=tfft
}
\begin{document}
\bibliographystyle{plain}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Title
\pagestyle{empty}
\hfill{\LARGE \bf David Srbeck\'y}
\vspace*{60mm}
\begin{center}
\Huge
{\bf .NET Decompiler} \\
\vspace*{5mm}
Part II of the Computer Science Tripos \\
\vspace*{5mm}
Jesus College \\
\vspace*{5mm}
May 16, 2008
\end{center}
\cleardoublepage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Proforma, table of contents and list of figures
\setcounter{page}{1}
\pagenumbering{roman}
\pagestyle{plain}
\section*{Proforma}
{\large
\begin{tabular}{ll}
Name: & \bf David Srbeck\'y \\
College: & \bf Jesus College \\
Project Title: & \bf .NET Decompiler \\
Examination: & \bf Part II of the Computer Science Tripos, 2008 \\
Word Count: & \bf 11949\footnotemark[1] \\
Project Originator: & David Srbeck\'y \\
Supervisor: & Alan Mycroft \\
\end{tabular}
}
\footnotetext[1]{This word count was computed
by {\tt detex diss.tex | tr -cd '0-9A-Za-z $\tt\backslash$n' | wc -w}
}
\stepcounter{footnote}
\subsection*{Original Aim of the Project}
The aim of this project was to create a .NET decompiler.
That is, to create a program that translates .NET executables
back to C\# source code.
%This is essentially the opposite of a compiler.
The concrete requirement was to decompile a
quick-sort algorithm. Given an implementation of a quick-sort
algorithm in executable form, the decompiler was required
to re-create semantically equivalent C\# source code.
Although allowed to be rather cryptic, the generated source code
had to compile and work without any additional modifications.
\subsection*{Work Completed}
Implementation of the quick-sort algorithm decompiles successfully.
Many optimizations were implemented and perfected to the
point where the generated source code for the quick-sort
algorithm is practically identical to the original%
\footnote{See pages \pageref{Original quick-sort} and \pageref{Decompiled quick-sort} in the evaluation chapter.}.
In particular, the source code compiles back to identical .NET CLR code.
Further challenges were sought in more complex programs.
The decompiler deals with several issues which did not
occur in the quick-sort algorithm. It also implements some
optimizations that are only applicable to the more complex programs.
\subsection*{Special Difficulties}
None.
\newpage
\section*{Declaration}
I, David Srbeck\'y of Jesus College, being a candidate for Part II of the Computer
Science Tripos, hereby declare
that this dissertation and the work described in it are my own work,
unaided except as may be specified below, and that the dissertation
does not contain material that has already been used to any substantial
extent for a comparable purpose.
\bigskip
\leftline{Signed}
\medskip
\leftline{Date}
\cleardoublepage
\tableofcontents
\listoffigures
\cleardoublepage % just to make sure before the page numbering
% is changed
\setcounter{page}{1}
\pagenumbering{arabic}
\pagestyle{headings}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 3 Pages
\chapter{Introduction}
\section{The goal of the project}
% Decompiler
% .NET
The goal of this project is to create a .NET decompiler.
Decompiler is a tool that translates machine code back to source
code. That is, it does the opposite of a compiler -- it takes the
executable file and it tries to recreate the original source code.
In general, decompilation can be extremely difficult or even impossible.
Therefore this project focuses on something slightly simpler --
decompilation of .NET executables.
The advantage of .NET executables is that they consist of processor-independent
bytecode which is easier to decompile than
traditional machine code because
the instructions are more high-level and the code is less optimized.
The .NET executables also preserve a lot of metadata about the code like
type information and method names.
Note that despite its name, the \emph{.NET Framework}
is unrelated to networking or the Internet; it is basically the
equivalent of the \emph{Java Development Kit}.
To succeed, the produced decompiler is required to successfully decompile
a console implementation of a quick-sort algorithm back to C\# source code.
\section{Decompilation}
Decompilation is the opposite of compilation. It takes the executable
file and attempts to recreate the original source code.
Despite doing the reverse, decompilation is very similar to
compilation. This is because both compilation and decompilation do
in essence the same thing -- translate a program from one form to
another and then optimize the final form.
Because of this, many analyses and optimizations used in compilation
are also applicable in decompilation.
Decompilation may not always be possible. In particular,
self-modifying code is very problematic.
\section{Uses of decompilation}
\begin{itemize}
\item Recovery of lost source code.
\item Reverse engineering of programs that ship with no
source code. This, however, might be a somewhat controversial use
due to legal issues.
\item Interoperability.
\item Security analysis of programs.
\item Error correction or minor improvements.
\item Porting of executable to other platform.
\item Translation between languages.
Source code in some obscure language can be compiled into an
executable which can in turn be decompiled into C\#.
\item Debugging of deployed systems. A deployed system
might not ship with source code or it might be difficult
to `link' it to the source code because it is optimized.
Also note that it might not be acceptable to restart
the running system because the issue is not easily reproducible.
\end{itemize}
\section{The .NET Framework}
% Bytecode - stack-based, high-level
% JIT
% Much metadata
% Typesafe
The \emph{.NET Framework} is a general-purpose software development platform
which is very similar to the \emph{Java Development Kit}.
It includes an extensive class library
and, similarly to Java, it is based on a virtual machine model which
makes it platform independent.
To be platform independent, the program is stored in the form of bytecode
(also called Common Intermediate Language or \emph{IL}).
The bytecode instructions use the stack to pass data around rather then
registers.
The instructions are aware of the object-oriented programing
model and are more high level then traditional processor instructions.
For example, there is special instruction to allocate an object in memory,
to access a field of an object, to cast an object to a different type
and so on.
The byte code is translated to machine code of the
target platform at run-time. The code is just-in-time compiled rather then
interpreted. This means that the first execution of a function will be slow
due to the translation overhead, but the subsequent executions of the
same function will be very fast.
The .NET executables preserve a lot of information about the
structure of the program. The complete structure of classes is
preserved including all the typing information. The code is still
separated into distinct methods and the complete signatures of
the methods are preserved -- that is, the executable includes
the method name, the parameters and the return type.
The virtual machine is intended to be type-safe. Before any code
is executed it is verified, and code that has been successfully
verified is guaranteed to be type-safe. For example, it is
guaranteed not to access unallocated memory.
However, the framework also provides some inherently unsafe
instructions. This allows support for languages like C++, but
it also means that some parts of the program will be unverifiable.
Unverifiable code may or may not be allowed to execute depending
on the local security policy.
In general, any programming language can be compiled to \emph{.NET} and
there are already dozens, if not hundreds, of compilers in existence.
The most commonly used language is \emph{C\#}.
\section{Scope of the project}
The \emph{.NET Framework} is a mature, comprehensive platform and handling
all of its features is outside the scope of the project. Therefore, it is
necessary to state which features should be supported and which
should not.
The decompiler was specified to at least handle all the features required
to decompile the quick-sort algorithm. This includes integer arithmetic,
array operations, conditional statements, loops and system method calls.
The decompiler will not support so-called unverifiable instructions --
for example, pointer arithmetic and access to arbitrary memory locations.
These operations are usually used only for
interoperability with native code (code not running under the .NET
virtual machine). As such, these instructions
are relatively rare and do not occur in many programs at all.
Generics will not be supported either. They were introduced in the
second version of the Framework and many programs still do not make any
use of them. Generics probably do not present any theoretical
challenge for decompilation either; they merely involve handling
more bytecodes and more special cases of the existing bytecodes.
\section{Previous work}
Decompilers have existed nearly as long as compilers.
The very first decompiler was written by Joel Donnelly in 1960
under the supervision of Professor Maurice Halstead.
Initially, decompilers were created to aid in the program
conversion process from second to third generation computers.
Decompilers were actively developed over the following
years and were used for a variety of applications.
Probably the best known and most influential research paper
is Cristina Cifuentes' PhD thesis \emph{``Reverse Compilation
Techniques''} (1994). As part of the thesis, Cifuentes
created a prototype, \emph{dcc}, which translates Intel i80286
machine code into C using various data- and control-flow
analysis techniques.
The introduction of virtual machines (e.g.\ the \emph{Java Virtual
Machine} or the \emph{.NET Framework Runtime}) made decompilation
more feasible than ever.
There are already several closed-source .NET decompilers
in existence. The best known one is probably
Lutz Roeder's \emph{Reflector for .NET}.
% Cristina
% Current .NET Decompilers
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 12 Pages
\cleardoublepage
\chapter{Preparation}
This chapter explains the general idea behind the decompilation
process. The implementation chapter then explains how these
methods were implemented and what measures were taken
to ensure the correctness of the produced output.
\section{Overview of the decompilation process}
% Read metadata
% Starting point
% Translating bytecodes into C# expressions;
% High-level; Stack-Based; Conditional gotos
% Dataflow analysis; inlining
% Control flow analysis (interesting)
% Suggar
The first step is to read and disassemble the \emph{.NET} executable files.
There is an open-source library called \emph{Cecil} which was created exactly
for this purpose. It loads the whole executable into memory and then
provides a simple object-oriented API to access its content.
Using this library, access to the executable is straightforward.
It is worth noting the wealth of information we are starting with.
The \emph{.NET} executables contain a quite high-level representation
of the program. Complete typing information is still present --
we have access to all the classes, including their fields and methods.
The types and names are also preserved.
The bytecode is stored on per-method basis and even the local variables
used in the method body are still explicitly represented including the type.%
\footnote{However, the names of local variables are not preserved.}
Ultimately, the challenge is to translate the bytecode of a given
method to C\# code.
For example, consider decompilation of the following code:
\begin{verbatim}
// Load "Hello, world!" on top of the stack
ldstr "Hello, world!"
// Print the top of the stack to the console
call System.Console::WriteLine(string)
\end{verbatim}
We start by taking the bytecodes one by one and translating them
to C\# statements. The bytecodes are quite high level, so
the translation is usually straightforward.
The two bytecodes can thus be represented as the following
two C\# statements (\verb|output1| and \verb|some_input| are just
dummy variables):
\begin{verbatim}
string output1 = "Hello, world!";
System.Console.WriteLine(some_input);
\end{verbatim}
The first major challenge of decompilation is data-flow
analysis. In this example, we would analyze the stack
behavior and we would find that \verb|output1| is on the top
of the stack when \verb|WriteLine| is called. Thus, we would get:
\begin{verbatim}
string output1 = "Hello, world!";
System.Console.WriteLine(output1);
\end{verbatim}
At this point the produced source code should already compile
and work as intended.
The second major challenge of decompilation is control-flow
analysis. The purpose of control-flow analysis
is to replace \verb|goto| statements with high-level structures
like loops and conditional statements. This makes the produced
source code much easier to read and understand.
Finally, there are several sugarings that can be done to the code to
make it more readable -- for example, replacing \verb|i = i + 1|
with the equivalent \verb|i++| or using rules of logic to
simplify boolean expressions.
\section{Translating bytecodes into C\# expressions}
% One at a time
% Byte code is high level (methods, classes, bytecode commands)
% Types preserved (contrast with Java)
The first task is to naively convert the individual bytecodes into
equivalent C\# expressions or statements.
This task is reasonably straightforward because the bytecodes
are quite high level and are aware of the object-oriented
programming model. For example, there are bytecodes
for integer addition, object creation, array access,
field access, object casting and so on; all of these trivially
translate to C\# expressions.
Any branching commands are also easy to handle because
C\# supports labels and \verb|goto|s. Unconditional branches
are translated to \verb|goto| statements and conditional
branches are translated to \verb|if (condition) goto Label;|.
C\# uses local variables for data transfer whereas bytecodes
use the stack. This mismatch will be handled during the
data-flow analysis phase. Until then, dummy variables
are used as placeholders. For example, the \verb|add| bytecode
would be translated to \verb|input1 + input2|.
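For illustration, a conditional branch such as \verb|brtrue|
(branch if the popped value is non-zero) might naively be translated
as follows (\verb|input1| is again a dummy placeholder and the labels
are hypothetical):
\begin{verbatim}
// Bytecode:
IL_01: brtrue IL_05

// Naive C# translation:
if (input1) goto IL_05;
\end{verbatim}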
\section{Data-flow analysis}
\subsection{Stack in .NET}
% Stack behavior
% Deterministic stack (tested - failed)
Bytecodes use a stack to pass data between consecutive
instructions. When a bytecode is executed, it pops its arguments
from the stack, it performs some operation on them and then
it pushes the result, if any, back to the stack.
The number of popped elements is fixed and known before execution time
-- for example, any binary operation pops exactly two arguments and
a method call pops one element for each of its arguments.
Most bytecodes push either one element on the stack or none at all.%
\footnote{There is one exception to this -- the `dup' instruction.
More on this later.}
This makes the data-flow analysis slightly simpler.
The framework imposes some useful restrictions with regard to
control flow. In theory, we could arrive at a given point in the program
and, depending on which execution path was taken, end up with
two completely different stacks. This is not allowed. Regardless
of the path taken, the stack must have the same number of elements in it.
The typing is also restricted -- if one execution path pushes an integer
on the stack, any other execution path must push an integer as well.
\subsection{Stack simulation}
Bytecodes use the stack to move data around, but there is no such stack
concept in C\#. Therefore we need to use something that C\# does
have -- local variables.
The transformation from stack to local variables is simpler than
it might initially seem. Basically, pushing something on the
stack means that we will need that value later on. So in C\#,
we create a new temporary variable and store the value in it.
On the other hand, popping is a usage of the value. So in C\#,
we just reference the temporary variable that
holds the value we want. The only difficulty is figuring out
which temporary variable it is.
This is where the restrictions on the stack come in handy -- we can
figure out what the state of the stack will be at any given point.
That is, we do not know what the value will be, but we do know
who pushed the value on the stack and what the type of the value is.
For example, after executing the following code
\begin{verbatim}
IL_01: ldstr "Hello"
IL_02: ldstr "world"
\end{verbatim}
we know that there will be exactly two values on the stack --
the first one pushed by the bytecode labeled \verb|IL_01| and the
second one pushed by the bytecode labeled \verb|IL_02|. Both of
them are of type \verb|String|. Let's say that the state
of the stack is \{\verb|IL_01|, \verb|IL_02|\}.
Now, let us pop a value from the stack by calling
\verb|System.Console.WriteLine|. This method takes one
\verb|String| argument. Just before the method is called
the state of the stack is \{\verb|IL_01|, \verb|IL_02|\}.
Therefore the value pushed by \verb|IL_02| is popped
from the stack and we are left with \{\verb|IL_01|\}.
Furthermore, we know which temporary local variable to
use in the method call -- the one that stores the value
pushed by \verb|IL_02|.
Let us call \verb|System.Console.WriteLine| again to completely
empty the stack and let us see what the produced C\# source code
would look like (the comments show the state of the stack
between the individual commands).
\begin{verbatim}
// {}
String temp_for_IL_01 = "Hello";
// {IL_01}
String temp_for_IL_02 = "world";
// {IL_01, IL_02}
System.Console.WriteLine(temp_for_IL_02);
// {IL_01}
System.Console.WriteLine(temp_for_IL_01);
// {}
\end{verbatim}
\subsection{The `dup' instruction}
In general, bytecodes push either one value on the stack or none at all.
There is only one exception to this -- the \verb|dup| instruction.
The purpose of this instruction is to duplicate the value on top of the
stack. Fortunately, it does not break our model. The only significance
of the \verb|dup| instruction is that the same value will be used twice --
that is, the temporary local variable will be referenced twice.
The \verb|dup| instruction, however, makes some future optimizations
more difficult.
The \verb|dup| instruction is used relatively rarely and
it usually just saves a few bytes of disk space.
Note that the \verb|dup| instruction could always be replaced
by a store to a local variable followed by two loads.
The \verb|dup| instruction can be used in expressions like
\verb|this.a + this.a| -- the value of \verb|this.a| is obtained
once, duplicated and then the duplicates are added.
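For illustration, such an expression might compile to bytecode of
roughly the following shape (the exact code depends on the compiler):
\begin{verbatim}
ldarg.0     // load 'this'
ldfld a     // load the value of field 'a' once
dup         // duplicate the value on top of the stack
add         // add the two copies together
\end{verbatim}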
\subsection{Control merge points}
This is in essence the most complex part of data-flow analysis.
It is caused by the possibility of multiple execution paths.
There are two scenarios that branching can cause.
The nice scenario is when a value is pushed in one location and,
depending on the execution path, can be popped in two or more different
locations. This translates well into the local variable model --
it simply means that the temporary local variable will be referenced
several times and there is nothing special we need to do.
The worse scenario is when a value can be pushed on the stack
in two or more different locations and then popped in only one location.
An example of this would be
\verb|Console.WriteLine(condition ? "true": "false")| --
depending on the value of the condition either ``true'' or
``false'' is printed. This code snippet compiles into bytecode
with two possible execution paths -- one in which ``true'' is pushed
on the stack and one in which ``false'' is pushed on the stack.
In both cases the value is popped only once when
\verb|Console.WriteLine| is called.
This scenario does not work well with the local variable
model. Two values can be pushed on the stack and so two temporary
local variables are created -- one holding ``true'' and
one holding ``false''.
Depending on the execution path either of these can be on the
top of the stack when \verb|Console.WriteLine| is executed and
therefore we do not know which temporary variable to use as the argument
for \verb|Console.WriteLine|. It is impossible to know.
This can be resolved by creating yet another temporary variable
which will hold the actual argument for the \verb|Console.WriteLine|
call. Depending on the execution path taken, one of the previous
variables needs to be copied into this one.
So the produced C\# source code would look like this:
\begin{verbatim}
String sigma;
if (condition) {
String temp1 = "true";
sigma = temp1;
} else {
String temp2 = "false";
sigma = temp2;
}
Console.WriteLine(sigma);
\end{verbatim}
Note that this is essentially the trick that is used to obtain
the static single assignment (SSA) form of programs during compilation.
In reality, this happens extremely rarely and it is almost exclusively
caused by the C\# \verb|?:| operator.
\subsection{In-lining expressions}
% Generally difficult, but here SSA-SR
% Validity considered - order, branching
The code produced so far contains many temporary variables.
These variables are often not necessary and can be
optimized away. For example, the code
\begin{verbatim}
string output1 = "Hello, world!";
System.Console.WriteLine(output1);
\end{verbatim}
can be simplified into a single line:
\begin{verbatim}
System.Console.WriteLine("Hello, world!");
\end{verbatim}
Doing this optimization makes the produced code much more readable.
In general, removing a local variable from a method is non-trivial.
However, these generated local variables are special -- they were
introduced only to replace the stack-based data-flow model.
This means that the local variables are assigned only once and are
read only once%
\footnote{With the exception of the `dup' instruction which is read twice
and thus needs to be handled specially.}
(corresponding to the push and pop operations respectively).
There are some safety issues with this optimization.
Even though we are sure that the local variable is assigned and read
only once, we need to make sure that doing the optimization will not
change the semantics of the program. For example, consider the
following code:
\begin{verbatim}
string output1 = f();
string output2 = g();
System.Console.WriteLine(output2 + output1);
\end{verbatim}
The functions \verb|f| and \verb|g| may have side-effects and in this
case \verb|f| is called first and then \verb|g| is called.
Optimizing the code would produce
\begin{verbatim}
System.Console.WriteLine(g() + f());
\end{verbatim}
which calls the functions in the opposite order.
Being conservative, we will allow this optimization only if the
declaration of the variable immediately precedes its use.
In other words, the variable is declared and then immediately
used, with no possible side-effects in between.
Using this rule, the previous example can be optimized to
\begin{verbatim}
string output1 = f();
System.Console.WriteLine(g() + output1);
\end{verbatim}
but it can not be optimized further because the function
\verb|g| is executed between the declaration of \verb|output1|
and its use.
\section{Control-flow analysis}
% Recovering high-level structures
The main point of control-flow analysis is to recover high-level
structures like loops and conditional blocks of code.
This is not necessary to produce semantically correct code,
but it does make the code much more readable compared to
only using \verb|goto| statements. It is also
the most theoretically interesting part of the whole
decompilation process.
In general, \verb|goto|s can always be completely removed
by playing some `tricks' like duplicating code or adding
boolean flags. However, as far as readability of the
code is concerned, this does more harm than good.
Therefore, we want to remove \verb|goto|s in the `natural'
way -- by recovering the high-level structures they represent
in the program. Any \verb|goto|s that can not be removed
in such a way are left in the program.
See figures \ref{IfThen}, \ref{IfThenElse} and \ref{Loop}
for examples of control-flow graphs.
\begin{figure}[tbh]
\centerline{\includegraphics{figs/IfThen.png}}
\caption{\label{IfThen}Conditional statement (if-then)}
\end{figure}
\begin{figure}[tbh]
\centerline{\includegraphics{figs/IfThenElse.png}}
\caption{\label{IfThenElse}Conditional statement (if-then-else)}
\end{figure}
\begin{figure}[tbh]
\centerline{\includegraphics{figs/Loop.png}}
\caption{\label{Loop}Loop}
\end{figure}
\subsection{Finding loops}
% T1-T2 reduction
% Nested loops
% Goto=Continue=Break; Multi-level break/continue
T1-T2 transformations are used to find loops.
These transformations are usually used to determine whether
a control-flow graph is reducible or not, but
it turns out that they are suitable for finding loops as well.
If a control-flow graph is irreducible, it means that the
program can not be represented using only high-level structures.
That is, \verb|goto| statements are necessary to represent the
program. This is undesirable since \verb|goto| statements
usually make the decompiled source code less readable.
See figure \ref{Ireducible} for the classical example
of an irreducible control-flow graph.
\begin{figure}[tbh]
\centerline{\includegraphics{figs/Ireducible.png}}
\caption{\label{Ireducible}Irreducible control-flow graph}
\end{figure}
If a control-flow graph is reducible, the program may or may not be
representable only using high-level structures. That is,
\verb|goto| statements still may be necessary on some occasions.
For example, C\# has a \verb|break| statement which exits
the innermost loop, but it does not have any statement which
would exit multiple levels of loops.%
\footnote{This is language dependent.
The Java language, for example, does provide such a command.}
Therefore on such occasions we have to use the \verb|goto| statement.
The algorithm works by taking the initial control-flow graph
and then repeatedly applying simplifying transformations to it.
As we can see from the name, the algorithm consists of two
transformations called T1 and T2.
If the graph is reducible, the algorithm will eventually simplify
the graph to only a single node. If the graph is not reducible,
we will end up with multiple nodes and no more transformations
that can be performed.
Both of the transformations look for a specific pattern in
the graph and then replace the pattern with something simpler.
See figure \ref{T1T2} for a diagram showing the T1 and T2 transformations.
\begin{figure}[tbh]
\centerline{\includegraphics{figs/T1T2.png}}
\caption{\label{T1T2}T1 and T2 transformations}
\end{figure}
The T1 transformation removes a self-loop. If one of the successors
of a node is the node itself, then the node is a self-loop.
That is, the node `points' to itself. The T1 transformation
replaces the node with a node without such a link.
The number of performed T1 transformations corresponds to the
number of loops in the program.
The T2 transformation merges two consecutive nodes together.
The only condition is that the second node has only one
predecessor, which is the first node.
Performing this transformation repeatedly reduces
any directed acyclic graph into a single node.
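As an illustration only -- the \verb|Node| type below is a simplified
stand-in, not this project's actual data structure -- the reduction
algorithm can be sketched in C\# as follows:
\begin{verbatim}
using System.Collections.Generic;
using System.Linq;

class Node {
    public HashSet<Node> Successors = new HashSet<Node>();
    public HashSet<Node> Predecessors = new HashSet<Node>();
}

static class Reducer {
    // Repeatedly applies T1 and T2; returns true if and only if
    // the graph collapses into a single node, i.e. it is reducible.
    public static bool Reduce(List<Node> nodes) {
        bool changed = true;
        while (changed) {
            changed = false;
            for (int i = 0; i < nodes.Count; i++) {
                Node n = nodes[i];
                // T1: remove a self-loop (each removal is one loop found)
                if (n.Successors.Contains(n)) {
                    n.Successors.Remove(n);
                    n.Predecessors.Remove(n);
                    changed = true;
                    break;
                }
                // T2: merge a node into its unique predecessor
                if (n.Predecessors.Count == 1) {
                    Node pred = n.Predecessors.First();
                    pred.Successors.Remove(n);
                    foreach (Node s in n.Successors) {
                        s.Predecessors.Remove(n);
                        s.Predecessors.Add(pred);
                        pred.Successors.Add(s);
                    }
                    nodes.RemoveAt(i);
                    changed = true;
                    break;
                }
            }
        }
        return nodes.Count == 1;
    }
}
\end{verbatim}
Note that when a merged node branches back to its predecessor, the
merge creates a self-loop on the predecessor, which a later T1
transformation then removes -- this is exactly how a loop is detected.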
See figures \ref{IfThenElseReduction}, \ref{NestedLoops} and
\ref{NestedLoops2} for examples how T1 and T2 transformations
can be used to reduce graphs.
\begin{figure}[p]
\centerline{\includegraphics{figs/IfThenElseReduction.png}}
\caption{\label{IfThenElseReduction}Reduction of if-then-else statement}
\end{figure}
\begin{figure}[p]
\centerline{\includegraphics{figs/NestedLoops.png}}
\caption{\label{NestedLoops}Reduction of two nested loops}
\end{figure}
\begin{figure}[tbhp]
\centerline{\includegraphics{figs/NestedLoops2.png}}
It is not immediately obvious that these are, in fact,
two nested loops. A fifth conceptual node has been added
to demonstrate why the loops can be considered nested.
\caption{\label{NestedLoops2}Reduction of two nested loops 2}
\end{figure}
The whole algorithm is guaranteed to reduce the graph to a single
node if and only if the graph is reducible. However, it may produce
different intermediate states depending on the order of the
transformations. For example, the number of T1 transformations
used can vary. This affects the number of apparent loops in the program.
Note that although reducible graphs are preferred,
irreducible graphs can be easily decompiled as well
using \verb|goto| statements. Programs that were originally
written in C\# are most likely going to be reducible.
\subsection{Finding conditionals}
% Compound conditions
Any T1 transformation (self-loop reduction) produces a loop.
Similarly, a T2 transformation or several T2 transformations
produce a directed acyclic sub-graph. This sub-graph can
usually be represented as one or more conditional statements.
Any node in the control-flow graph that has two successors
is potentially a conditional -- that is, an \verb|if-then-else|
statement. The node itself determines which branch will
be taken, so the node is the condition. We now need to
determine what the `true' body and the `false' body are.
That is, we look for the blocks of code that will
be executed when the condition does or does not hold respectively.
This is done by considering reachability. Code that is
reachable only by following the `true' branch can be
considered the `true' body of the conditional.
Similarly, code that is reachable only by following the
`false' branch is the `false' body of the conditional.
Finally, the code that is reachable from both branches is
not part of the conditional -- it is the code that follows
the whole \verb|if-then-else| statement.
See figure \ref{IfThenElseRechablility} for a diagram of this.
\begin{figure}[tbhp]
\centerline{\includegraphics{figs/IfThenElseRechablility.png}}
\caption{\label{IfThenElseRechablility}Reachability of nodes for a conditional}
\end{figure}
\subsection{Short-circuit boolean expressions}
Short-circuit boolean expressions are boolean expressions
which may not evaluate completely.
The semantics of the normal boolean expression \verb|f() & g()| is
to evaluate both \verb|f()| and \verb|g()| and then return
the logical `and' of the results.
The semantics of the short-circuit boolean expression \verb|f() && g()|
is different. The expression \verb|f()| is evaluated as before,
but \verb|g()| is evaluated only if \verb|f()| returned \emph{true}.
If \verb|f()| returned \emph{false} then the whole expression
will return \emph{false} anyway and so there is no point in evaluating
\verb|g()|.
In general, conditionals depending on this short-circuit logic
cannot be expressed as normal conditionals without the use of \verb|goto|s
or code duplication.
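For illustration, a statement such as \verb|if (f() && g()) A(); else B();|
corresponds to branching of roughly the following shape (a sketch of
the control flow, not the exact compiler output):
\begin{verbatim}
if (!f()) goto Else;   // f() false -- skip g() entirely
if (!g()) goto Else;
A();                   // the 'true' body
goto End;
Else:
B();                   // the 'false' body
End: ;
\end{verbatim}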
Therefore it is desirable to find the short-circuit boolean expressions
in the control-flow graph.
The short-circuit boolean expressions will manifest themselves as
one of four patterns in the control-flow diagrams.
See figure \ref{shortcircuit}.
\begin{figure}[tbh]
\centerline{\includegraphics{figs/shortcircuit.png}}
\caption{\label{shortcircuit}Short-circuit control-flow graphs}
\end{figure}
These patterns can be easily searched for and replaced with a single node.
Nested expressions like \verb/(f() && g()) || (h() && i())/
will simplify well by applying the algorithm repeatedly.
\subsection{Basic blocks}
% Performance optimization only (from compilation)
A basic block is a block of code which is always executed as
a unit. That is, no other code can jump into the middle of the
block and there are no jumps from the middle of the block.
The implication of this is that, as far as control-flow
analysis is concerned, the whole basic block can be
considered a single instruction.
The primary reason for implementing this is the performance gain.
\section{Requirements Analysis}
The decompiler must successfully round-trip a quick-sort algorithm.
That is, given an executable containing an implementation of
the quick-sort algorithm, the decompiler must produce
C\# source code that is both syntactically and semantically correct.
The produced source code must compile and work correctly
without any additional modifications to the source code.
To achieve this, the decompiler needs to handle at least the following:
\begin{itemize}
\item Integers and integer arithmetic
\item Creation and use of integer arrays
\item Branching
\item Definition of several methods
\item Methods with arguments and return values
\item Recursive method calls
\item Reading and parsing of integer command line arguments
\item Output of text to the standard console output
\end{itemize}
\section{Development model}
The development is based on the spiral model.
The decompiler will internally consist of several consecutive code
representations. The decompiled program will be gradually
transformed from one representation to another until it reaches
the last one and is outputted as source code.
This matches the spiral model well. Successful implementation
of each code representation can be seen as one iteration in the
spiral model.
Furthermore, once a code representation is finished, it can
be improved by implementing an optimization which transforms it.
This can be seen as another iteration.
\section{Reference documentation}
The .NET executables will be decompiled into C\# source code.
It is therefore crucial to be intimately familiar with the language.
Evaluation semantics, operator precedence, associativity and other
aspects of the language can be found in the \emph{ECMA-334 Standard --
C\# Language Specification}.
In order to decompile .NET executables, it is necessary to become
familiar with the .NET Runtime (i.e.\ the .NET Virtual Machine).
In particular, it is necessary to learn how the execution engine
works -- how the stack behaves, what limitations are imposed
on the code, what data types are used and so on.
Most importantly, it is necessary to become familiar with the majority
of the bytecodes.
The primary reference for the .NET Runtime is the
\emph{ECMA-335 Standard -- Common Language Infrastructure (CLI)}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 25 Pages
\cleardoublepage
\chapter{Implementation}
\section{Development process}
% SVN
% GUI -- options; progressive optimizations
% Testing
% Refactoring
The whole project was developed in C\# on the \emph{Windows} platform.
SharpDevelop\footnote{Open-source integrated development environment
for the .NET framework. I am actively participating in its development,
currently being one of the major developers.}
was used as the integrated development environment for writing the code.
The source code and all files related to the project are stored on an SVN%
\footnote{Subversion (SVN) is an open-source version control system.
It is very similar to CVS and aims to be its successor.}
server. This has several advantages, such as back-up of all data,
a documented project history (the commit messages) and the ability
to revert back to any previous state. The graphical user interface
for SVN also makes it possible to easily see and review all changes
made since the previous commit to the server.
The user interface of the decompiler is very simple.
It consists of a single window which is largely covered by a text area
containing the output of the decompilation process.
There are only a few controls in the top part of the window.
Several check boxes can be used to turn individual optimization
stages of the decompilation process on or off.
Numerical text boxes can be used to specify the maximal number of
iterations for several transformations. By gradually increasing
these it is possible to observe changes to the output step-by-step.
These options significantly aid in debugging and testing of
the decompiler.
The source code generated by the decompiler is also stored on
the disk and committed to SVN together with all other files.
This is the primary method of regression testing. The generated source code
is the primary output of the decompiler and thus any change
in it may potentially signify a regression in the decompiler.
Before every commit the changes to this file were reviewed
to confirm that the new features did yield the expected
results and that no regressions were introduced.
\section{Overview of representation structures}
% Stack-based bytecode
% Variable-based bytecode
% High-level blocks
% Abstract Syntax Tree
The decompiler uses the following four intermediate representations
of the program (sequentially in this order):
\begin{itemize}
\item Stack-based bytecode
\item Variable-based bytecode
\item High-level blocks
\item Abstract Syntax Tree
\end{itemize}
The code is translated from one representation to the next and some
transformations/optimizations are done at each of these representations.
The representations are very briefly described in the following
four subsections and then covered again in more detail.
Trivial artificial examples are used to aid the explanation.
Note that the evaluation chapter shows the actual outputs
for the quick-sort implementation.
\subsection{Stack-based bytecode representation}
The stack-based bytecode is the original bytecode as read from the
assembly. It is never modified. However, it is analyzed and
annotated with the results of the analysis. Most data-flow
analysis is performed at this stage.
This is an example of code in this representation:
\begin{verbatim}
ldstr "Hello, world!"
call System.Console::WriteLine
\end{verbatim}
\subsection{Variable-based bytecode representation}
The variable-based bytecode is the result of removing the
stack-based data-flow model. Local variables are now used instead.
Even though the stack-based model is now completely removed,
we can still use the bytecodes to represent the program.
We just need to think about them in a slightly different way.
The bytecodes are in essence functions which take arguments
and return values. For example, the \verb|add| bytecode
pops two values and pushes back the result and so it is now
a function which takes two arguments and returns the result.
The \verb|ldstr "Hello, world!"| bytecode used above is slightly
more complicated. It does not pop anything from the stack,
but it still has the implicit argument \verb|"Hello, world!"|
which comes from the immediate operand of the bytecode.
Continuing with the example, the variable-based form would be:
\begin{verbatim}
temp1 = ldstr("Hello, world!");
call(System.Console::WriteLine, temp1);
\end{verbatim}
where the variable \verb|temp1| was used to remove the stack data-flow model.
Some transformations are performed at this stage, modifying the
code. For example, the in-lining optimization would transform
the previous example to:
\begin{verbatim}
call(System.Console::WriteLine, ldstr("Hello, world!"));
\end{verbatim}
Note that expressions can be nested as seen in the code above.
\subsection{High-level blocks representation}
The next data representation introduces high-level blocks.
In this representation, related parts of code can be grouped
together into a block of code.
The block signifies some high-level structure and blocks can
be nested (for example, conditional within a loop).
All control-flow analysis is performed at this stage.
The blocks can be visualized by a pair of curly braces:
\begin{verbatim}
{ // Block of type 'method body'
IL_01: call(System.Console::WriteLine, ldstr("Hello, world"));
{ // Block of type 'loop'
IL_02: call(System.Console::WriteLine, ldstr("!"));
IL_03: br(IL_02) // Branch to IL_02
}
}
\end{verbatim}
Initially there is only one block containing the whole method
body and as the high-level structures are found, new smaller
blocks are created. At this stage the blocks have no function
other than grouping of code.
% Validity -- gotos
This data representation also allows code to be reordered
to better fit the original high-level structure.
\subsection{Abstract Syntax Tree representation}
This is the final representation. The abstract syntax tree (AST)
is a structured way of representing the produced source code.
For example, the expression \verb|(a + 1)| would be represented
as four objects -- a parenthesis, a binary operator, an identifier
and a constant. The structure is a tree, so the identifier and the
constant are children of the binary operator, which is in turn
a child of the parenthesis.
% The high-level structures are directly translated to nodes in the
% abstract syntax tree. For example, loop will translate to
% a node representing code \verb|for(;;) {}|.
% The bytecodes are translated to the abstract syntax tree on
% individual basis. For example, the bytecode expression \verb|add(1, 2)|
% is translated to \verb|1 + 2|. The previously seen
% \verb|call(System.Console::WriteLine, ldstr("Hello, world"))|
% is translated to \verb|System.Console.WriteLine("Hello, world")|.
This initial abstract syntax tree will be quite verbose due to
constructs that ensure correctness of the produced source code.
There will be an especially high quantity of \verb|goto|s and
parentheses. For example, a simple increment is initially
represented as \verb|((i) = (i) + (1));| where the parentheses
are used to ensure safety.
Transformations are done on the abstract syntax tree to simplify the code
where it is known to be safe.
Several further transformations are done to make the code more readable.
For example, renaming of local variables or use of common
programming idioms.
\section{Stack-based bytecode representation}
\label{Stack-based bytecode representation}
This is the first and simplest representation of the code.
It directly corresponds to the bytecode in the .NET executable.
It is the only representation which is not transformed in any way.
% Cecil
% Offset, Code, Operand
The bytecode is loaded from the .NET executable with the
help of the Cecil library.
Three fundamental pieces of information are loaded for each bytecode --
its offset from the start of the method, opcode
(name) and the immediate operand.
The immediate operand is additional argument
present for some bytecodes -- for example, for the \verb|call| bytecode,
the operand specifies the method that should be called.
% Branch target, branches here
For bytecodes that can branch, the operand is the branch target.
Each bytecode stores a reference to the branch target (if any) and
each bytecode also stores a reference to all bytecodes that can branch to it.
\subsection{Stack analysis}
Once all bytecodes are loaded, the stack behavior of the program is analyzed.
The precise state of the stack is determined for each bytecode.
The .NET Framework dictates that this must be possible for valid executables.
It is possible to determine the exact number of elements on the
stack, and for each of these elements it is possible to determine
which bytecode pushed it on the stack%
\footnote{In some control merge scenarios, there might be several
bytecodes that can, depending on the execution path, push the value on the stack.}
and what the type of the element is.
% Stack before / after
% StackBehaviour -- difficult for methods
There are two stack states for each bytecode -- the first is the known
state \emph{before} the bytecode is executed and the second is the state
\emph{after} the bytecode is executed. Knowing what the bytecode does,
the latter state can obviously be derived from the former.
This is usually easy -- for example, the \verb|add| bytecode pops
two elements from the stack and pushes one back. The pushed element
is obviously pushed by the \verb|add| bytecode
and the element is of the same type as the two popped elements%
\footnote{The 'add' bytecode can only sum numbers of the same type.}.
The \verb|ldstr| bytecode just pushes one element of type \verb|string|.
The behaviour of the \verb|call| bytecode is slightly more complex
because it depends on the method that it is invoking.
Implementing these rules for all bytecodes is
tedious, but usually straightforward.
The whole stack analysis is performed by an iterative algorithm.
The stack \emph{before} the very first bytecode is empty.
We can use this to find the stack state \emph{after} the first bytecode.
The stack state \emph{after} the first bytecode is then the
initial stack state for the next bytecode.
We can apply this principle again and again until all states are known.
% Branching -- 1) fall 2) branch 3) fall+branch veryfy same
Branching dictates the propagation of the states between bytecodes.
For two simple consecutive bytecodes, the stack state \emph{after}
the first bytecode is the same as the state \emph{before} the second.
Therefore we just copy the state.
If the bytecode is a branch then we need to copy
the stack state to the target of the branch as well.
Dead code can never be executed and thus it does not have
any stack state associated with it.
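A minimal C\# sketch of this propagation follows; the \verb|Bytecode|
and \verb|StackState| types and the \verb|Simulate| method are
simplified stand-ins for illustration, not the actual implementation:
\begin{verbatim}
using System.Collections.Generic;

class StackState { /* which bytecode pushed each element, and its type */ }

class Bytecode {
    public StackState StackBefore, StackAfter;
    public List<Bytecode> Successors = new List<Bytecode>();
    // Placeholder: would pop this bytecode's arguments from 'before'
    // and push its result, if any.
    public StackState Simulate(StackState before) { return before; }
}

static class StackAnalysis {
    public static void Run(List<Bytecode> bytecodes) {
        var worklist = new Queue<Bytecode>();
        bytecodes[0].StackBefore = new StackState(); // empty at method entry
        worklist.Enqueue(bytecodes[0]);
        while (worklist.Count > 0) {
            Bytecode b = worklist.Dequeue();
            b.StackAfter = b.Simulate(b.StackBefore);
            // Successors: the fall-through bytecode and/or the branch target.
            foreach (Bytecode next in b.Successors) {
                if (next.StackBefore == null) {       // not yet visited
                    next.StackBefore = b.StackAfter;  // propagate the state
                    worklist.Enqueue(next);
                }
                // otherwise the framework guarantees the states agree
            }
        }
        // Bytecodes never reached (dead code) keep StackBefore == null.
    }
}
\end{verbatim}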
At the bytecode level, the .NET Runtime handles boolean values
as integers. For example, the bytecode \verb|ldc.i4 1| can
be interpreted equivalently as `push \verb|1|' and `push \verb|true|'.
Therefore three integer types are used --
`zero integer', `non-zero integer' and generic `integer' (of
unknown value). Bytecode \verb|ldc.i4 1| therefore has type
`non-zero integer'.
\section{Variable-based bytecode representation}
\label{Variable-based bytecode representation}
% Removed stack-based model -- local variables, nesting
This is the next representation; it differs from the stack-based
one by completely removing the stack-based data-flow model.
Data is passed between instructions using local variables
or nested expressions.
Consider the following stack-based program which evaluates
$2 * 3 + 4 * 5$.
\begin{verbatim}
ldc.i4 2
ldc.i4 3
mul
ldc.i4 4
ldc.i4 5
mul
add
\end{verbatim}
The program is transformed to the variable-based form by
interpreting the bytecodes as functions which return values.
The result of every function call is stored in a new temporary
variable so that the value can be referred to later.
\begin{verbatim}
int temp1 = ldc.i4(2);
int temp2 = ldc.i4(3);
int temp3 = mul(temp1, temp2);
int temp4 = ldc.i4(4);
int temp5 = ldc.i4(5);
int temp6 = mul(temp4, temp5);
int temp7 = add(temp3, temp6);
\end{verbatim}
The stack analysis performed earlier is used to determine
what local variables should be used as arguments.
For example, the stack analysis tells us that the stack contains
precisely two elements just before the \verb|add| instruction.
It also tells us that these two elements have been pushed on the stack by the
\verb|mul| instructions (the third and sixth instruction respectively).
Therefore to access the results of the \verb|mul| instructions,
the \verb|add| instruction needs to use the temporary variables
\verb|temp3| and \verb|temp6|.
\subsection{Representation}
Internally, each expression (`function call') consists of three
main elements -- the opcode (e.g.\ \verb|ldc.i4|), the immediate operand
(e.g.\ \verb|1|) and the arguments (e.g.\ \verb|temp1|).
The argument can be either a reference to a local variable or
other nested expression. In the case of
\begin{verbatim}
call(System.Console::WriteLine, ldstr("Hello, world"))
\end{verbatim}
\verb|call| is the opcode,
\verb|System.Console::WriteLine| is the operand and
\verb|ldstr("Hello, world")| is the argument (nested expression).
Expressions like \verb|ldstr("Hello, world")| or \verb|ldc.i4|
do not have any arguments other than the immediate operand.
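A minimal sketch of such a structure (with illustrative names,
not the project's actual ones) could be:
\begin{verbatim}
using System.Collections.Generic;

// An expression holds an opcode, an optional immediate operand and
// a list of arguments; each argument is itself an expression (either
// a nested expression or an 'ldloc' referencing a local variable).
class Expression {
    public string OpCode;        // e.g. "ldc.i4", "mul", "call"
    public object Operand;       // e.g. 2, or a method reference
    public List<Expression> Arguments = new List<Expression>();
}
\end{verbatim}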
% stloc, ldloc
There are bytecodes for storing and loading a local variable
(\verb|stloc| and \verb|ldloc| respectively).
Therefore no special data structures are needed to declare
and reference the local variables, because we can store
a program like
\begin{verbatim}
int temp1 = ldc.i4(2);
int temp2 = ldc.i4(3);
int temp3 = mul(temp1, temp2);
\end{verbatim}
only using bytecodes as
\begin{verbatim}
stloc(temp1, ldc.i4(2));
stloc(temp2, ldc.i4(3));
stloc(temp3, mul(ldloc(temp1), ldloc(temp2)));
\end{verbatim}
% Dup
Local variables \verb|temp1|, \verb|temp2| and \verb|temp3|
are tagged as being `single static assignment and single read'.
This is a necessary property for the in-lining optimization.
All generated temporary variables satisfy this property
except for the ones storing the result of a \verb|dup| instruction,
which are read twice.
\subsection{In-lining of expressions}
\label{In-lining of expressions}
The variable-based representation passes data between instructions
either using local variables or using expression nesting.
Expression nesting is the preferred, more readable form.
The in-lining transformation simplifies the code by removing
some temporary variables and using expression nesting instead.
Consider the following code.
\begin{verbatim}
stloc(temp1, ldstr("Hello, world!"));
call(System.Console::WriteLine, ldloc(temp1));
\end{verbatim}
This code can be simplified into a single line by in-lining the
local variable \verb|temp1| (that is, the \verb|ldstr| expression
is nested within the \verb|call| expression):
\begin{verbatim}
call(System.Console::WriteLine, ldstr("Hello, world!"));
\end{verbatim}
The algorithm is iterative. Two consecutive lines are considered
and if the first line assigns to a local variable which is used
in the second line then the variable is in-lined.
This is repeated until no more optimizations can be done.
There are some safety concerns with this optimization.
The optimization would break the program if the local variable
was read later in the program.
Therefore the optimization is done only for variables that have
the property of being `single static assignment and single read'.
The second concern is change of execution order.
Consider the following pseudo-code:
\begin{verbatim}
temp1 = f();
add(g(), temp1);
\end{verbatim}
In-lining \verb|temp1| would change the order in which the
functions \verb|f| and \verb|g| are called and thus the
optimization can not be performed.
On the other hand, the following code can be optimized:
\begin{verbatim}
temp1 = f();
add(1, temp1);
\end{verbatim}
The effect of the optimization will be that the expression
\verb|1| will be evaluated before \verb|f| rather than
after it. However, this does not change the semantics
of the program because the evaluation of \verb|1| does not have
any side-effects and it will still evaluate to the same value.
More importantly, the same property holds for the \verb|ldloc|
(load local variable) instruction. It does not have any side-effects and it will
evaluate to the same value even if evaluated before the
function \verb|f| (or any other expression being in-lined).
This is true because the function \verb|f|
(or any other expression being in-lined)
can not change the value of the local variable.
% The only way to change a local
% variable would be using the \verb|stloc| instruction.
% However, \verb|stloc| can not be part of the in-lined
% expression because it does not return any value -
% it is a statement and as such can not be nested within an expression.
% The property holds for \verb|1|, \verb|ldloc| and it would hold
% for some other expressions as well, but due to the
% way the stack-based model was removed it is sufficient
% to consider only \verb|ldloc|. This is by far the most common case.
To sum up, it is safe to in-line a local variable if it has
the property `single static assignment and single read' and
if the argument referencing it is preceded only
by \verb|ldloc| instructions.
The following code would be optimized in two iterations:
\begin{verbatim}
stloc(temp1, ldc.i4(2));
stloc(temp2, ldc.i4(3));
stloc(temp3, mul(ldloc(temp1), ldloc(temp2)));
\end{verbatim}
First iteration (in-line \verb|temp2|):
\begin{verbatim}
stloc(temp1, ldc.i4(2));
stloc(temp3, mul(ldloc(temp1), ldc.i4(3)));
\end{verbatim}
Second iteration (in-line \verb|temp1|):
\begin{verbatim}
stloc(temp3, mul(ldc.i4(2), ldc.i4(3)));
\end{verbatim}
\subsection{In-lining of `dup' instruction}
The \verb|dup| instruction can not be in-lined because
it does not satisfy the `single static assignment and single read'
property. This is because the data is referenced twice.
For example, in-lining the following code would cause
the function \verb|f| to be called twice.
\begin{verbatim}
stloc(temp1, dup(f()));
add(temp1, temp1);
\end{verbatim}
However, there are circumstances where this would be acceptable.
If the expression within the \verb|dup| instruction is a
constant then it is possible to in-line it without any harm.
The following code can be optimized:
\begin{verbatim}
stloc(temp1, dup(ldc.i4(2)));
add(temp1, temp1);
\end{verbatim}
Even more elaborate expressions like
\verb|mul(ldc.i4(2), ldc.i4(3))| are still constant.
The instruction \verb|ldarg.0| (load \verb|this|) is also
constant relative to a single invocation of the method.
\section{High-level blocks representation}
\label{High-level blocks representation}
% Links
% Link regeneration
So far the code is just a sequential list of statements.
The statements are in exactly the same order as found in the
original executable and all control flow is achieved
only by \verb|goto| statements.
The `high-level block' representation structure introduces
greater freedom to the code. High-level structures are recovered
and the code can be arbitrarily reordered.
The code is organized into blocks in this representation.
A block can contain executable code as well as other nested blocks.
Therefore this resembles a tree-like data structure.
The root block of the tree represents the method and contains
all of its code.
A node in the tree represents some high-level structure
(a loop, an \verb|if| statement). Nodes can also represent `meta'
high-level structures which are used during the optimizations.
For example, a node can encapsulate a block of code that is known
to be acyclic (without loops).
Finally, all leaves in the tree are basic blocks.
\subsection{Safety}
This data representation allows the code to be restructured
and reordered. Therefore care is needed to make sure that
the produced code is semantically correct.
% Validity -- arbitrary nodes
It would be possible to make sure that every transformation is
valid. However, there is an even more robust solution --
give up at the start and allow arbitrary transformations.
The only constraint is that code can only be moved. It cannot
be duplicated and it cannot be deleted. Deleting code would
indeed cause problems. Anything else is permitted.
When the code is generated, correctness is ensured by
explicit \verb|label|s and \verb|goto|s placed in front
of and after every basic block respectively. For example,
consider the following piece of code, in which the order of
the basic blocks was reversed and two unnecessary high-level
nodes were added:
\begin{verbatim}
goto BasicBlock1; // Method prologue
for(;;) {
BasicBlock2:
Console.WriteLine("world");
goto End;
}
if (true) {
BasicBlock1:
Console.WriteLine("Hello");
goto BasicBlock2;
}
End:
\end{verbatim}
The inclusion of explicit \verb|label|s and \verb|goto|s
makes the high-level structures and the order of basic blocks
irrelevant and thus the produced source code will be correct.
Note that in the code above the lines \verb|for(;;) {| and
\verb|if (true) {| will never be reached during execution.
In general, the transformations done on the code will be
more sane than in the example above and most of the
\verb|label|s and \verb|goto|s will be redundant.
In the following code all of the \verb|label|s and \verb|goto|s
can be removed:
\begin{verbatim}
goto BasicBlock1; // Method prologue
BasicBlock1:
Console.WriteLine("Hello");
goto BasicBlock2;
BasicBlock2:
Console.WriteLine("world");
goto End;
End:
\end{verbatim}
The removal of \verb|label|s and \verb|goto|s is done once
the Abstract Syntax Tree is created.
To sum up, finding and correctly identifying all high-level
structures will produce nice results, but failing to
do so will not cause any harm as far as correctness is concerned.
\subsection{Tree data structure}
The data structure to store the tree is very simple --
the tree consists of nodes where each node contains a link
to its parent and a list of children. A child can be either
another node or a basic block (leaf).
The API of the tree structure is severely restricted.
Once the tree is created, it is only possible to add new empty
nodes and to move basic blocks from one node to another.
This ensures that basic blocks can not be duplicated or
deleted, which is the safety requirement discussed in the
previous section.
All transformations are limited by this constraint.
Each node also provides events which are fired whenever
the child collection of the node is changed.
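A minimal sketch of such a restricted API (illustrative names only)
might look like this:
\begin{verbatim}
using System;
using System.Collections.Generic;

// Children can be created empty or moved between nodes, but never
// duplicated or deleted, which enforces the safety property above.
class TreeNode {
    readonly List<object> children = new List<object>(); // nodes or basic blocks
    public TreeNode Parent { get; private set; }
    public event Action ChildrenChanged; // used to invalidate caches

    public TreeNode AddNewNode() {
        var node = new TreeNode { Parent = this };
        children.Add(node);
        ChildrenChanged?.Invoke();
        return node;
    }

    public void MoveChild(object child, TreeNode newParent) {
        children.Remove(child);
        newParent.children.Add(child);
        if (child is TreeNode node) node.Parent = newParent;
        ChildrenChanged?.Invoke();
        newParent.ChildrenChanged?.Invoke();
    }
}
\end{verbatim}
The events are what allows the cached \emph{predecessor} and
\emph{successor} links described below to be invalidated whenever
the tree changes.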
\subsection{Creating basic blocks}
The previous data representation stored the code as a sequence
of statements. Some of these statements will always be
executed sequentially without the interference of branching.
In control-flow analysis we are primarily interested in
branching and therefore it is desirable to group such
sequential parts of code together into basic blocks.
A basic block is a block of code that is always guaranteed to
be executed together without any branching. Basic blocks
can be easily found by determining which statements start
a basic block. The very first statement in a method
naturally starts the first basic block. Branching transfers
control somewhere else, so the statement immediately following
a branch starts a basic block.
Similarly, the target of a branch command starts a basic block.
Using these three simple rules the whole method body
is split into a few basic blocks.
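These three rules translate directly into code. A minimal sketch
(the \verb|Stmt| type is a simplified stand-in for a statement):
\begin{verbatim}
using System.Collections.Generic;

class Stmt {
    public bool IsBranch;   // does this statement branch?
    public Stmt Target;     // the branch target, if any
}

static class BasicBlocks {
    // Returns the statements that start a basic block.
    public static HashSet<Stmt> FindStarts(IList<Stmt> stmts) {
        var starts = new HashSet<Stmt>();
        if (stmts.Count > 0) starts.Add(stmts[0]);  // rule 1: method entry
        for (int i = 0; i < stmts.Count; i++) {
            if (!stmts[i].IsBranch) continue;
            if (i + 1 < stmts.Count)
                starts.Add(stmts[i + 1]);           // rule 2: after a branch
            if (stmts[i].Target != null)
                starts.Add(stmts[i].Target);        // rule 3: branch target
        }
        return starts;
    }
}
\end{verbatim}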
It is desirable to store control-flow links between basic blocks.
These links are used for finding high-level structures.
Each basic block has \emph{successors} and \emph{predecessors}.
A \emph{successor} is another basic block that may be executed
immediately after this one.
There can be up to two \emph{successors} -- the following basic
block and the basic block being branched to. Both of these
exist if the basic block ends with a conditional branch.
% If the condition is false, the control falls though to the
% following block and if the condition is true the branch is taken.
A \emph{predecessor} is just a link in the opposite direction
of the \emph{successor} link. Note that the number of
\emph{predecessors} is not limited -- several \verb|goto|s
can branch to the same location.
\subsection{Control-flow links between nodes}
The \emph{predecessor} and \emph{successor} links of basic
blocks reflect the control flow between basic blocks.
However, once basic blocks are `merged' by moving them
into a node, we might be interested in the flow properties
of the whole node rather than just the individual basic blocks.
Consider two sibling nodes that represent two loops.
In order to put the nodes in the correct order, we need to know
which one is the \emph{successor} of the other.
Therefore the \emph{predecessor} and \emph{successor} links
apply to whole nodes as well. Consider two sibling nodes $A$ and $B$.
Node $B$ is a \emph{successor} of node $A$ if there exists
a basic block within node $A$ that branches to a basic block
within node $B$.
The algorithm to calculate the \emph{successors} of a node is
as follows:
\begin{itemize}
\item Get a list of all basic blocks that are children of
node $A$. The node can contain other nested nodes so this part
needs to be performed recursively.
\item Get a list of all succeeding basic blocks of node $A$.
This is the union of all successors for all basic blocks
within the node.
\item However, we do not want succeeding basic blocks,
we want the succeeding siblings of the node. Therefore,
for each succeeding basic block traverse the data structure
up until a sibling node is reached.
\end{itemize}
The algorithm to calculate \emph{predecessors} of a node is similar.
The \emph{predecessor} and \emph{successor} links are used
extensively and it would be computationally expensive to
recalculate them every time they are needed. Therefore the
links and intermediate results of the calculation are cached.
The events of the tree structure are used to invalidate relevant
caches.
\subsection{Finding loops}
\label{Finding loops}
The loops are found using the T1-T2 transformations which were
discussed in the preparation chapter.
There are two node types that directly correspond to the results
of these two transformations. Performing a T1 transformation
produces a node of type \emph{Loop} and performing a T2 transformation
produces a node of type \emph{AcyclicGraph}.
The algorithm works by considering all the nodes and basic blocks
in the tree individually and evaluating the conditions required for the
transformations.
If a transformation can be performed for a given node, it is immediately
performed and the algorithm is restarted on the new transformed graph.
The condition for the T1 transformation (\emph{Loop}) is that the node
must be a self-loop. The condition for the T2 transformation
(\emph{AcyclicGraph}) is that the node must have only one predecessor.
By construction each \emph{AcyclicGraph} node contains exactly
two nodes. For example, five sequential statements would be
represented as four nested \emph{AcyclicGraph} nodes.
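Conceptually, the nesting for five sequential statements looks like this:
\begin{verbatim}
AcyclicGraph {
    AcyclicGraph {
        AcyclicGraph {
            AcyclicGraph { stmt1, stmt2 },
            stmt3 },
        stmt4 },
    stmt5 }
\end{verbatim}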
This makes the tree more complex and more difficult to analyse.
The acyclic property is only required for loop finding and is not
needed for anything else later on. Therefore once loops are found,
the \emph{AcyclicGraph} nodes are flattened (the children of the node
are moved to its parent and the node is removed).
Note that the created loops are trivial -- the \emph{Loop} node
represents an infinite loop without initializer, condition or
iterator. The specifics of the loop are not represented
in this tree data structure. However, they are eventually
recovered in the abstract syntax tree phase of the decompiler.
\subsection{Finding conditionals}
\label{Finding conditionals}
% Reachable sets
In order to recover conditional statements three new node types
are introduced: \emph{Condition}, \emph{Block} and \emph{IfStatement}.
\emph{Condition} represents a piece of code that is guaranteed to
branch into one of two locations. \emph{Block} has no special
semantics and merely groups several nodes into one.
\emph{IfStatement} represents the whole conditional including
the condition and two bodies.
The \emph{IfStatement} node is still a genuine node in the tree
data structure. However, subtyping is used to restrict its children.
The \emph{IfStatement} node must have precisely three children
of types \emph{Condition}, \emph{Block} and \emph{Block} representing
the condition, `true' body and `false' body respectively.
This defines the semantics of the \emph{IfStatement} node.
The algorithm starts by encapsulating all two-successor basic blocks
with \emph{Condition} nodes. Two-successor basic blocks are conditional
branches and thus they are conditions of \verb|if| statements.
Note that the bytecode \verb|br| branches unconditionally and thus
it is not a condition of an \verb|if| statement (basic blocks ending
with \verb|br| have only one successor).
The iterative part of the algorithm is to look for free-standing
\emph{Condition} nodes and encapsulate them with \emph{IfStatement} nodes.
This involves finding the `true' and `false' bodies for the if statements.
The `true' and `false' bodies of an \verb|if| statement are found by
considering reachability as discussed in the preparation chapter.
Nodes reachable only by following the `true' branch define the `true'
body of the \verb|if| statement. Similarly for the `false' body.
The rest of the nodes is the set reachable from both and it is not
part of the \verb|if| statement -- it is the code following the
\verb|if| statement. Note that these three set are disjoint.
The reachable nodes are found by an iterative algorithm in
which successors are added to a collection until the collection
no longer changes.
Nodes that are reachable \emph{only} from the `true' branch are
obtained by taking the set reachable from the `true' branch and removing
nodes reachable from the `false' branch.
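As an illustration, the reachability computation can be sketched as
follows. This is a minimal sketch assuming a hypothetical \verb|Node|
class with a \verb|Successors| collection.
\begin{verbatim}
// Minimal sketch -- collect all nodes reachable from a start node.
HashSet<Node> GetReachable(Node start)
{
    HashSet<Node> reachable = new HashSet<Node>();
    Queue<Node> agenda = new Queue<Node>();
    agenda.Enqueue(start);
    while (agenda.Count > 0) {
        Node node = agenda.Dequeue();
        if (reachable.Add(node)) {  // Add returns true if newly added
            foreach (Node successor in node.Successors)
                agenda.Enqueue(successor);
        }
    }
    return reachable;
}

// The `true' body is then the set difference:
//   trueBody = GetReachable(trueBranch);
//   trueBody.ExceptWith(GetReachable(falseBranch));
\end{verbatim}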
Consider the special case where there are no nodes reachable from both
the `true' and `false' branches. In this case, there is no code
following the \verb|if| statement. Such code might look like this:
\begin{verbatim}
if (condition) {
return null;
} else {
return result;
}
\end{verbatim}
In this special case the `false' body is moved outside the \verb|if|
statement producing more compact code:
\begin{verbatim}
if (condition) {
return null;
}
return result;
\end{verbatim}
\subsection{Short-circuit conditionals}
% Short-circuit
Consider a condition like \verb|a & b| (\verb|a| and \verb|b|).
This and even more complex expressions compile into code without
any branches. Therefore the expression will be contained within
a single basic block and thus the approach described so far
works well.
Conditions like \verb|a && b| are more difficult to handle
(\verb|a| and \verb|b|, but evaluate \verb|b| only if \verb|a|
is true). These short-circuit conditionals introduce additional
branching and thus they are more difficult to decompile.
% The short-circuit conditionals will manifest themselves as
% one of four flow graphs. See figure \ref{impl_shortcircuit}
% (repeated from the preparation chapter).
% Implementation
% \begin{figure}[tbh]
% \centerline{\includegraphics{figs/shortcircuit.png}}
% \caption{\label{impl_shortcircuit}Short-circuit control-flow graphs}
% \end{figure}
The expression \verb|a| is always evaluated first.
The flow graph is characterized by two things -- in which case the
expression \verb|b| is evaluated and which path is taken if the
expression \verb|b| is not evaluated.
The \emph{Condition} node was previously defined as a piece of
code that is guaranteed to branch into one of two locations.
This still holds for the short-circuit flow graphs. If the two nodes
that evaluate \verb|a| and \verb|b| are considered together,
they still define a proper condition for an \verb|if| statement.
We define a new node type \emph{ShortCircuitCondition}
which is a special case of \emph{Condition}.
This node has exactly two children (expressions \verb|a| and
\verb|b|) and a meta-data field which specifies which one of
the four short-circuit patterns it represents. This is for
convenience, so that the pattern matching does not have to be
done again in the future.
The whole program is repeatedly searched and if one of the four patterns
is found, the two nodes are merged into a \emph{ShortCircuitCondition} node.
This new node represents the combined expression (e.g.\ \verb|a && b|).
Since it is a single node, it is eligible to fit into the pattern
again and thus nested short-circuit expressions will also be
found (e.g.\ \verb/(a && b) || (c && d)/).
Note that this is an extension of the previous algorithm rather
than a new one. It needs to be performed just before
\emph{Condition} nodes are encapsulated in \emph{IfStatement} nodes.
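As an illustration, the detection and merging of one of the four
patterns could be sketched as follows; the property names and the
\verb|ReplacePair| helper are assumptions made for the purpose of the
example, not the decompiler's actual API.
\begin{verbatim}
// Minimal sketch -- detect and merge the `a && b' pattern.
bool TryMergeAndPattern(Condition a)
{
    Condition b = a.TrueSuccessor as Condition;
    if (b != null &&
        b.Predecessors.Count == 1 &&             // reachable only via `a'
        a.FalseSuccessor == b.FalseSuccessor) {  // both fail to same target
        ReplacePair(a, b,
            new ShortCircuitCondition(a, b, Pattern.LogicalAnd));
        return true;  // restart -- the new node may match a pattern again
    }
    return false;  // the other three patterns are handled analogously
}
\end{verbatim}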
\section{Abstract Syntax Tree representation}
\label{Abstract Syntax Tree representation}
The abstract syntax tree (AST) is the last representation of the program.
It very closely corresponds to the final textual output.
\subsection{Data structure}
An external library is used to hold the abstract syntax tree.
Initially the \emph{CodeDom} library that ships with the
\emph{.NET Framework} was used, but it was soon abandoned due
to lack of support for many crucial C\# features.
It was replaced with \emph{NRefactory} which is an open-source
library used in SharpDevelop for parsing and representation of
C\# and VB.NET files.
It focuses exclusively on these two languages and
thus provides excellent support for them.
NRefactory allows perfect control over the generated code.
Parsing an arbitrary C\# file and then generating it again
will almost exactly reproduce the original text. Whitespace
would be reformatted, but everything else is explicitly represented
in NRefactory's abstract syntax tree.
For example, \verb|(a + b)| and \verb|a + b| have
different representations and so do \verb|if (a) return;| and
\verb|if (a) { return; }|.
The decompiler aims to generate not only correct code, but also
code that is as readable as possible. Therefore this precise control is desirable.
\subsection{Generating skeleton}
The first task is to generate a code skeleton that is
equivalent to the one in the executable.
The \emph{Cecil} library helps with reading the executable
meta-data and thus it is not necessary to work directly with
the binary format of the executable.
The following four object types can be found in .NET executables:
classes, structures, interfaces and enumerations.
For the purpose of logical organization these can be nested
within namespaces and, in some cases, nested within each other.
Classes can be inherited and may implement one or more interfaces.
Depending on the type, the objects can contain fields,
properties, events and methods.
A specific set of modifiers can be applied to almost everything.
Handling all these cases is tedious, but usually straightforward.
\subsection{Generating method bodies}
\label{Generating method bodies}
Once the skeleton is created, high-level blocks and the bytecode
expressions contained within them are translated to the
abstract syntax tree. For example, the \emph{Loop} node is translated
to a \emph{ForStatement} in the AST and the bytecode expression \verb|add|
is translated to a \emph{BinaryOperatorExpression} in the AST.
In terms of lines of code, this is by far the largest part of the
decompiler. The reason for this is that there are many bytecodes
and every bytecode needs to have its own specific translation code.
Conceptually\footnote{In reality there are more functions
that are loosely coupled.},
the translation code consists of three main functions:
\emph{TranslateMethodBody}, \emph{TranslateHighLevelNode} and
\emph{TranslateBytecodeExpression}.
All of these return a complete abstract syntax tree for the given input.
\emph{TranslateMethodBody} is the simplest function; it encapsulates
the whole algorithm. It takes the tree of high-level blocks as input and
produces the complete abstract syntax tree of the method.
Internally, it calls \emph{TranslateHighLevelNode} for each top-level
node and then merely concatenates the results.
\emph{TranslateHighLevelNode} translates one particular high-level
node into abstract syntax tree. There are several types of high-level
nodes and each of them needs to be considered individually.
If the high-level node has child nodes then this function is
called recursively. Some comments about particular node types:
The \emph{BasicBlock} node needs to be preceded by an explicit label
and followed by an explicit \verb|goto| in order to ensure safety.
The \emph{Loop} node is easy to handle since it is just an infinite loop.
The \emph{IfStatement} node is the most difficult to handle since it
requires creating the condition for the \verb|if| statement.
Remember that the condition can include short-circuit booleans and
that these can be arbitrarily nested.
\emph{TranslateBytecodeExpression} is the lowest level function.
It translates the individual bytecode expressions.
Note that a bytecode expression may have other bytecode expressions
as arguments and therefore this function is often called recursively.
Getting the abstract syntax trees for all arguments is in fact the
first thing that is done. After that, these arguments are combined in
a way that is appropriate for the given bytecode and operand.
This logic is completely different for each bytecode.
A simple example is the \verb|add| bytecode --
in this case the arguments end up being children of
\emph{BinaryOperatorExpression} (which is an AST node).
At this point we have the abstract syntax tree representing the
bytecode expression.
The expression is further encapsulated in parentheses for safety
and the type of the expression is recorded in the AST meta-data.
Unsupported bytecodes are handled gracefully and are represented as
function calls. For example, if the \verb|add| bytecode were
not implemented, it would be output as \verb|IL__add(1, 2)|
rather than \verb|1 + 2|.
Recording the type of the AST is useful so that it can be converted
depending on the context. If an AST of type \verb|IntegerOne| is
used in a context where \verb|bool| is expected, the AST can
simply be replaced by \verb|true|.
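For illustration, such a context-driven conversion might look like
the following sketch; all helper names here are invented.
\begin{verbatim}
// Minimal sketch -- adapt an expression to the type expected
// by its context.
Expression ConvertToContext(Expression expr, Type expectedType)
{
    if (expectedType == typeof(bool) && IsIntegerConstant(expr)) {
        int value = GetIntegerConstant(expr);
        return MakeBooleanLiteral(value != 0);  // 1 -> true, 0 -> false
    }
    return expr;  // no conversion known or needed
}
\end{verbatim}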
Here is a list of bytecodes whose translation into the abstract syntax
tree has been implemented
(star is used to represent several bytecodes with similar names):
Arithmetic: \texttt{add* div* mul* rem* sub* and xor shl shr* neg not} \\
Arrays: \texttt{newarr ldlen ldelem.* stelem.*} \\
Branching: \texttt{br brfalse brtrue beq bge* bgt* ble* blt* bne.un} \\
Comparison: \texttt{ceq cgt* clt*} \\
Conversions: \texttt{conv.*.*} \\
Object-oriented: \texttt{newobj castclass call callvirt} \\
Constants: \texttt{ldnull ldc.* ldstr ldtoken} \\
Data access: \texttt{ldarg ldfld stfld ldsfld stsfld ldloc stloc} \\
Miscellaneous: \texttt{nop dup ret}
The complexity of implementing a translation for a bytecode varies
from a single line (\verb|nop|) to over a page of code (\verb|callvirt|).
\subsection{Optimizations}
The produced abstract syntax tree represents a fully functional program.
That is, it can be pretty printed and the generated source code would
compile and work without problems.
However, the produced source code is still quite verbose and
therefore several optimizations are done to simplify it or make it `nicer'.
The optimizations are implemented by using the visitor pattern
provided by the NRefactory library. For each optimization
a new class is created which inherits from the \emph{AstVisitor}
class. An instance of this class is created and applied to
the root of the abstract syntax tree.
As a result of this, NRefactory will traverse the whole abstract
syntax tree and notify the visitor (the optimization) about
every element it encounters. For example, whenever a \verb|goto|
statement is encountered, the method \verb|VisitGotoStatement|
will be invoked on the visitor. If the optimization wants to
act upon this, it has to override the \verb|VisitGotoStatement| method.
The abstract syntax tree can be modified during the traversal.
For example, when the \verb|goto| statement is encountered,
the visitor (the optimization) can modify it, replace it or simply
remove it.
\subsubsection{Removal of dead labels}
\label{Removal of dead labels}
This is not the first optimization to be executed, but it is the
simplest one and therefore it is explained first.
The purpose of this optimization is to remove dead labels.
That is, to remove all labels that are not referenced by
any \verb|goto| statement.
The optimization uses two visitors. The first visitor overrides
the \verb|VisitGotoStatement| method and is thus notified about
all \verb|goto| statements in the method. The body of the overridden
\verb|VisitGotoStatement| method records the name of the label
being targeted and puts it into an `alive' list.
The second visitor overrides the \verb|VisitLabelStatement|
method and is thus notified about all labels. If the label
is not in the `alive' list, it is removed.
Note that all optimizations are done on a per-method basis
and therefore we do not have to worry about name clashes
of labels from different methods.
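The two visitors could be sketched as follows. The method signatures
are only an approximation of NRefactory's visitor API and the
\verb|RemoveCurrentNode| helper is invented for the example.
\begin{verbatim}
// Minimal sketch -- two-pass removal of dead labels.
class FindAliveLabels : AstVisitor
{
    public HashSet<string> Alive = new HashSet<string>();
    public override object VisitGotoStatement(GotoStatement g, object data)
    {
        Alive.Add(g.Label);  // the targeted label is alive
        return base.VisitGotoStatement(g, data);
    }
}
class RemoveDeadLabels : AstVisitor
{
    public HashSet<string> Alive;  // filled in by the first visitor
    public override object VisitLabelStatement(LabelStatement l, object data)
    {
        if (!Alive.Contains(l.Label))
            RemoveCurrentNode(l);  // invented helper -- removes the label
        return base.VisitLabelStatement(l, data);
    }
}
\end{verbatim}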
\subsubsection{Removal of negations}
\label{Removal of negations}
In some cases negations can be removed by using the rules of
logic. Negations are represented as \emph{UnaryOperatorExpression}
in the abstract syntax tree.
When a negation is encountered, it is matched against the following
patterns and simplified if possible.
\verb|!((x)) = !(x)| \\
\verb|!!(x) = (x)| \\
\verb|!(a > b) = (a <= b)| \\
\verb|!(a >= b) = (a < b)| \\
\verb|!(a < b) = (a >= b)| \\
\verb|!(a <= b) = (a > b)| \\
\verb|!(a == b) = (a != b)| \\
\verb|!(a != b) = (a == b)| \\
\verb:!(a & b) = (!(a) | !(b)): \\
\verb:!(a && b) = (!(a) || !(b)): \\
\verb:!(a | b) = (!(a) & !(b)): \\
\verb:!(a || b) = (!(a) && !(b)): \\
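Two of these rules, sketched in code; the AST property names are
illustrative.
\begin{verbatim}
// Minimal sketch -- apply `!!(x) = (x)' and `!(a > b) = (a <= b)'.
Expression SimplifyNegation(UnaryOperatorExpression not)
{
    UnaryOperatorExpression inner = not.Operand as UnaryOperatorExpression;
    if (inner != null && inner.Op == Operator.Not)
        return inner.Operand;                   // !!(x) = (x)
    BinaryOperatorExpression cmp = not.Operand as BinaryOperatorExpression;
    if (cmp != null && cmp.Op == Operator.GreaterThan)
        return new BinaryOperatorExpression(    // !(a > b) = (a <= b)
            cmp.Left, Operator.LessThanOrEqual, cmp.Right);
    return not;  // the remaining rules follow the same shape
}
\end{verbatim}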
\subsubsection{Removal of parentheses}
\label{Removal of parentheses}
% C precedence and associativity
To ensure safety, the abstract syntax tree will contain an extensive
number of parentheses. Even a simple increment would be expressed
as \verb|((i) = ((i) + (1)))|.
This optimization removes parentheses where it is known to be safe.
Parentheses can be removed from primitive values (\verb|(1)|),
identifiers (\verb|(i)|) and already parenthesized expressions
(\verb|((...))|).
Further parentheses can also be removed due to the unambiguous
context they are in. This includes cases like
\verb|return (...);|, \verb|array[(...)]|, \verb|if((...))|,
\verb|(...);| and several others.
The remaining cases are governed by the C\# precedence
and associativity rules. There are 15 precedence groups in
C\# and expressions within the same group are left-associative%
\footnote{Only assignment and conditional operator (?:)
are right associative, but these are never generated in nested
form by the decompiler.}.
The optimization implements a function \verb|GetPrecedence| that
returns the precedence for the given expression as an integer.
The majority of expressions are binary expressions. When a binary
expression is encountered, precedence is calculated for it
and both of its operands.
For example, consider \verb|(a + b) - (c * d)|.
The precedence of the main binary expression is 12 and the
precedences of the operands are 12 and 13 respectively.
Since the right operand has higher precedence than the binary
expression itself, its parentheses can be removed.
The left operand has the same precedence. However, in this special
case (left operand, same precedence), parentheses can be removed
as well due to the left-associativity of C\#.
Similar, but simpler, logic applies to unary expressions.
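The decision itself can be sketched as follows, using the
\verb|GetPrecedence| function described above; the code is a minimal
sketch, not the decompiler's exact implementation.
\begin{verbatim}
// Minimal sketch -- does a parenthesized operand of a binary
// expression still need its parentheses?
bool NeedsParenthesis(Expression operand,
                      BinaryOperatorExpression parent, bool isLeftOperand)
{
    int parentPrecedence  = GetPrecedence(parent);
    int operandPrecedence = GetPrecedence(operand);
    if (operandPrecedence > parentPrecedence)
        return false;  // the operand binds tighter anyway
    if (operandPrecedence == parentPrecedence && isLeftOperand)
        return false;  // safe due to left-associativity
    return true;
}
\end{verbatim}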
\subsubsection{Removal of `goto' statements}
\label{Removal of `goto' statements}
% Goto X; labe X: ;
% Replacing with break/continue
This is the most complex optimization performed on the
abstract syntax tree. Its purpose is to remove or replace
\verb|goto| statements.
In the following case it is obvious that the \verb|goto| can be removed:
\begin{verbatim}
// Code
goto Label;
Label:
// Code
\end{verbatim}
However, in general, more complex scenarios can occur.
It is not immediately obvious that the \verb|goto| statement
can be removed in the following code:
\begin{verbatim}
if (condition1) {
if (condition2) {
// Code
goto Label;
} else {
// Code
}
}
for(;;) {
for(;;) {
Label:
// Code
}
}
\end{verbatim}
This optimization is based on simulation of execution under
various conditions.
There are three important questions that the simulation answers: \\
What would happen if the \verb|goto| was replaced by no-op (i.e.\ removed)? \\
What would happen if the \verb|goto| was replaced by \verb|break|? \\
What would happen if the \verb|goto| was replaced by \verb|continue|?
In the first case, the \verb|goto| is replaced by a
no-op and the simulator is started at that location.
The simulation traverses the AST structure until
it finds the next executable line of code or a label.
If the reached location is the same as the one that would be reached
by following the original \verb|goto| then the original \verb|goto|
can be removed.
Similar simulation runs are performed by replacing the
\verb|goto| by \verb|break| and \verb|continue|.
After this optimization is done, it is desirable to remove the dead labels.
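The decision procedure for a single \verb|goto| can thus be sketched
as follows; the simulation helpers are hypothetical names for the
behaviour described above.
\begin{verbatim}
// Minimal sketch -- can this `goto' simply be removed?
bool CanRemove(GotoStatement gotoStmt)
{
    // Simulate execution with the goto treated as a no-op...
    AstNode reachedByFallThrough = SimulateFallThrough(gotoStmt);
    // ...and by actually following the goto to its label.
    AstNode reachedByJump = SimulateJump(gotoStmt.Label);
    // If both end up at the same location, the goto is redundant.
    return reachedByFallThrough == reachedByJump;
}
// CanReplaceWithBreak and CanReplaceWithContinue work the same way,
// simulating `break' and `continue' instead of the no-op.
\end{verbatim}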
\subsubsection{Simplifying loops}
\label{Simplifying loops}
The \verb|for| loop statement has the following form in C\#:
\begin{verbatim}
for(initializer; condition; iterator) { body };
\end{verbatim}
The semantics of the \verb|for| loop statement is:
\begin{verbatim}
initializer;
for(;;) {
if (!condition) break;
body;
iterator;
}
\end{verbatim}
So far the decompiler generated only infinite loops in the form
\verb|for(;;)|. Now it is time to make the loops more interesting.
This optimization does pattern matching on the code and tries
to identify the initializer, condition or iterator.
If the loop is preceded by something like \verb|int i = 0;| then it is most
likely the initializer of the loop and it will be pushed into
the \verb|for| statement producing \verb|for(int i = 0;;)|.
If it is not really the initializer, it does not matter --
the semantics are the same.
If the first line of the body is \verb|if (not_condition) break;|,
it is assumed to be the condition for the loop and it is pushed
in to produce \verb|for(;!not_condition;)|.
If the last line of the body is \verb|i = ...| or \verb|i++|,
it is assumed to be the iterator.
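For illustration, pushing the condition into the loop could be
sketched as follows (the AST helper names are invented):
\begin{verbatim}
// Minimal sketch -- push a leading `if (c) break;' into the
// condition slot of an infinite `for(;;)' loop.
void PushConditionIntoLoop(ForStatement loop)
{
    IfStatement first = FirstStatementOf(loop.Body) as IfStatement;
    if (first != null && loop.Condition == null &&
        IsSingleBreak(first.TrueBody) && IsEmpty(first.FalseBody)) {
        // The negation is later cleaned up by `removal of negations'
        loop.Condition = Negate(first.Condition);
        Remove(first);
    }
}
\end{verbatim}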
Consider the following code:
\begin{verbatim}
int i = 0;
for(;;) {
if (i >= 10) break;
// body
i++;
}
\end{verbatim}
Performing this optimization will simplify the code to:
\begin{verbatim}
for(int i = 0; !(i >= 10); i++) {
// body
}
\end{verbatim}
This is then further simplified by the `removal of negations'
optimization to:
\begin{verbatim}
for(int i = 0; i < 10; i++) {
// body
}
\end{verbatim}
\subsubsection{Further optimizations}
\label{Further optimizations}
Several further optimizations are done which are briefly
discussed here.
% Rename variables -- i, j,k ...
Integer local variables are renamed from the more generic names
like \verb|V_0| to \verb|i|, \verb|j|, \verb|k|, and so on.
Boolean local variables are renamed to \verb|flag1|, \verb|flag2|
and so on.
% Type names
C\# aliases for types are used. For example, \verb|System.Int32|
is replaced with \verb|int|. C\# \verb|using| (\verb|import| in
Java) is used to simplify \verb|System.Console| to just \verb|Console|.
Types in scope of same namespace also do not have to be
fully qualified.
% Remove `this' reference
Explicit \verb|this.field| is simplified to just \verb|field|.
% Idioms -- i++
The increments in form \verb|i = i + 1| are simplified to \verb|i++|.
The increments in form \verb|i = i + 2| are simplified to \verb|i += 2|.
Analogous forms hold for decrements.
% Idioms -- string concat, i++
String concatenation \verb|String.Concat("Hello", "world")|
can be expressed in C\# just as \verb|"Hello" + "world"|.
Sometimes the `false' body of an \verb|if| statement ends up being
empty and can be harmlessly removed.
Some control-flow statements are redundant since the
implicit behavior is equivalent. For example, \verb|return;|
at the end of a method is redundant, and so is \verb|continue;|
at the end of a loop.
\subsection{Pretty Printing}
Pretty printing%
\footnote{Pretty printing is the act of converting abstract
syntax tree into a textual form.}
is provided as part of the NRefactory library and therefore does not
need to be implemented as part of the decompiler.
NRefactory provides several options that control
the format of the produced source code (for example, whether an opening
curly brace is placed on the same line as the \verb|if| statement or on
the following line).
NRefactory is capable of outputting the code both in C\# and in VB.NET.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 10 Pages
\cleardoublepage
\chapter{Evaluation}
\newcommand{\impref}[1]{See section \emph{\ref{#1} #1} on page \pageref{#1} for reference.}
\section{Success Criteria}
The principal criterion was that the quick-sort algorithm must successfully
decompile. This goal was achieved.
The decompiled source code is, in fact, almost identical to the original.
This is demonstrated at the end of the following section.
\section{Evolution of quick-sort}
This section demonstrates how the quick-sort algorithm is decompiled.
Four different representations are used during the decompilation process.
The output of each of these representations is shown.
Unless otherwise specified the shown pieces of code are
verbatim outputs of the decompiler.
\subsection{Original source code}
To save space we will initially consider only the \verb|Main| method of
the quick-sort algorithm. It parses textual arguments,
performs the quick-sort and then prints the result.
\begin{verbatim}
public static void Main(string[] args)
{
int[] intArray = new int[args.Length];
for (int i = 0; i < intArray.Length; i++) {
intArray[i] = int.Parse(args[i]);
}
QuickSort(intArray, 0, intArray.Length - 1);
for (int i = 0; i < intArray.Length; i++) {
System.Console.Write(intArray[i].ToString() + " ");
}
}
\end{verbatim}
This code is compiled with all optimizations turned on and its
binary form is considered in the following sections.
\subsection{Stack-based bytecode representation}
\impref{Stack-based bytecode representation}
This is a snippet of the bytecode as loaded from the executable.
This part represents the line \verb|intArray[i] = int.Parse(args[i]);|.
\verb|V_0| is \verb|intArray| and \verb|V_1| is \verb|i|.
Comments denote the stack behaviour of the bytecodes.
The (manually) starred bytecodes represent just the part
\verb|int.Parse(args[i]);|.
\begin{verbatim}
IL_0F: ldloc V_0 # Pop0->Push1
IL_10: ldloc V_1 # Pop0->Push1
IL_11: ldarg args * # Pop0->Push1
IL_12: ldloc V_1 * # Pop0->Push1
IL_13: ldelem.ref * # Popref_popi->Pushref
IL_14: call Parse() * # Varpop->Varpush Flow=Call
IL_19: stelem.i4 # Popref_popi_popi->Push0
\end{verbatim}
The used bytecodes are \verb|ldloc| (load local variable),
\verb|ldarg| (load argument), \verb|ldelem| (get array element),
\verb|call| (call method) and \verb|stelem| (set array element).
Here is the same snippet again after the stack analysis
has been performed.
\begin{verbatim}
// Stack: {}
IL_0F: ldloc V_0 # Pop0->Push1
// Stack: {IL_0F}
IL_10: ldloc V_1 # Pop0->Push1
// Stack: {IL_0F, IL_10}
IL_11: ldarg args # Pop0->Push1
// Stack: {IL_0F, IL_10, IL_11}
IL_12: ldloc V_1 # Pop0->Push1
// Stack: {IL_0F, IL_10, IL_11, IL_12}
IL_13: ldelem.ref # Popref_popi->Pushref
// Stack: {IL_0F, IL_10, IL_13}
IL_14: call Parse() # Varpop->Varpush Flow=Call
// Stack: {IL_0F, IL_10, IL_14}
IL_19: stelem.i4 # Popref_popi_popi->Push0
// Stack: {}
\end{verbatim}
For example, we can see that bytecode \verb|IL_13: ldelem.ref| pops
the last two elements from the stack -- \verb|{IL_11, IL_12}|.
These will hold the values \verb|args| and \verb|V_1| respectively.
The bytecode \verb|IL_19: stelem.i4| pops three values.
\subsection{Variable-based bytecode representation}
\impref{Variable-based bytecode representation}
This is the same snippet after the stack-based data-flow
is replaced with a variable-based one
(code manually simplified for clarity):
\begin{verbatim}
expr0D = ldloc(V_0);
expr0E = ldloc(i);
expr0F = ldarg(args);
expr10 = ldloc(i);
expr11 = ldelem.ref(expr0F, expr10);
expr12 = call(Parse(), expr11);
stelem.i4(expr0D, expr0E, expr12);
\end{verbatim}
Here is the actual unedited output:
\begin{verbatim}
stloc(expr0D, ldloc(V_0));
stloc(expr0E, ldloc(i));
stloc(expr0F, ldarg(args));
stloc(expr10, ldloc(i));
stloc(expr11, ldelem.ref(ldloc(expr0F), ldloc(expr10)));
stloc(expr12, call(Parse(), ldloc(expr11)));
stelem.i4(ldloc(expr0D), ldloc(expr0E), ldloc(expr12));
\end{verbatim}
\subsubsection{In-lining of expressions}
\impref{In-lining of expressions}
Here is the same snippet after two in-lining optimizations
(this is the actual output -- the GUI of the decompiler allows
performing optimizations step-by-step):
\begin{verbatim}
stloc(expr0D, ldloc(V_0));
stloc(expr0E, ldloc(i));
stloc(expr11, ldelem.ref(ldarg(args), ldloc(i)));
stloc(expr12, call(Parse(), ldloc(expr11)));
stelem.i4(ldloc(expr0D), ldloc(expr0E), ldloc(expr12));
\end{verbatim}
All possible in-lines are performed:
\begin{verbatim}
stelem.i4(ldloc(V_0), ldloc(i),
call(Parse(), ldelem.ref(ldarg(args), ldloc(i))));
\end{verbatim}
\subsection{Abstract Syntax Tree representation (part 1)}
\impref{Abstract Syntax Tree representation}
Let us omit the optimization of high-level structures
for the moment and skip directly to the Abstract Syntax Tree.
After transforming a snippet of code to variable-based bytecode
and then performing the in-lining optimization we have obtained
\begin{verbatim}
stelem.i4(ldloc(V_0), ldloc(i),
call(Parse(), ldelem.ref(ldarg(args), ldloc(i))));
\end{verbatim}
Let us see the same result for a larger part of the method:
\begin{verbatim}
BasicBlock_1: stloc(V_0, newarr(System.Int32,
conv.i4(ldlen(ldarg(args)))));
BasicBlock_2: stloc(i, ldc.i4(0));
BasicBlock_3: goto BasicBlock_6;
BasicBlock_4: stelem.i4(ldloc(V_0), ldloc(i), call(Parse(),
ldelem.ref(ldarg(args), ldloc(i))));
BasicBlock_5: stloc(i, @add(ldloc(i), ldc.i4(1)));
BasicBlock_6: if (ldloc(i) < conv.i4(ldlen(ldloc(V_0))))
goto BasicBlock_4;
BasicBlock_7: call(QuickSort(), ldloc(V_0), ldc.i4(0),
sub(conv.i4(ldlen(ldloc(V_0))), ldc.i4(1)));
\end{verbatim}
This code includes the first \verb|for| loop of the \verb|Main|
method. Note that \verb|br| and \verb|blt| were already
translated to \verb|goto| and \verb|if (...) goto|.
\subsubsection{Translation of bytecodes to C\# expressions}
\impref{Generating method bodies}
Let us translate just the bytecodes \verb|ldloc|, \verb|ldarg|
and \verb|ldc.i4|.
\begin{verbatim}
BasicBlock_1: stloc(V_0, newarr(System.Int32,
conv.i4(ldlen((args)))));
BasicBlock_2: stloc(i, (0));
BasicBlock_3: goto BasicBlock_6;
BasicBlock_4: stelem.i4((V_0), (i), call(Parse(),
ldelem.ref((args), (i))));
BasicBlock_5: stloc(i, @add((i), (1)));
BasicBlock_6: if ((i) < conv.i4(ldlen((V_0)))) goto BasicBlock_4;
BasicBlock_7: call(QuickSort(), (V_0), (0),
sub(conv.i4(ldlen((V_0))), (1)));
\end{verbatim}
This is a small improvement. Note the safety
parentheses that were inserted during the translation
(e.g.\ \verb|(0)|, \verb|(args)|).
After all translations are performed the code will look like this:
\begin{verbatim}
BasicBlock_1: System.Int32[] V_0 =
(new int[((int)((args).Length))]);
BasicBlock_2: int i = (0);
BasicBlock_3: goto BasicBlock_6;
BasicBlock_4: ((V_0)[(i)] = (System.Int32.Parse(((args)[(i)]))));
BasicBlock_5: (i = ((i) + (1)));
BasicBlock_6: if ((i) < ((int)((V_0).Length))) goto BasicBlock_4;
BasicBlock_7: (QuickSortProgram.QuickSort((V_0), (0),
(((int)((V_0).Length)) - (1))));
\end{verbatim}
\subsubsection{Removal of parentheses}
\impref{Removal of parentheses}
C\# rules of precedence are used to remove unnecessary parentheses:
\begin{verbatim}
BasicBlock_1: System.Int32[] V_0 = new int[(int)args.Length];
BasicBlock_2: int i = 0;
BasicBlock_3: goto BasicBlock_6;
BasicBlock_4: V_0[i] = System.Int32.Parse(args[i]);
BasicBlock_5: i = i + 1;
BasicBlock_6: if (i < (int)V_0.Length) goto BasicBlock_4;
BasicBlock_7: QuickSortProgram.QuickSort(V_0, 0, (int)V_0.Length - 1);
\end{verbatim}
\subsubsection{Removal of dead labels}
\impref{Removal of dead labels}
Unreferenced labels are removed.
\begin{verbatim}
System.Int32[] V_0 = new int[(int)args.Length];
int i = 0;
goto BasicBlock_6;
BasicBlock_4: V_0[i] = System.Int32.Parse(args[i]);
i = i + 1;
BasicBlock_6: if (i < (int)V_0.Length) goto BasicBlock_4;
QuickSortProgram.QuickSort(V_0, 0, (int)V_0.Length - 1);
\end{verbatim}
\subsubsection{Further optimizations}
\impref{Further optimizations}
The remainder of applicable AST optimizations is performed.
\begin{verbatim}
int[] V_0 = new int[args.Length];
int i = 0;
goto BasicBlock_6;
BasicBlock_4: V_0[i] = Int32.Parse(args[i]);
i++;
BasicBlock_6: if (i < V_0.Length) goto BasicBlock_4;
QuickSortProgram.QuickSort(V_0, 0, V_0.Length - 1);
\end{verbatim}
\subsection{High-level blocks representation}
\impref{High-level blocks representation}
\subsubsection{Initial state}
This representation is based on a tree structure. However,
initially the tree is flat with all basic blocks being siblings.
These are the first four basic blocks in the \verb|Main| method.
The relations between the nodes are noted in the brackets
(e.g.\ basic block 1 has one successor which is basic block number 2).
See figure \ref{QuickSortMain} for a graphical representation of
these four basic blocks.
\begin{verbatim}
// BasicBlock 1 (Successors:2 Parent:0)
BasicBlock_1:
int[] V_0 = new int[args.Length];
int i = 0;
goto BasicBlock_2;
// BasicBlock 2 (Predecessors:1 Successors:4 Parent:0)
BasicBlock_2:
goto BasicBlock_4;
\end{verbatim}
\begin{verbatim}
// BasicBlock 3 (Predecessors:4 Successors:4 Parent:0)
BasicBlock_3:
V_0[i] = Int32.Parse(args[i]);
i++;
goto BasicBlock_4;
// BasicBlock 4 (Predecessors:3,2 Successors:5,3 Parent:0)
BasicBlock_4:
if (i < V_0.Length) goto BasicBlock_3;
goto BasicBlock_5;
\end{verbatim}
\begin{figure}[tbh]
\centerline{\includegraphics{figs/QuickSortMain.png}}
\caption{\label{QuickSortMain}Loop in the `Main' method of quick-sort}
\end{figure}
\subsubsection{Finding loops and conditionals}
\impref{Finding loops}
Loops are found by repeatedly applying the T1-T2 transformations.
In this case the first transformation to be performed will be
a T2 transformation on basic blocks 1 and 2.
This will result in this `tree':
\begin{verbatim}
// AcyclicGraph 10 (Successors:4 Parent:0)
AcyclicGraph_10:
{
// BasicBlock 1 (Successors:2 Parent:10)
BasicBlock_1:
int[] V_0 = new int[args.Length];
int i = 0;
goto BasicBlock_2;
// BasicBlock 2 (Predecessors:1 Parent:10)
BasicBlock_2:
goto BasicBlock_4;
}
// BasicBlock 3 (Predecessors:4 Successors:4 Parent:0)
BasicBlock_3:
V_0[i] = Int32.Parse(args[i]);
i++;
goto BasicBlock_4;
// BasicBlock 4 (Predecessors:3,10 Successors:5,3 Parent:0)
BasicBlock_4:
if (i < V_0.Length) goto BasicBlock_3;
goto BasicBlock_5;
\end{verbatim}
Note how the links changed. Basic blocks 1 and 2 are now considered as a
single node 10 -- the predecessor of basic block 4 is now 10 rather than 2.
Links within the node 10 are limited just to the scope of that node.
Several further transformations are performed until all loops are found.
Conditional statements are identified as well.
This code has one conditional statement which
does not have any code in the bodies other than the \verb|goto|s.
\newpage
\begin{verbatim}
// BasicBlock 1 (Successors:2 Parent:0)
BasicBlock_1:
int[] V_0 = new int[args.Length];
int i = 0;
goto BasicBlock_2;
// BasicBlock 2 (Predecessors:1 Successors:11 Parent:0)
BasicBlock_2:
goto BasicBlock_4;
// Loop 11 (Predecessors:2 Successors:5 Parent:0)
Loop_11:
for (;;) {
// ConditionalNode 22 (Predecessors:3 Successors:3 Parent:11)
ConditionalNode_22:
BasicBlock_4:
if (i >= V_0.Length) {
goto BasicBlock_5;
// Block 21 (Parent:22)
Block_21:
} else {
goto BasicBlock_3;
// Block 20 (Parent:22)
Block_20:
}
// BasicBlock 3 (Predecessors:22 Successors:22 Parent:11)
BasicBlock_3:
V_0[i] = Int32.Parse(args[i]);
i++;
goto BasicBlock_4;
}
\end{verbatim}
\subsubsection{Final version}
Let us see the code once again without the comments and empty lines.
\begin{verbatim}
BasicBlock_1:
int[] V_0 = new int[args.Length];
int i = 0;
goto BasicBlock_2;
BasicBlock_2:
goto BasicBlock_4;
Loop_11:
for (;;) {
ConditionalNode_22:
BasicBlock_4:
if (i >= V_0.Length) {
goto BasicBlock_5;
Block_21:
} else {
goto BasicBlock_3;
Block_20:
}
BasicBlock_3:
V_0[i] = Int32.Parse(args[i]);
i++;
goto BasicBlock_4;
}
\end{verbatim}
\newpage
\subsection{Abstract Syntax Tree representation (part 2)}
\subsubsection{Removal of `goto' statements}
\impref{Removal of `goto' statements}
All of the \verb|goto| statements in the code above can be
removed or replaced. After the removal of \verb|goto| statements
we get the following much simpler code:
\begin{verbatim}
int[] V_0 = new int[args.Length];
int i = 0;
for (;;) {
if (i >= V_0.Length) {
break;
}
V_0[i] = Int32.Parse(args[i]);
i++;
}
\end{verbatim}
Note that the `false' body of the \verb|if| statement was removed
because it was empty.
\subsubsection{Simplifying loops}
\impref{Simplifying loops}
The code can be further simplified by pushing the loop initializer,
condition and iterator inside the \verb|for(;;)|:
\begin{verbatim}
int[] V_0 = new int[args.Length];
for (int i = 0; i < V_0.Length; i++) {
V_0[i] = Int32.Parse(args[i]);
}
\end{verbatim}
This concludes the decompilation of quick-sort.
\clearpage
{\vspace*{60mm} \centering This page is intentionally left blank\\ Please turn over \newpage}
\subsection{Original quick-sort (complete source code)}
\label{Original quick-sort}
\lstinputlisting[basicstyle=\footnotesize]
{./figs/QuickSort_original.cs}
\newpage
\subsection{Decompiled quick-sort (complete source code)}
\label{Decompiled quick-sort}
\lstinputlisting[basicstyle=\footnotesize]
{./figs/10_Short_type_names_2.cs}
\section{Advanced and unsupported features}
This section demonstrates the decompilation of a more advanced application.
The examples that follow were produced by decompiling a reversi game%
\footnote{Taken from the CodeProject. Written by Mike Hall.}.
The examples demonstrate advanced features of the decompiler
and in some cases its limitations (i.e.\ still unimplemented features).
\subsection{Properties and fields}
Properties and fields are well-supported.
\emph{Original:}
\begin{verbatim}
private int[,] squares;
private bool[,] safeDiscs;
public int BlackCount {
get { return blackCount; }
}
public int WhiteCount {
get { return whiteCount; }
}
\end{verbatim}
\emph{Decompiled:}
\begin{verbatim}
private System.Int32[,] squares;
private System.Boolean[,] safeDiscs;
public int BlackCount {
get { return blackCount; }
}
public int WhiteCount {
get { return whiteCount; }
}
\end{verbatim}
The decompiled code is correct.
The type \verb|System.Int32[,]| was not simplified to \verb|int[,]|.
\subsection{Short-circuit boolean expressions}
\emph{Original:}
\begin{verbatim}
if (r < 0 || r > 7 || c < 0 || c > 7 ||
(r - dr == row && c - dc == col) ||
this.squares[r, c] != color)
return false;
\end{verbatim}
\emph{Decompiled:}
\begin{verbatim}
if (i < 0 || i > 7 || j < 0 || j > 7 ||
i - dr == row && j - dc == col ||
squares[i, j] != color) {
return false;
}
\end{verbatim}
The decompiled code is correct. Variable names were lost during compilation.
By precedence rules it was possible to remove the parentheses.
\subsection{Short-circuit boolean expressions 2}
\emph{Original:}
\begin{verbatim}
if ((hasSpaceSide1 && hasSpaceSide2 ) ||
(hasSpaceSide1 && hasUnsafeSide2) ||
(hasUnsafeSide1 && hasSpaceSide2 ))
return true;
\end{verbatim}
\emph{Decompiled:}
\begin{verbatim}
if (flag && flag2 || flag && flag4 || flag3 && flag2) {
return true;
}
\end{verbatim}
The decompiled code is correct. Variable names were lost during compilation.
\subsection{Complex control nesting}
\emph{Original:}
\begin{verbatim}
bool statusChanged = true;
while (statusChanged)
{
statusChanged = false;
for (i = 0; i < 8; i++)
for (j = 0; j < 8; j++)
if (this.squares[i, j] != Board.Empty &&
!this.safeDiscs[i, j] &&
!this.IsOutflankable(i, j)) {
this.safeDiscs[i, j] = true;
statusChanged = true;
}
}
\end{verbatim}
\emph{Decompiled:}
\begin{verbatim}
for (bool flag = true; flag;) {
flag = false;
for (int i = 0; i < 8; i++) {
for (int j = 0; j < 8; j++) {
if (squares[i, j] != Empty &&
!safeDiscs[i, j] &&
!IsOutflankable(i, j)) {
safeDiscs[i, j] = true;
flag = true;
}
}
}
}
\end{verbatim}
The decompiled code is correct.
A \verb|for| loop was used instead of a \verb|while| loop.
The decompiler always uses \verb|for| loops.
In the example, four control structures are nested;
even more levels do not cause any problems.
\subsection{Multidimensional arrays}
Multidimensional arrays are supported.
\emph{Original:}
\begin{verbatim}
this.squares = new int[8, 8];
this.safeDiscs = new bool[8, 8];
// Clear the board and map.
int i, j;
for (i = 0; i < 8; i++)
for (j = 0; j < 8; j++) {
this.squares[i, j] = Board.Empty;
this.safeDiscs[i, j] = false;
}
\end{verbatim}
\emph{Decompiled:}
\begin{verbatim}
squares = new int[8, 8];
safeDiscs = new bool[8, 8];
for (int i = 0; i < 8; i++) {
for (int j = 0; j < 8; j++) {
squares[i, j] = Empty;
safeDiscs[i, j] = false;
}
}
\end{verbatim}
The decompiled code is correct and even slightly simplified.
\subsection{Dispose method}
This is the standard .NET dispose method for a form.
\emph{Original:}
\begin{verbatim}
if(disposing) {
if (components != null) {
components.Dispose();
}
}
\end{verbatim}
\emph{Decompiled:}
\begin{verbatim}
if (disposing && components) {
components.Dispose();
}
\end{verbatim}
The decompiler did a good job identifying that the two
\verb|if|s can be expressed as a short-circuit expression.
On the other hand, it failed to distinguish between being `true' and being
`non-null' -- on the bytecode level these are the same thing (together with
being `non-zero').
\subsection{Event handlers}
Event handlers are not supported.
\emph{Original:}
\begin{verbatim}
this.aboutMenuItem.Click +=
new System.EventHandler(this.aboutMenuItem_Click);
\end{verbatim}
\emph{Decompiled:}
\begin{verbatim}
aboutMenuItem.add_Click(
new System.EventHandler(this, IL__ldftn(aboutMenuItem_Click()))
);
\end{verbatim}
Note how the unsupported bytecode \verb|ldftn| is presented.
\subsection{Constructors}
\emph{Original:}
\begin{verbatim}
this.infoPanel.Location = new System.Drawing.Point(296, 32);
\end{verbatim}
\emph{Decompiled:}
\begin{verbatim}
infoPanel.Location = new System.Drawing.Point(296, 32);
\end{verbatim}
The decompiled code is correct including the constructor arguments.
\subsection{Property access}
\emph{Original:}
\begin{verbatim}
this.infoPanel.Name = "infoPanel";
\end{verbatim}
\emph{Decompiled:}
\begin{verbatim}
infoPanel.Name = "infoPanel";
\end{verbatim}
The decompiled code is correct.
Note that property assignments are actually calls to set methods --
in this case, \verb|set_Name(string)|.
\subsection{Object casting}
\emph{Original:}
\begin{verbatim}
this.Icon = ((System.Drawing.Icon)(resources.GetObject("$this.Icon")));
\end{verbatim}
\emph{Decompiled:}
\begin{verbatim}
Icon = (System.Drawing.Icon)V_0.GetObject("$this.Icon");
\end{verbatim}
The decompiled code is correct.
\subsection{Boolean values}
In the .NET Runtime there is no difference between
integers and booleans. Handling of this is delicate.
\emph{Original:}
\begin{verbatim}
return false;
\end{verbatim}
\emph{Decompiled:}
\begin{verbatim}
return false;
\end{verbatim}
The decompiled code is correct.
\emph{Original:}
\begin{verbatim}
this.infoPanel.ResumeLayout(false);
\end{verbatim}
\emph{Decompiled:}
\begin{verbatim}
infoPanel.ResumeLayout(0);
\end{verbatim}
This code is incorrect -- the case where a boolean is
passed as an argument has not yet been considered in the decompiler.
This is just laborious case-by-case work.
\subsection{The `dup' instruction}
The \verb|dup| instruction is handled well.
\emph{Decompiled: (`dup' unsupported)}
\begin{verbatim}
expr175 = this;
expr175.whiteCount = expr175.whiteCount + 1;
\end{verbatim}
\emph{Decompiled: (`dup' supported)}
\begin{verbatim}
whiteCount++;
\end{verbatim}
% coverage for a typical binaries (I like that one :-) )
% Anonymous methods, Enumerators, LINQ
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 2 Page
\cleardoublepage
\chapter{Conclusion}
The project was a success.
The goal of the project was to decompile an implementation
of a quick-sort algorithm and this goal was achieved well.
The C\# source code produced by the decompiler is
virtually identical to the original source code%
\footnote{See pages \pageref{Original quick-sort} and \pageref{Decompiled quick-sort} in the evaluation chapter.}.
In particular, the source code compiles back to identical .NET CLR code.
More complex programs than the quick-sort implementation
were considered and the decompiler was improved to handle them better.
The improvements include:
\begin{itemize}
\item Better support for several bytecodes related to:
creation and casting of objects, multidimensional arrays,
virtual method invocation, property access and field access.
\item Support for the \verb|dup| bytecode which does not
occur very frequently in programs. In particular, it does not
occur in the quick-sort implementation.
\item Arbitrarily complicated short-circuit boolean expressions
are handled. Furthermore, both the short-circuit
and the traditional boolean expressions are simplified
using rules of logic (e.g.\ using De Morgan's laws).
\item Nested high-level control structures are handled
well (e.g.\ nested loops).
\item The performance is improved by using basic blocks.
\item The removal of \verb|goto|s was improved to work
in almost any scenario (by using the `simulator' approach).
\end{itemize}
\section{Hindsight}
I am very satisfied with the current implementation.
Whenever I have seen an opportunity for improvement,
I have refactored the code right away.
As a result of this, I think that the algorithms and data structures
currently used are very well suited for the purpose.
If I were to reimplement the project, I would merely
save time by making the right design decisions right away.
Here are some issues that I have encountered and resolved:
\begin{itemize}
\item During the preparation phase, I spent a reasonable
amount of time researching algorithms for finding loops.
I found most of the algorithms either unintuitive or
not sufficiently robust for all scenarios.
Therefore, I have derived my own algorithm based on the
T1-T2 transformations.
\item I have initially used the \emph{CodeDom} library
for the abstract syntax tree.
In the end, \emph{NRefactory} proved much more suitable.
% \item I have initially directly used the objects provided by
% \emph{Cecil} library as the first code representation.
% Making an own copy turned out to be better since the objects
% could be more easily manipulated and annotated.
\item I have eventually made the stack-based and
variable-based code representations quite distinct and
decoupled. This allowed greater freedom for transformations.
\item Initially the links between nodes were manually kept
in synchronization during transformations.
It is more convenient and robust just to throw the links away
and recalculate them.
\item The removal of \verb|goto|s was gradually pushed
all the way to the last stage of decompilation where it is
most powerful and safest.
\end{itemize}
\section{Future work}
The theoretical problems have been resolved.
However, a lot of laborious work still needs to be done before
the decompiler will be able to handle all .NET executables.
The decompiler could be extended to recover C\# compile-time
features like anonymous methods, enumerators or LINQ expressions.
The decompiler greatly overlaps with the verification of .NET executables
(the code needs to be valid in order to be decompiled).
The project could be forked to create a .NET verifier.
I am the primary developer of the SharpDevelop debugger.
I would like to integrate the decompiler with the debugger so
that it is possible to debug applications with no source code.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\cleardoublepage
\chapter{Project Proposal}
\input{proposal}
\end{document}
\section{The \texorpdfstring{\lstinline{linqtowiki-codegen}}{linqtowiki-codegen} application}
\label{ltw-ca}
\lstinline{linqtowiki-codegen} is a simple console application
that can be used to access the functionality of LinqToWiki.Codegen.
In other words, it can generate a library for accessing a specific wiki using LinqToWiki.
Using its command-line arguments, one can specify the \ac{URL} of the wiki,
the directory to where the files will be generated, the namespace of the generated types,
the name of the generated assembly and the location of the props default file.
Some of the more advanced features of LinqToWiki.Codegen,
such as writing the generated C\# code to a specific directory,
are not available from this application.
The application writes a short usage note when run without arguments, which can be seen in Figure~\ref{lca-usage}.
\begin{figure}[htbp]
\begin{verbatim}
Usage: linqtowiki-codegen url-to-api [namespace [output-name]]
[-d output-directory] [-p props-file-path]
Examples: linqtowiki-codegen en.wikipedia.org LinqToWiki.Enwiki
linqtowiki-enwiki -d C:\Temp -p props-defaults-sample.xml
linqtowiki-codegen https://en.wikipedia.org/w/api.php
\end{verbatim}
\caption{Usage note of the \lstinline{linqtowiki-codegen} application}
\label{lca-usage}
\end{figure}
\section{Conclusion}
\p{When Conceptual Space Theory migrated from a natural-language and
philosophical environment to a more technical and scientific
foundation \mdash{} as a basis for modeling scientific data
and analyzing scientific theories and theory-formation
\mdash{} it also picked up certain evident practical applications.
For example, \CSML{} was a concrete proposal for technical
data modeling whose explicit goal was to be more
conceptually expressive and scientifically rigorous than
conventional \mdash{} or even \q{Semantic Web} \mdash{} data
sharing tactics. So one obvious domain for concretely
applying Conceptual Space Theory lies in the
communication and annotation of scientific
(and other technical research) data.
This use-case could certainly benefit from the
added structure of Hypergraph syntactic models
(which can engender hypergraph serialization formats)
and hypergraph-based type theories.
}
\p{So a Hypergraph/Conceptual Space hybrid can readily be
imagined as a kind of next-generation
extension of the Semantic Web or reincarnation of
\CSML{}, with an emphasis on sharing scientific
data in a format conducive to capturing the
theoretical context within which data is generated.
This is still removed from the \i{natural language}
origins of Conceptual Spaces, but it would mark
a further step in the evolution of
\Gardenfors{}'s theory from a linguistic to
a metascientific paradigm.
}
\p{But going even a step further, a data-sharing
framework emerging in the scientific context
may retroactively be utilized in a more
humanistic context as well; so an \HCS{} hybrid
may find applications in the conveying
of \i{humanities} data \mdash{} natural language
structures (parse trees or graphs, lexicons,
and so forth), sociological/demographic
data sets, digitized artifacts (art,
archaeology, museum science), etc.
In this scenario Conceptual Spaces might
be relevant to, say, Cognitive Linguistics
on two levels \mdash{} a practical, software-oriented
tool for linguistic research in its digital
practice, alongside a paradigm for natural language
semantics at the theoretical level. These two
modes of application may not have fully aligned
theoretical commitments, but they would
reveal a core Conceptual Space theory diversifying,
branching, and adapting to different practical
and theoretical requirements.
}
\subsection{Quasi-likelihood function}
\documentclass[12pt]{amsart}
\addtolength{\hoffset}{-2.25cm}
\addtolength{\textwidth}{4.5cm}
\addtolength{\voffset}{-2.5cm}
\addtolength{\textheight}{5cm}
\setlength{\parskip}{0pt}
\setlength{\parindent}{15pt}
\usepackage{amsthm}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{xfrac}
\usepackage[colorlinks = true, linkcolor = black, citecolor = black, final]{hyperref}
\usepackage{graphicx}
\usepackage{multicol}
\usepackage{marvosym}
\usepackage{wasysym}
\newcommand{\ds}{\displaystyle}
\pagestyle{myheadings}
\setlength{\parindent}{0in}
\pagestyle{empty}
\begin{document}
\thispagestyle{empty}
{\scshape [CLASS]} \hfill {\scshape \Large Notes} \hfill {\scshape [SEMESTER] [YEAR]}
\medskip
\hrule
\bigskip
\section*{[NAME] ([DATE])}
Go ahead -- put some text here!
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Conclusion %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter*{Conclusion}
\addcontentsline{toc}{chapter}{Conclusion}
\label{cha:conclusion}
\markboth{Conclusion}{Conclusion}
In this Ph.D. thesis, we contributed to the formulation and the resolution of the posture generation problem for robotics.
%Which is a subproblem of many robotics applications such as planning, trajectory generation and control.
%Such problems aim at finding a robot configuration that satisfies some high-level requests while ensuring its viability, in the sense that, in this configuration, the robot is stable, avoid collisions and respects its intrinsic limitations.
Posture generation is the problem of finding a robot configuration to fulfill high-level task goals under viability or user-specified constraints (e.g. find a posture that is statically stable, avoids non-desired contacts and respects the robot's intrinsic limitations).
This problem is traditionally formulated and solved as an optimization problem by minimizing a specific cost function under a set of constraints.
We described the formulation of the basic building blocks of the posture generation problem and proposed some extensions, such as a `smooth' formulation of non-inclusive contact constraints between two polygons, which allows finding optimal contact configurations in complex situations where two surfaces in contact cannot be included in each other.
This formulation proved very helpful for planning complex scenarios such as making the HRP-2 robot climb a ladder: it allowed us to automatically find contact configurations that would otherwise take a long time to find manually by trial and error.
%even when generate viable configurations of contact that would otherwise not be considered by usual formulations; it relies on the idea of adding to the problem a set of variables that represent an ellipse included in both polygons.
Robotics problems often contain variables that belong to non-Euclidean manifolds; these are traditionally handled by modifying the mathematical definition of the problem and adding extra variables and constraints to it.
We present a generic way to handle such variables in our formulation, without modifying the mathematical problem, and most importantly, we propose an adaptation of existing optimization techniques to solve constrained nonlinear optimization problems defined on non-Euclidean manifolds.
We then detail our implementation of such an algorithm, based on an SQP approach, which, to our knowledge, is the first implementation of a nonlinear solver on manifolds that can handle constraints.
This not only allows us to formulate mathematical problems more elegantly and ensure the validity of our variables at every step of the optimization process, but also enables us to have a better understanding of our solver, and flexibility in tuning it for robotics problems.
In turn, this allows us to investigate new ways of solving posture generation problems.
The search space of a posture generation problem is the Cartesian product of several (sub)manifolds in which different quantities/variables are defined (e.g.\ translations, rotations, joint angles, contact forces, etc.).
We take advantage of that structure to propose a posture generation framework where the variables and submanifolds are managed automatically, and geometric expressions can be written intuitively and are differentiated automatically.
This framework reduces the work required to develop new functions: functions of geometric expressions can be prototyped quickly and their derivatives are computed automatically, while the variable management lets one write each function on its own input space without worrying about where that input space lies within the global search space.
Our framework thus allows us to easily and elegantly write custom constraints on selected sets of variables defined on the submanifolds of the search space.
We exploit the capabilities of our framework to generate viable robot configurations with contacts on non-flat surfaces by parametrizing the location of the contact point with additional variables.
That approach allowed us to compute manipulation and locomotion postures involving solids defined only by their meshes, where the choice of the contact location was left entirely to the solver.
In that way, we computed postures of an HRP-4 robot holding the leg of an HRP-2 robot in its grippers, and others where, for example, HRP-4 climbs a stack of cubes and the contact locations are chosen automatically.
This was made possible by proposing a generic way to parametrize the surface of a solid represented by a mesh, based on the Catmull-Clark subdivision algorithm, and using it to compute configurations where the optimal location of contact on a complex object is chosen by the solver.
We took great care to eliminate many of the cumbersome aspects of writing a posture generation problem; in the end, the genericity of our framework allows the definition and use of a wide range of functions applied to any variables of the problem (joint angles, forces, torques, or any additional variables).
Such functions can then be used to define and solve custom posture generation problems with our solver and to compute viable postures that satisfy any user-defined tasks.
We evaluate the performance of our solver on problems relying heavily on the manifold formulation and show that it is superior to the classic approach in terms of convergence speed and time spent per iteration.
We then present an approach to evaluate and compare the performance of different resolution methods on several types of posture generation problems, which could help develop strategies for tuning our solver based on the problem at hand.
We developed our solver in order to solve posture generation problems, but we show that its capabilities can be leveraged to solve other kinds of problems, such as the identification of inertial parameters, where the manifold formulation guarantees that the optimization iterates are always physically valid.
For this problem, our solver and formulation proved more efficient than the traditional formulation solved with an off-the-shelf solver.
Finally, we presented our preliminary work on using posture generation for multi-contact planning in real, sensor-acquired environments.
Although we manage to get satisfying results out of our posture generator, the solver still requires some tuning of its options and of the initial guess to ensure good convergence rates.
We are fully aware that even though the computation times we observe are of the same order of magnitude as those found in the literature, improvements could and should be made in the future to make our posture generator more robust and faster, not only by improving the solver's implementation, but also by specializing it for robotic posture generation problems.
In particular, we believe that the restoration phase, especially the treatment of the Hessian updates, can be improved.
We could also try QP solvers other than LSSOL.
A sparse solver, for example, may be better suited to take advantage of the structure of our problem.
Most importantly, future work should specialize the solver for posture generation problems, either by finding optimal solver options or by modifying the optimization algorithm.
In future research, having an open solver that we are able to modify will be a crucial element of our ability to find and use the algorithms best suited to solving posture generation, and other problems encountered in robotics, more efficiently.
One could, for example, choose an initial guess from a database of robot postures and some solver options automatically, based on an initial study of the structure of the problem.
It would also be possible to take into account the very specific structure of a robot in the resolution algorithm.
It would be interesting to use our posture generator in higher-level software, such as a multi-contact stance planner, to automatically generate sequences of viable configurations to guide the motion of the robot.
Ideally, we would like to integrate posture generation with our multi-contact control framework to allow on-the-fly replanning of postures when the initial plan becomes infeasible, for example because the robot's motion drifts from the original plan.
\chapter{Bootstrapping}
\section{General technique}
\sysname{} is bootstrapped from an existing \commonlisp{}
implementation that, in addition to the functionality required by the
standard, also contains the library \texttt{closer-mop}. This
\commonlisp{} system is called the \emph{host}. The result of the
bootstrapping process is an \emph{image} in the form of an executable
file containing a \sysname{} system. This system is called the
\emph{target}. The target image does not contain a complete
\commonlisp{} system. It only contains enough functionality to load
the remaining system from compiled files.
\seesec{sec-bootstrapping-viable-image}.
In general, the target image can be thought of as containing a
\emph{graph} of \commonlisp{} objects that have been placed in memory
according to the spaces managed by the memory manager. To create this
graph, we first generate an \emph{isomorphic} graph of host objects in
the memory of an executing host system. To generate the target image,
the isomorphic host graph is traversed, creating a target version of
each object in the host graph, and placing that object on an
appropriate address in the target image.
The isomorphic host graph contains objects that are \emph{analogous}
to their target counterparts as follows:
\begin{itemize}
\item A target \texttt{fixnum} is represented as a host integer.
Whether the integer is a host fixnum or not depends on the fixnum
range of the host.
\item A target \texttt{character} is represented as a host character.
\item A target \texttt{cons} cell is represented as a host
\texttt{cons} cell.
\item A target general instance is represented as a host
\texttt{standard-object} for the \emph{header} and a host
\texttt{simple-vector} for the \emph{rack}.
\item Target objects such as bignums or floats are not needed at this
stage of bootstrapping, so they do not have any representation as
host objects.
\end{itemize}
\section{Global environments for bootstrapping}
During different stages of bootstrapping, a particular \emph{name} (of
a function, a class, etc.) must be associated with different objects. As
a trivial example, the function \texttt{allocate-object} in the host
system is used to allocate all standard objects. But
\texttt{allocate-object} is also a target function for allocating
objects according to the way such objects are represented by the
target system. These two functions must be available simultaneously.
Most systems solve this problem by using temporary names for target
packages during the bootstrapping process. For example, even though
in the final target system, the name \texttt{allocate-object} must be
a symbol in the \texttt{common-lisp} package, during the bootstrapping
process, the name might be a symbol in a package with a different
name.
In \sysname{} we solve the problem by using multiple \emph{first-class
global environments}.
For the purpose of bootstrapping, it is convenient to think of
\texttt{eval} as consisting of two distinct operations:
\begin{itemize}
\item Compile. A \emph{compilation environment} is used to expand
macros and for other compilation purposes. The result of
compilation is code that is \emph{untied} to any particular
environment.
\item Tie. The untied code produced by the first step is \emph{tied}
to a particular run-time environment. Tying is accomplished by
calling the top-level function created by the compilation, passing
it the result of evaluating \texttt{load-time-value} forms in a
particular environment.
\end{itemize}
The reason we need to separate these two operations is that for
bootstrapping purposes, the two are going to use distinct global
environments.
\section{Viable image}
\label{sec-bootstrapping-viable-image}
An image I is said to be \emph{viable} if and only if it is possible
to obtain a complete \commonlisp{} system by starting with I and loading a
sequence of ordinary compiled files.
There are many ways of making a viable image. We are interested in
making a \emph{small} viable image. In particular, we want the
initial image \emph{not to contain any evaluator} (no compiler, no
interpreter), because we want to be able to load the compiler as a
bunch of compiled files.
By requiring the viable image not to contain the compiler, we place
some restrictions on the code in it.
One such restriction is that creating the discriminating function of a
generic function is not allowed to invoke the compiler. Since doing
so is typical in any CLOS implementation, we must find a different
way. We solve this problem by using a general-purpose discriminating
function that traverses the call history and invokes the effective
method function that corresponds to the arguments of the call; if
no effective method function can be found, it calls the machinery to
compute the effective method function and adds the new effective
method function to the call history.
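The following is a minimal sketch of such a discriminating function
(the helpers \texttt{stamps-of-required-arguments},
\texttt{call-history}, and
\texttt{compute-and-cache-effective-method-function} are hypothetical
names, not the actual \sysname{} protocol):
\begin{verbatim}
(defun default-discriminating-function (generic-function arguments)
  ;; Look up the stamps of the required arguments in the call
  ;; history of the generic function.
  (let* ((stamps (stamps-of-required-arguments
                  generic-function arguments))
         (entry (assoc stamps (call-history generic-function)
                       :test #'equal)))
    (if entry
        ;; A matching effective method function exists; invoke it.
        (apply (cdr entry) arguments)
        ;; Otherwise compute one, add it to the call history, and
        ;; invoke it.  No call to the compiler is involved.
        (apply (compute-and-cache-effective-method-function
                generic-function stamps arguments)
               arguments))))
\end{verbatim}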
Another such restriction concerns the creation of classes. Typically,
when a class is created, the reader and writer methods are created by
invoking the compiler. Again, a different way must be found. To solve
this problem, the method function of an accessor method is created
as a closure that closes over the index of the slot in the rack.
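A minimal sketch of this technique (the accessor \texttt{rack} and
the method-function calling convention are simplified here):
\begin{verbatim}
(defun make-reader-method-function (slot-index)
  ;; Return a method function that closes over SLOT-INDEX and reads
  ;; the corresponding cell of the rack directly, so that no call
  ;; to the compiler is needed.
  (lambda (arguments next-methods)
    (declare (ignore next-methods))
    (svref (rack (first arguments)) slot-index)))

(defun make-writer-method-function (slot-index)
  ;; The arguments of a writer method are the new value followed
  ;; by the instance.
  (lambda (arguments next-methods)
    (declare (ignore next-methods))
    (setf (svref (rack (second arguments)) slot-index)
          (first arguments))))
\end{verbatim}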
\subsection{Arguments passed to \texttt{fasl} function}
\label{sec-bootstrapping-arguments-to-fasl-function}
Recall that the top-level function in a compiled file is passed a
certain number of arguments that are functions that can be called as
part of the loading process:
\begin{itemize}
\item A function for allocating a string of a certain length. This
  allocation cannot be accomplished with a more general function
  that would require arguments other than fixnums.
\item The function \texttt{(setf aref)} to fill a string with
characters.
\item The function \texttt{find-package} to obtain a package from its
name, which is a string.
\item The function \texttt{intern} to obtain a symbol from its name
and a package.
\item The function \texttt{fdefinition} to obtain a globally defined
function, given its name.
\end{itemize}
Other operations, such as obtaining a class metaobject or finding the
value of a special variable, are accomplished by calling functions
obtained using the steps above.
The next thing to determine is exactly what functions must exist in
the minimal viable image in order for it to be possible to create a
complete \commonlisp{} system.
\subsection{Creating a string}
\label{sec-bootstrapping-creating-a-string}
The code in a compiled file for creating a string looks like this:
\begin{verbatim}
(let ((string (funcall make-string <length>)))
(funcall setf-aref <char-0> string 0)
(funcall setf-aref <char-1> string 1)
...
\end{verbatim}
where \texttt{<length>} is the length of the string, given as a
fixnum.
Here \texttt{funcall} is not a call to the \commonlisp{} function of
that name, but rather a built-in mechanism for calling a function that
is the value of a lexical variable. The variable \texttt{make-string}
is the first argument to the top-level function of the compiled file
as mentioned in \ref{sec-bootstrapping-arguments-to-fasl-function}.
Similarly, the variable \texttt{setf-aref} contains the function in
the second argument to the top-level function of the compiled file.
\subsection{Finding a package}
\label{sec-bootstrapping-finding-a-package}
To find a package when a compiled file is loaded, the following code
is executed:
\begin{verbatim}
(let* ((name <make-name-as-string>)
(package (funcall find-package name)))
...
\end{verbatim}
where \texttt{<make-name-as-string>} is the code that is shown in
\ref{sec-bootstrapping-creating-a-string}.
The variable \texttt{find-package} is the third argument to the
top-level function of the compiled file.
\subsection{Finding an existing interned symbol}
\label{sec-bootstrapping-finding-an-existing-interned-symbol}
To find an existing interned symbol when a compiled file is loaded,
the following code is executed:
\begin{verbatim}
(let* ((package <find-the-package>)
(name <make-name-as-string>)
       (symbol (funcall intern name package)))
  ...
\end{verbatim}
where \texttt{<find-the-package>} is the code that is shown in
\ref{sec-bootstrapping-finding-a-package} and
\texttt{<make-name-as-string>} is the code that is shown in
\ref{sec-bootstrapping-creating-a-string}.
The variable \texttt{intern} is the fourth argument to the
top-level function of the compiled file.
\subsection{Finding an existing function named by a symbol}
\label{sec-bootstrapping-finding-an-existing-function-named-by-a-symbol}
To find an existing function named by a symbol when a compiled file is
loaded, the following code is executed:
\begin{verbatim}
(let* ((name <find-the-symbol>)
(function (funcall fdefinition name)))
...
\end{verbatim}
where \texttt{<find-the-symbol>} is the code that is shown in
\ref{sec-bootstrapping-finding-an-existing-interned-symbol}.
The variable \texttt{fdefinition} is the fifth argument to the
top-level function of the compiled file.
\subsection{Finding an existing function named by a list}
\label{sec-bootstrapping-finding-an-existing-function-named-by-a-list}
To find an existing function named by a list when a compiled file is
loaded, the following code is executed:
\begin{verbatim}
(let* ((symbol <find-the-symbol>)
(setf <find-the-symbol-setf>)
       (nil <find-the-symbol-nil>)
(cons <find-the-function-cons>)
(name (funcall cons setf (funcall cons symbol nil)))
(function (funcall fdefinition name)))
...
\end{verbatim}
Here, \texttt{<find-the-symbol>}, \texttt{<find-the-symbol-setf>}, and
\texttt{<find-the-symbol-nil>} contain the code shown in
\ref{sec-bootstrapping-finding-an-existing-interned-symbol}. The code
indicated by \texttt{<find-the-function-cons>} is the code shown in
\ref{sec-bootstrapping-finding-an-existing-function-named-by-a-symbol}.
The variable \texttt{fdefinition} is the fifth argument to the
top-level function of the compiled file.
\subsection{Finding an existing class}
\label{sec-bootstrapping-finding-an-existing-class}
To find an existing class when a compiled file is loaded, the
following code is executed:
\begin{verbatim}
(let* ((name <find-the-symbol>)
(find-class <find-the-function-find-class>)
(class (funcall find-class name)))
...
\end{verbatim}
Here, \texttt{<find-the-symbol>} contains the code shown in
\ref{sec-bootstrapping-finding-an-existing-interned-symbol}, and
\texttt{<find-the-function-find-class>} contains the code shown in
\ref{sec-bootstrapping-finding-an-existing-function-named-by-a-symbol}.
\subsection{Executing generic functions}
\label{sec-bootstrapping-executing-generic-functions}
The functionality that implements generic function dispatch must also
be in place in the minimal viable image, including:
\begin{itemize}
\item \texttt{compute-discriminating-function}. In the initial
  image, this function will be a general function that scans the
call history and invokes the corresponding effective method. If no
effective method corresponding to the arguments exists, then a new
one is computed from the applicable methods.
\item \texttt{compute-applicable-methods}.
\item \texttt{compute-applicable-methods-using-classes}.
\item \texttt{compute-effective-method}. This function returns a form
to be compiled, but since we do not have the compiler, we instead
use some more direct method. We suggest calling a
\sysname{}-specific function called (say)
\texttt{compute-effective-method-function} instead. This call
returns a closure that can be added to the call history of the
generic function. This function does not call the compiler, so it
can be part of the initial image. However, this new function can be
a generic function, so it can take different method combinations
into account, making it possible to support all the method
combinations used by the compiler.
\end{itemize}
\subsection{Creating general instances}
\label{sec-bootstrapping-creating-general-instances}
In order to create an instance of a class, the following functions
are called, so they must exist and be executable:
\begin{itemize}
\item \texttt{make-instance}. Called directly to create the instance.
\item \texttt{allocate-instance}. Called by \texttt{make-instance} to
allocate storage for the instance.
\item \texttt{initialize-instance}. Called by \texttt{make-instance}
to initialize the slots of the instance.
\item \texttt{shared-initialize}. Called by
\texttt{initialize-instance}.
\item \texttt{slot-boundp-using-class}. Called by
  \texttt{shared-initialize}. This function works by taking the slot
  index from the effective-slot object and then using the function
  \texttt{standard-instance-access} to check the value of the slot.
  Thus, it does not use any complex generic-function machinery (see
  the sketch after this list).
\item \texttt{(setf slot-value-using-class)}. Called by
\texttt{shared-initialize}. Like the previous function, this one
also uses the slot index directly to set the slot value, so no
complicated machinery is required.
\end{itemize}
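A minimal sketch of the two slot-access methods mentioned above (the
accessor \texttt{slot-definition-index}, the unbound marker
\texttt{+unbound+}, and the writer on
\texttt{standard-instance-access} are assumptions, not necessarily
part of the actual implementation):
\begin{verbatim}
(defmethod slot-boundp-using-class
    ((class standard-class) object slot)
  ;; Access the slot cell directly through its index; no complex
  ;; generic-function machinery is involved.
  (not (eq (standard-instance-access
            object (slot-definition-index slot))
           +unbound+)))

(defmethod (setf slot-value-using-class)
    (new-value (class standard-class) object slot)
  ;; Likewise, write the slot cell directly through its index.
  (setf (standard-instance-access
         object (slot-definition-index slot))
        new-value))
\end{verbatim}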
Most of these functions are generic functions, so they will use the
functionality described in
\refSec{sec-bootstrapping-executing-generic-functions}.
\subsection{Creating class metaobjects}
A class metaobject is itself a general instance, so the machinery
described in \refSec{sec-bootstrapping-creating-general-instances} is
required. In addition, all the functionality for initializing class
metaobjects is necessary. This functionality is fairly simple,
because the complex part is accomplished during class finalization.
Some error checking is typically done, but it does not have to be
present in the initial viable image. It can be loaded later as
auxiliary methods on \texttt{shared-initialize}.
\subsection{Class finalization}
The class finalization protocol involves calling the following
functions:
\begin{itemize}
\item \texttt{finalize-inheritance}, the main entry point.
\item \texttt{compute-class-precedence-list} to compute the
precedence list. The value computed by this function is associated
with the class metaobject using some unknown mechanism. We can
either use \texttt{reinitialize-instance} or a slot writer.
\item \texttt{compute-slots} to compute a list of effective
slot definition metaobjects.
\item \texttt{effective-slot-definition-class}. This function is
called by \texttt{compute-slots} to determine what class to use in
order to instantiate the effective slot definition metaobjects.
\item \texttt{compute-effective-slot-definition}. This function is
also called by \texttt{compute-slots}.
\end{itemize}
\subsection{Creating generic-function metaobjects}
A generic-function metaobject is itself a general instance, so the
machinery described in
\refSec{sec-bootstrapping-creating-general-instances} is required. In
addition, all the functionality for initializing generic-function
metaobjects is necessary.
Initializing a generic function metaobject is mainly a matter of
checking certain argument combinations for validity, and of supplying
defaults for certain keyword arguments if not supplied. This
functionality is provided by auxiliary methods on
\texttt{initialize-instance} and \texttt{shared-initialize}, so those
auxiliary methods must be present in the image.
\subsection{Creating method metaobjects}
A method metaobject is itself a general instance, so the machinery
described in \refSec{sec-bootstrapping-creating-general-instances} is
required. In addition, all the functionality for initializing method
metaobjects is necessary.
Notice that methods will be created when compiled files are being
loaded, so the code in the method will already exist in the compiled
file. In particular, the function given as the \texttt{:function}
keyword argument to \texttt{make-instance} has already been processed
by \texttt{make-method-lambda} and then compiled by the cross
compiler. Therefore, none of this machinery needs to exist in the
minimal viable image.
In other words, initializing a method metaobject is mainly a matter of
checking the validity of supplied arguments.
\subsection{Creating slot definition metaobjects}
A slot definition metaobject is itself a general instance, so the
machinery described in
\refSec{sec-bootstrapping-creating-general-instances} is required. In
addition, all the functionality for initializing slot definition
metaobjects is necessary.
The only potential complication here is the keyword argument
\texttt{:initfunction}. However, the code for this function has
already been created and compiled by the cross compiler by the time
the compiled file is being loaded. For the remaining keyword
arguments, it is just a matter of checking their validity.
\section{Bootstrapping stages}
\subsection{Stage 1, bootstrapping CLOS}
\subsubsection{Definitions}
\begin{definition}
A \emph{simple instance} is an instance of some class that is itself
neither a class nor a generic function.
\end{definition}
\begin{definition}
A \emph{host class} is a class in the host system. If it is an
instance of the host class \texttt{standard-class}, then it is
typically created by the host macro \texttt{defclass}.
\end{definition}
\begin{definition}
A \emph{host instance} is an instance of a host class. If it is an
instance of the host class \texttt{standard-object}, then it is
typically created by a call to the host function
\texttt{make-instance} using a host class or the name of a host class.
\end{definition}
\begin{definition}
A \emph{host generic function} is a generic function created by the
host macro \texttt{defgeneric}, so it is a host instance of the host
class \texttt{generic-function}. Arguments to the discriminating
function of such a generic function are host instances. The host
function \texttt{class-of} is called on some required arguments in
order to determine what methods to call.
\end{definition}
\begin{definition}
A \emph{host method} is a method created by the host macro
\texttt{defmethod}, so it is a host instance of the host class
\texttt{method}. The class specializers of such a method are host
classes.
\end{definition}
\begin{definition}
A \emph{simple host instance} is a host instance that is neither a
host class nor a host generic function.
\end{definition}
\begin{definition}
An \emph{ersatz instance} is a target instance represented as a host
data structure, using a host standard object to represent the
\emph{header} and a host simple vector to represent the \emph{rack}.
In fact, the header is an instance of the host class
\texttt{funcallable-standard-object} so that some ersatz instances can
be used as functions in the host system.
\end{definition}
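A minimal sketch of this representation (the slot and accessor names
are hypothetical):
\begin{verbatim}
(defclass header (closer-mop:funcallable-standard-object)
  ((%class :initarg :class :accessor header-class)
   (%rack :initarg :rack :accessor header-rack))
  (:metaclass closer-mop:funcallable-standard-class))

;; An ersatz instance pairs a header with a host simple vector
;; serving as the rack.
(defun make-ersatz-instance (class rack-size)
  (make-instance 'header
                 :class class
                 :rack (make-array rack-size :initial-element nil)))
\end{verbatim}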
\begin{definition}
An ersatz instance is said to be \emph{pure} if the class slot of the
header is also an ersatz instance. An ersatz instance is said to be
\emph{impure} if it is not pure. See below for more information on
impure ersatz instances.
\end{definition}
\begin{definition}
An \emph{ersatz class} is an ersatz instance that can be instantiated
to obtain another ersatz instance.
\end{definition}
\begin{definition}
An \emph{ersatz generic function} is an ersatz instance that is also a
generic function. It is possible for an ersatz generic function to be
executed in the host system because the header object is an instance
of the host class \texttt{funcallable-standard-object}. The methods
on an ersatz generic function are ersatz methods.
\end{definition}
\begin{definition}
An \emph{ersatz method} is an ersatz instance that is also a method.
\end{definition}
\begin{definition}
A \emph{bridge class} is a representation of a target class as a
simple host instance. An impure ersatz instance has a bridge class in
the class slot of its header. A bridge class can be instantiated to
obtain an impure ersatz instance.
\end{definition}
\begin{definition}
A \emph{bridge generic function} is a target generic function
represented as a simple host instance, though it is an instance of the
host class \texttt{funcallable-standard-object}, so it can be
executed by the host.
Arguments to a bridge generic function are ersatz instances. The
bridge generic function uses the
\emph{stamp}
\seesec{sec-generic-function-dispatch-the-discriminating-function} of
the required arguments to dispatch on.
The methods on a bridge generic function are bridge methods.
\end{definition}
\begin{definition}
A \emph{bridge method} is a target method represented by a simple host
instance. The class specializers of such a method are bridge classes.%
\fixme{check the truth of that statement.}
The \emph{method function} of a bridge method is an ordinary host
function.
\end{definition}
\subsubsection{Preparation}
Before bootstrapping can start, a certain number of \asdf{} systems
must be loaded, and in particular the system called
\texttt{sicl-clos-boot-support}.
In addition to the host environment, four different \sysname{}
first-class environments are involved. We shall refer to them as
$E_1$, $E_2$, $E_3$, and $E_4$.
\subsubsection{Phase 1}
The purpose of phase~1 is:
\begin{itemize}
\item to create host generic functions in $E_2$ corresponding to all
the accessor functions defined by \sysname{} on standard MOP
classes, and
\item to create a hierarchy in $E_1$ of host standard classes that has
the same structure as the hierarchy of MOP classes.
\end{itemize}
Three different environments are involved in phase~1:
\begin{itemize}
\item The global host environment $H$ is used to find the host classes
named \texttt{standard-class}, \texttt{funcallable-standard-class},
\texttt{built-in-class}, and \texttt{standard-generic-function}.
These classes are used by \texttt{make-instance} in order to create
  the classes in $E_1$ and the generic functions in $E_2$.
\item The run-time environment $E_1$ is where instances of the host
classes named \texttt{standard-class},
\texttt{funcallable-standard-class}, and \texttt{built-in-class}
will be associated with the names of the MOP hierarchy of classes.
These instances are thus host classes.
\item The run-time environment $E_2$ is where instances of the host
class named \texttt{standard-generic-function} will be associated
with the names of the different accessors specialized to host
classes created in $E_1$.
\end{itemize}
One might ask at this point why generic functions are not defined in
the same environment as classes. The simple answer is that there are
some generic functions that were automatically imported into $E_1$
from the host, that we still need in $E_1$, and that would have been
overwritten by new ones if we had defined new generic functions in
$E_1$.
Several adaptations are necessary in order to accomplish phase~1:
\begin{itemize}
\item The function \texttt{ensure-generic-function} is defined in
$E_1$ and operates in $E_2$. It checks whether there is already a
function with the name passed as an argument in $E_2$, and if so, it
returns that function. It makes no verification that such an
existing function is really a generic function; it assumes that it
is. It also assumes that the parameters of that generic function
correspond to the arguments of \texttt{ensure-generic-function}. If
there is no generic function with the name passed as an argument in
$E_2$, it creates an instance of the host class
  \texttt{standard-generic-function} and associates it with the name
  in $E_2$; a sketch of this behavior is shown after this list. We
  define \texttt{ensure-generic-function} in $E_1$ so that it can
  also be used to find accessor functions when MOP classes are
  instantiated and methods need to be added to these accessor
  functions.
\item The function \texttt{ensure-class} has a special version in
$E_1$. Rather than checking for an existing class, it always
creates a new one. Furthermore, before calling the host function
\texttt{make-instance}, it removes the \texttt{:readers} and
\texttt{:writers} entries from the canonicalized slot specifications
in the \texttt{:direct-slots} keyword argument. The result is that
the class will be created without any slot accessors. We must do it
  this way to prevent the class-initialization protocol of the host
  from creating these accessors in the host global environment $H$.
  Instead, \texttt{ensure-class} will create instances of the host
  class \texttt{standard-method} whose method functions call the host
  functions \texttt{slot-value} and \texttt{(setf slot-value)}, and it
will add these methods to the appropriate generic function found in
$E_2$.
\end{itemize}
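As mentioned above, the special version of
\texttt{ensure-generic-function} can be sketched as follows (the
environment accessor \texttt{find-function} and the variable
\texttt{*e2*} are hypothetical stand-ins for the first-class
environment protocol):
\begin{verbatim}
;; This definition lives in E1 and operates on E2.
(defun ensure-generic-function (name &rest keys &key &allow-other-keys)
  ;; If a function with this name exists in E2, assume that it is
  ;; a generic function with matching parameters and return it.
  (let ((existing (find-function name *e2*)))
    (or existing
        ;; Otherwise, create a host generic function and associate
        ;; it with the name in E2.
        (setf (find-function name *e2*)
              (apply #'make-instance 'standard-generic-function
                     :name name keys)))))
\end{verbatim}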
Phase~1 is divided into two steps:
\begin{enumerate}
\item First, the \texttt{defgeneric} forms corresponding to the
accessors of the classes of the MOP hierarchy are evaluated using
$E_1$ as both the compilation environment and run-time environment.
The result of this step is a set of host generic functions in $E_2$,
each having no methods.
\item Next, the \texttt{defclass} forms corresponding to the classes
of the MOP hierarchy are evaluated using $E_1$ as both the
compilation environment and run-time environment. The result of
this step is a set of host classes in $E_1$ and host standard
methods on the accessor generic functions created in step~1
specialized to these classes.
\end{enumerate}
\subsubsection{Phase 2}
The purpose of phase~2 is to create a hierarchy of bridge classes that
has the same structure as the hierarchy of MOP classes.
Four different environments are involved in phase~2:
\begin{itemize}
\item The compilation environment $C_2$.
\item The run-time environment $E_1$ is used to look up metaclasses to
instantiate in order to create the bridge classes.
\item The run-time environment $E_2$ is the one in which bridge
classes will be associated with names.
\item The run-time environment $E_3$ is the one in which bridge
generic functions will be associated with names.
\end{itemize}
We create bridge classes corresponding to the classes of the MOP
hierarchy. When a bridge class is created, it will automatically
create bridge generic functions corresponding to slot readers and
writers. This is done by calling \texttt{ensure-generic-function} of
phase 1.
Creating bridge classes this way will also instantiate the host class
\texttt{target:direct-slot-definition}, so that our bridge classes
contain host instances of that class in their slots.
In this phase, we also prepare for the creation of ersatz instances.
\subsubsection{Phase 3}
The purpose of this phase is to create ersatz generic functions and
ersatz classes, by instantiating bridge classes.
At the end of this phase, we have a set of ersatz instances, some of
which are ersatz classes, except that the \texttt{class} slot of the
header object of every such instance is a bridge class. We also have
a set of ersatz generic functions (mainly accessors) that are ersatz
instances like all the others.
\subsubsection{Phase 4}
The first step of this phase is to finalize all the ersatz classes
that were created in phase 3. Finalization will create ersatz
instances of bridge classes corresponding to effective slot
definitions.
The second step is to \emph{patch} all the ersatz instances allocated
so far. There are two different ways in which ersatz instances must
be patched. First of all, all ersatz instances have a bridge class in
the \texttt{class} slot of the header object. We patch this
information by accessing the \emph{name} of the bridge class and
replacing the slot contents by the ersatz class with the same name.
Second, ersatz instances of the class \texttt{standard-object} contain
a list of effective slot definition objects in the second word of the
rack, except that those effective slot definition objects
are bridge instances, because they were copied from the
\texttt{class-slots} slot of the bridge class when the bridge class
was instantiated to obtain the ersatz instance. Since all ersatz
classes were finalized during the first step of this phase, they now
all have a list of effective slot definition objects, and those
objects are ersatz instances. Patching consists of replacing the
second word of the rack of all instances of
\texttt{standard-object} by the contents of the \texttt{class-slots}
slot of the class object of the instance, which is now an ersatz
class.
The final step in this phase is to \emph{install} some of the
remaining bridge generic functions so that they are the
\texttt{fdefinition}s of their respective names. We do not install
all remaining bridge generic functions, because some of them would
clobber host generic functions with the same name that are still
needed.
At the end of this phase, we have a complete set of bridge generic
functions that operate on ersatz instances. We still need bridge
classes to create ersatz instances, because the \emph{initfunction}
needs to be executed for slots that require it, and only host
functions are executable at this point.
\subsubsection{Phase 5}
The purpose of this phase is to create ersatz instances for all
objects that are needed in order to obtain a viable image, including:
\begin{itemize}
\item ersatz built-in classes such as \texttt{package}, \texttt{symbol},
\texttt{string}, etc.,
\item ersatz instances of those classes, such as the required
packages, the symbols contained in those packages, the names of
  those symbols, etc.,
\item ersatz standard classes for representing the global environment
  and its contents, and
\item ersatz instances of those classes.
\end{itemize}
\subsubsection{Phase 6}
The purpose of this phase is to replace all the host instances that
have been used so far as part of the entire ersatz structure, such as
symbols, lists, and integers, by their ersatz counterparts.
\subsubsection{Phase 7}
The purpose of this phase is to take the simulated graph of objects
used so far and transfer it to a \emph{memory image}.
\subsubsection{Phase 8}
Finally, the memory image is written to a binary file.
\subsection{Stage 2, compiling macro definitions}
This stage of the bootstrapping process consists of using the cross
compiler to compile files containing definitions of standard macros
that will be needed for compiling other files.
When a \texttt{defmacro} form is compiled by the cross compiler, we
distinguish between the two parts of that \texttt{defmacro} form, namely the
\emph{expander code} and the \emph{resulting expansion code}. The
\emph{expander code} is the code that will be executed in order to
compute the resulting expansion code when the macro is invoked. The
\emph{resulting expansion code} is the code that replaces the macro
call form.
As an example, consider the hypothetical definition of the
\texttt{and} macro shown in \refCode{code-defmacro-and}.
\begin{codefragment}
\inputcode{code-defmacro-and.code}
\caption{\label{code-defmacro-and}
Example implementation of the \texttt{and} macro.}
\end{codefragment}
In \refCode{code-defmacro-and}, the occurrences of \texttt{car},
\texttt{cdr}, \texttt{null}, and \texttt{cond} are part of the
\emph{expander code} whereas the occurrence of \texttt{when}, of
\texttt{t}, and the occurrence of \texttt{and} in the last line are
part of the resulting expansion code.
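Since the code fragment itself is kept in a separate file, we give
here a plausible reconstruction consistent with the description above
(this is not necessarily the exact \sysname{} definition):
\begin{verbatim}
(defmacro and (&rest forms)
  (cond ((null forms) t)
        ((null (cdr forms)) (car forms))
        (t `(when ,(car forms)
              (and ,@(cdr forms))))))
\end{verbatim}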
The result of expanding the \texttt{defmacro} form in
\refCode{code-defmacro-and} is shown in
\refCode{code-macro-expansion-and}.
\begin{codefragment}
\inputcode{code-macro-expansion-and.code}
\caption{\label{code-macro-expansion-and}
Expansion of the macro call.}
\end{codefragment}
Thus, when the code in \refCode{code-defmacro-and} is compiled by the
cross compiler, it is first expanded to the code in
\refCode{code-macro-expansion-and}, and the resulting code is compiled
instead. Now \refCode{code-macro-expansion-and} contains an
\texttt{eval-when} at the top level with all three situations (i.e.,
\texttt{:compile-toplevel}, \texttt{:load-toplevel}, and
\texttt{:execute}). As a result, two things happen to the
\texttt{funcall} form of \refCode{code-macro-expansion-and}:
\begin{enumerate}
\item It is \emph{evaluated} by the \emph{host function}
\texttt{eval}.
\item It is \emph{minimally compiled} by the cross compiler.
\end{enumerate}
In order for the evaluation by the host function \texttt{eval} to be
successful, the following must be true:
\begin{itemize}
\item All the \emph{functions} and \emph{macros} that are
\emph{invoked} as a result of the call to \texttt{eval} must exist.
In the case of \refCode{code-macro-expansion-and}, the function
\texttt{(setf sicl-environment::macro-function)} must exist, and that
is all.
\item All the \emph{macros} that occur in macro forms that are
\emph{compiled} as a result of the call to \texttt{eval} must
exist. These are the macros of the expansion code; in our example
only \texttt{cond}. Clearly, if only standard \commonlisp{} macros are
used in the expansion code of macros, this requirement is
automatically fulfilled.
\item It is \emph{preferable}, though not absolutely necessary, for the
  \emph{functions} that occur in function forms that are
  \emph{compiled} as a result of the call to \texttt{eval} to exist. If
they do not exist, the compilation will succeed, but a warning will
probably be issued. These functions are the functions of the
expansion code; in our example \texttt{car}, \texttt{cdr}, and
  \texttt{null}. Again, if only standard \commonlisp{} functions are used in
the expansion code of macros, this requirement is automatically
fulfilled. It is common, however, for other functions to be used as
well. In that case, those functions should preferably have been
loaded into the host environment first.
\end{itemize}
In order for the minimal compilation by the cross compiler to be
successful, the following must be true:
\begin{itemize}
\item All the \emph{macros} that occur in macro forms that are
\emph{minimally compiled} by the cross compiler must exist. These
are again the macros of the expansion code; in our example only
\texttt{cond}. Now, the cross compiler uses the \emph{compilation
environment} of the \emph{target} when looking up macro
definitions. Therefore, in order for the example in
\refCode{code-defmacro-and} to work, a file containing the
definition of the macro \texttt{cond} must first be compiled by the
cross compiler.
\item While it would have been desirable for the \emph{functions} that
occur in function forms that are \emph{minimally compiled} by the
cross compiler to exist, this is typically not the case.%
\fixme{Investigate the possibility of first compiling a bunch of
\texttt{declaim} forms containing type signatures of most
standard \commonlisp{} functions used in macro expansion code.}
As a
result, the cross compiler will emit warnings about undefined
functions. The generated code will still work, however.
\end{itemize}
Of the constraints listed above, the most restrictive is the one that
imposes an order between the files to be cross compiled, i.e., that
the macros of the expansion code must be cross compiled first. It is
possible to avoid this restriction entirely by using \emph{auxiliary
functions} rather than macros. The alternative implementation of
the \texttt{and} macro in \refCode{code-defmacro-and-2} shows how this
is done in the extreme case.
\begin{codefragment}
\inputcode{code-defmacro-and-2.code}
\caption{\label{code-defmacro-and-2}
Alternative implementation of the \texttt{and} macro.}
\end{codefragment}
We use the technique of \refCode{code-defmacro-and-2} only when the
expansion code is fairly complex. An example of a rather complex
expansion code is that of the macro \texttt{loop} which uses mutually
recursive functions and fairly complex data structures. When this
technique is used, we can even use a macro to implement its own
expansion code. For instance, nothing prevents us from using
\texttt{loop} to implement the functions of the expander code of
\texttt{loop}, because when the \texttt{loop} macro is used to expand
code in the cross compiler, the occurrences of \texttt{loop} in the
functions called by the expander code are executed by the host. As it
turns out, we do not do that, because we would like for the \sysname{}
implementation of \texttt{loop} to be used as a drop-in extension in
implementations other than \sysname{}.
%!TEX root = ../OGUSAdoc.tex
The previous chapters derive all the equations necessary to solve for the steady-state and nonsteady-state equilibria of this model. However, because labor productivity is growing at rate $g_y$ as can be seen in the firms' production function \eqref{EqFirmsCESprodfun} and the population is growing at rate $\tilde{g}_{n,t}$ as defined in \eqref{EqPopGrowthTil}, the model is not stationary. Different endogenous variables of the model are growing at different rates. We have already specified three potential budget closure rules \eqref{EqUnbalGBCclosure_Gt}, \eqref{EqUnbalGBCclosure_TRt}, and \eqref{EqUnbalGBCclosure_TRGt} using some combination of government spending $G_t$ and transfers $TR_t$ that stationarize the debt-to-GDP ratio.
Table \ref{TabStnrzStatVars} lists the definitions of stationary versions of these endogenous variables. Variables with a ``$\:\,\hat{}\,\:$'' signify stationary variables. The first column of variables are growing at the productivity growth rate $g_y$. These variables are most closely associated with individual variables. The second column of variables are growing at the population growth rate $\tilde{g}_{n,t}$. These variables are most closely associated with population values. The third column of variables are growing at both the productivity growth rate $g_y$ and the population growth rate $\tilde{g}_{n,t}$. These variables are most closely associated with aggregate variables. The last column shows that the interest rate $r_t$ and household labor supply $n_{j,s,t}$ are already stationary.
\begin{table}[htbp] \centering \captionsetup{width=3.5in}
\caption{\label{TabStnrzStatVars}\textbf{Stationary variable definitions}}
\begin{threeparttable}
\begin{tabular}{>{\small}c >{\small}c >{\small}c |>{\small}c}
\hline\hline
\multicolumn{3}{c}{Sources of growth} & Not \\
& & & \\[-4mm]
$e^{g_y t}$ & $\tilde{N}_t$ & $e^{g_y t}\tilde{N}_t$ & growing\tnote{a} \\
\hline
& & \\[-4mm]
$\hat{c}_{j,s,t}\equiv\frac{c_{j,s,t}}{e^{g_y t}}$ & $\hat{\omega}_{s,t}\equiv\frac{\omega_{s,t}}{\tilde{N}_t}$ & $\hat{Y}_t\equiv\frac{Y_t}{e^{g_y t}\tilde{N}_t}$ & $n_{j,s,t}$ \\[2mm]
$\hat{b}_{j,s,t}\equiv\frac{b_{j,s,t}}{e^{g_y t}}$ & $\hat{L}_t\equiv\frac{L_t}{\tilde{N}_t}$ & $\hat{K}_t\equiv\frac{K_t}{e^{g_y t}\tilde{N}_t}$ & $r_t$ \\[2mm]
$\hat{w}_t\equiv\frac{w_t}{e^{g_y t}}$ & & $\hat{BQ}_{j,t}\equiv\frac{BQ_{j,t}}{e^{g_y t}\tilde{N}_t}$ & \\[2mm]
$\hat{y}_{j,s,t}\equiv\frac{y_{j,s,t}}{e^{g_y t}}$ & & $\hat{C}_t\equiv \frac{C_t}{e^{g_y t}\tilde{N}_t}$ & \\[2mm]
$\hat{T}_{s,t}\equiv\frac{T_{j,s,t}}{e^{g_y t}}$ & & $\hat{TR}_t\equiv\frac{TR_t}{e^{g_y t}\tilde{N}_t}$ & \\[2mm]
\hline\hline
\end{tabular}
\begin{tablenotes}
\scriptsize{\item[a]The interest rate $r_t$ in \eqref{EqFirmFOC_K} is already stationary because $Y_t$ and $K_t$ grow at the same rate. Household labor supply $n_{j,s,t}\in[0,\tilde{l}]$ is stationary.}
\end{tablenotes}
\end{threeparttable}
\end{table}
The usual definition of equilibrium would be allocations and prices such that households optimize \eqref{EqHHeul_n}, \eqref{EqHHeul_b}, and \eqref{EqHHeul_bS}, firms optimize \eqref{EqFirmFOC_L} and \eqref{EqFirmFOC_K}, and markets clear \eqref{EqMarkClrLab}, \eqref{EqMarkClrCap}, and \eqref{EqMarkClrBQ}. In this chapter, we show how to stationarize each of these characterizing equations so that we can use our fixed point methods described in Sections \ref{SecEqlbSSsoln} and \ref{SecEqlbNSSsoln} to solve for the equilibria in Definitions \ref{DefSSEql} and \ref{DefNSSEql}.
\section{Stationarized Household Equations}\label{SecStnrzHH}
The stationary version of the household budget constraint \eqref{EqHHBC} is found by dividing both sides of the equation by $e^{g_y t}$. For the savings term $b_{j,s+1,t+1}$, we must multiply and divide by $e^{g_y(t+1)}$, which leaves an $e^{g_y} = \frac{e^{g_y(t+1)}}{e^{g_y t}}$ in front of the stationarized variable.
\begin{equation}\label{EqStnrzHHBCstat}
\begin{split}
\hat{c}_{j,s,t} + e^{g_y}\hat{b}_{j,s+1,t+1} &= (1 + r_{t})\hat{b}_{j,s,t} + \hat{w}_t e_{j,s} n_{j,s,t} + \zeta_{j,s}\frac{\hat{BQ}_t}{\lambda_j\hat{\omega}_{s,t}} + \eta_{j,s,t}\frac{\hat{TR}_{t}}{\lambda_j\hat{\omega}_{s,t}} - \hat{T}_{s,t} \\
&\quad\forall j,t\quad\text{and}\quad s\geq E+1 \quad\text{where}\quad b_{j,E+1,t}=0\quad\forall j,t
\end{split}
\end{equation}
Because total bequests $BQ_t$ and total government transfers $TR_t$ grow at both the labor productivity growth rate and the population growth rate, we have to multiply and divide each of those terms by the economically relevant population $\tilde{N}_t$. This stationarizes total bequests $\hat{BQ}_t$, total transfers $\hat{TR}_t$, and the respective population level in the denominator $\hat{\omega}_{s,t}$.
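Spelling out the algebra for the bequest term, using the definitions in Table \ref{TabStnrzStatVars},
\begin{equation*}
  \zeta_{j,s}\frac{BQ_t}{\lambda_j\omega_{s,t}e^{g_y t}} = \zeta_{j,s}\frac{BQ_t\bigl/\bigl(e^{g_y t}\tilde{N}_t\bigr)}{\lambda_j\bigl(\omega_{s,t}/\tilde{N}_t\bigr)} = \zeta_{j,s}\frac{\hat{BQ}_t}{\lambda_j\hat{\omega}_{s,t}}
\end{equation*}
and the government transfers term is stationarized identically.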
We stationarize the Euler equations for labor supply \eqref{EqHHeul_n} by dividing both sides by $e^{g_y(1-\sigma)t}$. On the left-hand-side, $e^{g_y t}$ stationarizes the wage $\hat{w}_t$ and $e^{-\sigma g_y t}$ goes inside the parentheses and stationarizes consumption $\hat{c}_{j,s,t}$. On the right-hand-side, the $e^{g_y(1-\sigma)t}$ terms cancel out.
\begin{equation}\label{EqStnrzHHeul_n}
\begin{split}
&\hat{w}_t e_{j,s}\bigl(1 - \tau^{mtrx}_{s,t}\bigr)(\hat{c}_{j,s,t})^{-\sigma} = \chi^n_{s}\biggl(\frac{b}{\tilde{l}}\biggr)\biggl(\frac{n_{j,s,t}}{\tilde{l}}\biggr)^{\upsilon-1}\Biggl[1 - \biggl(\frac{n_{j,s,t}}{\tilde{l}}\biggr)^\upsilon\Biggr]^{\frac{1-\upsilon}{\upsilon}} \\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\forall j,t, \quad\text{and}\quad E+1\leq s\leq E+S \\
\end{split}
\end{equation}
We stationarize the Euler equations for savings \eqref{EqHHeul_b} and \eqref{EqHHeul_bS} by dividing both sides of the respective equations by $e^{-\sigma g_y t}$. On the right-hand-side of the equation, we then need to multiply and divide both terms by $e^{-\sigma g_y(t+1)}$, which leaves a multiplicative coefficient $e^{-\sigma g_y}$.
\begin{equation}\label{EqStnrzHHeul_b}
\begin{split}
&(\hat{c}_{j,s,t})^{-\sigma} = e^{-\sigma g_y}\biggl[\chi^b_j\rho_s(\hat{b}_{j,s+1,t+1})^{-\sigma} + \beta\bigl(1 - \rho_s\bigr)\Bigl(1 + r_{t+1}\bigl[1 - \tau^{mtry}_{s+1,t+1}\bigr]\Bigr)(\hat{c}_{j,s+1,t+1})^{-\sigma}\biggr] \\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\forall j,t, \quad\text{and}\quad E+1\leq s\leq E+S-1 \\
\end{split}
\end{equation}
\begin{equation}\label{EqStnrzHHeul_bS}
(\hat{c}_{j,E+S,t})^{-\sigma} = e^{-\sigma g_y}\chi^b_j(\hat{b}_{j,E+S+1,t+1})^{-\sigma} \quad\forall j,t \quad\text{and}\quad s = E+S
\end{equation}
\section{Stationarized Firms Equations}\label{SecStnrzFirms}
The nonstationary production function \eqref{EqFirmsCESprodfun} can be stationarized by dividing both sides by $e^{g_y t}\tilde{N}_t$. This stationarizes output $\hat{Y}_t$ on the left-hand-side. Because the general CES production function is homogeneous of degree 1, $F(xK,xL) = xF(K,L)$, which means the right-hand-side of the production function is stationarized by dividing by $e^{g_y t}\tilde{N}_t$.
\begin{equation}\label{EqStnrzCESprodfun}
\hat{Y}_t = F(\hat{K}_t, \hat{L}_t) \equiv Z_t\biggl[(\gamma)^\frac{1}{\ve}(\hat{K}_t)^\frac{\ve-1}{\ve} + (1-\gamma)^\frac{1}{\ve}(\hat{L}_t)^\frac{\ve-1}{\ve}\biggr]^\frac{\ve}{\ve-1} \quad\forall t
\end{equation}
Notice that the growth term multiplied by the labor input drops out in this stationarized version of the production function. We stationarize the nonstationary profit function \eqref{EqFirmsProfit} in the same way, by dividing both sides by $e^{g_y t}\tilde{N}_t$.
\begin{equation}\label{EqStnrzProfit}
\hat{PR}_t = (1 - \tau^{corp})\Bigl[F(\hat{K}_t,\hat{L}_t) - \hat{w}_t \hat{L}_t\Bigr] - \bigl(r_t + \delta\bigr)\hat{K}_t + \tau^{corp}\delta^\tau \hat{K}_t \quad\forall t
\end{equation}
The firms' first order equation for labor demand \eqref{EqFirmFOC_L} is stationarized by dividing both sides by $e^{g_y t}$. This stationarizes the wage $\hat{w}_t$ on the left-hand-side and cancels out the $e^{g_y t}$ term in front of the right-hand-side. To complete the stationarization, we multiply and divide the $\frac{Y_t}{e^{g_y t}L_t}$ term on the right-hand-side by $\tilde{N}_t$.
\begin{equation}\label{EqStnrzFOC_L}
\hat{w}_t = (Z_t)^\frac{\ve-1}{\ve}\left[(1-\gamma)\frac{\hat{Y}_t}{\hat{L}_t}\right]^\frac{1}{\ve} \quad\forall t
\end{equation}
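Spelling out that last step using the definitions in Table \ref{TabStnrzStatVars},
\begin{equation*}
  \frac{Y_t}{e^{g_y t}L_t} = \frac{Y_t\bigl/\bigl(e^{g_y t}\tilde{N}_t\bigr)}{L_t/\tilde{N}_t} = \frac{\hat{Y}_t}{\hat{L}_t}
\end{equation*}
which is the ratio that appears in \eqref{EqStnrzFOC_L}.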
It can be seen from the firms' first order equation for capital demand \eqref{EqFirmFOC_K} that the interest rate is already stationary. If we multiply and divide the $\frac{Y_t}{K_t}$ term on the right-hand-side by $e^{g_y t}\tilde{N}_t$, those two aggregate variables become stationary. In other words, $Y_t$ and $K_t$ grow at the same rate and $\frac{Y_t}{K_t} = \frac{\hat{Y}_t}{\hat{K}_t}$.
\begin{equation}\tag{\ref{EqFirmFOC_K}}
\begin{split}
r_t &= (1 - \tau^{corp})(Z_t)^\frac{\ve-1}{\ve}\left[\gamma\frac{\hat{Y}_t}{\hat{K}_t}\right]^\frac{1}{\ve} - \delta + \tau^{corp}\delta^\tau \quad\forall t \\
&= (1 - \tau^{corp})(Z_t)^\frac{\ve-1}{\ve}\left[\gamma\frac{Y_t}{K_t}\right]^\frac{1}{\ve} - \delta + \tau^{corp}\delta^\tau \quad\forall t
\end{split}
\end{equation}
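Under the same illustrative conventions, both first order conditions map into short functions. Because $\frac{Y_t}{K_t} = \frac{\hat{Y}_t}{\hat{K}_t}$, the interest rate function below can be evaluated with either stationarized or nonstationary ratios.
\begin{verbatim}
def w_hat_foc(Y, L, Z, gamma, eps):
    """Wage from the stationarized labor demand FOC (hats omitted)."""
    return Z ** ((eps - 1) / eps) * ((1 - gamma) * Y / L) ** (1 / eps)

def r_foc(Y, K, Z, gamma, eps, tau_corp, delta, delta_tau):
    """Interest rate from the capital demand FOC (already stationary)."""
    return ((1 - tau_corp) * Z ** ((eps - 1) / eps)
            * (gamma * Y / K) ** (1 / eps)
            - delta + tau_corp * delta_tau)
\end{verbatim}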
\section{Stationarized Government Equations}\label{SecStnrzGovt}
Each of the tax rate functions $\tau^{etr}_{s,t}$, $\tau^{mtrx}_{s,t}$, and $\tau^{mtry}_{s,t}$ is stationary. The total tax liability function $T_{s,t}$ grows at the rate of labor productivity growth $g_y$. This can be seen by looking at the decomposition of the total tax liability function into the effective tax rate times total income \eqref{EqTaxCalcLiabETR}. The effective tax rate function is stationary, and household income grows at rate $g_y$, so household total tax liability is stationarized by dividing both sides of the equation by $e^{g_y t}$.
\begin{equation}\label{EqStnrzLiabETR}
\begin{split}
\hat{T}_{s,t} &= \tau^{etr}_{s,t}(\hat{x}_{j,s,t}, \hat{y}_{j,s,t})\left(\hat{x}_{j,s,t} + \hat{y}_{j,s,t}\right) \qquad\qquad\qquad\quad\:\:\forall t \quad\text{and}\quad E+1\leq s\leq E+S \\
&= \tau^{etr}_{s,t}(\hat{w}_t e_{j,s}n_{j,s,t}, r_t\hat{b}_{j,s,t})\left(\hat{w}_t e_{j,s}n_{j,s,t} + r_t\hat{b}_{j,s,t}\right) \quad\forall t \quad\text{and}\quad E+1\leq s\leq E+S
\end{split}
\end{equation}
We can stationarize the simple expressions for total government spending on public goods $G_t$ in \eqref{EqUnbalGBC_Gt} and on household transfers $TR_t$ in \eqref{EqUnbalGBCtfer} by dividing both sides by $e^{g_y t}\tilde{N}_t$,
\begin{equation}\label{EqStnrz_Gt}
\hat{G}_t = g_{g,t}\:\alpha_{g}\:\hat{Y}_t \quad\forall t
\end{equation}
\begin{equation}\label{EqStnrzTfer}
\hat{TR}_t = g_{tr,t}\:\alpha_{tr}\:\hat{Y}_t \quad\forall t
\end{equation}
where the time varying multipliers $g_{g,t}$ and $g_{tr,t}$, respectively, are defined in \eqref{EqStnrzClosureRule_Gt} and \eqref{EqStnrzClosureRule_TRt} below. These multipliers $g_{g,t}$ and $g_{tr,t}$ do not have a ``$\:\,\hat{}\,\:$'' on them because their specifications \eqref{EqUnbalGBCclosure_Gt} and \eqref{EqUnbalGBCclosure_TRt}, which are written in terms of nonstationary variables, are equivalent to \eqref{EqStnrzClosureRule_Gt} and \eqref{EqStnrzClosureRule_TRt} specified in stationary variables.
We can stationarize the expression for total government revenue $Rev_t$ in \eqref{EqUnbalGBCgovRev} by dividing both sides of the equation by $e^{g_y t}\tilde{N}_t$.
\begin{equation}\label{EqStnrzGovRev}
\hat{Rev}_t = \underbrace{\tau^{corp}\bigl[\hat{Y}_t - \hat{w}_t\hat{L}_t\bigr] - \tau^{corp}\delta^\tau \hat{K}_t}_{\text{corporate tax revenue}} + \underbrace{\sum_{s=E+1}^{E+S}\sum_{j=1}^J\lambda_j\hat{\omega}_{s,t}\tau^{etr}_{s,t}\left(\hat{x}_{j,s,t},\hat{y}_{j,s,t}\right)\bigl(\hat{x}_{j,s,t} + \hat{y}_{j,s,t}\bigr)}_{\text{household tax revenue}} \quad\forall t
\end{equation}
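To fix ideas, a vectorized sketch of \eqref{EqStnrzGovRev} might look as follows, where the effective tax rate callable and all array shapes are assumptions made for this illustration (ages indexed by $s$, ability types by $j$).
\begin{verbatim}
import numpy as np

def Rev_hat(Y_hat, w_hat, L_hat, K_hat, tau_corp, delta_tau,
            lambdas, omega_hat, etr, x_hat, y_hat):
    """Stationarized government revenue: corporate plus household taxes.
    omega_hat has shape (S,); lambdas has shape (J,); x_hat and y_hat
    are (S, J) arrays of labor and capital income; etr is a callable
    returning effective tax rates of the same shape."""
    corp = tau_corp * (Y_hat - w_hat * L_hat) - tau_corp * delta_tau * K_hat
    hh = np.sum(omega_hat[:, None] * lambdas[None, :]
                * etr(x_hat, y_hat) * (x_hat + y_hat))
    return corp + hh
\end{verbatim}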
Every term in the government budget constraint \eqref{EqUnbalGBCbudgConstr} is growing at both the productivity growth rate and the population growth rate, so we stationarize it by dividing both sides by $e^{g_y t}\tilde{N}_t$. We also have to multiply and divide the next period debt term $D_{t+1}$ by $e^{g_y(t+1)}\tilde{N}_{t+1}$, leaving the term $e^{g_y}(1 + \tilde{g}_{n,t+1})$.
\begin{equation}\label{EqStnrzGovBC}
e^{g_y}\left(1 + \tilde{g}_{n,t+1}\right)\hat{D}_{t+1} + \hat{Rev}_t = (1 + r_t)\hat{D}_t + \hat{G}_t + \hat{TR}_t \quad\forall t
\end{equation}
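Solving \eqref{EqStnrzGovBC} for next period's stationarized debt yields a one-line law of motion. The sketch below is illustrative, and the argument names are assumptions.
\begin{verbatim}
import numpy as np

def D_hat_next(D_hat, r, Rev_hat, G_hat, TR_hat, g_y, g_n_next):
    """Stationarized government budget constraint solved for D_hat_{t+1}."""
    return ((1 + r) * D_hat + G_hat + TR_hat - Rev_hat) \
        / (np.exp(g_y) * (1 + g_n_next))
\end{verbatim}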
The three potential budget closure rules \eqref{EqUnbalGBCclosure_Gt}, \eqref{EqUnbalGBCclosure_TRt}, and \eqref{EqUnbalGBCclosure_TRGt} are the last government equations to stationarize. In each of the cases, we simply divide both sides by $e^{g_y t}\tilde{N}_t$.
\begin{equation}\label{EqStnrzClosureRule_Gt}
\begin{split}
&\hat{G}_t = g_{g,t}\:\alpha_{g}\: \hat{Y}_t \\
&\text{where}\quad g_{g,t} =
\begin{cases}
1 \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\:\:\text{if}\quad t < T_{G1} \\
\frac{e^{g_y}\left(1 + \tilde{g}_{n,t+1}\right)\left[\rho_{d}\alpha_{D}\hat{Y}_{t} + (1-\rho_{d})\hat{D}_{t}\right] - (1+r_{t})\hat{D}_{t} - \hat{TR}_{t} + \hat{Rev}_{t}}{\alpha_g \hat{Y}_t} \quad\text{if}\quad T_{G1}\leq t<T_{G2} \\
\frac{e^{g_y}\left(1 + \tilde{g}_{n,t+1}\right)\alpha_{D}\hat{Y}_{t} - (1+r_{t})\hat{D}_{t} - \hat{TR}_{t} + \hat{Rev}_{t}}{\alpha_g \hat{Y}_t} \qquad\qquad\qquad\text{if}\quad t \geq T_{G2}
\end{cases} \\
&\quad\text{and}\quad g_{tr,t} = 1 \quad\forall t
\end{split}
\end{equation}
or
\begin{equation}\label{EqStnrzClosureRule_TRt}
\begin{split}
&\hat{TR}_t = g_{tr,t}\:\alpha_{tr}\: \hat{Y}_t \\
&\text{where}\quad g_{tr,t} =
\begin{cases}
1 \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\text{if}\quad t < T_{G1} \\
\frac{e^{g_y}\left(1 + \tilde{g}_{n,t+1}\right)\left[\rho_{d}\alpha_{D}\hat{Y}_{t} + (1-\rho_{d})\hat{D}_{t}\right] - (1+r_{t})\hat{D}_{t} - \hat{G}_{t} + \hat{Rev}_{t}}{\alpha_{tr} \hat{Y}_t} \quad\text{if}\quad T_{G1}\leq t<T_{G2} \\
\frac{e^{g_y}\left(1 + \tilde{g}_{n,t+1}\right)\alpha_{D}\hat{Y}_{t} - (1+r_{t})\hat{D}_{t} - \hat{G}_{t} + \hat{Rev}_{t}}{\alpha_{tr} \hat{Y}_t} \qquad\qquad\qquad\text{if}\quad t \geq T_{G2}
\end{cases} \\
&\quad\text{and}\quad g_{g,t} = 1 \quad\forall t
\end{split}
\end{equation}
or
\begin{equation}\label{EqStnrzClosureRule_TRGt}
\begin{split}
&\hat{G}_t + \hat{TR}_t = g_{trg,t}\left(\alpha_g + \alpha_{tr}\right)\hat{Y}_t \quad\Rightarrow\quad \hat{G}_t = g_{trg,t}\:\alpha_g\:\hat{Y}_t \quad\text{and}\quad \hat{TR}_t = g_{trg,t}\:\alpha_{tr}\:\hat{Y}_t \\
&\text{where}\quad g_{trg,t} =
\begin{cases}
1 \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\:\:\,\text{if}\quad t < T_{G1} \\
\frac{e^{g_y}\left(1 + \tilde{g}_{n,t+1}\right)\left[\rho_{d}\alpha_{D}\hat{Y}_{t} + (1-\rho_{d})\hat{D}_{t}\right] - (1+r_{t})\hat{D}_{t} + \hat{Rev}_{t}}{\left(\alpha_g + \alpha_{tr}\right)\hat{Y}_t} \quad\text{if}\quad T_{G1}\leq t<T_{G2} \\
\frac{e^{g_y}\left(1 + \tilde{g}_{n,t+1}\right)\alpha_{D}\hat{Y}_{t} - (1+r_{t})\hat{D}_{t} + \hat{Rev}_{t}}{\left(\alpha_g + \alpha_{tr}\right)\hat{Y}_t} \qquad\qquad\quad\:\:\:\:\,\text{if}\quad t \geq T_{G2}
\end{cases}
\end{split}
\end{equation}
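The closure rule for $G_t$ in \eqref{EqStnrzClosureRule_Gt}, for example, translates into a piecewise expression for the multiplier. The sketch below is illustrative; the other two rules differ only in which spending category absorbs the fiscal adjustment.
\begin{verbatim}
import numpy as np

def g_g(t, T_G1, T_G2, Y_hat, D_hat, r, Rev_hat, TR_hat,
        alpha_g, alpha_D, rho_d, g_y, g_n_next):
    """Multiplier g_{g,t} for the G_t budget closure rule."""
    if t < T_G1:
        return 1.0
    growth = np.exp(g_y) * (1 + g_n_next)
    if t < T_G2:  # transition: debt moves partway toward its target
        D_next = rho_d * alpha_D * Y_hat + (1 - rho_d) * D_hat
    else:         # long run: debt-to-GDP held fixed at alpha_D
        D_next = alpha_D * Y_hat
    return ((growth * D_next - (1 + r) * D_hat - TR_hat + Rev_hat)
            / (alpha_g * Y_hat))
\end{verbatim}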
\section{Stationarized Market Clearing Equations}\label{SecStnrzMC}
The labor market clearing equation \eqref{EqMarkClrLab} is stationarized by dividing both sides by $\tilde{N}_t$.
\begin{equation}\label{EqStnrzMarkClrLab}
\hat{L}_t = \sum_{s=E+1}^{E+S}\sum_{j=1}^{J} \hat{\omega}_{s,t}\lambda_j e_{j,s}n_{j,s,t} \quad \forall t
\end{equation}
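In code, \eqref{EqStnrzMarkClrLab} is a single weighted sum. The sketch below assumes $(S\times J)$-shaped arrays for the ability units and labor supplies, which is an illustrative convention rather than the model's actual data layout.
\begin{verbatim}
import numpy as np

def L_hat(omega_hat, lambdas, e, n):
    """Stationarized aggregate labor: omega_hat is (S,), lambdas is
    (J,), and e and n are (S, J) arrays."""
    return np.sum(omega_hat[:, None] * lambdas[None, :] * e * n)
\end{verbatim}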
The capital market clearing equation \eqref{EqMarkClrCap} is stationarized by dividing both sides by $e^{g_y t}\tilde{N}_t$. Because the right-hand-side has population levels from the previous period $\omega_{s,t-1}$, we have to multiply and divide both terms inside the parentheses by $\tilde{N}_{t-1}$ which leaves us with the term in front of $\frac{1}{1+\tilde{g}_{n,t}}$.
\begin{equation}\label{EqStnrzMarkClrCap}
\hat{K}_t + \hat{D}_t = \frac{1}{1 + \tilde{g}_{n,t}}\sum_{s=E+2}^{E+S+1}\sum_{j=1}^{J}\Bigl(\hat{\omega}_{s-1,t-1}\lambda_j \hat{b}_{j,s,t} + i_s\hat{\omega}_{s,t-1}\lambda_j \hat{b}_{j,s,t}\Bigr) \quad \forall t
\end{equation}
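A sketch of the right-hand-side of \eqref{EqStnrzMarkClrCap} follows. The age-shifted population weights and immigration rates are passed in as pre-aligned arrays, which is an assumption of this illustration.
\begin{verbatim}
import numpy as np

def assets_hat(omega_sm1_lag, omega_s_lag, i_s, lambdas, b_hat, g_n):
    """Stationarized household wealth, equal to K_hat + D_hat in
    equilibrium. For each age s = E+2, ..., E+S+1: omega_sm1_lag holds
    omega_hat_{s-1,t-1}, omega_s_lag holds omega_hat_{s,t-1}, i_s the
    immigration rates, and b_hat[s, j] the savings b_hat_{j,s,t}."""
    weights = omega_sm1_lag + i_s * omega_s_lag
    return np.sum(weights[:, None] * lambdas[None, :] * b_hat) / (1 + g_n)
\end{verbatim}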
We stationarize the goods market clearing condition \eqref{EqMarkClrGoods} by dividing both sides by $e^{g_y t}\tilde{N}_t$. On the right-hand-side, we must multiply and divide the $K_{t+1}$ term by $e^{g_y(t+1)}\tilde{N}_{t+1}$, leaving the coefficient $e^{g_y}(1+\tilde{g}_{n,t+1})$. And we must multiply and divide the term that subtracts the sum of next period's immigrant savings by $e^{g_y(t+1)}$, which leaves the coefficient $e^{g_y}$.
\begin{equation}\label{EqStnrzMarkClrGoods}
\begin{split}
\hat{Y}_t &= \hat{C}_t + e^{g_y}(1 + \tilde{g}_{n,t+1})\hat{K}_{t+1} - e^{g_y}\biggl(\sum_{s=E+2}^{E+S+1}\sum_{j=1}^{J}i_s\hat{\omega}_{s,t}\lambda_j \hat{b}_{j,s,t+1}\biggr) - (1-\delta)\hat{K}_t + \hat{G}_t \quad\forall t \\
&\quad\text{where}\quad \hat{C}_t \equiv \sum_{s=E+1}^{E+S}\sum_{j=1}^{J}\hat{\omega}_{s,t}\lambda_j\hat{c}_{j,s,t}
\end{split}
\end{equation}
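The goods market condition then serves as a residual check on any candidate equilibrium path. As before, the names and shapes in this sketch are assumptions.
\begin{verbatim}
import numpy as np

def goods_resid(Y_hat, C_hat, K_hat, K_hat_next, G_hat,
                imm_sav_hat, delta, g_y, g_n_next):
    """Residual of the stationarized goods market clearing condition;
    zero in equilibrium. imm_sav_hat is the double sum of stationarized
    immigrant savings."""
    growth = np.exp(g_y) * (1 + g_n_next)
    rhs = (C_hat + growth * K_hat_next - np.exp(g_y) * imm_sav_hat
           - (1 - delta) * K_hat + G_hat)
    return Y_hat - rhs
\end{verbatim}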
We stationarize the law of motion for total bequests $BQ_t$ in \eqref{EqMarkClrBQ} by dividing both sides by $e^{g_y t}\tilde{N}_t$. Because the population levels in the summation are from period $t-1$, we must multiply and divide the summed term by $\tilde{N}_{t-1}$, which leaves the $1+\tilde{g}_{n,t}$ term in the denominator.
\begin{equation}\label{EqStnrzMarkClrBQ}
\hat{BQ}_{t} = \left(\frac{1+r_{t}}{1 + \tilde{g}_{n,t}}\right)\left(\sum_{s=E+2}^{E+S+1}\sum_{j=1}^J\rho_{s-1}\lambda_j\hat{\omega}_{s-1,t-1}\hat{b}_{j,s,t}\right) \quad\forall t
\end{equation}
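Finally, total stationarized bequests in \eqref{EqStnrzMarkClrBQ} reduce to one weighted sum. As before, the names and shapes in this sketch are illustrative assumptions.
\begin{verbatim}
import numpy as np

def BQ_hat(r, g_n, rho_sm1, lambdas, omega_sm1_lag, b_hat):
    """Stationarized aggregate bequests. rho_sm1 holds the mortality
    rates rho_{s-1} of the deceased cohort, omega_sm1_lag the lagged
    population shares omega_hat_{s-1,t-1}, and b_hat[s, j] the savings
    b_hat_{j,s,t}."""
    total = np.sum(rho_sm1[:, None] * omega_sm1_lag[:, None]
                   * lambdas[None, :] * b_hat)
    return (1 + r) / (1 + g_n) * total
\end{verbatim}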
% ------------------------------------------------------------------------+
% Copyright (c) 2001 by Punch Telematix. All rights reserved. |
% |
% Redistribution and use in source and binary forms, with or without |
% modification, are permitted provided that the following conditions |
% are met: |
% 1. Redistributions of source code must retain the above copyright |
% notice, this list of conditions and the following disclaimer. |
% 2. Redistributions in binary form must reproduce the above copyright |
% notice, this list of conditions and the following disclaimer in the |
% documentation and/or other materials provided with the distribution. |
% 3. Neither the name of Punch Telematix nor the names of other |
% contributors may be used to endorse or promote products derived |
% from this software without specific prior written permission. |
% |
% THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED |
% WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF |
% MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. |
% IN NO EVENT SHALL PUNCH TELEMATIX OR OTHER CONTRIBUTORS BE LIABLE |
% FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR |
% CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF |
% SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR |
% BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, |
% WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE |
% OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN |
% IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. |
% ------------------------------------------------------------------------+
%
% $Id: exception.tex,v 1.1.1.1 2004/07/12 14:07:44 cvs Exp $
%
\subsection{Exception Handling}
\subsubsection{Operation}
\subsubsection{Exception Structure Definition}
The structure definition of an exception and its ancillary structures is as follows:
\bcode
\begin{verbatim}
typedef void (*x_exception_cb)(void * arg);

typedef struct x_Xcb * x_xcb;

typedef struct x_Xcb {          /* callback list element */
  x_xcb previous;
  x_xcb next;
  x_exception_cb cb;
  void * arg;
} x_Xcb;

typedef struct x_Exception * x_exception;

typedef struct x_Exception {
  void * pc;                    /* saved program counter */
  void * sp;                    /* saved stack pointer */
  unsigned int registers[NUM_CALLEE_SAVED];
  x_boolean fired;              /* has an exception been thrown? */
  x_xcb callbacks;              /* head of the circular callback list */
  x_Xcb Callbacks;              /* embedded sentinel element */
} x_Exception;
\end{verbatim}
\ecode
The relevant fields in the exception structure are the following:
\begin{itemize}
\item \txt{x\_exception$\rightarrow$pc} This field is used by the
\textsf{x\_context\_save} function to store the program counter of the
instruction following the call.
\item \txt{x\_exception$\rightarrow$sp} This field is used by the
\textsf{x\_context\_save} function to store the stack pointer at the moment
of the call.
\item \txt{x\_exception$\rightarrow$registers} This array is used to store the
registers that are normally saved by the callee\footnote{The function that
is being called.}. It is filled by the CPU specific
\textsf{x\_context\_save} function, and its contents are later restored by the
\textsf{x\_context\_restore} function when an exception is thrown. The
size of this array is CPU specific, and the macro definition
\textsf{NUM\_CALLEE\_SAVED} is set in the include file of the specific CPU.
\item \txt{x\_exception$\rightarrow$fired} This boolean indicates whether
an exception has been thrown or not.
\item \txt{x\_exception$\rightarrow$callbacks} This pointer is the head of a
circular linked list of callbacks that are executed when an exception is
thrown. It is preset to the address of the embedded
\textsf{Callbacks} structure, which is the last element of the exception
structure.
\end{itemize}
\subsubsection{Using Native Exceptions}
\documentclass[letter,10pt]{article}
\usepackage[utf8]{inputenc}
%====================
% File: resume.tex
% Editor: https://github.com/TimmyChan
% https://www.linkedin.com/in/timmy-l-chan/
% Last Updated: Jan 7th, 2022
% Notes: All formatting is done in TLCresume.sty
% No need to touch that file.
% All actual content is in the sections/ folder.
%====================
\usepackage{TLCresume}
%====================
% CONTACT INFORMATION
%====================
\def\name{Mohamed Ali Elfeky} % Name Here
\def\phone{01061011936}
\def\email{[email protected]}
\def\city{Giza, 6th of October City}
\def\LinkedIn{mohamed-el-feky-278704199} % linkedin.com/in/______
\def\github{mohamedaliELfeky} % github username
\def\kaggle{mohamedelfeky} % kaggle username
\def\role{Computer Vision Engineer} % JOB TITLE
\input{_header}
\begin{document}
\input{sections/objective}
\section{Education}
\input{sections/education}
% \section{Technical Experience}
% \input{sections/experience}
% skills
\section{Skills}
\input{sections/skills}
% Projects
\section{Projects}
\input{sections/Projcts}
% volunteer work
\section{Extracurricular Activities}
\input{sections/Extracurricular Activities}
% make a newpage wherever it is a clean break
\newpage
% Competitions
\section{Competitions}
\input{sections/Competitions}
\section{Courses}
\input{sections/Courses}
\end{document}
\FloatBarrier
\section{Maximization problems}\label{sec:maximization}
The evolutionary dynamics can be used to solve convex optimization problems.
We can use the properties of population games to design games that maximize some function $f(\bs{z})$, where $\bs{z}\in\mathbb{R}^{n}$ is a vector of $n$ variables, i.e., $\bs{z} = [z_1, \ldots, z_k, \ldots, z_n]$. Below we show two alternatives to solve this optimization problem using either a single population or $n$ populations.
\subsection{Single Population Case}
First, let us consider a population where each agent can choose one of $n+1$ strategies. In this case, each of the first $n$ strategies corresponds to one variable of the objective function, and the $(n+1)\th$ strategy can be seen as a slack variable.
Thus, $x_k$ is the proportion of agents that use the $k\th$ strategy, and it corresponds to the $k\th$ variable, i.e., $x_k = z_k$.
We define the fitness function of the $k\th$ strategy $F_k$ as the derivative of the objective function with respect to the $k\th$ variable, thus, $F_k(\bs{x}) \equiv \frac{\partial }{\partial x_k} f(\bs{x})$.
Note that if $f(\bs{x})$ is a concave function, then its gradient is a decreasing function.
Recall that users attempt to increase their fitness by adopting the most profitable strategy in the population, say the $k\th$ strategy. This leads to an increase in $x_k$, which in turn decreases the fitness $F_k(\bs{x})$.
Furthermore, the equilibrium is reached when all agents that belong to the same population have the same fitness.
Thus, at the equilibrium $F_i(\bs{x}) = F_j(\bs{x})$, where $i,j\in\{1, \ldots, n \}$.
If we define $F_{n+1}(\bs{x}) = 0$, then at the equilibrium we have $F_i(\bs{x}) = 0$ for every strategy $i\in\{1, \ldots, n\}$.
%
Since the fitness function decreases with the action of users, we can conclude that the strategy of the population evolves to make the gradient of the objective function equal to zero (or as close as possible). This resembles a gradient method to solve optimization problems.
% A characteristic of this implementation is that the function of users depends on their strategy. Specifically, there are $n+1$ different strategy functions.
Recall that the evolution of the strategies lies in the simplex, that is, $\sum_{i \in S^p} z_i = m$. Hence, this implementation solves the following optimization problem:
\begin{equation}
\begin{aligned}
& \underset{\bs{z}}{\text{maximize}}
& & f(\bs{z}) \\
& \text{subject to}
& & \sum_{i=1}^n z_i \leq m,
\end{aligned}
\label{eq:opt_problem}
\end{equation}
where $m$ is the total mass of the population.
%Next we present a different implementation in which each user of the same population has the same fitness function.
%
Figure \ref{fig:maximization_a} shows an example of the setting described above for the function
\begin{equation}\label{eq:objective_f}
f(\bs{z}) = - (z_1-5)^2 - (z_2-5)^2.
\end{equation}
The simulation is executed for $0.6$ time units.
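For readers who want to replicate this behavior outside the toolbox, the short Python sketch below (the toolbox itself is MATLAB, so this is only an illustrative stand-in) implements the single-population scheme with replicator dynamics for the objective in Eq. (\ref{eq:objective_f}):
\begin{verbatim}
import numpy as np

# Replicator dynamics with fitness F_k = df/dz_k and a
# zero-fitness slack strategy; all values are illustrative.
m, dt, T = 12.0, 1e-3, 0.6          # population mass, step, horizon
x = np.array([1.0, 1.0, m - 2.0])   # strategies 1..n plus slack; sum = m

def fitness(x):
    # Gradient of f(z) = -(z1 - 5)^2 - (z2 - 5)^2; slack has F = 0
    return np.array([-2.0 * (x[0] - 5.0), -2.0 * (x[1] - 5.0), 0.0])

for _ in range(round(T / dt)):
    F = fitness(x)
    F_bar = x @ F / m               # population-average fitness
    x += dt * x * (F - F_bar)       # replicator step (mass-preserving)

print(x[:2])                        # approaches the maximizer (5, 5)
\end{verbatim}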
\begin{figure}[htb]
\centering
\includegraphics[width=.5\textwidth]{./images/maximization_a.eps}
\caption{Evolution of the maximization setting using only one population.}
\label{fig:maximization_a}
\end{figure}
\subsection{Multi-population Case}
Let us consider $n$ populations where each agent can choose one of two strategies.
We define one population per variable of the maximization problem, and in each population the second strategy resembles a slack variable.
Thus, $x_i^p$ is the proportion of agents that use the $i\th$ strategy in the $p\th$ population. In this case $x_1^k$ corresponds to the $k\th$ variable, that is, $x_1^k = z_k$, while $x_2^k$ is a slack variable.
%
The fitness function $F_1^k$ of the $k\th$ population is defined as the derivative of the objective function with respect to the $k\th$ variable, that is, $F_1^k(\bs{x}) \equiv \frac{\partial }{\partial x_1^k} f(\bs{x})$. On the other hand, $F_2^k(\bs{x}) = 0$.
This implementation solves the following optimization problem:
\begin{equation}
\begin{aligned}
& \underset{\bs{z}}{\text{maximize}}
& & f(\bs{z}) \\
& \text{subject to}
& & z_i \leq m^i, \quad i \in \{1,\ldots,n\}.
\end{aligned}
\label{eq:opt_problem_multi}
\end{equation}
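Under the same caveats (a Python stand-in with illustrative names, not the toolbox API), the multi-population scheme reduces to one two-strategy replicator per variable:
\begin{verbatim}
import numpy as np

m, dt, T = 10.0, 1e-3, 0.6      # mass per population, step, horizon
z = np.array([1.0, 1.0])        # x_1^k for k = 1, 2 (slack is m - z)

for _ in range(round(T / dt)):
    F1 = -2.0 * (z - 5.0)       # fitness of strategy 1 in population k
    F_bar = z * F1 / m          # average fitness (slack strategy has F = 0)
    z += dt * z * (F1 - F_bar)  # replicator step within each population

print(z)                        # approaches the maximizer (5, 5)
\end{verbatim}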
Figure \ref{fig:maximization_b} shows an example of the setting described above for the function in Eq. (\ref{eq:objective_f}).
The simulation is executed for $0.6$ time units. Note that the implementation using multiple populations reaches the optimal value faster than the single-population implementation.
\begin{figure}[htb]
\centering
\includegraphics[width=.5\textwidth]{./images/maximization_b.eps}
\caption{Evolution of the maximization setting using $n$ populations.}
\label{fig:maximization_b}
\end{figure}
The speed of convergence to the optimum depends on the dynamics and their parameters. For instance, we observed that the equilibrium of the BNN dynamics might be closer to the optimal solution $\bs{z}^*$ if the mass of the population $m^p$ is close to $\sum_{i=1}^n z_i^*$. Note that close to the optimum $\hat{F}^p$ is small, and if $m^p$ is too large, then a slack variable, such as $x_2^k$, might be too large, making $x_1^k$ small. These conditions might hinder convergence to the optimum because updates to the strategies are too small.