repo | file | language | license | content
---|---|---|---|---|
https://github.com/TempContainer/typnotes | https://raw.githubusercontent.com/TempContainer/typnotes/main/sample-page.typ | typst | #import "/book.typ": book-page
#import "/templates/my-template.typ": *
#show: book-page.with(title: "Home")
// #show: template
= Home
This site is part of my blog, used for articles on mathematics, physics, and other formula-heavy subjects. I did not want to write formulas in TeX's verbose syntax, but typst does not support HTML rendering either, so this is a stopgap: use #link("https://github.com/Myriad-Dreamin/shiroa")[shiroa] to export the typst output as SVG and render it as web pages. I look forward to the day typst supports HTML export; when it does, this will be merged into the main site right away.
== Structure and Interpretation of Classical Mechanics
Notes from reading _Structure and Interpretation of Classical Mechanics_, a book that teaches classical mechanics (analytical mechanics) by building a rigorous, and moreover computable, notation. Other references will be listed in the chapters where they appear.
== Optimization
Notes from CMU 10-725 Convex Optimization. These may also cover other numerical optimization methods; references will be listed where they first appear. |
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compute/calc-11.typ | typst | Other | // Error: 16-19 divisor must not be zero
#calc.rem(3.0, 0.0)
|
https://github.com/7sDream/fonts-and-layout-zhCN | https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/02-concepts/dimension/adv-position.typ | typst | Other | #import "/template/template.typ": web-page-template
#import "/template/components.typ": note
#import "/template/lang.typ": arabic
#import "/lib/glossary.typ": tr
#show: web-page-template
// ### Advance and Positioning
=== #tr[advance]和#tr[position]
// *advance*, whether horizontal or vertical, tells you how far to increment the cursor after drawing a glyph. But there are situations where you also want to change *where* you draw a glyph. Let's take an example: placing the fatha (mark for the vowel "a") over an Arabic consonant in the word ولد (boy):
无论是水平或垂直的*#tr[advance]*,都是在告诉我们#tr[cursor]需要在绘制完当前#tr[glyph]后移动多远。但有时你也会希望改变绘制这个#tr[glyph]的*位置*。比如这个阿拉伯语的例子:需要将 fatha(标识元音a的符号)放在单词#arabic[وَلَد](孩子)中的辅音上:
#figure(caption: [
“孩子”的阿拉伯文
], placement: none)[#include "walad.typ"] <figure:walad>
// We place the first two glyphs (counting from the left, even though this is Arabic) normally along the baseline, moving the cursor forward by the advance distance each time. When we come to the fatha, though, our advance is zero - we don't move forward at all. At the same time, we don't just draw the fatha on the baseline; we have to move the "pen" up and to the left in order to place the fatha in the right place over the consonant that it modifies. Notice that when we come to the third glyph, we have to move the "pen" again but this time by a different amount - the fatha is placed higher over the lam than over the waw.
我们沿着基线正常放置开头(从左往右算,即使是阿拉伯文)的两个#tr[glyph],每次将#tr[cursor]向前步进一段距离。当遇到fatha符号,发现它的步进值是0,也就是完全不需要向前移动。但此时并不能直接在基线上绘制这个符号,我们需要将“画笔”往左上角移动,来让 fatha 符号位于它需要标注的辅音上方的正确位置处。对于第三个#tr[glyph],为了绘制它上面的 fatha,“画笔”也需要进行类似的移动,但移动的距离不同:字母lam(#arabic[ل])上的 fatha 会比字母waw(#arabic[و])上的高一些。
// This tells us that when rendering glyphs, we need two concepts of where things go: *advance* tells us where the *next* glyph is to be placed, *position* tells us where the current glyph is placed. Normally, the position is zero: the glyph is simply placed on the baseline, and the advance is the full width of the glyph. However, when it comes to marks or other combining glyphs, it is normal to have an advance of zero and the glyph moved around using positioning information.
这告诉我们,当#tr[rendering]#tr[glyph]时,我们需要两个概念:*#tr[advance]*告诉我们*下一个*#tr[glyph]放在哪,*#tr[position]*则告诉我们当前#tr[glyph]放在哪。通常来说,#tr[position]会是0值,表示#tr[glyph]直接放置在#tr[baseline]上。但是,当遇到诸如需要结合到其他#tr[glyph]上的附加符号时,通常会将步进设置为0,然后使用#tr[position]信息来将当前#tr[glyph]移动到所需位置。
// If you just perform layout using purely advance information, your mark positioning will go wrong; you need to use both advance and glyph position information provided by the shaper to correctly position your glyphs. Here is some pseudocode for a simple rendering process:
如果只有步进信息的话,我们无法把这种附加符号放到正确的位置上。#tr[shaper]提供的步进和位置信息都是非常重要的。以下是#tr[rendering]过程的伪代码示例:
```python
def render_string(glyphString, xPosition, yPosition):
    # Start the pen at the given origin.
    cursorX = xPosition
    cursorY = yPosition
    for glyph in glyphString:
        # The glyph's *position* offsets where this glyph is drawn...
        drawGlyph(glyph,
                  x = cursorX + glyph.xPosition,
                  y = cursorY + glyph.yPosition)
        # ...while its *advance* moves the pen for the *next* glyph.
        cursorX += glyph.horizontalAdvance
        cursorY += glyph.verticalAdvance
```
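The same loop can be made concrete. The sketch below is a runnable illustration (the `Glyph` record and the metric values are invented for this example, not real shaper output): a mark with zero advance but a nonzero position offset is drawn above the preceding base glyph without moving the pen.

```python
from dataclasses import dataclass

@dataclass
class Glyph:
    name: str
    xPosition: int = 0           # offset applied only when drawing this glyph
    yPosition: int = 0
    horizontalAdvance: int = 0   # how far the pen moves afterwards
    verticalAdvance: int = 0

def render_string(glyphs, x, y):
    """Return (name, draw-x, draw-y) for each glyph, as in the pseudocode above."""
    drawn = []
    for g in glyphs:
        drawn.append((g.name, x + g.xPosition, y + g.yPosition))
        x += g.horizontalAdvance
        y += g.verticalAdvance
    return drawn

# A base consonant followed by a fatha-like mark: the mark has zero
# advance and a positive y offset, so it lands above the base while
# the pen stays put for the next letter.
line = [
    Glyph("waw", horizontalAdvance=600),
    Glyph("fatha", xPosition=-520, yPosition=700),  # zero advance
    Glyph("lam", horizontalAdvance=550),
]
print(render_string(line, 0, 0))
# → [('waw', 0, 0), ('fatha', 80, 700), ('lam', 600, 0)]
```

Note that the fatha is drawn at x = 80 even though the pen is already at x = 600; only position data, not advance data, puts it there.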
|
https://github.com/miyaji255/Typst-Utilities | https://raw.githubusercontent.com/miyaji255/Typst-Utilities/main/src/lib.typ | typst | MIT License | #let _era-map = json("./era-map.json").map(
era => (
:..era,
from: datetime(year: era.from.year, month: era.from.month, day: era.from.day),
),
)
#let _get-year-regex = regex("\[year( sign:(kanji|alphabet|upper_alphabet|lower_alphabet))?\]")
#let get-era(date, with-nen: true, era-style: "kanji") = {
assert.eq(datetime, type(date), message: "The type of 'date' must be datetime.")
assert.eq(bool, type(with-nen), message: "The type of 'with-nen' must be bool.")
assert(
("kanji", "alphabet", "lower_alphabet", "upper_alphabet").contains(era-style),
message: "'era-style' must be 'kanji', 'alphabet', 'lower_alphabet' or 'upper_alphabet'.",
)
assert(
_era-map.at(-1).from <= date,
message: _era-map.at(-1).from.display("'date' must be later than [year]-[month]-[day]."),
)
let era = ""
let year = 0
for (from, name, alphabet) in _era-map {
if (from <= date) {
year = date.year() - from.year() + 1
era = if era-style == "kanji" {
name
} else if era-style == "alphabet" {
alphabet
} else if era-style == "upper_alphabet" {
upper(alphabet)
} else if era-style == "lower_alphabet" {
lower(alphabet)
} else {
assert(false, message: "unreachable")
}
break
}
}
if with-nen {
return era + if year == 1 { "元年" } else { str(year) + "年" }
} else {
return era + if year == 1 { "元" } else { str(year) }
}
}
#let display-era(date, format) = {
assert.eq(datetime, type(date), message: "The type of 'date' must be datetime.")
assert.eq(str, type(format), message: "The type of 'format' must be str.")
return date.display(
format.replace(
_get-year-regex,
(era-style) => {
if era-style.captures.at(1) == none {
return get-era(date, with-nen: false)
} else {
return get-era(date, with-nen: false, era-style: era-style.captures.at(1))
}
},
),
)
}
// Normalize to scientific notation: returns (digits, base) such that
// value / base has absolute value in [1, 10); 'digits' is the decimal exponent.
#let _get-digits = (value) => {
if value > 0 {
if value >= 1 {
let digits = 0;
let base = 1;
while value >= base * 10 {
digits += 1
base *= 10
}
return (digits, base)
} else {
let digits = 0;
let base = 1
while value * base < 1 {
digits -= 1
base *= 10
}
return (digits, 1 / base)
}
} else if value < 0 {
if value <= -1 {
let digits = 0;
let base = -1
while value <= base * 10 {
digits += 1;
base *= 10
}
return (digits, -base)
} else {
let digits = 0;
let base = -1
while value * base < 1 {
digits -= 1;
base *= 10
}
return (digits, 1 / -base)
}
} else { // value == 0
return (0, 1)
}
}
#let fmt-float(
value,
accuracy: 3,
dot: math.dot,
is-equation: true,
ignore-invalid: true,
) = {
assert.eq(type(accuracy), int, message: "Type of 'accuracy' must be integer.");
assert.eq(type(is-equation), bool)
assert(accuracy > 0, message: "'accuracy' must be more than 0");
if (
ignore-invalid and type(value) != int and type(value) != float and str(value).match(regex("^(|\+|-)\d+(\.\d+)?([eE](|\+|-)\d+)?$")) == none
) {
return value
}
value = if type(value) == int { value } else { float(value) }
if value < 0 {
accuracy += 3;
}
let (digits, base) = _get-digits(value)
let valuestr = str(calc.round(value / base, digits: accuracy - 1))
if valuestr.contains(".") or valuestr.contains(",") {
while valuestr.len() < accuracy + 1 {
valuestr += "0";
}
valuestr = valuestr.slice(0, accuracy + 1)
} else {
if valuestr.len() < accuracy {
valuestr += "."
while valuestr.len() < accuracy + 1 {
valuestr += "0";
}
} else {
valuestr = valuestr.slice(0, accuracy)
}
}
if is-equation {
return $valuestr#"\u{200D}"dot#"\u{200D}"10^digits$
} else {
return [#valuestr#"\u{00A0}"#dot#"\u{00A0}"10#super(str(digits))]
}
}
|
https://github.com/camp-d/Notes | https://raw.githubusercontent.com/camp-d/Notes/main/ece3520.typ | typst | #set par(justify: true)
= ECE 3520 - Digital Computer Design - Dr. Ligon
= Formal Grammar
- Formal grammar describes how to form valid strings in a language from an alphabet of symbols.
= Chapter 4 minic and parsing
- Parsing is done recursively via recursive descent.
- Recursion can be right recursive or left recursive.
- Derivation tree
  - Takes some program source as input, parsing while applying the grammar.
  - Forms a tree that represents the symbols in the program and the program's structure.
  - The parser incrementally builds the tree, then reduces, and repeats.
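The recursive-descent idea can be sketched in a few lines of Python. This is a hypothetical toy grammar (`expr -> term ('+' term)*`, `term -> NUMBER`), not the course's minic grammar; the point is that each grammar rule becomes one function that consumes tokens left to right.

```python
def parse(tokens):
    """Recursive-descent parser for: expr -> term ('+' term)*, term -> NUMBER."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expect(tok):
        nonlocal pos
        if peek() != tok:
            raise SyntaxError(f"expected {tok!r}, got {peek()!r}")
        pos += 1

    def term():
        # term -> NUMBER
        nonlocal pos
        tok = peek()
        if tok is None or not tok.isdigit():
            raise SyntaxError("expected a number")
        pos += 1
        return int(tok)

    def expr():
        # expr -> term ('+' term)*
        value = term()
        while peek() == "+":
            expect("+")
            value += term()
        return value

    result = expr()
    if pos != len(tokens):
        raise SyntaxError("trailing input")
    return result
```

For example, `parse(["1", "+", "2", "+", "3"])` returns `6`, while `parse(["1", "+"])` raises a `SyntaxError`.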
- Cocke-Younger-Kasami (CYK) parsing algorithm
  - Informally, this algorithm considers every possible substring of the input string and sets $P[l,s,v]$ to true if the substring of length $l$ starting at position $s$ can be generated from the nonterminal $R_v$. Once all substrings of length 1 have been considered, substrings of length 2 are considered, and so on.
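That description translates almost directly into code. Below is a minimal CYK sketch in Python; the grammar encoding (terminals as one-character strings, binary rules as tuples) is my own illustrative convention, and the grammar is assumed to be in Chomsky normal form.

```python
def cyk(word, grammar, start="S"):
    """CYK membership test; 'grammar' must be in Chomsky normal form.

    grammar maps each nonterminal to its productions: a terminal is a
    one-character string, a binary rule is a (B, C) tuple of nonterminals.
    """
    n = len(word)
    if n == 0:
        return False
    # P[l][s] holds the nonterminals deriving the substring of
    # length l+1 that starts at position s.
    P = [[set() for _ in range(n)] for _ in range(n)]
    for s, ch in enumerate(word):            # substrings of length 1
        for lhs, rules in grammar.items():
            if ch in rules:
                P[0][s].add(lhs)
    for length in range(2, n + 1):           # then length 2, 3, ...
        for s in range(n - length + 1):      # start position
            for split in range(1, length):   # where to cut the substring
                for lhs, rules in grammar.items():
                    for rule in rules:
                        if (isinstance(rule, tuple)
                                and rule[0] in P[split - 1][s]
                                and rule[1] in P[length - split - 1][s + split]):
                            P[length - 1][s].add(lhs)
    return start in P[n - 1][0]
```

With `grammar = {"S": [("A", "B")], "A": ["a"], "B": ["b"]}`, `cyk("ab", grammar)` is `True` while `cyk("ba", grammar)` is `False`.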
- yacc is the precursor of bison.
= FEB 1 12:32
- STUDY CYK algorithm. It will be on the test.
- Auto grader.
- Term matching, use study guide, take practice test.
= Chapter 7 part 1
- Last part of class: semantics and other items.
- $L(G)$ = the language $L$ generated by the grammar $G$.
- Main thing you need to know: what an attribute grammar is, and why we use or need them.
|
|
https://github.com/Kasci/LiturgicalBooks | https://raw.githubusercontent.com/Kasci/LiturgicalBooks/master/SK/casoslov/utierne/velkaUtierenBezKnaza.typ | typst | #import "../../../style.typ": *
#import "/SK/texts.typ": *
#import "../styleCasoslov.typ": *
= Veľká utiereň <X>
#show: rest => columns(2, rest)
TODO: zaciatok
#lettrine("Pane Ježišu Kriste, Bože náš, pre modlitby našich svätých otcov zmiluj sa nad nami.")
#lettrine("Amen.")
#lettrine("Sláva na výsostiach Bohu a na zemi pokoj ľuďom dobrej vôle.") #note[(3x)]
#lettrine("Pane, otvor moje pery, a moje ústa budú ohlasovať tvoju slávu.") #note[(2x)]
== Šesťžalmie <X>
#zalm(3)
#zalm(37)
#zalm(62)
#si
#lettrine("Aleluja, aleluja, aleluja, sláva tebe, Bože.") #note[(3x)]
#ektenia(3)
#zalm(87)
#zalm(102)
#zalm(142)
#si
#lettrine("Aleluja, aleluja, aleluja, sláva tebe, Bože.") #note[(3x)]
#ektenia(12)
== Boh je Pán <X>
#verseSoZvolanim("Boh je Pán a zjavil sa nám, * požehnaný, ktorý prichádza v mene Pánovom.",
(
"Oslavujte Pána, lebo je dobrý, * lebo jeho milosrdenstvo trvá naveky.",
"Obkľúčili ma nepriatelia zovšadiaľ, * ale v mene Pánovom som ich porazil.",
"Ja nezomriem, budem žiť * a vyrozprávam skutky Pánove.",
"Kameň, čo stavitelia zavrhli, sa stal kameňom uholným. * To sa stalo na pokyn Pána; vec v našich očiach obdivuhodná."
))
== Tropár <X>
#note[Berieme tropáre podľa predpisu]
== Katizmy a sedaleny <X>
#note[Berieme katizmy podľa predpisu.]
#note[Medzi katizmami berieme sedaleny.]
== Polyelej
TODO:
== Anjelsky zbor
#verseSoZvolanim("Blahoslavený si, Pane, nauč ma svoje prikázania.",
(
"Anjelský zbor sa podivil, * keď videl, Spasiteľ, že si mŕtvy, * že si zbúral mocné hradby smrti, * že aj Adama, spolu so sebou, si vzkriesil * a z moci pekla * všetkých vyslobodil.",
"„Prečo, učeníčky, * s láskyplnými slzami * voňavky otvárate?“ * Pýtal sa belostný anjel, * keď prišli k hrobu sväté ženy. * „Presvedčte sa, že Pánov hrob je prázdny. * Veď Spasiteľ skutočne vstal z mŕtvych.“",
"Zavčas ráno * sväté ženy s voňavkami * ponáhľali sa k tvojmu hrobu a plakali. * Zastavil ich anjel a riekol: * „Už bolo dosť sĺz, neplačte! * Kristovo vzkriesenie apoštolom oznámte!“",
"Sväté ženy plakali, * keď s vonnými masťami * prišli k tvojmu hrobu, Spasiteľ. * Anjel ich láskavo napomenul: * „Prečo hľadáte živého medzi mŕtvymi? * Veď ako Boh slávne vstal z mŕtvych!“ "
), slava: [#secText("(Trojičen)") Pokloňme sa Otcu \* i jeho Synovi, i Svätému Duchu, \* Svätej Trojici \* jednej podstaty. \* So serafínmi volajme: \* „Svätý, svätý, \* svätý si, Pane.“],
iteraz: [#secText("(Bohorodičník)") Panna, Darcu života si porodila, \* Adama si hriechov zbavila, \* srdce Evy \* šťastím miesto bolesti si naplnila. \* Tým, čo život stratili, \* späť vrátil život \* ten, čo sa z teba narodil, \* Pán, Boh a človek.])
Aleluja, aleluja, aleluja, \* sláva tebe, Bože. (3x)
== Velebenie sviatku
== Ypakoj
== 3. sedalen
== Stepenna
== Prokimen
== Evanjelium
== Videli sme
#lettrine("Videli sme Kristovo vzkriesenie, pokloňme sa svätému Pánu Ježišovi, jedinému bezhriešnemu. Klaniame sa tvojmu krížu, Kriste, a tvoje sväté vzkriesenie ospevujeme a oslavujeme. Lebo ty si náš Boh, okrem teba iného nepoznáme, tvoje meno vyznávame. Poďte všetci verní, pokloňme sa svätému Kristovmu vzkrieseniu, veď hľa, krížom prišla radosť do celého sveta. Vždy dobrorečme Pánovi, ospevujme jeho vzkriesenie, lebo pretrpel ukrižovanie a smrťou premohol smrť.")
#zalm(50)
TODO: Modlitba po 50. zalme
#lettrine("Spas, Bože, svoj ľud a požehnaj svoje dedičstvo. Navštív svoj svet milosrdenstvom a zľutovaním, pozdvihni slávu pravoverných kresťanov a zošli na nás svoje bohaté milosrdenstvo; na prosby našej prečistej Vládkyne, Bohorodičky Márie, vždy Panny; mocou úctyhodného a životodarného kríža; pod ochranou úctyhodných nebeských beztelesných mocností; na prosby ctihodného a slávneho proroka, predchodcu a krstiteľa Jána; svätých slávnych a všechválnych apoštolov; našich svätých otcov a veľkých učiteľov celého sveta i svätiteľov: Bazila Veľkého, <NAME>ga a <NAME>ústeho; našich svätých otcov alexandrijských arcibiskupov Atanáza a Cyrila; nášho otca svätého Mikuláša arcibiskupa, lýcijskomyrského divotvorcu; svätých apoštolom rovných Cyrila a Metoda, učiteľov Slovanov; svätého apoštolom rovného veľkého kniežaťa Vladimíra; svätého hieromučeníka Jozafáta; blažených hieromučeníkov Pavla a Vasiľa, prešovských biskupov; blaženého hieromučeníka Teodora, mukačevského biskupa; blaženého hieromučeníka Metoda, michalovského protoigumena; svätých slávnych mučeníkov víťazných v dobrom boji; našich prepodobných a bohonosných otcov Antona a Teodózia Pečerských a ostatných našich prepodobných a bohonosných otcov i matiek; svätých a spravodlivých Pánových predkov Joachima a Anny, našej prepodobnej matky <NAME>ptskej a všetkých svätých, prosíme ťa, najmilostivejší Pane, vypočuj nás hriešnych, ktorí sa k tebe modlíme, a zmiluj sa nad nami.")
== Kánon <X>
#note[Berieme kánon podľa predpisu]
#note[Po ôsmej piesni, berieme Velebenie Bohorodičky]
#verseSoZvolanim("Čestnejšia si ako cherubíni * a neporovnateľne slávnejšia ako serafíni, * bez porušenia si porodila Boha Slovo, * opravdivá Bohorodička, velebíme ťa.",
(
"Velebí moja duša Pána * a môj duch jasá v Bohu, mojom Spasiteľovi.",
"Lebo zhliadol na poníženosť svojej služobnice, * hľa, od tejto chvíle blahoslaviť ma budú všetky pokolenia.",
"Lebo veľké veci mi urobil ten, ktorý je mocný, a sväté je jeho meno. * A jeho milosrdenstvo z pokolenia na pokolenie s tými, čo sa ho boja.",
"Ukázal silu svojho ramena, * rozptýlil tých, čo v srdci pyšne zmýšľajú.",
"Mocnárov zosadil z trónov a povýšil ponížených. * Hladných nakŕmil dobrotami a bohatých prepustil naprázdno.",
"Ujal sa Izraela, svojho služobníka, lebo pamätá na svoje milosrdenstvo, * ako sľúbil našim otcom, Abrahámovi a jeho potomstvu naveky."
), prve: false)
#ektenia(3)
==== Svätý je Pán
#verseSoZvolanim(
"Velebte Pána, nášho Boha, a padnite k jeho nohám, lebo je svätý.",
("Svätý je Pán, Boh náš.","Svätý je Pán, Boh náš.")
, prve: false, add_note: false)
== Svitilen <X>
#postne_svitileny
== Chvály <X>
#include "../../zalmy/Z_ChvalyVelke.typ"
#verse((
"Aby ich súdili podľa písaného práva. * Všetkým jeho svätým to bude na slávu.",
"Chváľte Pána v jeho svätyni! * Chváľte ho na jeho vznešenej oblohe.",
"Chváľte ho za jeho činy mohutné; * chváľte ho za jeho nesmiernu velebnosť.",
"Chváľte ho zvukom poľnice, * chváľte ho harfou a citarou.",
"Chváľte ho bubnom a tancom, * chváľte ho lýrou a flautou.",
"Chváľte ho ľubozvučnými cimbalmi, chváľte ho jasavými cimbalmi; všetko, čo dýcha, nech chváli Pána.",
"Povstaň, Pane, Bože, zdvihni svoju ruku, * nezabúdaj na úbohých.",
"Oslavovať ťa budem, Pane, celým svojím srdcom * a vyrozprávam všetky tvoje diela zázračné.",
"Povstaň, Pane, Bože môj, * zdvihni svoju ruku.",
), start: 9)
#ektenia(3)
== Evanjeliová sloha
TODO:
== Slávoslovie <X>
#slavoslovieVelke
#lettrine("Svätý Bože, svätý Silný, svätý Nesmrteľný, zmiluj sa nad nami.") #note("(3x)")
== Tropár
TODO
#ektenia(40)
#ektenia(12)
#prepustenieBezKnaza
|
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/021%20-%20Battle%20for%20Zendikar/010_Shaping%20an%20Army.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Shaping an Army",
set_name: "Battle for Zendikar",
story_date: datetime(day: 21, month: 10, year: 2015),
author: "<NAME>",
doc
)
#emph[<NAME> was once a lullmage, one of a specialized tradition of wizards who learned, through painstaking practice, to calm the fury of Zendikar. The problem was the Roil, the unpredictable magical "weather" that could bring anything from squalls and windshears to uprooted earth and sudden vegetation. Explorers of the wilds who knew what they were doing always brought at least one lullmage on their expeditions, lest they find themselves at the mercy of the elements.]
#emph[But times have changed. The Eldrazi have risen. Gide<NAME> is] seeking allies to fight the Eldrazi#emph[ at Sea Gate. And the Roil, once Zendikar's most deadly danger, might now become a crucial weapon in its salvation.]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
"Balance is death!" The voices of the initiates rang clearly, though not beautifully, in the damp air. They had been taught to yell the litany, even shriek it, regardless of what the resulting sound did to <NAME>'s ears. Dissonance was not to be ignored, but embraced.
"Calm is death!" The chanter performed a strange dance as she led the group through the litany. Ripples of earth moved under her feet in a desultory fashion, but still with enough momentum to generate an occasional stumble. As she tripped, she emitted a high-pitched intonation on whatever word escaping her lips, regardless of meter, pace, or good sense. Hearing a whinnying, croaking cry of "deeaaath!" was not a pleasant way to spend the morning.
Which was, sadly, entirely the point.
The whole arrangement was unpleasant. Noyan imagined what type of environment a typical brilliant merfolk scholar-mage—who loved the ocean and the suitable moment in which to indulge some witty repartee—would create for himself. It would clearly be the exact opposite of this retreat at Coralhelm, which was miles away from wit, surrounded by either lunatics or the incompetent and often both.
#figure(image("010_Shaping an Army/01.jpg", width: 100%), caption: [Retreat to Coralhelm | Art by <NAME>], supplement: none, numbering: none)
The fact that he was the one responsible for creating the retreat only gave him the briefest ironic pleasure. Mostly it made him very irritated.
Which was again, sadly, entirely the point.
"Peace is death!" <NAME> resented many things about the awakening of the Eldrazi. He had lost his home, his quiet, and the ability to fight opponents he could visibly irritate. But what he resented most were the near infinite times he had heard this damned chant. That he had written the litany gave him no pleasure, not even ironically. It was a deliberately poor and arrhythmic chant, and apparently he was doomed to hear it repeated for the rest of his life.
Or only until the Eldrazi came and ripped apart his innards or emulsified his brains or transmuted him into dust. It was important to have things to look forward to.
At least Noyan no longer had to lead the initiates in their rituals. Other initiates distinguished by being slightly less incompetent than the rest had taken to the rituals as their personal salvation. Actually mastering how to use the Roil without killing yourself or those around you was very hard, but butchering rhyme and meter and <NAME>'s ears was trivial.
The chanter stumbled through the next sentence, treating everyone to a pained rendition of "The world heaves!" The other initiates dutifully strived to match her strained tone, many of them adding their own unique atonality to the mix, creating what Noyan thought was the literal definition of cacophony.
It was all to the greater good, and like most sacrifices to the greater good, enjoyed by no one.
The next words in the litany shook around in his head, "It shakes. It strives," but he realized there were no sounds coming from reality to match. He looked up to see the chanter and initiates all staring toward the sky behind him to the south. Noyan turned and saw a kor kitesailer carrying a passenger in a harness. They were only minutes away from landing, but they were coming in from the wrong direction.
#figure(image("010_Shaping an Army/02.jpg", width: 100%), caption: [Roil Spout | Art by <NAME>], supplement: none, numbering: none)
The retreat on Coralhelm was difficult to access. Protected by a large gorge on all sides, their floating landmass was roped to the edges of the cliffs. It was possible for a skilled Kor cliffwalker to shimmy down the ropes, but most people coming to the retreat flew there. But only from the north. Even without the Roil, the winds in the canyons were unpredictable and dangerous. With the Roil, and especially with tens of roilmages—most of them not particularly skilled—the winds could be very predictably dangerous. Especially coming from the south, which is why all of their comings and goings to their floating retreat came in from the other side. The fool kitesailer was going to become one with the earth very intimately and very mortally.
Noyan ran forward, his arms flapping, his lungs rasping. The kitesailer could not hear him and was making his preparation to swoop in and land when a ferocious updraft took him and his passenger fifty feet up and sideways with such force the harness attachment sheared off and his passenger plummeted to the ground hundreds of feet below.
Noyan could only watch in horror and then puzzlement as the man fell. Unlike <NAME>, the man was not flailing or yelling or for that matter looking perturbed at all. He fell with #emph[grace] , if such a thing were possible, though he was clearly falling to his certain death. Noyan continued to run forward and began casting a spell to buffer the man's fall—though at his speed, the buffer would just leave the corpse in slightly nicer shape.
There were several flashes of sparking golden light and the man's body glowed. Just before he hit the ground, Noyan saw some kind of shimmering wave erupt beneath him, and he hit with an impact that Noyan felt travel up through his own legs and pitch him forward tumbling to the earth.
As Noyan lay there splayed to the ground, groaning while checking he hadn't broken anything, he tilted his head up, expecting to see some form of gruesome blood pancake. Instead he saw a tall, armored man, standing with the sun glinting off his armor behind him, with nary a scream, blood, broken bone, or even a bruise in sight.
Noyan slowly got to his feet, still wondering how the man survived. Behind them the kor kitesailer had managed to land safely and was running towards the two of them, presumably to check on the health of his passenger. The man looked at him closely and said, "I'm <NAME>. I'm looking for the roilmage, <NAME>. You have some blood on your nose. Are you well?" The look of concern was so genuine Noyan wanted to scream.
He did, just a little. It was the best he felt all morning.
#figure(image("010_Shaping an Army/03.jpg", width: 100%), caption: [Gideon, Ally of Zendikar | Art by <NAME>], supplement: none, numbering: none)
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
"It destroys or it dies!" The initiates had resumed their cheerful litany, and <NAME> raised an eyebrow.
"I was told you were the head of an elite elementalist force." Gideon's head swiveled, glancing briefly at the twenty or so of Noyan's initiates shrieking into the air and waving their arms in random patterns in the courtyard. "Are they practicing in the building . . . ?" Gideon peered through the courtyard into the empty living quarters behind.
"They're invisible. It's hard to remain an elite fighting force when just anyone can see them." Gideon stared at Noyan flatly. Noyan continued to feel better.
"Find your inner peace! Kill it! Crush it!" Many of the initiates made chopping or jumping motions as they went through this part of the ritual. Some really enjoyed demonstrating just how thoroughly they were demolishing their inner peace. So much grass had suffered for the sake of conquering inner peace.
Gideon raised an eyebrow. "Those are some . . . unusual rallying cries. Is there somewhere quieter we can go?" A small group of uncoordinated and tone-deaf initiates were able to do what falling from two hundred feet had not—make Gideon Jura uncomfortable.
Noyan Dar raised a hand and brought it down hard. The earth rumbled for a second and then quieted. The initiates and chanter quieted as well. "Initiates, practice your forms. Please use . . . discretion." The initiates had learned through very painful error and error what discretion meant.
As they walked towards the center of the clearing, Noyan noticed the balance of the man beside him. He walked in a perfectly measured stride, each step balanced and poised, able to turn into a crouch or leap or attack depending on his whim. Noyan had never seen someone so fully in control of their movement and their body.
Gideon Jura would make an atrocious roilmage.
"How did you survive the fall?" Noyan thought it quite a remarkable feat. If roilmages could learn that type of protection, there would be more live roilmages. Though that would mean the life expectancy of non-roilmages would suffer.
"I am . . . resistant to harm." Gideon paused and looked at him, saying nothing. Noyan also said nothing, hoping the absence of words would encourage Gideon to fill the void. After several moments of silence, Noyan tried to help.
"You seem . . . resistant to explanation." More staring from Gideon. He seemed adept at it.
"I was told you and your troops could control the earth, the air, the water. We need your help at Sea Gate." Gideon then decided to again stop speaking. Gideon seemed far more comfortable using poignant pauses and searching looks to communicate than, say, actual words. Noyan thought perhaps this was a language worth learning.
#figure(image("010_Shaping an Army/04.jpg", width: 100%), caption: [Scatter to the Winds | Art by Raymond Swanland], supplement: none, numbering: none)
"First, we are in the middle of training and can't just gallivant off to Sea Gate. Second, we are not . . . elementalists." He paused to let his full scorn dwell on the word, and looked expectantly at Gideon's face. Apparently Gideon did not understand the language as well as he spoke it. After a few more seconds of silence, Noyan was disgruntled. It was boring to be deliberately silent.
"Do people sneeze where you come from?" Far better to be insulting.
Gideon gave him a blank stare. "You know, like #emph[ah-choo] !" Noyan mimed a human sneeze, with a copious amount of snot at the end. Gideon's stare flattened.
"Yes, I know what sneezing is," Gideon responded. At least there was no poignant pause or searching glance at the end.
"My people have many stories and myths about the three gods. One of the favorites for children is 'Ula and the Ocean's Sneeze.' Cosi convinces Ula that there is a powerful magical pearl, hidden deep in the heart of the ocean. So Ula searches for the heart of the ocean, so he can steal the pearl. Eventually he finds the heart and reaches inside, but as he's pulling the pearl out, Ula's sleeve scrapes the inside of the heart, and the heart sneezes. Ula is trapped in a giant cocoon of solid white snot, until Cosi comes along to free him." Noyan smiled.
"White snot." The flat stare on Gideon's face threatened to become a permanent fixture.
"The point is not the white snot, as interesting as it is; the point is the sneeze." No enlightenment sought to wrestle with the flat stare. The flat stare remained the clear winner. Noyan sighed. What point to be cleverer than your enemy when they could not perceive it? He could not tell whether Gideon or the Eldrazi were worse in this regard.
"The #emph[Roil] ," Noyan continued, "the Roil is the sneeze. The Eldrazi are an irritant to the world. The Roil built up over time as a natural defense against the presence of the Eldrazi. Prior to the Eldrazi's arrival, those of us who called ourselves lullmages spent years trying to perfect the craft of quieting the Roil. As if we were a healer soothing a fever."
"But then the Eldrazi returned." Noyan was grateful for the presence of Gideon Jura, master of the obvious, to perpetuate the illusion of a conversation.
"But then the Eldrazi returned. And the Roil returned in full bloom with them."
"So being a roilmage should be easy, then."
"Easy, except for two problems. First, intensifying the Roil is easy, but intensifying it without killing yourself or any near bystanders is very, very hard. Unless you're . . . resistant to harm." Gideon's eyes narrowed but Noyan continued.
#figure(image("010_Shaping an Army/05.jpg", width: 100%), caption: [Roilmage's Trick | Art by <NAME>], supplement: none, numbering: none)
"Second, the mages most experienced in dealing with the Roil are . . ."
"All the lullmages who spent years learning to do the exact opposite," Gideon finished for him. Noyan smiled. A genuine intelligent response! The world was full of surprises.
"Exactly. Combatting the instincts of pacifying the Roil instead of heightening it proved to be a mental switch requiring much training. In fact . . ." Noyan raised his arms in a dramatic fashion and there was a loud thunderclap in the air. The roilmage initiates ran over and formed a large circle around Noyan.
"Llura, lead the litany please, from the beginning."
Llura had a large smile on her face as she wailed and flailed. The initiates dutifully followed, every awkward word creating a hole in the fabric of good taste that could never be repaired.
"Balance is death!#linebreak() Calm is death!#linebreak() Peace is death!
The world heaves!#linebreak() It shakes!#linebreak() It strives!#linebreak() It destroys or it dies!
Find your inner peace!#linebreak() Kill it! Crush it!#linebreak() Do not become one with anything!#linebreak() Feel your loneliness! Your fear! You are out of place!#linebreak() Every step you take creates dissonance and chaos!#linebreak() You will strive! You will shake! You will heave!#linebreak() You must destroy or die!"
Despite every awful word, Noyan could not help but be pleased. The litany was remarkably effective at creating the right frame of mind in the initiates. He looked over at Gideon, and saw both eyebrows raised, the normal flat stare finally conquered by a stunned wide-eyed silence.
"Maybe . . . maybe this was not a good idea," croaked Gideon.
#emph[Not a good idea? ] Noyan had been irritated for much of the day, much of every day since becoming a roilmage, but this was the first time he had been angry. This armored bumpkin had come to his school and presumed he could just order him and his students around and then decided they weren't good enough? #emph[Not a good idea!]
"A practical demonstration is in order," Noyan said. "I insist."
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
It took the better part of the morning to transport Gideon, Noyan, and the initiates over to the main continent of Tazeem. They were many miles outside of Sea Gate, but the Eldrazi density had increased significantly in the last few months. Finding roaming herds of them was not difficult.
Noyan wondered for an idle second whether Gideon was perhaps some secret, brilliant tactical mastermind, devising a facade of the idiot warrior, and using Noyan's own pride to manipulate him into devoting the roilmages to his cause. The second passed and Noyan discarded the thought. First, Noyan was perhaps the only mind brilliant enough to conceive of such a plan. Second, <NAME> was an idiot. No idiot could be that good at duplicity of such a level.
Noyan's plan was simple and elegant. Gideon balked, asking many annoying questions regarding contingencies Noyan assured him would not come into play. Eventually Gideon was reduced to communicating through raising eyebrows. He displayed a remarkable facility at raising both his left and right eyebrows. He was a man of many talents, this <NAME>.
Gideon had been most concerned about the consequences of attracting Eldrazi, considering their own small numbers. He had suggested several tiny groups but Noyan dismissed each one. They needed a group large enough to provide a suitable stage for performance. On an already-desiccated plain, they found an isolated group with several hundred of the creatures, scions and drones, and a few larger creatures that Noyan labeled as "direct lineage" to Ulamog itself.
#figure(image("010_Shaping an Army/06.jpg", width: 100%), caption: [Plains | Art by Vincent Proce], supplement: none, numbering: none)
The initiates were nervous and excited as they spread in a wide circle on the plain. Though, to be fair, they were nearly always nervous and excited. It wasn't even their first time dealing with Eldrazi . . . that was just a function of life on Zendikar now. But it was going to be the first time they were using their magic in conjunction with each other to fight the Eldrazi. This was to be their first live test.
While the initiates yelled at each other and themselves in bizarre rituals of preparation, <NAME> was motionless. Smooth, poised, and unsurprisingly, silent. As the first Eldrazi began to gather, glowing, supple metal blades appeared out of some mechanism in Gideon's hand. Noyan rolled his eyes in disbelief. He wanted to slap Gideon, but would probably end up cutting off his own hand. What sane, intelligent person has blades coming out of one's hand?
Noyan had thought he would have to generate some form of magical beacon to draw the Eldrazi, but there proved no need. The Eldrazi slowly began swarming in the direction of Gideon and Noyan, ignoring the initiates spread wide. Having never encountered this reaction before on his own, Noyan thought the most reasonable explanation was the Eldrazi found Gideon as obnoxious as he did.
Perhaps the Eldrazi were intelligent after all.
Gideon stared at Noyan. "At what point does being a roilmage involve using the Roil? There are a lot of Eldrazi out there." Even obnoxious idiots sometimes had a point. Noyan spread his hands out wide, and gave the signal for his initiates to begin their exercise. In class they called it "working the circle." The initiates began their conversations with the Roil, each in their own way. Some spoke to the earth and some to the air. While there was no large body of water, some roilmages spoke to the ever-present water inside the earth.
It was time for Noyan to begin his own magic.
#emph[Feel the irritation. It is the mosquito in the night, the itch between your shoulder-blades, the raw pain that never heals. It is the sneeze that will not come, the morsel of food stuck in your teeth, the crying of the child not yours. Feel it.]
Noyan was barely aware of the outside world, only having glimpses at the edge of consciousness as Gideon whirled and twirled snapping glowing blades in a kaleidoscopic display of mastery that even if Noyan was able to pay full attention, he was sure he'd find pretentious and boring. The Eldrazi were pressing, and Gideon kept them at bay.
#emph[Good boy,] he tried to say, but the demands of the Roil pressed upon him.
Every wrong interaction of the day, every missed note and awkward move, every single word emitted from the dark hole of Gideon's mouth, every mote of alienation and bitterness, <NAME> swept up and gathered within himself. This is what the earth felt, this is what Zendikar felt, when the awful alien touch of the Eldrazi fell upon it.
In the wider circle, the initiates had achieved connections with parts of the Roil. The earth between the initiates and Noyan rumbled and shook, the air gusted and moaned, and the initiates moved earth, air, and water in a semi-circular pattern, back and forth. #emph[Swoosh, swoosh] , the ground shifted and heaved as it tried to rotate along the wide circle. The initiates began aligning their movements and timing, and the large circle of earth surrounding Gideon and Noyan began to rotate in one direction and then the other.
#figure(image("010_Shaping an Army/07.jpg", width: 100%), caption: [Inspired Charge | Art by <NAME>ai], supplement: none, numbering: none)
The Eldrazi were driven into a frenzy by the earth shifting and rumbling beneath them. No longer lethargic, they buzzed in intensity as they hurled themselves at Gideon and Noyan. Gideon's skin glowed and golden sparks from an invisible shield of energy shimmered constantly as he became a never-ending blur of savage cuts and thrusts. An Eldrazi tentacle lashed out at Noyan's face, and somehow Gideon was there first, knocking it away and decapitating the Eldrazi in one near-impossible motion. The larger Eldrazi were almost on top of them, and Gideon's breath labored. "If you're going to do anything to actually kill Eldrazi, I suggest you do it soon. I can't keep you alive forever."
The Roil was close. So close. It wanted to lash out, but Noyan wouldn't let it, not yet. The irritation within Noyan, within the earth, grew. The initiates had melded their magics into one autonomous beat, finally finding the rhythm that had eluded them all morning. #emph[Swoosh, swoosh] as rock unmoored and wind loosened. The earth wanted to destroy them all, eliminate every blighted touch from the hand of decay; the Roil surged and bucked, desperate to find relief.
#figure(image("010_Shaping an Army/08.jpg", width: 100%), caption: [Endless One | Art by <NAME>], supplement: none, numbering: none)
An Eldrazi twice as tall as Gideon brought a limb down thicker than a tree trunk on top of them. Gideon raised his arm and the massive limb crashed into his energy shield, igniting a flurry of golden sparks. But Gideon sunk down to the ground on one knee, and the Eldrazi giant brought his arm around for another blow.
"Now mage!" Gideon snarled.
#emph[Strive, shake, heave, destroy.]
"You are invulnerable, right?" Noyan shouted. Gideon nodded.
#emph[Strive, shake, heave, destroy.]
#figure(image("010_Shaping an Army/09.jpg", width: 100%), caption: [<NAME>, Roil Shaper | Art by <NAME>opinski], supplement: none, numbering: none)
Noyan cast his spell. All the earth between Noyan and the circle of initiates disintegrated underneath in a circling vortex of wind, magma, and rock. Where once were hundreds of feet of solid ground, there now was . . . nothing. The Eldrazi and Gideon plummeted through a storm of falling debris, and Noyan could see the sparking golden light of Gideon's shield shimmer constantly as Gideon fell.
#figure(image("010_Shaping an Army/10.jpg", width: 100%), caption: [Boiling Earth | Art by Titus Lunter], supplement: none, numbering: none)
What had been the din of chaotic battle was now replaced by stunning silence. Noyan stood, alone, on a patch of earth barely two feet square. Hundreds of feet in every direction was now chasm, a great void separating himself from his initiates, who looked on in disbelief at what they had wrought. The initiates looked at the chasm, and at each other, and began cheering. As the debris settled far below, they could make out the corpses of the Eldrazi plus a lone figure surrounded by golden glowing sparks as rocks and fire made their final falls.
Noyan smiled. It was a fantastic moment. The only thing he regretted was Gideon had not screamed once his entire way down. #emph[What did it take to perturb that man?]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
"You have a powerful team, <NAME>. We would love to have all of you at Sea Gate with us. We need you."
The initiates . . . no, that was not quite fair. The #emph[roilmages ] gathered around them cheered. After Noyan and Gideon had both been rescued, they had all regrouped on a cliff settlement near Coralhelm. Noyan beamed. Finally the man recognized the true worth of the roilmages! It was hard not to feel smug. "I suppose it was a good idea for you to come see us after all."
"Yes, it was." Gideon looked at Noyan intently, but there was something in his eyes that made it awkward for Noyan to try and make fun of it. "I am sorry, Noyan, for any doubt. That display was amazing." Gideon smiled, and Noyan stood there, silent and stunned at how proud he felt, just because an idiot warrior had praised him.
The roilmages began bringing out food and drink. There was going to be a big party tonight for all of them to celebrate their victory. The Eldrazi would still be there tomorrow.
Gideon motioned at the kitesailer who brought him to begin preparations for leaving. "I have to get back to Sea Gate. All of you will come tomorrow?"
"Yes, <NAME>. We will be there." Noyan wanted to say more, ideally something quippy and sharp, but he didn't have the words. All his quips were strangely gone.
Gideon turned back. "One question, though, before I leave. In the story you were telling me, the one about Cosi and Ula, who ended up with the pearl?"
Noyan smirked. "Cosi did, of course. That's how most Cosi stories end up, with Cosi convincing Ula to do something that Ula hadn't initially wanted to do, and then Cosi ends up benefiting from it." Noyan loved the Cosi stories.
Gideon smiled. "Smart one, that Cosi. Too smart for me, anyway. I'll see you at Sea Gate, Noyan." Gideon turned and strapped himself into the kitesail harness, and the kitesailer and Gideon began their ascent back to Sea Gate. Noyan watched them go, bemused at Gideon's open admission of his limited mental faculties, and pondering the strange easy smile on Gideon's face.
It was only later in the evening, after much alcohol and further deliberation on Gideon's last words, that <NAME>'s elation turned into a very flat stare.
#figure(image("010_Shaping an Army/11.jpg", width: 100%), caption: [Prairie Stream | Art by <NAME>aquette], supplement: none, numbering: none)
https://github.com/mattyoung101/uqthesis_eecs_hons | https://raw.githubusercontent.com/mattyoung101/uqthesis_eecs_hons/master/uqthesis.typ | typst | ISC License | // Main thesis file.
#import "util/macros.typ": *
// Page layout
#set page(
margin: (
left: 30mm,
right: 30mm,
top: 30mm,
bottom: 30mm
)
)
// Colour links blue like LaTeX
#show cite: set text(fill: blue)
#show link: set text(fill: blue)
#show ref: set text(fill: blue)
#show footnote: set text(fill: blue)
// Font size
#set text(size: 12pt)
// Display
#set list(indent: 12pt)
#set math.equation(numbering: "(1)")
// Title
#include "pages/title.typ"
#set page(numbering: "i")
#pagebreak()
// Justify all paragraphs but only after the title page
#set par(justify: true)
// Initially show headings but without the numbering or the "Chapter" string
#show heading.where(level: 1): it => uqHeaderNoChapter(it)
// Letter of originality
#include "pages/letter.typ"
#pagebreak()
// Dedication
#include "pages/dedication.typ"
#pagebreak()
// Acknowledgements
#include "pages/acknowledgements.typ"
#pagebreak()
// Abstract
#include "pages/abstract.typ"
#pagebreak()
// Declaration
// #include "pages/declaration.typ"
// #pagebreak()
// Lists of things
#include "pages/contents.typ"
// Reset page counter (page counting starts from one on this page)
#set page(numbering: "1")
#pagebreak()
// Set page counter to Arabic numerals
#counter(page).update(1)
// From now on, display top level headers as "Chapter XX"
#set heading(numbering: "1.")
#show heading.where(level: 1): it => uqHeaderChapter(it)
// Thesis chapters
#include "pages/chapters/intro.typ"
#pagebreak()
#include "pages/chapters/method.typ"
#pagebreak()
#include "pages/chapters/results.typ"
#pagebreak()
#include "pages/chapters/conclusion.typ"
#pagebreak()
// Return to headers without chapter numbers
#set heading(numbering: none)
#show heading.where(level: 1): it => uqHeaderNoChapter(it)
// Appendices
#include "pages/appendices/example.typ"
#pagebreak()
// Bibliography
#include "pages/bibliography.typ"
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/cetz/0.2.0/src/anchor.typ | typst | Apache License 2.0 | #import "deps.typ"
#import deps.oxifmt: strfmt
#import "util.typ"
#import "intersection.typ"
#import "drawable.typ"
#import "path-util.typ"
#import "matrix.typ"
#import "vector.typ"
// Compass direction to angle
#let compass-angle = (
east: 0deg,
north-east: 45deg,
north: 90deg,
north-west: 135deg,
west: 180deg,
south-west: 225deg,
south: 270deg,
south-east: 315deg,
)
#let compass-directions = compass-angle.keys()
#let compass-directions-with-center = compass-directions + ("center",)
// Path distance anchors
#let path-distances = (
start: 0%,
mid: 50%,
end: 100%,
)
#let path-distance-names = path-distances.keys()
// All default anchor names of closed shapes
#let closed-shape-names = compass-directions-with-center + path-distance-names
/// Calculates a border anchor at the given angle by testing for an intersection between a line and the given drawables.
///
/// This function is not ready to be used widely in its current state. It is only to be used to calculate the cardinal anchors of the arc element until properly updated. It will panic if no intersections have been found.
///
/// - center (vector): The position from which to start the test line.
/// - x-dist (number): The furthest distance the test line should go in the x direction.
/// - y-dist (number): The furthest distance the test line should go in the y direction.
/// - drawables (drawables): Drawables to test for an intersection against. Ideally should be of type path but all others are ignored.
/// - angle (angle): The angle to check for a border anchor at.
/// -> vector
#let border(center, x-dist, y-dist, drawables, angle) = {
x-dist += util.float-epsilon
y-dist += util.float-epsilon
if type(drawables) == dictionary {
drawables = (drawables,)
}
let test-line = (
center,
(
center.at(0) + x-dist * calc.cos(angle),
center.at(1) + y-dist * calc.sin(angle),
center.at(2),
)
)
let pts = ()
for drawable in drawables {
if drawable.type != "path" {
continue
}
pts += intersection.line-path(..test-line, drawable)
}
if pts.len() == 1 {
return pts.first()
}
assert(pts.len() != 0, message: repr(angle))
// Find the furthest intersection point from center
return util.sort-points-by-distance(center, pts).last()
}
/// Handle path distance anchor
#let resolve-distance(anchor, drawable) = {
if type(anchor) in (int, float, ratio) {
return path-util.point-on-path(drawable.segments, anchor)
}
}
/// Handle border angle anchor
#let resolve-border-angle(anchor, center, rx, ry, drawable) = {
return border(center, rx, ry, drawable, anchor)
}
/// Handle named compass direction
#let resolve-compass-dir(anchor, center, rx, ry, drawable, with-center: true) = {
if type(anchor) == str {
return if anchor in compass-directions {
border(center, rx, ry, drawable, compass-angle.at(anchor))
} else if with-center and anchor == "center" {
center
}
}
}
// Handle anchor for a line shape
//
// Path anchors are:
// - Distance anchors
// - Ratio anchors
#let calculate-path-anchor(anchor, drawable) = {
if type(drawable) == array {
assert(drawable.len() == 1,
message: "Expected a single path, got " + repr(drawable))
drawable = drawable.first()
}
if type(anchor) == str and anchor in path-distance-names {
anchor = path-distances.at(anchor)
}
return resolve-distance(anchor, drawable)
}
// Handle anchor for a closed shape
//
// Border anchors are:
// - Compass direction anchors
// - Angle anchors
#let calculate-border-anchor(anchor, center, rx, ry, drawable) = {
if type(drawable) == array {
assert(drawable.len() == 1,
message: "Expected a single path, got " + repr(drawable))
drawable = drawable.first()
}
if type(anchor) == str {
return resolve-compass-dir(anchor, center, rx, ry, drawable)
} else if type(anchor) == angle {
return resolve-border-angle(anchor, center, rx, ry, drawable)
}
}
/// Setup an anchor calculation and handling function for an element. Unifies anchor error checking and calculation of the offset transform.
///
/// A tuple of a transformation matrix and function will be returned.
/// The transform is calculated by translating the given transform by the distance between the position of `offset-anchor` and `default`. It can then be used to correctly transform an element's drawables. If either is none the calculation won't happen but the transform will still be returned.
/// The function can be used to get the transformed anchors of an element by passing it a string. An empty array can be passed to get the list of valid anchors.
///
/// - callback (function): The function to call to get an anchor's position. The anchor's name will be passed and it should return a vector (str => vector).
/// - anchor-names (array<str>): A list of valid anchor names. This list will be used to validate an anchor exists before `callback` is used.
/// - default (str): The name of the default anchor.
/// - transform (matrix): The current transformation matrix to apply to an anchor's position before returning it. If `offset-anchor` and `default` is set, it will be first translated by the distance between them.
/// - name (str): The name of the element, this is only used in the error message in the event an anchor is invalid.
/// - offset-anchor: The name of an anchor to offset the transform by.
/// - border-anchors (bool): If true, add border anchors (compass and angle anchors)
/// - path-anchors (bool): If true, add path anchors (distance anchors)
/// - center (none,vector): Center of the path `path`, used for border anchor calculation
/// - radii (none,tuple): Radius tuple used for border anchor calculation
/// - path (none,drawable): Path used for path and border anchor calculation
/// -> (matrix, function)
#let setup(callback,
anchor-names,
default: none,
transform: none,
name: none,
offset-anchor: none,
border-anchors: false,
path-anchors: false,
center: none,
radii: none,
path: none) = {
// Passing no callback is valid!
if callback == auto {
callback = (anchor) => {}
}
// Add enabled anchor names
if border-anchors {
assert(center != none and radii != none and path != none,
message: "Border anchors need center point, radii and the path set!")
anchor-names += compass-directions-with-center
}
if path-anchors {
assert(path != none,
message: "Path anchors need the path set!")
anchor-names += path-distance-names
}
// Populate callback with auto added
// anchor functions
if border-anchors or path-anchors {
callback = (anchor) => {
let pt = callback(anchor)
if pt == none and border-anchors {
pt = calculate-border-anchor(
anchor, center, ..radii, path)
}
if pt == none and path-anchors {
pt = calculate-path-anchor(
anchor, path)
}
return pt
}
}
if default != none and offset-anchor != none {
assert(
offset-anchor in anchor-names,
message: strfmt("Anchor '{}' not in anchors {} for element '{}'", offset-anchor, repr(anchor-names), name)
)
let offset = matrix.transform-translate(
..vector.sub(callback(default), callback(offset-anchor)).slice(0, 3)
)
transform = if transform != none {
matrix.mul-mat(
transform,
offset
)
} else {
offset
}
}
// Anchor callback
let calculate-anchor(anchor) = {
if anchor == () {
return anchor-names
}
if anchor == "default" {
assert.ne(default, none, message: strfmt("Element '{}' does not have a default anchor!", name))
anchor = default
}
let out = callback(anchor)
assert(
out != none,
message: strfmt("Anchor '{}' not in anchors {} for element '{}'",
anchor, repr(anchor-names), name)
)
return if transform != none {
util.apply-transform(
transform,
out
)
} else {
out
}
}
return (if transform == none { matrix.ident() } else { transform }, calculate-anchor)
}
https://github.com/katamyra/Notes | https://raw.githubusercontent.com/katamyra/Notes/main/Compiled%20School%20Notes/CS2110/Modules/DigitalLogic.typ | typst | #import "../../../template.typ": *
= Digital Logic Structures
== Sequential Logic
=== K-Maps
#note[
*Karnaugh maps* are an easy way to make a truth table and convert it into a circuit using the least number of gates.
]
==== K-Maps Setup
Karnaugh maps are setup using gray code, which means that only one variable changes between two-adjacent cells. If you examine the values across the top from left to right, or down the side from top to bottom, you'll also see that the activated bits follow a pattern like 00, 01, etc
#definition(footer: "In this case, we differ by only 1 bit at time. So for example, 01 -> 10 is NOT a gray code, because 2 bits had to be flipped")[
A *gray code* is a binary numerical system that is ordered such that two subsequent values only differ in one bit. It is also known as reflected binary code because the codes are reflected in the first and last n/2 values
]
==== K-Maps Grouping Rules
+ We want the biggest groups where the size of the groups are a power of 2
+ We want the least number of groups
+ We can build groups with adjacent cells including wrapping around corners
If something doesn't matter we can just put it to X and we can group with it if wanted.
You make the biggest groups possible, and you analyze the groups for what values don't change and use that to create a logical expression. From there, it is easy to turn the logical expression into circuits.
=== Level Triggered Logic
There are two types of sequential logic: *level triggered logic and edge triggered logic*. Both rely on the signals of a clock, which is a circuit component that oscillates between a 1 and 0 at a set frequency to help synchronize operations in a circuit.
The difference is when the output changes based on the input signal. In *level triggered logic*, when the clock has a 1 output, the circuit output will match the input, and when the clock has 0 output, the circuit output stays the same.
Think of the changes happening when the clock is 1 and level.
#blockquote[
RS Latches, D Latches, and memory are all level triggered
]
== Basic Storage Elements
- The other kind of storage element are those that involve the storage of information and those that do not
=== The RS Latch
#definition[
The *RS Latch* can store one bit of information, a 1 or a 0. Generally, two 2-input NAND gates are connected such that the output of each is connected to one of the inputs of the other. The other inputs are usually held to be zero.
Setting the latch to store a 1 is known as *setting* the latch, while setting the latch to store a 0 is referred to *resetting* the latch
]
#image("../Images/RSLatch.png", width: 100%-200pt)
#definition[
The *quiescent* (or quiet) state of a latch is the state when the latch is storing a value, either 0/1, and nothing attempts to change that value.
This happens when S and R are both equal to 1. So as long as the inputs S and R remain as 1, the state of the circuit will not change.
]
#note(footer: "Logic behind setting to 1: If we set S to 0 for a brief period of time, this causes a and thus A to be equal to 1. Since R is 1 and A is 1, b must be 0, This causes B to be 0, which makes a equal to 1 again. Now when we return S to 1, a remains the same since B is also 0, and 1 0 input to a nand gate is enough to make sure that the NAND gate stays at 1.")[
*Setting the latch to a 1 or 0*
The latch can be sent to 1 by momentarily setting S to 0, provided that we keep the value of R at 1.
Similarly, we can set the patch to 0 by setting R to zero (known as clearing or resetting), provided we keep the value of S at 1.
]
When a digital circuit is powered on, the latch can be in either of its two states, 0 or 1. It does not matter which state since we never use that information until after we have set it to 1 or 0.
=== The Gated D Latch
#definition[
The D latch helps control when a latch is set and when it is cleared. In the following figure, the latch is set to the value of D whenever WE is asserted. When WE is not asserted, the outputs S and R are both equal to 1.
When WE is momentarily set to 1, exactly one of the outputs S or R is set to 0 depending on the value of D. If D is set to 1, the S is set to 0, else
]
#image("../Images/DLatch.png")
== The Concept of Memory
=
*Memory* is made up of a (usually large) number of locations, each uniquely identifiable and each having the ability to store a value. We refer to the unique memory location as its _address_. We refer to the number of bits of information stored ine ach location as its _addressability_
=== Address Space
#definition[
We refer to the total number of uniquely identifiable locations as the *memory's address space*.
]
For example, 2GB memory has two billion memory locations.
=== Addressability
#definition[
*Addressability*: the number of bits stored in each memory location.
]
== $2^2$ by 3-Bit Memory Example
In this case, the memory has an address space of 4 locations and an addressability of three bits. Since it is $2^2$ memory, it takes two bits to specify the address. We specify it using A[1:0]. Since its addressability is 3, that means in each location, it stores 3 bits worth of information/data.
#note[
When specifying a memory location in terms of *A[high:low]*, we are starting from the rightmost spot as index of 0. This means we are looking at the sequence of $h - l + 1$ bits such that high is the leftmost bit number, and low is the rightmost bit number in the sequence.
]
Access of memory first starts with _decoding the address bits_, using a decoder. We also have WE, which defines whether we are in write-enable mode of not.
The input of A[1:0] defines what the decoder has to select for the correct _word line_. From there, the decoder outputs a line of 1 which is anded across all three D-latches producing the output of that position.
== State
#definition[
*State*: a snapshot of that system in which all relevant items are explicitly expressed.
Ex: for a lock, the state would be open, or 0/1/2 correct operations leading to opening the lock.
]
#definition[
A *finite state machine* consists of the following elements
+ A finite number of states
+ A finite number of external inputs
+ A finite number of external outputs
+ An explicit specification of all state transitions
+ An explicit specification of what determines each external output value
]
The state machines we have talked about so far are *asynchronous*, because there is no fixed amount of time in between when these inputs should be fed into the state machine. On the other hand, a *synchronous* state machine (such as most computers) have a fixed amount of time in between inputs.
#note[
The control for the fixed time between state machine changes is controlled by a clock, _whos values alternate between 0 volts and some specified fixed voltage_
]
== Datapath of LC-3
#definition[
*Datapath of the LC-3* consists of all the logic structures that combine to process information at the core of the computer.
] |
|
https://github.com/rxt1077/it610 | https://raw.githubusercontent.com/rxt1077/it610/master/markup/exercises/dev-env.typ | typst | #import "/templates/exercise.typ": exercise, code, admonition
#show: doc => exercise(
course-name: "Systems Administration",
exercise-name: "Creating a Developer Environment",
doc,
)
== Goals
- Create a multi-container development environment using a container orchestration tool
== Background
Imagine that you are a recent hire for a frog-themed startup named ElFroggo.
You are part of the operations team and you recieve the following email:
#quote(attribution: [Susan J. Developer], block: true)[
Hello and Welcome to ElFroggo,
I'm looking for options for a quick interface to our frog database front-end and I'm thinking about using Flask and PostgreSQL.
Do you have some way I could test this environment on my Desktop?
I'd like a couple of rows of test data in a table called `Frogs` with the columns `ID`, `Name`, `ScientificName`, `Color`.
I'd also like a Flask example that shows how to use the `psycopg2-binary` package to connect to the db.
Thanks,
]
Design a system that meets Susan's needs using either Docker Compose or Kubernetes.
== Deliverables
Submit the files that you would give to Susan.
== Resources
- #link("https://www.psycopg.org/docs/install.html", [Psoycopg documentation])
- #link("https://www.docker.com/blog/how-to-use-the-postgres-docker-official-image/", [How to Use the Postgres Docker Official Image])
- #link("https://stefanopassador.medium.com/docker-compose-with-python-and-posgresql-45c4c5174299", [Docker Compose with Python and PostgreSQL])
|
#show: doc => conf(
title: "Homework Example",
subtitle: "Assignment Group 5",
authors: (
(name: "Alice", email: "<EMAIL>", student_number: "18859"),
(name: "Bob", email: "<EMAIL>", student_number: "45893"),
),
language: "en",
date_format: "[month]/[day]/[year]",
show_page_numbers: true,
doc
)
= Homework Example
== Task 1
#lorem(75)
== Task 2
#lorem(200)
|
https://github.com/Servostar/dhbw-abb-typst-template | https://raw.githubusercontent.com/Servostar/dhbw-abb-typst-template/main/src/pages/confidentiality-statement.typ | typst | MIT License | // .--------------------------------------------------------------------------.
// | Confidentiality Statement |
// '--------------------------------------------------------------------------'
// Author: <NAME>
// Edited: 28.06.2024
// License: MIT
#let new_confidentiality_statement_page(config) = (
context {
pagebreak(weak: true)
let thesis = config.thesis
let author = config.author
if text.lang == "de" [
#heading(level: 1, "Sperrvermerk")
] else if text.lang == "en" [
#heading(level: 1, "Confidentiality Statement")
]
if text.lang == "de" [
Der Inhalt dieser Arbeit mit dem Thema
] else if text.lang == "en" [
The content of this work with the topic
]
v(1em)
set align(center)
text(weight: "bold", thesis.title)
if thesis.subtitle != none {
linebreak()
thesis.subtitle
}
set align(left)
v(1em)
set par(justify: true)
if text.lang == "de" [
darf weder als Ganzes noch in Auszügen Personen außerhalb des Prüfungsprozesses und des Evaluationsverfahrens zugänglich gemacht werden, sofern keine anderslautende Genehmigung der Ausbildungsstätte vorliegt.
] else if text.lang == "en" [
may not be made accessible to persons outside the examination process and the evaluation procedure, either as a whole or in excerpts, unless otherwise authorized by the training institution.
]
set align(horizon)
grid(
// set width of columns
// we need two, so make both half the page width
columns: (50%, 50%),
row-gutter: 0.75em,
align(left, {line(length: 6cm)}),
align(left, {line(length: 6cm)}),
align(left, if text.lang == "de" [ Ort, Datum ] else if text.lang == "en" [ Place, Date ] else { panic("no translation for language: ", text.lang) }),
align(left, if text.lang == "de" [ Unterschrift ] else if text.lang == "en" [ Signature ] else { panic("no translation for language: ", text.lang) }))
}
)
|
https://github.com/sicheng1806/pytez | https://raw.githubusercontent.com/sicheng1806/pytez/main/doc/README.typ | typst | == 编程命名约定
+ 对象属性命名一般使用小写,命名写全,一般不使用下划线连接单词, 如 `maxlength`
+ 私有属性前加下划线,如 `_previous`
+ 类属性和全局变量命名大写,可加下划线, 如 `MAX_NUM`
+ 方法写简称,加下划线,一般以谓词结构命名,如 `get_ctx,set_ctx`
+ 与内部其他部分属性相配合的属性都设为私有属性,相应的设立对应的只读属性用于读取使用。如果需要更改,一般使用方法进行更改和读写。
== 参数设置
=== 位置的表示方法
用户操作的坐标为相对数据坐标,对应的表示方法:
+ XY直角坐标系表示
坐标可以使用字典表示, 如`{x:1};{y:1};{x:1,y:2}`,也可以使用二元数组表示 `(a,b)`。如果支持z轴,则可以使用三维数组表示。
+ 当前坐标
可以使用空字典表示,或者空数组表示。 如 `{} 、 ()`
+ 相对坐标
相对于当前坐标的表示,可以使用字典中的rel键表示, 如 `{rel:(a,b)}; {rel:{x:1,y:2}};`
+ 极坐标
极坐标使用角度和半径表达。角度由角度字符串 `"30deg"` 表示,弧度由数字表示。
可以使用字典和二元数组表示,如 `{angle:"30deg",radius: 1}; ("30deg",1)`
+ 绝对数据坐标系
可以直接指定绝对数据坐标系,这点在使用锚点系统中十分便利,使用字典中abs键表示, 如 `{abs:(a,b)}`
+ 指定当前位置不更新
使用update键可以指定当前位置不发生改变,如 `{update: False}` 。默认为True会发生改变。
=== 样式的表示方法
略
=== 单位的表示方法
略
=== 锚点的表示方法
略
== Drawing workflow
A typical drawing function goes through the following steps:
+ Determine the positions of the coordinate points
  pytez's coordinate representation is a stateful coordinate system. Its basis is a two-dimensional Cartesian system called the absolute data coordinate system; on top of it, user input is given in the relative data coordinate system. Before any settings are made the two systems coincide, and the relation between them is determined by the origin position plus a transformation matrix. The statefulness also shows in the saved current position, which is expressed in relative data coordinates and initially sits at the origin. Both the origin and the transformation matrix can be set through methods; the current position changes as drawing proceeds, can also be set through methods, and can be referred to by passing an empty array.
  pytez also supports establishing a sub-coordinate system inside the current relative system by specifying its position, size, and units in the relative data coordinate system. This is essentially a scaling transform of the coordinates and can be undone by restoring the transformation matrix.
  See:
  For now everything is done with transformation matrices; the range of transforms they can express, and any trade-offs, will be studied later. A plain linear transformation matrix cannot translate by an absolute amount. There are two ways around this: one is to raise the coordinate dimension by one, writing a point as `(x,y,1)` so that the transformation matrix also gains a dimension; the other is to set an origin. Here we adopt the same 4-dimensional transformation matrix as typst.
  #link("https://zh.wikipedia.org/wiki/%E5%8F%98%E6%8D%A2%E7%9F%A9%E9%98%B5")
  Note: the 4-dimensional matrix is adopted precisely to support translation.
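The dimension-plus-one idea can be shown in a few lines of Python (pure illustration, using a 3x3 matrix for 2-D points rather than pytez's 4-dimensional one):

```python
def mat_mul(a, b):
    """Multiply two matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def translation(dx, dy):
    # With points written as column vectors (x, y, 1), a translation
    # becomes an ordinary matrix product -- something a plain 2x2
    # linear map cannot express.
    return [[1, 0, dx],
            [0, 1, dy],
            [0, 0, 1]]

def apply(m, point):
    x, y = point
    col = mat_mul(m, [[x], [y], [1]])
    return (col[0][0], col[1][0])
```

Composed transforms are again a single matrix product, which is why a matrix history is enough to undo them.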
+ Determine the drawing size and type
  Drawing sizes in pytez are expressed in length units; the physical size of one unit can be set when the Canvas class is initialized, and later drawing is generally based on that unit length.
  matplotlib offers drawing in several coordinate systems, while typst offers several absolute length units. matplotlib's design is the more powerful, and typst's multiple absolute units are also convenient; in the initial design only sizes relative to the unit length are supported.
+ Determine the drawing transform
  The final shape of a figure is also affected by matplotlib transforms. Most work can be done without them, but they are needed when designing figures against matplotlib's various coordinate systems.
  pytez adds no design of its own here: transforms are passed through as parameters of the patch classes, and using them requires knowing matplotlib's transform attribute.
+ Determine the drawing style
  pytez's basic shapes are closed figures with a fill color and an outline; plain line shapes can also be drawn. A 2-D closed figure has the two style items fill and stroke; a 1-D line only has stroke, and fill is ignored.
  Style in the broad sense also covers a shape's default size parameters, e.g. a circle's default radius of 1, or a marker's arrow shape and length. So the size-and-type step above ultimately also acts through the broad style.
+ Create the shape class
  The shape class is created from the coordinates, the broad style, and mpl patch attributes. When a patch attribute conflicts with another attribute, the other attribute takes precedence.
+ Add to the Axes
  The created shape is added to the Axes' children and handed over to matplotlib.
== Design of the main types
=== Design of the Canvas class
pytez supports many features; the Canvas class is designed with all of them in mind, and its goal is to simplify the drawing of geometric figures rather than data plotting.
+ Data coordinate system
  Canvas has the following attributes and methods to support coordinate representation.
  Attributes, including private attributes not intended for direct use:
  + \#superseded by the context manager:: `Canvas._curpos`: the current coordinate position
  + \#superseded by the context manager:: `Canvas._transform` \#replaced by the transformation-matrix class:: `Canvas._matrix`: the transformation matrix
  + \#superseded by the dimension-plus-one transformation matrix:: `Canvas._origin`: position of the relative system's origin in the absolute system
  + \#dropped; transformation matrices are used throughout, and computational optimizations are handled inside the matrix class once it exists:: `Canvas._transformed`: whether the transformation matrix is in effect
  + `Canvas._prematrixes`: previously used transformation matrices; at most 15 entries are kept by default
  + `Canvas._maxprematrixes`: maximum length of the matrix history, 15 by default; may be read and set directly and must be an integer
  + `Canvas._bounds`: bounds of the relative data coordinate system; `None` (the default) means unbounded; values must be positive
  + `Canvas._context`: a context object holding the Canvas's current state, including length, transform, and debug
  + `Canvas.context`: readable, partially writable; returns the context manager (`set_ctx`)
  + `Canvas.curpos`: read/write; the current position (`moveto`)
  + `Canvas.transform`: read/write; the transformation matrix (`set_transform`)
  + `Canvas.bounds`: read/write; the bounds (`set_bounds`)
  + `Canvas.maxprematrixes`: read/write; the capacity of the matrix history
  + `Canvas.prematrixes`: read-only; the sequence of cached matrices
  Methods:
  + `Canvas.get_curpos()`: get the current position in relative data coordinates
  + `Canvas.moveto(pos)`: set the current position in relative data coordinates
  + `Canvas.set_transform(mat)`: set the transformation matrix; accepts a 4x4 matrix or a transformation-matrix object
  + `Canvas.rotate(angle,origin=(0,0))`: rotate about an axis through a point; with only 2-D support the axis defaults to z, and if 3-D is supported `angle` may be a dictionary naming the axes, e.g. `{x:"45deg",y:"15deg"}`
  + `Canvas.translate(vect)`: treat the given coordinate as a vector and translate by it
  + `Canvas.scale(s)`: scale by a dictionary of factors, e.g. `{x:0.5,y:0.5}`
  + `Canvas.set_origin(pos)`: set the origin of the coordinate system
  + `Canvas.set_viewport(from,to,bounds)`: set up a sub-coordinate system; this is also implemented with a transformation matrix, so the bounds are unrestricted
  + `Canvas.set_bounds(bounds=None)`: set the bounds of the current system; `None` means no limit
  + `Canvas.get_origin()`: get the origin position in absolute coordinates
  + `Canvas.get_ctx()`: get the context, i.e. the state of the coordinate system
  + `Canvas.set_ctx(mat,length,debug,zorder)`: set the context, i.e. the state of the coordinate system
+ Layers
  matplotlib itself supports layers through the zorder parameter passed when drawing. On top of that, pytez lets the current layer be set through a layer-setting function.
  Attributes, including private attributes not intended for direct use:
  + `Canvas._zorder`: index of the current layer
  + `Canvas.zorder`: read-only
  Methods:
  + `Canvas.set_layer(zorder)`: this differs from typst, because typst parameters can receive code blocks (content); here the current layer is instead kept as a state attribute.
+ Styles
  pytez styles can be passed to drawing functions or set through a unified interface. To support the unified interface, the Canvas class must record style state.
  Styles are handled by a style-manager class whose job is to read styles from the default style file, hold the current style temporarily, return the current style, and save the current style back to the default style file or another style file.
  Attributes, including private attributes not recommended for direct use:
  + `Canvas._curstyle`: the current style, an instance of the style-manager class
  Methods:
  + `Canvas.get_style()`: get the current default style
  + `Canvas.set_style(*,fill=None,stroke=None,**style_special)`: set the current default style
+ Anchors and the naming system
  pytez supports using anchors to obtain the positions of particular points; the Canvas class stores anchor objects in a dictionary.
  Every class that supports anchors is a subclass of AnchorBase; Anchor is a simple anchor.
  Attributes, including private attributes not intended for direct use:
  + `Canvas._anchors`: a dictionary holding all anchors, empty by default
  + `Canvas.children`: read-only; a list of shapes and anchors, needed when you want to reach elements that were never named
  Methods:
  + `Canvas.get_anchors()`: return the list of anchor names
  + `Canvas.anchor(name,pos)`: create a new anchor (of type Anchor) at the given position
  + `Canvas.copy_anchors(element_from,filter=None)`: copy anchors between Canvas objects
  + `Canvas.get_namedchild(name)`: get a named element, either an anchor or a shape
+ Interaction with mpl
  pytez can be seen as an extension of mpl toward diagram drawing; it simply provides Canvas, a distinctive wrapper around the Axes class. Canvas does not have Axes' attributes; instead it stores CanvasAxes, an Axes subclass, in its ax attribute. \
  Likewise, CanvasAxes can reach its canvas through the canvas attribute.
  Attributes, including private attributes not intended for direct use:
  + `Canvas._ax`
  Methods:
  + `Canvas.get_ax()`
=== Design of the transformation-matrix class
The transformations between the absolute and relative coordinate systems involve a certain amount of computation; that computation is encapsulated in the transformation-matrix class `TransformationMatrix`.
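A minimal sketch of what such a class might encapsulate, using a 3x3 homogeneous matrix for brevity (the real class, per the text, would use typst-style 4x4 matrices, and this is not pytez's actual implementation):

```python
class TransformationMatrix:
    """Wraps a 3x3 homogeneous 2-D matrix; composition and application
    stay inside the class so callers never touch raw nested lists."""
    def __init__(self, m=None):
        self.m = m if m is not None else [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

    def compose(self, other):
        # self * other: `other` is applied to a point first, then self
        a, b = self.m, other.m
        prod = [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
        return TransformationMatrix(prod)

    def apply(self, x, y):
        m = self.m
        return (m[0][0] * x + m[0][1] * y + m[0][2],
                m[1][0] * x + m[1][1] * y + m[1][2])

    @classmethod
    def translation(cls, dx, dy):
        return cls([[1, 0, dx], [0, 1, dy], [0, 0, 1]])

    @classmethod
    def scaling(cls, sx, sy):
        return cls([[sx, 0, 0], [0, sy, 0], [0, 0, 1]])
```

Caching previously used matrices (the `_prematrixes` history) then amounts to storing these objects and re-applying them to undo a viewport.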
=== Design of the shape classes
There are two kinds of shape classes: 2-D closed figures that have both a fill color and an outline, and curves that only have a line style. The fill attribute is ignored for curves.
2-D closed figures are based on mpl's Patch type, and curves on the Line2D type, so the curve class can draw not only curves but also markers and scatter points.
A shape class is not itself an Artist; it wraps one, and the corresponding Artist can be obtained through the `.artist` attribute.
Shape classes have to solve two problems: supporting anchors and creating the corresponding artist.
+ Anchors
  Shape classes are subclasses of the anchor class; see the design of the anchor class for details.
+ Creating the corresponding artist
  Because of the interface differences between pytez and matplotlib, styles and sizes have to be adapted, while matplotlib's corresponding parameters must still be supported.
+ Support for conveniently defining new shape classes |
|
https://github.com/grodino/uni-rennes-typst | https://raw.githubusercontent.com/grodino/uni-rennes-typst/main/lib.typ | typst | MIT License | #import "src/poster.typ": unirennes-poster
#import "src/slides.typ": unirennes-slides, title-slide, slide, slide-full, note
#import "src/colors.typ" as unirennes-colors |
https://github.com/yonatankremer/matrix-utils | https://raw.githubusercontent.com/yonatankremer/matrix-utils/main/src/matrix.typ | typst | MIT License | #import "complex.typ": *
//todo - determinant?, adj, inverse, rank?, row-reduce?, diagonal, row operations, is symmetric/hermitian/unitary/orth...
#let _mrows-content(m) = m.rows
#let _mtrans(m) = {
  // build the transpose by collecting each column of m as a row;
  // the previous in-place swap never reset j, swapped each pair twice,
  // and only worked for square matrices
  let rows = ()
  let i = 0
  while i < m.at(0).len() {
    let col = ()
    for row in m { col.push(row.at(i)) }
    rows.push(col)
    i += 1
  }
  return rows
}
// init mat "object"
#let minit(m) = {
let dic = custom-type("matrix")
let rows = _mrows-content(m)
let first-len = rows.at(0).len()
assert(rows.all(x => x.len() == first-len))
dic.insert("rows", rows)
dic.insert("cols", _mtrans(rows))
dic.insert("x", rows.at(0).len())
dic.insert("y", rows.len())
return dic
}
#let _mform(m) = {
let con = m.map(x => x.map(y => y))
return math.mat(..con)
}
#let _mrows(m) = minit(m).at("rows")
#let mrow(m,idx) = _mrows(m).at(idx)
#let _mcols(m) = minit(m).at("cols")
#let mcol(m,idx) = _mcols(m).at(idx)
#let mget(m,x, y) = minit(m).at("cols").at(x).at(y)
#let mx(m) = minit(m).at("x")
#let my(m) = minit(m).at("y")
#let mtrans(m) = _mform(minit(_mtrans(m)))
#let mconj(m) = {
  // conjugate transpose: transpose, then conjugate every entry
  let new = _mtrans(m).map(row => row.map(v => cconj(v)))
  return _mform(new)
}
#let madd(l,r) = {
  let x = mx(l)
  let y = my(l)
  assert(x == mx(r) and y == my(r))
  let rows = ()
  let i = 0
  while i < y {
    let row = ()
    let j = 0
    while j < x {
      // mget(m, col, row): entry in row i, column j
      row.push(cadd(mget(l,j,i), mget(r,j,i)))
      j += 1
    }
    rows.push(row)
    i += 1
  }
  return minit(rows)
}
// row operations
#let mrow-switch(m,st,nd) = {
  // type 1: swap rows st and nd
  let rows = _mrows(m)
  let temp = rows.at(st)
  rows.at(st) = rows.at(nd)
  rows.at(nd) = temp
  return rows
}

#let mrow-mul(m,row,scalar) = {
  // type 2: multiply a row by a scalar
  let rows = _mrows(m)
  rows.at(row) = rows.at(row).map(x => cmul(x,scalar))
  return rows
}
//#let mrow-addrows // type 3: add a multiple of one row to another, to do later
#let mdiaginit(..vals) = {
  // build a diagonal matrix from the given values; the previous version
  // never incremented i or j and looped forever
  let vals = vals.pos()
  let len = vals.len()
  let rows = ()
  let i = 0
  while i < len {
    let row = ()
    let j = 0
    while j < len {
      row.push(if i == j { vals.at(i) } else { 0 })
      j += 1
    }
    rows.push(row)
    i += 1
  }
  return minit(rows)
}
#let mtrace(m) = {
  let x = mx(m)
  assert(x == my(m))
  let i = 0
  let sum = cinit()
  while i < x {
    sum = cadd(sum,cinit(mget(m,i,i)))
    i += 1
  }
  return sum
}
#let mscal(l,r) = {
  // scalar multiplication: scale every entry of matrix r by the scalar l
  // (the old assert compared `.len` on a scalar and could never pass)
  r.map(x => x.map(y => cmul(l, y)))
}
#let mmul(l,r) = {
let lRows = l.fields().body.fields().rows
let rRows = r.fields().body.fields().rows
assert.eq(lRows.at(0).len(), rRows.len())
let out = (())
let row = 0
while row < lRows.len() {
let col = 0
let curRow = ()
while col < rRows.at(0).len() {
let idx = 0
let curVal = 0
while idx < lRows.at(0).len() {
curVal += float(lRows.at(row).at(idx).text) * float(rRows.at(idx).at(col).text)
idx += 1
}
curRow.push(curVal)
col += 1
}
out.push(curRow)
row += 1
}
let con = out.map(x => x.map(y => y))
return math.mat(..con)
}
#let mmul2(l,r) = {
  // matrix product; assumes a dot-product helper `mscalmul` over two vectors.
  // columns of l must match rows of r; the result has my(l) rows, mx(r) cols
  assert(mx(l) == my(r))
  let rows = ()
  let i = 0
  while i < my(l) {
    let cur-row = ()
    let j = 0
    while j < mx(r) {
      cur-row.push(mscalmul(_mrows(l).at(i), _mcols(r).at(j)))
      j += 1
    }
    rows.push(cur-row)
    i += 1
  }
  return minit(rows)
}
https://github.com/gongke6642/tuling | https://raw.githubusercontent.com/gongke6642/tuling/main/Math/vec.typ | typst | #set text(
size:10pt,
)
#set page(
paper:"a5",
margin:(x:1.8cm,y:1.5cm),
)
#set par(
justify: true,
leading: 0.52em,
)
= Column vectors

Column vectors.

The contents of vector elements can be aligned with the & symbol.

= Example

#image("63.png")

= Parameters

#image("64.png")

= Delimiter

The delimiter to use.

#image("65.png")

= Gap

The spacing between elements.

Default: 0.5em

= Children

The elements of the vector. |
|
https://github.com/qujihan/toydb-book | https://raw.githubusercontent.com/qujihan/toydb-book/main/src/chapter4.typ | typst | #import "../typst-book-template/book.typ": *
#let path-prefix = figure-root-path + "src/pics/"
= The SQL Engine
#code(
"tree src/sql",
"SQL引擎的代码结构",
```zsh
src/sql
├── engine
│ ├── engine.rs # 定义了SQL引擎的接口.
│ ├── local.rs # 本地存储的SQL引擎.
│ ├── mod.rs
│ ├── raft.rs # 基于Raft的分布式SQL引擎.
│ └── session.rs # 执行SQL语句, 并处理事务控制.
├── execution
│ ├── aggregate.rs # SQL的聚合操作, 如GROUP BY, COUNT等.
│ ├── execute.rs # 执行计划的执行器.
│ ├── join.rs # SQL的连接操作, 如JOIN, LEFT JOIN等.
│ ├── mod.rs
│ ├── source.rs # 负责提供数据源, 如表扫描, 主键扫描, 索引扫描等.
│ ├── transform.rs # SQL的转换操作, 如投影, 过滤, 限制, 排序等.
│ └── write.rs # SQL的写操作, 如INSERT, DELETE, UPDATE等.
├── mod.rs
├── parser
│ ├── ast.rs # 定义了SQL的抽象语法树(ast)的结构.
│ ├── lexer.rs # SQL的词法分析器, 将SQL语句转换为Token.
│ ├── mod.rs
│ └── parser.rs # SQL的语法分析器, 将Token转换为AST.
├── planner
│ ├── mod.rs
│ ├── optimizer.rs # 执行计划的优化器.
│ ├── plan.rs # 执行计划的结构与操作.
│ └── planner.rs # SQL解析以及执行计划的生成.
├── testscripts
│ └── ...
└── types
├── expression.rs # 定义了SQL的表达式.
├── mod.rs
├── schema.rs # 定义了SQL的表结构以及列结构.
└── value.rs # 定义了SQL的基本数据类型以及数据类型枚举.
```,
)
Before reading all the code, let's get a picture of the whole SQL engine through the interfaces each module exposes. Apart from the first section on the basic data types, the interfaces are ordered by the lifecycle of a SQL statement.
#code("src/sql/types/mod.rs", "Interfaces exposed by types")[
```rust
// src/sql
// ├── ...
// └── types
//     ├── expression.rs # Defines SQL expressions.
//     ├── mod.rs
//     ├── schema.rs # Defines SQL table and column structures.
//     └── value.rs # Defines SQL primitive data types and the data-type enum.
pub use expression::Expression;
pub use schema::{Column, Table};
pub use value::{DataType, Label, Row, Rows, Value};
```
]
What's exposed here is fairly simple: SQL's basic data types, the table and column structures, expressions, and the Label type used in multi-table queries.
#code("src/sql/engine/mod.rs", "Interfaces exposed by engine")[
```rust
// src/sql
// ├── ...
// └── engine
//     ├── engine.rs # Defines the SQL engine interfaces.
//     ├── local.rs # SQL engine backed by local storage.
//     ├── mod.rs
//     ├── raft.rs # Distributed SQL engine based on Raft.
//     └── session.rs # Executes SQL statements and handles transaction control.
pub use engine::{Catalog, Engine, Transaction};
pub use local::{Key, Local};
pub use raft::{Raft, Status, Write};
pub use session::{Session, StatementResult};
```
]
The two engines defined in `local.rs` and `raft.rs` handle local and distributed transactions respectively.
In the `engine` module, `Session` talks to a concrete engine through the `Engine` interface. `Session` has an `execute` method for running SQL statements, and its `StatementResult` type represents the result of executing one.
#code("src/sql/parser/mod.rs", "Interfaces exposed by parser")[
```rust
// src/sql
// ├── ...
// └── parser
//     ├── ast.rs # Defines the structure of the SQL abstract syntax tree (AST).
//     ├── lexer.rs # The SQL lexer, turning SQL text into tokens.
//     ├── mod.rs
//     └── parser.rs # The SQL parser, turning tokens into an AST.
pub use lexer::{is_ident, Keyword, Lexer, Token};
pub use parser::Parser;
```
]
Inside `execute`, the `Parser` is invoked to parse the SQL statement. The flow is roughly: the `Lexer` turns the SQL text into tokens, and the `Parser` turns the tokens into an AST.
The AST is the form of the query that the `Planner` below consumes to build an execution plan.
#code("src/sql/planner/mod.rs", "Interfaces exposed by planner")[
```rust
// src/sql
// ├── ...
// └── planner
//     ├── mod.rs
//     ├── optimizer.rs # Optimizer for execution plans.
//     ├── plan.rs # Structure of and operations on execution plans.
//     └── planner.rs # SQL resolution and execution-plan generation.
pub use plan::{Aggregate, Direction, Node, Plan};
pub use planner::{Planner, Scope};
#[cfg(test)]
pub use optimizer::OPTIMIZERS;
```
]
`execute` takes the AST obtained in the previous step and calls `Plan::build()` to generate the execution plan. The generated plan is then optimized by `Plan`'s `optimize()` method, which applies the optimization passes in `optimizer.rs`.
#code("src/sql/execution/mod.rs", "Interfaces exposed by execution")[
```rust
// src/sql
// ├── ...
// └── execution
//     ├── aggregate.rs # SQL aggregation, e.g. GROUP BY and COUNT.
//     ├── execute.rs # Executor for execution plans.
//     ├── join.rs # SQL joins, e.g. JOIN and LEFT JOIN.
//     ├── mod.rs
//     ├── source.rs # Provides data sources: table scans, primary-key scans, index scans, etc.
//     ├── transform.rs # SQL transformations: projection, filtering, limits, sorting, etc.
//     └── write.rs # SQL writes: INSERT, DELETE, UPDATE, etc.
pub use execute::{execute_plan, ExecutionResult};
```
]
Finally the execution plan is executed (via `execute_plan`), and the result is returned as an `ExecutionResult`.
Of course, this is only a rough sketch of the functionality; the actual call chain is somewhat more complex. The full picture is laid out in more detail in @sql_summary.
#include "chapter4/type.typ"
#include "chapter4/engine.typ"
#include "chapter4/parse.typ"
#include "chapter4/planner.typ"
#include "chapter4/execution.typ"
#include "chapter4/summary.typ"
|
|
https://github.com/usertam/curriculum-vitae | https://raw.githubusercontent.com/usertam/curriculum-vitae/resume/main.typ | typst | Other | #import "template.typ": *
#show: project.with(
title: "Résumé",
author: (
name: "<NAME>",
email: "<EMAIL>",
email-alt: "<EMAIL>",
bio: "Final Year in Computer Engineering.",
),
links: (
("icons/linkedin.svg", "https://linkedin.com/in/usertam"),
("icons/github.svg", "https://github.com/usertam"),
),
)
#experience[Projects]
#gh_item(
"LLVM Toolchain",
"Bleeding-edge LLVM toolchains for cross-compilation, optimized with PGO and LTO.",
"June 2024",
"usertam/toolchain"
)[
- Leveraging GitHub Actions along with automated scripts, we perform weekly builds against the LLVM #link("https://github.com/llvm/llvm-project", mono[master]). The build is then optimized, stripped and patched for portability across different environments.
]
#gh_item(
"Context–minimals",
"Typesetting system made reproducible.",
"August 2022",
"usertam/context-minimals"
)[
- After analyzing the dependencies of #ConTeXt LMTX (like #LaTeX, derived from #TeX), we rewrite the installation declaratively in #mono[nix]. This results in a far more efficient installation that is both reproducible and portable.
]
#gh_item(
"Android Kernel Development",
["What if you can run #mono[dockerd] on your #mono[aarch64] phone natively?"],
"September 2021",
"usertam/dumpling-lineage-kernel"
)[
- With open-source kernel trees released by Qualcomm, we cherry-pick upstream changes and custom patches to rebuild the Android kernel, enabling support for custom kernel features (e.g. namespaces for containers).
]
#gh_item(
"Open-Source contributions",
[Numerous contributions I authored over the years with #mono[git].],
smallcaps("Since 2020"),
"pulls?q=author%3Ausertam",
url_desc: "Pull requests"
)[
- Contributions include but not limited to #mono(link("https://gitlab.com/lilypond/", "GNU Lilypond")), #github("NixOS/nixpkgs"), #github("nix-community/nix-index"), #github("nix-community/nixos-generators"), #github("LnL7/nix-darwin"), #github("wolfcw/libfaketime"), #github("kovidgoyal/kitty").
]
#experience[Experience]
#item(
"Department of Computer Science and Engineering, HKUST",
"Undergraduate Representative",
"September 2023 – Current",
"Office, Room 3528"
)[
- Keynote Speaker of university credit-bearing seminars: #link("https://www.youtube.com/watch?v=53TWNe3_z38", mono[Developing with GitHub]) and #link("https://csess.su.hkust.edu.hk/activity/149", mono[The Unix Philosophy]).
- Co-host of the Departmental Briefing for Direct Entry Students in 2023, and again in 2024.
- Representative Speaker of CSE Program Orientation Talk in 2023.
- Advocating for students' interests and serving as a liaison to university administration.
]
#item(
[System and Network Administration Office, \
Department of Computer Science and Engineering, HKUST],
"Student Intern",
"July 2023 – Current",
"Office, Room 4202"
)[
- Maintained all departmental computer science labs for undergraduates and postgraduates.
- Provided technical support including hand-ons repairs for lab equipment and server infrastructure.
- Served as a liaison for confidential, professional and inter-departmental communications.
]
#item(
"The Computer Science and Engineering Students' Society, HKUSTSU",
"Honorary Advisor; formerly Executive Committee",
"April 2023 – Current",
"Student Centre, Mailbox #3"
)[
- Oversee executive operations, provide strategic guidance and support to the student society.
- Contribute to and organize public documentation and confidential records.
- Maintain build systems and proprietary technologies for operational excellency and security.
- Principal Coordinator of CSE Festival 2023; Master of Ceremonies of CSE Farewell Dinner 2023.
]
#experience[Education]
#item(
"The Hong Kong University of Science and Technology",
"Bachelor of Engineering in Computer Engineering",
"July 2026",
"Clear Water Bay, Kowloon"
)[
#course("Fall 2024", ("Cybersecurity","Artificial Intelligence Ethics"))
#course("Spring 2024", ("Advanced Deep Learning Architectures","Modern Compiler Construction",))
#course("Fall 2023", ("Design and Analysis of Algorithms",))
#course("Spring 2023", ("Introduction to Embedded Systems","Operating Systems"))
#course("Fall 2022", ("Honors Object-Oriented Programming and Data Structures",))
]
#item(
"The University of Wollongong College Hong Kong",
"Distinction; Associate of Science in Information Systems Development",
"July 2022",
"Tai Wai, New Territories"
)[]
|
https://github.com/janlauber/bachelor-thesis | https://raw.githubusercontent.com/janlauber/bachelor-thesis/main/chapters/introduction.typ | typst | Creative Commons Zero v1.0 Universal | = Introduction
#set quote(block: true)
#quote(attribution: [*<NAME>*, Tweet from Nov 27, 2017])[
Kubernetes is a platform for building platforms. It's a better place to start; not the endgame.
]
Deploying and managing open-source software (OSS) has become increasingly complex, especially with the rise of Kubernetes, a powerful but often challenging tool for container orchestration. This thesis introduces the One-Click Deployment system, designed to make Kubernetes more accessible and straightforward for users of all technical backgrounds. The system has been developed through constant iteration and is available in a first fully functional version; at the time of writing it has even seen commercial production use.
== Background and Context
Open-source software is a cornerstone of modern technology, driving innovation and providing essential tools for building robust systems. However, the complexity of deploying and managing these tools can be a significant barrier, particularly with Kubernetes. While Kubernetes offers powerful features for container management, its steep learning curve can be intimidating.
=== Concrete Example
Imagine a developer who has built a data visualization tool using the open-source framework Streamlit #footnote[https://streamlit.io/] or a complex automation workflow using Node-RED #footnote[https://nodered.org/]. Initially, the developer considers deploying the application using Docker Compose, which involves creating and managing a docker-compose.yml file. However, this approach requires configuring a virtual machine (VM) and deciding on a cloud host or on-premise setup. The developer must also implement an SSL reverse proxy and consider vertical scaling and automatic updates every time a new Docker image is published.
Faced with these challenges, the developer might turn to Kubernetes for its built-in solutions to these problems. Kubernetes provides tools for container orchestration, scaling, and managing configurations. However, the user soon realizes the difficulty of managing the deployment through numerous YAML files required for Kubernetes resources, such as deployments, services, ingress controllers, and more. This is where the One-Click Deployment system comes in, streamlining the deployment process and abstracting the complexity involved. \ \
*Challenges*
- *Complexity*: Kubernetes requires a deep understanding of its concepts and resources, making it challenging for beginners.
- *Configuration*: Managing YAML files for Kubernetes resources can be error-prone and time-consuming.
- *Scalability*: Ensuring that the deployment can scale horizontally and vertically requires additional configurations.
- *Security*: Implementing secure deployments with SSL certificates can be complex.
- *Maintenance*: Keeping the deployment up-to-date with the latest versions of the software and Kubernetes resources can be a manual process.
The One-Click Deployment project aims to democratize Kubernetes by simplifying its deployment and management processes, making these advanced capabilities available to everyone, from beginners to experienced developers. The system centralizes configuration and follows the principle of *"convention over configuration"* #footnote[https://en.wikipedia.org/wiki/Convention_over_configuration], allowing users to deploy and manage applications with minimal effort.
#pagebreak()
== Problem Statement
The deployment and management of OSS using Kubernetes involve numerous challenges. These include setting up environments, managing dependencies, and ensuring security and scalability. These tasks often require specialized knowledge, which can limit the use of Kubernetes to larger organizations with dedicated resources. Smaller teams and individual developers may find these complexities overwhelming, hindering their ability to leverage the full potential of Kubernetes. \ \
*Concretely, the challenges include:*
- *Complex Deployment Process*: The manual configuration of Kubernetes resources can be complex and error-prone.
- *Limited Accessibility*: Kubernetes is often perceived as difficult to learn and use, limiting its adoption.
- *Scalability Management*: Ensuring that deployments can scale efficiently requires additional configurations.
- *Security Maintenance*: Implementing secure deployments with SSL certificates and encryption can be challenging.
- *Operational Complexity*: Keeping deployments up-to-date with the latest software versions and Kubernetes resources can be time-consuming.
*Requirement by the End-User:*
- *Simplicity*: Users need an easy-to-use interface that abstracts away the complexities of Kubernetes.
- *Efficiency*: Deployments should be quick and efficient, allowing users to focus on building applications.
- *Reliability*: Deployments should be reliable, scalable, and secure without requiring manual intervention.
- *Customization*: Users should have the flexibility to customize deployment configurations based on their requirements.
- *Documentation*: Detailed documentation and support should be available to guide users through the deployment process.
The goal of the One-Click Deployment system is to address these challenges by providing a solution that centralizes configuration and follows the principle of "convention over configuration." This approach reduces the need for users to understand the complex details of Kubernetes and allows them to deploy and manage applications with minimal effort. By encapsulating Kubernetes' strengths within a user-friendly interface, the One-Click Deployment system simplifies deployment, scaling, and management processes, making these advanced capabilities accessible to a broader audience.
== Objectives of the Study
The main objectives of this study are to design, develop, and evaluate the One-Click Deployment system, focusing on:
- Simplifying the Kubernetes deployment process to fewer steps and less manual configuration.
- Enabling easy management and scaling of OSS deployments within a Kubernetes ecosystem.
- Assessing the impact of the One-Click Deployment system on the adoption and utilization of Kubernetes.
- Collecting feedback from users to refine and enhance the system's features continuously.
- Identifying opportunities for future research and development in Kubernetes deployment and management.
|
https://github.com/Enter-tainer/typstyle | https://raw.githubusercontent.com/Enter-tainer/typstyle/master/tests/assets/unit/comment/comment-in-closure.typ | typst | Apache License 2.0 | #let conf(
title: none, //comments
authors: (),
abstract: [],
lang: "zh", // language
doctype: "book", //comments
doc // all comments will be deleted by typstyle
)={doc} |
https://github.com/trondhauklien/typst-resume | https://raw.githubusercontent.com/trondhauklien/typst-resume/main/main.typ | typst | #let userData = json("data.json")
#set text(font: "Cambria", size: 12pt)
#show heading: set text(font: "Georgia")
#box(height: 5cm, columns(2)[
#align(bottom)[
= #userData.name
#userData.contact.email | #userData.contact.phone
]
#colbreak()
#align(end)[
#block(
stroke: black,
radius: 50%,
height: 100%,
clip: true,
image("avatar.png"),
)
]
])
== Why consider me?
#userData.motivation
== Education
* #userData.education.degree | #userData.education.university | #userData.education.graduationYear *
- Developed a web application using HTML, CSS, and JavaScript for a university
project.
== Experience
#for e in userData.experience [
* #e.position | #e.company | #e.startYear - #e.endYear *
#for r in e.responsibilities [
- #r
]
]
== Skills
#grid(
columns: (auto, auto, auto, auto),
gutter: 10pt,
..userData.skills.map(s => [
#box(stroke: black, inset: 5pt)[
#s
]
])
) |
|
https://github.com/swaits/typst-collection | https://raw.githubusercontent.com/swaits/typst-collection/main/glossy/0.1.0/src/gloss.typ | typst | MIT License | #import "./themes.typ": *
#import "./utils.typ": *
#let __gloss_entries = state("__gloss_entries", (:))
#let __gloss_used = state("__gloss_used", (:))
#let __gloss_label_prefix = "__gloss:"
// given an array of dictionaries, make sure each has all the keys we'll
// reference, using default values if needed
#let __normalize_entries(entry-list) = {
// TODO: panic if key or short missing, all others optional
let new-list = ()
for entry in entry-list {
let long = entry.at("long", default: none)
let longplural = entry.at("longplural", default: none)
if long != none and longplural == none {
longplural = __pluralize(long)
}
new-list.push((
key: entry.key,
short: entry.short,
plural: entry.at("plural", default: __pluralize(entry.short)),
long: long,
longplural: longplural,
description: entry.at("description", default: none),
group: entry.at("group", default: ""),
))
}
return new-list
}
// update our state with a glossary entry
#let __add_entry(entry) = {
// make sure our final glossary state does not already have this key
if __gloss_entries.final().at(entry.key, default: false) == true {
panic("Glossary error. Duplicate key: " + entry.key)
}
// add it to the state
__gloss_entries.update(st => {
st.insert(entry.key, entry)
return st
})
}
// fetch a glossary entry from our state, or panic
#let __get_entry(key) = {
let entries = __gloss_entries.final()
if key not in entries {
panic("Glossary error. Missing key: " + key)
}
entries.at(key)
}
// returns true if an entry with `key` is in our glossary state
#let __has_entry(key) = {
let entries = __gloss_entries.final()
key in entries
}
// helper to prefix a key for label/reference scoping for the dictionary entry
#let __dict_label_str(key) = {
__gloss_label_prefix + key
}
// helper to get a label with prefix for the dictionary entry
#let __dict_label(key) = {
label(__dict_label_str(key))
}
// helper to prefix a key for label/reference scoping for the term use in a doc
#let __term_label_str(key, index) = {
__gloss_label_prefix + key + "." + str(index)
}
// helper to get a label with prefix for the term use in a doc
#let __term_label(key, index) = {
label(__term_label_str(key, index))
}
// update the usage count for a given key
#let __save_term_usage(key, count) = {
__gloss_used.update(st => {
st.insert(key, count)
return st;
})
}
// the main function which emits terms used in a document
//
// handles all the modifiers:
//
// - cap: capitalize the term
// - pl: pluralize the term
// - both: emit "Long form (short form)"
// - short: emit just the short form
// - long: emit just the long form
#let __gls(
key,
modifiers: array
) = {
// Get the term
let entry = __get_entry(key)
let entry_label = __dict_label_str(key)
// Lookup and increment the count
let entry_counter = counter(entry_label)
entry_counter.step()
// See if this is the first use
let key_index = entry_counter.get().first()
let first = key_index == 0
// Count the entry as used so we can link back from glossary
__save_term_usage(entry.key, key_index + 1)
// Helper: Apply pluralization if needed
let pluralize_term = (singular, plural) => {
if "pl" in modifiers and plural != none { plural } else { singular }
}
// Helper: Capitalize term if "cap" modifier present
let capitalize_term = (term) => {
if "cap" in modifiers { upper(term.first()) + term.slice(1) } else { term }
}
// Helper: Select and format the displayed term (long, short, or both)
let select_term = (is_long_mode, use_both) => {
// Derive pluralized and capitalized versions of the long and short forms
let long_form = capitalize_term(pluralize_term(entry.long, entry.longplural))
let short_form = capitalize_term(pluralize_term(entry.short, entry.plural))
// If "both" modifier is present, show both the long form and short form
if use_both {
// If the long form exists, show "Long Form (Short Form)"
if long_form != none {
[#long_form (#short_form)]
}
// Fallback: If no long form exists, just show the short form
else {
[#short_form]
}
}
// If in "long mode", show the long form, fallback to short if long form is missing
else if is_long_mode {
if long_form != none {
[#long_form]
}
// Fallback: if no long form, show short
else {
[#short_form]
}
}
// Default case: Just show the short form
else {
[#short_form]
}
}
// Determine which form to display: "short", "long", or "both"
let is_both = "both" in modifiers
let is_long = "long" in modifiers and not is_both
let is_short = "short" in modifiers and not is_both and not is_long
context {
// Final display logic
let display = if is_both or is_long or is_short {
// User requested specific behavior via modifiers
select_term(is_long, is_both)
} else {
// Default behavior: show "both" on first use, else "short"
select_term(false, first)
}
// TODO: figure out how to link to __dict_label(key) if the glossary exists
// NOTE: this is a low priority, I don't think it's that important or useful.
// Emit with labels for this instance of the term usage
[#display#metadata(display)#__term_label(key, key_index)]
}
}
// Create all the backlinks to term uses in a doc, in the form of page numbers
// linked. Used when emitting a glossary, so its page numbers can link back to
// the uses.
#let __create_backlinks(key, count) = {
// create array of labels to link to
let labels = for i in range(count) {
(__term_label(key, i),)
}
// create arrays of locations, and page number display text
let pages = labels
.map(l => { locate(l) })
.map((loc) => { numbering(__default(loc.page-numbering(), "1"), loc.page()) })
// convert labels to links, filtering out duplicated pages
let seen = ()
let links = for i in range(labels.len()) {
let l = labels.at(i)
let p = pages.at(i)
if seen.contains(p) {
(none,)
} else {
seen.push(p)
(link(l, p),)
}
}.filter(l => l != none)
// connect links with commas and return
links.join(", ")
}
// Main wrapper (usually used in a `#show: init-glossary`) which loads the
// entries passed in into our state. Furthermore, hooks into references so that
// we can intercept term usage in a doc and label them appropriately.
#let init-glossary(entries, body) = context {
for entry in __normalize_entries(entries) {
__add_entry(entry)
}
// convert refs we recognize into links with labels
show ref: r => {
let (key, ..modifiers) = str(r.target).split(":")
if __has_entry(key) {
__gls(key, modifiers: modifiers)
} else {
r
}
}
body
}
// Used to print a glossary. Can customize title, theme, and/or specify groups.
//
// A theme is a dictionary with three attributes:
//
// #let my-theme = (
// section: (title, body) => {
// // how to display the glossary section and its body which will contain
// // groups, each with their entries
// },
//
// group: (name, body) => {
// // how to display a group name and its body (which will contain the
// // entries in that group)
// },
//
// entry: (entry, i, n) => {
// // how to display a single entry, along with its index and total count
// }
// }
#let glossary(title: "Glossary", theme: theme-2col, groups: ()) = context {
// our output is a map of group name to array of entries (each entry is a map)
let output = (:)
// pull in entire dictionary
let all_entries = __gloss_entries.final()
// filter down to just what we used
let all_used = __gloss_used.final()
// TODO: what about entries with no group? Or group == "" or == none
// get all groups
let all_groups = all_entries.values().map(e => e.at("group")).dedup().sorted()
let groups = if groups.len() == 0 { all_groups } else { groups }
// make sure requested groups are legit
for g in groups {
if g not in all_groups {
panic("Requested group not found: " + g)
}
}
// iterate one group at a time
for g in groups {
// collect used entries in this group
let cur = ()
for (key, count) in all_used {
let e = all_entries.at(key)
if e.at("group") == g {
// this term is both in this group and was used, add to our output
let short = e.at("short")
let long = e.at("long")
let description = e.at("description")
let label = [#metadata(key)#__dict_label(key)]
let pages = __create_backlinks(key, count)
cur.push((short: short, long: long, description: description, label: label, pages: pages))
}
}
if cur.len() > 0 {
if g == "" { g = none }
output.insert(g, cur.sorted(key: e => e.short))
}
}
// TODO: rendering for a) just one group, or b) the default group (ie terms
// with no group)?? -- in this case should we just not use the theme.group()
// function?
// render it using our theme
(theme.section)(
title,
for (group, entries) in output {
(theme.group)(
group,
for (i,e) in entries.enumerate() {
(theme.entry)(e, i, entries.len())
},
)
}
)
}
|
https://github.com/Fr4nk1inCs/typreset | https://raw.githubusercontent.com/Fr4nk1inCs/typreset/master/src/utils/question.typ | typst | MIT License | #let question_counter = counter("question")
// simple question frame
// - heading-counter(bool): whether to show heading counter
// - number(auto | str | int): question number, if auto, then it will be auto-incremented
// - desc(content): question description
#let simple_question(
heading-counter: false,
number: auto,
desc
) = locate(loc => {
set text(weight: "bold")
if number == auto {
question_counter.step()
if heading-counter {
str(counter(heading).at(loc).first()) + "." + question_counter.display("1.")
} else {
question_counter.display("1.")
}
} else {
if type(number) == int {
str(number) + "."
} else {
number
}
}
desc
v(-0.9em)
line(length: 100%)
v(-0.6em)
})
// complex question frame
// - heading-counter(bool): whether to show heading counter
// - number(auto | str | int): question number, if `auto`, then it will be auto-incremented. (`auto` is default)
// - desc(content): question description
#let complex_question(
heading-counter: false,
number: auto,
desc
) = locate(loc => {
let number = if number == auto {
question_counter.step()
if heading-counter {
str(counter(heading).at(loc).first()) + "." + question_counter.display("1.")
} else {
question_counter.display("1.")
}
} else {
if type(number) == int {
str(number) + "."
} else {
number
}
}
rect(width: 100%, radius: 5pt)[
#strong(number)
#desc
]
})
|
https://github.com/SeniorMars/tree-sitter-typst | https://raw.githubusercontent.com/SeniorMars/tree-sitter-typst/main/examples/math/style.typ | typst | MIT License | // Test text styling in math.
---
// Test italic defaults.
$a, A, delta, ϵ, diff, Delta, ϴ$
---
// Test forcing a specific style.
$A, italic(A), upright(A), bold(A), bold(upright(A)), \
serif(A), sans(A), cal(A), frak(A), mono(A), bb(A), \
italic(diff), upright(diff), \
bb("hello") + bold(cal("world")), \
mono("SQRT")(x) wreath mono(123 + 456)$
---
// Test a few style exceptions.
$h, bb(N), frak(R), Theta, italic(Theta), sans(Theta), sans(italic(Theta))$
---
// Test font fallback.
$ よ and 🏳️🌈 $
---
// Test text properties.
$text(#red, "time"^2) + sqrt("place")$
---
// Test different font.
#show math.equation: set text(font: "Fira Math")
$ v := vec(1 + 2, 2 - 4, sqrt(3), arrow(x)) + 1 $
|
https://github.com/PLASTA0728/CV-for-High-School-Student | https://raw.githubusercontent.com/PLASTA0728/CV-for-High-School-Student/main/README.md | markdown | MIT License | # My own CV template in Typst based on [typst-chi-cv-template](https://github.com/matchy233/typst-chi-cv-template)
This is my first time modifying a template in Typst, and it helped me get familiar with `typst` functionality. Errors or shortcomings are inevitable, so PRs and questions are of course welcome :)
## New content and existing problems
Since I am a high school student, I created some new functions ~~"function" still feels like an unfamiliar name because I always say "environment" when referring to LaTeX~~ so that a few parts of a (high-school-student) CV fit better.
- A `#grade` function for education experiences with _italic_ GPA.
- An `#honor` function for single honor. `tl` for the name of honor, `tc` for the institution, `tr` for time, and `content` for an explanation of some unfamiliar awards.
- A `#multihonor` function used together with `#honorline`. Add `#honorline` functions in the `#multihonor()` and connect each other using `+`. As original `#honor` function, `[]` is for explanation.
- A `#multicventry` function used together with `#positionline`.
PS: I create an svg file of AoPS small logo `AoPS_small_logo.svg` as an icon.
## Usage
### Using Typst web app
Upload `chicv.typ`, `fontawesome.typ`, `resume.typ`, `AoPS_small_logo.svg` (if you also want to show your AoPS account at the beginning of the CV) and `fonts/FontAwesome6.otf` to [Typst](https://typst.app/), and then you can edit the CV.
## Sample Output

[PDF file](CV_sample.pdf)
|
https://github.com/RodolpheThienard/typst-template | https://raw.githubusercontent.com/RodolpheThienard/typst-template/main/reports/1/template1.typ | typst | MIT License | #let date = datetime(
year: 2024,
month: 01,
day: 29,
)
#let template(
title: none,
subtitle: none,
subsubtitle: none,
authors: (),
supervisors: (),
abstract: [],
doc,
) = {
set page(numbering: "1/1")
set text(font: "Linux Libertine", lang: "en", size: 11pt)
set heading(numbering: "1.1", outlined: true)
set align(center)
grid(
columns: (1fr, 1fr, 1fr),
align(left+horizon)[
#image("images/logo.png", width: 100%)
],
align(center+horizon)[],
align(right + horizon)[
#image("images/logo.png", width: 100%)
]
)
align(
center,
text(24pt)[
#subtitle
]
)
align(
center,
text(18pt)[#subsubtitle]
)
line(length: 100%)
align(center, text(32pt, title))
line(length: 100%)
v(1em)
grid(columns: (1fr, 1fr),
align(center, box(align(start, text(16pt)[
*Author :* \
#for author in (authors) {
[#author.name\ ]
}
]))),
align(center, box(align(end, text(16pt)[
*Supervisors :* \
#for supervisor in (supervisors) {
[#supervisor.name\ ]
}
])))
)
if(abstract != none){
v(1em)
align(center, box(align(start, text(16pt)[
*Abstract:* \
])))
[#abstract]
}
align(center + bottom, text(16pt)[#date.display("[month repr:long] [day], [year]")])
set par(justify: true)
set align(left)
columns(1, doc)
}
|
https://github.com/xrarch/books | https://raw.githubusercontent.com/xrarch/books/main/xrcomputerbook/main.typ | typst | #import "@preview/hydra:0.2.0": hydra
#set page(header: hydra(paper: "iso-b5"), paper: "iso-b5")
#set document(title: "XR/computer Platform Handbook")
#set text(font: "IBM Plex Mono", size: 9pt)
#show math.equation: set text(font: "Fira Math")
#show raw: set text(font: "Cascadia Code", size: 9pt)
#set heading(numbering: "1.")
#set par(justify: true)
#include "titlepage.typ"
#pagebreak(weak: true)
#set page(numbering: "i")
#counter(page).update(1)
#include "toc.typ"
#pagebreak(weak: true)
#set page(numbering: "1", number-align: right)
#counter(page).update(1)
#include "chapintro.typ"
#pagebreak(weak: true)
#include "chapinter.typ"
#pagebreak(weak: true)
#include "chapcitron.typ"
#pagebreak(weak: true)
#include "chapaudio.typ"
#pagebreak(weak: true)
#include "chapether.typ"
#pagebreak(weak: true)
#include "chapamtsu.typ"
#pagebreak(weak: true)
#include "chapkinnow.typ"
#pagebreak(weak: true) |
|
https://github.com/RaphGL/ElectronicsFromBasics | https://raw.githubusercontent.com/RaphGL/ElectronicsFromBasics/main/DC/chap4/5_hand_calculator_use.typ | typst | Other | #import "../../core/core.typ"
=== Hand calculator use
To enter numbers in scientific notation into a hand calculator, there is
usually a button marked \"E\" or \"EE\" used to enter the correct power
of ten. For example, to enter the mass of a proton in grams
($1.67 times 10^(-24) "grams"$) into a hand calculator, I would enter the following
keystrokes:
```
[1] [.] [6] [7] [EE] [2] [4] [+/-]
```
The \[+/-\] keystroke changes the sign of the power (24) into a -24.
Some calculators allow the use of the subtraction key \[-\] to do this,
but I prefer the \"change sign\" \[+/-\] key because it's more consistent
with the use of that key in other contexts.
If I wanted to enter a negative number in scientific notation into a
hand calculator, I would have to be careful how I used the \[+/-\] key,
lest I change the sign of the power and not the significant digit value.
Pay attention to this example:
Number to be entered: $-3.221 times 10^(-15)$:
```
[3] [.] [2] [2] [1] [+/-] [EE] [1] [5] [+/-]
```
The first \[+/-\] keystroke changes the entry from 3.221 to -3.221; the
second \[+/-\] keystroke changes the power from 15 to -15.
Displaying metric and scientific notation on a hand calculator is a
different matter. It involves changing the display option from the
normal \"fixed\" decimal point mode to the \"scientific\" or
\"engineering\" mode. Your calculator manual will tell you how to set
each display mode.
These display modes tell the calculator how to represent any number on
the numerical readout. The actual value of the number is not affected in
any way by the choice of display modes -- only how the number appears to
the calculator user. Likewise, the procedure for entering numbers into
the calculator does not change with different display modes either.
Powers of ten are usually represented by a pair of digits in the
upper-right hand corner of the display, and are visible only in the
\"scientific\" and \"engineering\" modes.
The difference between \"scientific\" and \"engineering\" display modes
is the difference between scientific and metric notation. In
\"scientific\" mode, the power-of-ten display is set so that the main
number on the display is always a value between 1 and 10 (or -1 and -10
for negative numbers). In \"engineering\" mode, the powers-of-ten are
set to display in multiples of 3, to represent the major metric
prefixes. All the user has to do is memorize a few prefix/power
combinations, and his or her calculator will be \"speaking\" metric!
#table(
columns: (auto, auto),
align: center,
table.header(
[*Power*], [*Metric Prefix*]
),
[12], [Tera (T)],
[9], [Giga (G)],
[6], [Mega (M)],
[3], [Kilo (k)],
[0], [UNITS (plain)],
[-3], [milli (m)],
[-6], [micro (u)],
[-9], [nano (n)],
[-12], [pico (p)],
)
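
The "engineering" display boils down to forcing the power of ten down to the nearest multiple of 3 and reading off the prefix from the table above. A purely illustrative sketch of that snapping rule (not calculator firmware), written as Typst code:

```typst
// Illustrative only: how an "engineering" mode picks its power of ten.
#let engineering(v) = {
  // exponent of v in scientific notation, e.g. 47000 -> 4
  let exp = calc.floor(calc.log(calc.abs(v)))
  // snap down to a multiple of 3, e.g. 4 -> 3
  let exp3 = 3 * calc.floor(exp / 3)
  (v / calc.pow(10, exp3), exp3)
}
// engineering(47000) gives mantissa 47 and power 3 -- read "47 kilo" (47 k).
```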
#core.review[
- Use the \[EE\] key to enter powers of ten.
- Use \"scientific\" or \"engineering\" to display powers of ten, in
scientific or metric notation, respectively.
]
|
https://github.com/Enter-tainer/typstyle | https://raw.githubusercontent.com/Enter-tainer/typstyle/master/tests/assets/unit/comment/comment-in-if.typ | typst | Apache License 2.0 | #{
if /*(condition)*/ true {
}
if true /*(condition)*/ {
}
if
// (condition)
false {
}
if true {
} // (condition)
else {
}
if true {
}
else /*(condition)*/ {
}
}
|
https://github.com/touying-typ/touying | https://raw.githubusercontent.com/touying-typ/touying/main/themes/stargazer.typ | typst | MIT License | // Stargazer theme.
// Authors: Coekjan, QuadnucYard, OrangeX4
// Inspired by https://github.com/Coekjan/touying-buaa and https://github.com/QuadnucYard/touying-theme-seu
#import "../src/exports.typ": *
#let _typst-builtin-align = align
#let _tblock(self: none, title: none, it) = {
grid(
columns: 1,
row-gutter: 0pt,
block(
fill: self.colors.primary-dark,
width: 100%,
radius: (top: 6pt),
inset: (top: 0.4em, bottom: 0.3em, left: 0.5em, right: 0.5em),
text(fill: self.colors.neutral-lightest, weight: "bold", title),
),
rect(
fill: gradient.linear(self.colors.primary-dark, self.colors.primary.lighten(90%), angle: 90deg),
width: 100%,
height: 4pt,
),
block(
fill: self.colors.primary.lighten(90%),
width: 100%,
radius: (bottom: 6pt),
inset: (top: 0.4em, bottom: 0.5em, left: 0.5em, right: 0.5em),
it,
),
)
}
/// Theorem block for the presentation.
///
/// - `title` is the title of the theorem. Default is `none`.
///
/// - `it` is the content of the theorem.
#let tblock(title: none, it) = touying-fn-wrapper(_tblock.with(title: title, it))
/// Default slide function for the presentation.
///
/// - `title` is the title of the slide. Default is `auto`.
///
/// - `config` is the configuration of the slide. You can use `config-xxx` to set the configuration of the slide. For more several configurations, you can use `utils.merge-dicts` to merge them.
///
/// - `repeat` is the number of subslides. Default is `auto`,which means touying will automatically calculate the number of subslides.
///
/// The `repeat` argument is necessary when you use `#slide(repeat: 3, self => [ .. ])` style code to create a slide. The callback-style `uncover` and `only` cannot be detected by touying automatically.
///
/// - `setting` is the setting of the slide. You can use it to add some set/show rules for the slide.
///
/// - `composer` is the composer of the slide. You can use it to set the layout of the slide.
///
/// For example, `#slide(composer: (1fr, 2fr, 1fr))[A][B][C]` to split the slide into three parts. The first and the last parts will take 1/4 of the slide, and the second part will take 1/2 of the slide.
///
/// If you pass a non-function value like `(1fr, 2fr, 1fr)`, it will be assumed to be the first argument of the `components.side-by-side` function.
///
/// The `components.side-by-side` function is a simple wrapper of the `grid` function. It means you can use the `grid.cell(colspan: 2, ..)` to make the cell take 2 columns.
///
/// For example, `#slide(composer: 2)[A][B][#grid.cell(colspan: 2)[Footer]]` will make the `Footer` cell take 2 columns.
///
/// If you want to customize the composer, you can pass a function to the `composer` argument. The function should receive the contents of the slide and return the content of the slide, like `#slide(composer: grid.with(columns: 2))[A][B]`.
///
/// - `..bodies` is the contents of the slide. You can call the `slide` function with syntax like `#slide[A][B][C]` to create a slide.
#let slide(
title: auto,
header: auto,
footer: auto,
align: auto,
config: (:),
repeat: auto,
setting: body => body,
composer: auto,
..bodies,
) = touying-slide-wrapper(self => {
if align != auto {
self.store.align = align
}
// restore typst builtin align function
let align = _typst-builtin-align
if title != auto {
self.store.title = title
}
if header != auto {
self.store.header = header
}
if footer != auto {
self.store.footer = footer
}
let self = utils.merge-dicts(
self,
config-page(fill: self.colors.neutral-lightest),
)
let new-setting = body => {
show: align.with(self.store.align)
set text(fill: self.colors.neutral-darkest)
show: setting
body
}
touying-slide(self: self, config: config, repeat: repeat, setting: new-setting, composer: composer, ..bodies)
})
/// Title slide for the presentation. You should update the information in the `config-info` function. You can also pass the information directly to the `title-slide` function.
///
/// Example:
///
/// ```typst
/// #show: stargazer-theme.with(
/// config-info(
/// title: [Title],
/// logo: emoji.city,
/// ),
/// )
///
/// #title-slide(subtitle: [Subtitle])
/// ```
#let title-slide(..args) = touying-slide-wrapper(self => {
self.store.title = none
let info = self.info + args.named()
info.authors = {
let authors = if "authors" in info {
info.authors
} else {
info.author
}
if type(authors) == array {
authors
} else {
(authors,)
}
}
let body = {
show: align.with(center + horizon)
block(
fill: self.colors.primary,
inset: 1.5em,
radius: 0.5em,
breakable: false,
{
text(size: 1.2em, fill: self.colors.neutral-lightest, weight: "bold", info.title)
if info.subtitle != none {
parbreak()
text(size: 1.0em, fill: self.colors.neutral-lightest, weight: "bold", info.subtitle)
}
},
)
// authors
grid(
columns: (1fr,) * calc.min(info.authors.len(), 3),
column-gutter: 1em,
row-gutter: 1em,
..info.authors.map(author => text(fill: black, author)),
)
v(0.5em)
// institution
if info.institution != none {
parbreak()
text(size: 0.7em, info.institution)
}
// date
if info.date != none {
parbreak()
text(size: 1.0em, utils.display-info-date(self))
}
}
self = utils.merge-dicts(
self,
config-page(fill: self.colors.neutral-lightest),
)
touying-slide(self: self, body)
})
/// Outline slide for the presentation.
///
/// - `title` is the title of the outline. Default is `utils.i18n-outline-title`.
///
/// - `level` is the level of the outline. Default is `none`.
///
/// - `numbered` is whether the outline is numbered. Default is `true`.
#let outline-slide(
title: utils.i18n-outline-title,
numbered: true,
level: none,
..args,
) = touying-slide-wrapper(self => {
self.store.title = title
self = utils.merge-dicts(
self,
config-page(fill: self.colors.neutral-lightest),
)
touying-slide(
self: self,
align(
self.store.align,
components.adaptive-columns(
text(
fill: self.colors.primary,
weight: "bold",
components.custom-progressive-outline(
level: level,
alpha: self.store.alpha,
indent: (0em, 1em),
vspace: (.4em,),
numbered: (numbered,),
depth: 1,
..args.named(),
),
),
) + args.pos().sum(default: none),
),
)
})
/// New section slide for the presentation. You can update it by updating the `new-section-slide-fn` argument for `config-common` function.
///
/// Example: `config-common(new-section-slide-fn: new-section-slide.with(numbered: false))`
///
/// - `title` is the title of the section. The default is `utils.i18n-outline-title`.
///
/// - `level` is the level of the heading. The default is `1`.
///
/// - `numbered` is whether the heading is numbered. The default is `true`.
///
/// - `body` is the body of the section. It will be pass by touying automatically.
#let new-section-slide(
title: utils.i18n-outline-title,
level: 1,
numbered: true,
..args,
body,
) = outline-slide(title: title, level: level, numbered: numbered, ..args, body)
/// Focus on some content.
///
/// Example: `#focus-slide[Wake up!]`
///
/// - `align` is the alignment of the content. Default is `horizon + center`.
#let focus-slide(align: horizon + center, body) = touying-slide-wrapper(self => {
self = utils.merge-dicts(
self,
config-common(freeze-slide-counter: true),
config-page(
fill: self.colors.primary,
margin: 2em,
header: none,
footer: none,
),
)
set text(fill: self.colors.neutral-lightest, weight: "bold", size: 1.5em)
touying-slide(self: self, _typst-builtin-align(align, body))
})
/// End slide for the presentation.
///
/// - `title` is the title of the slide. Default is `none`.
///
/// - `body` is the content of the slide.
#let ending-slide(title: none, body) = touying-slide-wrapper(self => {
let content = {
set align(center + horizon)
if title != none {
block(
fill: self.colors.tertiary,
inset: (top: 0.7em, bottom: 0.7em, left: 3em, right: 3em),
radius: 0.5em,
text(size: 1.5em, fill: self.colors.neutral-lightest, title),
)
}
body
}
touying-slide(self: self, content)
})
/// Touying stargazer theme.
///
/// Example:
///
/// ```typst
/// #show: stargazer-theme.with(aspect-ratio: "16-9", config-colors(primary: blue))`
/// ```
///
/// Consider using:
///
/// ```typst
/// #set text(font: "Fira Sans", weight: "light", size: 20pt)`
/// #show math.equation: set text(font: "Fira Math")
/// #set strong(delta: 100)
/// #set par(justify: true)
/// ```
///
/// - `aspect-ratio` is the aspect ratio of the slides. Default is `16-9`.
///
/// - `align` is the alignment of the content. Default is `horizon`.
///
/// - `title` is the title in header of the slide. Default is `self => utils.display-current-heading(depth: self.slide-level)`.
///
/// - `header-right` is the right part of the header. Default is `self => self.info.logo`.
///
/// - `footer` is the footer of the slide. Default is `none`.
///
/// - `footer-right` is the right part of the footer. Default is `context utils.slide-counter.display() + " / " + utils.last-slide-number`.
///
/// - `progress-bar` is whether to show the progress bar in the footer. Default is `true`.
///
/// - `footer-columns` is the columns of the footer. Default is `(25%, 25%, 1fr, 5em)`.
///
/// - `footer-a` is the left part of the footer. Default is `self => self.info.author`.
///
/// - `footer-b` is the second left part of the footer. Default is `self => utils.display-info-date(self)`.
///
/// - `footer-c` is the second right part of the footer. Default is `self => if self.info.short-title == auto { self.info.title } else { self.info.short-title }`.
///
/// - `footer-d` is the right part of the footer. Default is `context utils.slide-counter.display() + " / " + utils.last-slide-number`.
///
/// ----------------------------------------
///
/// The default colors:
///
/// ```typ
/// config-colors(
/// primary: rgb("#005bac"),
/// primary-dark: rgb("#004078"),
/// secondary: rgb("#ffffff"),
/// tertiary: rgb("#005bac"),
/// neutral-lightest: rgb("#ffffff"),
/// neutral-darkest: rgb("#000000"),
/// )
/// ```
#let stargazer-theme(
aspect-ratio: "16-9",
align: horizon,
alpha: 20%,
title: self => utils.display-current-heading(depth: self.slide-level),
header-right: self => self.info.logo,
progress-bar: true,
footer-columns: (25%, 25%, 1fr, 5em),
footer-a: self => self.info.author,
footer-b: self => utils.display-info-date(self),
footer-c: self => if self.info.short-title == auto {
self.info.title
} else {
self.info.short-title
},
footer-d: context utils.slide-counter.display() + " / " + utils.last-slide-number,
..args,
body,
) = {
let header(self) = {
set _typst-builtin-align(top)
grid(
rows: (auto, auto),
utils.call-or-display(self, self.store.navigation),
utils.call-or-display(self, self.store.header),
)
}
let footer(self) = {
set text(size: .5em)
set _typst-builtin-align(center + bottom)
grid(
rows: (auto, auto),
utils.call-or-display(self, self.store.footer),
if self.store.progress-bar {
utils.call-or-display(
self,
components.progress-bar(height: 2pt, self.colors.primary, self.colors.neutral-lightest),
)
},
)
}
show: touying-slides.with(
config-page(
paper: "presentation-" + aspect-ratio,
header: header,
footer: footer,
header-ascent: 0em,
footer-descent: 0em,
margin: (top: 3.5em, bottom: 2.5em, x: 2.5em),
),
config-common(
slide-fn: slide,
new-section-slide-fn: new-section-slide,
),
config-methods(
init: (self: none, body) => {
set text(size: 20pt)
set list(marker: components.knob-marker(primary: self.colors.primary))
show figure.caption: set text(size: 0.6em)
show footnote.entry: set text(size: 0.6em)
show heading: set text(fill: self.colors.primary)
show link: it => if type(it.dest) == str {
set text(fill: self.colors.primary)
it
} else {
it
}
show figure.where(kind: table): set figure.caption(position: top)
body
},
alert: utils.alert-with-primary-color,
tblock: _tblock,
),
config-colors(
primary: rgb("#005bac"),
primary-dark: rgb("#004078"),
secondary: rgb("#ffffff"),
tertiary: rgb("#005bac"),
neutral-lightest: rgb("#ffffff"),
neutral-darkest: rgb("#000000"),
),
// save the variables for later use
config-store(
align: align,
alpha: alpha,
title: title,
header-right: header-right,
progress-bar: progress-bar,
footer-columns: footer-columns,
footer-a: footer-a,
footer-b: footer-b,
footer-c: footer-c,
footer-d: footer-d,
navigation: self => components.simple-navigation(self: self, primary: white, secondary: gray, background: self.colors.neutral-darkest, logo: utils.call-or-display(self, self.store.header-right)),
header: self => if self.store.title != none {
block(
width: 100%,
height: 1.8em,
fill: gradient.linear(self.colors.primary, self.colors.neutral-darkest),
place(left + horizon, text(fill: self.colors.neutral-lightest, weight: "bold", size: 1.3em, utils.call-or-display(self, self.store.title)), dx: 1.5em),
)
},
footer: self => {
let cell(fill: none, it) = rect(
width: 100%,
height: 100%,
inset: 1mm,
outset: 0mm,
fill: fill,
stroke: none,
_typst-builtin-align(horizon, text(fill: self.colors.neutral-lightest, it)),
)
grid(
columns: self.store.footer-columns,
rows: (1.5em, auto),
cell(fill: self.colors.neutral-darkest, utils.call-or-display(self, self.store.footer-a)),
cell(fill: self.colors.neutral-darkest, utils.call-or-display(self, self.store.footer-b)),
cell(fill: self.colors.primary, utils.call-or-display(self, self.store.footer-c)),
cell(fill: self.colors.primary, utils.call-or-display(self, self.store.footer-d)),
)
}
),
..args,
)
body
}
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/scholarly-tauthesis/0.4.1/template/main.typ | typst | Apache License 2.0 | /*** main.typ
*
* The main document to be compiled. Run
*
* typst compile main.typ
*
* to perform the compilation. If you are writing a multi-file
* project, this file is where you need to include your content
* files.
*
***/
//// Initialize document type.
#import "@preview/scholarly-tauthesis:0.4.1" as tauthesis
#import "meta.typ"
/*** scholarly-tauthesis.tauthesis
*
* Possible input arguments are as follows:
*
* - fignumberwithinlevel
*
* Defines the heading level that figure numbers will be based
* on.
*
* - eqnumberwithinlevel
*
* Defines the heading level that equation numbers will be
* based on.
*
* - textfont
*
* Chooses the font that will be used for normal text.
*
* - mathfont
*
* Chooses the font that will be used for mathematics.
*
* - codefont
*
* Chooses the font that will be used to display code or raw
* elements.
*
***/
#let fignumberwithinlevel = 1
#let eqnumberwithinlevel = 1
#show: doc => tauthesis.tauthesis(
fignumberwithinlevel : fignumberwithinlevel,
eqnumberwithinlevel : eqnumberwithinlevel,
doc
)
//// Include your chapters here.
//
// Your text can be written entirely in this file, or split into
// multiple subfiles. If you do, you will need to import the
// preamble separately to those files, if you wish to use the
// commands.
//
#include "content/01.typ"
#include "content/02.typ"
#include "content/03.typ"
#include "content/04.typ"
#show: tauthesis.bibsettings
#bibliography(style: meta.citationstyle, "bibliography.bib")
//// Place appendix-related chapters here.
#show: doc => tauthesis.appendix(
fignumberwithinlevel : fignumberwithinlevel,
eqnumberwithinlevel : eqnumberwithinlevel,
doc
)
#include "content/A.typ"
|
https://github.com/piepert/grape-suite | https://raw.githubusercontent.com/piepert/grape-suite/main/src/exercise.typ | typst | MIT License | #import "colors.typ" as colors: *
#import "tasks.typ": *
#import "todo.typ": todo, list-todos, todo-state, hide-todos
#let standard-box-translations = (
"task": [Task],
"hint": [Hint],
"solution": [Suggested solution],
"definition": [Definition],
"notice": [Notice!],
"example": [Example],
)
#let project(
no: none,
// category of the document, eg. "Exam", "Handout", "Series"
type: [Exam],
// title of the document; if not set, type and suffix-title generate the title of the document
title: none,
// if title is not set, it is used to generate the title of the document
suffix-title: none,
// disable/enable outline
show-outline: false,
// abstract
abstract: none,
// used in header; if none, then is set to title
document-title: none,
show-hints: false,
show-solutions: false,
// show name and time in header of first page
show-namefield: false,
namefield: [Name:],
show-timefield: false,
timefield: (time) => [Time: #time min.],
// if show-timefield is true, then the timefield(max-time) is generated in the header
max-time: 0,
// if task has a defined amount of lines, draw the amount of lines below the task
show-lines: false,
// show point distributions after tasks/at the end of the solutions
show-point-distribution-in-tasks: false,
show-point-distribution-in-solutions: false,
// show solution matrix; expected solution argument of the tasks is now a list of 2-tuples, where the first element is always a number of points and the second element is the description of what these points are awarded for
solutions-as-matrix: false,
// show comment field in solution matrix
show-solution-matrix-comment-field: false,
solution-matrix-comment-field-value: [*Note:* #v(0.5cm)],
university: none,
faculty: none,
institute: none,
seminar: none,
semester: none,
docent: none,
author: none,
date: datetime.today(),
// if set, above attributes featuring automatic generation of the header are ignored
header: none,
header-right: none,
header-middle: none,
header-left: none,
footer: none,
footer-right: none,
footer-middle: none,
footer-left: none,
// translations
task-type: [Task],
extra-task-type: [Extra task],
box-task-title: standard-box-translations.at("task"),
box-hint-title: standard-box-translations.at("hint"),
box-solution-title: standard-box-translations.at("solution"),
box-definition-title: standard-box-translations.at("definition"),
box-notice-title: standard-box-translations.at("notice"),
box-example-title: standard-box-translations.at("example"),
sentence-supplement: "Example",
hint-type: [Hint],
hints-title: [Hints],
solution-type: [Suggested solution],
solutions-title: [Suggested solutions],
solution-matrix-task-header: [Tasks],
solution-matrix-achieved-points-header: [Points achieved],
distribution-header-point-value: [Point],
distribution-header-point-grade: [Grade],
message: (points-sum, extrapoints-sum) => [In sum #points-sum + #extrapoints-sum P. are achievable. You achieved #box(line(stroke: purple, length: 1cm)) out of #points-sum points.],
grade-scale: (
([excellent], 0.9),
([very good], 0.8),
([good], 0.7),
([pass], 0.6),
([fail], 0.49)),
page-margins: none,
fontsize: 11pt,
show-todolist: true,
body
) = {
let ifnn-line(e) = if e != none [#e \ ]
if title == none {
title = if type != none or no != none [ #type #no ] + if (type != none or no != none) and suffix-title != none [ --- ] + if suffix-title != none [#suffix-title]
}
if document-title == none {
document-title = title
}
set text(font: "Atkinson Hyperlegible", size: fontsize)
// show math.equation: set text(font: "Fira Math")
show math.equation: set text(font: "STIX Two Math")
set par(justify: true)
set enum(indent: 1em)
set list(indent: 1em)
show link: underline
show link: set text(fill: purple)
show heading: it => context {
let num-style = it.numbering
if num-style == none {
return it
}
let num = text(weight: "thin", numbering(num-style, ..counter(heading).at(here()))+[ \u{200b}])
let x-offset = -1 * measure(num).width
pad(left: x-offset, par(hanging-indent: -1 * x-offset, text(fill: purple.lighten(25%), num) + [] + text(fill: purple, it.body)))
}
let ufi = ()
if university != none { ufi.push(university) }
if faculty != none { ufi.push(faculty) }
if institute != none { ufi.push(institute) }
set page(
margin: if page-margins != none {page-margins} else {
(top: if ufi.len() <= 2 or not show-namefield {
3.5cm
} else {
4cm
}, bottom: 3cm)
},
header: if header != none {header} else [
#set text(size: 0.75em)
#table(columns: (1fr, auto, 1fr), align: top, stroke: none, inset: 0pt, if header-left != none {header-left} else [
#if ufi.len() == 2 {
ufi.join(", ")
[\ ]
} else if ufi.len() > 0 {
ufi.join([\ ])
[\ ]
}
#ifnn-line(seminar)
#ifnn-line(semester)
#ifnn-line(docent)
#context {
if state("grape-suite-namefields").at(here()) != 1 {
if show-namefield {
namefield
}
state("grape-suite-namefields").update(1)
}
}
], align(center, if header-middle != none {header-middle} else []), if header-right != none {header-right} else [
#show: align.with(top + right)
#ifnn-line(document-title)
#ifnn-line(author)
#ifnn-line(date.display("[day].[month].[year]"))
#context {
if state("grape-suite-timefield").at(here()) != 1 {
if show-timefield {
timefield(max-time)
}
state("grape-suite-timefield").update(1)
}
}
])
] + v(-0.5em) + line(length: 100%, stroke: purple),
footer: if footer != none {footer} else {
set text(size: 0.75em)
line(length: 100%, stroke: purple) + v(-0.5em)
table(columns: (1fr, auto, 1fr),
align: top,
stroke: none,
inset: 0pt,
if footer-left != none {footer-left},
align(center, context {
str(counter(page).display())
[ \/ ]
str(counter(page).final().first())
}),
if footer-left != none {footer-left}
)
},
)
state("grape-suite-task-translations").update((
"task-type": task-type,
"extra-task-type": extra-task-type
))
state("grape-suite-box-translations").update((
"task": box-task-title,
"hint": box-hint-title,
"solution": box-solution-title,
"definition": box-definition-title,
"notice": box-notice-title,
"example": box-example-title,
))
state("grape-suite-element-sentence-supplement").update(sentence-supplement)
show: sentence-logic
big-heading(title)
if abstract != none {
set text(size: 0.85em)
pad(x: 1cm, abstract)
}
if show-outline {
show outline.entry: it => h(1em) + it
set text(size: 0.75em)
pad(x: 1cm, top: if abstract != none {0.25cm} else {0cm}, outline(indent: 1.5em))
}
if show-todolist {
set text(size: 0.75em)
context {
if todo-state.final().len() > 0 {
pad(x: 1cm, top: if abstract != none or show-outline != none {0.25cm} else {0cm}, list-todos())
}
}
}
set heading(numbering: "1.")
state("grape-suite-tasks").update(())
state("grape-suite-show-lines").update(show-lines)
body
if show-point-distribution-in-tasks {
context make-point-distribution(here(), message, grade-scale, distribution-header-point-value, distribution-header-point-grade)
}
context {
let tasks = state("grape-suite-tasks", ()).at(here())
if show-hints and tasks.filter(e => e.hint != none).len() != 0 {
pagebreak()
big-heading[#hints-title #if type != none or no != none [ -- ] #type #no]
make-hints(here(), hint-type)
}
}
show: it => if show-solutions and solutions-as-matrix {
set page(flipped: true, columns: 2, margin: (x: 1cm, top: 3cm, bottom: 2cm))
it
} else if show-solutions {
pagebreak()
it
}
context {
let tasks = state("grape-suite-tasks", ()).at(here())
if show-solutions and tasks.filter(e => e.solution != none).len() != 0 {
big-heading[#solutions-title #if type != none or no != none [ -- ] #type #no]
if solutions-as-matrix {
set text(size: 0.75em)
make-solution-matrix(
show-comment-field: show-solution-matrix-comment-field,
comment-field-value: solution-matrix-comment-field-value,
here(),
solution-matrix-task-header,
task-type,
extra-task-type,
solution-matrix-achieved-points-header)
if show-point-distribution-in-solutions {
                    make-point-distribution(here(), message, grade-scale, distribution-header-point-value, distribution-header-point-grade)
}
} else {
make-solutions(here(), solution-type)
}
}
}
} |
https://github.com/LDemetrios/Conspects-4sem | https://raw.githubusercontent.com/LDemetrios/Conspects-4sem/master/typst/sources/probability.typ | typst | #import "header.typ": *
#show: general-style
|
|
https://github.com/Kasci/LiturgicalBooks | https://raw.githubusercontent.com/Kasci/LiturgicalBooks/master/SK/zalmy/Z037.typ | typst | Nekarhaj ma, Pane, vo svojom rozhorčení \* a netrestaj ma vo svojom hneve,
lebo tvoje šípy utkveli vo mne, \* dopadla na mňa tvoja ruka.
Pre tvoje rozhorčenie niet na mojom tele zdravého miesta, \* pre môj hriech nemajú pokoj moje kosti.
Hriechy mi prerástli nad hlavu \* a ťažia ma príliš sťa veľké bremeno.
Rany mi zapáchajú a hnisajú \* pre moju nerozumnosť.
Zohnutý som a veľmi skľúčený, \* smutne sa vlečiem celý deň.
Bedrá mi spaľuje horúčka \* a moje telo je nezdravé.
Nevládny som a celý dobitý, \* v kvílení srdca nariekam.
Pane, ty poznáš každú moju túžbu; \* ani moje vzdychy nie sú skryté pred tebou.
Srdce mi búcha, sila ma opúšťa \* i svetlo v očiach mi hasne.
Priatelia moji a moji známi odvracajú sa odo mňa pre moju biedu; \* aj moji príbuzní sa ma stránia.
Tí, čo mi číhajú na život, nastavujú mi osídla, \* a tí, čo mi stroja záhubu, rozchyrujú o mne výmysly a deň čo deň vymýšľajú úklady.
Ale ja som sťa hluchý, čo nečuje, \* ako nemý, čo neotvára ústa.
Podobám sa človekovi, čo nepočuje \* a čo nevie obvinenie vyvrátiť.
Pane, pretože v teba dúfam, \* ty ma vyslyšíš, Pane, Bože môj.
A tak hovorím: „Nech sa už neradujú nado mnou; \* a keď sa potknem, nech sa nevystatujú nado mňa.“
Ja, pravda, už takmer padám \* a na svoju bolesť myslím ustavične.
Preto vyznávam svoju vinu \* a pre svoj hriech sa trápim.
Moji nepriatelia sú živí a stále mocnejší, \* ba ešte pribudlo tých, čo ma nenávidia neprávom.
Za dobro sa mi odplácajú zlom a tupia ma za to, \* že som konal dobre.
Neopúšťaj ma, Pane; \* Bože môj, nevzďaľuj sa odo mňa.
Ponáhľaj sa mi na pomoc, \* Pane, moja spása.
Neopúšťaj ma, Pane; \* Bože môj, nevzďaľuj sa odo mňa.
Ponáhľaj sa mi na pomoc, \* Pane, moja spása. |
|
https://github.com/Riesi/typst_stains | https://raw.githubusercontent.com/Riesi/typst_stains/main/stains.typ | typst | The Unlicense | #let coffee_list = (
"stain_A.svg",
"stain_B.svg",
"stain_C.svg",
"stain_D.svg",
)
#let stain_list = (
coffee: coffee_list,
)
#let stain(
type: "coffee",
index: 0,
dx: 0em,
dy: 0em,
scale: 100%,
rotation: 0deg,
//opacity: 100%, TODO not yet supported in Typst
) = {
let stain_path = type+"/"+stain_list.at(type).at(index)
layout(size => {}) // this changes the behavior of the lower code
place(
center+horizon,
dx: dx,
dy: dy,
rotate(rotation)[#image(stain_path, width: scale)]
)
}
|
https://github.com/yonatanmgr/summaries-template | https://raw.githubusercontent.com/yonatanmgr/summaries-template/main/template/utils.typ | typst | #let zero_pad(number) = {
return ("00"+str(number)).slice(-2)
}
#let concat_hebrew(arr) = [
#arr.slice(0, -1).join(", ")
#if arr.len() > 1 [ו#arr.last()] else [#arr.first()]
]
#let graph(style: "school-book", w: 2, h: 2, start: -2, end: 2, y-tick-step: 1, x-tick-step:1, functions: (), v-asymptotes: (), h-asymptotes: (), additionals: ()) = {
text(lang: "en", dir: ltr)[
#import "@preview/cetz:0.2.0"
#show math.equation: block.with(fill: white, inset: 1pt)
#cetz.canvas({
import cetz.plot
plot.plot(
axis-style: style, size: (w,h), x-tick-step: x-tick-step, y-tick-step: y-tick-step, grid: true, {
for f in functions { cetz.plot.add(domain: (start, end), f, samples: 2500) }
if v-asymptotes.len() > 0 {
cetz.plot.add-vline(..v-asymptotes, style: (stroke: (dash: "dashed", paint: rgb("#00000075"))))
}
if h-asymptotes.len() > 0 {
cetz.plot.add-hline(..h-asymptotes, style: (stroke: (dash: "dashed", paint: rgb("#00000075"))))
}
if additionals.len() > 0 {
for f in additionals { cetz.plot.add(domain: (start, end), f, samples: 2500, style: (stroke: (dash: "dashed", paint: rgb("#00000075")))) }
}
}
)
})
]
}
|
|
https://github.com/HiiGHoVuTi/requin | https://raw.githubusercontent.com/HiiGHoVuTi/requin/main/lib.typ | typst | #let show_correct = false
#let q_count = counter("questions")
#let setup_ex() = {
q_count.update(0)
}
#let levels_emojis = (
"emojis/chick.svg",
"emojis/cat.svg",
"emojis/octopus.svg",
"emojis/shark.svg",
"emojis/dragon.svg",
"emojis/biohazard.svg",
).map(x => box(image(x)))
// ---- headings ----
#let heading_fct(it) = {
if (it.numbering == none) {it} else {
let numb = counter(heading).display(it.numbering)
if (it.level == 1) [
#pagebreak()
#v(1fr)
#align(center, [
#text(size: 20pt)[#numb #it.body]
])
#v(1fr)
] else if (it.level == 2) [
#pagebreak()
#setup_ex()
#align(center, [
#set text(size: 1.2em)
* #numb #it.body *
])
] else [
#set text(size: 1.2em)
* #numb #it.body *
]
}
}
#show heading: heading_fct
#let question(score, question) = {
[ #levels_emojis.at(score) *Question #q_count.display()* #h(10pt) #question #h(1fr) /*#array.range(5).map(i => if i < score {$star.filled$} else {$star.stroked$}).sum()*/ \ ]
q_count.step()
}
// et pour afficher la correction
#let correct(body) = {
if (show_correct) {
set text(white)
rect(
fill: rgb(196,255,181),
inset: 8pt,
radius: 4pt,
width: 100%,
[
#set text(white)
#rect(
fill: green,
inset: 8pt,
radius: 4pt,
width: 100%,
[Correction],
)
#set text(black)
#body
],
)
} else {}
}
#let problem(name,entry,output) = {
set align(center)
rect(outset: 3pt)[
*#name*\
#set align(left)
*ENTREE:* #entry \
*SORTIE:* #output
]
set align(left)
}
|
|
https://github.com/japrozs/resume | https://raw.githubusercontent.com/japrozs/resume/master/template.typ | typst | #import "utils.typ"
// Load CV Data from YAML
//#let info = yaml("cv.typ.yml")
// Variables
//#let headingfont = "Linux Libertine" // Set font for headings
//#let bodyfont = "Linux Libertine" // Set font for body
//#let fontsize = 10pt // 10pt, 11pt, 12pt
//#let linespacing = 6pt
//#let showAddress = true // true/false Show address in contact info
//#let showNumber = true // true/false Show phone number in contact info
// set rules
#let setrules(uservars, doc) = {
// set page(
// paper: "us-letter", // a4, us-letter
// numbering: "1 / 1",
// number-align: center, // left, center, right
// margin: 1.25cm, // 1.25cm, 1.87cm, 2.5cm
// )
// Set Text settings
set text(
font: uservars.bodyfont,
size: uservars.fontsize,
hyphenate: false,
)
set list(
spacing: uservars.linespacing
)
// Set Paragraph settings
set par(
leading: uservars.linespacing,
justify: true,
)
doc
}
// show rules
#let showrules(uservars, doc) = {
// Uppercase Section Headings
show heading.where(
level: 2,
): it => block(width: 100%)[
#set align(left)
#set text(font: uservars.headingfont, size: 1em, weight: "bold")
#upper(it.body)
#v(-0.75em) #line(length: 100%, stroke: 1pt + black) // Draw a line
]
// Name Title
show heading.where(
level: 1,
): it => block(width: 100%)[
#set text(font: uservars.headingfont, size: 1.5em, weight: "bold")
#upper(it.body)
#v(2pt)
]
doc
}
// Set Page Layout
#let cvinit(uservars, doc) = {
    doc = setrules(uservars, doc)
    doc = showrules(uservars, doc)

    doc
}
// Address
#let addresstext(info, uservars) = {
if uservars.showAddress {
block(width: 100%)[
#info.personal.location.city, #info.personal.location.region, #info.personal.location.country #info.personal.location.postalCode
#v(-4pt)
]
} else {none}
}
// Arrange the contact profiles with a diamond separator
#let contacttext(info, uservars) = block(width: 100%, below:2.4em)[
// Contact Info
// Create a list of contact profiles
#let profiles = (
box(link("mailto:" + info.personal.email)),
if uservars.showNumber {box(link("tel:" + info.personal.phone))} else {none},
if info.personal.url != none {
box(link(info.personal.url)[#info.personal.url.split("//").at(1)])
}
).filter(it => it != none) // Filter out none elements from the profile array
// Add any social profiles
// #if info.personal.profiles.len() > 0 {
// for profile in info.personal.profiles {
// profiles.push(
// box(link(profile.url)[#profile.url.split("//").at(1)])
// )
// }
// }
// #set par(justify: false)
#set text(font: uservars.bodyfont, weight: "regular", size: uservars.fontsize * 1)
#pad(x: 0em)[
#profiles.join([#sym.space.en #sym.dash.em #sym.space.en])
]
]
// Create layout of the title + contact info
#let cvheading(info, uservars) = {
align(center)[
= #info.personal.name
// #addresstext(info, uservars)
#contacttext(info, uservars)
// #v(0.5em)
]
}
// Education
#let cveducation(info, isbreakable: true) = {
if info.education != none {block[
#heading(level: 2, "Education")
#for edu in info.education {
// Parse ISO date strings into datetime objects
let end = utils.strpdate(edu.endDate)
let edu-items = ""
if edu.honors != none {edu-items = edu-items + "- *Honors*: " + edu.honors.join(", ") + "\n"}
if edu.courses != none {edu-items = edu-items + "- *Courses*: " + edu.courses.join(", ") + "\n"}
if edu.highlights != none {
for hi in edu.highlights {
edu-items = edu-items + "- " + hi + "\n"
}
edu-items = edu-items.trim("\n")
}
// Create a block layout for each education entry
block(width: 100%, breakable: isbreakable)[
// Line 1: Institution and Location
// #if edu.url != none [
// *#link(edu.url)[#edu.institution]* #h(1fr) #edu.location \
// ] else [
#text(font: "New Computer Modern")[*#edu.institution*] #h(1fr) #text(style:"italic")[#end] \
// ]
// Line 2: Degree and Date Range
#if edu.studyType != none [#text()[#edu.studyType] #h(1fr)] \
#h(1fr)
#eval(edu-items, mode: "markup")
]
}
]}
}
// Work Experience
#let cvwork(info, isbreakable: true) = {
if info.work != none {block[
== Work Experience
#for w in info.work {
// Parse ISO date strings into datetime objects
let start = utils.strpdate(w.startDate)
let end = utils.strpdate(w.endDate)
// Create a block layout for each education entry
block(width: 100%, breakable: isbreakable)[
// Line 1: Institution and Location
// #if w.url != none [
// *#link(w.url)[#w.organization]* #h(1fr) *#w.location* \
// ] else [
#text(font: "New Computer Modern")[*#w.organization*] #h(1fr) #start #sym.dash.en #end \
// ]
// Line 2: Degree and Date Range
#text(style: "italic")[#w.position] #h(1fr)
#text(style: "italic")[#w.location] \
#h(1fr)
// Highlights or Description
#for hi in w.highlights [
- #eval(hi, mode: "markup")
]
]
}
]}
}
// Leadership and Activities
#let cvaffiliations(info, isbreakable: true) = {
if info.affiliations != none {block[
== Leadership & Activities
#for org in info.affiliations {
// Parse ISO date strings into datetime objects
let start = utils.strpdate(org.startDate)
let end = utils.strpdate(org.endDate)
// Create a block layout for each education entry
block(width: 100%, breakable: isbreakable)[
// Line 1: Institution and Location
#if org.url != none [
*#link(org.url)[#org.organization]* #h(1fr) *#org.location* \
] else [
*#org.organization* #h(1fr) *#org.location* \
]
// Line 2: Degree and Date Range
#text(style: "italic")[#org.position] #h(1fr)
#start #sym.dash.en #end \
// Highlights or Description
#if org.highlights != none {
for hi in org.highlights [
- #eval(hi, mode: "markup")
]
} else {}
]
}
]}
}
// Projects
#let cvprojects(info, isbreakable: true) = {
if info.projects != none {block[
== Projects
#for project in info.projects {
// Parse ISO date strings into datetime objects
let date = utils.strpdate(project.date)
// let end = utils.strpdate(project.endDate)
// Create a block layout for each education entry
block(width: 100%, breakable: isbreakable)[
// Line 1: Institution and Location
#if project.url != none [
*#link(project.url)[#text(font: "New Computer Modern")[#project.name]]*
] else [
*#project.name*
]
// Line 2: Degree and Date Range
#h(1fr) #date #sym.dash.em #text()[#eval(project.languages, mode: "markup")] \
#h(1fr)
// Summary or Description
#for hi in project.highlights [
- #eval(hi, mode: "markup")
]
]
}
]}
}
// Honors and Awards
#let cvawards(info, isbreakable: true) = {
if info.awards != none {block[
== Honors & Awards
#for award in info.awards {
// Parse ISO date strings into datetime objects
let date = utils.strpdate(award.date)
// Create a block layout for each education entry
block(width: 100%, breakable: isbreakable)[
// Line 1: Institution and Location
#if award.url != none [
*#link(award.url)[#text(font: "New Computer Modern")[#award.title]]* #h(1fr) *#award.location* \
] else [
#text(font: "New Computer Modern")[*#award.title*] #h(1fr) *#award.location* \
]
// Line 2: Degree and Date Range
Issued by #text(style: "italic")[#award.issuer] #h(1fr) #date \
#h(1fr)
// Summary or Description
#if award.highlights != none {
for hi in award.highlights [
- #eval(hi, mode: "markup")
]
} else {}
]
}
]}
}
// Certifications
#let cvcertificates(info, isbreakable: true) = {
if info.certificates != none {block[
== Certificates
#for cert in info.certificates {
// Parse ISO date strings into datetime objects
let date = utils.strpdate(cert.date)
// Create a block layout for each education entry
block(width: 100%, breakable: isbreakable)[
// Line 1: Institution and Location
#if cert.url != none [
*#link(cert.url)[#text(font: "New Computer Modern")[#cert.name]]* \
] else [
#text(font: "New Computer Modern")[*#cert.name*] \
]
// Line 2: Degree and Date Range
Issued by #text(style: "italic")[#cert.issuer] #h(1fr) #date \
]
}
]}
}
// Certifications
#let cvopensource(info, isbreakable: true) = {
if info.opensource != none {block[
== Open Source Contributions
#for cert in info.opensource {
// Parse ISO date strings into datetime objects
let date = utils.strpdate(cert.date)
// Create a block layout for each education entry
block(width: 100%, breakable: isbreakable)[
#text(font: "New Computer Modern")[*#cert.name*] #h(1fr) #date #sym.dash.em #text()[#eval(cert.languages, mode: "markup")] \
// Line 2: Degree and Date Range
#cert.desc \
]
}
]}
}
#let cvcoursework(info, isbreakable: true) = {
if info.certificates != none {block[
== CourseWork
#for cw in info.coursework {
// Create a block layout for each education entry
block(width: 100%, breakable: isbreakable)[
#text(font: "New Computer Modern")[*#cw.name*] \
#for course in cw.list [
- #text()[#eval(course, mode: "markup")] \
]
]
}
]}
}
// Research & Publications
#let cvpublications(info, isbreakable: true) = {
if info.publications != none {block[
== Research & Publications
#for pub in info.publications {
// Parse ISO date strings into datetime objects
let date = utils.strpdate(pub.releaseDate)
// Create a block layout for each education entry
block(width: 100%, breakable: isbreakable)[
// Line 1: Institution and Location
#if pub.url != none [
*#link(pub.url)[#pub.name]* \
] else [
*#pub.name* \
]
// Line 2: Degree and Date Range
Published on #text(style: "italic")[#pub.publisher] #h(1fr) #date \
]
}
]}
}
// Skills, Languages, and Interests
#let cvskills(info, isbreakable: true) = {
if (info.languages != none) or (info.skills != none) or (info.interests != none) {block(breakable: isbreakable)[
== Skills, Languages, Interests
#if (info.languages != none) [
#let langs = ()
#for lang in info.languages {
langs.push([#lang.language (#lang.fluency)])
}
- #text(font: "New Computer Modern")[*Languages*]: #langs.join(", ")
]
#if (info.skills != none) [
#for group in info.skills [
- #text(font: "New Computer Modern")[*#group.category*]: #group.skills.join(", ")
]
]
#if (info.interests != none) [
- #text(font: "New Computer Modern")[*Interests*]: #info.interests.join(", ")
]
]}
}
// References
#let cvreferences(info, isbreakable: true) = {
if info.references != none {block[
== References
#for ref in info.references {
block(width: 100%, breakable: isbreakable)[
#if ref.url != none [
- *#link(ref.url)[#ref.name]*: "#ref.reference"
] else [
- *#ref.name*: "#ref.reference"
]
]
}
]} else {}
}
// #cvreferences
// =====================================================================
// End Note
#let endnote = {
place(
bottom + right,
block[
#set text(size: 5pt, font: "Consolas", fill: silver)
\*This document was last updated on #datetime.today().display("[year]-[month]-[day]") using #strike[LaTeX] #link("https://typst.app")[Typst].
]
)
}
// #place(
// bottom + right,
// dy: -71%,
// dx: 4%,
// rotate(
// 270deg,
// origin: right + horizon,
// block(width: 100%)[
// #set align(left)
// #set par(leading: 0.5em)
// #set text(size: 6pt)
// #super(sym.dagger) This document was last updated on #raw(datetime.today().display("[year]-[month]-[day]")) using #strike[LaTeX] #link("https://typst.app")[Typst].
// // Template by <NAME>.
// ]
// )
// ) |
|
https://github.com/darkMatter781x/OverUnderNotebook | https://raw.githubusercontent.com/darkMatter781x/OverUnderNotebook/main/main.typ | typst | #import "/packages.typ": notebookinator, codly
#import notebookinator: *
#import codly: *
#import themes.radial: radial-theme, components
#show: notebook.with(theme: radial-theme, cover: align(center)[
#text(size: 24pt, font: "Tele-Marines")[
#v(3em)
#text(size: 28pt)[
Code Notebook | 781X
]
#image("./assets/781X-logo.png", height: 60%)
2023 - 2024
#line(length: 50%, stroke: (thickness: 2.5pt, cap: "round"))
Over Under
]
], team-name: "781X")
#create-frontmatter-entry(title: "Table of Contents")[
#components.toc()
]
#include "entries/entries.typ"
// #include "./appendix.typ" |
|
https://github.com/RiccardoTonioloDev/Bachelor-Thesis | https://raw.githubusercontent.com/RiccardoTonioloDev/Bachelor-Thesis/main/appendix/glossary.typ | typst | Other | #set heading(numbering: none)
#import "@preview/glossarium:0.4.1": print-glossary
#show figure.caption : set text(font: "EB Garamond",size: 12pt)
#pagebreak()
= Glossario
#print-glossary(
(
(key: "MDE", short: "MDE", desc: "Monocular depth estimation, è il campo che si occupa di trovare soluzioni in grado di stimare le profondità a partire da una sola immagine in input."),
(key: "LiDAR", short: "LiDAR", desc: [Strumento di telerilevamento che permette di determinare la distanza di una superficie utilizzando un impulso laser.]),
(key: "FotoStereo", short: "stereocamera", desc: [Particolari tipi di fotocamere dotate di due obbiettivi paralleli. Questo tipo di fotocamera viene utilizzata per ottenere due immagini della stessa scena a una distanza nota. Queste immagini vengono successivamente introdotte in un algoritmo che, cercando di trovare la corrispondenza dei vari pixel tra le due immagini e conoscendo la distanza tra i due obbiettivi, triangola la profondità di tali pixel.]),
(key: "embedded", short: "embedded", desc: [Un dispositivo si dice _embedded_ quanto, è progettato per eseguire operazioni di elaborazione e analisi dei dati localmente, vicino alla fonte dei dati stessi, piuttosto che inviarli a un server centrale o al cloud.]),
(key: "Tensorflow", short: "TensorFlow", desc: [Libreria _open source_ per l'apprendimento automatico sviluppata da Google Brain.]),
(key: "PyTorch", short: "PyTorch", desc: [Libreria _open source_ per l'apprendimento automatico sviluppata da Meta AI.]),
(key: "Python", short: "Python", desc: [Linguaggio di programmazione interpretato con tipizzazione dinamica e forte, diventato standard per la scrittura di codice orientato al _machine learning_ e alla _data science_.]),
(key: "pip", short: "pip", desc: [_Package-management system_ scritto in _Python_ e usato per installare e gestire pacchetti software.]),
(key: "Anaconda", short: "Anaconda", desc: [Distribuzione del linguaggio di programmazione Python per la computazione scientifica, che cerca di semplificare la gestione dei pacchetti e la messa in produzione del software.]),
(key: "Wandb", short: "Wandb", desc: [Sistema online per il logging e la gestione dei log mediante _report_, per registrare l'andamento di variabili di interesse, specialmente utilizzato nel campo del _machine learning_.]),
(key: "cuDNN", short: "cuDNN", desc: [*cu*\da *D*\eep *N*\eural *N*\etwork è una libreria sviluppata da NVIDIA, che espone una serie di primitive per permettere l'esecuzione di codice accellerata su schede video NVIDIA, specialmente utile per reti neurali profonde.]),
(key: "CUDA", short: "CUDA", desc: [*C*\ompute *U*\nified *D*\evice *A*\rchitecture è un'architettura hardware per l'elaborazione parallela creata da NVIDIA.]),
(key: "disparità", short: "disparità", desc: [Nel contesto delle fotocamere stereoscopiche, la disparità è la differenza nella posizione orizzontale di un pixel tra due immagini catturate da due fotocamere posizionate ad una certa distanza l'una dall'altra. Questa differenza è causata dalla variazione di angolo con cui ogni fotocamera vede gli oggetti nella scena.]),
(key: "encoder", short: "encoder", desc: [Rete neurale che comprime un input in una rappresentazione di dimensioni ridotte, estraendo le caratteristiche essenziali.]),
(key: "decoder", short: "decoder", desc: [Rete neurale che ha lo scopo di analizzare un input compresso da un @encoder, per generare la predizione.]),
(key: "kernel", short: "kernel", desc: [Matrice di pesi utilizzata per filtrare l'immagine, eseguendo operazioni di somma e prodotto su sotto-regioni dell'immagine per estrarre caratteristiche come bordi, texture e dettagli.]),
(key:"stride",short:"stride",desc:[Il passo con cui il @kernel si sposta sull'immagine, determinando la distanza tra le posizioni successive del @kernel in una convoluzione.]),
(key:"fmap",short:"feature map",desc:[Il risultato delle operazioni di una convoluzione sull'immagine, rappresentando le caratteristiche rilevate come bordi e texture.]),
(key:"adam",short:"Adam",desc:[L'Adaptive Moment Estimation è un ottimizzatore che utilizzando stime adattive del momento di primo e secondo ordine (media e varianza dei gradienti) aggiorna i pesi, migliorando la velocità e stabilità della convergenza durante l'addestramento dei modelli di apprendimento profondo.]),
(key:"linter",short:"linter",desc:[Strumento che analizza il codice sorgente per individuare errori, bug, stile non conforme e altri problemi di qualità.]),
(key:"finetune",short:"fine tuning",desc:[Processo di adattamento di un modello pre-addestrato su un nuovo dataset specifico per migliorare le sue prestazioni su un compito particolare.]),
(key:"hadamard",short:"prodotto di Hadamard",desc:[Date due matrici dalle stesse dimensioni, la matrice risultato dell'operazione avrà le medesime dimensioni dell'input, e il valore di ogni sua cella corrisponderà al prodotto tra i valori delle celle corrispondenti nelle due matrici di input.]),
(key:"gh",short:"GitHub",desc: [Piattaforma di hosting per il controllo di versione e la collaborazione, basata su Git.]),
(key:"nlp",short:"natural language processing",desc: [Il *N*\atural *L*\anguage *P*\rocessing è un campo dell'intelligenza artificiale che si occupa dell'integrazione tra computer e linguaggio umano.]),
)
)
|
https://github.com/m4cey/mace-typst | https://raw.githubusercontent.com/m4cey/mace-typst/main/templates/colors.typ | typst | #let colors(colorscheme: (), doc) = {
let colors = colorscheme
if colorscheme.len() == 0 {
import "./colorschemes/default.typ" as colorscheme
colors = colorscheme
}
set page(
fill: colors.background
)
set text(
fill: colors.foreground
)
doc
}
// TESTS
#show: doc => colors(doc)
#set text(size: 2em)
#let sep() = [#align(center)[#line(length: 50%)]]
= Heading 1
== Heading 2
=== Heading 3
==== Heading 4
===== Heading 5
#sep()
*Bold* _Italic_ *_BoldItalic_*
#sep()
`inline rawtext`
```bash
#!/bin/bash
echo "codeblock\n"
```
#sep()
#table(inset: 0.5em, columns:2, [just],[a],[normal],[table])
#sep()
a quote:
#quote(block: true, attribution: [me])[thou suck ass]
#sep()
Here's some math: $1 + 1 = 3$
$
1 + 1 &= 1 + 0 dots.c 0 + 1 \ &= "what do I fucking know"
$
|
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/math/font-features.typ | typst | Apache License 2.0 | // Test that setting font features in math.equation has an effect.
---
$ nothing $
$ "hi ∅ hey" $
$ sum_(i in NN) 1 + i $
#show math.equation: set text(features: ("cv01",), fallback: false)
$ nothing $
$ "hi ∅ hey" $
$ sum_(i in NN) 1 + i $
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/cetz/0.1.1/src/lib/angle.typ | typst | Apache License 2.0 | #import "../draw.typ"
#import "../cmd.typ"
#import "../styles.typ"
#import "../vector.typ"
#import "../util.typ"
// Angle default-style
#let default-style = (
fill: none,
stroke: auto,
radius: .5,
label-radius: .25,
mark: (
start: none,
end: none,
size: auto,
fill: none,
stroke: auto,
)
)
/// Draw an angle between origin-a and origin-b
/// Only works for coordinates with z = 0!
///
/// *Anchors:*
/// / start: Arc starting point
/// / end: Arc end point
/// / origin: Arc origin
/// / label: Label center
///
/// - origin (coordinate): Angle corner origin
/// - a (coordinate): First coordinate
/// - b (coordinate): Second coordinate
/// - inner (bool): Draw inner `true` or outer `false` angle
/// - label (content,function,none): Angle label/content or function of the form `angle => content` that receives the angle and must return a content object
/// - ..style (style): Angle style
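/// Example (an illustrative sketch — to be placed inside a `canvas` body;
/// the coordinates and style values here are arbitrary):
/// ```typc
/// cetz.angle.angle((0, 0), (2, 1), (1, 2), label: $alpha$, radius: 1)
/// ```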
#let angle(origin, a, b,
inner: true,
label: none,
name: none, ..style) = {
let start-end(origin, a, b) = {
assert(origin.at(2, default: 0) == 0 and
a.at(2, default: 0) == 0 and
b.at(2, default: 0) == 0,
message: "FIXME: Angle only works for 2D coordinates.")
let s = vector.angle2(origin, a) * -1
if s < 0deg { s += 360deg }
let e = vector.angle2(origin, b) * -1
if e < 0deg { e += 360deg }
if s > e {
(s, e) = (e, s)
}
if inner == true {
let d = vector.angle(a, origin, b)
if e - s > 180deg {
(s, e) = (e, e + d)
} else {
(s, e) = (s, s + d)
}
} else if inner == false {
if e - s < 180deg {
let d = 360deg - vector.angle(a, origin, b)
(s, e) = (e, e + d)
}
}
(s, e, (s + e) / 2)
}
let style = style.named()
((
name: name,
default-anchor: "label",
coordinates: (origin, a, b),
transform-coordinates: (ctx, origin, a, b) => {
let style = util.merge-dictionary(default-style,
styles.resolve(ctx.style, style, root: "angle"))
let (s, e, ss) = start-end(origin, a, b)
let (x, y, z) = origin
let (r, _) = util.resolve-radius(style.radius)
.map(util.resolve-number.with(ctx))
let (ra, _) = util.resolve-radius(style.label-radius)
.map(util.resolve-number.with(ctx))
let start = (x + r * calc.cos(s),
y + r * calc.sin(s), z)
let end = (x + r * calc.cos(e),
y + r * calc.sin(e), z)
let label = (x + ra * calc.cos(ss),
y + ra * calc.sin(ss), z)
(origin, a, b, start, end, label)
},
custom-anchors-ctx: (ctx, origin, a, b, start, end, label) => {
(origin: origin,
a: a,
b: b,
start: start,
end: end,
label: label,
)
},
render: (ctx, origin, a, b, start, end, pt-label) => {
let style = util.merge-dictionary(default-style,
styles.resolve(ctx.style, style, root: "angle"))
let (s, e, _) = start-end(origin, a, b)
let (r, _) = util.resolve-radius(style.radius)
.map(util.resolve-number.with(ctx))
let (x, y, z) = start
if style.fill != none {
cmd.arc(x, y, z, s, e, r, r,
mode: "PIE", fill: style.fill, stroke: none)
}
if style.stroke != none {
cmd.arc(x, y, z, s, e, r, r,
mode: "OPEN", fill: none, stroke: style.stroke)
}
if style.mark.start != none {
let f = vector.add(vector.scale(
(calc.cos(s + 90deg), calc.sin(s + 90deg), 0), style.mark.size),
start)
cmd.mark(f, start, style.mark.start,
fill: style.mark.fill, stroke: style.mark.stroke)
}
if style.mark.end != none {
let f = vector.add(vector.scale(
(calc.cos(e - 90deg), calc.sin(e - 90deg), 0), style.mark.size),
end)
cmd.mark(f, end, style.mark.end,
fill: style.mark.fill, stroke: style.mark.stroke)
}
let label = if type(label) == "function" {
label(e - s)
} else { label }
if label != none {
let (lx, ly, ..) = pt-label
let (w, h) = draw.measure(label, ctx)
let (width: width, height: height) = draw.typst-measure(label,
ctx.typst-style)
cmd.content(
lx, ly, w, h,
move(dx: -width/2,
dy: -height/2,
label))
}
},
),)
}
|
https://github.com/SkytAsul/INSA-Typst-Template | https://raw.githubusercontent.com/SkytAsul/INSA-Typst-Template/main/packages/silky-report-insa/template/main.typ | typst | MIT License | #import "@preview/silky-report-insa:{{VERSION}}": *
#show: doc => insa-report(
id: 1,
pre-title: "DPT XA",
title: "Titre du TP",
authors: [
*NOM 1 Prénom 1*
*NOM 2 Prénom 2*
<NAME>
<NAME>
],
date: "jj/mm/aaaa",
doc)
Bonjour
|
https://github.com/k0tran/typst | https://raw.githubusercontent.com/k0tran/typst/sisyphus/vendor/hayagriva/docs/file-format.md | markdown | # The Hayagriva YAML File Format
The Hayagriva YAML file format enables you to feed a collection of literature items into Hayagriva. It is built on the [YAML standard](https://en.wikipedia.org/wiki/YAML). This documentation starts with a basic introduction to the format with examples, explains how to represent several kinds of literature with parents, and then explores all the possible fields and data types. An [example file](https://github.com/typst/hayagriva/blob/main/tests/data/basic.yml) covering many potential use cases can be found in the test directory of the repository.
## Overview
In technical terms, a Hayagriva file is a YAML document that contains a single mapping of mappings.
Or, in simpler terms: Every literature item needs to be identifiable by some name (the _key_) and have some properties that describe it (the _fields_). Suppose a file like this:
```yaml
harry:
type: Book
title: Harry Potter and the Order of the Phoenix
author: <NAME>.
volume: 5
page-total: 768
date: 2003-06-21
electronic:
type: Web
title: Ishkur's Guide to Electronic Music
serial-number: v2.5
author: Ishkur
url: http://www.techno.org/electronic-music-guide/
```
You can see that it refers to two items: The fifth volume of the Harry Potter books (key: `harry`) and a web page called "Ishkur's Guide to Electronic Music" (key: `electronic`). The key always comes first and is followed by a colon. Below the key, indented, you can find one field on each line: They start with the field name, then a colon, and then the field value.
Sometimes, this value can be more complex than just some text after the colon. If you have an article that was authored by multiple people, its `author` field can look like this instead:
```yaml
author: ["<NAME>", "<NAME>"]
```
Or it could also be this:
```yaml
author:
- <NAME>
- <NAME>
```
The `author` field can be an _array_ (a list of values) to account for media with more than one creator. YAML has two ways to represent these lists: the former, compact way, where you wrap your list in square brackets, and the latter, more verbose way, where you put each author on their own indented line and precede them with a hyphen so that it looks like a bullet list. Since, in the compact form, both list items and authors' first and last names are separated by commas, you have to wrap the names of individual authors in double-quotes.
Sometimes, fields accept composite data. If, for example, you wanted to save an access date for a URL in your bibliography, you would need the `url` field to accept that. This is accomplished like this:
```yaml
url:
value: http://www.techno.org/electronic-music-guide/
date: 2020-11-12
```
There is also a more compact form of this that might look familiar if you know JSON:
```yaml
url: { value: http://www.techno.org/electronic-music-guide/, date: 2020-11-12 }
```
By now, you might think that there must be an abundance of fields to represent all the possible information that could be attached to any piece of literature: For example, an article could have been published in an anthology whose title you would want to save, and that anthology belongs to a series that has a title itself... Would you need three different title fields just for this? Hayagriva's data model was engineered to prevent this kind of field bloat. Read the next section to learn how to represent various literature.
## Representing publication circumstance with parents
Hayagriva aims to keep the number of fields it uses small to make the format easier to memorize and therefore write without consulting the documentation. Other contemporary literature management file formats like RIS and BibLaTeX use many fields to account for every kind of information that could be attached to some piece of media.
We instead use the concept of parents: Many pieces of literature are published within other media (e.g., articles can appear in newspapers, blogs, periodicals, ...), and when each of these items is considered in isolation, without regard for that publication hierarchy, there are substantially fewer fields that could apply.
How does this look in practice? An article in a scientific journal could look like this:
```yaml
kinetics:
type: Article
title: Kinetics and luminescence of the excitations of a nonequilibrium polariton condensate
author: ["<NAME>.", "<NAME>.", "<NAME>"]
doi: "10.1103/PhysRevB.102.165126"
page-range: 165126-165139
date: 2020-10-14
parent:
type: Periodical
title: Physical Review B
volume: 102
issue: 16
publisher: American Physical Society
```
This means that the article was published in issue 16, volume 102 of the journal "Physical Review B". Notice that the `title` field is in use for both the article and its parent - every field is available for both top-level use and all parents.
To specify parent information, write the `parent` field name and a colon and then put all fields for that parent on indented lines below.
The line `type: Periodical` could also have been omitted since each entry type has the notion of a default parent type, which is the type that the parents will have if they do not have a `type` field.
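For instance, the journal parent from the example above could be written without its `type` line, since `periodical` is the default parent type of an `article` (a sketch with abbreviated fields):

```yaml
kinetics:
  type: Article
  title: Kinetics and luminescence of the excitations of a nonequilibrium polariton condensate
  parent:
    title: Physical Review B
    volume: 102
    issue: 16
```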
Sometimes, media is published in multiple ways, i.e., a single parent would not provide the full picture. Multiple parents are possible to handle these cases:
```yaml
wwdc-network:
type: Article
author: ["<NAME>", "<NAME>"]
title: Boost Performance and Security with Modern Networking
date: 2020-06-26
parent:
- type: Conference
title: World Wide Developer Conference 2020
organization: Apple Inc.
location: Mountain View, CA
- type: Video
runtime: "00:13:42"
url: https://developer.apple.com/videos/play/wwdc2020/10111/
```
This entry describes a talk that was presented at a conference; a video of the talk is available, and the information was ultimately cited from that video.
Just like the `author` field, `parent` can be a list. If it is, a hyphen indicates the start of each new parent.
Parents can also appear as standalone items and can have parents themselves. This is useful if you are working with articles from a journal that belongs to a series or cases like the one below:
```yaml
plaque:
type: Misc
title: Informational plaque about Jacoby's 1967 photos
publisher: Stiftung Reinbeckhallen
location: Berlin, Germany
date: 2020
parent:
type: Artwork
date: 1967
author: <NAME>
parent:
type: Anthology
title: Bleibtreustraße
archive: Landesmuseum Koblenz
archive-location: Koblenz, Germany
```
This plaque was created by a museum for a photo by Jacoby that belongs to a series that is usually archived at a different museum.
## Reference
This section lists all possible fields and data types for them.
### Fields
#### `type`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | entry type |
| **Description:** | media type of the item, often determines the structure of references. |
| **Example:** | `type: video` |
#### `title`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | formattable string |
| **Description:** | title of the item |
| **Example:** | `title: <NAME>: How An Internet Joke Revived My Career` |
#### `author`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | person / list of persons |
| **Description:** | persons primarily responsible for the creation of the item |
| **Example:** | `author: ["<NAME>", "<NAME>"]` |
#### `date`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | date |
| **Description:** | date at which the item was published |
| **Example:** | `date: 1949-05` |
#### `parent`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | entry |
| **Description:** | item in which the item was published / to which it is strongly associated to |
| **Example:** | <pre>parent:<br> type: Anthology<br> title: Automata studies<br> editor: ["<NAME>.", "<NAME>."]</pre> |
#### `editor`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | person / list of persons |
| **Description:** | persons responsible for selecting and revising the content of the item |
| **Example:** | <pre>editor:<br> - <NAME>.<br> - <NAME>-Larry</pre> |
#### `affiliated`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | list of persons with role / list of lists of persons with role |
| **Description:** | persons involved with the item that do not fit `author` or `editor` |
| **Example:** | <pre>affiliated:<br> - role: Director<br> names: <NAME><br> - role: CastMember<br> names: ["<NAME>", "<NAME>", "<NAME>"]<br></pre> |
#### `call-number`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | formattable string |
| **Description:** | The number of the item in a library, institution, or collection. Use with `archive`.|
| **Example:** | `call-number: "F16 D14"` |
#### `publisher`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | formattable string |
| **Description:** | publisher of the item |
| **Example:** | `publisher: Penguin Books` |
#### `location`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | formattable string |
| **Description:** | location at which the item was published or created |
| **Example:** | `location: Lahore, Pakistan` |
#### `organization`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | formattable string |
| **Description:** | Organization at/for which the item was produced |
| **Example:** | `organization: Technische Universität Berlin` |
#### `issue`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | numeric or string |
| **Description:** | For an item whose parent has multiple issues, indicates the position in the issue sequence. Also used to indicate the episode number for TV. |
| **Example:** | `issue: 5` |
#### `volume`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | numeric or string |
| **Description:** | For an item whose parent has multiple volumes/parts/seasons ... of which this item is one |
| **Example:** | `volume: 2-3` |
#### `volume-total`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | numeric |
| **Description:** | Total number of volumes/parts/seasons this item consists of |
| **Example:** | `volume-total: 12` |
#### `edition`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | numeric or string |
| **Description:** | published version of an item |
| **Example:** | `edition: expanded and revised edition` |
#### `page-range`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | numeric or string |
| **Description:** | the range of pages within the parent this item occupies |
| **Example:** | `page-range: 812-847` |
#### `page-total`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | numeric |
| **Description:** | total number of pages the item has |
| **Example:** | `page-total: 1103` |
#### `time-range`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | timestamp range |
| **Description:** | the time range within the parent this item starts and ends at |
| **Example:**     | `time-range: "00:57-06:21"`                                |
#### `runtime`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | timestamp |
| **Description:** | total runtime of the item |
| **Example:**     | `runtime: "01:42:21,802"`                                  |
#### `url`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | url |
| **Description:** | canonical public URL of the item, can have access date |
| **Example:** | `url: { value: https://www.reddit.com/r/AccidentalRenaissance/comments/er1uxd/japanese_opposition_members_trying_to_block_the/, date: 2020-12-29 }` |
#### `serial-number`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | string or dictionary of strings |
| **Description:** | Any serial number, including article numbers. If you have serial numbers of well-known schemes like `doi`, you should put them into the serial number as a dictionary like in the second example. Hayagriva will recognize and specially treat `doi`, `isbn`, `issn`, `pmid`, `pmcid`, and `arxiv`. You can also include `serial` for the serial number when you provide other formats as well. |
| **Example:** | `serial-number: 2003.13722` or <pre>serial-number:<br> doi: "10.22541/au.148771883.35456290"<br> arxiv: "1906.00356"<br> serial: "8516"</pre> |
#### `language`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | unicode language identifier |
| **Description:** | language of the item |
| **Example:** | `language: zh-Hans` |
#### `archive`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | formattable string |
| **Description:** | name of the institution/collection where the item is kept |
| **Example:** | `archive: National Library of New Zealand` |
#### `archive-location`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | formattable string |
| **Description:** | location of the institution/collection where the item is kept |
| **Example:** | `archive-location: Wellington, New Zealand` |
#### `note`
| | |
|------------------|-----------------------------------------------------------|
| **Data type:** | formattable string |
| **Description:** | additional description to be appended after reference list entry |
| **Example:** | `note: microfilm version` |
### Data types
#### Entry
Entries are collections of fields that could either have a key or be contained in the `parent` field of another entry.
#### Entry Type
Needs a keyword with one of the following values:
- `article`. A short text, possibly of journalistic or scientific nature, appearing in some greater publication (default parent: `periodical`).
- `chapter`. A section of a greater containing work (default parent: `book`).
- `entry`. A short segment of media on some subject matter. Could appear in a work of reference or a data set (default parent: `reference`).
- `anthos`. Text published within an Anthology (default parent: `anthology`).
- `report`. A document compiled by authors that may be affiliated to an organization. Presents information for a specific audience or purpose.
- `thesis`. Scholarly work delivered to fulfill degree requirements at a higher education institution.
- `web`. Piece of content that can be found on the internet and is native to the medium, like an animation, a web app, or a form of content not found elsewhere. Do not use this entry type when referencing a textual blog article, instead use an `article` with a `blog` parent (default parent: `web`).
- `scene`. A part of a show or another type of performed media, typically all taking place in the same location (default parent: `video`).
- `artwork`. A form of artistic/creative expression (default parent: `exhibition`).
- `patent`. A technical document deposited at a government agency that describes an invention to legally limit the rights of reproduction to the inventors.
- `case`. Reference to a legal case that was or is to be heard at a court of law.
- `newspaper`. The issue of a newspaper that was published on a given day.
- `legislation`. Legal document or draft thereof that is, is to be, or was to be enacted into binding law (default parent: `anthology`).
- `manuscript`. Written document that is submitted as a candidate for publication.
- `original`. The original container of the entry before it was re-published.
- `post`. A post on a micro-blogging platform like Twitter (default parent: `post`).
- `misc`. Items that do not match any of the other Entry type composites.
- `performance`. A live artistic performance.
- `periodical`. A publication that periodically publishes issues with unique content. This includes scientific journals and news magazines.
- `proceedings`. The official published record of the events at a professional conference.
- `book`. Long-form work published physically as a set of bound sheets.
- `blog`. Set of self-published articles on a website.
- `reference`. A work of reference. This could be a manual or a dictionary.
- `conference`. Professional conference. This Entry type implies that the item referenced has been an event at the conference itself. If you instead want to reference a paper published in the published proceedings of the conference, use an `article` with a `proceedings` parent.
- `anthology`. Collection of different texts on a single topic/theme.
- `repository`. Publicly visible storage of the source code for a particular software, papers, or other data and its modifications over time.
- `thread`. Written discussion on the internet triggered by an original post. Could be on a forum, social network, or Q&A site.
- `video`. Motion picture of any form, possibly with accompanying audio (default parent: `video`).
- `audio`. Recorded audible sound of any kind (default parent: `audio`).
- `exhibition`. A curated set of artworks.
The field is case insensitive. It defaults to `Misc` or the default parent if the entry appears as a parent of an entry that defines a default parent.
#### Formattable String
A formattable string is a string that may run through a text case transformer when used in a reference or citation. You can disable these transformations on segments of the string or the whole string.
The simplest scenario for a formattable string is to provide a string that can be case-folded:
```yaml
publisher: UN World Food Programme
```
If you want to preserve a part of the string but want to go with the style's
behavior otherwise, enclose the string in braces like below. You must wrap the
whole string in quotes if you do this.
```yaml
publisher: "{imagiNary} Publishing"
```
To disable formatting altogether and instead preserve the casing as it appears
in the source string, put the string in the `value` sub-field and specify
another sub-field as `verbatim: true`:
```yaml
publisher:
value: UN World Food Programme
verbatim: true
```
Title and sentence case folding will always be deactivated if your item has set
the `language` key to something other than English.
You can also include mathematical markup evaluated by [Typst](https://typst.app) by
wrapping it in dollars.
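For example (a hypothetical title; the dollar-delimited part is Typst math markup):

```yaml
title: On Integer Solutions of $x^2 + y^2 = z^2$
```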
Furthermore, every formattable string can include a short form that a citation
style can choose to render over the longer form.
```yaml
journal:
value: International Proceedings of Customs
short: Int. Proc. Customs
```
#### Person
A person consists of a name and, optionally, a given name, a prefix, and a suffix for the (family) name, as well as an alias. Usually, you specify a person as a string with the prefix and the last name first, then a comma, followed by the given name, another comma, and finally the suffix. The following items are valid persons:
- `<NAME>`
- `<NAME>.`
- `UNICEF`
- `<NAME>`
The prefix and the last name will be separated automatically using [the same algorithm as BibTeX (p. 24)](https://ftp.rrze.uni-erlangen.de/ctan/info/bibtex/tamethebeast/ttb_en.pdf) which can be summarized as "put all the consecutive lower case words at the start into the prefix."
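For instance, with a hypothetical name like the one below, the consecutive lowercase words `van der` end up in the prefix, `Berg` becomes the family name, and `Sandra` the given name:

```yaml
author: van der Berg, Sandra
```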
Usually, this is all you need to specify a person's name. However, if a part of a name contains a comma, if the prefix is not lowercased, or if one needs to specify an alias, the person can also be specified using sub-fields:
```yaml
author:
given-name: <NAME>
name: Watkins
alias: bell hooks
```
The available sub-fields are `name`, `given-name`, `prefix`, `suffix`, and `alias`. The `name` field is required.
#### List of persons with role
This data type requires a mapping with two fields: `names` which contains a list of persons or a single person and a `role` which specifies their role with the item:
```yaml
role: ExecutiveProducer
names: ["<NAME>", "<NAME>.", "<NAME>"]
```
##### **Possible `role` values**
- `translator`. Translated the work from a foreign language to the cited edition.
- `afterword`. Authored an afterword.
- `foreword`. Authored a foreword.
- `introduction`. Authored an introduction.
- `annotator`. Provided value-adding annotations.
- `commentator`. Commented on the work.
- `holder`. Holds a patent or similar.
- `compiler`. Compiled the works in an Anthology.
- `founder`. Founded the publication.
- `collaborator`. Collaborated on the cited item.
- `organizer`. Organized the creation of the cited item.
- `cast-member`. Performed in the cited item.
- `composer`. Composed all or parts of the cited item's musical/audible components.
- `producer`. Produced the cited item.
- `executive-producer`. Lead Producer for the cited item.
- `writer`. Did the writing for the cited item.
- `cinematography`. Shot film/video for the cited item.
- `director`. Directed the cited item.
- `illustrator`. Illustrated the cited item.
- `narrator`. Provided narration or voice-over for the cited item.
The `role` field is case insensitive.
#### Date
A calendar date as ISO 8601. This means that you specify the full date as `YYYY-MM-DD` with an optional sign in front to represent years earlier than `0000` in the Gregorian calendar. The year 1 B.C.E. is represented as `0000`, the year 2 B.C.E. as `-0001` and so forth.
The shortened forms `YYYY` or `YYYY-MM` are also possible.
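As a sketch, each of the following forms is valid (shown as alternatives, not as one mapping):

```yaml
date: 2017         # year only
date: 1989-11      # year and month
date: -0043-03-15  # March 15, 44 B.C.E.
```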
#### Timestamp
A timestamp represents some time in a piece of media. It is given as a string of the form `DD:HH:MM:SS,msms` but everything except `MM:SS` can be omitted. Wrapping the string in double-quotes is necessary due to the colons.
The left-most time denomination only allows values that could overflow into the next-largest denomination if that is not specified. This means that the timestamp `138:00` is allowed for 2 hours and 18 minutes, but `01:78:00` is not.
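For instance (sketches; the double-quotes are required because of the colons):

```yaml
runtime: "138:00"    # 2 hours and 18 minutes; allowed
runtime: "02:18:00"  # the same duration, written with an hour part
# runtime: "01:78:00" would be invalid: minutes cannot exceed 59 once hours are given
```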
#### Timestamp range
A range of timestamps is a string containing two timestamps separated by a hyphen. The first timestamp in the string indicates the starting point, whereas the second one indicates the end. Wrapping the string in double-quotes is necessary due to the colons in the timestamps.
```yaml
time-range: "03:35:21-03:58:46"
```
#### String
Strings are sequences of characters as a field value. In most cases you can write your string after the colon, but if it contains a special character (`:`, `{`, `}`, `[`, `]`, `,`, `&`, `*`, `#`, `?`, `|`, `-`, `<`, `>`, `=`, `!`, `%`, `@`, `\`) it should be wrapped with double-quotes. If your string contains double-quotes, you can write those as this escape sequence: `\"`. If you instead wrap your string in single quotes, most YAML escape sequences such as `\n` for a line break will be ignored.
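As a small sketch (hypothetical values), both quoting styles look like this:

```yaml
note: "Originally published under the title \"Nachtzug\""  # double-quotes allow the \" escape
archive-location: 'C:\archives\microfilm'                  # single quotes keep the backslashes as-is
```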
#### Numeric
Numeric variables are one or more numbers that are delimited by commas,
ampersands, and hyphens. Numeric variables can express a single number or a
range and contain only integers, but may contain negative numbers. Numeric variables can have a non-numeric prefix and suffix.
```yaml
page-range: S10-15
```
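A few more sketches of valid numeric values, using the delimiters described above:

```yaml
volume: 2-3        # a range
issue: "2, 4 & 6"  # quoted because of the comma and ampersand
```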
#### Unicode Language Identifier
A [Unicode Language Identifier](https://unicode.org/reports/tr35/tr35.html#unicode_language_id) identifies a language or its variants. At the simplest, you can specify an all-lowercase [two-letter ISO 639-1 code](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) like `en` or `es` as a language. It is possible to specify regions, scripts, or variants to more precisely identify a variety of a language, especially in cases where the ISO 639-1 code is considered a "macrolanguage" (`zh` includes both Cantonese and Mandarin). In such cases, specify values like `en-US` for American English or `zh-Hans-CN` for Mandarin written in simplified script in mainland China. The region tags have to be written in all-caps and are mostly corresponding to [ISO 3166-1 alpha_2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2#Officially_assigned_code_elements) codes.
Consult the [documentation of the Rust crate unic-langid](https://docs.rs/unic-langid/latest/unic_langid/index.html) we use for parsing these language identifiers for more information.
|
|
https://github.com/OverflowCat/BUAA-Digital-Image-Processing-Sp2024 | https://raw.githubusercontent.com/OverflowCat/BUAA-Digital-Image-Processing-Sp2024/master/chap04/3.typ | typst | #import "@preview/unify:0.6.0": qty
#import "helper.typ": Q
#let a = qty(0.5, "mm")
#Q[Consider a checkerboard image in which each square has size #a×#a. Assume the image extends infinitely in both coordinate directions. To avoid aliasing, what is the minimum sampling rate (in samples/mm)? (Textbook p. 192, Problem 4.12.)]
Let the side length of a square be $a = #a $; then its highest spatial frequency is
$ f_max = 1 / a = #qty(2, "mm^-1"). $
By the Nyquist sampling theorem, the sampling rate must be at least twice the highest frequency, so
$ f_upright(s) = 2 f_max = #qty(4, "mm^-1"). $
That is, the minimum sampling rate is 4 samples per mm.
|
|
https://github.com/gongke6642/tuling | https://raw.githubusercontent.com/gongke6642/tuling/main/Text/raw.typ | typst | #set text(
size:10pt,
)
#set page(
paper:"a5",
margin:(x:1.8cm,y:1.5cm),
)
#set par(
justify: true,
leading: 0.52em,
)
= Raw Text
Raw text with optional syntax highlighting.
Displays the text verbatim and in a monospace font. This is typically used to embed computer code into a document.
= Example
#image("11.png")
= Syntax
This function also has dedicated syntax. You can enclose text in 1 or 3+ backticks to make it raw. Two backticks produce empty raw text. When you use three or more backticks, you can also specify a language tag for syntax highlighting directly after the opening backticks. Within raw blocks, everything (except for the language tag, if applicable) is rendered as-is; in particular, there are no escape sequences.
The language tag is an identifier that directly follows the opening backticks, and only when there are three or more backticks. If your text starts with something that looks like an identifier but does not need syntax highlighting, start the text with a single space (which will be trimmed) or use the single-backtick syntax. If your text should start or end with a backtick, put a space before or after it (it will be trimmed).
= Parameters
#image("12.png")
= text
The raw text block.
You can also use raw blocks creatively to create custom syntaxes for your automations.
= block
Whether the raw text is displayed as a separate block.
In markup mode, using one backtick sets this to false. Using three backticks sets it to true if the enclosed content contains at least one line break.
Default: false
= lang
The language to syntax-highlight in.
The usage is similar to Markdown code blocks. Apart from the typical programming-language tags also found in Markdown, this supports the "typ" and "typc" tags for Typst markup mode and Typst script mode, respectively.
Default: none
= align
The horizontal alignment that each line in a raw block should have. This option is ignored if this is not a raw block (i.e., if block: false was specified, or single backticks were used in markup mode).
By default, this is set to start, meaning that raw text is aligned towards the start of the text direction inside the block, regardless of the current context's alignment (allowing you, for example, to center the raw block itself without centering the text inside of it).
Default: start
= syntaxes
One or multiple additional syntax definitions to load. The syntax definitions should be in the sublime-syntax file format.
Default: ()
= theme
The theme to use for syntax highlighting. Theme files should be in the tmTheme file format.
Applying a theme only affects the color of specifically highlighted text. It does not consider the theme's foreground and background properties, so you retain control over the color of raw text. You can apply a foreground color yourself with the text function and a background color with a filled block. You can also use the xml function to extract these properties from the theme.
Default: none
= tab-size
The size of a tab stop in spaces. A tab is replaced with enough spaces to align with the next multiple of the size.
Default: 2
= Definitions
= line
A highlighted line of raw text.
This is a helper element that is synthesized by raw elements.
It allows you to access various properties of the line, such as the line number, the raw non-highlighted text, the highlighted text, and whether it is the first or last line of the raw block.
#image("13.png")
= number
The line number of the raw line inside the raw block, starting at 1.
= count
The total number of lines in the raw block.
= text
The line of raw text.
= body
The highlighted raw text.
|
https://github.com/tweaselORG/ReportHAR | https://raw.githubusercontent.com/tweaselORG/ReportHAR/main/README.md | markdown | MIT License | # ReportHAR
> Generate technical reports, controller notices and GDPR complaints concerning tracking by mobile apps for the tweasel project.
ReportHAR is the library for generating the following documents for the tweasel project based on network traffic recordings in HAR files:
* **Technical reports** detailing the findings and methodology of our automated analyses concerning tracking and similar data transmissions performed on mobile apps. The reports include a reproduction of the recorded network traffic.
* **Notices to controllers** making them aware of data protection violations discovered in their app and giving them the opportunity to remedy the violations.
* **Complaints to data protection authorities** about apps that continue to violate data protection law even after the controller was notified and given time to fix the issues. The complaint contains both a technical assessment, based on the traffic analysis, and a detailed legal assessment.
> [!NOTE]
> Currently, ReportHAR only works with templates that are quite specific and hardcoded to our use case with tweasel. Support for custom templates is [planned](https://github.com/tweaselORG/ReportHAR/issues/7).
All documents are generated as PDF files using [Typst](https://typst.app/). The [templates](/templates/) are translatable.
Using ReportHAR is most convenient for traffic recordings made with tweasel tools, which contain [additional metadata about the analysis](https://github.com/tweaselORG/cyanoacrylate#additional-metadata-in-exported-har-files). This way, you don't need to manually provide information about the app, device, etc. However, ReportHAR can also work with HAR files produced by other tools.
ReportHAR doesn't actually analyze the traffic itself, it just produces the documents. You need to use [TrackHAR](https://github.com/tweaselORG/TrackHAR) to detect the transmitted personal data and provide that result to ReportHAR.
## Installation
You can install ReportHAR using yarn or npm:
```sh
yarn add reporthar
# or `npm i reporthar`
```
## API reference
A full API reference can be found in the [`docs` folder](/docs/README.md).
## Example usage
ReportHAR provides two main functions for generating documents: `generate()` and `generateAdvanced()`.
### Usage with tweasel HAR files
`generate()` is the high-level function that is easiest to use. It expects a tweasel HAR file with additional metadata and automatically extracts all required information from it.
First, we generate the initial technical report and notice to send to the controller:
```ts
import { writeFile } from 'fs/promises';
import { process } from 'trackhar';
import { generate } from 'reporthar';
(async () => {
// We start by loading the HAR file of the initial analysis…
const initialHar = /* […] */;
// …and detect the transmitted tracking data using TrackHAR.
const initialTrackHarResult = await process(initialHar);
// Then, we pass both to the `generate()` function to generate
// the technical report…
const initialReport = await generate({
type: 'report',
language: 'en',
har: initialHar,
trackHarResult: initialTrackHarResult,
});
// …and the controller notice.
const notice = await generate({
type: 'notice',
language: 'en',
har: initialHar,
trackHarResult: initialTrackHarResult,
});
// This will give you two PDFs that you can for example
// save to disk.
await writeFile('initial-report.pdf', initialReport);
await writeFile('notice.pdf', notice);
// Remember to store the TrackHAR result as it will also be
// needed for the complaint.
await writeFile(
'initial-trackhar-result.json',
JSON.stringify(initialTrackHarResult)
);
})();
```
If the controller did not appropriately remedy the violations after the deadline, we will send a complaint to the DPAs:
```ts
import { writeFile } from 'fs/promises';
import { process } from 'trackhar';
import { generate, parseNetworkActivityReport } from 'reporthar';
(async () => {
// We again start by loading the HAR files of the initial and
// second analysis, as well as the TrackHAR analysis of the
// initial analysis that we have stored previously.
const initialHar = /* […] */;
const secondHar = /* […] */;
const initialTrackHarResult = /* […] */;
// Again, we detect the transmitted tracking data in the
// second HAR using TrackHAR.
const secondTrackHarResult = await process(secondHar);
// Based on that, we generate the second report.
const secondReport = await generate({
type: 'report',
language: 'en',
har: secondHar,
trackHarResult: secondTrackHarResult,
});
// For the complaint, we also load and parse a report of the
// network activity on the user's device created using the iOS
// App Privacy Report or the Tracker Control app on Android.
// This is to prove that the user making the complaint was
// personally affected by the tracking.
const userNetworkActivityRaw = /* […] */;
const userNetworkActivity = parseNetworkActivityReport(
'tracker-control-csv',
userNetworkActivityRaw
);
// We can then also generate the complaint, providing a whole
// bunch of additional metadata.
const complaint = await generate({
type: 'complaint',
language: 'en',
initialHar: initialHar,
initialTrackHarResult,
har: secondHar,
trackHarResult: secondTrackHarResult,
complaintOptions: {
date: new Date(),
reference: '2024-1ONO079C',
noticeDate: new Date('2023-12-01'),
nationalEPrivacyLaw: 'TDDDG',
complainantAddress: 'Kim Mustermensch, Musterstraße 123, 12345 Musterstadt, Musterland',
controllerAddress: 'Musterfirma, Musterstraße 123, 12345 Musterstadt, Musterland',
controllerAddressSourceUrl: 'https://play.google.com/store/apps/details?id=tld.sample.app',
userDeviceAppStore: 'Google Play Store',
loggedIntoAppStore: true,
deviceHasRegisteredSimCard: true,
controllerResponse: 'denial',
complainantContactDetails: '<EMAIL>',
complainantAgreesToUnencryptedCommunication: true,
userNetworkActivity,
},
});
await writeFile('second-report.pdf', secondReport);
await writeFile('complaint.pdf', complaint);
})();
```
### Usage with regular HAR files
If you want to use ReportHAR with HAR files from other sources that don't include the tweasel metadata, you need to use the `generateAdvanced()` function instead and manually specify the information about the app and analysis that would otherwise be included in the HAR file.
Otherwise, the flow is the same as above.
```ts
const initialAnalysis = {
date: new Date('2023-12-01T10:00:00.000Z'),
deviceType: 'emulator',
platformVersion: '13',
platformBuildString: 'lineage_ocean-userdebug 13 TQ2A.230505.002 8c3345902f',
deviceManufacturer: 'motorola',
deviceModel: 'moto g(7) power',
har: initialHar,
harMd5: '1ee2afb03562aa4d22352ed6b2548a6b',
trackHarResult: initialTrackHarResult,
app: {
platform: 'Android',
id: 'tld.sample.app',
name: 'Sample App',
version: '1.2.3',
url: 'https://play.google.com/store/apps/details?id=tld.sample.app',
store: 'Google Play Store',
},
dependencies: {
"python": "3.11.3",
"mitmproxy": "9.0.1"
},
};
const initialReport = await generateAdvanced({
type: 'report',
language: 'en',
analysis: initialAnalysis,
});
const notice = await generateAdvanced({
type: 'notice',
language: 'en',
analysis: initialAnalysis,
});
const secondTrackHarResult = await process(secondHar);
const secondAnalysis = {
date: new Date('2024-02-01T10:00:00.000Z'),
deviceType: 'emulator',
platformVersion: '13',
platformBuildString: 'lineage_ocean-userdebug 13 TQ2A.230505.002 8c3345902f',
deviceManufacturer: 'motorola',
deviceModel: 'moto g(7) power',
har: secondHar,
harMd5: '2bb3aec14673bb5e33463fe7c3658b7d',
trackHarResult: secondTrackHarResult,
app: {
platform: 'Android',
id: 'tld.sample.app',
name: 'Sample App',
version: '1.2.4',
url: 'https://play.google.com/store/apps/details?id=tld.sample.app',
store: 'Google Play Store',
},
dependencies: {
"python": "3.11.3",
"mitmproxy": "9.0.1"
},
};
const secondReport = await generateAdvanced({
type: 'report',
language: 'en',
analysis: secondAnalysis,
});
const complaint = await generateAdvanced({
type: 'complaint',
language: 'en',
initialAnalysis,
analysis: secondAnalysis,
complaintOptions: {
date: new Date('2024-02-15'),
reference: '2024-1ONO079C',
noticeDate: new Date('2023-12-01'),
nationalEPrivacyLaw: 'TDDDG',
complainantAddress: 'Kim Mustermensch, Musterstraße 123, 12345 Musterstadt, Musterland',
controllerAddress: 'Musterfirma, Musterstraße 123, 12345 Musterstadt, Musterland',
controllerAddressSourceUrl: 'https://play.google.com/store/apps/details?id=tld.sample.app',
userDeviceAppStore: 'Google Play Store',
loggedIntoAppStore: true,
deviceHasRegisteredSimCard: true,
controllerResponse: 'denial',
complainantContactDetails: '<EMAIL>',
complainantAgreesToUnencryptedCommunication: true,
userNetworkActivity,
},
});
```
## License
This code is licensed under the MIT license, see the [`LICENSE`](LICENSE) file for details.
Issues and pull requests are welcome! Please be aware that by contributing, you agree for your work to be licensed under an MIT license.
---

// https://github.com/jomaway/typst-gentle-clues, lib/predefined.typ (Typst, MIT License)
#import "@preview/linguify:0.4.0": *
#import "clues.typ": clue, if-auto-then
#import "theme.typ": catppuccin as theme
// load linguify language database
#let lang_database = toml("lang.toml")
/// Helper for fetching the translated title
#let get-title-for(id) = {
  assert.eq(type(id), str);
return linguify(id, from: lang_database, default: linguify(id, lang: "en", default: id));
}
/// Helper to get the accent-color from the theme
///
/// - id (string): The id for the predefined clue.
/// -> color
#let get-accent-color-for(id) = {
return theme.at(id).accent-color
}
/// Helper to get the icon from the theme
///
/// - id (string): The id for the predefined clue.
/// -> content
#let get-icon-for(id) = {
let icon = theme.at(id).icon
if type(icon) == str {
return image("assets/" + theme.at(id).icon, fit: "contain")
} else {
return icon
}
}
/// Wrapper function for all predefined clues.
///
/// - id (string): The id of the clue, used to look up its accent color, icon and default title.
/// - ..args (arguments): Arguments for overriding the default parameters of the clue.
#let predefined-clue(id, ..args) = clue(
accent-color: get-accent-color-for(id),
title: get-title-for(id),
icon: get-icon-for(id),
..args
)
#let info(..args) = predefined-clue("info",..args)
#let notify(..args) = predefined-clue("notify",..args)
#let success(..args) = predefined-clue("success",..args)
#let warning(..args) = predefined-clue("warning",..args)
#let danger(..args) = predefined-clue("danger",..args)
#let error(..args) = predefined-clue("error",..args)
#let tip(..args) = predefined-clue("tip",..args)
#let abstract(..args) = predefined-clue("abstract",..args)
#let goal(..args) = predefined-clue("goal",..args)
#let question(..args) = predefined-clue("question",..args)
#let idea(..args) = predefined-clue("idea",..args)
#let example(..args) = predefined-clue("example",..args)
#let experiment(..args) = predefined-clue("experiment",..args)
#let conclusion(..args) = predefined-clue("conclusion",..args)
#let memo(..args) = predefined-clue("memo",..args)
#let code(..args) = predefined-clue("code",..args)
#let quotation(attribution: none, content, ..args) = predefined-clue("quote",..args)[
#quote(block: true, attribution: attribution)[#content]
]
#let __gc_task-counter = counter("gc-task-counter")
#let gc-task-counter-enabled = state("gc-task-counter", true)
#let increment_task_counter() = {
context {
if (gc-task-counter-enabled.get() == true){
__gc_task-counter.step()
}
}
}
#let get_task_number() = {
context {
if (gc-task-counter-enabled.get() == true){
" " + __gc_task-counter.display()
}
}
}
#let task(..args) = {
increment_task_counter()
predefined-clue("task", title: get-title-for("task") + get_task_number(), ..args)
}
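
// A minimal usage sketch for the clues defined above. The direct import of
// this file is illustrative only; in practice the package entry point
// re-exports these functions.
//
// ```typ
// #import "predefined.typ": info, tip, task, gc-task-counter-enabled
//
// #info[The default title is resolved through linguify from `lang.toml`.]
// #tip(title: "Custom title")[Named arguments passed here override the defaults.]
//
// // Tasks share a counter; numbering can be switched off via the state:
// #gc-task-counter-enabled.update(false)
// #task[This task is rendered without a number.]
// ```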
---

// https://github.com/typst/packages, packages/preview/cetz/0.0.2/manual.typ (Typst, Apache License 2.0)
#import "lib.typ"
#import "styles.typ"
#import lib: *
#import "deps/typst-doc/typst-doc.typ": parse-module, show-module
// This is a wrapper around typs-doc show-module that
// strips all but one function from the module first.
// As soon as typst-doc supports examples, this is no longer
// needed.
#let show-module-fn(module, fn, ..args) = {
module.functions = module.functions.filter(f => f.name == fn)
show-module(module, ..args.pos(), ..args.named(), show-module-name: false)
}
#let canvas-background = gray.lighten(75%)
#let example(body, source, ..args, vertical: false) = {
block(if vertical {
align(
center,
stack(
dir: ttb,
spacing: 1em,
block(width: 100%,
canvas(body, ..args),
fill: canvas-background,
inset: 1em
),
align(left, source)
)
)
} else {
table(
columns: (auto, auto),
stroke: none,
fill: (x,y) => (canvas-background, none).at(x),
align: (x,y) => (center, left).at(x),
canvas(body, ..args),
source
)
}, breakable: false)
}
#let def-arg(term, t, default: none, description) = {
if type(t) == "string" {
t = t.replace("?", "|none")
t = `<` + t.split("|").map(s => {
if s == "b" {
`boolean`
} else if s == "s" {
`string`
} else if s == "i" {
`integer`
} else if s == "f" {
`float`
} else if s == "c" {
`coordinate`
} else if s == "d" {
`dictionary`
} else if s == "a" {
`array`
} else if s == "n" {
`number`
} else {
raw(s)
}
}).join(`|`) + `>`
}
stack(dir: ltr, [/ #term: #t \ #description], align(right, if default != none {[(default: #default)]}))
}
#set page(
numbering: "1/1",
header: align(right)[The `CeTZ` package],
)
#set heading(numbering: "1.")
#set terms(indent: 1em)
#show link: set text(blue)
#let STYLING = heading(level: 4, numbering: none)[Styling]
#align(center, text(16pt)[*The `CeTZ` package*])
#align(center)[
#link("https://github.com/johannes-wolf")[<NAME>] and #link("https://github.com/fenjalien")[fenjalien] \
https://github.com/johannes-wolf/typst-canvas \
Version #lib.version.map(v => str(v)).join(".")
]
#set par(justify: true)
#outline(indent: true, depth: 3)
#pagebreak(weak: true)
= Introduction
This package provides a way to draw stuff using a similar API to #link("https://processing.org/")[Processing] but with relative coordinates and anchors from #link("https://tikz.dev/")[Ti#[_k_]Z]. You also won't have to worry about accidentally drawing over other content as the canvas will automatically resize. And remember: up is positive!
The name CeTZ is a recursive acronym for "CeTZ, ein Typst Zeichenpacket" (German for "CeTZ, a Typst drawing package") and is pronounced like the word "Cats".
= Usage
This is the minimal starting point:
```typ
#import "@local/cetz:0.0.2"
#cetz.canvas({
import cetz.draw: *
...
})
```
Note that draw functions are imported inside the scope of the `canvas` block. This is recommended as draw functions override Typst's functions such as `line`.
== Argument Types
Argument types in this document are formatted in `monospace` and encased in angle brackets `<>`. Types such as `<integer>` and `<content>` are the same as Typst but additional are required:
/ `<coordinate>`: Any coordinate system. See @coordinate-systems.
/ `<number>`: `<integer> or <float>`
== Anchors <anchors>
Anchors are named positions relative to named elements.
To use an anchor of an element, you must give the element a name using the `name` argument.
#example({
import "draw.typ": *
circle((0,0), name: "circle")
fill(red)
stroke(none)
circle("circle.left", radius: 0.3)
},
[```typc
// Name the circle
circle((0,0), name: "circle")
// Draw a smaller red circle at "circle"'s left anchor
fill(red)
stroke(none)
circle("circle.left", radius: 0.3)
```]
)
All elements have default anchors based on their bounding box: `center`, `left`, `right`, `above`/`top`, `below`/`bottom`, `top-left`, `top-right`, `bottom-left`, and `bottom-right`. Some elements also define their own anchors.
Elements can be placed relative to their own anchors.
#example({
import "draw.typ": *
circle((0,0), anchor: "left")
fill(red)
stroke(none)
circle((0,0), radius: 0.3)
},
[```typc
// An element does not have to be named
// in order to use its own anchors.
circle((0,0), anchor: "left")
// Draw a smaller red circle at the origin
fill(red)
stroke(none)
circle((0,0), radius: 0.3)
```]
)
= Draw Function Reference
== Canvas
```typc
canvas(background: none, length: 1cm, debug: false, body)
```
#def-arg("background", `<color>`, default: "none", "A color to be used for the background of the canvas.")
#def-arg("length", `<length>`, default: "1cm", "Used to specify what 1 coordinate unit is.")
#def-arg("debug", `<bool>`, default: "false", "Shows the bounding boxes of each element when `true`.")
#def-arg("body", none, [A code block in which functions from `draw.typ` have been called.])
== Styling <styling>
You can style draw elements by passing the relevant named arguments to their draw functions. All elements have stroke and fill styling unless said otherwise.
#def-arg("fill", [`<color>` or `<none>`], default: "none", [How to fill the draw element.])
#def-arg("stroke", [`<none>` or `<auto>` or `<length>` \ or `<color>` or `<dicitionary>` or `<stroke>`], default: "black + 1pt", [How to stroke the border or the path of the draw element. See Typst's line documentation for more details: https://typst.app/docs/reference/visualize/line/#parameters-stroke])
#example({
import "draw.typ": *
// Draws a red circle with a blue border
circle((), fill: red, stroke: blue)
// Draws a green line
line((), (1,1), stroke: green)
},
[```typc
cetz.canvas({
import cetz.draw: *
// Draws a red circle with a blue border
circle((0, 0), fill: red, stroke: blue)
// Draws a green line
line((0, 0), (1, 1), stroke: green)
})
```]
)
Instead of specifying the same styling each time you draw an element, you can use the `set-style` function to change the style for all elements after it. You can still pass styling to a draw function to override what has been set with `set-style`. You can also use the `fill()` and `stroke()` functions as a shorthand to set the fill and stroke respectively.
#example({
import "draw.typ": *
// Shows styling is applied after
rect((-1, -1), (1, 1))
// Shows how `set-style` works
set-style(stroke: blue, fill: red)
circle((0,0))
// Shows that styling can be overridden
line((), (1,1), stroke: green)
},
[```typc
cetz.canvas({
import cetz.draw: *
// Draws an empty square with a black border
rect((-1, -1), (1, 1))
// Sets the global style to have a fill of red and a stroke of blue
set-style(stroke: blue, fill: red)
circle((0,0))
// Draws a green line despite the global stroke is blue
line((), (1,1), stroke: green)
})
```]
)
When using a dictionary for a style, it is important to note that they update each other instead of overriding the entire option like a non-dictionary value would do. For example, if the stroke is set to `(paint: red, thickness: 5pt)` and you pass `(paint: blue)`, the stroke would become `(paint: blue, thickness: 5pt)`.
#example({
import "draw.typ": *
set-style(stroke: (paint: red, thickness: 5pt))
line((0,0), (1,0))
line((0,0), (1,1), stroke: (paint: blue))
line((0,0), (0,1), stroke: yellow)
},
[```typc
canvas({
import cetz.draw: *
// Sets the stroke to red with a thickness of 5pt
set-style(stroke: (paint: red, thickness: 5pt))
// Draws a line with the global stroke
line((0,0), (1,0))
// Draws a blue line with a thickness of 5pt because dictionaries update the style
line((0,0), (1,1), stroke: (paint: blue))
// Draws a yellow line with a thickness of 1pt because other values override the style
line((0,0), (0,1), stroke: yellow)
})
```]
)
You can also specify styling for each type of element. Note that dictionary values will still update with its global value, the full hierarchy is `function > element type > global`. When the value of a style is `auto`, it will become exactly its parent style.
#example({
import "draw.typ": *
set-style(
fill: green,
stroke: (thickness: 5pt),
rect: (stroke: (dash: "dashed"), fill: blue),
)
rect((0,0), (1,1))
circle((0.5, -1.5))
rect((0,-3), (1, -4), stroke: (thickness: 1pt))
},
[
```typc
canvas({
import cetz.draw: *
set-style(
// Global fill and stroke
fill: green,
stroke: (thickness: 5pt),
// Stroke and fill for only rectangles
rect: (stroke: (dash: "dashed"), fill: blue),
)
rect((0,0), (1,1))
circle((0.5, -1.5))
rect((0,-3), (1, -4), stroke: (thickness: 1pt))
})
```])
#example({
import "draw.typ": *
set-style(
rect: (
fill: red,
stroke: none
),
line: (
fill: blue,
stroke: (dash: "dashed")
),
)
rect((0,0), (1,1))
line((0, -1.5), (0.5, -0.5), (1, -1.5), close: true)
circle((0.5, -2.5), radius: 0.5, fill: green)
},
[
```typc
// Its a nice drawing okay
cetz.canvas({
import cetz.draw: *
set-style(
rect: (
fill: red,
stroke: none
),
line: (
fill: blue,
stroke: (dash: "dashed")
),
)
rect((0,0), (1,1))
line((0, -1.5), (0.5, -0.5), (1, -1.5), close: true)
circle((0.5, -2.5), radius: 0.5, fill: green)
})
```])
== Elements
#let draw-module = parse-module("../../draw.typ", name: "Draw")
#show-module-fn(draw-module, "line")
#example({
import "draw.typ": *
line((-1.5, 0), (1.5, 0))
line((0, -1.5), (0, 1.5))
},
[
```typc
canvas({
import cetz.draw: *
line((-1.5, 0), (1.5, 0))
line((0, -1.5), (0, 1.5))
})
```])
#STYLING
#def-arg("mark", `<dictionary> or <auto>`, default: auto, [The styling to apply to marks on the line, see `mark`])
#show-module-fn(draw-module, "rect")
#example({
import "draw.typ": *
rect((-1.5, 1.5), (1.5, -1.5))
},
[```typc
canvas({
import cetz.draw: *
rect((-1.5, 1.5), (1.5, -1.5))
})
```])
#show-module-fn(draw-module, "arc")
#example({
import "draw.typ": *
arc((0,0), start: 45deg, stop: 135deg)
arc((0,-0.5), start: 45deg, delta: 90deg, mode: "CLOSE")
arc((0,-1), stop: 135deg, delta: 90deg, mode: "PIE")
},
[```typc
cetz.canvas({
import cetz.draw: *
arc((0,0), start: 45deg, stop: 135deg)
arc((0,-0.5), start: 45deg, delta: 90deg, mode: "CLOSE")
arc((0,-1), stop: 135deg, delta: 90deg, mode: "PIE")
})
```]
)
#STYLING
#def-arg("radius", `<number> or <array>`, default: 1, [The radius of the arc. This is also a global style shared with circle!])
#def-arg("mode", `<string>`, default: `"OPEN"`, [The options are "OPEN" (the default, just the arc), "CLOSE" (a circular segment) and "PIE" (a circular sector).])
#show-module-fn(draw-module, "circle")
#example({
import "draw.typ": *
circle((0,0))
circle((0,-2), radius: (0.75, 0.5))
},
[```typc
cetz.canvas({
import cetz.draw: *
circle((0,0))
// Draws an ellipse
circle((0,-2), radius: (0.75, 0.5))
})
```]
)
#show-module-fn(draw-module, "circle-through")
#example({
import "draw.typ": *
let (a, b, c) = ((0,0), (2,-.5), (1,1))
line(a, b, c, close: true, stroke: gray)
circle-through(a, b, c, name: "c")
circle("c.center", radius: .05, fill: red)
},
[```typ
#cetz.canvas({
import cetz.draw: *
let (a, b, c) = ((0,0), (2,-.5), (1,1))
line(a, b, c, close: true, stroke: gray)
circle-through(a, b, c, name: "c")
circle("c.center", radius: .05, fill: red)
})
```]
)
#STYLING
#def-arg("radius", `<number> or <length> or <array of <number> or <length>>`, default: "1", [The circle's radius. If an array is given an ellipse will be drawn where the first item is the `x` radius and the second item is the `y` radius. This is also a global style shared with arc!])
#show-module-fn(draw-module, "bezier")
#example({
import "draw.typ": *
let (a, b, c) = ((0, 0), (2, 0), (1, 1))
line(a, c, b, stroke: gray)
bezier(a, b, c)
let (a, b, c, d) = ((0, -1), (2, -1), (.5, -2), (1.5, 0))
line(a, c, d, b, stroke: gray)
bezier(a, b, c, d)
},
[```typc
cetz.canvas({
import cetz.draw: *
let (a, b, c) = ((0, 0), (2, 0), (1, 1))
line(a, c, b, stroke: gray)
bezier(a, b, c)
let (a, b, c, d) = ((0, -1), (2, -1), (.5, -2), (1.5, 0))
line(a, c, d, b, stroke: gray)
bezier(a, b, c, d)
})
```]
)
#show-module-fn(draw-module, "bezier-through")
#example({
import "draw.typ": *
let (a, b, c) = ((0, 0), (1, 1), (2, -1))
line(a, b, c, stroke: gray)
bezier-through(a, b, c, name: "b")
// Show calculated control points
line(a, "b.ctrl-1", "b.ctrl-2", c, stroke: gray)
},
[```typ
#cetz.canvas({
import cetz.draw: *
let (a, b, c) = ((0, 0), (1, 1), (2, -1))
line(a, b, c, stroke: gray)
bezier-through(a, b, c, name: "b")
// Show calculated control points
line(a, "b.ctrl-1", "b.ctrl-2", c, stroke: gray)
})
```]
)
#show-module-fn(draw-module, "content")
#example({
import "draw.typ": *
content((0,0), [Hello World!])
},
[```typc
cetz.canvas({
import cetz.draw: *
content((0,0), [Hello World!])
})
```]
)
#example({
import "draw.typ": *
let (a, b) = ((1,0), (3,1))
line(a, b)
content((a, .5, b), angle: b, [Text on a line], anchor: "bottom")
},
[```typc
cetz.canvas({
import cetz.draw: *
let (a, b) = ((1,0), (3,1))
line(a, b)
content((a, .5, b), angle: b, [Text on a line], anchor: "bottom")
})
```]
)
#STYLING
This draw element is not affected by fill or stroke styling.
#def-arg("padding", `<length>`, default: 0em, "")
#show-module-fn(draw-module, "grid")
#example({
import "draw.typ": *
grid((0,0), (3,3), help-lines: true)
},
[```typc
cetz.canvas({
import cetz.draw: *
  grid((0,0), (3,3), help-lines: true)
})
```]
)
#show-module-fn(draw-module, "mark")
#example({
import "draw.typ": *
line((1, 0), (1, 6), stroke: (paint: gray, dash: "dotted"))
set-style(mark: (fill: none))
line((0, 6), (1, 6), mark: (end: "<"))
line((0, 5), (1, 5), mark: (end: ">"))
set-style(mark: (fill: black))
line((0, 4), (1, 4), mark: (end: "<>"))
line((0, 3), (1, 3), mark: (end: "o"))
line((0, 2), (1, 2), mark: (end: "|"))
line((0, 1), (1, 1), mark: (end: "<"))
line((0, 0), (1, 0), mark: (end: ">"))
},
[```typc
cetz.canvas({
import cetz.draw: *
line((1, 0), (1, 6), stroke: (paint: gray, dash: "dotted"))
set-style(mark: (fill: none))
line((0, 6), (1, 6), mark: (end: "<"))
line((0, 5), (1, 5), mark: (end: ">"))
set-style(mark: (fill: black))
line((0, 4), (1, 4), mark: (end: "<>"))
line((0, 3), (1, 3), mark: (end: "o"))
line((0, 2), (1, 2), mark: (end: "|"))
line((0, 1), (1, 1), mark: (end: "<"))
line((0, 0), (1, 0), mark: (end: ">"))
})
```]
)
#STYLING
#def-arg("symbol", `<string>`, default: ">", [The type of mark to draw when using the `mark` function.])
#def-arg("start", `<string>`, [The type of mark to draw at the start of a path.])
#def-arg("end", `<string>`, [The type of mark to draw at the end of a path.])
#def-arg("size", `<number>`, default: "0.15", [The size of the marks.])
== Path Transformations <path-transform>
#show-module-fn(draw-module, "merge-path")
#example({
import "draw.typ": *
merge-path({
line((0, 0), (1, 0))
bezier((), (0, 0), (1,1), (0,1))
}, fill: white)
}, ```typc
// Merge two different paths into one
merge-path({
line((0, 0), (1, 0))
bezier((), (0, 0), (1,1), (0,1))
}, fill: white)
```)
#show-module-fn(draw-module, "group")
#example({
import "draw.typ": *
group({
stroke(5pt)
scale(.5); rotate(45deg)
rect((-1,-1),(1,1))
})
rect((-1,-1),(1,1))
}, ```typc
// Create group
group({
stroke(5pt)
scale(.5); rotate(45deg)
rect((-1,-1),(1,1))
})
rect((-1,-1),(1,1))
```)
#show-module-fn(draw-module, "anchor")
#example({
import lib.draw: *
group(name: "g", {
circle((0,0))
anchor("x", (.4,.1))
})
circle("g.x", radius: .1, fill: black)
},
```typc
group(name: "g", {
circle((0,0))
anchor("x", (.4,.1))
})
circle("g.x", radius: .1)
```)
#show-module-fn(draw-module, "copy-anchors")
#example({
import lib.draw: *
group(name: "g", {
rotate(45deg)
rect((0,0), (1,1), name: "r")
copy-anchors("r")
})
circle("g.top", radius: .1, fill: black)
},
```typc
group(name: "g", {
rotate(45deg)
rect((0,0), (1,1), name: "r")
copy-anchors("r")
})
circle("g.top", radius: .1, fill: black)
```)
#show-module-fn(draw-module, "place-anchors")
#example({
import lib.draw: *
place-anchors(name: "demo",
bezier((0,0), (3,0), (1,-1), (2,1)),
(name: "a", pos: .15),
(name: "mid", pos: .5))
circle("demo.a", radius: .1, fill: black)
circle("demo.mid", radius: .1, fill: black)
},
```typc
place-anchors(name: "demo",
bezier((0,0), (3,0), (1,-1), (2,1)),
(name: "a", pos: .15),
(name: "mid", pos: .5))
circle("demo.a", radius: .1, fill: black)
circle("demo.mid", radius: .1, fill: black)
```)
#show-module-fn(draw-module, "intersections")
#example({
import lib.draw: *
intersections(name: "demo", {
circle((0, 0))
bezier((0,0), (3,0), (1,-1), (2,1))
line((0,-1), (0,1))
rect((1.5,-1),(2.5,1))
})
for-each-anchor("demo", (name) => {
circle("demo." + name, radius: .1, fill: black)
})
},
```typc
intersections(name: "demo", {
circle((0, 0))
bezier((0,0), (3,0), (1,-1), (2,1))
line((0,-1), (0,1))
rect((1.5,-1),(2.5,1))
})
for-each-anchor("demo", (name) => {
circle("demo." + name, radius: .1, fill: black)
})
```)
== Transformations
All transformation functions push a transformation matrix onto the current transform stack.
To apply transformations scoped to a set of elements, use a `group(...)` object.
Transformation matrices get multiplied in the following order:
$ M_"world" = M_"world" dot M_"local" $
#show-module-fn(draw-module, "translate")
#example({
import "draw.typ": *
rect((0,0), (2,2))
translate((.5,.5,0))
rect((0,0), (1,1))
}, ```typc
// Outer rect
rect((0,0), (2,2))
// Inner rect
translate((.5,.5,0))
rect((0,0), (1,1))
```)
#show-module-fn(draw-module, "set-origin")
#example({
import "draw.typ": *
rect((0,0), (2,2), name: "r")
set-origin("r.above")
circle((0, 0), radius: .1)
}, ```typc
// Outer rect
rect((0,0), (2,2), name: "r")
// Move origin to top edge
set-origin("r.above")
circle((0, 0), radius: .1)
```)
#show-module-fn(draw-module, "set-viewport")
#example({
import "draw.typ": *
rect((0,0), (2,2))
set-viewport((0,0), (2,2), bounds: (10, 10))
circle((5,5))
}, ```typc
rect((0,0), (2,2))
set-viewport((0,0), (2,2), bounds: (10, 10))
circle((5,5))
```)
#show-module-fn(draw-module, "rotate")
#example({
import "draw.typ": *
rotate((z: 45deg))
rect((-1,-1), (1,1))
rotate((y: 80deg))
circle((0,0))
}, ```typc
// Rotate on z-axis
rotate((z: 45deg))
rect((-1,-1), (1,1))
// Rotate on y-axis
rotate((y: 80deg))
circle((0,0))
```)
#show-module-fn(draw-module, "scale")
#example({
import "draw.typ": *
scale((x: 1.8))
circle((0,0))
}, ```typc
// Scale x-axis
scale((x: 1.8))
circle((0,0))
```)
= Coordinate Systems <coordinate-systems>
A _coordinate_ is a position on the canvas on which the picture is drawn. They take the form of dictionaries and the following sub-sections define the key-value pairs for each system. Some systems have a more implicit form as an array of values, and `CeTZ` attempts to infer the system based on the element types.
== XYZ <coordinate-xyz>
Defines a point `x` units right, `y` units upward, and `z` units away.
#def-arg("x", [`<number>` or `<length>`], default: 0, [The number of units in the `x` direction.])
#def-arg("y", [`<number>` or `<length>`], default: 0, [The number of units in the `y` direction.])
#def-arg("z", [`<number>` or `<length>`], default: 0, [The number of units in the `z` direction.])
The implicit form can be given as an array of two or three `<number>` or `<length>`, as in `(x,y)` and `(x,y,z)`.
#example(
{
import "draw.typ": *
line((0, 0), (x: 1))
line((0, 0), (y: 1))
line((0, 0), (z: 1))
// Implicit form
line((0, -2), (1, -2))
line((0, -2), (0, -1, 0))
line((0, -2), (0, -2, 1))
},
[```typc
#import "@local/cetz:0.0.2"
#cetz.canvas({
import cetz.draw: *
line((0,0), (x: 1))
line((0,0), (y: 1))
line((0,0), (z: 1))
// Implicit form
line((0, -2), (1, -2))
line((0, -2), (0, -1, 0))
line((0, -2), (0, -2, 1))
})
```]
)
== Previous <previous>
Use this to reference the position of the previous coordinate passed to a draw function. This will never reference the position of a coordinate used to define another coordinate. It takes the form of an empty array `()`. The previous position initially will be `(0, 0, 0)`.
#example(
{
import "draw.typ": *
line((0,0), (1, 1))
circle(())
},
[```typc
#import "@local/cetz:0.0.2"
#cetz.canvas({
import cetz.draw: *
line((0,0), (1, 1))
// Draws a circle at (1,1)
circle(())
})
```]
)
== Relative <coordinate-relative>
Places the given coordinate relative to the previous coordinate. Or in other words, for the given coordinate, the previous coordinate will be used as the origin. Another coordinate can be given to act as the previous coordinate instead.
#def-arg("rel", `<coordinate>`, "The coordinate to be place relative to the previous coordinate.")
#def-arg("update", `<bool>`, default: true, "When false the previous position will not be updated.")
#def-arg("to", `<coordinate>`, default: (), "The coordinate to treat as the previous coordinate.")
In the example below, the red circle is placed one unit below the blue circle. If the blue circle was to be moved to a different position, the red circle will move with the blue circle to stay one unit below.
#example({
import "draw.typ": *
circle((0, 0), stroke: blue)
circle((rel: (0, -1)), stroke: red)
},
[```typc
#import "@local/cetz:0.0.2"
#cetz.canvas({
import cetz.draw: *
circle((0, 0), stroke: blue)
circle((rel: (0, -1)), stroke: red)
})
```]
)
== Polar
Defines a point a `radius` distance away from the origin at the given `angle`. An angle of zero degrees points to the right; an angle of 90 degrees points upward.
#def-arg("angle", `<angle>`, [The angle of the coordinate.])
#def-arg("radius", `<number> or <length> or <array of length or number>`, [The distance from the origin. An array can be given, in the form `(x, y)` to define the `x` and `y` radii of an ellipse instead of a circle.])
#example(
{
import "draw.typ": *
line((0,0), (angle: 30deg, radius: 1cm))
},
[```typc
#import "@local/cetz:0.0.2"
#cetz.canvas({
import cetz.draw: *
line((0,0), (angle: 30deg, radius: 1cm))
})
```]
)
The implicit form is an array of the angle then the radius `(angle, radius)` or `(angle, (x, y))`.
#example(
{
import "draw.typ": *
line((0,0), (30deg, 1), (60deg, 1), (90deg, 1), (120deg, 1), (150deg, 1), (180deg, 1),)
},
[```typc
#import "@local/cetz:0.0.2"
#cetz.canvas({
import cetz.draw: *
line((0,0), (30deg, 1), (60deg, 1),
(90deg, 1), (120deg, 1), (150deg, 1), (180deg, 1))
})
```]
)
== Barycentric
In the barycentric coordinate system a point is expressed as the linear combination of multiple vectors. The idea is that you specify vectors $v_1$, $v_2$, ..., $v_n$ and numbers $alpha_1$, $alpha_2$, ..., $alpha_n$. Then the barycentric coordinate specified by these vectors and numbers is $ (alpha_1 v_1 + alpha_2 v_2 + dots.c + alpha_n v_n)/(alpha_1 + alpha_2 + dots.c + alpha_n) $
#def-arg("bary", `<dictionary>`, [A dictionary where the key is a named element and the value is a `<float>`. The `center` anchor of the named element is used as $v$ and the value is used as $a$.])
#example(
vertical: true,
{
import "draw.typ": *
circle((90deg, 3), radius: 0, name: "content")
circle((210deg, 3), radius: 0, name: "structure")
circle((-30deg, 3), radius: 0, name: "form")
for (c, a) in (("content", "bottom"), ("structure", "top-right"), ("form", "top-left")) {
content(c, box(c + " oriented", inset: 5pt), anchor: a)
}
stroke(gray + 1.2pt)
line("content", "structure", "form", close: true)
for (c, s, f, cont) in (
(0.5, 0.1, 1, "PostScript"),
(1, 0, 0.4, "DVI"),
(0.5, 0.5, 1, "PDF"),
(0, 0.25, 1, "CSS"),
(0.5, 1, 0, "XML"),
(0.5, 1, 0.4, "HTML"),
(1, 0.2, 0.8, "LaTeX"),
(1, 0.6, 0.8, "TeX"),
(0.8, 0.8, 1, "Word"),
(1, 0.05, 0.05, "ASCII")
) {
content((bary: (content: c, structure: s, form: f)), cont)
}
},
[```typc
circle((90deg, 3), radius: 0, name: "content")
circle((210deg, 3), radius: 0, name: "structure")
circle((-30deg, 3), radius: 0, name: "form")
for (c, a) in (
("content", "bottom"),
("structure", "top-right"),
("form", "top-left")
) {
content(c, box(c + " oriented", inset: 5pt), anchor: a)
}
stroke(gray + 1.2pt)
line("content", "structure", "form", close: true)
for (c, s, f, cont) in (
(0.5, 0.1, 1, "PostScript"),
(1, 0, 0.4, "DVI"),
(0.5, 0.5, 1, "PDF"),
(0, 0.25, 1, "CSS"),
(0.5, 1, 0, "XML"),
(0.5, 1, 0.4, "HTML"),
(1, 0.2, 0.8, "LaTeX"),
(1, 0.6, 0.8, "TeX"),
(0.8, 0.8, 1, "Word"),
(1, 0.05, 0.05, "ASCII")
) {
content((bary: (content: c, structure: s, form: f)), cont)
}
```]
)
== Anchor
Defines a point relative to a named element using anchors, see @anchors.
#def-arg("name", `<string>`, [The name of the element that you wish to use to specify a coordinate.])
#def-arg("anchor", `<string>`, [An anchor of the element. If one is not given a default anchor will be used. On most elements this is `center` but it can be different.])
You can also use implicit syntax of a dot separated string in the form `"name.anchor"`.
#example(
{
import "draw.typ": *
line((0,0), (3,2), name: "line")
circle("line.end", name: "circle")
rect("line.start", "circle.left")
},
[```typc
import cetz.draw: *
line((0,0), (3,2), name: "line")
circle("line.end", name: "circle")
rect("line.start", "circle.left")
```]
)
== Tangent
This system allows you to compute the point that lies tangent to a shape. In detail, consider an element and a point. Now draw a straight line from the point so that it "touches" the element (more formally, so that it is _tangent_ to this element). The point where the line touches the shape is the point referred to by this coordinate system.
#def-arg("element", `<string>`, [The name of the element on whose border the tangent should lie.])
#def-arg("point", `<coordinate>`, [The point through which the tangent should go.])
#def-arg("solution", `<integer>`, [Which solution should be used if there are more than one.])
A special algorithm is needed in order to compute the tangent for a given shape. Currently it does this by assuming the distance between the center and top anchor (See @anchors) is the radius of a circle.
#example(
{
import "draw.typ": *
grid((0,0), (3,2), help-lines: true)
circle((3,2), name: "a", radius: 2pt)
circle((1,1), name: "c", radius: 0.75)
content("c", $ c $)
stroke(red)
line(
"a",
(element: "c", point: "a", solution: 1),
"c",
(element: "c", point: "a", solution: 2),
close: true
)
},
[```typ
grid((0,0), (3,2), help-lines: true)
circle((3,2), name: "a", radius: 2pt)
circle((1,1), name: "c", radius: 0.75)
content("c", $ c $)
stroke(red)
line(
"a",
(element: "c", point: "a", solution: 1),
"c",
(node: "c", point: "a", solution: 2),
close: true
)
```]
)
== Perpendicular
Can be used to find the intersection of a vertical line going through a point $p$ and a horizontal line going through some other point $q$.
#def-arg("horizontal", `<coordinate>`, [The coordinate through which the horizontal line passes.])
#def-arg("vertical", `<coordinate>`, [The coordinate through which the vertical line passes.])
You can use the implicit syntax of `(horizontal, "-|", vertical)` or `(vertical, "|-", horizontal)`.
#example(
{
import "draw.typ": *
content((30deg, 1), $ p_1 $, name: "p1")
content((75deg, 1), $ p_2 $, name: "p2")
line((-0.2, 0), (1.2, 0), name: "xline")
content("xline.end", $ q_1 $, anchor: "left")
line((2, -0.2), (2, 1.2), name: "yline")
content("yline.end", $ q_2 $, anchor: "bottom")
line("p1", (horizontal: (), vertical: "xline"))
line("p1", (vertical: (), horizontal: "yline"))
// Implicit form
line("p2", ((), "|-", "xline"))
line("p2", ((), "-|", "yline"))
},
[```typc
content((30deg, 1), $ p_1 $, name: "p1")
content((75deg, 1), $ p_2 $, name: "p2")
line((-0.2, 0), (1.2, 0), name: "xline")
content("xline.end", $ q_1 $, anchor: "left")
line((2, -0.2), (2, 1.2), name: "yline")
content("yline.end", $ q_2 $, anchor: "bottom")
line("p1", (horizontal: (), vertical: "xline"))
line("p1", (vertical: (), horizontal: "yline"))
// Implicit form
line("p2", ((), "|-", "xline"))
line("p2", ((), "-|", "yline"))
```]
)
== Interpolation
Use this to linearly interpolate between two coordinates `a` and `b` with a given factor `number`. If `number` is a `<length>`, the position will be the given distance away from `a` towards `b`.

An angle can also be given, with the following meaning: first consider the line from `a` to `b`, then rotate this line by `angle` around point `a`. The two endpoints of the rotated line are `a` and some point `c`; this point `c` is then used for the subsequent computation.
#def-arg("a", `<coordinate>`, [The coordinate to interpolate from.])
#def-arg("b", `<coordinate>`, [The coordinate to interpolate to.])
#def-arg("number", [`<number>` or `<length>`], [
The factor to interpolate by or the distance away from `a` towards `b`.
])
#def-arg("angle", `<angle>`, default: 0deg, [The angle by which the line from `a` to `b` is rotated around point `a` before interpolating.])
#def-arg("abs", `<bool>`, default: false, [
Interpret `number` as absolute distance, instead of a factor.
])
Can be used implicitly as an array in the form `(a, number, b)` or `(a, number, angle, b)`.
#example(
{
import "draw.typ": *
grid((0,0), (3,3), help-lines: true)
line((0,0), (2,2))
for i in (0, 0.2, 0.5, 0.8, 1, 1.5) {
content(((0,0), i, (2,2)),
box(fill: white, inset: 1pt, [#i]))
}
line((1,0), (3,2))
for i in (0, 0.5, 1, 2) {
content((a: (1,0), number: i, abs: true, b: (3,2)),
box(fill: white, inset: 1pt, text(red, [#i])))
}
},
[```typc
grid((0,0), (3,3), help-lines: true)
line((0,0), (2,2))
for i in (0, 0.2, 0.5, 0.8, 1, 1.5) { /* Relative distance */
content(((0,0), i, (2,2)),
box(fill: white, inset: 1pt, [#i]))
}
line((1,0), (3,2))
for i in (0, 0.5, 1, 2) { /* Absolute distance */
content((a: (1,0), number: i, abs: true, b: (3,2)),
box(fill: white, inset: 1pt, text(red, [#i])))
}
```]
)
#example(
{
import "draw.typ": *
grid((0,0), (3,3), help-lines: true)
line((1,0), (3,2))
line((1,0), ((1, 0), 1, 10deg, (3,2)))
fill(red)
stroke(none)
circle(((1, 0), 0.5, 10deg, (3, 2)), radius: 2pt)
},
[```typc
grid((0,0), (3,3), help-lines: true)
line((1,0), (3,2))
line((1,0), ((1, 0), 1, 10deg, (3,2)))
fill(red)
stroke(none)
circle(((1, 0), 0.5, 10deg, (3, 2)), radius: 2pt)
```]
)
#example(
{
import "draw.typ": *
grid((0,0), (4,4), help-lines: true)
fill(black)
stroke(none)
let n = 16
for i in range(0, n+1) {
circle(((2,2), i / 8, i * 22.5deg, (3,2)), radius: 2pt)
}
},
[```typc
grid((0,0), (4,4), help-lines: true)
fill(black)
stroke(none)
let n = 16
for i in range(0, n+1) {
circle(((2,2), i / 8, i * 22.5deg, (3,2)), radius: 2pt)
}
```]
)
You can even chain them together!
#example(
{
import "draw.typ": *
grid((0,0), (3, 2), help-lines: true)
line((0,0), (3,2))
stroke(red)
line(((0,0), 0.3, (3,2)), (3,0))
fill(red)
stroke(none)
circle(
(
// a
(((0, 0), 0.3, (3, 2))),
0.7,
(3,0)
),
radius: 2pt
)
},
  [```typc
grid((0,0), (3, 2), help-lines: true)
line((0,0), (3,2))
stroke(red)
line(((0,0), 0.3, (3,2)), (3,0))
fill(red)
stroke(none)
circle(
(
// a
(((0, 0), 0.3, (3, 2))),
0.7,
(3,0)
),
radius: 2pt
)
```]
)
#example(
{
import "draw.typ": *
grid((0,0), (3, 2), help-lines: true)
line((1,0), (3,2))
for (l, c) in ((0cm, "0cm"), (1cm, "1cm"), (15mm, "15mm")) {
content(((1,0), l, (3,2)), $ #c $)
}
},
[```typc
grid((0,0), (3, 2), help-lines: true)
line((1,0), (3,2))
for (l, c) in ((0cm, "0cm"), (1cm, "1cm"), (15mm, "15mm")) {
content(((1,0), l, (3,2)), $ #c $)
}
```]
)
== Function
An array where the first element is a function and the rest are coordinates will cause the function to be called with the resolved coordinates. The resolved coordinates have the same format as the implicit form of the 3-D XYZ coordinate system, @coordinate-xyz.
The example below shows how to use this system to create an offset from an anchor; however, this could easily be replaced by a relative coordinate with the `to` argument set, @coordinate-relative.
#example(
{
import "draw.typ": *
circle((0, 0), name: "c")
fill(red)
circle((v => vector.add(v, (0, -1)), "c.right"), radius: 0.3)
},
[```typc
circle((0, 0), name: "c")
fill(red)
circle((v => cetz.vector.add(v, (0, -1)), "c.right"), radius: 0.3)
```]
)
= Utility
#show-module-fn(draw-module, "for-each-anchor")
#example({
import "draw.typ": *
rect((0, 0), (2,2), name: "my-rect")
for-each-anchor("my-rect", (name) => {
if not name in ("above", "below", "default") {
content((), box(inset: 1pt, fill: white, text(8pt, [#name])),
angle: -45deg)
}
})
}, ```typc
// Label nodes anchors
rect((0, 0), (2,2), name: "my-rect")
for-each-anchor("my-rect", (name) => {
if not name in ("above", "below", "default") {
content((), box(inset: 1pt, fill: white, text(8pt, [#name])),
angle: -45deg)
}
})
```)
= Libraries
== Tree
#let tree-module = parse-module("../../tree.typ", name: "Tree")
With the tree library, CeTZ provides a simple tree layout algorithm.
#show-module(tree-module, show-module-name: false)
#example({
import "draw.typ": *
import "tree.typ"
let data = ([Root], ([A], [A-A], [A-B]), ([B], [B-A]))
tree.tree(data, content: (padding: .1), line: (stroke: blue))
}, ```typc
import "tree.typ"
let data = ([Root], ([A], [A-A], [A-B]), ([B], [B-A]))
tree.tree(data, content: (padding: .1), line: (stroke: blue))
```)
#example({
import "draw.typ": *
import "tree.typ"
let data = ([\*], ([A], [A-A], [A-B]), ([B], [B-A]))
tree.tree(data, content: (padding: .1), direction: "right",
mark: (end: ">", fill: none),
draw-node: (node, ..) => {
circle((), radius: .35, fill: blue, stroke: none)
content((), text(white, [#node.content]))
},
draw-edge: (from, to, ..) => {
let (a, b) = (from + ".center",
to + ".center")
line((a: a, b: b, abs: true, number: .35),
(a: b, b: a, abs: true, number: .35))
})
}, ```typc
import "tree.typ"
let data = ([\*], ([A], [A-A], [A-B]), ([B], [B-A]))
tree.tree(data, content: (padding: .1), direction: "right",
mark: (end: ">", fill: none),
draw-node: (node, ..) => {
circle((), radius: .35, fill: blue, stroke: none)
content((), text(white, [#node.content]))
},
draw-edge: (from, to, ..) => {
let (a, b) = (from + ".center",
to + ".center")
    line((a: a, b: b, abs: true, number: .35),
         (a: b, b: a, abs: true, number: .35))
})
```)
=== Node <tree-node>
A tree node is an array of nodes. The first array item represents the
current node, all following items are direct children of that node.
The node itself can be of type `content` or `dictionary` with a key `content`.
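
For example, a minimal tree mixing both node forms might look like this (a sketch, reusing the `tree.tree` call from the examples above):

```typc
// The first item of each array is the current node (here a
// dictionary with a `content` key); the remaining items are
// its direct children.
let data = (
  (content: [Root]),
  ([A], [A-1], [A-2]),  // a child subtree with two leaves
  [B],                  // a leaf child given as plain content
)
tree.tree(data, content: (padding: .1))
```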
== Plot
#let plot-module = parse-module("../../lib/plot.typ", name: "Plot")
The `plot` library of CeTZ allows plotting 2D data as line charts.
#show-module(plot-module, show-module-name: false)
=== Examples
#example({
import "draw.typ": *
plot.plot(size: (3,2), x-tick-step: 180, y-tick-step: 1,
x-unit: $degree$, {
plot.add(domain: (0, 360), x => calc.sin(x * 1deg))
})
}, ```typc
plot.plot(size: (3,2), x-tick-step: 180, y-tick-step: 1,
x-unit: $degree$, {
plot.add(domain: (0, 360), x => calc.sin(x * 1deg))
})
```)
#example({
import "draw.typ": *
plot.plot(size: (3,2), x-tick-step: 180, y-tick-step: 1,
x-unit: $degree$, y-max: .5, {
plot.add(domain: (0, 360), x => calc.sin(x * 1deg))
plot.add(domain: (0, 360), x => calc.cos(x * 1deg),
samples: 10, mark: "x", mark-style: (stroke: blue))
})
}, ```typc
plot.plot(size: (3,2), x-tick-step: 180, y-tick-step: 1,
x-unit: $degree$, y-max: .5, {
plot.add(domain: (0, 360), x => calc.sin(x * 1deg))
plot.add(domain: (0, 360), x => calc.cos(x * 1deg),
    samples: 10, mark: "x", mark-style: (stroke: blue))
})
```)
#example({
import "draw.typ": *
// Axes can be styled.
// Set the tick length to .05:
set-style(axes: (tick: (length: .05)))
// Plot something
plot.plot(size: (3,3), axis-style: "left",
y-tick-step: .5, x-tick-step: 1, {
for i in range(0, 3) {
plot.add(domain: (-4, 2),
x => calc.exp(-(calc.pow(x + i, 2))),
fill: true, style: palette.tango)
}
})
}, ```typc
// Axes can be styled!
// Set the tick length to .05:
set-style(axes: (tick: (length: .05)))
// Plot something
plot.plot(size: (3,3), axis-style: "left",
  y-tick-step: .5, x-tick-step: 1, {
for i in range(0, 3) {
plot.add(domain: (-4, 2),
x => calc.exp(-(calc.pow(x + i, 2))),
fill: true, style: palette.tango)
}
})
```)
=== Styling <plot.style>
The following style keys can be used (in addition to the standard keys)
to style plot axes. Individual axes can be styled differently by
using their axis name as key below the `axes` root.
```typc
set-style(axes: ( /* Style for all axes */ ))
set-style(axes: (bottom: ( /* Style axis "bottom" */)))
```
Axis names to be used for styling:
- School-Book and Left style:
- `x`: X-Axis
- `y`: Y-Axis
- Scientific style:
- `left`: Y-Axis
- `right`: Y2-Axis
- `bottom`: X-Axis
- `top`: X2-Axis
==== Default `scientific` Style
#raw(repr(axes.default-style))
==== Default `school-book` Style
#raw(repr(axes.default-style-schoolbook))
== Chart
#let chart-module = parse-module("../../lib/chart.typ", name: "Chart")
With the `chart` library it is easy to draw charts.
Supported charts are:
- `barchart(..)`: A chart with horizontally growing bars
  - `mode: "basic"` (default): One bar per data row
  - `mode: "clustered"`: Multiple grouped bars per data row
  - `mode: "stacked"`: Multiple stacked bars per data row
  - `mode: "stacked100"`: Multiple stacked bars relative to the sum of a data row
#show-module(chart-module, show-module-name: false)
=== Examples -- Bar Chart <barchart-examples>
==== Basic
#example(vertical: true, {
let data = (("A", 10), ("B", 20), ("C", 13))
chart.barchart(size: (10, auto), x-tick-step: 10, data)
}, ```typc
let data = (("A", 10), ("B", 20), ("C", 13))
chart.barchart(size: (10, auto), x-tick-step: 10, data)
```)
==== Clustered
#example(vertical: true, {
let data = (("A", 10, 12, 22), ("B", 20, 1, 7), ("C", 13, 8, 9))
chart.barchart(size: (10, auto), mode: "clustered",
x-tick-step: 10, value-key: (..range(1, 4)), data)
}, ```typc
let data = (("A", 10, 12, 22), ("B", 20, 1, 7), ("C", 13, 8, 9))
chart.barchart(size: (10, auto), mode: "clustered",
x-tick-step: 10, value-key: (..range(1, 4)), data)
```)
==== Stacked
#example(vertical: true, {
let data = (("A", 10, 12, 22), ("B", 20, 1, 7), ("C", 13, 8, 9))
chart.barchart(size: (10, auto), mode: "stacked",
x-tick-step: 10, value-key: (..range(1, 4)), data)
}, ```typc
let data = (("A", 10, 12, 22), ("B", 20, 1, 7), ("C", 13, 8, 9))
chart.barchart(size: (10, auto), mode: "stacked",
x-tick-step: 10, value-key: (..range(1, 4)), data)
```)
=== Examples -- Column Chart <columnchart-examples>
==== Basic, Clustered and Stacked
#example(vertical: true, {
let data1 = (("A", 10), ("B", 20), ("C", 13))
let data2 = (("A", 10, 12, 22), ("B", 20, 1, 7), ("C", 13, 8, 9))
draw.group(name: "chart", {
draw.anchor("default", (0,0))
chart.columnchart(size: (auto, 4), y-tick-step: 10, data1)
})
draw.set-origin("chart.bottom-right")
draw.group(name: "chart", anchor: "bottom-left", {
draw.anchor("default", (0,0))
chart.columnchart(size: (auto, 4),
mode: "clustered",
value-key: (1,2,3),
y-tick-step: 10, data2)
})
draw.set-origin("chart.bottom-right")
draw.group(name: "chart", anchor: "bottom-left", {
draw.anchor("default", (0,0))
chart.columnchart(size: (auto, 4),
mode: "stacked",
value-key: (1,2,3),
y-tick-step: 10, data2)
})
},
```typc
// Left
let data = (("A", 10), ("B", 20), ("C", 13))
chart.columnchart(size: (auto, 4), y-tick-step: 10, data)
// Center
let data = (("A", 10, 12, 22), ("B", 20, 1, 7), ("C", 13, 8, 9))
chart.columnchart(size: (auto, 4),
  mode: "clustered", value-key: (1,2,3), y-tick-step: 10, data)
// Right
let data = (("A", 10, 12, 22), ("B", 20, 1, 7), ("C", 13, 8, 9))
chart.columnchart(size: (auto, 4),
  mode: "stacked", value-key: (1,2,3), y-tick-step: 10, data)
```)
=== Styling
Charts share their axis system with plots and therefore can be
styled the same way, see @plot.style.
==== Default `barchart` Style
#raw(repr(chart.barchart-default-style))
==== Default `columnchart` Style
#raw(repr(chart.columnchart-default-style))
== Palette <palette>
#let palette-module = parse-module("../../lib/palette.typ", name: "Palette")
A palette is a function that returns a style for an index.
The palette library provides some predefined palettes.
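
A palette can thus be queried directly, for example (a sketch using the predefined `tango` palette listed below; the `"len"` query is the same one used by `show-palette` further down):

```typc
// A palette maps an integer index to a style dictionary,
// and the special argument "len" returns the number of styles.
let style = palette.tango(0)      // style for index 0
let count = palette.tango("len")  // how many distinct styles exist
rect((0, 0), (1, .5), ..style)    // spread the style onto an element
```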
#show-module(palette-module, show-module-name: false)
#let show-palette(p) = {
canvas({
import draw: *
for i in range(p("len")) {
if calc.rem(i, 10) == 0 { move-to((rel: (0, -.5))) }
rect((), (rel: (1,.5)), name: "r", ..p(i))
move-to("r.bottom-right")
}
})
}
=== List of predefined palettes
- `gray` #show-palette(palette.gray)
- `red` #show-palette(palette.red)
- `blue` #show-palette(palette.blue)
- `rainbow` #show-palette(palette.rainbow)
- `tango-light` #show-palette(palette.tango-light)
- `tango` #show-palette(palette.tango)
- `tango-dark` #show-palette(palette.tango-dark)
|
https://github.com/tingerrr/chiral-thesis-fhe | https://raw.githubusercontent.com/tingerrr/chiral-thesis-fhe/main/src/core/component/acknowledgement.typ | typst | #let make-acknowledgement(
body: lorem(100),
) = {
heading(level: 1)[Danksagung]
body
}
|
|
https://github.com/Woodman3/modern-ysu-thesis | https://raw.githubusercontent.com/Woodman3/modern-ysu-thesis/main/pages/bachelor-decl-page.typ | typst | MIT License | #import "../utils/indent.typ": indent
#import "../utils/style.typ": 字号, 字体
// 本科生声明页
#let bachelor-decl-page(
anonymous: false,
twoside: false,
fonts: (:),
info: (:),
) = {
// 0. 如果需要匿名则短路返回
if anonymous {
return
}
// 1. 默认参数
fonts = 字体 + fonts
info = (
title: ("基于 Typst 的", "南京大学学位论文"),
) + info
// 2. 对参数进行处理
// 2.1 如果是字符串,则使用换行符将标题分隔为列表
if type(info.title) == str {
info.title = info.title.split("\n")
}
set text(font: fonts.宋体, size: 字号.小四)
set par(leading: 12pt,first-line-indent: 2em)
// 3. 正式渲染
pagebreak(weak: true, to: "odd" )
v(1em)
align(center,text(font: fonts.黑体, size: 字号.三号, "学位论文原创性声明"))
v(字号.三号)
[
郑重声明:所呈交的学位论文《#info.title.sum()》,是本人在导师的指导下,独立进行研究取得的成果。除文中已经注明引用的内容外,本论文不包括他人或集体已经发表或撰写过的作品成果。对本文的研究做出贡献的个人和集体,均已在文中以明确方式标明。本人完全意识到本声明的法律后果,并承诺因本声明而产生的法律结果由本人承担。
]
v(1em)
[学位论文作者签名:~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~日期:~~~~年~~~~月~~~~日]
v(3em)
align(center,text(font: fonts.黑体, size: 字号.三号, "学位论文版权使用授权书"))
v(字号.小三)
[
本学位论文作者完全了解学校有关保留、使用学位论文的规定,同意学校保留并向国家有关部门或机构送交论文的复印件和电子版,允许论文被查阅和借阅。本人授权燕山大学将本学位论文的全部或部分内容编入有关数据库进行检索,可以采用影印、缩印或扫描等复制手段保存和汇编本学位论文。
]
v(1em)
// typst 中第一行不会缩进,所以需要手动缩进
[#h(8em) 保 密☐,在\_\_年解密后适用本授权书。]
linebreak()
[#indent 本学位论文属于]
linebreak()
[#indent #h(8em) 不保密☐。]
linebreak()
[#indent (请在以上相应方框内打“√”)]
v(1em)
[学位论文作者签名:~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~日期:~~~~年~~~~月~~~~日]
v(1em)
[指导教师签名:~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~日期:~~~~年~~~~月~~~~日]
v(1em)
pagebreak(weak: false, to: "even" )
} |
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/text/emoji_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// This should form a three-member family.
👩👩👦
// This should form a pride flag.
🏳️🌈
// Skin tone modifier should be applied.
👍🏿
// This should be a 1 in a box.
1️⃣
|
https://github.com/crd2333/Astro_typst_notebook | https://raw.githubusercontent.com/crd2333/Astro_typst_notebook/main/src/docs/note.typ | typst | #set page(margin: 1em, height: auto)
#show link: it => text(fill: blue)[#it]
#import "@preview/cheq:0.1.0": checklist
Home page of notebook.
$arrow.t$ Click the "here" above to see more examples
A web notebook based on #link("https://github.com/OverflowCat/astro-typst/tree/master")[astro-typst], supporting both md and typ formats
#show: checklist.with(fill: luma(95%), stroke: blue, radius: .2em)
- Some of Typst's native features are supported by astro-typst
- [x] Import packages in Typst Universe
- [x] import / include / read files or resources
- [x] Use system fonts
- [x] Selectable, clickable text layer
- [x] Set scale
- [x] Static SVGs without JavaScript
- [ ] Responsive SVGs
- [ ] Add font files or blobs
#let typst = {
text(font: "Linux Libertine", weight: "semibold", fill: eastern)[typst]
}
= Typst Notes
== #typst: Compose paper faster
$ cases(
dot(x) = A x + B u = mat(delim: "[", 0, 0, dots.h.c, 0, - a_n; 1, 0, dots.h.c, 0, - a_(n - 1); 0, 1, dots.h.c, 0, - a_(n - 2); dots.v, dots.v, dots.down, dots.v, dots.v; 0, 0, dots.h.c, 1, - a_1) x + mat(delim: "[", b_n; b_(n - 1); b_(n - 2); dots.v; b_1) u,
y = C x = mat(delim: "[", 0, 0, dots.h.c, 1) x
) $
#set text(font: ("Garamond", "Noto Serif CJK SC"))
#import "@preview/tablem:0.1.0": tablem
#tablem[
| *English* | *German* | *Chinese* | *Japanese* |
| --------- | -------- | --------- | ---------- |
| Cat | Katze | 猫 | 猫 |
| Fish | Fisch | 鱼 | 魚 |
] |
|
https://github.com/HiiGHoVuTi/requin | https://raw.githubusercontent.com/HiiGHoVuTi/requin/main/graph/croissant.typ | typst | #import "../lib.typ": *
#show heading: heading_fct
#import "@preview/gloss-awe:0.0.5": gls
#show figure.where(kind: "jkrb_glossary"): it => {it.body}
#import "@preview/syntree:0.2.0": tree
_Ce sujet est adapté de l'épreuve d'informatique A 2014._
#correct[
Dans ce sujet, il faut faire _très attention_ à la rigueur dans les récurrences. On ne peut pas faire une récurrence sur tout et n'importe quoi, et on ne fait pas dire à l'hypothèse de récurrence ce qu'on veut.
]
=== Introduction
Soit $cal(A)$ la plus petite #gls(entry: "Classe")[classe] contenant
#align(center, grid(
columns: (1fr, 2fr),
[- $"E "$], [- $"N "(g, x, d)$ pour $x in ZZ$ et $g,d in cal(A)$]
))
On définit la taille et la hauteur de ces arbres
#align(center, grid(
columns: (1fr, 2fr),
[- $|"E "| = 1$], [- $|"N "(g, x, d)| = 1 + |g| + |d|$\ \ ],
[- $h("E ") = 0$], [- $h("N "(g, x, d))=1+max(h(g), h(d))$]
))
#question(0)[Proposer un type `OCaml` décrivant $cal(A)$.]
#correct[
```ocaml
type arbre = E | N of arbre * int * arbre
```
]
#question(1)[Implémenter les fonctions `taille` et `hauteur`.]
#correct[
```ocaml
let rec taille = function
| E -> 1
| N (g, _, d) -> 1 + taille g + taille d
let rec hauteur = function
| E -> 0
  | N (g, _, d) -> 1 + max (hauteur g) (hauteur d)
```
]
Un arbre de $cal(A)$ est un arbre croissant si et seulement si sa racine est son minimum et que ses deux fils sont croissants.
_Par exemple_, les arbres suivants sont croissants
#align(center, grid(
columns: (1fr, 1fr),
tree($1$, tree($2$, $4$, []), tree($3$, $$, $5$)),
tree($1$, $3$, tree($2$, $3$, $$))
))
#question(0)[Implémenter une fonction `minimum : arbre -> int option`.]
#correct[
```ocaml
let minimum = function
| E -> None
  | N (_, x, _) -> Some x
```
]
#question(2)[Montrer qu'il existe exactement $n!$ arbres croissants à $n$ noeuds (à étiquettes distinctes).]
#correct[
On raisonne par récurrence forte sur $n in NN$.
_Initialisation_: Si $n = 0$ ou $n = 1$, c'est évident.
_Hérédité_: On suppose que si $k <= n$, alors il existe $k!$ arbres croissants à $k$ noeuds.
Pour construire un arbre croissant à $n+1$ noeuds, on peut
- Choisir $k in [|0, n|]$ la taille du fils gauche (_$n+1$ choix_)
- Choisir $g$ la partie des noeuds qui iront au fils gauche (_$binom(n, k)$ choix_)
  - Construire le fils gauche à $k$ noeuds (_HR: $k!$ choix_)
  - Construire le fils droit à $n-k$ noeuds (_HR: $(n-k)!$ choix_)
  Au total, $(n+1) binom(n, k) k! (n-k)! = (n+1) dot n! = (n+1)!$ choix ont été faits.
Il existe $(n+1)!$ arbres croissants à $n+1$ noeuds.
]
=== Fusion
On propose un algorithme de fusion des arbres croissants
```ocaml
let rec fusion t1 t2 = match t1, t2 with
| E, x -> x
| x, E -> x
| N (g1, x1, d1), N (g2, x2, d2) ->
if x1 <= x2
then N (fusion d1 t2, x1, g1)
else N (fusion d2 t1, x2, g2)
```
#question(0)[Donner la fusion des arbres suivants
#align(center, grid(
columns: (1fr, 1fr),
tree($1$, $2$, $4$),
tree($3$, $5$, $6$)
))
]
#correct[
#align(center, tree(
$1$, tree($3$, tree($4$, $6$, $$), $5$), $2$
))
]
#question(1)[Proposer une fonction `ajoute : int -> arbre -> arbre` conservant la propriété d'arbre croissant.]
#correct[
```ocaml
let ajoute x = fusion (N (E, x, E))
```
Il est sous-entendu ici qu'il faut rappeler que `fusion` conserve la propriété d'arbre croissant.
On peut le faire sans difficulté par récurrence _sur la somme des tailles des arbres_.
]
#question(1)[Proposer une fonction `supprime_minimum : arbre -> arbre` conservant la propriété d'arbre croissant.]
#correct[
```ocaml
let supprime_minimum a = match a with
| E -> E
| N (g, _, d) -> fusion g d
```
]
On définit $alpha_1 (x) = "N "("E ", x, "E ")$ puis $alpha_(n+1) (x_1...x_(n+1)) := mono("fusion")(alpha_n (x_1...x_n), alpha_1 (x_(n+1)))$
#question(2)[Trouver $x_1...x_n in NN$ tels que $h(alpha_n (x_1...x_n)) >= n/2$.]
#correct[
Une suite décroissante d'éléments $x_k := n-k$ répond à la question.
En effet, l'appel #grid(columns: (1fr, 1fr, 1fr), tree(`fusion`, $alpha_k (x_1...x_k)$, $x_(k+1)$), "renvoie l'arbre", tree($x_(k+1)$, $alpha_k (x_1 ... x_k)$, $$))
On obtient alors le graphe peigne, qui convient.
]
#question(3)[Calculer $h(alpha_n (1...n))$. _Justifier soigneusement la réponse_.]
#correct[
Il faut ici faire une récurrence *rigoureuse* sur la taille de l'arbre.
On notera $alpha T + beta$ l'arbre $T$ ré-étiqueté par $x arrow.bar alpha x + beta$.
On pose $cal(H)(n)$ l'hypothèse suivante
#grid(columns: (8fr, 1fr, 10fr, 1fr, 8fr, 1fr, 10fr),
tree($alpha_(2n+2)(1...2n+2)$), $=$, tree($1$, $2 alpha_n (1..n) + 1$, $2 alpha_(n+1)(1..n)$),
"et",
tree($alpha_(2n+1) (1...2n+1)$), $=$, tree($1$, $2 alpha_n (1...n) + 1$, $2 alpha_n (1...n)$),
)
_Initialisation_: Il suffit d'étudier les arbres à $1$ et $2$ éléments.
_Hérédité_: On suppose $cal(H)(k)$ vraie pour $k <= n$.
On applique l'algorithme à la main sur les objets proprement définis par $cal(H)$ et on montre les deux hypothèses.
]
=== Analyse
On dit d'un noeud $" N"(g, x, d)$ qu'il est _lourd_ si $|g| < |d|$. On dit qu'il est _léger_ sinon.
On pose $Phi$ la fonction qui à un arbre associe son nombre de noeuds lourds, qu'on appellera _potentiel_.
#question(1)[Implémenter `potentiel : arbre -> int`.]
#correct[
```ocaml
let potentiel a0 =
let rec aux arbre = match arbre with
| E -> (* potentiel *) 0, (* taille *) 1
| N (g, _, d) ->
let pg, tg = aux g in
let pd, td = aux d in
(* potentiel *) pg + pd + (if td > tg then 1 else 0),
(* taille *) 1 + tg + td
in fst (aux a0)
```
]
On appelle _coût de fusion_ de deux arbres $t_1$ et $t_2$ le nombre d'appels récursifs effectués pendant le calcul de `fusion(t1, t2)`.
On note ce coût $C(t_1, t_2)$.
#question(2)[Soient $t_1,t_2$ des arbres croissants et $t := mono("fusion") t_1 med t_2$.
Montrer que $ C(t_1, t_2) <= Phi(t_1) + Phi(t_2) - Phi(t) + 2 (log(|t_1|)+log(|t_2|)) $
]
#correct[
Il s'agit de faire une récurrence sur la somme de la taille des arbres.
// TODO(Juliette): améliorer cette correction.
Il n'y a pas d'astuce particulière, de la persévérance suffit.
]
#question(1)[Montrer que le coût de $alpha(x_1...x_n)$ est en $cal(O)(n log n).$]
#correct[
Il s'agit simplement d'un télescopage, en remarquant que la différence de potentiel totale est en $cal(O)(|t|)$.
]
#question(1)[Exhiber un cas $x_1...x_n$ où une des `fusion`s a un coût supérieur ou égal à $n / 2$.]
#correct[
On peut réutiliser la suite décroissante, en ajoutant en dernier un élément maximal. On montre que l'on a bien des basculements à chaque niveau.
]
\
Soit $t_0$ un arbre de taille $2n+1$.
On pose récursivement $t_(k+1) = mono("fusion")(g_k, d_k)$ avec $t_k =: "N "(g_k, x, d_k)$. On note que $t_n = "E "$.
#question(1)[Montrer que cette construction est réalisable en temps $cal(O)(n log n)$.]
#correct[
Il s'agit comme à la question $11$ d'un télescopage, mais cette fois-ci plus subtil. Il suffit de mener à bien les calculs.
]
=== Applications
#correct[
Cette partie n'est pas encore corrigée car relou, vous pouvez chercher "tri par tas".
]
#question(1)[En utilisant la structure d'arbre croissant, définir `tri : int list -> int list`. _Une complexité temporelle en $cal(O)(n log n)$ est attendue et sera soigneusement justifiée_.]
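
Une esquisse (non officielle) de réponse, qui s'appuie sur les fonctions `ajoute`, `minimum : arbre -> int option` et `supprime_minimum` définies plus haut — la complexité en $cal(O)(n log n)$ découle de l'analyse amortie de la partie précédente :

```ocaml
(* Esquisse : tri par tas fondé sur les arbres croissants.
   On insère les n éléments puis on extrait n fois le minimum ;
   chaque opération a un coût amorti en O(log n). *)
let tri (lst : int list) : int list =
  let tas = List.fold_left (fun t x -> ajoute x t) E lst in
  let rec extraire t acc = match minimum t with
    | None -> List.rev acc
    | Some x -> extraire (supprime_minimum t) (x :: acc)
  in
  extraire tas []
```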
Soient $x_1...x_n in ZZ$ avec $n = 2^k$. On définit $(T_i^j)_(i <= k)^(j <= 2^(k-i))$ une famille d'arbres tels que
$ cases(T^j_0 := "N "("E ", x_j, "E "), T_(i+1)^j := "fusion" T_i^(2j) med med T_i^(2j+1)) $
#question(2)[Montrer que le temps total de la construction des arbres $T$ est en $cal(O)(n)$.]
#question(1)[En déduire une fonction `construire : int array -> arbre` de complexité temporelle en $cal(O)(n)$.]
#question(1)[Peut-on relâcher la contrainte $n = 2^k$ ? _Justifier_.]
|
|
https://github.com/dariasc/notebook | https://raw.githubusercontent.com/dariasc/notebook/master/template.typ | typst | #let project(body) = {
set document(title: "Notebook")
set page(
paper: "a4",
flipped: true,
margin: ( left: 0.75cm, right: 0.75cm, bottom: 0.75cm, top: 1.25cm ),
header-ascent: 40%,
header: locate(loc => {
let headings = query(
selector(heading.where(level: 2)).after(loc),
loc,
)
let this = headings
.filter((it) => it.location().position().page == loc.position().page)
.map((it) => it.body);
return align(right, text(this.join(", "), size: 9pt, weight: "semibold"))
}),
background: [
#place(
top + left,
line(
start: (34%, 5%),
end: (34%, 97%),
stroke: 0.5pt + gray
),
)
#place(
top + right,
line(
start: (-34%, 5%),
end: (-34%, 97%),
stroke: 0.5pt + gray
),
)
]
)
set text(font: "Linux Libertine", lang: "en")
set par(justify: true)
show: columns.with(3, gutter: 2%)
show heading.where(level: 1): it => [
#set block(above: 0em)
#smallcaps[
#it.body
]
]
show heading.where(level: 2): it => [
#set text(weight: "regular")
#it.body
]
show raw.where(block: true): it => {
set text(7pt)
set par(justify: false)
it
}
set raw(theme: "theme.xml")
body
}
#let title() = {
return {
block(width: 100%, height: 2.5em, {
set text(size: 1.25em)
align(bottom)[
= Team Notebook
]
place(top + right)[
#image("logo.svg", height: 32pt)
]
})
line(length: 100%, stroke: 0.5pt + gray)
}
}
#let extract_code(contents) = {
return contents.split("- */\n").at(-1)
}
#let extract_metadata(contents) = {
return toml.decode(contents.split("- */\n").at(0).split("/* -\n").at(-1))
}
#let insert(filename) = {
let contents = read(filename)
let metadata = extract_metadata(contents)
return block[
#set text(9pt)
#block(breakable: false)[
== #metadata.name
#linebreak()
#for (key, value) in metadata.info {
text(key + ": ", weight: "bold")
eval(value, mode: "markup")
linebreak()
}
]
#raw(extract_code(contents), lang: "cpp", block: true)
#line(length: 100%, stroke: 0.5pt + gray)
]
}
|
|
https://github.com/mrknorman/evolving_attention_thesis | https://raw.githubusercontent.com/mrknorman/evolving_attention_thesis/main/02_gravitation/02_gravitational_waves.typ | typst | #set page(numbering: "1", number-align: center)
#set math.equation(numbering: it => {[2.#it]})
#counter(math.equation).update(0)
// #set math.mat(delim: "[")
#import "../notation.typ": vectorn, uvectorn, dvectorn, udvectorn, matrixn
= Gravitational Waves <gravitational-waves-sec>
Since time immemorial, humanity has gazed at the stars. With wonder rooted deep in their minds, they imagined strange and divine mechanisms in order to try and make sense of what they saw. Over the millennia, the vast skies have revealed much about their workings, and with ever-improving tools we have come much closer to understanding their mysteries, but there is still much to be uncovered. It is unclear how deep the truth lies. Perhaps we have but only scratched at the surface. The depths are so vast we simply do not know.
Almost all of that knowledge, all of that understanding and academia, has been built upon the observation of just a single type of particle. Until very recently, the only information we had about the world above us came from light, and although the humble photon has taught us a great deal about the scope of our universe, the discovery of new messengers promises pristine, untapped wells of science. It has only been in the last century that we have achieved the technological prowess to detect any other extraterrestrial sources of data except that which fell to us as meteors. We have brought rocks home from the moon @moon_rocks. We study data sent back from probes shot out across the solar system and even ones that have peaked just beyond the Sun's mighty sphere of influence @voyager. We have seen neutrinos, tiny, almost massless particles that pass through entire planets more easily than birds through the air @neutrino, and single particles with the energy of a Wimbledon serve @cosmic_rays. Most recently of all, we have seen the skin of space itself quiver --- gravitational waves, the newest frontier in astronomy @first_detction.
Practical gravitational-wave astronomy is still in its infancy; compared to the other fields of astrophysics, it has barely left the cradle, with the first confirmed gravitational-wave detection occurring in 2015 @first_detction. Although the field has progressed remarkably quickly since its inception, there is still a lot of work to be done --- a lot of groundwork to be performed whilst we uncover the best ways to deal with the influx of new data that we are presented with. It seems likely, assuming both funding and human civilization prevail, that work undertaken now will be but the first bricks in a great wall of discovery. New gravitational-wave observatories that are even today being planned will increase our sensitive range by orders of magnitude @einstein_telescope @cosmic_explorer @LISA. With any luck, they will open our ears further to previously undiscovered wonders.
This chapter will introduce a small part of the science of gravitational waves; it will not be an extensive review as many of the particularities are not especially relevant to the majority of the content of this thesis. Instead, this section aims to act as a brief overview to give context to the purpose behind the data-analysis methods presented throughout. We will cover the theoretical underpinning of gravitational waves, and perform a glancing tour through the experiments used to detect them.
== Gravity
Gravity is one of the four fundamental forces of nature, the other three being the electromagnetic force and the strong and weak nuclear forces @four_forces. It is, in some ways, the black sheep of the interactions, as it is the only one not explained by the standard model of particle physics, which is, by some measures, the most accurate theory of physics ever described @standard_model. Gravity is also orders of magnitude weaker than the other three fundamental forces @weak_gravity (see @force-coupling-constants). This weakness adds to its mystery by ensuring that only extremely sensitive instruments can detect the tiny fluctuations caused by some of the most violent collisions of mass in the universe. Luckily, gravity has its own extremely accurate descriptive theory @testing_gr. It has a storied history, which, if you are unfamiliar, is worth skimming for context.
#pagebreak()
#figure(
table(
columns: (auto, auto),
inset: 10pt,
align: horizon,
[*Force*], [*Coupling Constant*],
[Strong], [1],
[Electromagnetic], [$frac(1, 137)$],
[Weak], [ $10^(-6)$ ],
[Gravitational], [ $10^(-29)$ ],
),
caption: [Dimensionless coupling constants for the four fundamental forces of nature. These coupling constants are dimensionless values normalised to the strongest of the forces, the strong nuclear force. The coupling constants illustrate the weakness of the gravitational force compared to the other three fundamental forces of nature @coupling_constants, as they determine the strength of their respective forces. The specifics of why this is the case are not discussed here, as that is outside the scope of this work, but even without a deep understanding, it is clear that gravity is by far the weakest of the forces.]
) <force-coupling-constants>
=== Ye Old Times
In the beginning, men threw rocks at each other and were entirely unsurprised when they hit the floor. Over time, people became more and more confused as to why this was the case. Many philosophers proposed many reasons why one direction should be preferred over all others when it came to the unrestrained motion of an object. For a long time, there was much confusion about the relationship between mass, density, buoyancy, and the nature and position of various celestial objects. Sometime after we had decided that the Earth was not, in fact, at the centre of the universe and that objects fell at the same speed irrespective of their densities, came the time of Sir Isaac Newton, and along with him arrived what many would argue was one of the most influential theories in the history of physics.
The idea of gravity as a concept had been around for many thousands of years by this point @gravity_history, but what Newton did was formalise the rules by which objects behaved under the action of gravity. Newton's universal law of gravitation states that all massive objects in the universe attract all others @principia_mathematica, acting upon each other whether surrounded by a medium or not. Gravity appeared to ignore all boundaries and was described in a simple formula that seemed to correctly predict everything from the motion of the planets (mostly) to the fall of an apple
$ F = G frac(m_1 m_2, Delta r^2). $ <newtons-law-of-universal-gravitation>
where $F$ is the scalar force along the direction between the two masses, $G$ is the gravitational constant equal to #box($6.67430(15) times 10^(−11)$ + h(1.5pt) + $m^3 "kg"^(-1) s^(-2)$) @gravitational_constant, $m_1$ is the mass of the first object, $m_2$ is the mass of the second object, and $Delta r$ is the scalar distance between the two objects. In vector form, this becomes,
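As a quick numerical sanity check, @newtons-law-of-universal-gravitation can be evaluated directly. The sketch below is purely illustrative and uses rough, assumed values for the Earth's mass and radius:

```python
# Minimal sketch: Newton's law of universal gravitation evaluated for an
# apple at the Earth's surface. Masses and radius are rough assumed values.

G = 6.67430e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    """Scalar attractive force between two masses separated by distance r."""
    return G * m1 * m2 / r**2

earth_mass = 5.972e24   # kg (assumed)
apple_mass = 0.1        # kg (assumed)
earth_radius = 6.371e6  # m (assumed)

# Roughly 0.98 N, i.e. the familiar ~9.8 m/s^2 acting on a 100 g apple.
print(gravitational_force(earth_mass, apple_mass, earth_radius))
```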
$ vectorn(F) = - G frac(m_1 m_2, |dvectorn(r)|^2) udvectorn("r") = - G frac( m_1 m_2, |dvectorn(r)|^3) dvectorn(r), $ <newtons-law-of-universal-gravitation-vec>
where $vectorn(F)$ is the force vector exerted on body 2 by the gravitational effect of body 1, $dvectorn(r)$ is the displacement vector between bodies 1 and 2, and $udvectorn(r)$ is the unit direction vector between bodies 1 and 2.
Newton's law of universal gravitation describes the force every massive object in the universe experiences because of every other --- an equal and opposite attraction proportional to the product of their two masses @principia_mathematica; see @newtons_law. Though we now know this equation to be an imperfect description of reality, it still holds accurate enough for many applications to this day.
#figure(
image("newtons_law.png", width: 30%),
caption: [An illustration of Newton's law of universal gravitation, as described by @newtons-law-of-universal-gravitation. Two particles, here distinguished by their unique masses, $m_1$ and $m_2$, are separated by a distance, $r$. According to Newton's law, they are pulled toward each other by the force of gravity acting on each object $F$ @principia_mathematica, each being pulled directly toward the other by a force that is equal and opposite to its partner's.],
) <newtons_law>
It was also at this time that a fierce debate raged over the nature of time and space. Newton proposed a universal time that ticked whether in the presence of a clock or not, and a static, ever-present grid of space that never wavered nor wandered. Both space and time would continue to exist whether the rest of the universe was there or not @newton_vs_leibniz. Leibniz, on the other hand, argued that space and time were little more than the relations between the positions of objects and their velocities. By his reasoning, if there were no objects, there would be nothing to measure, and there would be no space. If there were no events, there would be nothing to time, and there would be no time; see @absolute_time_and_space. At the time, they did not come to a resolution, and to this day we do not have a definite answer to this question @what_is_spacetime, but as we will see, each saw some aspect of the truth.
#figure(
grid(
columns: 1,
rows: 2,
gutter: 1em,
[ #align(center)[#image("absoloute_space.png", width: 100%)] ],
[ #align(center)[#image("relative_space.png", width: 100%)] ],
),
caption: [An illustration of two competing historical views on the nature of space and time @newton_vs_leibniz. _Upper:_ Newton's vision of absolute universal time and absolute space, wherein time moves forward at a constant and uniform rate across the universe and space is immobile and uniform. In this model, both time and space can exist independently of objects within, even in an entirely empty universe. _Lower:_ Leibniz's view wherein time and space did not and could not exist independently of the objects used to measure them. Within this model, space is simply a measure of the relative distances between objects, and time is a measure of the relative motion of objects as their relative positions change. In this model, it makes little sense to talk of a universe without objects since time and space do not exist without objects with relative positions and velocities.],
) <absolute_time_and_space>
For a good few centuries, Newton's law of universal gravitation stood as our fundamental understanding of gravity, with its impressive descriptive and predictive power @newton_dominance. As our measurements of the solar system became more precise, however, a major discrepancy was noted, one that Newton's law failed to describe. The planet Mercury, so close to the sun and so heavily influenced by its gravity, was found to be behaving ever so slightly strangely @mercury_precession. Under Newton's laws, the orbits of the planets were described precisely --- ellipses plotted through space. The influence of other gravitational bodies, such as the other planets, would cause these ellipses to precess, the longitude of their perihelion advancing with time. The vast majority of the precession of Mercury was accounted for by applying Newton's laws to the solar system as a whole. However, a small amount, only the barest fractions of a degree per century, remained a mystery. For a long time, it was thought there was an extra hidden planet in the inner solar system, but none was ever found. If this extra precession were an accurate measurement, the difference was enough to state with confidence that Newton's universal law of gravitation was not a complete description of gravity.
=== Special Relativity <special-relativity-sec>
By the start of the 20#super("th") century, two more thorns in Newton's side had been revealed. Experiments failed to detect a change in the speed of light irrespective of the Earth's motion through space @absoloute_light --- if light behaved as we might expect from ordinary matter, then its measured speed should change depending on whether we are moving toward its source, and hence in opposition to its own direction of motion, or against and in unison with its direction of motion. That is not what was observed. Light appeared to move at the same speed no matter how fast you were going when you measured it, whether you measured your velocity relative to its source or any other point in the universe. There was no explanation for this behaviour under Newtonian mechanics. The second tantalising contradiction arrived when attempting to apply Maxwell's hugely successful equations describing electromagnetism, which proved incompatible with Newtonian mechanics, again in large part because of the requirement for a constant speed of light in all reference frames @maxwells. This failing of Newtonian mechanics was noted by <NAME> and <NAME>, the former of whom developed many of the ideas and mathematics later used by Einstein @lorentz_relativity.
In 1905, Einstein built upon Lorentz's work @lorentz_relativity and proposed his theory of special relativity as an extension beyond standard Newtonian mechanics in a successful attempt to rectify the previously mentioned shortcomings @special_relativity. The initial presentation of special relativity was built upon two revolutionary principles. Firstly, the speed of light was the same in all reference frames, meaning that no matter how fast you were travelling relative to another body, the speed of light would, to you (and to all observers), appear the same as it always has --- light would move away from you as it always had done, unaffected by your apparent velocity. Secondly, and closely related to the first principle, special relativity states that the laws of physics will act identically in all inertial reference frames. If you are isolated from the outside world by some impenetrable shell, there is no experiment you can perform to determine that you are moving relative to another body --- the only situations between which you could tell the difference were between different non-inertial reference frames (and between a non-inertial reference frame and an inertial one), wherein the shell surrounding you was accelerating at different rates. By introducing these postulates, Einstein explained the observations of light-speed measurements and allowed for the consistent application of Maxwell's laws.
What special relativity implied was that there was no one true "stationary" reference frame upon which the universe was built @special_relativity, seemingly disproving Newton's ideal of an absolute universe. All inertial frames were created equal. This seemingly innocent fact had strange consequences for our understanding of the nature of space and time. In order for the speed of light to be absolute, space and time must necessarily be relative -- were they not, then the cause-and-effect nature of the universe would break down.
We can visualize the problem in a thought experiment, as Einstein often liked to do @light_clock. Imagine an observer standing in the carriage of a train moving at a constant velocity relative to a nearby platform. The observer watches as a light beam bounces back and forth between two mirrors, one on the ceiling, and the other on the floor. From the perspective of the observer, the time taken for light to transit this fixed vertical distance is also fixed, and determined by the speed of light and the distance between the mirrors.
A second observer stands on a nearby platform and looks into the moving train as it passes (it has big windows) @light_clock. As they watch the light beam bounce between the two mirrors, they see that, from their reference frame, the beam must take a diagonal path between the mirrors as the train moves forward. This diagonal path is longer than the vertical path observed in the carriage's reference frame. If we take special relativity to be true, the speed of light must be constant for both observers. However, in one reference frame, the light must travel a greater distance than in the other. It cannot be the case that the time taken for the photon to travel between the mirrors is the same for the observer on the carriage and the observer on the platform --- their measurements of time must differ in order to preserve the supremacy of the speed of light. The observer on the platform would indeed see time passing on the train more slowly than time on the apparently "stationary" platform around them --- this effect is known as *time dilation*, and it has since been experimentally verified @time_dilation_ref. We can quantify this effect using
$ Delta t' = (Delta t) / sqrt(1 - v^2/c^2) = gamma(v) Delta t, $ <time-dilation-eq>
where $Delta t'$ is the measured duration of an event (the time it takes light to move between the two mirrors) in an inertial reference frame (the platform) relative to which the clock moves with a velocity, $v$, $Delta t$ is the measured duration of the same event in the clock's own inertial reference frame (the train carriage), $c$ is the speed of light in a vacuum, #box($299,792,458$ + h(1.5pt) + $ m s^(-1)$), and $gamma$ is the Lorentz factor given by
$ gamma(v) = 1 / sqrt(1 - v^2/c^2). $ <lorentz-factor>
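A minimal numerical sketch of @lorentz-factor and @time-dilation-eq, using an assumed relative velocity of half the speed of light:

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def lorentz_factor(v):
    """gamma(v) = 1 / sqrt(1 - v^2 / c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def dilated_time(proper_time, v):
    """Duration of an event as measured by an observer who sees the
    clock move at velocity v (time dilation)."""
    return lorentz_factor(v) * proper_time

# One second of proper time on a train moving at 0.5c appears to last
# about 1.155 s from the platform.
v = 0.5 * C
print(lorentz_factor(v))
print(dilated_time(1.0, v))
```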
An illustration of this effect can be seen in @light_clock_diagram.
#figure(
grid(
columns: 1,
rows: 2,
gutter: 1em,
[ #align(center)[#image("train_observer.png", width: 52%)] ],
[ #align(center)[#image("platform_observer.png", width: 100%)] ],
),
caption: [An illustration of the light clock thought experiment. The light clock thought experiment is a scenario that can be imagined in order to illustrate the apparent contradiction that arises from a universally constant speed of light. In order to rectify this contradiction, the concepts of time dilation and length contraction are introduced, fundamentally changing our understanding of the nature of time and space. Two observers stand in inertial reference frames. From special relativity, we know all inertial reference frames are equal, and the laws of physics, including the speed of light, should look identical @light_clock @special_relativity. _Upper:_ The observer on the train measures the time it takes a single photon of light to bounce from a mirror at the bottom of the train, to a mirror at the top, and back again. The distance travelled by the light beam is two times the height of the train, $H$, which gives $2H$. The time it takes a particle to transit a given distance, $D$, is given by $Delta t = D / v$. Since light always travels at $c$, we know the measured photon transit time in this reference frame will be $Delta t = 2H / c$. _Lower:_ A second observer, standing on a platform, watches as the train passes at a constant velocity, $v$. Through a large window in the carriage, they observe the first observer performing their experiment. However, from the second observer's reference frame, the light now has to move on a diagonal path created by the motion of the train; we can calculate its new transit length, $2D$, using Pythagoras's theorem. Each of the two transits will, by definition, take half of the total transit time measured by the platform observer, $1/2 Delta t'$, and in this time the train will have moved $1/2 Delta t' v$; this gives us $D = sqrt(H^2 + (1/2 Delta t' v)^2)$.
If we substitute this new distance into the original equation to calculate the duration of the transit, $Delta t' = 2D / c$, we get $Delta t' = 2 sqrt(H^2 + (1/2 Delta t' v)^2) / c$. This means that the platform observer measures a longer transit duration. Since the bouncing light beam is a type of clock, a light clock, and all functioning clocks in a given inertial reference will tick at a consistent rate, we can conclude that time is passing more slowly for the observer on the train when observed from the platform's reference frame. In reality, these effects would only become noticeable to a human if the velocities involved were significant fractions of the speed of light. In everyday life, the effects of special relativity are negligible, which was probably why it took so long for anyone to notice.]
) <light_clock_diagram>
Similarly, if we orient the mirrors horizontally, so that the light travels along the length of the carriage, a different relativistic effect becomes apparent @light_clock. The observer on the platform, observing the light's path as longer due to the train's motion, must reconcile this with the constant speed of light. This reconciliation leads to the conclusion that the train, and the distance between the mirrors, are shorter in the direction of motion from the platform observer's perspective. This phenomenon, where objects in motion are contracted in the direction of their movement, is known as *length contraction* and is described by
$ L' = L sqrt(1 - v^2/c^2) = L / gamma(v), $ <length-contraction-eq>
where $L'$ is the length of an object when measured from an inertial reference frame that has a velocity, $v$, relative to the inertial frame of the measured object, $L$ is the "proper length" of the object when its length is measured in the object's inertial frame, $c$ is the speed of light in a vacuum, #box($299,792,458$ + h(1.5pt) + $ m s^(-1)$), and $gamma$ is the Lorentz factor given by @lorentz-factor.
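The same Lorentz factor governs @length-contraction-eq. A small sketch, with an assumed 100 m train, shows why the effect goes unnoticed at everyday speeds:

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def contracted_length(proper_length, v):
    """L' = L * sqrt(1 - v^2 / c^2): the length of an object measured
    from a frame in which it moves at velocity v."""
    return proper_length * math.sqrt(1.0 - (v / C) ** 2)

# A train with a proper length of 100 m (assumed value), observed at an
# everyday speed and at relativistic speeds.
for v in (30.0, 0.1 * C, 0.9 * C):
    print(f"v = {v:9.3e} m/s  ->  L' = {contracted_length(100.0, v):.9f} m")
```

At 30 m/s the contraction is far below any measurable scale, while at $0.9c$ the train is less than half its proper length.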
Together, length contraction and time dilation shatter Newton's notions of absolute time and space @special_relativity. It should be remembered, however, that neither the carriage observer nor the platform observer can be said to be in the true stationary reference frame. The observer standing in the station is in the same inertial reference frame as the rest of the Earth, but that doesn't make it any more valid than any other. If the observer at the station had a similar setup of mirrors and light beams, and the train occupant looked out at them, the train occupant would observe the same phenomenon. To the passenger, time outside the train appears slowed, and the station shorter than it ought to be. This seems to be a paradox, often known as the twin paradox. What happens if the train later stopped and the two observers were to meet? Who would have experienced more time? It is a common misconception that acceleration must be introduced in order to reconcile the two clocks; however, even staying within the regime of special relativity, we can observe an asymmetry between the two observers @twin_paradox. In order for the two observers to meet in a shared reference frame, one of the observers, in this case, the train passenger, must change between reference frames, even if that change is instantaneous. This asymmetry allows us to solve the paradox, but the explanation is a little complex so it will not be discussed here.
In order to transfer values between two coordinate frames, we may use what is referred to as a Lorentz transform, the simplest of which involves moving from the coordinates of one inertial reference frame to another moving at a velocity, $v$, relative to the first. From @time-dilation-eq and @length-contraction-eq, we can see that this transform is given by
$ t' = gamma(t - frac(v x, c^2) ) , $ <lorentz_t_eq>
$ x' = gamma(x - v t), $ <lorentz_x_eq>
$ y' = y, $
and
$ z' = z . $
Note that, as expected, there are no changes in the $y$ and $z$ directions.
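The transform above can be written as a small helper function. The event coordinates and boost velocity below are arbitrary, assumed values:

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def lorentz_boost(t, x, y, z, v):
    """Coordinates of an event in a frame moving at velocity v along the
    x-axis relative to the original frame."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    t_prime = gamma * (t - v * x / C**2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime, y, z  # y and z are unchanged

# An event at t = 2 s, x = one light-second, seen from a frame moving at
# 0.6c (gamma = 1.25): it occurs at t' = 1.75 s, x' = -0.25 light-seconds.
print(lorentz_boost(2.0, C, 0.0, 0.0, 0.6 * C))
```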
Although the world presented by special relativity may at first seem counter-intuitive and hard to believe, there have been numerous experiments verifying its predictions @special_relativity_tests. Most famously, the Global Positioning System (GPS) network of satellites would be unable to operate without accounting for the time dilation induced by the satellites' relative velocities @gps_relativity, due to the extremely precise time measurements required.
=== Minkowski Spacetime <minkowski-sec>
Although the notions of independent and absolute time and space were dislodged, it is still possible to describe the universe illuminated by special relativity as sitting within an all-pervasive 4D geometry. Unlike Newton's world, however, space and time are inseparably linked into one joint four-dimensional continuum wherein motion can affect the relative measurements of time and space. We call this geometry *spacetime*. As we have seen in @special-relativity-sec, time intervals between events within spacetime are not fixed, and observers don't necessarily agree on their order. Events must be described by a combination of temporal and spatial coordinates, and because all inertial reference frames are equal, all inertial coordinate systems (ways of assigning reference values to points in spacetime) are also equally valid.
Special relativity deals with flat spacetime. This type of spacetime is known as *Minkowski space* @gravitation; see @flat for an illustration. Although it is non-Euclidean, and its geometry can sometimes be counterintuitive to people used to travelling at pedestrian velocities, it is still isotropic and homogeneous; it looks identical, no matter where in it you are, or what velocity you are travelling at relative to any other point or object.
We can fully describe a given geometry by constructing a metric that can return the distance between any two points in that geometry. In standard 3D Euclidean geometry, which is the most instinctively familiar from everyday life, a point can be represented by a three-vector comprised of $x$, $y$, and $z$ components,
$ vectorn(r) = mat(x; y; z;) . $<euclidean_point>
The scalar distance, $Delta r$, between two points each described by @euclidean_point is given by the Euclidean distance formula --- the expansion of Pythagoras' theorem from two dimensions into three,
$ Delta r^2 = ||dvectorn(r)||^2 = Delta x^2 + Delta y^2 + Delta z^2 $ <euclidean_formula>
where $Delta r$ is the scalar distance between two points separated by $Delta x$, $Delta y$, and $Delta z$ in the $x$, $y$, and $z$ dimensions respectively, and $dvectorn(r)$ is the displacement vector between the two points. This relationship assumes a flat geometry and does not consider the role that time plays in special relativity. In the case of Euclidean geometry, the metric that we have omitted in @euclidean_formula is the $3 times 3$ Euclidean metric
$ matrixn(g) = mat(
1, 0, 0;
0, 1, 0;
0, 0, 1;
). $ <identity_metric>
We can use @identity_metric and @euclidean_formula to construct a more complete expression, which can be adjusted for different geometries,
$ Delta r^2 = ||dvectorn(r)||^2 = dvectorn(r)^bold(T) matrixn(g) dvectorn(r) = mat(Delta x, Delta y, Delta z;) mat(
1, 0, 0;
0, 1, 0;
0, 0, 1;
) mat(
Delta x;
Delta y;
Delta z;
) = Delta x^2 + Delta y^2 + Delta z^2 . $
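This quadratic form, $dvectorn(r)^bold(T) matrixn(g) dvectorn(r)$, translates directly into code. The sketch below is illustrative, with an assumed example displacement:

```python
def metric_distance_sq(dr, g):
    """Evaluate dr^T g dr for a displacement vector dr and metric matrix g."""
    n = len(dr)
    return sum(dr[i] * g[i][j] * dr[j] for i in range(n) for j in range(n))

# With the Euclidean (identity) metric, this reduces to Pythagoras' theorem.
identity3 = [[1, 0, 0],
             [0, 1, 0],
             [0, 0, 1]]
dr = [3.0, 4.0, 12.0]  # assumed example displacement
print(metric_distance_sq(dr, identity3))  # 169.0, i.e. a distance of 13
```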
In this case, the inclusion of this metric does not change the calculation of the scalar distance between two points. However, as we have seen in @special-relativity-sec, in order to represent the spacetime described by special relativity, we must include the time dimension, $t$, which does not behave identically to the other three dimensions. The *Minkowski metric* allows us to explore beyond standard 3D Euclidean geometry by including a 4#super("th") dimension, time
$ matrixn(eta) = mat(
-1, 0, 0, 0;
0, 1, 0, 0;
0, 0, 1, 0;
0, 0, 0, 1;
). $ <minkowski_metric>
The Minkowski metric, @minkowski_metric, describes a flat spacetime; we can use it to compute the interval between two events in flat Minkowski space, whose locations can be described with four-positions (four-vectors), #vectorn("s"), of the following form:
$ vectorn(s) = mat(c t; vectorn(r);) = mat(c t; x; y; z;) $ <four-vector>
where $vectorn(s)$, is the four-position of an event in spacetime, $c$ is the speed of light in a vacuum, #box($299,792,458$ + h(1.5pt) + $ m s^(-1)$), $t$ is the time component of the four-position, and #vectorn("r") is a position in 3D Euclidean space. We set $s_0$ equal to $c t$ rather than just $t$ to ensure that each element of the four-position is in the same units.
From @minkowski_metric and @four-vector, it follows that the spacetime interval between two events in Minkowski spacetime, separated by the displacement four-vector $dvectorn(s)$, can be computed with
$ Delta s^2 = dvectorn(s)^bold(T) matrixn(eta) dvectorn(s)= - c^2 Delta t^2 + Delta x^2 + Delta y^2 + Delta z^2 . $ <spacetime-interval>
Even though two observers may disagree on the individual values of the elements of the vector describing the four-displacement, #dvectorn("s"), between the two events, $Delta s$, known as the spacetime interval, is invariant and has a value that all observers will agree on, independent of their reference frame. Using @spacetime-interval, we can describe the relationship of events and interactions in a flat Minkowski spacetime.
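The same quadratic form evaluated with the Minkowski metric gives the interval of @spacetime-interval. In the sketch below the separations are assumed examples; with this sign convention, a negative $Delta s^2$ indicates a timelike separation, a positive one a spacelike separation, and zero a lightlike one:

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

ETA = [[-1, 0, 0, 0],   # Minkowski metric, signature (-, +, +, +)
       [ 0, 1, 0, 0],
       [ 0, 0, 1, 0],
       [ 0, 0, 0, 1]]

def interval_sq(dt, dx, dy, dz):
    """Spacetime interval squared, ds^T eta ds, between two events."""
    ds = [C * dt, dx, dy, dz]
    return sum(ds[i] * ETA[i][j] * ds[j] for i in range(4) for j in range(4))

print(interval_sq(1.0, 0.0, 0.0, 0.0))  # negative: timelike separation
print(interval_sq(0.0, 1.0, 0.0, 0.0))  # positive: spacelike separation
print(interval_sq(1.0, C, 0.0, 0.0))    # zero: lightlike separation
```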
We can show that the Minkowski metric is consistent with length contraction and time dilation, described by @time-dilation-eq and @length-contraction-eq respectively, by showing that the spacetime interval, $Delta s$, is equal in two different coordinate frames that disagree on the values of $Delta t$ and $Delta x$.
In a second, boosted coordinate frame moving with a velocity $v$ (along the $x$-axis alone) relative to our initial frame, @spacetime-interval becomes
$ Delta s^2 = - c^2 Delta t'^2 + Delta x'^2 + Delta y'^2 + Delta z'^2 . $ <spacetime-interval-shifted>
We can substitute @lorentz_t_eq and @lorentz_x_eq into @spacetime-interval-shifted, and show that $Delta s^2$ remains the same. Substituting we get
$ Delta s^2 = - c^2 gamma ^2 ( Delta t - frac(v Delta x, c^2))^2 + gamma^2 (Delta x - v Delta t)^2 + Delta y^2 + Delta z^2 . $
We can also substitute our definition for the Lorentz factor, $gamma$, given by @lorentz-factor to get
$ Delta s^2 = - c^2 (1 / sqrt(1 - v^2/c^2))^2 ( Delta t - frac(v Delta x, c^2))^2 + (1 / sqrt(1 - v^2/c^2))^2 (Delta x - v Delta t)^2 + Delta y^2 + Delta z^2 . $
Expanding the squares gives us
$ Delta s^2 = frac( - c^2 ( Delta t - frac(v Delta x, c^2)) ( Delta t - frac(v Delta x, c^2)) , 1 - v^2/c^2) + frac((Delta x - v Delta t)(Delta x - v Delta t), 1 - v^2/c^2) + Delta y^2 + Delta z^2 . $
We can then multiply out the brackets to get
$ Delta s^2 = frac( - c^2 Delta t^2 + 2 c^2 Delta t frac( v Delta x, c^2) - c^2 frac(v^2 Delta x^2, c^4), 1 - v^2/c^2) + frac(Delta x^2 - 2 v Delta t Delta x + v^2 Delta t^2, 1 - v^2/c^2) + Delta y^2 + Delta z^2 , $
and we can cancel this further to get
$ Delta s^2 = frac( - c^2 Delta t^2 + 2 v Delta t Delta x - frac(v^2 Delta x^2, c^2), 1 - v^2/c^2) + frac(Delta x^2 - 2 v Delta t Delta x + v^2 Delta t^2, 1 - v^2/c^2) + Delta y^2 + Delta z^2 . $
Next, we can merge the first two terms under their common denominator, $1 - v^2/c^2$, to get
$ Delta s^2 = frac( - c^2 Delta t^2 + 2 v Delta t Delta x - frac(v^2 Delta x^2, c^2) + Delta x^2 - 2 v Delta t Delta x + v^2 Delta t^2, 1 - v^2/c^2) + Delta y^2 + Delta z^2 . $
This reduces to
$ Delta s^2 = frac( - c^2 Delta t^2 - frac(v^2 Delta x^2, c^2) + Delta x^2 + v^2 Delta t^2, 1 - v^2/c^2) + Delta y^2 + Delta z^2 . $
We can then rewrite the numerator in terms of $Delta t^2$ and $Delta x^2$, since we are aiming to reduce it to this form. This gives us
$ Delta s^2 = frac( -(c^2 - v^2) Delta t^2 + (1 - frac(v^2, c^2)) Delta x^2, 1 - v^2/c^2) + Delta y^2 + Delta z^2 . $
We can then split the common denominator into two fractions, giving us
$ Delta s^2 = frac( -(c^2 - v^2) Delta t^2, 1 - v^2/c^2) + frac((1 - frac(v^2, c^2)) Delta x^2, 1 - v^2/c^2) + Delta y^2 + Delta z^2 . $
The coefficient of the $Delta x^2$ term cancels to leave us with only $Delta x^2$, and we can factor $-c^2$ out of the $Delta t^2$ term's coefficient to give us
$ Delta s^2 = frac( - c^2 (1 - v^2 / c^2 ) Delta t^2, 1 - v^2/c^2) + Delta x^2 + Delta y^2 + Delta z^2 . $
This cancels and returns us to our original expression, @spacetime-interval,
$ Delta s^2 = - c^2 Delta t^2 + Delta x^2 + Delta y^2 + Delta z^2 . $
This shows that, after performing a Lorentz transform by a constant velocity, $v$, along the $x$-axis, the spacetime interval, $Delta s$, remains constant, i.e.,
$ Delta s^2 = - c^2 Delta t^2 + Delta x^2 + Delta y^2 + Delta z^2 = - c^2 Delta t'^2 + Delta x'^2 + Delta y'^2 + Delta z'^2 . $
This demonstrates that performing a Lorentz transform between two inertial reference frames is consistent with the formulation of Minkowski spacetime described by @minkowski_metric.
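The invariance shown algebraically above can also be checked numerically. The event separation and boost velocities in the sketch below are arbitrary, assumed values:

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def interval_sq(dt, dx, dy, dz):
    """Spacetime interval squared in flat Minkowski spacetime."""
    return -(C * dt) ** 2 + dx**2 + dy**2 + dz**2

def boost(dt, dx, v):
    """Lorentz transform of the t and x separations for a boost of
    velocity v along the x-axis (y and z are unchanged)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (dt - v * dx / C**2), gamma * (dx - v * dt)

dt, dx, dy, dz = 3.0, 2.0 * C, 5.0, 7.0  # assumed example separation
s2 = interval_sq(dt, dx, dy, dz)
for v in (0.1 * C, 0.5 * C, 0.99 * C):
    dt_p, dx_p = boost(dt, dx, v)
    assert math.isclose(s2, interval_sq(dt_p, dx_p, dy, dz), rel_tol=1e-9)
print("interval invariant under all tested boosts")
```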
When dealing with the gravitational effects of spacetime, we are often considering point-like particles or spherical masses; for this reason, it is very often convenient to work with spherical coordinates with the basis $t$, $r$, $theta$, and $phi$ rather than the Euclidean coordinate system we have been using so far. In spherical coordinates @spacetime-interval becomes
$ Delta s^2 = -c^2 Delta t^2 + Delta r^2 + r^2 Delta Omega^2 $ <minkowski-interval-spherical>
where
$ Delta Omega^2 = Delta theta^2 + sin^2 theta Delta phi^2 $
is the standard metric used on the surface of a two-sphere --- a 2D spherical surface embedded in a 3D space. @minkowski-interval-spherical will become a valuable reference when we move to examine curved spacetime under the influence of gravity.
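Since @minkowski-interval-spherical describes infinitesimal separations, we can sketch a numerical consistency check against the Cartesian form using small displacements. The units (with $c = 1$), the chosen point, and the step size are all assumptions of this sketch:

```python
import math

# Sketch in units where c = 1 (an assumption of this example).

def cartesian_interval_sq(dt, dx, dy, dz):
    return -dt**2 + dx**2 + dy**2 + dz**2

def spherical_interval_sq(dt, dr, dtheta, dphi, r, theta):
    d_omega_sq = dtheta**2 + math.sin(theta) ** 2 * dphi**2
    return -dt**2 + dr**2 + r**2 * d_omega_sq

def to_cartesian(r, theta, phi):
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

# Assumed example point and a tiny displacement of every coordinate.
r, theta, phi = 2.0, 1.0, 0.5
eps = 1e-6
x0 = to_cartesian(r, theta, phi)
x1 = to_cartesian(r + eps, theta + eps, phi + eps)

cart = cartesian_interval_sq(eps, *(b - a for a, b in zip(x0, x1)))
sph = spherical_interval_sq(eps, eps, eps, eps, r, theta)
# The two forms agree to first order for small separations.
assert math.isclose(cart, sph, rel_tol=1e-4)
print(cart, sph)
```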
As alluded to, special relativity, and Minkowski spacetime, only deal with inertial reference frames; hence, special relativity is a "special" case of a larger, cohesive theory --- that theory, developed by Einstein in the following years, is general relativity @gravitation.
=== General Relativity <general-relativity-sec>
Although special relativity and Minkowski space successfully reconcile the nature of space and time with the observed constancy of the speed of light, and allow Maxwell's equations to operate as predicted in all inertial reference frames, they still only provide an incomplete picture of the universe. Specifically, they do not explain how to reconcile non-inertial reference frames and coordinate systems, which constitute a significant portion of what we observe in the universe. A more general theory was needed to explain all facets of reality: *general relativity*.
Einstein realized that by introducing deformations to the otherwise flat Minkowski spacetime described by special relativity you could induce accelerations in particles within this spacetime without invoking any forces @gravitation. Rather than being attracted by some gravitational "force", the particles continue to behave as they always had, following their natural paths or *geodesics*. A geodesic is the shortest path between two points in a given geometry; in Euclidean geometry, all geodesics are straight lines, but in other geometries this is not necessarily the case. Thus, depending on the shape of the spacetime they exist within, particles can accelerate with respect to each other whilst remaining within inertial frames. This is the reason that it is often stated that gravity is "not a force" --- gravitational attraction is a result of the geometry of the spacetime in which objects exist, rather than because of any fundamental attraction caused by something with the traditional properties of a force.
It should be noted that although under general relativity gravity is not described in the same way as the other fundamental forces, it is still often useful and valid to describe it as such. We don't yet have a unifying theory of all of the forces, so they may end up being more similar than current theories describe.
After observing that deformations in spacetime would cause apparent accelerations akin to a force of gravity, the natural jump to make is that massive objects are themselves what deform spacetime @gravitation. The more massive the object, the larger the gravitational well and the more negative the gravitational potential energy of an object within that well. The denser the object, the steeper the gravitational well, and the stronger the gravitational attraction. See @gravitaional-potentials for an illustration.
What we experience as the force of gravity when standing on the surface of a massive body like Earth is an upward acceleration caused by the electromagnetic force of the bonds between atoms within the Earth. These atoms exert upward pressure on the soles of our feet. We know we are accelerating upward because we are not in freefall, which, under general relativity, is the natural, force-free state of motion. Our bodies, and all particles, simply wish to continue on their geodesics, and in the absence of any other forces, that path would be straight down toward the centre of the Earth.
In general relativity, spacetime is described as a four-dimensional *manifold* @gravitation. A manifold is a type of space that resembles Euclidean space locally irrespective of its global geometry. This is why on the scales humans are used to dealing with, we experience space as Euclidean and never anything else. Consequently, the flat spacetime described by Minkowski space is also a manifold. Specifically, the type of manifold that represents spacetime is known as a *Lorentzian manifold*, which has all the properties thus far described, plus some extra conditions. The Lorentzian manifold is a differentiable manifold, meaning that its differential is defined at all points without discontinuities between different regions.
Einstein formulated ten equations that describe how gravity behaves in the presence of mass and energy, known as Einstein's Field Equations (EFEs). The full complexity of the EFEs is not required for this brief introduction; however, they take the general form of
$ matrixn(G) + Lambda matrixn(g) = frac(8 pi G, c^4) matrixn(T) $ <einstein_equation>
where $matrixn(G)$ is the Einstein tensor, describing the curvature of spacetime given the specific distribution of mass-energy described by $matrixn(T)$, $Lambda$ is the cosmological constant, $matrixn(g)$ is the metric tensor, describing the generic geometric structure of spacetime, $matrixn(T)$ is the stress-energy tensor, describing the distribution of mass and energy across a given spacetime, $G$ is the Newtonian constant of gravitation, and $c$ is the speed of light in a vacuum. The Einstein tensor is given by
$ matrixn(G) = matrixn(R) - frac(1,2) matrixn(g) R , $ <einstein_tensor_eq>
where $matrixn(R)$ is the Ricci tensor, which determines how much the local geometry differs from the Euclidean metric, or in our case the Minkowski metric, $R$ is the Ricci scalar, the trace of the Ricci tensor (the scalar sum of its diagonal elements), which gives the scalar curvature, and $matrixn(g)$ is the metric tensor.
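A standard manipulation, sketched here for orientation, is to contract @einstein_tensor_eq with the inverse metric. In four dimensions the metric contracts with its inverse to give a trace of $4$, so the trace of the Einstein tensor is

$ g^(mu nu) G_(mu nu) = R - frac(1, 2) dot 4 dot R = -R . $

With $Lambda = 0$, taking the trace of @einstein_equation then gives $R = -frac(8 pi G, c^4) T$, where $T$ is the trace of the stress-energy tensor. In particular, in a vacuum, where $matrixn(T) = 0$, the scalar curvature vanishes entirely; this is precisely the regime of the vacuum solution discussed below.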
This description of spacetime as deformable geometry, altered by the location of the mass and energy it contains, gives us a more complete picture of how space, time, and gravity work in non-inertial reference frames. It also expands the validity of coordinate systems to include all coordinate systems, not just inertial ones.
#figure(
grid(
columns: 2,
rows: 1,
gutter: 1em,
[ #align(center)[#image("flat.png", width: 100%)]],
[ #align(center)[#image("earth.png", width: 100%)] ],
),
caption: [Two depictions of Einstein's spacetime. For illustrative purposes, since we are not 4D beings and the paper on which this will be printed very much isn't, the four dimensions of our universe have been compacted down into two. It should also be noted that these illustrations were not generated with correct physical mathematics but only to give an impression of the concepts being described. _Left:_ Minkowski space --- in the absence of any mass, spacetime will not experience any curvature @gravitation. This is the special case that Einstein's special relativity describes. If we were to place a particle into this environment, it would not experience any acceleration due to gravity. If the particle were massive, it would distort the spacetime, and the spacetime would no longer be considered Minkowski space, even though, alone, the particle would not experience any acceleration. Often, when dealing with particles of low mass, their effects on the distortion of spacetime are ignored, and we can still accurately describe the scenario with special relativity @special_relativity. _Right:_ Spacetime distorted by a massive object, shown in blue. Curved space is described by Einstein's more general theory, general relativity @gravitation. In this scenario, we can see how the presence of mass imprints a distortion into the shape of spacetime. Any particles also present in the same universe as the blue object, assuming it has existed indefinitely, will experience an apparent acceleration in the direction of the blue sphere. A beam of light, for example, comprised of photons and entirely massless, would be deflected when moving past the sphere. Even though light will always travel along its geodesic through the vacuum of space, the space itself is distorted; therefore, a geodesic path will manifest itself as an apparent attraction toward the sphere.
Notice that the mass of the photon is zero; therefore, using Newton's universal law of gravitation @newtons-law-of-universal-gravitation, it should not experience any gravitational attraction. Indeed, gravitational lensing of starlight as it moved past the Sun was one of the first confirmations of Einstein's theory of general relativity @gravitational_lensing. Even if we assume the photon has some infinitesimal mass, Newtonian mechanics predicts a deflection angle only half as large as general relativity predicts, and half as large as what is observed. Were this sphere several thousand kilometres in diameter, any lifeforms living on its surface, which would appear essentially flat at small scales, would experience a pervasive and ever-present downward force. Note that the mass of the object is distributed throughout its volume, so in regions near the centre of the sphere, the spacetime can appear quite flat, as equal amounts of mass surround it from all directions.],
) <flat>
Perhaps not the first question to arise, but certainly one that would come up eventually, is what happens if we keep increasing the density of a massive object? Is there a physical limit to the density of an object? Would the gravitational well keep getting steeper and steeper? This question was inadvertently answered by Karl Schwarzschild, who found the first non-flat solutions to the EFEs @gravitation. The solution describes the spacetime exterior to a spherically symmetric mass @gravitation. The Schwarzschild metric that describes the geometry of this manifold is
$ matrixn(g_(mu nu)) = mat(
- (1 - frac(r_s, r)), 0, 0, 0;
0, (1 - frac(r_s, r))^(-1), 0, 0;
0, 0, r^2, 0;
0, 0 , 0, r^2 sin^2 theta;
) $
and the spacetime line element for the Lorentzian manifold described by this metric is given by
$ d s^2 = - (1 - frac(r_s, r) ) c^2 d t^2 + (1 - frac(r_s, r))^(-1) d r^2 + r ^2 d Omega^2 $
where $r_s$ is the Schwarzschild radius of the massive body inducing the spacetime curvature. The Schwarzschild radius is given by
$ r_s = frac(2 G M , c^2) . $
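To build intuition for these scales, the formula above can be evaluated directly. The following short Python sketch (illustrative only; the masses are standard reference values) computes the Schwarzschild radius for the Sun and the Earth:

```python
# Schwarzschild radius: r_s = 2 G M / c^2
G = 6.674e-11  # Newtonian constant of gravitation, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light in a vacuum, m s^-1

def schwarzschild_radius(mass):
    """Radius within which a mass must be compressed to form a black hole."""
    return 2.0 * G * mass / c**2

for name, mass in [("Sun", 1.989e30), ("Earth", 5.972e24)]:
    print(f"{name}: r_s = {schwarzschild_radius(mass):.3e} m")
```

The Sun would need to be compressed to a sphere roughly 3 km in radius, and the Earth to roughly 9 mm, before gravitational collapse became unavoidable.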
By inspection, this metric introduces two singularities. The singularity at $r = r_s$ can be shown to be a coordinate singularity alone, which can be removed by a suitable choice of coordinate system. The other singularity, at the centre of the mass, often known simply as "the singularity", cannot be removed by such a trick. There was at first much confusion about the nature of the singularity; some assumed that the solution was purely theoretical and that such an object could not exist in nature.
It was later discovered that there are indeed physical scenarios in which matter can become so compressed that there is nothing to stop it from collapsing into what can mathematically be described as a single point @gravitation. This state occurs when a given spherical volume of radius $r$ contains a mass-energy content $M >= frac(r c^2 , 2 G)$. No known repulsive forces are strong enough to prevent this kind of gravitational collapse. Such objects create a gravitational well so steep that light itself cannot escape, and since light travels at the fastest possible velocity, nothing else can either. It was from this complete state of darkness that these objects received their name --- black holes. See @gravitaional-potentials for a depiction of a black hole.
#figure(
grid(
columns: 2,
rows: 1,
gutter: 1em,
[ #align(center)[#image("earth_moon.png", width: 100%)] ],
[ #align(center)[#image("black_hole.png", width: 100%)] ],
),
caption: [Two further depictions of spacetime. Again, these images are a 2D representation of 4D spacetime, and they were generated without correct physical descriptions, for illustrative purposes alone. _Left:_ Two objects, one in blue with a lesser mass and one in yellow with a greater mass. Objects with a larger mass distort spacetime to a greater extent. Objects close to either sphere will experience acceleration as the space curves and the objects continue to move in a straight line. In this scenario, if stationary, the yellow and blue objects will accelerate toward each other and, without outside interference, inevitably collide. However, if either ball is given an initial velocity perpendicular to the direction of the other sphere, so that its straight-line path orbits the other sphere, they can remain equidistant from each other in a stable orbit for potentially very long periods of time. As we will see, this orbit will eventually lose energy and decay, but depending on the masses of the two objects, this could take an extremely long time. _Right:_ A black hole. The three red lines represent the geodesic paths of three light beams as they move past the black hole at different distances. Thus far, we have assumed that the mass of the yellow and blue objects is evenly distributed through their volume, so the spacetime at the very centre of each object is, at its limit, entirely flat. In many scenarios, this is a physically possible arrangement of matter: although gravity pulls every particle within the object toward the centre, it is a very weak pull compared to the other forces of nature, which push back and stop the particles from continuing on their naturally preferred trajectory. This prevents a complete collapse of the object.
Gravity, however, has one advantage on its side: there is no negative mass, only positive. Whereas large bodies tend to be electrically neutral as positive and negative charges cancel each other out, gravity only ever grows stronger as mass accumulates. If enough mass congregates in the same place, or if the forces pushing matter away from the centre stop, there is nothing to prevent gravity from pulling every particle in that object right to the centre, into a singular point of mass with infinite density known as the singularity. As this collapse occurs, the curvature of spacetime surrounding the object grows stronger and stronger, eventually reaching the point where, within a region around the singularity known as the event horizon, all straight-line paths point toward the singularity. This means that no matter your speed, no matter your acceleration, you cannot escape, even if you are light itself. Consequently, no information can ever leave the event horizon, and anything within is forever censored from the rest of the universe.]
) <gravitaional-potentials>
== Orbits are Not Forever
=== Orbits
In both Newtonian mechanics and general relativity, it is possible to describe two objects in an excited state of constant motion, each gravitationally bound to the other but never directly touching, similar to an electron caught in an atomic orbital. As expected, both theories correctly describe existent phenomena. When in this state, the objects are said to be *orbiting* each other. If one object is significantly smaller than the other, then the smaller is usually referred to as the orbiter and the larger as the orbited, although in reality they exert equal force on each other, and the centre of their orbit, known as the barycentre, will never quite align with the centre of the more massive object, even if it is negligibly close.
It is also quite easy to arrive at the notion of an orbit starting from everyday intuition. We can imagine that we live on the surface of a massive spherical object, such as a planet. Hopefully, this is not a particularly hard thing to imagine. We feel an apparent gravitational attraction toward the planet's centre, but the planet's crust prevents us from following our natural geodesic. If we drop something, it will fall until its motion is stopped, usually by the ground. If we throw something, it will still fall, but it will also move some distance across the surface of the sphere, since we have imparted some velocity onto the object. Now imagine this planet, for some reason, has an incredibly tall mountain and no atmosphere, and we go to the top of that mountain with a suitably sized cannon, allowing us to throw objects (cannonballs in this case) much further. As we increase the amount of gunpowder we use to propel our cannonball, we impart more and more initial velocity onto the balls. We start to notice that as the velocity increases, the ball takes longer and longer to reach the ground, because the surface of the planet below curves away from the ball as it falls. Eventually, if we increase the initial velocity enough, we reach a point where the curvature of the planet below exactly matches the rate at which the ball falls toward the centre of the planet. Assuming no external forces, and that the ball doesn't crash into the back of your head as it completes its first full orbit, this ball could circle the planet forever; see @orbits-diagram. Whilst in orbit, the ball would be moving along its natural geodesic and would experience no net forces and hence no acceleration; it would be in freefall.
This is the microgravity experienced by astronauts aboard the International Space Station: their distance from Earth's centre is not much larger than at the surface of the planet, and things would still quite happily fall down if left at that altitude with no velocity with respect to the planet's surface.
#figure(
grid(
columns: 2,
rows: 1,
gutter: 1em,
[ #align(center)[#image("orbits.png", width: 100%)] ],
[ #align(center)[#image("barycentre.png", width: 100%)] ],
),
caption: [Two illustrations of scenarios involving simple orbital mechanics. _Left:_ In this thought experiment, we imagine a cannon atop a large mountain on an unphysically small spherical planet with mass $m$. As described in both Newtonian mechanics and general relativity, objects are attracted toward the centre of mass of the planet. Left to their own devices, they will fall until they meet some force resisting their motion, most likely the surface of the planet. The cannon operator can control the velocity of the projected cannonballs. They note that the more velocity they impart, the longer it takes for the ball to impact the surface of the planet. The balls can travel further before impacting the ground when their velocity is greater, even if the time to impact remains the same. However, with this increased distance travelled along the surface of the sphere, the distance between the ball and the ground increases as the surface of the planet curves away from the ball. Eventually, the ball's trajectory will circularise around the planet, and, if not impeded by any other forces, the ball will remain on this circular trajectory indefinitely. _Right:_ Two identical massive objects, such as planets, in a circular orbit with a shared centre, called a barycentre (note that the objects do not have to have equal mass, or be in a circular orbit, to have a shared barycentre; in fact, this will always be the case). Any massive objects can orbit each other, including black holes.]
) <orbits-diagram>
=== Gravitational Radiation
In Newtonian mechanics, assuming no other gravitational interactions and no energy losses through tidal heating or other means (so, not in reality), orbits are eternal and never decay. This is not the case under general relativity, however, where orbiting bodies release energy through gravitational radiation, otherwise known as gravitational waves @gravitation. Two objects in orbit will continuously emit gravitational waves, which carry energy away from the system and gradually decay the orbit until, eventually, the two objects merge. For most objects in the universe, the energy released through gravitational radiation is almost negligible, and orbital decay from other factors will usually be vastly more significant. However, when we look again at the densest objects in the universe, black holes, and their slightly less dense cousins, neutron stars, their gravitational wells are so extreme that the energy lost through the emission of gravitational waves becomes significant enough for them to merge within timeframes less than the lifespan of the universe, outputting a colossal amount of energy in a frenzy of ripples in the moments before their collision. These huge amounts of energy propagate out through the universe at the speed of light, causing imperceptible distortions in the geometry of spacetime as they go. They pass through planets, stars, and galaxies with almost no interaction at all.
Like many things, the existence of gravitational waves was predicted by Einstein @einstein_grav_waves (although there had been earlier proposals based on different physical theories) as a consequence of the mathematics of general relativity. General relativity predicts that any non-axisymmetric acceleration of mass, linear or circular, will generate gravitational waves, because such motions induce changes in the system's quadrupole moment. A perfect rotating sphere will not produce any gravitational waves, no matter how fast it is spinning, because there is no change in the quadrupole moment. A sphere with an asymmetric lump, however, like a mountain, will produce gravitational radiation @neutron_star_gw_review, as will two spheres connected by a bar spinning around their centre, or a massive alien starship accelerating forward using immense thrusters. However, as Einstein quite rightly calculated, for most densities and velocities, the energy released in such a manner is minuscule.
Under general relativity, gravitational waves travel at the speed of light @gravitation. They are not predicted by Newtonian mechanics, in which the propagation of gravitational effects is instant. Special and general relativity do not allow any information to travel faster than the speed of light, and gravitational information is no different @special_relativity @gravitation. All current observations suggest gravitational waves travel at, or very close to, the speed of light, although there is still some limited debate on the matter @speed_of_gravity. As a perfectly spherical body rotates, its gravitational field remains constant in all directions; due to the lack of a quadrupole moment, its rotation has no effect on the surrounding spacetime, so no waves are created that can propagate, and no energy is lost from the spinning sphere.
Aside from detections of the stochastic gravitational wave background @PTA, we have thus far only detected gravitational waves from extremely dense binary systems consisting of pairs of black holes @first_detction, neutron stars @first_bns, and their combination @first_nsbh. These systems, known as Compact Binary Coalescences (CBCs), have a clear quadrupole moment that produces strong gravitational waves, which propagate out through the universe, removing energy from the system and eventually resulting in the merger of the companions into one body. See @waves for an illustration. Gravitational waves from many events of this type pass through the Earth regularly; at the moment, it is only the loudest of these that we can detect. The fact that we can detect them at all, however, remains an impressive feat, only possible due to the nature of gravitational waves themselves. The amplitude of gravitational waves scales inversely with distance from their source, rather than by the inverse square law as might naively be expected. If this were not the case, detection would be all but impossible. The energy contained within the waves still decreases with the inverse square law, so the conservation of energy is maintained @gravitation.
#figure(
image("waves.png", width: 70%),
caption: [A depiction of the region of spacetime surrounding two inspiralling black holes. The spacetime grid visible is a 2D representation of the true 4D nature of our universe as described by general relativity @gravitation. This depiction was not produced by an accurate simulation but was constructed as a visual aid alone. Two massive objects can orbit each other if they have sufficient perpendicular velocity; this is a natural state for objects to find themselves trapped in, because the chances of direct collisions between objects are low, and any objects that find themselves gravitationally bound together and do not experience a direct collision will end up in an orbit. The same is true for black holes; whether they form from pairs of massive stars that both evolve into black holes after the end of their main sequence lives, or whether they form separately and, through dynamical interaction, end up adjoined and inseparable, the occurrence of two orbiting black holes is not inconceivable @black_hole_binary_formation. Over time, small amounts of energy leak from these binaries; ripples are sent out through the cosmos, carrying energy away from the system and gradually reducing the separation between the companions. As they get closer, the curvature of the spacetime they occupy increases, and thus their acceleration toward each other grows. They speed up, and the amount of energy lost through gravitational radiation increases, further accelerating their inspiral in an ever-quickening dance. If they start close enough, this process is enough to merge them within the lifetime of the universe; they will inevitably collide with an incredible release of energy out through spacetime as powerful gravitational waves. It is these waves, these disturbances in the nature of length and time itself, that we can measure here on Earth using gravitational wave observatories.]
) <waves>
Gravitational waves have two polarisation states, typically named plus, $+$, and cross, $times$. They are named as such due to the effect the different polarisations have on spacetime as they propagate through it. In both cases, the polarisations cause distortions in the local geometry of spacetime along two axes at once; this is a result of their quadrupole nature. Gravitational waves are transverse waves, meaning they oscillate in a direction perpendicular to their direction of propagation. As the wave oscillates, it alternates between stretching spacetime along one of the two axes of oscillation whilst squeezing along the other, and the reverse. See @wobble for an illustration of the effect of the passage of a gravitational wave through a region of spacetime. It is this stretching and squeezing effect that we have been able to detect in gravitational wave detectors on Earth. It is worth noting that because they are quadrupole waves and oscillate in two directions simultaneously, the polarisation states are $45 degree$ apart rather than the $90 degree$ separation of states seen in electromagnetic waves. This means that any two points on a line at a $45 degree$ angle to the polarisation of an incoming wave will not see any effect due to the passing wave.
#figure(
image("wibble_wobble.png", width: 100%),
caption: [The effect of two polarisation states of gravitational waves as they oscillate whilst passing through a region of spacetime. Each of the black dots represents freely falling particles unrestricted by any other forces. The plus and cross polarisations shown are arbitrary names, and the polarisation can be at any angle, but plus and cross are a convention to distinguish the two orthogonal states.]
) <wobble>
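To leading order, the displacements depicted in @wobble can be written down explicitly: a plus-polarised wave of strain $h$ travelling along the $z$-axis shifts a free particle at $(x, y)$ by $delta x = (h x)/2$ and $delta y = -(h y)/2$, whilst the cross polarisation mixes the two coordinates. The following illustrative Python sketch (with a strain value enormously exaggerated for clarity) applies this to a ring of test particles:

```python
import math

def displace(x, y, h, polarisation="plus"):
    """Leading-order displacement of a free test particle at (x, y) due to a
    gravitational wave of strain h propagating along the z-axis."""
    if polarisation == "plus":
        return 0.5 * h * x, -0.5 * h * y
    if polarisation == "cross":
        return 0.5 * h * y, 0.5 * h * x
    raise ValueError(f"unknown polarisation: {polarisation}")

# A ring of eight test particles, deformed by a plus-polarised wave at the
# phase where it stretches the x-axis and squeezes the y-axis.
h = 0.1  # vastly larger than any physical strain, for illustration
ring = [(math.cos(2 * math.pi * i / 8), math.sin(2 * math.pi * i / 8))
        for i in range(8)]
deformed = [(x + dx, y + dy) for x, y in ring
            for dx, dy in [displace(x, y, h)]]
```

Note that a particle on the $45 degree$ line, such as $(1, 1)$, keeps its distance from the origin to first order in $h$ under the plus polarisation, in line with the statement above.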
== Gravitational Wave Detection
Detecting gravity is quite easy: just let go of whatever you're holding. Detecting gravitational waves, however, requires some of the most precise measurement instruments humanity has ever constructed. This subsection covers the basics of how we detect gravitational waves and the challenges that our detection methods embed into the data.
=== Interferometry
After the notion of detectable gravitational waves became more widespread, a few methods were put forward as possible avenues of investigation; the most notable alternative to current methods was the resonant bar antenna @gravitational_wave_detectors. In the end, interferometers have proven to be viable gravitational wave detectors @first_detction, along with, more recently, pulsar timing arrays (PTAs) @PTA. These two detection methods operate in very different frequency regimes and so can detect very distinct gravitational wave phenomena --- the former able to detect gravitational waves generated by stellar-mass CBC events, and the latter able to detect the pervasive stochastic gravitational wave background generated by the overlapping and interfering signals of many supermassive black hole mergers. With increased sensitivity, future ground-based detectors may be able to extract the stochastic background generated by stellar-mass mergers, and with further data collection and analysis, PTAs might be able to detect individual supermassive black hole mergers.
We will focus our discussion on laser interferometry, as it is the most relevant to the work in this thesis. As illustrated by @wobble, gravitational waves have a periodic effect on the distance between pairs of freely falling particles (assuming their displacement doesn't lie at $45degree$ to the polarisation of the wave). We can use this effect to create a detection method if we can measure a precise distance between two freely floating masses @LIGO_interferometers. In the absence of all other interactions (hence freely falling), the distance between two particles should remain constant. If there is a change in this distance, we can deduce that it arises from a passing gravitational wave.
Masses suspended by a pendulum are effectively in a state of free fall in the direction perpendicular to the suspension fibres. This allows us to build test masses that are responsive to gravitational wave oscillations in one direction, provided they have significant isolation from other forces --- which is no small task; a considerable amount of engineering goes into ensuring these test masses are as isolated as possible from the outside world @LIGO_interferometers.
Once we have our test masses, we must be able to measure the distance between them with incredible accuracy. The LIGO interferometers can measure a change in the length of their four-kilometre arms of only #box($10^(-18)$ + h(1.5pt) + "m"), a distance equivalent to $1/200$#super("th") of the diameter of a proton @LIGO_interferometers, a truly remarkable feat. In order to achieve this degree of accuracy, they use laser interferometry.
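These numbers can be combined into the dimensionless strain, $h = frac(Delta L, L)$, which is the quantity gravitational wave detectors actually measure. A quick back-of-the-envelope check in Python:

```python
delta_L = 1e-18  # smallest measurable change in arm length, m
L = 4000.0       # LIGO arm length, m

strain = delta_L / L
print(f"h = {strain:.1e}")  # dimensionless strain sensitivity
```

This gives a strain of roughly $2.5 times 10^(-22)$, illustrating just how small the spacetime distortions from even the loudest astrophysical events are by the time they reach Earth.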
Interferometers use lasers to accurately measure a change in the length of two arms. In all current interferometers these arms are perpendicular to each other, but there are designs for future gravitational wave interferometers that use different angles and combine multiple overlapping interferometers @einstein_telescope. In the case of single-interferometer designs, right-angled arms capture the largest possible amount of information about one polarisation state, so they are preferred.
What follows is a very rudimentary description of the optics of a gravitational wave detecting interferometer @interferometers. The real detectors have a complex setup with many additional optics that will not be discussed here. A single beam produced by a coherent laser source is split between the two arms by a beam-splitting optic. Each beam travels down the length of its respective arm before being reflected off a mirror suspended by multiple pendulums --- the test masses. These beams are reflected back and forth along the arms thousands of times before leaving the cavity, being recombined with the beam from the other arm, and directed into a photodetector. The path lengths of the two beams are very slightly different, calibrated so that under normal operation the two beams destructively interfere with each other, resulting in a very low photodetector output. This is the output expected from the interferometer if there are no gravitational waves within the sensitive amplitude range and frequency band passing through the detector. When a detectable gravitational wave passes through the interferometer, it will generate an effective difference in the arm lengths, causing the distance between the freely falling mirrors to oscillate slightly. This oscillation will create a difference in the beam path lengths, and the two beams will no longer exactly cancel each other, causing the photodetector to detect incoming laser light. If the detector is working correctly, the amount of light detected will be proportional to the amplitude of the incoming gravitational wave at that moment. See @interferometer_diagram.
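As a rough illustration of the dark-fringe operating point described above, the output power of an idealised simple Michelson interferometer (ignoring the arm cavities, recycling optics, and calibration of a real detector) varies with the arm-length difference as $P_"out" = P_"in" sin^2 ((2 pi Delta L)/lambda)$. A minimal Python sketch:

```python
import math

def output_power(delta_L, wavelength=1.064e-6, p_in=1.0):
    """Photodetector power of an idealised Michelson interferometer held at a
    dark fringe, as a function of the arm-length difference delta_L (metres).
    The 1064 nm default matches the Nd:YAG lasers used in current detectors."""
    phase = 4.0 * math.pi * delta_L / wavelength  # round-trip phase difference
    return p_in * math.sin(phase / 2.0) ** 2

print(output_power(0.0))           # matched arms: dark port
print(output_power(1.064e-6 / 4))  # quarter-wave difference: bright fringe
```

In a real detector the response is linearised about this operating point, so that the photodetector output tracks the gravitational wave strain, as described above.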
#figure(
image("interferometer.png", width: 80%),
caption: [A very simplified interferometer diagram. Real gravitational wave detection apparatus have considerably more optics than what is shown. The power recycling and signal recycling mirrors help maintain a high laser power within the cavities. Higher laser powers are preferable as they help reduce quantum shot noise, the limiting source of noise at high frequencies.]
) <interferometer_diagram>
A detector of this kind can only detect the components of incoming gravitational wave signals that align with its polarisation @antenna_pattern. An incoming signal that was completely antialigned with the detector arms would be almost undetectable, though there will always be higher modes present that will produce some signal-to-noise ratio (SNR) in the detector. However, this small SNR would likely be undetectable at current sensitivities unless the unaligned event was very close to Earth. Fortunately, the occurrence of completely unaligned waves is rare, since most signals are at least partially aligned with a given detector. Interferometers are also sensitive to the angle between the plane of their arms and the source direction. They are most sensitive to signals that lie directly above or below the plane of the detector arms, and least sensitive to signals whose propagation direction is parallel to the plane of the arms. These two factors combine to generate the antenna pattern of the detector, which dictates which regions of the sky the detector is most sensitive to.
=== The Global Detector Network
There are currently five operational gravitational wave detectors worldwide: LIGO Livingston (L1), LIGO Hanford (H1), Virgo (V1), KAGRA (K1), and GEO600 (G1) @open_data. See @global_network. Several further detectors are planned, including LIGO India @LIGO_India, and three next-generation detectors: the Einstein Telescope @einstein_telescope, Cosmic Explorer @cosmic_explorer, and LISA @LISA, a space-based detector constellation.
Having multiple geographically separated detectors has several advantages.
- *Verification:* Geographic separation provides verification that detected signals come from gravitational wave sources and are not local experimental glitches or terrestrial phenomena that merely look like signals @network_snr. Since there are no other known phenomena that can cause a similar effect in such spatially separated detectors, if we see a similar signal in multiple detectors, we can say that either it was caused by a gravitational wave or it is a statistical fluke. The chances of the latter decrease with the number of detectors.
- *Sky Localisation:* Gravitational wave detectors cannot be pointed at a particular area of the sky. Other than their antenna pattern, which is fixed and moves with the Earth, they are sensitive to signals from many directions @network_snr. This means we don't have to worry about choosing where to point our detectors, but it also means that we have very little information about the source location of an incoming signal, other than a probabilistic analysis using the antenna pattern. Because gravitational waves travel at the speed of light, they won't usually arrive at multiple detectors simultaneously. We can use the arrival time difference between detectors to localise gravitational wave sources with a much tighter constraint than using the antenna pattern alone. With two detectors, we can localise to a ring in the sky; with three, we can localise further to two degenerate regions of the sky, narrowing it down to one region with four detectors. Combining this triangulation method with the antenna patterns of each of the detectors in the network can provide good localisation if all detectors are functioning as expected.
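The timing argument above can be sketched numerically. For a plane wave arriving from unit direction $hat(n)$ (pointing from Earth toward the source), the difference in arrival times between two detectors at positions $r_a$ and $r_b$ is $Delta t = -((r_b - r_a) dot hat(n))/c$. The following Python sketch uses made-up detector positions, not the real site coordinates:

```python
C = 2.998e8  # speed of light, m/s

def arrival_delay(r_a, r_b, n_hat):
    """Arrival-time difference t_b - t_a (seconds) between detectors at
    positions r_a and r_b for a plane gravitational wave coming from the
    unit direction n_hat. The detector further along n_hat is hit first."""
    return -sum((b - a) * n for a, b, n in zip(r_a, r_b, n_hat)) / C

# Two hypothetical detectors separated by 3000 km along the x-axis,
# roughly the Hanford-Livingston baseline scale.
det_a = (0.0, 0.0, 0.0)
det_b = (3.0e6, 0.0, 0.0)

print(arrival_delay(det_a, det_b, (1.0, 0.0, 0.0)))  # source along the baseline
print(arrival_delay(det_a, det_b, (0.0, 0.0, 1.0)))  # source overhead: no delay
```

A measured delay pins down only the angle between $hat(n)$ and the baseline, which is why a single pair of detectors localises a source to a ring on the sky, and additional detectors with independent baselines shrink this region further.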
#figure(
image("network.png", width: 100%),
  caption: [Location of the currently operating gravitational wave detectors: LIGO Livingston (L1), LIGO Hanford (H1), Virgo (V1), Kagra (K1), and GEO600 (G1) @open_data. Arm angles are accurate; arm lengths are drawn to scale relative to the real detectors: 4 km for the two LIGO detectors, 3 km for Virgo and Kagra, and 600 m for GEO600. ]
) <global_network>
=== Interferometer Noise <interferometer_noise_sec>
Perhaps the area of experimental gravitational wave science most relevant to gravitational wave data analysis is interferometer noise. Data scientists must examine the interferometer photodetector outputs and determine whether a gravitational wave signal is present in the data (signal detection), then make statements about the properties of any detected signals, the gravitational wave source, and its relation to us (parameter estimation).
Because it is not possible to reduce noise in all areas of frequency space at once, gravitational wave interferometers are designed to be sensitive in a particular region of frequency space @ligo_o3_noise; this region is chosen to reveal the particular types of gravitational wave features that are of interest to us. It makes sense, then, that the current generation of detectors was designed with a sensitivity range overlapping the region in which it was correctly predicted that CBCs would lie. The noise budget of LIGO Hanford can be seen in @noise_diagram; it shows the main sources of noise in the detector, which we will cover very briefly.
#figure(
image("noise_budget.PNG", width: 80%),
caption: [Full noise budget of the LIGO Hanford Observatory (LHO) during the 3#super("rd") joint observing run. This image was sourced from @ligo_o3_noise. ]
) <noise_diagram>
+ *Quantum* noise is the limiting noise at high frequencies @ligo_o3_noise, primarily in the form of quantum shot noise and quantum radiation pressure noise. It is caused by stochastic quantum fluctuations of the vacuum electric field. Quantum shot noise is generated by the uncertainty in photon arrival time at the photodetector and can be mitigated by using higher laser power. Quantum radiation pressure noise is caused by variations in the pressure on the optics due to quantum fluctuations; this kind of noise increases with laser power, but its overall contribution to the noise budget is much smaller, so reducing shot noise is usually preferable.
+ *Thermal* noise is caused by the motion of the particles that comprise the test masses, coatings, and suspensions: the random motion of particles present in any material not cooled to absolute zero (that is, all materials). Thermal noise dominates at lower frequencies. Reductions in thermal noise are achieved primarily through the design of the optics and suspension systems.
+ *Seismic* noise is noise generated by ground motion. One of the purposes of the test mass suspensions is to reduce this noise, a job they perform admirably. Seismic noise is not a dominant noise source in any frequency range.
These types of noise, plus several other accounted-for sources and small amounts of unaccounted-for noise, sum to a coloured Gaussian background. Some elements of the noise vary with the time of day, the time of year, and the status of the equipment, because they are sensitive to changes in the weather, local geography, and human activity. This means the noise is also non-stationary and can fluctuate quite dramatically even on an hourly basis @det_char. There are also a number of known and unknown sources of transient non-linear glitches that can cause features to appear in the noise. These are some of the most difficult noise sources to deal with and are discussed in more detail in @glitch-sec. The non-stationary nature of the noise, together with the presence of non-linear glitches, makes for an intriguing backdrop against which to perform data analysis. Most of these problems already have working solutions, but there are certainly areas with potential for improvement.
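To make the idea of a coloured Gaussian background concrete, the sketch below (an illustration of our own; the spectral shape is a toy choice and not any detector's real noise curve) colours white Gaussian noise in the frequency domain with the square root of a chosen power spectral density:

```python
import numpy as np

def coloured_noise(n, dt, psd, seed=0):
    """Draw a coloured Gaussian time series whose spectrum follows `psd`.

    White Gaussian noise is taken to the frequency domain, scaled by
    sqrt(PSD), and transformed back; `psd` maps frequency (Hz) to power.
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    freqs = np.fft.rfftfreq(n, dt)
    spectrum = np.fft.rfft(white) * np.sqrt(psd(freqs))
    return np.fft.irfft(spectrum, n)

# Toy spectrum that falls off with frequency (flattened near zero so we
# never divide by zero); real detector noise curves are far more complex.
toy_psd = lambda f: 1.0 / (1.0 + f)
x = coloured_noise(4096, 1.0 / 512, toy_psd)
```

Non-stationarity can be mimicked by letting the PSD drift between segments; glitches have no such simple recipe, which is part of what makes them difficult.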
Gravitational wave interferometers are not perfect detectors. Their sensitivity is limited by the noise present in their output. Despite best efforts, it is not, and never will be, possible to eliminate all sources of noise. When such precise measurements are taken, the number of factors that can induce noise in the output is considerable, and it is remarkable that the detectors are as sensitive as they are. Nonetheless, the challenge remains to uncover the most effective techniques for extracting information from signals obscured by detector noise. The deeper and more effectively we can peer through the noise, the more information will be available to us to advance our knowledge and understanding of the universe.
This thesis focuses on a very small part of that problem. We attempt to apply the latest big thing in data science, machine learning, to both detection and parameter estimation problems in the hopes that we can make a small contribution to the ongoing effort. |
|
https://github.com/EpicEricEE/typst-quick-maths | https://raw.githubusercontent.com/EpicEricEE/typst-quick-maths/master/README.md | markdown | MIT License | # quick-maths
A package for creating custom shorthands for math equations.
## Usage
The package comes with a single template function `shorthands` that takes one or more tuples of the form `(shorthand, replacement)`, where `shorthand` can be a string or content.
```typ
#import "@preview/quick-maths:0.1.0": shorthands
#show: shorthands.with(
($+-$, $plus.minus$),
($|-$, math.tack),
($<=$, math.arrow.l.double) // Replaces '≤'
)
$ x^2 = 9 quad <==> quad x = +-3 $
$ A or B |- A $
$ x <= y $
```

|
https://github.com/hei-templates/hevs-typsttemplate-thesis | https://raw.githubusercontent.com/hei-templates/hevs-typsttemplate-thesis/main/00-templates/sections.typ | typst | MIT License | //
// Description: Some recurrent section elements mainly for exams
// Author : <NAME>
//
#import "constants.typ": *
#import "boxes.typ": *
#import "tablex.typ": *
#let part(
title: [],
number: 1,
size: huge,
) = {
pagebreak()
v(1fr)
align(center, smallcaps(text(size, [Part #number])))
v(2em)
align(center, smallcaps(text(size, title)))
v(1fr)
pagebreak()
}
#let titlebox(
width: 100%,
radius: 4pt,
border: 1pt,
inset: 20pt,
outset: -10pt,
linecolor: box-border,
titlesize: huge,
subtitlesize: larger,
title: [],
subtitle: [],
) = {
if title != [] {
align(center,
rect(
stroke: (left:linecolor+border, top:linecolor+border, rest:linecolor+(border+1pt)),
radius: radius,
outset: (left:outset, right:outset),
inset: (left:inset*2, top:inset, right:inset*2, bottom:inset),
width: width)[
#align(center,
[
#if subtitle != [] {
[#text(titlesize, title) \ \ #text(subtitlesize, subtitle)]
} else {
text(titlesize, title)
}
]
)
]
)
}
}
#let exam_header(
nbrEx: 5+1,
pts: 10,
lang: "en" // "de" "fr"
) = {
if nbrEx == 0 {
tablex(
columns: (2cm, 90%),
align: center + top,
stroke: none,
(), (),
if lang == "en" or lang == "de" {[#text(large, "Name:")]} else {[#text(large, "Nom:")]
},
[#line(start: (0cm, 0.7cm), length:(100%), stroke:(dash:"loosely-dashed"))],
)
} else if nbrEx == 1 {
tablex(
columns: (2cm, 90%-1.3cm, 1.3cm),
align: center + top,
stroke: none,
[], [], if lang == "en" {[#v(-0.4cm)#text(small, "Grade")]} else {[#v(-0.4cm)#text(small, "Note")]},
if lang == "en" or lang == "de" {[#text(large, "Name:")]} else {[#text(large, "Nom:")]
},
[#line(start: (0cm, 0.7cm), length:(100%), stroke:(dash:"loosely-dashed"))],
[#v(-0.3cm)#rect(height:1cm, width:1.2cm, stroke:2pt)],
)
} else if nbrEx == 2 {
tablex(
columns: (2cm, 90%-2.3cm, 1cm, 1.3cm),
align: center + top,
stroke: none,
[], [], [#v(-0.4cm)#text(small, "1")], if lang == "en" {[#v(-0.4cm)#text(small, "Grade")]} else {[#v(-0.4cm)#text(small, "Note")]},
if lang == "en" or lang == "de" {[#text(large, "Name:")]} else {[#text(large, "Nom:")]
},
[#line(start: (0cm, 0.7cm), length:(100%), stroke:(dash:"loosely-dashed"))],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#rect(height:1cm, width:1.2cm, stroke:2pt)],
[], [], [#v(-0.2cm)#text(small, [(#pts)])], [],
)
} else if nbrEx == 3 {
tablex(
columns: (2cm, 90%-3.3cm, 1cm, 1cm, 1.3cm),
align: center + top,
stroke: none,
[], [], [#v(-0.4cm)#text(small, "1")], [#v(-0.4cm)#text(small, "2")], if lang == "en" {[#v(-0.4cm)#text(small, "Grade")]} else {[#v(-0.4cm)#text(small, "Note")]},
if lang == "en" or lang == "de" {[#text(large, "Name:")]} else {[#text(large, "Nom:")]
},
[#line(start: (0cm, 0.7cm), length:(100%), stroke:(dash:"loosely-dashed"))],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#rect(height:1cm, width:1.2cm, stroke:2pt)],
[], [], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [],
)
} else if nbrEx == 4 {
tablex(
columns: (2cm, 90%-4.3cm, 1cm, 1cm, 1cm, 1.3cm),
align: center + top,
stroke: none,
[], [], [#v(-0.4cm)#text(small, "1")], [#v(-0.4cm)#text(small, "2")], [#v(-0.4cm)#text(small, "3")], if lang == "en" {[#v(-0.4cm)#text(small, "Grade")]} else {[#v(-0.4cm)#text(small, "Note")]},
if lang == "en" or lang == "de" {[#text(large, "Name:")]} else {[#text(large, "Nom:")]
},
[#line(start: (0cm, 0.7cm), length:(100%), stroke:(dash:"loosely-dashed"))],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#rect(height:1cm, width:1.2cm, stroke:2pt)],
[], [], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [],
)
} else if nbrEx == 5 {
tablex(
columns: (2cm, 90%-5.3cm, 1cm, 1cm, 1cm, 1cm, 1.3cm),
align: center + top,
stroke: none,
[], [], [#v(-0.4cm)#text(small, "1")], [#v(-0.4cm)#text(small, "2")], [#v(-0.4cm)#text(small, "3")], [#v(-0.4cm)#text(small, "4")], if lang == "en" {[#v(-0.4cm)#text(small, "Grade")]} else {[#v(-0.4cm)#text(small, "Note")]},
if lang == "en" or lang == "de" {[#text(large, "Name:")]} else {[#text(large, "Nom:")]
},
[#line(start: (0cm, 0.7cm), length:(100%), stroke:(dash:"loosely-dashed"))],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#rect(height:1cm, width:1.2cm, stroke:2pt)],
[], [], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [],
)
} else if nbrEx == 6 {
tablex(
columns: (2cm, 90%-6.3cm, 1cm, 1cm, 1cm, 1cm, 1cm, 1.3cm),
align: center + top,
stroke: none,
[], [], [#v(-0.4cm)#text(small, "1")], [#v(-0.4cm)#text(small, "2")], [#v(-0.4cm)#text(small, "3")], [#v(-0.4cm)#text(small, "4")], [#v(-0.4cm)#text(small, "5")], if lang == "en" {[#v(-0.4cm)#text(small, "Grade")]} else {[#v(-0.4cm)#text(small, "Note")]},
if lang == "en" or lang == "de" {[#text(large, "Name:")]} else {[#text(large, "Nom:")]
},
[#line(start: (0cm, 0.7cm), length:(100%), stroke:(dash:"loosely-dashed"))],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#rect(height:1cm, width:1.2cm, stroke:2pt)],
[], [], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [],
)
} else if nbrEx == 7 {
tablex(
columns: (2cm, 90%-7.3cm, 1cm, 1cm, 1cm, 1cm, 1cm, 1cm, 1.3cm),
align: center + top,
stroke: none,
[], [], [#v(-0.4cm)#text(small, "1")], [#v(-0.4cm)#text(small, "2")], [#v(-0.4cm)#text(small, "3")], [#v(-0.4cm)#text(small, "4")], [#v(-0.4cm)#text(small, "5")], [#v(-0.4cm)#text(small, "6")], if lang == "en" {[#v(-0.4cm)#text(small, "Grade")]} else {[#v(-0.4cm)#text(small, "Note")]},
if lang == "en" or lang == "de" {[#text(large, "Name:")]} else {[#text(large, "Nom:")]
},
[#line(start: (0cm, 0.7cm), length:(100%), stroke:(dash:"loosely-dashed"))],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#rect(height:1cm, width:1.2cm, stroke:2pt)],
[], [], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [],
)
} else if nbrEx == 8 {
tablex(
columns: (2cm, 90%-8.3cm, 1cm, 1cm, 1cm, 1cm, 1cm, 1cm, 1cm, 1.3cm),
align: center + top,
stroke: none,
[], [], [#v(-0.4cm)#text(small, "1")], [#v(-0.4cm)#text(small, "2")], [#v(-0.4cm)#text(small, "3")], [#v(-0.4cm)#text(small, "4")], [#v(-0.4cm)#text(small, "5")], [#v(-0.4cm)#text(small, "6")], [#v(-0.4cm)#text(small, "7")], if lang == "en" {[#v(-0.4cm)#text(small, "Grade")]} else {[#v(-0.4cm)#text(small, "Note")]},
if lang == "en" or lang == "de" {[#text(large, "Name:")]} else {[#text(large, "Nom:")]
},
[#line(start: (0cm, 0.7cm), length:(100%), stroke:(dash:"loosely-dashed"))],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#rect(height:1cm, width:1.2cm, stroke:2pt)],
[], [], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [],
)
} else if nbrEx == 9 {
tablex(
columns: (2cm, 90%-9.3cm, 1cm, 1cm, 1cm, 1cm, 1cm, 1cm, 1cm, 1cm, 1.3cm),
align: center + top,
stroke: none,
[], [], [#v(-0.4cm)#text(small, "1")], [#v(-0.4cm)#text(small, "2")], [#v(-0.4cm)#text(small, "3")], [#v(-0.4cm)#text(small, "4")], [#v(-0.4cm)#text(small, "5")], [#v(-0.4cm)#text(small, "6")], [#v(-0.4cm)#text(small, "7")], [#v(-0.4cm)#text(small, "8")], if lang == "en" {[#v(-0.4cm)#text(small, "Grade")]} else {[#v(-0.4cm)#text(small, "Note")]},
if lang == "en" or lang == "de" {[#text(large, "Name:")]} else {[#text(large, "Nom:")]
},
[#line(start: (0cm, 0.7cm), length:(100%), stroke:(dash:"loosely-dashed"))],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#rect(height:1cm, width:1.2cm, stroke:2pt)],
[], [], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [],
)
} else if nbrEx == 10 {
tablex(
columns: (2cm, 90%-10.3cm, 1cm, 1cm, 1cm, 1cm, 1cm, 1cm, 1cm, 1cm, 1cm, 1.3cm),
align: center + top,
stroke: none,
[], [], [#v(-0.4cm)#text(small, "1")], [#v(-0.4cm)#text(small, "2")], [#v(-0.4cm)#text(small, "3")], [#v(-0.4cm)#text(small, "4")], [#v(-0.4cm)#text(small, "5")], [#v(-0.4cm)#text(small, "6")], [#v(-0.4cm)#text(small, "7")], [#v(-0.4cm)#text(small, "8")], [#v(-0.4cm)#text(small, "9")], if lang == "en" {[#v(-0.4cm)#text(small, "Grade")]} else {[#v(-0.4cm)#text(small, "Note")]},
if lang == "en" or lang == "de" {[#text(large, "Name:")]} else {[#text(large, "Nom:")]
},
[#line(start: (0cm, 0.7cm), length:(100%), stroke:(dash:"loosely-dashed"))],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#square(size:1cm, stroke:1pt)],
[#v(-0.3cm)#rect(height:1cm, width:1.2cm, stroke:2pt)],
[], [], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [#v(-0.2cm)#text(small, [(#pts)])], [],
)
}
/*if lang == "en" {
[#text(large, "Name:")]
} else if lang == "fr" {
[#text(large, "Nom:")]
} else if lang == "de" {
[#text(large, "Name:")]
}
line(start: (2cm, 0cm), length:(80%-nbrEx*5%), stroke:(dash:"loosely-dashed"))
if nbrEx != 0 {
let i = 0
while i <= nbrEx {
if i == nbrEx {
square(size:1.3cm, stroke:2pt)
} else {
square(size:1cm, stroke:1pt)
}
i = i + 1
}
}*/
}
#let exam_reminder_did(
lang: "en" // "de" "fr",
) = {
if lang == "en" {
infobox[
*Exam Reminder:* \
You can only use the following items:
      - a laptop without an internet connection
      - a pocket calculator
      - any paper documents you want
It is forbidden to use generative AI.
\
*Good Luck!*
]
} else if lang == "fr" {
infobox[
*Rappel d'examen :* \
Vous ne pouvez utiliser que les éléments suivants :
- un ordinateur portable sans connexion internet
- une calculatrice de poche
- tous les documents papier que vous souhaitez
Il est interdit d'utiliser l'IA générative.
\
*Bonne chance!*
]
} else if lang == "de" {
infobox[
*Prüfungserinnerung:* \
Sie können nur die folgenden Gegenstände verwenden:
- ein Laptop ohne Internetanschluss
- einen Taschenrechner
- alle Papierdokumente
Es ist verboten, generative KI zu verwenden.
\
*Viel Glück!*
]
}
}
#let exam_reminder_car(
lang: "en" // "de" "fr",
) = {
if lang == "en" {
infobox[
*Exam Reminder:*
\ \
You can only use the following items:
- the two-page summary you created.
      - a pocket calculator
In addition, properly comment all high-level and assembler code to explain its purpose and how it fits into the program structure.
\ \
*Good Luck!*
]
} else if lang == "fr" {
infobox[
*Rappel d'examen :*
\ \
Vous ne pouvez utiliser que les éléments suivants :
- le résumé de deux pages que vous avez créé.
- une calculatrice de poche
Commenter également tout le code de haut niveau et le code assembleur de manière appropriée afin d'expliquer son but et son intégration dans la structure du programme.
\ \
*Bonne chance!*
]
} else if lang == "de" {
infobox[
*Prüfungserinnerung:*
\ \
Sie können nur die folgenden Elemente verwenden:
- die zweiseitige Zusammenfassung, die Sie erstellt haben.
- einen Taschenrechner
Kommentieren Sie ausserdem den gesamten High-Level- und Assembler-Code ordnungsgemäss aus, um seinen Zweck und seine Einbindung in die Programmstruktur zu erklären.
\ \
*Viel Glück!*
]
}
}
#let exam_reminder_syd(
lang: "en" // "de" "fr",
) = {
if lang == "en" {
infobox[
*Exam Reminder:*
\
You can only use the following items:
- your personal notes
      - the course slides
//- A one-page summary (front and back) prepared by you.
It is forbidden to use generative AI.
\
*Good Luck!*
]
} else if lang == "fr" {
infobox[
*Rappel d'examen :*
\
Vous ne pouvez utiliser que les éléments suivants :
- vos notes personnelles
- les diapositives du cours
Il est interdit d'utiliser l'IA générative.
\
*Bonne chance!*
]
} else if lang == "de" {
infobox[
*Prüfungserinnerung:*
\
Sie können nur die folgenden Elemente verwenden:
- Ihre persönlichen Notizen
- die Vorlesungsfolien
Es ist verboten, generative KI zu verwenden.
\
*Viel Glück!*
]
}
}
#let exercises_solution_hints(
lang: "en" // "de" "fr",
) = {
if lang == "en" {
infobox[
*Solution vs. Hints:*
\
While not every response provided herein constitutes a comprehensive solution, some serve as helpful hints intended to guide you toward discovering the solution independently. In certain instances, only a portion of the solution is presented.
]
} else if lang == "fr" {
infobox[
*Solution vs. Hints:*
\
Toutes les réponses fournies ici ne sont pas des solutions complètes. Certaines ne sont que des indices pour vous aider à trouver la solution vous-même. Dans d'autres cas, seule une partie de la solution est fournie.
]
} else if lang == "de" {
infobox[
*Lösung vs. Hinweise:*
\
Nicht alle hier gegebenen Antworten sind vollständige Lösungen. Einige dienen lediglich als Hinweise, um Ihnen bei der eigenständigen Lösungsfindung zu helfen. In anderen Fällen wird nur ein Teil der Lösung präsentiert.
]
}
} |
https://github.com/magicwenli/keyle | https://raw.githubusercontent.com/magicwenli/keyle/main/CHANGELOG.md | markdown | MIT License | ## Unreleased
### Feat
- suit project to commentizen
## 0.2.0 (2024-08-19)
### Fix
- theme type-writer overlaps; add test cases
## 0.1.2 (2024-08-13)
### Feat
- support shadow for themes and modify example
## 0.1.1 (2024-08-09)
### Feat
- format keyle.typ and bump to 0.1.1
- add example for theme
- add `config` factory method pattern
- add Biolinum Keyboard style
## 0.1.0 (2024-07-24)
### Feat
- enhance doc and bump to 0.1.0
- add type-writer style
- support deep-blue style and bump to 0.0.2
- init keyle lib for typst
### Refactor
- add repository
|
https://github.com/jneug/schule-typst | https://raw.githubusercontent.com/jneug/schule-typst/main/src/schule.typ | typst | MIT License |
#import "ab.typ"
#import "kl.typ"
#import "ka.typ"
#import "pt.typ"
#import "cu.typ"
#import "wp.typ"
#import "lt.typ"
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/tuhi-course-poster-vuw/0.1.0/tuhi-course-poster-vuw.typ | typst | Apache License 2.0 | #import "@preview/codetastic:0.1.0": qrcode
#let scaps(body, tracking: 1pt, weight: "regular") = {
set text(tracking: tracking, weight: weight)
smallcaps[#lower[#body]]
}
#let caps(body, tracking: 0pt, weight: "semibold") = {
set text(tracking: tracking, weight: weight)
smallcaps[#lower[#body]]
}
#let tuhi-course-poster-vuw(
coursetitle: "course title",
courseid: "rand101",
coursemajor: none,
courseimage : none,
coursepoints: "15",
coursetrimester: none,
courselecturers: ("nature",),
courseformat: none,
courseprereqs: none,
imagecredit: none,
coursedescription: none,
courses: none,
contact: [address goes here],
logo: none,
qrcodeurl: none,
// The main content.
body
) = {
let (pagewidth, pageheight, bleed) = (297mm, 420mm, 6mm)
let margin-body-left = 33.5mm
let margin-shaded-top = 125mm
let margin-shaded-bottom = 50mm
let padding-body = 0.5em
let title-top = 40mm
let details-left = 210mm
let details-top = 165mm
let details-width = 70mm
let details-height = 85mm
let schedule-width = 230mm
let schedule-spacer = 6mm
let schedule-height = 20mm
let logo-height = 2cm
let address-width = 110mm
let address-left = 126mm
let address-top = 15mm
let image-width = pagewidth - 2*margin-body-left
let image-top = 104mm
let stoke-width = 1.4pt
let image-height = 0.5*image-width
let description-top = 28pt
let content-width = 160mm
let content-height = 100mm
let content-top = 28pt
let vuwcol = rgb(0,71,48)
let darkcol = rgb("#555555") // dark text
// function to draw the page background with a specific colour
let bkg(col: vuwcol) = {
place(
dx: 0mm,
dy: margin-shaded-top,
rect(
width: pagewidth,
height: pageheight - margin-shaded-top - margin-shaded-bottom,
fill: color.mix((col, 10%), (white, 90%)),
)
)
place(
dx: 0mm,
dy: margin-shaded-top,
line(
length: pagewidth,
stroke: stoke-width + col,
)
)
place(
dx: 0mm,
dy: pageheight - margin-shaded-bottom,
line(
length: pagewidth,
stroke: stoke-width + col,
)
)
// details separators
place(
dx: details-left,
dy: pageheight - details-top,
line(
length: details-width,
stroke: stoke-width + col,
)
)
place(
dx: details-left,
dy: pageheight - details-top + details-height,
line(
length: details-width,
stroke: stoke-width + col,
)
)
// schedule separator
place(
dx: 0.5*(pagewidth - schedule-width),
dy: pageheight - margin-shaded-bottom - schedule-spacer,
line(
length: schedule-width,
stroke: (thickness: 0.5*stoke-width , paint: col , dash: "dotted"),
)
)
place(
dx: 0.5*(pagewidth - schedule-width),
dy: pageheight - margin-shaded-bottom - 2*schedule-spacer,
line(
length: schedule-width,
stroke: (thickness: 0.5*stoke-width , paint: col , dash: "dotted"),
)
)
}
// computed
let coursediscipline = courseid.slice(0,4)
let coursecode = courseid.slice(4,7)
let year = coursecode.at(0)
let courseurl = qrcodeurl + coursediscipline + "/" + coursecode
// url expected to follow this scheme: https://www.wgtn.ac.nz/courses/phys/304/2023
// colour inferred from course level
let col = (
"1": rgb(235,157,12),
"2": rgb(0,158,224),
"3": rgb(86,163,38),
"4": rgb(226,0,122),
).at(year)
// now set up page with coloured background
set page(height: pageheight, width: pagewidth, margin: 0mm, background: bkg(col: col))
set text(size: 13pt)
set par(justify: false, leading: 0.2em)
// contact details
place(
dx: address-left,
dy: pageheight - margin-shaded-bottom + address-top,
box(width: address-width, height: logo-height)[
#set align(top)
#set par(justify: true, leading: 0.7em)
// #rect(width:100%, height: 100%)
#set text(weight: 200, size: 11pt, fill: vuwcol)
#scaps(tracking: 0.2pt,weight:400)[#contact.school] \
#scaps(tracking: 0.2pt,weight:400)[#contact.faculty] \
#contact.university\
#scaps(tracking: 1pt,weight:400)[p:] #raw(contact.phone) • #scaps(tracking: 1pt,weight:400)[e:] #raw(contact.email) • #scaps(tracking: 1pt,weight:400)[w:] #raw(contact.website)
],
)
// qr code
place(
dx: pagewidth - margin-body-left - logo-height,
dy: pageheight - margin-shaded-bottom + address-top,
qrcode(courseurl, width: logo-height, ecl:"l",
colors: (white, vuwcol), quiet-zone: 0)
)
// logo
place(
dx: margin-body-left,
dy: pageheight - margin-shaded-bottom + address-top,
box(height: logo-height, width: auto)[#logo]
)
// now defining coloured replacements globally
show "•": text.with(weight: "extralight", fill: col)
show "·": text.with(weight: "extralight", fill: col)
show "↯": text.with(fill: col)
// image with border
place(
dx: margin-body-left,
dy: image-top,
rect(width:image-width, height:0.5*image-width, stroke: 0.5*stoke-width)
)
place(
dx: margin-body-left,
dy: image-top,{
set image(width:image-width, height:0.5*image-width)
courseimage
},
)
// title
let title = [
#set align(center+bottom)
#set par(leading: 0.4em)
#show text : smallcaps
#set text(size: 46pt, fill: col, tracking: 0pt)
#text(weight: "thin")[#coursediscipline]#h(0.042em)#text(fill: black, weight: "medium")[#coursecode]\ #text(weight: "semibold", tracking: 0pt, fill: black)[#coursetitle]
]
// title
place(
dx: 0mm,
dy: title-top,
block(width:pagewidth, height:45mm, inset:1em)[#title]
)
// description
place(
dx: margin-body-left + padding-body,
dy: image-top + image-height + description-top,
block(width:image-width - padding-body, height:2*description-top)[
#set text(size: 22pt, weight: 700, fill: darkcol)
#set par(justify: false, leading: 0.6em)
#coursedescription]
)
// course content
place(
dx: margin-body-left + padding-body,
dy: image-top + image-height + 2.5*description-top + content-top ,
block(width:content-width, height:content-height)[
#set text(weight: 300, size: 18pt)
#set par(justify: false, leading: 0.52em)
// #rect(width:100%, height:100%)
// content height dictates the leading to cater for both dense and sparse course descriptions
#let sizeme(body) = style(styles => {
let size = measure(body, styles)
let headingspace = if(size.height < 350pt) {1.2em} else if(size.height < 250pt) {0.6em} else {0.6em}
let spacer = if(size.height < 260pt) {1.2em} else if(size.height < 250pt) {0.6em} else {0em}
show heading: it => block(above: headingspace, below: 0.3em)[
#set align(left)
#set text(size: 18pt, fill: darkcol)
#set par(justify: true, leading: 0.2em)
#h(-0.5em)• #scaps(it.body, weight: "bold") //#size.height
]
v(spacer)
body
})
#sizeme(body)
]
)
// copyright
place(
dx: 33.5mm,
dy: 220mm ,
block(width:230mm, height:10mm)[
#set text(weight: "thin", size: 8pt)
#set align(right + top)
Image: #imagecredit]
)
// course info
place(
dx: 212mm,
dy: 265mm ,
block(width:70mm, height:10mm)[
#set par(justify: false, leading: 0.7em)
#set text(weight: 300, size: 13pt)
#set align(left + top)
#show strong: set text(weight: 400, fill: darkcol)
#let majorstring = [Major: ] + scaps(weight: "regular")[#coursemajor] + [\ ]
#if(coursemajor != none) {show strong: scaps; majorstring ; v(0.1em)} else {}
#strong(coursepoints) points • trimester #strong(coursetrimester) \
#courseformat
#let lect = if (courselecturers.len() == 1 ) {"Lecturer"} else {"Lecturers"}
#show strong: set text(weight: "regular")
#show courselecturers.at(0): emph.with()
#box[#text[#lect: #courselecturers.map(x => strong(x)).join(", ")]]
#show strong: scaps.with(tracking: 0pt)
_Pre-requisites:_\
#courseprereqs
]
)
// schedule
place(
dx: 0mm,
dy: pageheight - margin-shaded-bottom - schedule-spacer - 0.5*schedule-height,
block(width:pagewidth, height:schedule-height)[
#set align(center+horizon)
#set text(weight: "extralight", size: 9pt, fill:black)
#show regex(courseid): text.with(weight: "semibold", fill: col)
#show: scaps
#table(
columns: (auto, auto, auto, auto),
inset: 0pt, row-gutter:7pt, column-gutter: 20pt, stroke: none,
align: horizon,
[#courses.Y1T1], [#courses.Y2T1], [#courses.Y3T1], [#courses.Y4T1],
[#courses.Y1T2], [#courses.Y2T2], [#courses.Y3T2], [#courses.Y4T2],
)
])
} // end body
|
https://github.com/mrcinv/nummat-typst | https://raw.githubusercontent.com/mrcinv/nummat-typst/master/07_page_rank.typ | typst |  | = Invariant distribution of a Markov chain
== Task

- Implement the power method for finding the largest eigenvalue.
- Use the power method to find the invariant distribution of a Markov chain with a
  given transition matrix $P$. Find the invariant distributions for the following two examples:
  - a chain describing a knight (the chess piece) jumping around a chessboard,
  - a chain describing browsing a mini web of 5-10 pages (similarly, web search engines #link("https://en.wikipedia.org/wiki/PageRank")[rank pages by relevance]).
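A minimal sketch of the power method applied to a transition matrix (our own illustration, not part of the course material); it repeatedly applies the matrix to a probability vector until the vector stops changing:

```python
import numpy as np

def invariant_distribution(P, tol=1e-12, max_iter=10_000):
    """Power iteration on a row-stochastic transition matrix P.

    Iterates p <- p P; the fixed point is the invariant distribution,
    i.e. the left eigenvector of P for the eigenvalue 1.
    """
    n = P.shape[0]
    p = np.full(n, 1.0 / n)          # start from the uniform distribution
    for _ in range(max_iter):
        p_next = p @ P
        p_next /= p_next.sum()       # guard against round-off drift
        if np.abs(p_next - p).max() < tol:
            return p_next
        p = p_next
    return p

# Two-state example; the invariant distribution is (1/3, 2/3).
P = np.array([[0.8, 0.2],
              [0.1, 0.9]])
pi = invariant_distribution(P)
```

For the chessboard and mini-web examples, only the construction of $P$ changes; the iteration itself stays the same.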
|
https://github.com/Seasawher/typst-cv-template | https://raw.githubusercontent.com/Seasawher/typst-cv-template/main/resume.typ | typst | #show link: underline
#set page(
margin: (x: 0.9cm, y: 1.3cm),
)
#set par(justify: true)
#let chiline() = {v(-3pt); line(length: 100%); v(-5pt)}
= 履歴書
2023年5月 現在
#grid(
columns: (3fr, 1fr),
align(center)[
#table(
columns: (1fr, 3fr),
inset: 10pt,
align: horizon,
[氏名],[],
[氏名ふりがな],[],
[生年月日], [西暦2000年1月1日],
[現住所],[
〒100-000 東京都千代田区
],
[携帯],[000-1111-2222],
[メール],[<EMAIL>]
)
],
align(center)[
#image("faceimage.jpg", width: 90%)
]
)
== 経歴
#table(
columns: (1fr, 4fr),
inset: 10pt,
align: horizon,
[西暦], [学歴],
[], [],
[], [],
[], [],
[], [],
)
#table(
columns: (1fr, 4fr),
inset: 10pt,
align: horizon,
[西暦], [職歴],
[], [],
[], [],
[], [],
[], [現在に至る]
)
|
|
https://github.com/hei-templates/hevs-typsttemplate-thesis | https://raw.githubusercontent.com/hei-templates/hevs-typsttemplate-thesis/main/02-main/05-implementation.typ | typst | MIT License | #import "../00-templates/helpers.typ": *
#pagebreak()
= Implementation <sec:impl>
#lorem(50)
#minitoc(after:<sec:impl>, before:<sec:validation>)
#pagebreak()
== Section 1
#lorem(50)
== Section 2
#lorem(50)
== Conclusion
#lorem(50)
|
https://github.com/sysu/better-thesis | https://raw.githubusercontent.com/sysu/better-thesis/main/specifications/bachelor/acknowledgement.typ | typst | MIT License | // 利用 state 捕获摘要参数,并通过 context 传递给渲染函数
#import "/utils/style.typ": 字号, 字体
#import "/utils/indent.typ": fake-par
// 致谢内容
#let acknowledgement-content = state("acknowledgement", [
致谢应以简短的文字对课题研究与论文撰写过程中曾直接给予帮助的人员(例如指导教师、答疑教师
及其他人员)表达自己的谢意,这不仅是一种礼貌,也是对他人劳动的尊重,是治学者应当遵循的学
术规范。内容限一页。
])
#let acknowledgement(
body
) = {
context acknowledgement-content.update(body)
}
#let acknowledgement-page() = {
  // Acknowledgement and appendix body text: SimSun, size "small four"
set text(font: 字体.宋体, size: 字号.小四)
  // Acknowledgement and appendix headings: SimHei, size "three", centered
show heading.where(level: 1): set text(font: 字体.黑体, size: 字号.三号)
  // Do not number the acknowledgement heading
show heading.where(level: 1): set heading(numbering: none)
  // Insert a fake paragraph to fix [the first paragraph of a chapter not being indented](https://github.com/typst/typst/issues/311)
show heading.where(level: 1): it => {
it
fake-par
}
[
= 致谢
#set par(first-line-indent: 2em)
#context acknowledgement-content.final()
#v(1em)
]
}
|
https://github.com/jrihon/multi-bibs | https://raw.githubusercontent.com/jrihon/multi-bibs/main/chapters/01_chapter/introduction.typ | typst | MIT License | #import "../../lib/multi-bib.typ": *
#import "bib_01_chapter.typ": biblio
== Introduction
#lorem(50)
We follow the nomenclature of IUPAC #mcite(("Iupac1983nucleicacids"), biblio)
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-105C0.typ | typst | Apache License 2.0 | #let data = (
("TODHRI LETTER A", "Lo", 0),
("TODHRI LETTER AS", "Lo", 0),
("TODHRI LETTER BA", "Lo", 0),
("TODHRI LETTER MBA", "Lo", 0),
("TODHRI LETTER CA", "Lo", 0),
("TODHRI LETTER CHA", "Lo", 0),
("TODHRI LETTER DA", "Lo", 0),
("TODHRI LETTER NDA", "Lo", 0),
("TODHRI LETTER DHA", "Lo", 0),
("TODHRI LETTER EI", "Lo", 0),
("TODHRI LETTER E", "Lo", 0),
("TODHRI LETTER FA", "Lo", 0),
("TODHRI LETTER GA", "Lo", 0),
("TODHRI LETTER NGA", "Lo", 0),
("TODHRI LETTER GJA", "Lo", 0),
("TODHRI LETTER NGJA", "Lo", 0),
("TODHRI LETTER HA", "Lo", 0),
("TODHRI LETTER HJA", "Lo", 0),
("TODHRI LETTER I", "Lo", 0),
("TODHRI LETTER JA", "Lo", 0),
("TODHRI LETTER KA", "Lo", 0),
("TODHRI LETTER LA", "Lo", 0),
("TODHRI LETTER LLA", "Lo", 0),
("TODHRI LETTER MA", "Lo", 0),
("TODHRI LETTER NA", "Lo", 0),
("TODHRI LETTER NJAN", "Lo", 0),
("TODHRI LETTER O", "Lo", 0),
("TODHRI LETTER PA", "Lo", 0),
("TODHRI LETTER QA", "Lo", 0),
("TODHRI LETTER RA", "Lo", 0),
("TODHRI LETTER RRA", "Lo", 0),
("TODHRI LETTER SA", "Lo", 0),
("TODHRI LETTER SHA", "Lo", 0),
("TODHRI LETTER SHTA", "Lo", 0),
("TODHRI LETTER TA", "Lo", 0),
("TODHRI LETTER THA", "Lo", 0),
("TODHRI LETTER U", "Lo", 0),
("TODHRI LETTER VA", "Lo", 0),
("TODHRI LETTER XA", "Lo", 0),
("TODHRI LETTER NXA", "Lo", 0),
("TODHRI LETTER XHA", "Lo", 0),
("TODHRI LETTER NXHA", "Lo", 0),
("TODHRI LETTER Y", "Lo", 0),
("TODHRI LETTER JY", "Lo", 0),
("TODHRI LETTER ZA", "Lo", 0),
("TODHRI LETTER ZHA", "Lo", 0),
("TODHRI LETTER GHA", "Lo", 0),
("TODHRI LETTER STA", "Lo", 0),
("TODHRI LETTER SKAN", "Lo", 0),
("TODHRI LETTER KHA", "Lo", 0),
("TODHRI LETTER PSA", "Lo", 0),
("TODHRI LETTER OO", "Lo", 0),
)
|
https://github.com/mismorgano/UG-DifferentialGeometry23 | https://raw.githubusercontent.com/mismorgano/UG-DifferentialGeometry23/main/Examenes/PrimerExamen-ABR.typ | typst |
#set text(font: "New Computer Modern Math")
== Problem 1

Let $gamma : I -> RR^3 without {0}$ be a regular curve such that $gamma''(t) != 0$ for all $t in I$ ($I subset RR$ is an interval).
Suppose that for every $t in I$ there exists a constant $lambda(t)$ such that $gamma(t) + lambda(t)n(t) = bold(0) in RR^3$.
Prove that there exists $c in RR_(>0)$ such that $angle.l gamma(t), gamma(t) angle.r = c$ for all $t in I$.

*Proof:*

Since $gamma$ is regular, we may assume without loss of generality that $gamma$ is parametrized
by arc length. We want to show that $angle.l gamma(t), gamma(t) angle.r = c$ for all $t in I$,
which is equivalent to showing $(angle.l gamma(t), gamma(t) angle.r)' = 0$; moreover
$(angle.l gamma(t), gamma(t) angle.r)' = 2angle.l gamma(t), gamma'(t) angle.r$, so it suffices
to prove that $angle.l gamma(t), gamma'(t) angle.r = 0$.

By hypothesis $gamma(t) + lambda(t)n(t) = bold(0)$, hence $gamma(t) = -lambda(t)n(t)$, and therefore:

$ angle.l gamma(t), gamma'(t) angle.r = angle.l -lambda(t)n(t) , gamma'(t) angle.r = -lambda(t) angle.l n(t), gamma'(t) angle.r, $

and since $gamma$ is parametrized by arc length, $gamma'$ and $n$ are orthogonal, because $n$ is proportional to $gamma''$ and $angle.l gamma', gamma'' angle.r = 0$. It follows that $-lambda(t) angle.l n(t), gamma'(t) angle.r = 0$, and
therefore $angle.l gamma(t), gamma'(t) angle.r = 0$, as required.
== Problem 2

Let $A = (-r, 0), B=(r, 0) in RR^2$, and let $A B = [-r, r] subset RR subset RR^2$ be the segment joining them, of length $2r$.

+ For every $l>2r$, prove that there exists a circle $C subset RR^2$ such that $A, B in C$ and, writing $C=C_1 union C_2$
  where $C_1$ and $C_2$ are the arcs with endpoints $A$ and $B$, $l$ is the length of $C_1$ (or of $C_2$).
+ Let $lambda$ be a curve joining $A$ with $B$. Suppose that the segment $A B$ followed by $lambda$ forms
  a closed, simple, convex curve. Let $R_1$ be the bounded region whose boundary is this curve, and let $R_2$ be the region
  bounded by $A B$ and $C_1$.
  Prove that the area of $R_2$ is greater than or equal to the area of $R_1$.

*Solution:*

+ For simplicity consider $C$ with center $O = (0, k) in RR^2$, with $k > 0$; let $C_1$ be the arc $A B$ and
  let $C_2$ be the arc $B A$ #footnote[In the clockwise direction.].
  The radius of the circle is $sqrt(r^2 + k^2)$, so its length is $l(C) = 2pi sqrt(r^2 + k^2)$; we are interested in
  $l(C_1)$ and $l(C_2)$.
  Let $alpha = angle B A O$; since $triangle.stroked.t A O B$ is isosceles we have $angle A B O = alpha$,
  hence $angle A O B = pi - 2alpha$, with $alpha in (0, pi/2)$.
  Since arc length is proportional to central angle,
  $ l(C_1) = (2pi sqrt(r^2 + k^2)(pi - 2alpha)) / (2pi) = sqrt(r^2 + k^2)(pi - 2alpha), $
  and likewise $l(C_2) = sqrt(r^2 + k^2)(pi + 2alpha)$. Moreover $tan(alpha) = k/r$, so $alpha = arctan(k/r)$, and hence
  $ l(C_1) = sqrt(r^2 + k^2)(pi - 2arctan(k/r)), quad l(C_2) = sqrt(r^2 + k^2)(pi + 2arctan(k/r)), $
  both continuous functions of $k$ (with $r$ fixed).
  Finally note that
  $ lim_(k -> 0) l(C_1) = lim_(k -> 0) l(C_2) = pi r, $
  while, using $pi - 2arctan(k/r) = 2arctan(r/k)$,
  $ lim_(k -> infinity) l(C_1) = lim_(k -> infinity) 2 sqrt(r^2 + k^2) arctan(r/k) = 2r, quad lim_(k -> infinity) l(C_2) = infinity. $
  Hence $l(C_1)$ takes every value in $(2r, pi r)$ and $l(C_2)$ every value in $(pi r, infinity)$; for $l = pi r$ we may take
  $k = 0$. Since $2r < l < infinity$, continuity (the intermediate value theorem) gives a circle $C$ with
  $l(C_1) = l$ or $l(C_2) = l$, as required.
+ In this case we may assume that $l(C_1) = l(gamma)$. Let $D$ be the curve $C_2$ or its reflection across the $x$-axis,
  chosen so that $gamma$ and $D$ lie in opposite half-planes determined by $A B$;
  then $gamma$ followed by $D$ is still a simple closed curve.
  Note that the area enclosed by $gamma$ followed by $D$ is $A(gamma) + A(D)$
  #footnote[Here we consider the area between each curve and $A B$.], and the area enclosed by $C_1$ and $C_2$ is
  $A(C_1) + A(C_2)$; by the definition of $D$ we have $A(C_2) = A(D)$.
  From the above, $l(C) = l(C_1 union C_2) = l(gamma union D)$; hence, by the isoperimetric
  inequality, since $gamma union D$ is a simple closed curve, we have $A(gamma union D) <= A(C_1 union C_2)$, that is,
  $ A(gamma) + A(D) <= A(C_1) + A(C_2), $
  from which the claim follows, since $A(gamma) = R_1$ and $A(C_1) = R_2$. |
|
https://github.com/chomosuke/PMC-notes | https://raw.githubusercontent.com/chomosuke/PMC-notes/master/notes.typ | typst | #set page(numbering: "1")
= Introduction
== Background and Motivation
- Parallel computing is a part of HPC.
- HPC also includes everything else that makes the computation fast.
- No point parallelizing without increasing performance.
- You might want to optimize for the architecture.
- Sometimes overhead outweighs benefits from parallelization.
- Focusing on parallel algorithms.
- Different version of parallel algorithms suits different architecture or models.
- Parallel computing has many applications.
- People made super computers throughout the 1900s
- Super computers rely on carefully designed interconnects.
- Cloud "supercomputers" are often just clusters of commodity cloud instances (e.g. AWS).
- Many aspects #image("aspects.png", width: 75%)
== Complexity
- $f(n) = O(g(n)) =>$ $f$ grows no faster than $g$
- $f(n) = Omega(g(n)) =>$ $f$ grows no slower than $g$
- $f(n) = o(g(n)) =>$ $f$ grows slower than $g$
- $f(n) = omega(g(n)) =>$ $f$ grows faster than $g$
- $f(n) = Omega(g(n)) and f(n) = O(g(n)) => f(n) = Theta(g(n))$
- Strictly speaking we should really use $in$ instead of $=$
- Some common name for complexities:
- Constant
- Logarithmic
- Polylog: $(log(n))^c$
- Linearithmic: $n log n$
- Quadratic: $n^2$
- Polynomial or geometric
- Exponential
- Factorial
- Log factor are often ignored.
#pagebreak()
== Model
- RAM model: _#text(blue)[random access machine]_
- Common model when we talk about sequential time complexity.
- Multiplying the number of computers by a constant factor doesn't change the complexity.
- Solution: allow $p$, the number of processors to increase with problem size and hence reduces
the complexity.
=== PRAM
- Parallel Random Access Machine
- $p$ number of RAM processors, each have private memory and share a large shared memory, all memory
access takes the same amount of time.
- Does things synchronously, AKA in lockstep.
- PRAM pseudo code looks like regular pseudo code but there's this\
*for* $i <- 0$ *to* $n - 1$ *do in parallel*\
*processor* i *does* thingy
Many different PRAM model
- EREW: exclusive read, exclusive write
- CREW: concurrent read, exclusive write
- CRCW: concurrent read, concurrent write
- Concurrent write have different types
- COMMON: Error when two processor tries to write to the same location with different value.
- ARBITRARY: Pick a arbitrary processor if many processor writes the same time.
- PRIORITY: Processor with lowest ID writes.
- COMBINING: Runs a function whenever multiple processors tries to write at the same time.
- Too powerful.
- ERCW: exclusive read, concurrent write (never used)
Power of model: expresses the set of all problems that can be solved within a certain complexity.
- A is more powerful that B if A can solve a larger set of problems within any complexities.
- A is equally powerful as B if they can solve the same set problems within any complexities.
- Partial ordering.
- COMMON, ARBITRARY, PRIORITY and COMBINING are in increasing order of power.
- Any CRCW PRIORITY PRAM can be simulated by an EREW PRAM with a complexity increase of
  $cal(O)(log p)$
#v(8pt)
- _#text(blue)[Parallel Computation Thesis]_: any thing can be solved with a Turing Machine with
polynomially bounded space can be solved in polynomially bounded space with unlimited processors.
- Unbounded _#text(blue)[word sizes]_ are not realistic, so we limit word sizes to $cal(O)(log p)$ bits
- _#text(blue)[Nick's Class]_ (NC): Solvable in polylog time with ploy number of processors.
- Widely believed that $bold("NP") != bold("P")$
#pagebreak(weak: true)
== Definitions (need to remember)
- $w(n) = t(n) times p(n)$ where $w(n)$ is the work / cost, $t(n)$ is the time and $p(n)$ is the
number of processors.
- Optimal processor allocation means: $t(n) times p(n) = Theta(T(n))$ where $T(n)$ is the time
taking by a sequential algorithm.
- Equivalent to $t(n) times p(n) = O(T(n))$ because $t(n) times p(n) = Omega(T(n))$ always.
- $"Speedup"(n) = T(n) / t(n)$
- Speedup optimal = processor optimal.
- Optimal: processor optimal AND $t(n) = cal(O)(log^k n)$
- Processor optimal and polylog in time.
- Efficient: Assume $T(n) = Omega(n)$ $w(n) = cal(O)(T(n) log^alpha n)$ AND polylog in time
- Optimal but polylog increase in work.
- #text(blue)[_size_]: $"Size"(n)$ is the total number of operations it does.
- #text(blue)[_efficiency_]: $eta(n)$ speedup per processor
- $eta(n) = T(n) / w(n) = "Speedup"(n) / p(n)$
#v(8pt)
- You can decrease the processor count from $p_1$ to $p_2 < p_1$ at the cost of increasing $t$ by a
  factor of $O(p_1/p_2)$; $w(n)$ keeps its complexity.
  - The reverse direction does not work in general.
=== Brent's Principle (important)
- If something can be done with size $x$ and $t$ time with infinite processors, then it can be done
in $t + (x - t) / p$ time with $p$ processors
=== Amdahl's Law
- Maximum speedup: if $f$ is the fraction of time that can't be parallelized, then
$"Speedup"(p) -> 1/f "as" p -> infinity$
- Honestly very obvious.
=== Gustafson's Law
- $s$ is fraction time of serial part, $r$ is fraction time of parallel part, then
$"Speedup"(p) = Omega(p)$
- Very obvious again...
== Algorithms
- sum
- logical or
- Maximum
- $n^2$ processors all compare all elements and set is_max array to false if element isn't
maximum.
- Only processor with element being max write it to the returning memory address.
- Maximum$n^2$
- $cal(O)(log log n)$
- $n$ processor on $n$ elements.
- Is efficient
- Make elements into a square, find maximum on each row recursively.
- Find maximum of maximum of the rows using maximum.
- $cal(O)(log log n)$ levels of recursion, each level takes $cal(O)(1)$ times
- Element Uniqueness
    - Allocate an array of size MAX_INT.
    - Each processor writes its ID to the array at the index given by its element.
    - Each processor then checks whether its ID is still there; if not, another processor holds the
      same element.
- Replication
- $O(log n)$
- Replication optimal
- $p = n / log(n)$ and copy at the end.
- Broadcast
- Just replicate
- Simulate PRIORITY with COMMON $n^2$
- Minimum version of Maximum
- Simulate PRIORITY with EREW
  - Suppose every processor wants to write.
- Sort array A of tuples (address, processorID) using Cole's Merge Sort.
- For each processor k, if $A[k]."address" != A[k-1]."address"$ then $A[k]."processorID"$ is the
smallest ID that wants to write to that address.
= Architecture
- Fetch Decode Execute WriteBack
#image("cpu-cycles.png", width: 60%)
- Bus is a wire and everyone can see everything on that wire.
#v(8pt)
- Pipeline: overlap the fetch/decode/execute/write-back stages of the next 4 instructions
  - Sometimes needs to predict (branch-predict) what the next 4 instructions are.
- Superpipeline: same idea with more, shorter stages, so 8 (or more) instructions are in flight.
- Superscalar: Multiple pipeline in parallel
#v(8pt)
- Word size: 64 bits, 32 bits etc, various aspects:
- Integer size
- Float size
- Instruction size
- Address resolution (mostly bytes)
- Single instruction multiple data SIMD
- Make word size more complicated
#v(8pt)
- Coprocessor
- Used to means stuff directly connected to the CPU like a floating point processor.
- Now can means FPGA or GPU.
#v(8pt)
- A multicore processor is essentially several single cores duplicated on one die, plus one extra
  shared cache.
#v(8pt)
- Classification of parallel architectures
- SISD regular single core.
- SIMD regular modern single core.
- MIMD regular multicore.
  - MISD has essentially no real implementations.
- SIMD vs MIMD
- Effectively SIMD vs non-SIMD
- Most processor have multicore and SIMD on each core.
- So a balance between the two.
- SIMD cores are larger so less of them fit on a die.
- SIMD is faster at vector operations.
- SIMD is not useful all the time so sometimes the SIMD part sit idle.
- SIMD is harder to program.
#v(8pt)
- Shared memory: All memory can be accessed by all processors.
  - All memory accesses take truly equal time: symmetric multi-processor (SMP).
    - Can only have so many cores, since the shared bus is only so fast.
    - Adding more buses doesn't help, because the extra physical distance also slows things down.
- Sometimes can be done with switching interconnect network.
- Some processor access some memory faster.
- More complex network.
    - Distributed shared memory: each processor has its own memory, but an interconnect network
      exists so you can read other processors' memory.
- #text(blue)[_non-uniform memory access_] NUMA
- Static interconnect network: each node connect to some neighbors.
- #text(blue)[_degree_]: just like degree in graphs.
- #text(blue)[_diameter_]: just like in graphs.
- #text(blue)[_cost_] $= "degree" times "diameter"$
  - Distributed memory: each processor has its own memory; each process lives on one processor.
#v(8pt)
- Blade contains Processor / Package / Socket which contains Core which contains ALU.
#v(8pt)
- Implicit vs explicit: explicit $->$ decision made by programmer
- Parallelism: Can I write a sequential algorithm.
- Decomposition: Can I pretend threads processes doesn't exist.
- Mapping: Can I pretend all cores are the same.
- Communication.
#v(8pt)
- Single Program Multiple Data: one exe
- Multiple Program Multiple Data: multiple exe
Other HPC considerations
- Cache friendliness
- Processor-specific code
- Compiler optimization.
    - Compilers from CPU makers are usually better.
      - E.g. the Intel compiler is often better than both clang and gcc on Intel CPUs.
Memory interleaving
- A memory module takes a while to recharge after an access, so consecutive addresses within a page
  are interleaved across different memory modules.
Automatic Vectorization
- Sometimes compilers automatically insert SIMD instructions in place of loops.
- Depends on the availabilities of a lot of things, including the OS.
- Manual SIMD: #image("manual-SIMD.png")
- AVX have to be aligned: i.e. 256 bits SIMD have to be 256 bits aligned - address is multiple of
256 bits.
Multithreading
- Synchronization is more expensive if threads are on cores further away.
- It's expensive in general.
#v(8pt)
- Instruction reordering: thread continues with other instructions while it waits on earlier
instructions.
- Speculative execution: don't wait on instructions, just go for it and if it fails then unroll.
#v(8pt)
- Some programming patterns are more friendly to NUMA.
Message passing considerations
- Multiprocessing has to pass messages around because processes don't share an address space.
- Hard to predict performance.
Wants good communication patterns
- For multithread multiprocess: #image("mtmp-comm.png", width: 60%)
- For single thread multiprocess: #image("stmp-comm.png", width: 60%)
= OpenMP
- Abstracts single process, multithreaded program execution on a single machine.
- Abstracts: Multi-socket, multi-core, threads, thread synchronization, memory hierarchy, SIMD,
NUMA.
- Almost everything OMP specifies is a hint to the runtime.
#v(8pt)
- #text(blue)[_internal control variable_]: ICV: OMP_NUM_THREADS, OMP_THREAD_LIMIT, OMP_PLACES,
OMP_MAX_ACTIVE_LEVELS.
- Can also be set with functions in `#include "omp.h"`
Execution Model
- There's an implicit parallel region on the outside.
- There's by default an implicit barrier at the end of each parallel region.
  - The `nowait` clause removes the implicit barrier
- If a parallel region is encountered, then the threads split and a new team is created.
  - Nesting many parallel regions can create a lot of threads very quickly.
- Can limit nesting by `OMP_MAX_ACTIVE_LEVELS`.
Memories: global, stack, heap
- Threads have their own stack but share global and heap.
== Directives
- The `#pragma omp` thingy.
- Allows specifying parallelism and still allow the base language to be the same.
    - Theoretically, simply removing the directives leaves a program that runs like a sequential
      program.
- Syntax:
  ```
  #pragma omp <directive name> [[,]<clause> [[,]<clause> [...]]]
  <statement / block>
  ```
- Multiple directives can be applied to one following block
- Some directives are #text(blue)[_stand alone_], they don't have structured block following them.
Synchronization
- #text(blue)[_thread team_] is a group of threads.
- `barrier` will block threads in a team that reach it early.
- `flush` will enforce consistency between different thread's view of memory.
- `critical` ensures a critical region where only one thread can be in it at a time.
- `atomic` is faster than critical but only for simple operations.
#v(8pt)
- `simd` make use of SIMD instructions.
- #text(blue)[_places_]: specify how processing units on the architecture are partitioned.
#v(8pt)
- Thread encounters a parallel directive -> split itself into the number of threads.
- `#pragma omp parallel`
- create some number of threads and do its thing.
- clauses:
- `num_threads(int)` overrides ICV, limited by OMP_THREAD_LIMIT
- `private(list of variables)` each thread will have own memory allocated to private
variable.
      - This is the default for variables on the thread's stack (declared inside the region).
- `shared(list of variables)` all thread share the same variables, same piece of memory.
- OpenMP will add locks.
- `threadprivate(list of variables)` variable stay with the thread if all threadprivate
directives are identical.
- Can combine with `for`, `loop`, `sections` and `workshare`.
- `#pragma omp for`
- clauses:
- `schedule([modifier[, modifier]:]kind[, chunk_size])`
- kind:
- `static`: divided into chunk_size (default $"iterations" / "num threads"$) and
distributed round-robin over the threads.
- `dynamic`: chunks of chunk_size (default 1) distributed to threads as they
complete them.
- `guided`: like `dynamic` but varying `chunk_size`, large chunks at the start and
small chunks at the end.
- `auto`: default.
- `runtime`: determined by `sched-var` ICV.
- modifier:
- `monotonic`: chunks are given in increasing logical iteration
- `nonmonotonic`: default, allows #text(blue)[_work stealing_]: I finished early, I
will now take your work.
- `simd`: try to make the loop into SIMD constructs.
- `collapse(n)`: n nested loops are combined into one large logical loop.
- `ordered[(n)]`: There are operations in the loop that must be executed in their logical
order.
- `reduction([reduction-modifier, ] reduction-modifier:list)`: a list of variable that will
be used in a reduction operation.
- Allowed operations: +, -, \*, &, ^, &&, ||, max, min
- Example: `#pragma omp parallel for reduction(+:x)`, `x` is the result, `+` is the
operation.
- `x` starts as a private variable initialized to the identity value.
- global `x` will be assigned to the sum of all `x`s at the end.
- `#pragma omp loop`
- Work for any loop, not just for.
- Main diff to for is `bind`
- `#pragma omp sections`
- Have `#pragma omp section` inside.
- Each `#pragma omp section` gets executed by one thread.
- clauses:
- `private(list of variables)`: each thread will have its own version of the variable.
- `firstprivate(list of variables)`: same as private but memory is initialized to the
global version.
- `lastprivate(list of variables)`: the thread executing the sequentially last iteration (or
  lexically last section) copies its private value back to the global version.
- A variable can be firstprivate and lastprivate at the same time.
- `#pragma omp single`
- Only do it in a single thread in the team, used inside `#pragma omp parallel`
- `private(list of variables)`: each thread will have its own version of the variable.
- `firstprivate(list of variables)`: same as private but memory is initialized to the
global version.
- `#pragma omp workshare`
- Here's a bunch of independent statements / blocks, figure out how to parallelize it.
- `#pragma omp atomic`
- critical for read, write, update (`x += 1`), compare (`if (expr < x) x = expr;`).
- `#pragma omp critical [(name) [[,] hint(hint-expression)]]`
- clauses:
- `(name)`: two critical region with the same name can't happen at the same time.
- All no name critical region are treated as having the same name.
- `hint(hint-expression)`:
- `omp_sync_hint_uncontended`
- `omp_sync_hint_contended`
- `omp_sync_hint_speculative`: try to speculate.
- `omp_sync_hint_nonspeculative`: don't try to speculate.
- `#pragma omp ordered`
- Inside loops so that they're executed in their logical order.
- `#pragma omp barrier`
- Explicit barrier.
- `#pragma omp flush`
- Sync cache.
- Be aware of code reordering.
- `#pragma omp task`
- The #text(blue)[_Task Model_]: specify work without allocating work to threads.
- Task is a unit of work.
- Task have dependencies such as completion of other tasks.
- Task may generate other tasks.
- Uses many same clauses such as `private`, `shared` and `firstprivate`.
- Task can have data affinity.
- clauses:
- `depend([depend-modifier,] dependence-type:locator-list)`.
- `priority(int)`: hint of order of execution.
- `affinity([aff-modifier :] locator-list)`
- `#pragma omp taskloop`
- clauses:
- `num_tasks([strict:]num-tasks)`: specify the number of tasks that will be generated.
- `grainsize([strict:]grainsize)`: how many iteration per task.
- `#pragma omp taskwait`
- Wait for all current child tasks to finish
\
Places
- OMP_PLACES: list of power units by their identifiers
- `{0,1,2,3},{4,5,6,7}`: specify two places each with 4 processing units.
- use `hwloc-ls` to find processing unit number.
- `threads(8)`: 8 places on 8 hardware threads
- `cores(4)`: 4 places on 4 cores.
- `ll_caches(2)`: 2 places on 2 set of cores where all the cores in a set shares their last
level cache.
  - `numa_domains(2)`: 2 places on 2 sets of cores whose closest memory is the same or a similar
    distance away.
- `sockets(2)`: 2 places on two sockets
- OMP_PLACES partition power units into places. Which can then be referred to by
`proc_bind(type)` clause in `parallel` directives.
- `proc_bind(type)`: overrides OMP_PROC_BIND, only in `parallel` directives.
- `primary`: All threads created in the team are in the same place.
- `close`: Threads are allocated to places in a round-robin fashion - first thread in place i,
second thread in place i + 1, third thread in place i + 2
  - `spread`: place threads so that the distance between their processing-unit IDs is as large as
    possible.
Memory
- Sending memory to other NUMA domains costs cache as well, because the copy has to be performed by
  a CPU and therefore passes through its cache.
- OpenMP memory classification:
- `omp_default_mem_space`: DRAM
- `omp_large_cap_mem_space`: SSD
- `omp_const_mem_space`: optimized for read only
- `omp_high_bw_mem_space`: high bandwidth
- `omp_low_lat_mem_space`: low latency.
- Memory allocator have traits:
- `sync_hint`: expected concurrency - `contended` (default), `uncontended`, `serialized`,
`private`
- `alignment`: default byte.
- `access`: which thread can access the memory, `all` (default), `cgroup`, `pteam`, `thread`
- `pool_size`: total amount of memory the allocator can allocate.
- `fallback`: on error return null or exit, default is first try standard allocator and return
null if fail.
- `partition`: `environment` (default), `nearest`, `blocked`, `interleaved`. How is the
allocated memory partitioned over the allocator's storage resource.
= Prefix Sum
- Doesn't have to be sum, can also be any other associative operations (like prod, min, max).
- The only way to reduce depth is to increase size (hopefully only slightly).
== Upper/Lower parallel prefix algorithm
- Divide array into two parts and compute their prefix sum.
- Add the sum of the first part to the second part.
- $Theta(log n)$ time complexity
- $Theta(n log n)$ work
- $Theta(n log n)$ size
- Half of the processors are idle all time except first iteration. (can probably be easily fixed)
== Odd/Even parallel prefix algorithm
- Divide array into odd and even indices parts.
- Add odd indices to even indices.
- Compute prefix of even part recursively.
- Now the even part contains the correct prefix.
- Compute the odd part in one parallel step.
- Same complexity as Upper/Lower, but 2 times slower.
== Ladner and Fischer's parallel prefix algorithm.
- Optimal possible time.
- Split array into two parts and use odd even for the first part, upper lower for the second part.
- Odd even for the first part is beneficial because the last element is available one step
earlier.
== Pointer jumping
- Every processor replaces its `next` pointer with `next.next`, so after $k$ iterations each pointer
  jumps $2^k$ elements ahead.
= Sorting
- Merge sort parallelized is $O(n)$ because last merge is sequential.
- Quick sort parallelized is $O(n)$ because the first split is sequential.
== Parallel merge
- $cal(O)(n/p + log p)$ or $cal(O)(log n)$ where $p = n$
- Two sorted list, assume all value are below $n$ where $n$ is the length of the resulting array.
- Count unique value for both of them.
- Write the sum of count for both array to result array with index $X$.
- Now the count is sorted.
- Compact the result array.
- Use prefix sum to space the resulting array evenly so that there are $"count" - 1$ null element
after even element.
- Use distribution to fill out the rest of the array.
=== Compaction
- Move all non null element to the first part of the array.
- Use prefix sum to count the index of each empty element.
- Move each non empty element to its index.
=== Unique Counts
- Sorted array to (value, count) element.
- Find all places where the adjacent values are different.
- Use prefix sum to find their index.
- Reverse engineer their count with old indices.
=== Distribution
- Given an array with some null values, fill each null with the closest non-null value to its left.
- Best complexity is achieved with simple broadcast.
+ Use prefix sum and unique count to figure out how many empty element are after each non empty one.
+ Do sequential distribute with each processor.
  + For each processor whose first element is null:
    + It still needs a fill value.
    + Use the info obtained at the very first step to calculate how long this null sequence lasts.
    + All processors involved in the null sequence obtain the fill value by broadcast.
== Rank sort
- For each element, count the number of smaller elements (breaking ties by index); that count is the
  element's final position, so write it straight there.
- Use $n^2$ processors. Can count the index in $cal(O)(log n)$ time.
- With a combine PRAM, it can be done in $cal(O)(1)$ time.
=== Rank Merge
- Much simpler than Parallel merge, just use binary search to find the ranks.
== Bitonic MergeSort
- A bitonic sequence is the concatenation of two monotonic sequences, one increasing and one
  decreasing.
- You can find the pivot and turn this into a regular merge.
- Alternatively:
    - Compare (and maybe swap) element $i$ of the first half with element $i$ of the second half
      (neither half reversed).
    - You end up with two bitonic halves, where no element of the first exceeds any element of the
      second.
    - Recurse on both halves and the sequence is sorted.
- Same time complexity.
You can use bitonic sort to do bitonic merge sort, by keep constructing bitonic lists and merging
them with bitonic sort.
= Matrix Multiplication
- Matrix multiplication doesn't have dependencies between them, so easy to parallelize with less
than $n^2$ processors.
- Pretty trivial to parallelize in ideal conditions, so we will focus on practical side of matrix
multiplication.
- For huge matrices, we can divide them into smaller one, multiply the smaller ones, and then sum
the smaller ones. #image("divide-matrix.png", width: 60%)
- You can divide the matrices into 4 parts recursively, until the matrix is small enough to fit into
the cache.
= Gaussian elimination
- It's common for matrices to be sparse, i.e. mostly zero.
- Gaussian elimination is for dense matrices.
- Solving system of linear equation by getting rid of variables one by one by rewriting them in
terms of other variables. #image("gaussian-elimination.png", width: 50%)
- If the pivot coefficient of a variable is close to zero, we run into numerical problems.
- We can swap this row with the rows below to fix this. This is called #text(blue)[_partial
pivoting_]
- Partial because the columns aren't being swapped.
- Can be done in $cal(O)(log_2 n)$
- We will ignore partial pivoting to simplify our problem. (make it more theoretical)
- The elimination step can be parallelized, resulting in a time complexity of $cal(O)(n^2)$ with $p = n$
- We can better utilize processors when there are fewer than $n$ of them by using
  #text(blue)[_cyclic-striped partitioning_]. #image("cyclic-stripe-partitioning.png", width:
    40%)
= MPI
- Pass messages between processes. #image("mpi-model.png", width: 75%)
- Provide consistent interface to have portable code on different architectures.
- Can be sync or async
  - sync: returns only after the message has been received by the receiving process.
- async: returns asap and have some other thread or the kernel do the sending.
- MPI is a standardized API (a library interface), not a language.
== Communicator
- Communicators: a set of processes.
- `MPI_COMM_WORLD`: all processes.
- Rank: 0 based index of your processes given a communicator.
- Size: size of the communicator.
== Functions
- `int MPI_Init(int *argc, char ***argv)`: first MPI call, initialize the MPI execution environment,
takes away the MPI arguments by modifying `argc` and `argv`.
- `int MPI_Finalize()`: last MPI call.
- `int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)`
- `int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm,
MPI_Status *status)`
- For send and receive to succeed, rank must be valid, communicator must be same, tags must be
same (or say idk what tag), message datatype must be compatible.
- `int MPI_Bcast(void *buf, int count, MPI_Datatype, int root, MPI_Comm comm)`
- `int MPI_Scatter(void *buf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)`
  - One node sends different data to each of the other nodes.
- `int MPI_Gather(void *buf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)`
- Opposite of scatter.
- `int MPI_Allgather(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)`
- `int MPI_Alltoall(void * sendbuf, int sendcount, MPI_Datatype sendtype, void * recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm )`
- `int MPI_Alltoallv(void * sendbuf, int * sendcounts, int * sdispls, MPI_Datatype sendtype, void * recvbuf, int * recvcounts, int * rdispls, MPI_Datatype recvtype, MPI_Comm comm)`
- `int MPI_Reduce(void * sendbuf, void * recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)`
- `MPI_Op` can be: `MPI_MAX`, `MPI_MIN`, `MPI_SUM`, `MPI_PROD`, `MPI_LAND`, `MPI_BAND`,
`MPI_LOR`, `MPI_BOR`, `MPI_LXOR`, `MPI_BXOR`.
- `int MPI_Barrier(MPI_Comm comm)`
    - Mostly used to synchronize access to OS resources that are not controlled by MPI.
- `MPI_Ssend`: sync
- `MPI_Send` can be sync or async
- Sync send can deadlock if the receiver is waiting for a message with a different tag.
- `MPI_Bsend`: Async
- Many other type of `MPI_Send`
- Async send can be out of order.
- Can check status to check tag.
- `MPI_Comm_split(MPI_Comm comm, int color, int key, MPI_Comm *newcomm)`
- `int MPI_Cart_create(MPI_Comm comm_old, int ndims, int *dims, int *periods, int reorder, MPI_Comm *comm_cart)`
- Create cartesian topology
- `int MPI_Dims_create(int nnodes, int ndims, int *dims)`
- Will fill in the zeros in `dims`, will try to make dimensions as close to each other as
possible.
- `int MPI_Cart_rank(MPI_Comm comm, int *coords, int *rank)`
- Map coordinates to rank.
- `int MPI_Cart_coords(MPI_Comm comm, int rank, int maxdims, int *coords)`
- Map rank to coordinates.
- `int MPI_Cart_shift(MPI_Comm comm, int direction, int disp, int *rank_source, int *rank_dest)`
- Source is who will send to me
- Dest is who will I send to
- `int MPI_Sendrecv_replace(void *buf, int count, MPI_Datatype datatype, int dest, int sendtag, int source, int recvtag, MPI_Comm comm, MPI_Status *status)`
    - Takes the result from my source and replaces my data, which will then be sent to my dest.
- Can use with `MPI_Cart_shift` but not necessary.
- `MPI_PROC_NULL`: send or receive from and to `MPI_PROC_NULL` does nothing so we don't have to move
forward.
== Multithreading
- 4 levels options of support:
- 0, `MPI_THREAD_SINGLE`: only one thread will execute.
- 1, `MPI_THREAD_FUNNELED`: only one thread will make MPI calls.
    - 2, `MPI_THREAD_SERIALIZED`: calls will never be concurrent.
- 3, `MPI_THREAD_MULTIPLE`: No restrictions.
- Call `MPI_Init_thread(int *argc, char ***argv, int required, int *provided)` instead of `MPI_Init`
to declare you need multithreading.
== Topologies
- Examples: ring.
- MPI has functions that let you refer to "the node below me"; they calculate the rank for you.
- Cylinder: #image("cylinder.png", width: 50%)
- Cartesian: a grid that might or might not be cyclic with each dimensions.
- Graph topologies: explicitly list neighbours of each node
== Derived Types
- Can define new MPI_Datatype.
- Can be used anywhere a built-in `MPI_Datatype` can.
- Types are defined with other list of types and displacements (in bytes).
- `int MPI_Type_contiguous(int count, MPI_Datatype oldtype, MPI_Datatype *newtype)`: just an array
of the same type.
- Vector Datatype: #image("mpi-vector-datatype.png", width: 60%)
- Good for sub blocks of a matrix.
- `int MPI_Type_vector(int count, int blocklength, int stride, MPI_Datatype oldtype, MPI_Datatype *newtype)`
- `int MPI_Type_create_struct(int count, const int array_of_blocklengths[], const MPI_Aint array_of_displacements[], const MPI_Datatype array_of_types[], MPI_Datatype *newtype)`
- Can represent any struct.
- After create datatype, must call `int MPI_Type_commit(MPI_Datatype *datatype)`
- Can use an uncommitted datatype to build other datatypes.
- `int MPI_Type_free(MPI_Datatype *datatype)`
- Free a committed type's memory.
- The total count of fundamental datatype elements must match between send and receive.
= GPU
- GPGPU: General purpose GPU
- Many CUDA cores / streaming processors make up a streaming multiprocessor (SM).
- GPU have:
- Threads: smallest execution entity, each have their own id.
  - Block / warp: made up of threads that execute on a single multiprocessor; within a warp, every
    thread runs the same line of code at the same time.
  - Warps execute in SIMD fashion.
- Grid: a bunch of blocks that execute a kernel function (kinda like a regular function).
- Blocks are running the same code but not in lock step.
- Blocks are independent to each other, can go in any order.
- GPU have memory that's high throughput.
  - Global memory: main memory, ~100 ns latency.
- IO for grid
  - Shared memory (~128 kB per SM): ~5 ns, about the speed of L1 cache.
- per block
- Can use for collaboration within a block.
- Register and local memory: fastest, around 10x faster than shared memory, not as fast as
registers on CPU.
- Per thread
- Store stack vars.
- Compared with CPU
- CPU wants to run one thread very fast: Sophisticated control, powerful ALU, Large cache.
- GPU wants high throughput: simple control, small caches, many efficient ALU.
- CUDA: Compute Unified Device Architecture.
- Extends C/C++, can run on both GPU and CPU.
  - Abstracts from the hardware; the same code runs on many GPUs.
- Auto thread management.
- Vendor-lock to Nvidia.
- Hard to debug, don't have printf in early versions.
= CUDA optimization
- You want threads with similar indices to be doing the same thing, so that the whole block isn't
  waiting for a small portion of the block to finish something.
- You want to avoid reading from the same memory bank: read from a contiguous block of memory, so
  that consecutive threads hit different memory banks thanks to the striped memory layout.
- You don't want idle threads; you can avoid them by reducing the number of threads and increasing
  the load on each thread.
- You can try unrolling the loops.
- If the number of active threads is no more than the warp size, then you can remove `__syncthreads`
- Use Brent's theorem to do some sequential work first and then parallelize.
= Classes of parallel algorithms
- Embarrassingly parallel: tasks completely independent.
- Parametric: exactly same problem but different parameter.
- Data parallel: same operation but different data.
- Task parallel: different tasks running at the same time.
- Common for cloud computing, AKA micro-services.
- Loosely synchronous: some subset of threads have to synchronize.
- Synchronous: all threads have to synchronize.
== Image processing
- We only consider geometric transformations, and only those where each pixel can be calculated
  independently of all other pixels.
- Shifting: add a vector
- Scaling: multiply by a vector
- Rotation
- Cropping
- Smoothing: each pixel is a function of the surrounding pixels.
- We want to partition the image, but if a new pixel value depends on pixels outside the
  partition, then we need communication, and hence communication overhead.
=== Mandelbrot Set
- $z_(k + 1) = z_k^2 + c$, iterated until $|z_k| > 2$.
- If $|z_k|$ never exceeds 2 within the iteration limit, the pixel is black.
- We can't just statically partition the image, because some pixels take many iterations while
  others take almost none, leading to load imbalance.
Dynamic Load Balancing
- Threads finish one pixel and ask for another pixel.
=== Monte Carlo Methods
- Integrate a function by throwing darts into the bounding area and counting how many points lie
  under the curve.
#import "@preview/physica:0.8.1": *
=== Jacobi Iteration
- Solves systems of linear equations. Particularly good with sparse matrices.
- Can solve $pdv(f, x, 2) + pdv(f, y, 2) = 0$
- Discretize space $=>$ linear equations $=>$ solve iteratively
Gauss-Seidel relaxation
- Instead of strictly using the old values from the last iteration, use new values as soon as they
  are computed.
- Tends to converge faster.
https://github.com/drupol/master-thesis | https://raw.githubusercontent.com/drupol/master-thesis/main/resources/typst/essawy.typ | typst | Other |

#{
set text(
font: "<NAME>",
size: .9em,
)
box[
#grid(
columns: 1,
rows: 4,
gutter: 0pt,
polygon(
fill: blue.lighten(80%),
stroke: blue,
(0pt, 3.5cm),
(70pt, 0cm),
(140pt, 3.5cm),
),
polygon(
fill: blue.lighten(80%),
stroke: blue,
(0pt, 2cm),
(40pt, 0pt),
(180pt, 0pt),
(220pt, 2cm),
),
polygon(
fill: blue.lighten(80%),
stroke: blue,
(0pt, 2cm),
(40pt, 0pt),
(260pt, 0pt),
(300pt, 2cm),
),
polygon(
fill: blue.lighten(80%),
stroke: blue,
(0pt, 2cm),
(40pt, 0pt),
(340pt, 0pt),
(380pt, 2cm),
),
)
// Right line
#place(bottom + left)[
#line(length: 320pt, angle: 55deg, start: (216pt, 0pt))
]
// Left line
#place(bottom + right)[
#line(length: 320pt, angle: -55deg, start: (-400pt, 0pt))
]
// Left arrow
#place(top + left, dx: 161pt)[
#rotate(35deg)[
#polygon.regular(fill: black, size: 10pt, vertices: 3)
]
]
// Text left
#place(bottom + left, dx: 50pt, dy: -125pt)[
#rotate(-55deg)[
time
]
]
// Right arrow
#place(top + right, dx: -161pt)[
#rotate(-35deg)[
#polygon.regular(fill: black, size: 10pt, vertices: 3)
]
]
// Text right
#place(bottom + left, dx: 300pt, dy: -130pt)[
#rotate(55deg)[
effort
]
]
#place(center, dy: -45pt)[
Repeatability\
Original researcher,\ machine and data
]
#place(center, dy: -105pt)[
Runnability\
Original researcher and data\ other machine
]
#place(center, dy: -160pt)[
Reproducibility\
Original data\ other researcher and machine
]
#place(center, dy: -215pt)[
Replicability\
Other researcher,\
machine and data
]
]
}
https://github.com/parallel101/cppguidebook | https://raw.githubusercontent.com/parallel101/cppguidebook/main/misc/typst/cppguidebook.typ | typst | Other |

#set text(
font: "Noto Serif CJK SC",
size: 7pt,
)
#set page(
paper: "a6",
margin: (x: 1.8cm, y: 1.5cm),
header: align(right, text(5pt)[
小彭大典
]),
numbering: "1",
)
#set par(justify: true)
#set heading(numbering: "1.")
#show "小彭大典": name => box[
#text(font: "Arial")[✝️]小彭大典#text(font: "Arial")[✝️]
]
#let fun = body => box[
#box(image(
    "pic/awesomeface.png",
height: 1em,
))
#text(font: "LXGWWenKai", size: 1em, fill: rgb("#cd9f0f"))[#body]
]
#let tip = body => box[
#box(image(
"pic/bulb.png",
height: 1em,
))
#text(font: "LXGWWenKai", size: 1em, fill: rgb("#4f8b4f"))[#body]
]
#let warn = body => box[
#box(image(
"pic/warning.png",
height: 1em,
))
#text(font: "LXGWWenKai", size: 1em, fill: rgb("#ed6c6c"))[#body]
]
#let story = body => box[
#box(image(
"pic/book.png",
height: 1em,
))
#text(font: "LXGWWenKai", size: 1em, fill: rgb("#807340"))[#body]
]
#let detail = body => box[
#box(image(
"pic/question.png",
height: 1em,
))
#text(font: "LXGWWenKai", size: 1em, fill: rgb("#8080ad"))[#body]
]
#let space = block[]
#let comment = name => ""
#let codetab = (u, v, a, b, n) => table(
columns: int((a.len() + n - 1) / n) + 1,
inset: 3pt,
align: horizon,
..range(0, n).map(i =>
(
[#u], ..a.slice(int((a.len() * i + n - 1) / n), int((a.len() * (i + 1) + n - 1) / n)).map(c => c),
..range(0, if i == n - 1 { 1 } else { 0 } * (int((a.len() + n - 1) / n) - int(a.len() / n))).map(i => []),
[#v], ..b.slice(int((a.len() * i + n - 1) / n), int((a.len() * (i + 1) + n - 1) / n)).map(c => c),
..range(0, if i == n - 1 { 1 } else { 0 } * (int((a.len() + n - 1) / n) - int(a.len() / n))).map(i => []),
)
).join()
)
#align(center, text(14pt)[
*小彭老师的现代 C++ 大典*
])
小彭大典是一本关于现代 C++ 编程的权威指南,它涵盖了从基础知识到高级技巧的内容,适合初学者和有经验的程序员阅读。本书由小彭老师亲自编写,通过简单易懂的语言和丰富的示例,帮助读者快速掌握 C++ 的核心概念,并学会如何运用它们来解决实际问题。
#fun[敢承诺:土木老哥也能看懂!]
= 前言
推荐用手机或平板*竖屏*观看,可以在床或沙发上躺着。
用电脑看的话,可以按 `WIN + ←`,把本书的浏览器窗口放在屏幕左侧,右侧是你的 IDE。一边看一边自己动手做实验。
#image("pic/slide.jpg")
#fun[请坐和放宽。]
== 格式约定
#tip[用这种颜色字体书写的内容是温馨提示]
#warn[用这种颜色字体书写的内容是可能犯错的警告]
#fun[用这种颜色字体书写的内容是笑话或趣味寓言故事]
#story[用这种颜色书写的是补充说明的课外阅读,看不懂也没关系]
#detail[用这种颜色字体书写的是初学者可暂时不用理解的细节]
/ 术语名称: 这里是术语的定义。
== 观前须知
与大多数现有教材不同的是,本课程将会采用“倒叙”的形式,从最新的 *C++23* 讲起!然后讲 C++20、C++17、C++14、C++11,慢慢讲到最原始的 C++98。
不用担心,越是现代的 C++,学起来反而更容易!反而古代 C++ 才*又臭又长*。
很多同学想当然地误以为 C++98 最简单,哼哧哼哧费老大劲从 C++98 开始学,才是错误的。
为了应付缺胳膊少腿的 C++98,人们发明了各种*繁琐无谓*的写法,在现代 C++ 中,早就已经被更*简洁直观*的写法替代了。
#story[例如所谓的 safe-bool idiom,写起来又臭又长,C++11 引入一个 `explicit` 关键字直接就秒了。结果还有一批劳保教材大吹特吹 safe-bool idiom,吹得好像是个什么高大上的设计模式一样,不过是个应付 C++98 语言缺陷的蹩脚玩意。]
就好比一个*老外*想要学习汉语,他首先肯定是从*现代汉语*学起!而不是上来就教他*文言文*。
#fun[即使这个老外的职业就是“考古”,或者他对“古代文学”感兴趣,也不可能自学文言文的同时完全跳过现代汉语。]
当我们学习中文时,你肯定希望先学现代汉语,再学文言文,再学甲骨文,再学 brainf\*\*k,而不是反过来。
对于 C++ 初学者也是如此:我们首先学会简单明了的,符合现代人思维的 C++23,再逐渐回到专为伺候“古代开发环境”的 C++98。
你的生产环境可能不允许用上 C++20 甚至 C++23 的新标准。
别担心,小彭老师教会你 C++23 的正常写法后,会讲解如何在 C++14、C++98 中写出同样的效果。
这样你学习的时候思路清晰,不用被繁琐的 C++98 “奇技淫巧”干扰,学起来事半功倍;但也“吃过见过”,知道古代 C++98 的应对策略。
#tip[目前企业里主流使用的是 C++14 和 C++17。例如谷歌就明确规定要求 C++17。]
== 举个例子
#story[接下来的例子你可能看不懂,但只需要记住这个例子是向你说明:越是新的 C++ 标准,反而越容易学!]
例如,在模板元编程中,要检测一个类型 T 是否拥有 `foo()` 这一成员函数。如果存在,才会调用。
在 C++20 中,可以使用很方便的 `requires` 语法,轻松检测一个表达式是否能合法通过编译。如果能,`requires ` 语句会返回 `true`。然后用一个 `if constexpr` 进行编译期分支判断,即可实现检测到存在则调用。
```cpp
template <class T>
void try_call_foo(T &t) {
if constexpr (requires { t.foo(); }) {
t.foo();
}
}
```
但仅仅是回到 C++17,没有 `requires` 语法,我们只能自己定义一个 trait 类,并运用烦人的 SFINAE 小技巧,检测表达式是否的合法,又臭又长。
```cpp
template <class T, class = void>
struct has_foo {
inline constexpr bool value = false;
};
template <class T>
struct has_foo<T, std::void_t<decltype(std::declval<T>().foo())>> {
inline constexpr bool value = true;
};
template <class T>
void try_call_foo(T &t) {
if constexpr (has_foo<T>::value) {
t.foo();
}
}
```
如果回到 C++14,情况就更糟糕了!`if constexpr` 是 C++17 的特性,没有他,要实现编译期分支,我们就得用 `enable_if_t` 的 SFINAE 小技巧,需要定义两个 try_call_foo 函数,互相重载,才能实现同样的效果。
```cpp
template <class T, class = void>
struct has_foo {
static constexpr bool value = false;
};
template <class T>
struct has_foo<T, std::void_t<decltype(std::declval<T>().foo())>> {
static constexpr bool value = true;
};
template <class T, std::enable_if_t<has_foo<T>::value, int> = 0>
void try_call_foo(T &t) {
t.foo();
}
template <class T, std::enable_if_t<!has_foo<T>::value, int> = 0>
void try_call_foo(T &) {
}
```
如果回到 C++11,情况进一步恶化!`enable_if_t` 这个方便的小助手已经不存在,需要使用比他更底层的 `enable_if` 模板类,手动取出 `::type`,并且需要 `typename` 修饰,才能编译通过!并且 `void_t` 也不能用了,要用逗号表达式小技巧才能让 decltype 固定返回 void……
```cpp
template <class T, class = void>
struct has_foo {
static constexpr bool value = false;
};
template <class T>
struct has_foo<T, decltype(std::declval<T>().foo(), (void)0)> {
static constexpr bool value = true;
};
template <class T, typename std::enable_if<has_foo<T>::value, int>::type = 0>
void try_call_foo(T &t) {
t.foo();
}
template <class T, typename std::enable_if<!has_foo<T>::value, int>::type = 0>
void try_call_foo(T &) {
}
```
如果回到 C++98,那又要罪加一等!`enable_if` 和 `declval` 是 C++11 引入的 `<type_traits>` 头文件的帮手类,在 C++98 中,我们需要自己实现 `enable_if`…… `declval` 也是 C++11 引入的 `<utility>` 头文件中的帮手函数……假设你自己好不容易实现出来了 `enable_if` 和 `declval`,还没完:因为 constexpr 在 C++98 中也不存在了!你无法定义 value 成员变量为编译期常量,我们只好又用一个抽象的枚举小技巧来实现定义类成员常量的效果。
```cpp
template <class T, class = void>
struct has_foo {
enum { value = 0 };
};
template <class T>
struct has_foo<T, decltype(my_declval<T>().foo(), (void)0)> {
enum { value = 1 };
};
template <class T, typename my_enable_if<has_foo<T>::value, int>::type = 0>
void try_call_foo(T &t) {
t.foo();
}
template <class T, typename my_enable_if<!has_foo<T>::value, int>::type = 0>
void try_call_foo(T &) {
}
```
如此冗长难懂的抽象 C++98 代码,仿佛是“加密”过的代码一样,仅仅是为了实现检测是否存在成员函数 foo……
#fun[如果回到 C 语言,那么你甚至都不用检测了。因为伟大的 C 语言连成员函数都没有,何谈“检测成员函数是否存在”?]
反观 C++20 的写法,一眼就看明白代码的逻辑是什么,表达你该表达的,而不是迷失于伺候各种语言缺陷,干扰我们学习。
```cpp
void try_call_foo(auto &t) {
if constexpr (requires { t.foo(); }) {
t.foo();
}
}
```
// 从残废的 C++98 学起,你的思维就被这些无谓的“奇技淫巧”扭曲了,而使得真正应该表达的代码逻辑,淹没在又臭又长的古代技巧中。
// 从现代的 C++23 学起,先知道正常的写法“理应”是什么样。工作中用不上 C++23?我会向你介绍,如果要倒退回 C++14,古代人都是用什么“奇技淫巧”实现同样的效果。
// 这样你最后同样可以适应公司要求的 C++14 环境。但是从 C++23 学起,你的思维又不会被应付古代语言缺陷的“奇技淫巧”扰乱,学起来就事半功倍。
#fun[既然现代 C++ 这么好,为什么学校不从现代 C++ 教起,教起来还轻松?因为劳保老师保,懒得接触新知识,认为“祖宗之法不可变”,“版号稳定压倒一切”。]
= 开发环境与平台选择
TODO
== IDE 不是编译器!
TODO
== 编译器是?
编译器是将源代码 (`.cpp`) 编译成可执行程序 (`.exe`) 的工具。
#fun[C++ 是*编译型语言*,源代码不能直接执行哦!刚开始学编程的小彭老师曾经把网上的 “Hello, World” 代码拷贝到 `.c` 源码文件中,然后把后缀名改成 `.exe`,发现这样根本执行不了……后来才知道需要通过一种叫做*编译器*编译 `.c` 文件,才能得到计算机可以直接执行的 `.exe` 文件。]
C++ 源码 `.cpp` 是写给人类看的!计算机并不认识,计算机只认识二进制的机器码。要把 C++ 源码转换为计算机可以执行的机器码。
== 编译器御三家
最常见的编译器有:GCC、Clang、MSVC
#fun[俗称“御三家”。]
这些编译器都支持了大部分 C++20 标准和小部分 C++23 标准,而 C++17 标准都是完全支持的。
#fun[有人说过:“如果你不知道一个人是用的什么编译器,那么你可以猜他用的是 GCC。”]
- GCC 主要只在 Linux 和 MacOS 等 Unix 类系统可用,不支持 Windows 系统。但是 GCC 有着大量好用的扩展功能,例如大名鼎鼎的 `pbds`(基于策略的数据结构),还有各种 `__attribute__`,各种 `__builtin_` 系列函数。不过随着新标准的出台,很多原本属于 GCC 的功能都成了标准的一部分,例如 `__attribute__((warn_unused))` 变成了标准的 `[[nodiscard]]`,`__builtin_clz` 变成了标准的 `std::countl_zero`,`__VA_OPT__` 名字都没变就进了 C++20 标准。
#fun[PBDS 又称 “平板电视”]
- 也有 MinGW 这样的魔改版 GCC 编译器,把 GCC 移植到了 Windows 系统上,同时也能用 GCC 的一些特性。不过 MinGW 最近已经停止更新,最新的 GCC Windows 移植版由 MinGW-w64 继续维护。
- Clang 是跨平台的编译器,支持大多数主流平台,包括操作系统界的御三家:Linux、MacOS、Windows。Clang 支持了很大一部分 GCC 特性和部分 MSVC 特性。其所属的 LLVM 项目更是编译器领域的中流砥柱,不仅支持 C、C++、Objective-C、Fortran 等,Rust 和 Swift 等语言也是基于 LLVM 后端编译的,不仅如此,还有很多显卡厂商的 OpenGL 驱动也是基于 LLVM 实现编译的。并且 Clang 身兼数职,不仅可以编译,还支持静态分析。许多 IDE 常见的语言服务协议 (LSP) 就是基于 Clang 的服务版————Clangd 实现的 (例如你可以按 Ctrl 点击,跳转到函数定义,这样的功能就是 IDE 通过调用 Clangd 的 LSP 接口实现)。不过 Clang 的性能优化比较激进,虽然有助于性能提升,如果你不小心犯了未定义行为,Clang 可能优化出匪夷所思的结果,如果你要实验未定义行为,Clang 是最擅长复现的。且 Clang 对一些 C++ 新标准特性支持相对较慢,没有 GCC 和 MSVC 那么上心。
#story[例如 C++20 早已允许 lambda 表达式捕获 structural-binding 变量,而 Clang 至今还没有支持,尽管 Clang 已经支持了很多其他 C++20 特性。]
- Apple Clang 是苹果公司自己魔改的 Clang 版本,只在 MacOS 系统上可用,支持 Objective-C 和 Swift 语言。但是版本较官方 Clang 落后一些,很多新特性都没有跟进,基本上只有专门伺候苹果的开发者会用。
#tip[GCC 和 Clang 也支持 Objective-C。]
- MSVC 是 Windows 限定的编译器,提供了很多 MSVC 特有的扩展。也有人在 Clang 上魔改出了 MSVC 兼容模式,兼顾 Clang 特性的同时,支持了 MSVC 的一些特性(例如 `__declspec`),可以编译用了 MSVC 特性的代码,即 `clang-cl`,在最新的 VS2022 IDE 中也集成了 `clang-cl`。值得注意的是,MSVC 的优化能力是比较差的,比 GCC 和 Clang 都差,例如 MSVC 几乎总是假定所有指针 aliasing,这意味着当遇到很多指针操作的循环时,几乎没法做循环矢量化。但是也使得未定义行为不容易产生 Bug,另一方面,这也导致一些只用 MSVC 的人不知道某些写法是未定义行为。
- Intel C++ compiler 是英特尔开发的 C++ 编译器,由于是硬件厂商开发的,特别擅长做性能优化。但由于更新较慢,基本没有更上新特性,也没什么人在用了。
#detail[最近他们又出了个 Intel DPC++ compiler,支持最新的并行编程领域特定语言 SyCL。]
== 使用编译器编译源码
=== MSVC
```bash
cl.exe main.cpp
```
这样就可以得到可执行文件 `main.exe` 了。
=== GCC
```bash
g++ main.cpp -o main
```
这样就可以得到可执行文件 `main` 了。
#tip[Linux 系统的可执行文件并没有后缀名,所以没有 `.exe` 后缀。]
=== Clang
Windows 上:
```bash
clang++.exe main.cpp -o main.exe
```
Linux / MacOS 上:
```bash
clang++ main.cpp -o main
```
== 编译器选项
编译器选项是用来控制编译器的行为的。不同的编译器有不同的选项,语法有微妙的不同,但大致功效相同。
例如当我们说“编译这个源码时,我用了 GCC 编译器,`-O3` 和 `-std=c++20` 选项”,说的就是把这些选项加到了 `g++` 的命令行参数中:
```bash
g++ -O3 -std=c++20 main.cpp -o main
```
其中 Clang 和 GCC 的编译器选项有很大交集。而 MSVC 基本自成一派。
Clang 和 GCC 的选项都是 `-xxx` 的形式,MSVC 的选项是 `/xxx` 的形式。
常见的编译器选项有:
=== C++ 标准
指定要选用的 C++ 标准。
Clang 和 GCC:`-std=c++98`、`-std=c++03`、`-std=c++11`、`-std=c++14`、`-std=c++17`、`-std=c++20`、`-std=c++23`
MSVC:`/std:c++98`、`/std:c++11`、`/std:c++14`、`/std:c++17`、`/std:c++20`、`/std:c++latest`
例如要编译一个 C++20 源码文件,分别用 GCC、Clang、MSVC:
GCC(Linux):
```bash
g++ -std=c++20 main.cpp -o main
```
Clang(Linux):
```bash
clang++ -std=c++20 main.cpp -o main
```
MSVC(Windows):
```bash
cl.exe /std:c++20 main.cpp
```
=== 优化等级
Clang 和 GCC:`-O0`、`-O1`、`-O2`、`-O3`、`-Ofast`、`-Os`、`-Oz`、`-Og`
- `-O0`:不进行任何优化,编译速度最快,忠实复刻你写的代码,未定义行为不容易产生诡异的结果,一般用于开发人员内部调试阶段。
- `-O1`:最基本的优化,会把一些简单的死代码(编译器检测到的不可抵达代码)删除,去掉没有用的变量,把部分变量用寄存器代替等,编译速度较快,执行速度也比 `-O0` 快。但是会丢失函数的行号信息,影响诸如 gdb 等调试,如需快速调试可以用 `-Og` 选项。
- `-O2`:比 `-O1` 更强的优化,会把一些循环展开,把一些函数内联,减少函数调用,把一些简单的数组操作用更快的指令替代等,执行速度更快。
- `-O3`:比 `-O2` 更激进的优化,会把一些复杂的循环用 SIMD 矢量指令优化加速,把一些复杂的数组操作用更快的指令替代等。性能提升很大,但是如果你的程序有未定义行为,可能会导致一些 Bug。如果你的代码没有未定义行为则绝不会有问题,对自己的代码质量有自信就可以放心开,编译速度也会很慢,一般用于程序最终成品发布阶段。
- `-Ofast`:在 `-O3` 的基础上,进一步对浮点数的运算进行更深层次的优化,但是可能会导致一些浮点数计算结果不准确。如果你的代码不涉及到 NaN 和 Inf 的处理,那么 `-Ofast` 不会有太大的问题,一般用于科学计算领域的终极性能优化。
- `-Os`:在 `-O2` 的基础上,专门优化代码大小,性能被当作次要需求,但是会禁止会导致可执行文件变大的优化。会把一些循环展开、内联等优化关闭,把一些代码用更小的指令实现,尽可能减小可执行文件的尺寸,比 `-O0`、`-O1`、`-O2` 都要小,通常用于需要节省内存的嵌入式系统开发。
- `-Oz`:在 `-Os` 的基础上,进一步把代码压缩,可能把本可以一条大指令完成的任务也拆成多条小指令,为了尺寸完全性能,大幅减少了函数内联的机会,有时用于嵌入式系统开发。
- `-Og`:在 `-O0` 的基础上,尽可能保留更多调试信息,不做破坏函数行号等信息的优化,建议配合产生更多调试信息的 `-g` 选项使用。但还是会做一些简单的优化,比 `-O0` 执行速度更快。但 `-Og` 的所有优化都不会涉及到未定义行为,因此非常适合调试未定义行为。但是由于插入了调试信息,最终的可执行文件会变得很大,一般在开发人员调试时使用。
MSVC:`/Od`、`/O1`、`/O2`、`/Ox`、`/Ob1`、`/Ob2`、`/Os`
- `/Od`:不进行任何优化,忠实复刻你写的代码,未定义行为不容易产生诡异的结果,一般用于调试阶段。
- `/O1`:最基本的优化,会把一些简单的死代码删除,去掉没有用的变量,把变量用寄存器代替等。
- `/O2`:比 `/O1` 更强的优化,会把一些循环展开,把一些函数内联,减少函数调用,还会尝试把一些循环矢量化,把一些简单的数组操作用更快的指令替代等。一般用于发布阶段。
- `/Ox`:在 `/O2` 的基础上,进一步优化,但是不会导致未定义行为,一般用于发布阶段。
- `/Ob1`:启用函数内联。
- `/Ob2`:启用函数内联,但是会扩大内联范围,一般比 `/Ob1` 更快,但是也会导致可执行文件变大。
- `/Os`:在 `/O2` 的基础上,专门优化代码大小,性能被当作次要需求,但是会禁止会导致可执行文件变大的优化。会把一些循环展开、内联等优化关闭,把一些代码用更小的指令实现,尽可能减小可执行文件的尺寸,通常用于需要节省内存的嵌入式系统开发。
=== 调试信息
Clang 和 GCC:`-g`、`-g0`、`-g1`、`-g2`、`-g3`
MSVC:`/Z7`、`/Zi`
=== 头文件搜索路径
=== 指定要链接的库
=== 库文件搜索路径
=== 定义宏
Clang 和 GCC:`-Dmacro=value`
MSVC:`/Dmacro=value`
例如:
=== 警告开关
== 标准库御三家
- libstdc++ 是 GCC 官方的 C++ 标准库实现,由于 GCC 是 Linux 系统的主流编译器,所以 libstdc++ 也是 Linux 上最常用的标准库。你可以在这里看到他的源码:https://github.com/gcc-mirror/gcc/tree/master/libstdc%2B%2B-v3
- libc++ 是 Clang 官方编写的 C++ 标准库实现,由于 Clang 是 MacOS 系统的主流编译器,所以 libc++ 也是 MacOS 上最常用的标准库。libc++ 也是 C++ 标准库中最早实现 C++11 标准的。项目的开源地址是:https://github.com/llvm/llvm-project/tree/main/libcxx
- MSVC STL 是 MSVC 官方的 C++ 标准库实现,由于 MSVC 是 Windows 系统的主流编译器,所以 MSVC STL 也是 Windows 上最常用的标准库。MSVC STL 也是 C++ 标准库中最晚实现 C++11 标准的,但是现在他已经完全支持 C++20,并且也完全开源了:https://github.com/microsoft/STL
值得注意的是,标准库和编译器并不是绑定的,例如 Clang 可以用 libstdc++ 或 MSVC STL,GCC 也可以被配置使用 libc++。
在 Linux 系统中,Clang 默认用的就是 libstdc++。需要为 Clang 指定 `-stdlib=libc++` 选项,才能使用。
#fun[牛头人笑话:“如果你不知道一个人是用的什么标准库,那么你可以猜他用的是 libstdc++。因为即使他的编译器是 Clang,他用的大概率依然是 libstdc++。”]
=== 标准库的调试模式
TODO
= 你好,世界
== 什么是函数
/ 函数: 一段用 `{}` 包裹的代码块,有一个独一无二的名字做标识。函数可以被其他函数调用。函数可以有返回值和参数。函数的 `{}` 代码块内的程序代码,每次该函数被调用时都会执行。
```cpp
int compute()
{
return 42;
}
```
上面的代码中,`compute` 就是函数的名字,`int` 表示函数的返回类型——整数。
#tip[乃取整数之英文#quote[integer]的#quote[int]而得名]
而 `{}` 包裹的是函数体,是函数被调用时会执行的代码。
此处 `return 42` 就是函数体内的唯一一条语句,表示函数立即执行完毕,返回 42。
/ 返回值: 当一个函数执行完毕时,会向调用该函数的调用者返回一个值,这个值就是 `return` 后面的表达式的值。返回值可以有不同的类型,此处 `compute` 的返回类型是 `int`,也就是说 `compute` 需要返回一个整数。
#tip[关于函数的参数我们稍后再做说明。]
== 从 main 函数说起
C++ 程序通常由一系列函数组成,其中必须有一个名为 `main` 的函数作为程序的入口点。
main 函数的定义如下:
```cpp
int main()
{
}
```
程序启动时,操作系统会调用 `main` 函数。
#detail[严格来说,是 C++ 运行时调用了 `main` 函数,但目前先理解为#quote[操作系统调用了 `main` 函数]也无妨。]
要把程序发展壮大,我们可以让 `main` 函数调用其他函数,也可以直接在 `main` 函数中编写整个程序的逻辑(不推荐)。
#fun[因此,`main` 可以被看作是#quote[宇宙大爆炸]。]
== main 函数的返回值
```cpp
int main()
{
return 0;
}
```
`return` 表示函数的返回,main 函数返回,即意味着程序的结束。
main 函数总是返回一个整数 (`int` 类型),用这个整数向操作系统表示程序退出的原因。
如果程序正常执行完毕,正常结束退出,那就请返回 0。
返回一个不为 0 的整数可以表示程序出现了异常,是因为出错了才退出的,值的多少可以用于表明错误的具体原因。
#fun[
操作系统:我调用了你这个程序的 main 函数,我好奇程序是否正确执行了?让我们约定好:如果你运转正常的话,就返回0表示成功哦!如果有错误的话,就返回一个错误代码,比如返回1表示无权限,2表示找不到文件……之类的。当然,错误代码都是不为0的。
]
== 这个黑色的窗口是?
TODO: 介绍控制台
== 打印一些信息
```cpp
int main()
{
std::println("Hello, World!");
}
```
以上代码会在控制台输出 `Hello, World!`。
== 注释
```cpp
int main()
{
// 小彭老师,请你在这里插入程序的逻辑哦!
}
```
这里的 `//` 是注释,注释会被编译器忽略,通常用于在程序源码中植入描述性的文本。有时也会用于多人协作项目中程序员之间互相沟通。
例如下面这段代码:
```cpp
int main()
{
std::println("编译器伟大,无需多言");
// 编译器是煞笔
// 编译器是煞笔
// 编译器是煞笔
// 诶嘿你看不见我
}
```
在编译器看来就只是:
```cpp
int main()
{
std::println("编译器伟大,无需多言");
}
```
#fun[
(\*编译器脸红中\*)
]
#space
C++ 支持行注释 `// xx` 和块注释 `/* xx */` 两种语法。
```cpp
int main()
{
// 我是行注释
/* 我是块注释 */
/* 块注释
可以
有
很多行 */
std::println(/* 块注释也可以夹在代码中间 */"你好");
std::println("世界"); // 行注释只能追加在一行的末尾
std::println("早安");
}
```
#tip[
在我们以后的案例代码中,都会像这样注释说明,充当*就地讲解员*的效果。去除这些注释并不影响程序的正常运行,添加文字注释只是小彭老师为了提醒你每一行的代码作用。
]
= 变量与类型
TODO
= 自定义函数
函数可以没有返回值,只需要返回类型写 `void` 即可,这样的函数调用的目的只是为了他的副作用(如修改全局变量,输出文本到控制台,修改引用参数等)。
```cpp
void compute()
{
return;
}
```
#tip[对于没有返回值(返回类型为 `void`)的函数,可以省略 `return` 不写。]
#warn[对于有返回值的函数,必须写 return 语句,否则程序出错。]
TODO:更多介绍函数
= 函数式编程
== 为什么需要函数?
```cpp
int main() {
std::vector<int> a = {1, 2, 3, 4};
int s = 0;
for (int i = 0; i < a.size(); i++) {
s += a[i];
}
fmt::println("sum = {}", s);
return 0;
}
```
这是一个计算数组求和的简单程序。
但是,他只能计算数组 a 的求和,无法复用。
如果我们有另一个数组 b 也需要求和的话,就得把整个求和的 for 循环重新写一遍:
```cpp
int main() {
std::vector<int> a = {1, 2, 3, 4};
int s = 0;
for (int i = 0; i < a.size(); i++) {
s += a[i];
}
fmt::println("sum of a = {}", s);
std::vector<int> b = {5, 6, 7, 8};
s = 0;
for (int i = 0; i < a.size(); i++) {
s += b[i];
}
fmt::println("sum of b = {}", s);
return 0;
}
```
这就出现了程序设计的大忌:代码重复。
#fun[例如,你有吹空调的需求,和充手机的需求。你为了满足这两个需求,购买了两台发电机,分别为空调和手机供电。第二天,你又产生了玩电脑需求,于是你又购买一台发电机,专为电脑供电……真是浪费!]
重复的代码不仅影响代码的*可读性*,也增加了*维护*代码的成本。
+ 看起来乱糟糟的,信息密度低,让人一眼看不出代码在干什么的功能
+ 很容易写错,看走眼,难调试
+ 复制粘贴过程中,容易漏改,比如这里的 `s += b[i]` 可能写成 `s += a[i]` 而自己不发现
+ 改起来不方便,当我们的需求变更时,需要多处修改,比如当我需要改为计算乘积时,需要把两个地方都改成 `s *=`
+ 改了以后可能漏改一部分,留下 Bug 隐患
+ 敏捷开发需要反复修改代码,比如你正在调试 `+=` 和 `-=` 的区别,看结果变化,如果一次切换需要改多处,就影响了调试速度
=== 狂想:没有函数的世界?
#story[如果你还是喜欢“一本道”写法的话,不妨想想看,完全不用任何标准库和第三方库的函数和类,把 `fmt::println` 和 `std::vector` 这些函数全部拆解成一个个系统调用。那这整个程序会有多难写?]
```cpp
int main() {
#ifdef _WIN32
int *a = (int *)VirtualAlloc(NULL, 4096, MEM_COMMIT, PAGE_EXECUTE_READWRITE);
#else
int *a = (int *)mmap(NULL, 4 * sizeof(int), PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
#endif
a[0] = 1;
a[1] = 2;
a[2] = 3;
a[3] = 4;
int s = 0;
for (int i = 0; i < 4; i++) {
s += a[i];
}
char buffer[64];
buffer[0] = 's';
buffer[1] = 'u';
buffer[2] = 'm';
buffer[3] = ' ';
buffer[4] = '=';
buffer[5] = ' '; // 例如,如果要修改此处的提示文本,甚至需要修改后面的 len 变量...
int len = 6;
int x = s;
do {
buffer[len++] = '0' + x % 10;
x /= 10;
} while (x);
buffer[len++] = '\n';
#ifdef _WIN32
WriteFile(GetStdHandle(STD_OUTPUT_HANDLE), buffer, len, NULL, NULL);
#else
write(1, buffer, len);
#endif
int *b = (int *)a;
b[0] = 4;
b[1] = 5;
b[2] = 6;
b[3] = 7;
    s = 0;
for (int i = 0; i < 4; i++) {
s += b[i];
}
len = 6;
x = s;
do {
buffer[len++] = '0' + x % 10;
x /= 10;
} while (x);
buffer[len++] = '\n';
#ifdef _WIN32
WriteFile(GetStdHandle(STD_OUTPUT_HANDLE), buffer, len, NULL, NULL);
#else
write(1, buffer, len);
#endif
#ifdef _WIN32
VirtualFree(a, 0, MEM_RELEASE);
#else
    munmap(a, 4 * sizeof(int));
#endif
return 0;
}
```
不仅完全没有可读性、可维护性,甚至都没有可移植性。
除非你只写应付导师的“一次性”程序,一旦要实现复杂的业务需求,不可避免的要自己封装函数或类。网上所有鼓吹“不封装”“设计模式是面子工程”的反智言论,都是没有做过大型项目的。
=== 设计模式追求的是“可改”而不是“可读”!
很多设计模式教材片面强调*可读性*,仿佛设计模式就是为了“优雅”“高大上”“美学”?使得很多人认为,“我这个是自己的项目,不用美化给领导看”而拒绝设计模式。实际上设计模式的主要价值在于*方便后续修改*!
#fun[例如 B 站以前只支持上传普通视频,现在叔叔突然提出:要支持互动视频,充电视频,视频合集,还废除了视频分 p,还要支持上传短视频,竖屏开关等……每一个叔叔的要求,都需要大量程序员修改代码,无论涉及前端还是后端。]
与建筑、绘画等领域不同,一次交付完毕就可以几乎永久使用。而软件开发是一个持续的过程,每次需求变更,都导致代码需要修改。开发人员几乎需要一直围绕着软件代码,不断的修改。调查表明,程序员 90% 的时间花在*改代码*上,*写代码*只占 10%。
#fun[软件就像生物,要不断进化,软件不更新不维护了等于死。如果一个软件逐渐变得臃肿难以修改,无法适应新需求,那他就像已经失去进化能力的生物种群,如《三体》世界观中“安顿”到澳大利亚保留区里“绝育”的人类,被淘汰只是时间问题。]
如果我们能在*写代码*阶段,就把程序准备得*易于后续修改*,那就可以在后续 90% 的*改代码*阶段省下无数时间。
如何让代码易于修改?前人总结出一系列常用的写法,这类写法有助于让后续修改更容易,各自适用于不同的场合,这就是设计模式。
提升可维护性最基础的一点,就是避免重复!
当你有很多地方出现重复的代码时,一旦需要涉及修改这部分逻辑时,就需要到每一个出现了这个逻辑的代码中,去逐一修改。
#fun[例如你的名字,在出生证,身份证,学生证,毕业证,房产证,驾驶证,各种地方都出现了。那么你要改名的话,所有这些证件都需要重新印刷!如果能把他们合并成一个“统一证”,那么只需要修改“统一证”上的名字就行了。]
不过,现实中并没有频繁改名字的需求,这说明:
- 对于不常修改的东西,可以容忍一定的重复。
- 越是未来有可能修改的,就越需要设计模式降重!
例如数学常数 PI = 3.1415926535897,这辈子都不可能出现修改的需求,那写死也没关系。如果要把 PI 定义成宏,只是出于“记不住”“写起来太长了”“复制粘贴麻烦”。所以对于 PI 这种不会修改的东西,降重只是增加*可读性*,而不是*可修改性*。
#tip[但是,不要想当然!需求的千变万化总是超出你的想象。]
例如你做了一个“愤怒的小鸟”游戏,需要用到重力加速度 g = 9.8,你想当然认为 g 以后不可能修改。老板也信誓旦旦向你保证:“没事,重力加速度不会改变。”你就写死在代码里了。
没想到,“愤怒的小鸟”老板突然要求你加入“月球章”关卡,在这些关卡中,重力加速度是 g = 1.6。
如果你一开始就已经把 g 提取出来,定义为常量:
```cpp
struct Level {
const double g = 9.8;
void physics_sim() {
bird.v = g * t; // 假装这里是物理仿真程序
pig.v = g * t; // 假装这里是物理仿真程序
}
};
```
那么要支持月球关卡,只需修改一处就可以了。
```cpp
struct Level {
double g;
Level(Chapter chapter) {
if (chapter == ChapterMoon) {
g = 1.6;
} else {
g = 9.8;
}
}
void physics_sim() {
bird.v = g * t; // 无需任何修改,自动适应了新的非常数 g
pig.v = g * t; // 无需任何修改,自动适应了新的非常数 g
}
};
```
#fun[小彭老师之前做 zeno 时,询问要不要把渲染管线节点化,方便用户动态编程?张猩猩就是信誓旦旦道:“渲染是一个高度成熟领域,不会有多少修改需求的。”小彭老师遂写死了渲染管线,专为性能极度优化,几个月后,张猩猩羞答答找到小彭老师:“小彭老师,那个,渲染,能不能改成节点啊……”。这个故事告诉我们,甲方的信誓旦旦放的一个屁都不能信。]
=== 用函数封装
函数就是来帮你解决代码重复问题的!要领:
*把共同的部分提取出来,把不同的部分作为参数传入。*
```cpp
void sum(std::vector<int> const &v) {
int s = 0;
for (int i = 0; i < v.size(); i++) {
s += v[i];
}
fmt::println("sum of v = {}", s);
}
int main() {
std::vector<int> a = {1, 2, 3, 4};
sum(a);
std::vector<int> b = {5, 6, 7, 8};
sum(b);
return 0;
}
```
这样 main 函数里就可以只关心要求和的数组,而不用关心求和具体是如何实现的了。事后我们可以随时把 sum 的内容偷偷换掉,换成并行的算法,main 也不用知道。这就是*封装*,可以把重复的公共部分抽取出来,方便以后修改代码。
#fun[sum 函数相当于,当需要吹空调时,插上空调插座。当需要给手机充电时,插上手机充电器。你不需要关心插座里的电哪里来,“国家电网”会替你想办法解决,想办法优化,想办法升级到绿色能源。你只需要吹着空调给你正在开发的手机 App 优化就行了,大大减轻程序员心智负担。]
=== 要封装,但不要耦合
但是!这段代码仍然有个问题,我们把 sum 求和的结果,直接在 sum 里打印了出来。sum 里写死了,求完和之后只能直接打印,调用者 main 根本无法控制。
这是一种错误的封装,或者说,封装过头了。
#fun[你把手机充电器 (fmt::println) 焊死在了插座 (sum) 上,现在这个插座只能给手机充电 (用于直接打印) 了,不能给笔记本电脑充电 (求和结果不直接用于打印) 了!尽管通过更换充电线 (参数 v),还可以支持支持安卓 (a) 和苹果 (b) 两种手机的充电,但这样焊死的插座已经和笔记本电脑无缘了。]
=== 每个函数应该职责单一,别一心多用
很明显,“打印”和“求和”是两个独立的操作,不应该焊死在一块。
sum 函数的本职工作是“数组求和”,不应该附赠打印功能。
sum 计算出求和结果后,直接 return 即可。
#fun[如何处理这个结果,是调用者 main 的事,正如“国家电网”不会管你用他提供的电来吹空调还是玩游戏一样,只要不妨碍到其他居民的正常用电。]
```cpp
int sum(std::vector<int> const &v) {
int s = 0;
for (int i = 0; i < v.size(); i++) {
s += v[i];
}
return s;
}
int main() {
std::vector<int> a = {1, 2, 3, 4};
fmt::println("sum of a = {}", sum(a));
std::vector<int> b = {5, 6, 7, 8};
fmt::println("sum of b = {}", sum(b));
return 0;
}
```
这就是设计模式所说的*职责单一原则*。
=== 二次封装
假设我们要计算一个数组的平均值,可以再定义个函数 average,他可以基于 sum 实现:
```cpp
int sum(std::vector<int> const &v) {
int s = 0;
for (int i = 0; i < v.size(); i++) {
s += v[i];
}
return s;
}
double average(std::vector<int> const &v) {
return (double)sum(v) / v.size();
}
int main() {
std::vector<int> a = {1, 2, 3, 4};
fmt::println("average of a = {}", average(a));
std::vector<int> b = {5, 6, 7, 8};
fmt::println("average of b = {}", average(b));
return 0;
}
```
进一步封装一个打印数组所有统计学信息的函数:
```cpp
void print_statistics(std::vector<int> const &v) {
if (v.empty()) {
fmt::println("this is empty...");
} else {
fmt::println("sum: {}", sum(v));
fmt::println("average: {}", average(v));
fmt::println("min: {}", min(v));
fmt::println("max: {}", max(v));
}
}
int main() {
std::vector<int> a = {1, 2, 3, 4};
print_statistics(a);
std::vector<int> b = {5, 6, 7, 8};
print_statistics(b);
return 0;
}
```
暴露 API 时,要同时提供底层的 API 和高层封装的 API。用户如果想要控制更多细节可以调用底层 API,想要省事的用户可以调用高层封装好的 API。
#tip[高层封装 API 应当可以完全通过调用底层 API 实现,提供高层 API 只是方便初级用户使用和理解。]
#story[
例如 `libcurl` 就提供了 `curl_easy` 和 `curl_multi` 两套 API。
- `curl_multi` 提供了超详细的参数,把每个操作分拆成多步,方便用户插手细节,满足高级用户的定制化需求,但太过复杂,难以学习。
- `curl_easy` 是对 `curl_multi` 的再封装,提供了更简单的 API,但是对具体细节就难以操控了,适合初学者上手。
]
=== Linus 的最佳实践:每个函数不要超过 3 层嵌套,函数体不要超过 24 行
Linux 内核为什么坚持使用 `TAB=8` 为代码风格?
TODO:还在写
== 为什么需要函数式?
你产生了两个需求,分别封装了两个函数:
- `sum` 求所有元素的和
- `product` 求所有元素的积
```cpp
int sum(std::vector<int> const &v) {
int ret = v[0];
for (int i = 1; i < v.size(); i++) {
ret += v[i];
}
return ret;
}
int product(std::vector<int> const &v) {
int ret = v[0];
for (int i = 1; i < v.size(); i++) {
ret *= v[i];
}
return ret;
}
int main() {
std::vector<int> a = {1, 2, 3, 4};
fmt::println("sum: {}", sum(a));
fmt::println("product: {}", product(a));
return 0;
}
```
注意到 `sum` 和 `product` 的内容几乎如出一辙,唯一的区别在于:
- `sum` 的循环体为 `+=`;
- `product` 的循环体为 `*=`。
这种函数体内有部分代码重复,但又有特定部分不同,难以抽离。
该怎么复用这重复的部分代码呢?
我们要把 `sum` 和 `product` 合并成一个函数 `generic_sum`。然后通过函数参数,把差异部分(0、`+=`)“注入”到两个函数原本不同地方。
=== 枚举的糟糕用法
如何表示我这个函数是要做求和 `+=` 还是求积 `*=`?
让我们定义枚举:
```cpp
enum Mode {
ADD, // 求和操作
MUL, // 求积操作
};
int generic_sum(std::vector<int> const &v, Mode mode) {
int ret = v[0];
for (int i = 1; i < v.size(); i++) {
if (mode == ADD) { // 函数内判断枚举,决定要做什么操作
ret += v[i];
} else if (mode == MUL) {
ret *= v[i];
}
}
return ret;
}
int main() {
std::vector<int> a = {1, 2, 3, 4};
fmt::println("sum: {}", generic_sum(a, ADD)); // 用户指定他想要的操作
fmt::println("product: {}", generic_sum(a, MUL));
return 0;
}
```
然而,如果用户现在想要求数组的*最大值*呢?
枚举中还没有实现最大值的操作……要支持,就得手忙脚乱地去修改 `generic_sum` 函数和 `Mode` 枚举原本的定义,真麻烦!
```cpp
enum Mode {
ADD,
MUL,
MAX, // **改**
};
int generic_sum(std::vector<int> const &v, Mode mode) {
int ret = v[0];
for (int i = 1; i < v.size(); i++) {
if (mode == ADD) {
ret += v[i];
} else if (mode == MUL) {
ret *= v[i];
} else if (mode == MAX) { // **改**
ret = std::max(ret, v[i]); // **改**
}
}
return ret;
}
int main() {
std::vector<int> a = {1, 2, 3, 4};
generic_sum(a, MAX); // **改**
return 0;
}
```
#tip[我用 `// **改**` 指示了所有需要改动的地方。]
为了增加一个求最大值的操作,就需要三处分散在各地的改动!
不仅如此,还容易抄漏,抄错,比如 `MAX` 不小心打错成 `MUL` 了,自己却没发现,留下 BUG 隐患。
这样写代码的方式,心智负担极大,整天就提心吊胆着东一块,西一块的散装代码,担心着有没有哪个地方写错写漏,严重妨碍了开发效率。
并且写出来的代码也不能适应需求的变化:假如我需要支持 `MIN` 呢?又得改三个地方!这违背了设计模式的*开闭原则*。
/ 开闭原则: 对扩展开放,对修改封闭。指的是软件在适应需求变化时,应尽量通过*扩展代码*来实现变化,而不是通过*修改已有代码*来实现变化。
使用枚举和 if-else 实现多态,难以扩展,还要一直去修改原函数的底层实现,就违背了*开闭原则*。
=== 函数式编程光荣救场
如果我们可以“注入”代码就好了!能否把一段“代码”作为 `generic_sum` 函数的参数呢?
代码,实际上就是函数,注入代码就是注入函数。我们先定义出三个不同操作对应的函数:
```cpp
int add(int a, int b) {
return a + b;
}
int mul(int a, int b) {
return a * b;
}
int max(int a, int b) {
return std::max(a, b);
}
```
然后,把这三个小函数,作为另一个大函数 `generic_sum` 的参数就行!
```cpp
int generic_sum(std::vector<int> const &v, auto op) {
int ret = v[0];
for (int i = 1; i < v.size(); i++) {
// 函数作者无需了解用户指定的“操作”具体是什么
// 只需要调用这一“操作”,得到结果就行
ret = op(ret, v[i]);
}
return ret;
}
int main() {
std::vector<int> a = {1, 2, 3, 4};
// 用户无需关心函数的具体实现是什么
// 只需随心所欲指定他的“操作”作为参数
generic_sum(a, add);
    generic_sum(a, mul);
generic_sum(a, max);
return 0;
}
```
责任明确了,我们成功把一部分细节从 `generic_sum` 中进一步抽离。
- 库作者 `generic_sum` 不必了解 `main` 的操作具体是什么,他只负责利用这个操作求“和”。
- 库用户 `main` 不必了解 `generic_sum` 如何实现操作累加,他只管注入“如何操作”的代码,以函数的形式。
=== 我用了 C++20 的函数参数 auto 语法糖
```cpp
int generic_sum(std::vector<int> const &v, auto op) {
    ...
}
```
这里的参数 op 类型声明为 auto,效果就是,op 这个参数现在能接受任意类型的对象了(包括函数!)
#detail[准确的说,`auto op` 参数的效果是使 `generic_sum` 变为一个*模板函数*,其中 op 参数变成了模板参数,能够接受任意类型了。而写明类型的参数 `std::vector<int> const &v` 就没有任何额外效果,就只能接受 `vector<int>` 而已。]
如果你不支持 C++20 的话,需要显式写出 `template`,才能实现同样的效果:
```cpp
template <typename Op>
int generic_sum(std::vector<int> const &v, Op op) {
...
}
```
#fun[C++11:auto 只能用于定义变量;C++14:函数返回类型可以是 auto;C++17:模板参数也可以 auto;C++20:函数参数也可以是 auto 了;(狂想)C++47:auto 现在是 C++47 的唯一关键字,用户只需不断输入 auto-auto-auto,编译器内建人工智能自动识别你的意图生成机器码。]
=== 函数也是对象!
在过去的*面向对象编程范式*中,函数(代码)和对象(数据)被*割裂*开来,他们愚昧地认为*函数不是对象*。
*函数式编程范式*则认为:*函数也是一种变量,函数可以作为另一个函数的参数!*
#fun[Function lives matter!]
#detail[面向对象就好比计算机的“哈佛架构”,代码和数据割裂,代码只能单方面操作数据。函数式就好比“冯诺依曼架构”,代码也是数据。看似会导致低效,实则大大方便了动态加载新程序,因而现在的计算机基本都采用了“冯诺依曼架构”。]
总之,函数也是对象,被亲切地尊称为*函数对象*。
=== C++11 引入 Lambda 语法糖
C++98 时代,人们还需要单独跑到 `main` 外面,专门定义 `add`、`mul`、`max` 函数。弄得整个代码乱哄哄的,非常麻烦。
```cpp
int add(int a, int b) {
return a + b;
}
int mul(int a, int b) {
return a * b;
}
int max(int a, int b) {
return std::max(a, b);
}
int main() {
std::vector<int> a = {1, 2, 3, 4};
generic_sum(a, add);
    generic_sum(a, mul);
generic_sum(a, max);
return 0;
}
```
C++11 引入了 *Lambda 表达式*语法,允许你就地创建一个函数。
```cpp
int main() {
std::vector<int> a = {1, 2, 3, 4};
auto add = [](int a, int b) {
return a + b;
};
auto mul = [](int a, int b) {
return a * b;
};
auto max = [](int a, int b) {
return std::max(a, b);
};
generic_sum(a, add);
    generic_sum(a, mul);
generic_sum(a, max);
return 0;
}
```
不用往 `main` 外面塞垃圾了,一清爽。
更进一步,我们甚至不用定义变量,直接把 Lambda 表达式写在 `generic_sum` 的参数里就行了!
```cpp
int main() {
std::vector<int> a = {1, 2, 3, 4};
generic_sum(a, [](int a, int b) {
return a + b;
});
generic_sum(a, [](int a, int b) {
return a * b;
});
generic_sum(a, [](int a, int b) {
return std::max(a, b);
}); // **改**
return 0;
}
```
#tip[以上写法都是等价的。]
要支持一个新操作,只需修改一处地方:在调用 `generic_sum` 时就地创建一个函数。随叫随到,不用纠结于“起名强迫症”,是不是很方便呢?
#detail[准确的说,Lambda 创建的是函数对象 (function object) 或称仿函数 (functor) 而不是传统意义上的函数。]
#story[其实 C++98 时代人们就已经大量在用 `operator()()` 模拟函数对象了,著名的第三方库 Boost 也封装了各种函数式常用的容器和工具。C++11 才终于把*函数对象*这个概念转正,并引入了更方便的 Lambda 语法糖。]
#fun[即使是面向对象的头号孝子 Java,也已经开始引入函数式的 Lambda 语法糖,C\# 的 LINQ 更是明目张胆的致敬 map-reduce 全家桶,甚至 C 语言用户也开始玩各种函数指针回调……没办法,函数式确实方便呀!]
=== 依赖注入原则
函数对象 `op` 作为参数传入,让 `generic_sum` 内部去调用,就像往 `generic_sum` 体内“注入”了一段自定义代码一样。
这可以让 `generic_sum` 在不修改本体的情况下,通过修改“注入”部分,轻松扩展,满足*开闭原则*。
更准确的说,这体现的是设计模式所要求的*依赖注入原则*。
/ 依赖注入原则: 一个封装好的函数或类,应该尽量依赖于抽象接口,而不是依赖于具体实现。这可以提高程序的灵活性和可扩展性。
四大编程范式都各自发展出了*依赖注入原则*的解决方案:
- 面向过程编程范式中,*函数指针*就是那个抽象接口。
- 面向对象编程范式中,*虚函数*就是那个抽象接口。
- 函数式编程范式中,*函数对象*就是那个抽象接口。
- 模板元编程范式中,*模板参数*就是那个抽象接口。
同样是把抽象接口作为参数,同样解决可扩展问题。
函数指针贴近底层硬件,虚函数方便整合多个接口,函数对象轻量级、随地取用,模板元有助高性能优化,不同的编程范式殊途同归。
=== 低耦合,高内聚
依赖注入原则可以减少代码之间的耦合度,大大提高代码的灵活性和可扩展性。
/ 耦合度: 指的是一个模块、类、函数和其他模块、类、函数之间的关联程度。耦合度越低,越容易进行单元测试、重构、复用和扩展。
#fun[高耦合度的典型是“牵一发而动全身”。低耦合的典范是蚯蚓,因为蚯蚓可以在任意断面切开,还能活下来,看来蚯蚓的身体设计非常“模块化”呢。]
通常来说,软件应当追求低耦合度,适度解耦的软件能更快适应需求变化。但过度的低耦合也会导致代码过于分散,不易阅读和修改,甚至可能起到反效果。
#tip[若你解耦后,每次需求变化要改动的地方变少了,那就是合理的解耦。若你过分解耦,代码东一块西一块,以至于需求变化时需要到处改,比不解耦时浪费的时间还要多,那就是解耦过度。]
#fun[完全零耦合的程序每个函数互不联系,就像把蚯蚓拆散成一个个独立的细胞一样。连初始需求“活着”都实现不了,谈何适应需求变化?所以解耦也切勿矫枉过正。]
为了避免解耦矫枉过正,人们又提出了内聚的概念,并规定解耦的前提是:不耽误内聚。耽误到内聚的解耦,就只会起到降低可维护性的反效果了。
/ 内聚: 指的是同一个模块、类、函数内部各个元素之间的关联程度。内聚度越高,功能越独立,越方便集中维护。
#fun[例如,人的心脏专门负责泵血,肝脏只负责解毒,这就是高内聚的人体器官。若人的心脏还要兼职解毒,肝脏还兼职泵血,看似好像是增加了“万一心脏坏掉”的冗余性,实际上把“泵血”这一功能拆散到各地,无法“集中力量泵大血”了。]
#detail[人类的大脑和 CPU 一样,也有“缓存局域性 (cache-locality)”的限制:不能同时在很多个主题之间快速切换,无论是时间上的还是空间上的割裂 (cache-miss),都会干扰程序员思维的连贯性,从而增大心智负担。]
好的软件要保持低耦合,同时高内聚。
#fun[就像“民主集中制”一样,既要监督防止大权独揽,又要集中力量办一个人办不成的大事。]
=== 与传统面向对象的对比
传统的面向对象同样可以用*虚函数接口类*模拟*函数对象*一样的功能,只不过没有 lambda 和闭包的语法加持,写起来非常繁琐,就和在 C 语言里“模拟”面向对象一样。
#fun[为了这么小的一个代码块,单独定义一个类,就像妈妈开一架“空中战车” A380 只是为了接你放学一样,等你值好机的时间我自己走都走到了。而函数式中,用 lambda 就地定义函数对象,相当于随地抓来一台共享单车开走。]
```cpp
struct OpBase { // 面向对象:遇事不决先定义接口……
virtual int compute(int a, int b) = 0;
virtual ~OpBase() = default;
};
struct OpAdd : OpBase {
int compute(int a, int b) override {
return a + b;
}
};
struct OpMul : OpBase {
int compute(int a, int b) override {
return a * b;
}
};
struct OpMax : OpBase {
int compute(int a, int b) override {
return std::max(a, b);
}
};
int generic_sum(std::vector<int> const &v, OpBase *op) {
int ret = v[0];
for (int i = 1; i < v.size(); ++i) {
ret = op->compute(ret, v[i]); // 写起来也麻烦,需要调用他的成员函数,成员函数又要起名……
}
delete op;
return ret;
}
int main() {
std::vector<int> a = {1, 2, 3, 4};
generic_sum(a, new OpAdd());
generic_sum(a, new OpMul());
generic_sum(a, new OpMax());
return 0;
}
```
不仅需要定义一堆类,接口类,实现类,继承来继承去,还需要管理讨厌的指针,代码量翻倍,没什么可读性,又影响运行效率。
#fun[3 年 2 班小彭同学,你的妈妈开着 A380 来接你了。]
而现代 C++ 只需 Lambda 语法就地定义函数对象,爽。
```cpp
generic_sum(a, [](int a, int b) {
return a + b;
});
generic_sum(a, [](int a, int b) {
return a * b;
});
generic_sum(a, [](int a, int b) {
return std::max(a, b);
});
```
=== 函数对象在模板加持下静态分发
刚刚,我们的实现用了 `auto op` 做参数,这等价于让 `generic_sum` 变成一个模板函数。
```cpp
int generic_sum(std::vector<int> const &v, auto op);
// 不支持 C++20 时的替代写法:
template <typename Op>
int generic_sum(std::vector<int> const &v, Op op);
```
这意味着每当用户指定一个新的函数对象(lambda)时,`generic_sum` 都会重新实例化一遍。
```cpp
generic_sum(a, [](int a, int b) {
return a + b;
});
generic_sum(a, [](int a, int b) {
return a * b;
});
generic_sum(a, [](int a, int b) {
return std::max(a, b);
});
```
编译后,会变成类似于这样:
```cpp
generic_sum<add>(a);
generic_sum<mul>(a);
generic_sum<max>(a);
```
会生成三份函数,每个都是独立编译的:
```cpp
int generic_sum<add>(std::vector<int> const &v) {
int ret = v[0];
for (int i = 1; i < v.size(); ++i) {
ret = add(ret, v[i]);
}
return ret;
}
int generic_sum<mul>(std::vector<int> const &v) {
int ret = v[0];
for (int i = 1; i < v.size(); ++i) {
ret = mul(ret, v[i]);
}
return ret;
}
int generic_sum<max>(std::vector<int> const &v) {
int ret = v[0];
for (int i = 1; i < v.size(); ++i) {
ret = max(ret, v[i]);
}
return ret;
}
```
这允许编译器为每个版本的 `generic_sum` 单独做优化,量身定制最优的代码。
例如 `add` 这个函数对象,因为只在 `generic_sum<add>` 中使用了,会被编译器自动内联,不会产生函数调用和跳转的指令,各自优化成单独一条加法 / 乘法 / 最大值指令等。
#detail[比如,编译器会检测到 `+=` 可以矢量化,于是用 `_mm_add_epi32` 替代了。同理,mul 则用 `_mm_mullo_epi32` 替代,max 则用 `_mm_max_epi32` 替代等,各自分别生成了各自版本最优的代码。而如果是普通的函数指针,不会生成三份量身定做的实例,无法矢量化(有一种例外,就是编译器检测到了 `generic_sum` 似乎只有这三种可能参数,然后做了 IPO 优化,但并不如模板实例化一样稳定强制)。]
为三种不同的 op 参数分别定做三份。虽然增加了编译时间,膨胀了生成的二进制体积;但生成的机器码是分别针对每种特例一对一深度优化的,更高效。
#story[例如矩阵乘法(gemm)的最优算法,对于不同的矩阵大小和形状是不同的。著名的线性代数库 CUBLAS 和 MKL 中,会自动根据用户输入的矩阵形状,选取最优的算法。也就是说,CUBLAS 库里其实存着适合各种矩阵大小排列组合的算法代码(以 fatbin 格式存储在二进制中)。当调用矩阵乘法时,自动查到最适合的一版来调用给你。类似 gemm,还有 gemv、spmv……所有的矩阵运算 API 都经历了这样的“编译期”暴力排列组合,只为“运行时”释放最大性能!这也导致编译好的 cublas.dll 文件来到了恐怖的 20 MB 左右,而我们称之为高效。]
=== 函数对象也可在 function 容器中动态分发
Lambda 函数对象的类型是匿名的,每个 Lambda 表达式都会创建一个全新的函数对象类型,这使得 `generic_sum` 对于每个不同的 Lambda 都会实例化一遍。虽然有利于性能优化,但也影响了编译速度和灵活性。
#detail[通常,我们只能通过 `decltype(add)` 获取 `add` 这个 Lambda 对象的类型。也只能通过 `auto` 来捕获 Lambda 对象为变量。]
为此,标准库提供了 `std::function` 容器,他能容纳任何函数对象!无论是匿名的 Lambda 函数对象,还是普普通通的函数指针,都能纳入 `std::function` 的体内。
唯一的代价是,你需要写明所有参数的类型和返回值的类型。
例如参数为两个 `int`、返回值也是 `int` 的函数,对应的容器类型就是 `std::function<int(int, int)>`:
```cpp
auto add_lambda = [](int a, int b) { // Lambda 函数对象
return a + b;
};
struct AddClass {
int operator()(int a, int b) { // 自定义类模拟函数对象
return a + b;
}
};
AddClass add_object;
int add_regular_func(int a, int b) { // 普通函数
return a + b;
}
std::function<int(int, int)> add; // 所有广义函数对象,统统接纳
add = add_lambda; // OK
add = add_object; // OK
add = add_regular_func; // OK
```
```cpp
int generic_sum(std::vector<int> const &v,
std::function<int(int, int)> op) {
int ret = v[0];
for (int i = 1; i < v.size(); ++i) {
ret = op(ret, v[i]); // 写起来和模板传参时一样无感
}
// 无需指针,无需 delete,function 能自动管理函数对象生命周期
return ret;
}
```
#detail[如果还想支持任意类型的参数和返回值,那么你可以试试看 `std::function<std::any(std::any)>`。这里 `std::any` 是个超级万能容器,可以容纳任何对象,他和 `std::function` 一样都采用了“类型擦除 (type-erasure)”技术,缺点是必须配合 `std::any_cast` 才能取出使用,之后的模板元进阶专题中会详细介绍他们的原理,并带你自己做一个擦加法的类型擦除容器。]
函数式编程,能在静态与动态之间轻松切换,*高性能*与*灵活性*任君选择。
- 在需要性能的*瓶颈代码*中用模板传参,编译期静态分发,多次量身定做,提高运行时性能。
/ 瓶颈代码: 往往一个程序 80% 的时间花在 20% 的代码上。这 20% 是在程序中频繁执行的、计算量大的、或者调用特别耗时的函数。针对这部分瓶颈代码优化即可,而剩余的 80% 打酱油代码,大可以怎么方便怎么写。
- 在性能无关紧要的顶层业务逻辑中用 function 容器传参,运行时动态分发,节省编译体积,方便持久存储,灵活易用。
#tip[例如上面的 `generic_sum` 函数,如果我们突然想要高性能了,只需把 `std::function<int(int, int)> op` 轻轻改为 `auto op` 就轻松切换到静态分发模式了。]
而虚函数一旦用了,基本就只能动态分发了,即使能被 IPO 优化掉,虚表指针也永远占据着一个 8 字节的空间,且永远只能以指针形式传来传去。
#detail[一种静态分发版的虚函数替代品是 CRTP,他基于模板元编程,但与虚函数之间切换困难,不像函数对象那么无感,之后的模板元专题课中会专门介绍。]
=== 案例:函数对象的动态分发用于多线程任务队列
```cpp
mt_queue<std::function<void()>> task_queue;
void thread1() {
task_queue.push([] {
fmt::println("正在执行任务1");
});
task_queue.push([] {
fmt::println("正在执行任务2");
});
}
void thread2() {
while (true) {
auto task = task_queue.pop();
task();
}
}
```
#detail[`mt_queue` 是小彭老师封装的多线程安全的消息队列,实现原理会在稍后的多线程专题课中详细讲解。]
=== 函数对象的重要机制:闭包
=== 函数指针是 C 语言陋习,改掉
== bind 为函数对象绑定参数
```cpp
int hello(int x, int y) {
fmt::println("hello({}, {})", x, y);
return x + y;
}
int main() {
fmt::println("main 调用 hello(2, 3) 结果:{}", hello(2, 3));
fmt::println("main 调用 hello(2, 4) 结果:{}", hello(2, 4));
fmt::println("main 调用 hello(2, 5) 结果:{}", hello(2, 5));
return 0;
}
```
```cpp
int hello(int x, int y) {
    fmt::println("hello({}, {})", x, y);
    return x + y;
}

int main() {
    // 用 std::bind 把 hello 的第一个参数固定为 2,
    // std::placeholders::_1 占位:hello2 收到的参数将传给 hello 的 y
    auto hello2 = std::bind(hello, 2, std::placeholders::_1);
    fmt::println("main 调用 hello2(3) 结果:{}", hello2(3));
    fmt::println("main 调用 hello2(4) 结果:{}", hello2(4));
    fmt::println("main 调用 hello2(5) 结果:{}", hello2(5));
    return 0;
}
```
```
TODO
```
= 字符编码那些事
== 字符集
=== ASCII
ASCII 收录了英文字母、阿拉伯数字、标点符号等 128 个字符,每个字符都对应一个 0 到 127 范围内的数字。
如果你想要表示一个字符,就在这个表里寻找到相应的数字编号,然后存这个编号即可。
#image("pic/ascii.png")
例如下面的一串数字:
```
80 101 110 103
```
在 ASCII 表中查找,发现这些数字分别对应 `P`、`e`、`n`、`g` 四个字母,连起来就还原得到了原本的字符串“Peng”。
=== Latin-1
Latin-1 扩充了 ASCII 字符集,保持 ASCII 原有 0 到 127 的部分映射不变,额外追加了 128 到 255 的映射关系。因此也被称为 EASCII(扩展 ASCII)。
#image("pic/latin1.svg")
=== Unicode
Unicode 字符集为全世界的所有字符都对应了一个整数。
#codetab("字符", "编号", ("我", "戒", "戓", "戔", "戕", "或", "戗", "战", "戙", "戚"), ("25105", "25106", "25107", "25108", "25109", "25110", "25111", "25112", "25113", "25114"), 2)
出于历史兼容性考虑,Unicode 在 0 到 256 区间内的映射和 ASCII、Latin-1 是完全相同的。
#codetab("字符", "编号", ("P", "e", "n", "g"), ("80", "101", "110", "103"), 1)
Unicode 经过了许多版本的发展,早期的 Unicode 只收录了 65536 (0x10000) 个字符,后来扩充到了 1114112 (0x110000) 个字符。
#tip[虽然划出了 1114112 个码点的空间,不过其中很多都是空号,留待未来扩充使用。]
Unicode 字符映射表可以在网上找到:
- https://symbl.cc/en/unicode-table/
- https://www.compart.com/en/unicode/
=== 总结
/ 字符集: 从字符到整数的一一映射。
/ ASCII: 只收录了英文字母、阿拉伯数字、标点符号的字符集。
/ Latin-1: 在 ASCII 基础上追加了注音字母,满足欧洲用户需要。
/ Unicode: 收录了全世界所有文字和符号的字符集。
计算机存储字符时,实际上是存储了那个对应的整数。
这些整数就被称为 *码点 (code point)*,每个字符对应一个码点。
不过,程序员通常喜欢用十六进制书写数字:
#codetab("字符", "编号", ("我", "戒", "戓", "戔", "戕", "或", "戗", "战", "戙", "戚"), ("0x6211", "0x6212", "0x6213", "0x6214", "0x6215", "0x6216", "0x6217", "0x6218", "0x6219", "0x621A"), 2)
例如“我”这个字,在 Unicode 表中编号为 0x6211。于是当计算机需要表示“我”这个字符时,就用 0x6211 这个整数代替。
如果要表示多个字符,那就用一个整数的数组吧!
例如当计算机要处理“我爱𰻞𰻞面!”这段文字,就可以用:
```
0x6211 0x7231 0x30EDE 0x30EDE 0x9762 0x21
```
这一串数字代替。
== 字符编码
Unicode 只是指定了整数,没有规定整数如何在内存中存在。
/ 字符编码: 将字符的整数编号序列化为计算机可直接存储的一个或多个实际存在的整数类型。
Unicode 字符可以选用以下这些字符编码来序列化:
/ UTF-32: 每个 Unicode 字符用 1 个 `uint32_t` 整数存储。
/ UTF-16: 每个 Unicode 字符用 1 至 2 个 `uint16_t` 整数存储。
/ UTF-8: 每个 Unicode 字符用 1 至 4 个 `uint8_t` 整数存储。
翻译出来的这些小整数叫 *码位 (code unit)*。例如对于 UTF-8 而言,每个 `uint8_t` 就是他的码位。
=== UTF-32
Unicode 字符映射的整数范围是 0x0 到 0x10FFFF。
最大值 0x10FFFF 有 21 个二进制位,C 语言中 `uint32_t` 能容纳 32 个二进制位,所以最简单的方法是直接用 `uint32_t` 数组来一个个容纳 Unicode 字符码点。虽然浪费了 11 位,但至少所有 Unicode 字符都能安全容纳。
例如当计算机要存储“我爱𰻞𰻞面!”这段文字,就可以用:
```cpp
std::vector<uint32_t> s = {
0x00006211, // 我
0x00007231, // 爱
0x00030EDE, // 𰻞
0x00030EDE, // 𰻞
0x00009762, // 面
0x00000021, // !
};
```
这个数组表示。
UTF-32 中,一个码点固定对应一个码位,所以说 UTF-32 是*定长编码*。定长编码的优点是:
- 数组的长度,就是字符串中实际字符的个数。
- 要取出单个字符,可以直接用数组的索引操作。
- 无论对数组如何切片,都不会把一个独立的字符破坏。
- 反转数组,就可以把字符串反转,不会产生破坏字符的问题。
缺点是:
- 浪费存储空间。
因此,我们推荐在计算机内存中,始终采用 UTF-32 形式处理文字。
#tip[UTF-32 也被称为 UCS-4,他俩是同义词。]
=== UTF-8
UTF-32 虽然方便了文字处理,然而,却浪费了大量的存储空间,不利于文字存储!一个字符,无论他是常用还是不常用,都要霸占 4 个字节的空间。
Unicode 编码字符时,特意把常用的字符靠前排列了。
世界上常用语言文字都被刻意编码在了 0 到 0xFFFF 区间内,超过 0x10000 的基本都是不常用的字符,例如甲骨文、埃及象形文字、Emoji 等,很多都是已经无人使用的古代文字和生僻字,例如“𰻞”。仅仅是为了这些偶尔使用的罕见文字,就要求所有文字都用同样的 4 字节宽度存储,实在是有点浪费。
在 0 到 0xFFFF 区间内,同样有按照常用度排序:
- 0 到 0x7F 是(欧美用户)最常用的英文字母、阿拉伯数字、半角标点。
- 0x80 到 0x7FF 是表音文字区,常用的注音字母、拉丁字母、希腊字母、西里尔字母、希伯来字母等。
- 0x800 到 0xFFFF 是表意文字,简繁中文、日文、韩文、泰文、马来文、阿拉伯文等。
- 0x10000 到 0x10FFFF 是不常用的稀有字符,例如甲骨文、埃及象形文字、Emoji 等。
UTF-8 就是为了解决压缩问题而诞生的。
UTF-8 把一个码点序列化为一个或多个码位,一个码位用 1 至 4 个 `uint8_t` 整数表示。
- 0 到 0x7F 范围内的字符,用 1 个字节表示。
- 0x80 到 0x7FF 范围内的字符,用 2 个字节表示。
- 0x800 到 0xFFFF 范围内的字符,用 3 个字节表示。
- 0x10000 到 0x10FFFF 范围内的字符,用 4 个字节表示。
序列化规则如下:
==== 0 到 0x7F
对于 0 到 0x7F 的字符,这个范围的字符需要 7 位存储。
我们选择直接存储其值。
例如 'P' 会被直接存储为其 Unicode 值 80(0x50):
```
01010000
```
由于 Unicode 在 0 到 0x7F 范围内与 ASCII 表相同,而 UTF-8 又把 0 到 0x7F 的值直接存储,所以说 UTF-8 兼容 ASCII。这使得原本设计于处理 ASCII 的 C 语言函数,例如 strlen、strcat、sprintf 等,都可以直接无缝切换到 UTF-8。反之亦然,任何设计用于 UTF-8 的程序都可以完全接受 ASCII 格式的输入文本。
#detail[但部分涉及字符长度的函数会有些许不兼容,例如 strlen 求出的长度会变成字节的数量而不是字符的数量了,例如 `strlen("我们")` 会得到 6 而不是 2,稍后讲解。]
==== 解码规则
UTF-8 的构造就像一列小火车一样,不同范围内的码位会被编码成不同长度的列车,但他们都有一个车头。
根据火车头的“等级”,我们可以推断出后面拉着几节车厢。
火车头是什么等级由他的二进制前缀决定:
+ 如果是 `0` 开头,就说明是单独一台火车头,后面没有车厢了,这表示车头里面直接装着 0 到 0x7F 范围的普通 ASCII 字符。
+ 如果是 `110` 开头,就说明后面拖着一节车厢,里面装着 0x80 到 0x7FF 范围内的欧洲字符。
+ 如果是 `1110` 开头,就说明后面拖着两节车厢,里面装着 0x800 到 0xFFFF 范围内的世界常用字符。
+ 如果是 `11110` 开头,就说明后面拖着三节车厢,里面装着 0x10000 到 0x10FFFF 范围内的生僻字符。
+ 如果是 `10` 开头,就说明这是一节车厢,车厢不会单独出现,只会跟在火车头屁股后面。如果你看到一节单独的车厢在前面无头驾驶,就说明出错了。
#fun[小朋友用小号列车装,大朋友用大号列车装。]
例如下面这一串二进制:
```
11100110 10001000 10010001
```
首先,看到第一个字节,是 `1110` 开头的三级车头!说明后面还有两节车厢是属于他的。火车头中 4 位用于表示车头等级了,剩下还有 4 位用于装乘客。
车厢也有固定的前缀,所有的车厢都必须是 `10` 开头的。去除这开头的 2 位,剩下的 6 位就是乘客。
对于这种三级列车,4 + 6 + 6 总共 16 位二进制,刚好可以装得下 0xFFFF 内的乘客。
```
0110 001000 010001
```
编码时则是反过来。
乘客需要被拆分成三片,例如对于“我”这个乘客,“我”的码点是 0x6211,转换成二进制(补齐到 16 位)是:

```
0110001000010001
```

把乘客切分成高 4 位、中 6 位和低 6 位:

```
0110 001000 010001
```
加上 `1110`、`10` 和 `10` 前缀后,形成一列火车:
```
11100110 10001000 10010001
```
这样,我们就把“我”这个字符,编码成了三节列车,塞进字节流的网络隧道里了。
总结:
+ 前缀是 0 的火车头:火车头直接载客 7 名。
+ 前缀是 10 的是车厢:车厢不会单独出现,只会跟在火车头屁股后面。
+ 前缀是 110 的火车头:火车头直接载客 5 名 + 1 节车厢载客 6 名 = 共 11 名。
+ 前缀是 1110 的火车头:火车头直接载客 4 名 + 2 节车厢各载客 6 名 = 共 16 名。
+ 前缀是 11110 的火车头:火车头直接载客 3 名 + 3 节车厢各载客 6 名 = 共 21 名。
#fun[高级车头装了防弹钢板,载客空间变少,只好匀到后面的车厢。]
==== UTF-8 的抗干扰机制
如果发现 `10` 开头的独立车厢,就说明出问题了,可能是火车被错误拦腰截断,也可能是字符串被错误地反转。因为 `10` 只可能是火车车厢,不可能出现在火车头部。此时解码器应产生一个报错,或者用错误字符“�”替换。
```
10000010 10000001
```
#tip[在网络收发包时,如果你不妥善处理 TCP 粘包问题,就可能火车头进去了,火车尾巴还露在隧道外面,一段完整的列车被切断,导致 UTF-8 解读的时候出错。正确的做法是设立一个状态机来解码 UTF-8。C 语言的 `mbstate_t` 就是这种状态机,稍后讲解。]
除此之外,如果检测到一个三级火车头,却发现里面装着 0x394 (“Δ”),这是一个用二级火车头就能装下的欧洲字符,却用了三级火车头装,说明装箱那边的人偷懒滥用资源了!这种情况下 UTF-8 解码器也要产生一个报错,因为 UTF-8 要保证编码的唯一性,0x394 是 0x80 到 0x7FF 范围的,就应该用二级火车头装。
以及,如果发现 `11111` 开头的五级火车头,也要报错,因为 UTF-8 最多只支持四级火车头。
如果检测到一个四级火车头拆开后的字符范围超过了 0x10FFFF,这超出了 Unicode 的范围,也要产生一个报错。如果一个三级火车头拆开后发现字符范围处在保留区 0xD800 到 0xDFFF 内,这是 Unicode 承诺永不加入字符的区间(稍后讲解 UTF-16 时会解释为什么),也要报错。总之 Unicode 码点的合法范围是 0x0 到 0xD7FF,0xE000 到 0x10FFFF。
总之,UTF-8 具有一定的冗余和自纠错能力,如果传输过程中出现差错,可能会爆出错误字符“�”。这个特殊字符是 Unicode 官方规定的,码点为 0xFFFD,出现他就意味着 UTF-8 解码失败了。
==== “我爱𰻞𰻞面!”
例如当计算机要以 UTF-8 格式存储“我爱𰻞𰻞面!”这段文字:
```cpp
std::vector<uint8_t> s = {
0xE6, 0x88, 0x91, // 我,需要三级列车
0xE7, 0x88, 0xB1, // 爱,需要三级列车
    0xF0, 0xB0, 0xBB, 0x9E, // 𰻞,需要四级列车
    0xF0, 0xB0, 0xBB, 0x9E, // 𰻞,需要四级列车
0xE9, 0x9D, 0xA2, // 面,需要三级列车
0x21, // !,这是个 ASCII 范围的字符,直接用单个火车头装
};
```
UTF-8 中,一个码点可能对应多个码位,所以说 UTF-8 是一种*变长编码*。变长编码的缺点是:
- 数组的长度,不一定是字符串中实际字符的个数。因此,要取出单个字符,需要遍历数组,逐个解析码位。
- 数组的单个元素索引,无法保证取出一个完整的字符。
- 对数组的切片,可能会把一个独立的字符切坏。
- 反转数组,不一定能把字符串的反转,因为可能不慎把一个字符的多个码位反转,导致字符破坏。
优点是:
- 节约存储空间。
我们推荐只在网络通信、硬盘存储时,采用 UTF-8 形式存储文字。
总之,UTF-8 适合存储,UTF-32 适合处理。
我们建议计算机从硬盘或网络中读出 UTF-8 字符串后,立即将其转换为 UTF-32,以方便后续文字处理。当需要写入硬盘或网络时,再转换回 UTF-8,避免硬盘容量和网络带宽的浪费。
计算机需要外码和内码两种:
+ 外码=硬盘和网络中的文本=UTF-8
+ 内码=内存中的文本=UTF-32
=== UTF-16
UTF-16 的策略是:既然大多数常用字符的码点都在 0x0 到 0xFFFF 内,用 `uint32_t` 来存储也太浪费了。他的方案如下:
对于 0x0 到 0xFFFF 范围内的字符,就用一个 `uint16_t` 直接存。
对于 0xFFFF 到 0x10FFFF 范围的稀有字符,反正不常见,就拆成两个 `uint16_t` 存。这个拆的方案很有讲究,如果只是普通的拆,由于解码时收到的是个没头没尾的字节序列,无法分辨这到底是两个 `uint16_t` 的稀有字符,还是一个 `uint16_t` 的普通字符。
例如,我们把一个稀有字符“𰻞”(0x30EDE)拆成两个 `uint16_t`,得到 0x3 和 0x0EDE。如果直接存储这两个 `uint16_t`:
```
0x0003 0x0EDE
```
之后解码时,先读到 0x0003,会以为他是单独的一个 `uint16_t`,表示 3 号字符(一个不可见的控制字符);后面的 0x0EDE 也被当作单独的一个 `uint16_t`,表示 0x0EDE 号字符“ໞ”。这样一来,“𰻞”就变成了两个毫不相干的字符。
为了避免与普通字符产生歧义,两个 `uint16_t` 需要采用一种特殊的方式以示区分。让解码器一看到,就能确定这两个 `uint16_t` 需要组装成同一个字符。
这就用到了一个“漏洞”:Unicode 并没有把码点分配的满满当当,或许是出于先见之明,在 0xD800 到 0xDFFF 之间预留了一大段空号:
#image("pic/ucs2range.png")
UTF-16 就是利用了这一段空间,他规定:0xD800 到 0xDFFF 之间的码点将永远不用来表示字符,而是作为*代理对 (surrogate-pair)*。其中 0xD800 到 0xDBFF 是*高位代理 (high surrogate)*,0xDC00 到 0xDFFF 是*低位代理 (low surrogate)*。高代理在前,低代理在后。
一个超过 0xFFFF 的稀有字符,会被拆成两段,一段放在高位代理里,一段放在低位代理里,一前一后放入 `uint16_t` 序列中。
#fun[搭载超宽超限货物的车辆需要被拆分成两段再进入隧道。]
具体拆分方法如下:
对于 0xFFFF 到 0x10FFFF 范围的码点,首先将其值减去 0x10000,变成一个范围 0x0 到 0xFFFFF 范围内的数字,这能保证他们只需 20 个二进制位即可表示。
例如“𰻞”对应的码点 0x30EDE,减去后就变成 0x20EDE。
然后,写出 0x20EDE 的二进制表示:
```
00100000111011011110
```
总共 20 位,我们将其拆成高低各 10 位:
```
0010000011 1011011110
```
各自写出相应的十六进制数:
```
0x083 0x2DE
```
因为最多只有 10 位,这两个数都会在 0 到 0x3FF 的范围内。
而 0xD800 到 0xDBFF,和 0xDC00 到 0xDFFF 预留的空间,刚好可以分别容纳 0x400 个数!
所以,我们将拆分出来的两个 10 位数,分别加上 0xD800 和 0xDC00:
```
0xD800+0x083=0xD883
0xDC00+0x2DE=0xDEDE
```
这两个数,必定是 0xD800 到 0xDBFF,和 0xDC00 到 0xDFFF 范围内的数。而这两个范围都是 Unicode 委员会预留的代理对区间,绝对没有普通字符。所以,生成的两个代理对不会与普通字符产生歧义,可以放心放进 `uint16_t` 数组,解码器如果检测到代理对,就说明是两节车厢,可以放心连续读取两个 `uint16_t`。
所以,`0xD883 0xDEDE` 就是“𰻞”用 UTF-16 编码后的结果。
代理字符不是一个完整的字符,当解码器检测到一个 0xD800 到 0xDBFF 范围内的高代理时,就预示着还需要再读取一个低代理,才能拼接成一个稀有字符。
如果接下来读到的不是 0xDC00 到 0xDFFF 范围的低代理字符,而是普通字符的话,那就说明出错了,可能是中间被人丢包了,需要报错或者用错误字符“�”顶替。
另外,如果读到了一个单独存在的 0xDC00 到 0xDFFF 范围内的低代理字符,那也说明出错了,因为代理字符只有成对出现才有意义,低代理字符不可能单独在开头出现。
可见,UTF-16 和 UTF-8 一样,都是“小火车”式的变长编码,UTF-16 同样也有着类似于 UTF-8 的抗干扰机制。
=== 字节序问题,大小端之争
在计算机中,多字节的整数类型(如 `uint16_t` 和 `uint32_t`)需要被拆成多个字节来存储。拆开后的高位和低位按什么顺序存入内存?不同的硬件架构产生了争执:
- 大端派 (big endian):低地址存放整数的高位,高地址存放整数的低位,也就是大数靠前!这样数值的高位和低位和人类的书写习惯一致。例如,0x12345678,在内存中就是:
```
0x12 0x34 0x56 0x78
```
- 小端派 (little endian):低地址存放整数的低位,高地址存放整数的高位,也就是小数靠前!这样数值的高位和低位和计算机电路的计算习惯一致。例如,0x12345678,在内存中就是:
```
0x78 0x56 0x34 0x12
```
例如,Intel 的 x86 架构和 ARM 公司的 ARM 架构都是小端派,而 Motorola 公司的 68k 架构和 Sun 公司的 SPARC 架构都是大端派。
#tip[这其实是很无聊的争执,为人类的书写习惯改变计算机的设计毫无道理,毕竟世界上也有从右往左书写的文字和从上往下书写的文字,甚至有左右来回书写的文字……如果要伺候人类,你怎么不改成十进制呢?总之,我认为小端才是最适合计算机的,市面上大多数主流硬件都是小端架构。]
在网络通信时,发消息和收消息的可能是不同的架构,如果发消息的是小端架构,收消息的是大端架构,那么发出去的是 0x12345678,收到的就会变成 0x78563412 了。
因此互联网一般规定,所有多字节的数据在网络包中统一采用大端。对于大端架构,他们什么都不需要做,对于小端架构,在发包前需要把自己的小端数据做字节序反转,变成大端的以后,再发送。之后的网络专题课中我们会详解这一块。
#story[基于字节码的虚拟机语言通常会规定一个字节序:像 Java 这种面向互联网语言,索性也规定了统一采用大端,无论 JVM 运行在大端机器还是小端机器上。这使得他与互联网通信比较方便,而在 x86 和 ARM 架构上,与本地只接受小端数据的 API,例如 OpenGL,沟通较为困难,需要做额外的字节序转换。而 C\# 主打游戏业务(例如 Unity),需要考虑性能,所以规定全部采用小端。作为底层编程语言的 C++ 则是入乡随俗,你的硬件是什么端,他就是什么端,不主动做任何额外的转换。]
UTF-16 和 UTF-32 的码位都是多字节的,也会有大小端问题。例如,UTF-16 中的 `uint16_t` 序列:
```
0x1234 0x5678
```
在大端派的机器中,就是:
```
0x12 0x34 0x56 0x78
```
在小端派的机器中,就是:
```
0x34 0x12 0x78 0x56
```
这样一来,UTF-16 和 UTF-32 的字节流,在不同的机器上,可能会有不同的顺序。这给跨平台的文本处理带来了麻烦。
所以当你需要把 UTF-16 存入硬盘和在网络发送时,还需要额外指明你用的是大端的 UTF-16 还是小端的 UTF-16。
因此 UTF-16 和 UTF-32 进一步分裂为:
- UTF-16LE:小端的 UTF-16
- UTF-16BE:大端的 UTF-16
- UTF-32LE:小端的 UTF-32
- UTF-32BE:大端的 UTF-32
如果只在内存的 `wchar_t` 中使用就不用区分,默认跟随当前机器的大小端。所以 UTF-16 和 UTF-32 通常只会出现在内存中用于快速处理和计算,很少用在存储和通信中。
UTF-8 是基于单字节的码位,火车头的顺序也有严格规定,火车头总是在最前,根本不受字节序大小端影响,也就没有影响。
UTF-16 和 UTF-32 压缩率低,又存在大小端字节序不同的问题:互联网数据需要保证统一的大小端,收发包时需要额外转换,因而不太适合网络传输。而 UTF-8 的存储单位是字节,天生没有大小端困扰。更妙的是,他还完全兼容 ASCII,而互联网又是古董中间件最多的地方……
总之,完全基于字节的 UTF-8 是最适合网络通信和硬盘存储的文本编码格式,而 UTF-32 是最适合在内存中处理的格式。
=== BOM 标记
0xFEFF 是一个特殊的不可见字符“”,这是一个零宽空格,没有任何效果。
你可以把这个字符加在文本文件的头部,告诉读取该文件的软件,这个文件是用什么编码的。
如果是 UTF-16 和 UTF-32,因为 0xFEFF 不对称,他还能告诉你是大端还是小端。因此 0xFEFF 被称为字节序标志(Byte-order-mark,BOM)。
如果读取该文件的软件不支持解析 BOM,那么他照常读出 0xFEFF,一个零宽空格,在文本中不显示,不影响视觉结果。
#story[一些老的编译器(远古 MinGW,现在已经没有了)不支持解析 BOM,会把带有 BOM 的 UTF-8 的 .cpp 源码文件,当作头部带有错误字符的乱码文件,从而报错。这是因为 Windows 的记事本保存为 UTF-8 时,总是会加上 BOM。如果记事本发现一个文件没有 BOM,会当作 ANSI(GBK)来读取。]
0xFEFF 在不同的编码下会产生不同的结果:
+ UTF-8:`0xEF 0xBB 0xBF`,他会占用 3 字节,而且不会告诉你是大端还是小端,因为 UTF-8 是没有大小端问题的。
+ UTF-16:如果是大端,就是 `0xFE 0xFF`,如果是小端,就是 `0xFF 0xFE`。
+ UTF-32:如果是大端,就是 `0x00 0x00 0xFE 0xFF`,如果是小端,就是 `0xFF 0xFE 0x00 0x00`。
因此,在文本头部加上 BOM 有助于软件推测该文件是什么编码的(如果那软件支持解析 BOM 的话)。
#story[例如 Windows 环境中,所有的文本文件都被默认假定为 ANSI(GBK)编码,如果你要保存文本文件为 UTF-8 编码,就需要加上 BOM 标志。当 MSVC 读取时,看到开头是 `0xEF 0xBB 0xBF`,就明白这是一个 UTF-8 编码的文件。这样,MSVC 就能正确地处理中文字符串常量了。如果 MSVC 没看到 BOM,会默认以为是 ANSI(GBK)编码的,从而中文字符串常量会乱码。开启 `/utf-8` 选项也能让 MSVC 把没有 BOM 的源码文件当作 UTF-8 来解析,适合跨平台宝宝体质。]
== C/C++ 中的字符
=== 字符类型
#table(
columns: 4,
inset: 3pt,
align: horizon,
[类型], [大小], [编码], [字面量],
[Linux `char`], [1 字节], [取决于 `$LC_ALL`], ["hello"],
[Windows `char`], [1 字节], [取决于系统区域设置], ["hello"],
[Linux `wchar_t`], [4 字节], [UTF-32], [L"hello"],
[Windows `wchar_t`], [2 字节], [UTF-16], [L"hello"],
[`char8_t`], [1 字节], [UTF-8], [u8"hello"],
[`char16_t`], [2 字节], [UTF-16], [u"hello"],
[`char32_t`], [4 字节], [UTF-32], [U"hello"],
)
由此可见,`char` 和 `wchar_t` 是不跨平台的。
对于中国区 Windows 来说,区域设置默认是 GBK。对于美国区 Windows 来说,区域设置默认是 Windows-1252(西欧编码)。
对于 Linux 用户来说,如果你没有专门修改过,`$LC_ALL` 默认是 `en_US.UTF-8` 或 `C.UTF-8`。
这带来了巨大的混淆!很多美国程序员潜意识里会想当然地把 `char` 当作 UTF-8 来用。很多开源项目,第三方库,甚至很多国人做的项目,都被这种“想当然”传染了。
#tip[好消息是无论“区域设置”是什么,肯定兼容 ASCII。例如 GBK 和 UTF-8 都兼容 ASCII,否则就和所有的 C 语言经典函数如 `strlen`,换行符 `'\n'`,路径分隔符 `'/'` 和 `'\\'` 冲突了。]
`wchar_t` 就好一些,虽然在 Windows 系统上是糟糕的 UTF-16,但至少稳定了,不会随着系统区域设置而随意改变,只要你不打算跨平台,`wchar_t` 就是 Windows 程序的标配。
=== 思考:UTF-8 为什么完美兼容 ASCII
UTF-8 的火车头和车厢,都是 `1` 开头的,而 ASCII 的单体火车头永远是 `0` 开头。这很重要,不仅火车头需要和 ASCII 区分开来,车厢也需要。考虑这样一个场景:
```cpp
std::u32string path = U"一个老伯.txt";
```
“一个老伯” 转换为 Unicode 码点分别是:
```
0x4E00 0x4E2A 0x8001 0x4F2F
```
如果让他们原封不动直接存储进 char 数组里:
```
0x4E 0x00 0x4E 0x2A 0x80 0x01 0x4F 0x2F
```
就出问题了!首先,这里 0x4E00 的 0x00 部分,会被 C 语言当作是字符串的结尾。如果拿这样的字符串去调用操作系统的 open 函数,他会以为你在打开 0x4E 单个字符的文件名,也就是 `"N"`。
更糟糕的是,0x2F 对应的 ASCII 字符是 `'/'`,是路径分隔符。操作系统会以为你要创建一个子文件夹下的文件 `"N\x00N*\x80\x01O/.txt"`,文件夹名字叫 `"N\x00N*\x80\x01O"` 而文件叫 `".txt"`。
为了能让针对 ASCII 设计的操作系统 API 支持中文文件名,就只能绕开所有 0x7F 以下的值。这就是为什么 UTF-8 对车厢也全部抬高到 0x80 以上,避免操作系统不慎把车厢当作是 `'/'` 或 `'\0'`。
=== UTF-8 确实几乎完美支持字符串所有操作
由于巨大的惯性,很多人都想当然的把 `std::string` 当作 UTF-8 来使用。对于简单的打印,常规的字符串操作,是没问题的。
字符串操作有下面这几种,得益于 UTF-8 优秀的序列化涉及和冗余抗干扰机制,绝大多数 ASCII 支持的操作,UTF-8 字符串都能轻松胜任,唯独其中*涉及“索引”和“长度”的*一部分操作不行。这是由于变长编码的固有缺陷,如果需要做“索引”类操作,还是建议先转换成定长的 UTF-32 编码。
#table(
columns: 4,
inset: 3pt,
align: horizon,
[操作], [UTF-8], [UTF-32], [GBK],
[求字符串长度], [×], [√], [×],
[判断相等], [√], [√], [√],
[字典序的大小比较], [√], [√], [×],
[字符串拼接], [√], [√], [√],
[搜索子字符串], [√], [√], [×],
[搜索单个字符], [×], [√], [×],
[按索引切下子字符串], [×], [√], [×],
[按索引获取单个字符], [×], [√], [×],
[遍历所有字符], [×], [√], [×],
[按子字符串切片], [√], [√], [×],
[按索引切片], [×], [√], [×],
[查找并替换子字符串], [√], [√], [×],
[查找并删除子字符串], [√], [√], [×],
[按索引删除子字符串], [×], [√], [×],
[删除单个字符], [×], [√], [×],
)
为什么?我们来看一个实验:
```cpp
std::string s = "你好";
fmt::println("s 的长度:{}", s.size());
```
(使用 `/utf-8` 编译)运行后,会得到 6。
因为 `std::string` 的 `size()` 返回的是 `char` 的数量,而不是真正字符的数量。在 UTF-8 中,一个非 ASCII 的字符会被编码为多个 `char`,对于中文而言,中文都在 0x2E80 到 0x9FFF 范围内,属于三级列车,也就是每个汉字会被编码成 3 个 `char`。
`char` 是字节(码位)而不是真正的字符(码点)。真正的 Unicode 字符应该是 `char32_t` 类型的。调用 `std::string` 的 `size()` 或者 `strlen` 得到的只是“字节数量”。
而 UTF-32 中,每个字符(码点)都对应一个独立的 `char32_t`(码位),`size()` 就是真正的“字符数量”,这就是定长编码的优势。
```cpp
std::u32string s = U"你好";
fmt::println("s 的长度:{}", s.size());
```
如果你的操作只涉及字符串查拼接与查找,那就可以用 UTF-8。如果大量涉及索引,切片,单个字符的操作,那就必须用 UTF-32(否则一遇到汉字就会出错)。
```cpp
std::vector<std::string> slogan = {
"小彭老师公开课万岁", "全世界程序员大团结万岁",
};
std::string joined;
for (auto const &s: slogan) {
joined += s; // 只是拼接而已,UTF-8 没问题
}
```
UTF-8 按索引切片的出错案例:
```cpp
std::string s = "小彭老师公开课万岁";
fmt::println("UTF-8 下,前四个字节:{}", s.substr(0, 4));
// 会打印 “小�”
```
```cpp
std::u32string s = U"小彭老师公开课万岁";
fmt::println("UTF-32 下,前四个字符:{}", s.substr(0, 4));
// 会打印 “小彭老师”
```
只有当索引来自 `find` 的结果时,UTF-8 字符串的切片才能正常工作:
```cpp
std::string s = "小彭老师公开课万岁";
size_t pos = s.find("公"); // pos = 12
fmt::println("UTF-8 下,“公”前的所有字节:{}", s.substr(0, pos));
// 会打印 “小彭老师”
```
```cpp
std::u32string s = U"小彭老师公开课万岁";
size_t pos = s.find(U'公'); // pos = 4
fmt::println("UTF-32 下,“公”前的所有字符:{}", s.substr(0, pos));
// 会打印 “小彭老师”
```
#tip[注意到这里 UTF-8 的 `"公"` 需要是字符串,而不是单个字符。]
UTF-8 无法取出单个非 ASCII 字符,对于单个中文字符,仍然只能以字符串形式表达(由多个字节组成)。
```cpp
std::string s = "小彭老师公开课万岁";
fmt::print("UTF-8 下第一个字节:{}", s[0]);
// 可能会打印 ‘å’ (0xE5),因为“小”的 UTF-8 编码是 0xE5 0xB0 0x8F
// 也可能是乱码“�”,取决于终端理解的编码格式
```
```cpp
std::u32string s = U"小彭老师公开课万岁";
fmt::print("UTF-32 下第一个字符:{}", s[0]);
// 会打印 ‘小’
```
UTF-8 字符串的反转也会出问题:
```cpp
std::string s = "小彭老师公开课万岁";
std::reverse(s.begin(), s.end()); // 会以字节为单位反转,导致乱码
```

```cpp
std::u32string s = U"小彭老师公开课万岁";
std::reverse(s.begin(), s.end()); // 会按字符正常反转,得到“岁万课开公师老彭小”
```
*总结:UTF-8 只能拼接、查找、打印。不能索引、切片、反转。*
#tip[按索引切片不行,但如果索引是 find 出来的就没问题。]
=== 轶事:“ANSI” 与 “Unicode” 是什么
在 Windows 官方的说辞中,有“Unicode 编码”和“ANSI 编码”的说法。当你使用 Windows 自带的记事本程序,保存文本文件时,就会看到这样的选单:
#image("pic/notepad.png")
翻译一下:
- “ANSI”指的是“区域设置”里设置的那个编码格式。
- 所谓“Unicode”其实指的是 UTF-16。
- 所谓“Unicode big endian”指的是大端 UTF-16。
- “UTF-8”指的是 UTF-8 with BOM 而不是正常的 UTF-8。
实际上 Unicode 只是一个字符集,只是把字符映射到整数,更没有什么大端小端,UTF-16 才是编码格式。
而 ANSI 本来应该是 ASCII 的意思,`char` 本来就只支持 ASCII。
但由于当时各国迫切需要支持自己本国的文字,就在兼容 ASCII 的基础上,发展出了自己的字符集和字符编码。这些当地特供的字符集里只包含了本国文字,所有这些各国的字符编码也都和 UTF-8 类似,采用火车头式的变长编码,对 0 开头的 ASCII 部分也都是兼容。所以 Windows 索性把 ANSI 当作“各国本地文字编码”的简称了。但后来互联网的出现,“区域设置”带来了巨大的信息交换困难。
#fun[例如你在玩一些日本的 galgame 时,会发现里面文字全部乱码。这是因为 Windows 在各个地区发行的是“特供版”:在中国大陆地区,他发行的 Windows 采用 GBK 字符集,在日本地区,他发行的 Windows 采用 Shift-JIS 字符集。日本程序员编译程序时,程序内部存储的是 Shift-JIS 的那些“整数”。这导致日本的 galgame 在中国大陆特供的 Windows 中,把 Shift-JIS 的“整数”用 GBK 的表来解读了,从而乱码(GBK 里的日文区域并没有和 Shift-JIS 重叠)。需要用 Locale Emulator 把 Shift-JIS 翻译成 Unicode 读给 Windows 听。如果日本程序员从一开始就统一用 Unicode 来存储,中国区玩家的 Windows 也统一用 Unicode 解析,就没有这个问题。]
这种情况下,Unicode 组织出现了,他的使命就是统一全世界的字符集,保证全世界所有的文字都能在全世界所有的计算机上显示出来。首先创办了 Unicode 字符集,然后规定了 UTF-8、UTF-16、UTF-32 三种字符编码,最终 UTF-8 成为外码的主流,UTF-32 成为内码的主流。
接下来为了方便记忆,我们索性就顺着微软的这个说法:
- 管 `char` 叫 ANSI:随“区域设置”而变。
- 管 `wchar_t` 叫 Unicode:在 Windows 上是 UTF-16,在 Linux 上是 UTF-32。
=== 小笑话:UTF-16 的背刺
微软管 UTF-16 叫 Unicode 是纯粹的历史遗留问题:
因为早年的 Unicode 只有 0 到 0xFFFF 的字符,16 位就装得下,所以当时 UTF-16 还是一个*定长编码*。微软于是决定把 `wchar_t` 定义成 2 字节,并在 NT 内核中,为每个系统调用都升级成了基于 `wchar_t` 字符串的 “W 系” API。
比尔盖子当时以为这样 UTF-16 定长内码就一劳永逸了,并号召所有程序都改用 UTF-16 做内码,别用 “A 系” API 了。
#fun[起初,所有人都以为 UTF-16 就是最终答案。]
没想到后来 Unicode 委员会“背刺”了比尔盖子!偷偷把范围更新到了 0x10FFFF,突破了 16 位整数的容量。原来的 UTF-16 已经容纳不下,只好利用之前预留的 0xD800 到 0xDFFF 空号区间丑陋地实现了变长编码。
#fun[直到 UTF-16 一夜之间成了丑陋的*变长编码*。]
闹了半天,Windows 费心费力替 Unicode 委员会好不容易推广的 `wchar_t`,既没有 UTF-8 兼容 ASCII 的好处,又没有 UTF-32 *定长编码*的好处。可 “W 系” API 却又焊死在了 NT 内核最底层,反复来坑第一次用 Windows 编程的初学者。
#fun[比尔盖子:你这样显得我很小丑诶?]
除 Windows 外,Java 也是“UTF-16 背刺”的受害者,他们想当然的把 char 定义为 UTF-16,以为这就是未来永久的定长内码,一劳永逸…… 直到 Unicode 加入了 0x10FFFF,Java 不得不重新定义了个 Character 作为 UTF-32 字符,还弄个 char 到 Character 的转换,好不尴尬!
#fun[Linux 成立于 1991 年,当时 Unicode 也才刚刚出现。Unicode 宣布加入 0x10FFFF 后,Linux 才开始引入支持 Unicode。在知道了 Unicode 包含 0x10FFFF 后,他们一开始就把 `wchar_t` 定义成 4 字节,逃过了 UTF-16 的背刺。]
#tip[后来新出的语言,如 Python 3、Go、Rust、Swift,把字符钦定为 UTF-32 了。他们只有在调用 Windows API 时,才会临时转换为 UTF-16 来调用,除此之外再无 UTF-16 出现。]
#fun[许多糟糕的博客声称:是因为“UTF-16 最有利于中文压缩”,所以 Java 和 Windows 才采用的?然而就我了解到的实际情况是因为他们错误的以为 0xFFFF 是 Unicode 的上限才错误采用了,不然为什么后来的新语言都采用了 UTF-32 内码 + UTF-8 外码的组合?而且在外码中采用 UTF-8 或 UTF-16 压缩确实没问题,但是 Java 和 Windows 的失误在于把 UTF-16 当作内码了!内码就理应是定长编码的才方便,如果你有不同想法,欢迎留言讨论。]
总之,UTF-16 是糟粕,但他是 Windows 唯一完整支持的 Unicode 接口。不建议软件内部用 UTF-16 存储文字,你可以用更紧凑的 UTF-8 或更方便切片的 UTF-32,只需在调用操作系统 API 前临时转换成 UTF-16 就行。
=== 强类型系统只是君子协议
必须指出:在 `std::string` 中装 UTF-8 并不是未定义行为,在 `std::u8string` 里同样可以装 GBK。这就好比一个名叫 `Age` 的枚举类型,实际却装着性别一样。
```cpp
enum Age { // 错误示范
Male,
Female,
Custom,
};
// 除了迷惑同事外,把年龄和性别的类型混用没有好处
void registerStudent(Age age, Age sex);
```
区分类型只是大多数人设计接口的规范,只是方便你通过看函数接口一眼区分这个函数接受的是什么格式的字符串,并没有强制性。例如下面这段代码一看就知道这些函数需要的是什么编码的字符串。
```cpp
void thisFuncAcceptsANSI(std::string msg);
void thisFuncAcceptsUTF8(std::u8string msg);
void thisFuncAcceptsUTF16(std::u16string msg);
void thisFuncAcceptsUnicode(std::wstring msg);
void thisFuncAcceptsUTF32(std::u32string msg);
```
用类型别名同样可以起到差不多的说明效果(缺点是无法重载):
```cpp
using ANSIString = std::string;
using UTF8String = std::string;
using UTF16String = std::vector<uint16_t>;
void thisFuncAcceptsANSI(ANSIString msg);
void thisFuncAcceptsUTF8(UTF8String msg);
void thisFuncAcceptsUTF16(UTF16String msg);
```
之所以我会说,`std::string` 应该装 ANSI 字符串,是因为所有标准库官方提供的函数,都会假定 `std::string` 类型是 ANSI 编码格式(GBK)。并不是说,你不能用 `std::string` 存其他编码格式的内容。
如果你就是想用 `std::string` 装 UTF-8 也可以,只不过你要注意在传入所有使用了文件路径的函数,如 `fopen`,`std::ifstream` 的构造函数前,需要做一个转换,转成 GBK 的 `std::string` 或 UTF-16 的 `std::wstring` 后,才能使用,很容易忘记。
而如果你始终用 `std::u8string` 装 UTF-8,那么当你把它输入一个接受 ANSI 的普通 `std::string` 参数时,就会发生类型不匹配错误,强迫你重新清醒,或是强迫你使用一个转换函数,稍后会介绍这个转换函数的写法。
例如当你使用 `std::cout << u8string` 时会报错,迫使你改为 `std::cout << u8toansi(u8string)` 才能编译通过,从而避免了把 UTF-8 的字符串打印到了只支持 GBK 的控制台上。
#detail[其中转换函数签名为 `std::string u8toansi(std::u8string s)`,很可惜,标准库并没有提供这个函数,直到 C++26 前,标准库对字符编码支持一直很差,你不得不自己实现或依赖第三方库。]
==== u8 字符串常量转换问题
TODO
== 选择你的阵营!
#image("pic/utfwar.png")
=== ANSI 阵营
把字符串当作纯粹的“字节流”,无视字符编码。或者说,你从系统输入进来的是什么编码,我就存储的什么编码。对于 Unicode 则采取完全摆烂的态度,完全无视 Unicode 的存在。
- 适用场景:通常与文字处理领域无关的软件会采取这种方案。
- 优点:方便,且内部对字符串无任何转换和判断,效率最高。
- 缺点:在调用 Windows 系统 API,读写带有中文的文件路径时,会饱受乱码和找不到文件的困扰。
- 方法:完全使用 `const char *` 和 `std::string`。
- 代表作:Linux 文件系统 ext4、Lua 编程语言、现代 Python 中的 `bytes` 类型、HTTP 的 `?` 参数、早期 FAT32 文件系统等。
这类软件是最常见的初学者写法,如果你从未想过字符编码问题,从不了解 `wchar_t`、`char32_t` 之间的战争,只知道 `char`,那么你已经自动在此阵营里。
#detail[有人说 Linux 文件系统是 UTF-8?并不是!Linux 文件系统根本不会检验你的文件名是不是合法的 UTF-8,只不过是因为你设定了 `export LC_ALL=zh_CN.UTF-8`,这会使所有程序(包括终端模拟器)假定文件名和文件内容都按 UTF-8 编码,从而调用操作系统各类 API 时(如 open、write)都会使用 UTF-8 编码的 `const char *` 输入,在 Linux 系统 API 看来,所谓“文件名”只是纯粹的字节流,只要保证不包含 `'/'` 和 `'\0'`,无论你是什么编码,他都不在乎。而所有的 locale 都兼容 ASCII,所以绝不会出现一个中文汉字编码后产生 `'/'` 的情况(例如 GB2312 会把一个中文编码成两个 0x80 到 0xFF 区间的字节,和 ASCII 的范围没有重叠,更不可能出现 `'/'`),即使换成 `export LC_ALL=zh_CN.GB2312`,Linux 文件系统一样能正常工作,只不过读取你之前以 UTF-8 写入的文件会变成乱码而已。]
对于中国区的 Windows 而言,他的所有 A 函数只支持 GBK 编码。这意味着,如果你在 Lua 中把字符串“当作” UTF-8 来用,那么在调用 Lua 的 io.open 前,需要先做一个 UTF-8 到 GBK 的转换,这还会导致丢失部分不在 GBK 内的字符:比如你的文件名包含 Emoji,就会变成 `???` 乱码。而使用 W 函数的 UTF-16 就不会,因为 UTF-16 能容纳完整的 Unicode 映射。而完全摆烂的 Lua,其 `io.open` 只是使用 C 语言库函数 `fopen`,`fopen` 又是基于 Windows 的 A 系列函数,Lua 又没有提供对 Windows C 运行时库特有的 `_wfopen` 函数的封装,从而永远不可能打开一个带有 Emoji 的文件。
*总结:要支持 ANSI 阵营,你什么都不需要做,char 满天飞摆烂。*
=== UTF-8 阵营
支持 Unicode,字符串统一以 UTF-8 形式存储、处理和传输。
- 应用场景:常见于文字处理需求不大,但有强烈的跨平台需求,特别是互联网方面的软件。他们通常只用到字符串的拼接、查找、切片通常也只是在固定的位置(例如文件分隔符 `'/'`)。也非常适合主要面对的是以 ASCII 为主的“代码”类文本,UTF-8 是对英文类文本压缩率最高的,所以也广泛用于编译器、数据库之类的场景。同时因为 UTF-8 完全兼容 ASCII,使得他能轻易适配远古的 C 语言程序和库。
- 方法:始终以 UTF-8 编码存储和处理字符串。
- 优点:跨平台,在网络传输时无需任何转码,UTF-8 是互联网的主流编码格式,不同平台上运行的 UTF-8 软件可以随意共享文本数据。兼容 ASCII,方便复用现有库和生态。对英文类文本压缩率高,对中文文本也不算太差。
- 缺点:对于底层 API 均采用 UTF-16 的 Windows 系统,需要进行字符编码转换,有少量性能损失。且字符串的正确切片、求长度等操作的复杂度会变成 $O(N)$ 而不是通常的 $O(1)$。
- 代表作:Rust 语言、Go 语言、CMake 构建系统、Julia 语言等。
在 C++ 中,可以通过 `u8"你好"` 创建一个保证内部是 UTF-8 编码的字符串常量,类型为 `char8_t []`。
如果用无前缀的 `"你好"` 创建,则 MSVC 默认会以编译者所在系统的“区域设置 (locale)” 作为字符串常量的编码格式(而不是运行者的区域设置!),开启 `/utf-8` 选项可以让 MSVC 编译器默认采用 UTF-8 编码,即让 `"你好"` 和 `u8"你好"` 一样采用 UTF-8。而 GCC 默认就是 UTF-8,除非手动指定 `-fexec-charset=GBK` 来切换到 GBK。稍后会详细讨论编译器的字符编码问题。
假设你通过 `/utf-8` 或 `-fexec-charset=utf-8` 搞定了编译期常量字符串的编码。接下来还有一个问题,文件系统。
Linux 文件系统内部,均使用 8 位类型 `char` 存储,将文件名当作平凡的字节流,不会做任何转换。因此你用 UTF-8 创建和打开的文件,其他使用 UTF-8 区域设置的软件都可以照常打开,不会有乱码问题。
#story[其实 Windows 上以 GBK 编码的压缩文件或文本文件,拷贝到 Linux 上打开出现乱码问题,就是因为 Linux 的区域设置默认都是 UTF-8 的。实际上如果把你的文件拷给一个美国的 Windows 用户,他也会看到乱码,因为美国大区的 Windows 区域设置默认是 CP1252,而中国大区的是 GBK,稍后我们会讲到解决方案。]
而 Windows 的 NTFS 文件系统,采用 16 位的 `wchar_t` 存储,Windows 的所有 API,也都是基于 `wchar_t` 的,Windows 内核内部也都用 `wchar_t` 储存文本字符串,只有二进制的字节流会用 `char` 存储。这类基于 `wchar_t` 的系统 API 都有一个 `W` 后缀,例如:
```cpp
MessageBoxW(NULL, L"你好", L"标题", MB_OK);
```
#detail[这个 `MessageBoxW` 函数,只接受 `const wchar_t *` 类型的字符串。`L"你好"` 是一个 `wchar_t []` 类型的字符串常量,它的内部编码类型固定是 UTF-16,不会随着“区域设置”而变。]
Windows 虽然也提供了 `A` 后缀的系列函数,功能和 `W` 一样,只不过接受的是 `const char *` 类型的字符串。问题在于,这些字符串都必须是“区域设置”里的那个编码格式,也就是 GBK 编码!而且无法修改。
当调用 `A` 系函数时,他们内部会把 GBK 编码转换为 UTF-16 编码,然后调用 Windows 内核。
这是一个糟糕的设计,而所有的 C/C++ 标准库都是基于 `A` 函数的!如果你用 `const char *` 字符串调用 C 标准库,相当于调用了 `A` 函数。而 `A` 函数只接受 GBK,但你却输入了 UTF-8!从而 UTF-8 中所有除 ASCII 以外的,各种中文字符、Emoji 都会变成乱码。
例如 `fopen` 函数,只有 `fopen(const char *path, const char *mode)` 这一个基于 `char` 的版本,里面也是直接调用的 `A` 函数,完全不给我选择的空间。虽然 Windows 也提供了 `_wfopen(const wchar_t *path, const wchar_t *mode)`,但那既不是 POSIX 标准的一部分,也不是 C 语言标准的一部分,使用这样的函数就意味着无法跨平台。
#fun[Windows 官方认为:`W` 函数才是真正的 API,`A` 函数只是应付不听话的宝宝。可你就没发现你自己的 C/C++ 标准库也全部在调用 A 函数么?]
总之,`A` 函数是残废的,我们只能用 `W` 函数,尽管 UTF-16 是历史债,但我们别无选择,`W` 函数是唯一能支持完整 Unicode 字符输入的方式。
```cpp
// 假设这段 C++ 代码使用 /utf-8 选项编译:
std::ifstream f("你好.txt"); // 找不到文件,即使“你好.txt”存在
std::ofstream f("你好.txt"); // 会创建一个乱码文件
```
正确的做法是采用 `std::filesystem::u8path` 这个函数做 UTF-8 到 UTF-16 的转换:
```cpp
// C++17,需要用 u8path 这个函数构造 path 对象:
std::ifstream f(std::filesystem::u8path("你好.txt"));
std::ofstream f(std::filesystem::u8path("你好.txt"));
// C++20 引入 char8_t,区分于普通 char,path 类也有了针对 const char8_t * 的构造函数重载:
std::ifstream f(std::filesystem::path(u8"你好.txt"));
std::ofstream f(std::filesystem::path(u8"你好.txt"));
```
#detail[`std::filesystem::path` 类的 `c_str()` 在 Windows 上返回 `const wchar_t *`,在 Linux 上返回 `const char *`。这很合理,因为 Windows 文件系统确实以 `wchar_t` 存储路径名,而 Linux 文件系统完全用 `char`。]
每次需要加 `std::filesystem::u8path` 也挺麻烦的,容易忘记,一忘记就无法访问中文目录。
#story[很多软件在 Windows 上无法支持中文路径名,就是因为他们习惯了 Linux 或 MacOS 的全 UTF-8 环境,对文件路径没有任何转换。而 Windows 底层全是 UTF-16,根本没有提供 UTF-8 的 API,UTF-8 只能转换成 UTF-16 才能避免中文乱码。个人认为,死活不肯接受明摆着已经是国际通用标准的 UTF-8,连让 A 函数在当前进程内切换编码的方法都不给一个,这个锅应该由 Windows 全责承担。]
好消息是,最近 MSVC 标准库提供了一种方案,在你的程序开头,加上 `setlocale(LC_ALL, ".utf8")` 就可以让 C 和 C++ 标准库进入 UTF-8 模式:不再调用 `A` 系函数操作文件,而是会把文件名从 UTF-8 转换成 UTF-16 后再调用真正稳定的 `W` 系函数。
```cpp
setlocale(LC_ALL, ".utf8"); // 只需要这一行
FILE *fp = fopen(u8"你好.txt", "r"); // 可以了
std::ifstream fin(u8"你好.txt"); // 可以了
```
#tip[`setlocale(LC_ALL, ".utf8");` 只是把 C 标准库的 `const char *` 参数变成了接受 UTF-8,并不会让系统的 `A` 函数也变成 UTF-8 哦,调用本地 API 时仍需 UTF-8 到 UTF-16 的转换。]
*总结:要支持 UTF-8 阵营,开启 `/utf-8`,程序开头写 `setlocale(LC_ALL, ".utf8")`。Linux 用户则什么都不用做。*
看看各大软件站在 UTF-8 阵营的理由:
CMake:作为跨平台的构建系统,为了让项目的 `CMakeLists.txt` 能跨平台共用而不必重写,他理所当然地站在了 UTF-8 阵营:所有 `CMakeLists.txt` 都必须以 UTF-8 格式书写,且统一使用正斜杠 `'/'` 路径分隔符。
CMake 会自动在 Windows 系统上,将 UTF-8 字符串转换成 UTF-16 后,调用 Windows 系统 API,在 Linux 系统上则不做转换。在 Windows 系统上还会自动把文件路径中的正斜杠 `'/'` 转换成 Windows 专属的反斜杠 `'\\'`,无需用户操心。
小彭老师自主研发的 Zeno 节点仿真软件:由于保存的项目工程文件需要在 Linux 和 Windows 平台上互通,不能采用 Windows 各自为政的 GBK 格式,且工程文件内容是以 ASCII 为主的“代码”类文本,所以我们也站在了 UTF-8 阵营中。
Rust 和 Go:严格区分“字符 (32 位)”和“字节 (8 位)”的概念。在字符串类型中存储字节,但可以选择以字节方式读取或以字符方式读取。
这相当于是把 UTF-8 当作了内码,但 UTF-8 是一种变长编码,处理切片和索引时不方便。
#table(
columns: 3,
inset: 3pt,
align: horizon,
[编程语言], [字符类型 (32 位)], [字节类型 (8 位)],
[Rust], [`char`], [`u8`],
[Go], [`rune`], [`byte`],
[Julia], [`Char`], [`UInt8`],
)
为此,这些语言都为字符串提供了两套 API,一种是按字符索引,一种是按字节索引。按字符索引时,会从头开始,逐个解析码位,直到解析到想要的字符为止,复杂度 $O(N)$。按字节索引时,直接跳到指定字节,无需解析,复杂度 $O(1)$。
```rust
let s = "你好";
// 按字符遍历
for c in s.chars() {
// c: char
println!("{}", c);
}
// 按字节遍历
for b in s.bytes() {
// b: u8
println!("{:02x}", b);
}
```
在 C++ 中,若要采用这种 UTF-8 方案,可以使用 `utfcpp` 这个库:
https://github.com/nemtrif/utfcpp
#tip[稍后我们会以案例详细演示这个库的用法,也会尝试自己手搓。]
方法1:使用 `utf8to32` 一次性完成转换,用完后再转回去。
```cpp
std::string s = "你好";
std::u32string u32 = utf8::utf8to32(s);
fmt::println("U+{:04X}", (uint32_t)u32[0]);
fmt::println("U+{:04X}", (uint32_t)u32[1]);
u32[1] = U'坏';
s = utf8::utf32to8(u32);
fmt::println("{}", s); // 你坏
```
方法2:`utfcpp` 也封装了一个 utf8::iterator 迭代器适配器,效果类似于 Rust 的 `.chars()`,可以字符而不是字节遍历字符串容器。
```cpp
char s[] = "你好";
utf8::unchecked::iterator<char *> bit(s);
utf8::unchecked::iterator<char *> eit(s + strlen(s));
for (auto it = bit; it != eit; ++it) {
// *it: char32_t
fmt::println("U+{:04X}", *it);
}
// 安全(带边界检测)的版本
char s[] = "你好";
utf8::iterator<char *> bit(s, s, s + strlen(s));
utf8::iterator<char *> eit(s + strlen(s), s, s + strlen(s));
for (auto it = bit; it != eit; ++it) {
// *it: char32_t
fmt::println("U+{:04X}", *it);
}
// 基于 std::string 的版本
std::string s = "你好";
utf8::iterator<std::string::iterator> bit(s.begin(), s.begin(), s.end());
utf8::iterator<std::string::iterator> eit(s.end(), s.begin(), s.end());
for (auto it = bit; it != eit; ++it) {
// *it: char32_t
fmt::println("U+{:04X}", *it);
}
```
由于迭代器接口复杂难懂,建议先封装成带有 `begin()` 和 `end()` 的 range 对象,方便使用 range-based for 循环语法直观遍历(下面用到的类模板参数推导则需要 C++17):
```cpp
template <class It>
struct Utf8Range {
utf8::iterator<It> bit;
utf8::iterator<It> eit;
template <class T>
Utf8Range(T &&t)
: bit(std::begin(t), std::begin(t), std::end(t))
, eit(std::end(t), std::begin(t), std::end(t)) {}
auto begin() const { return bit; }
auto end() const { return eit; }
};
template <class T>
Utf8Range(T &&t) -> Utf8Range<decltype(std::begin(t))>;
// 以下是新类的使用方法
std::string s = "你好";
for (char32_t c : Utf8Range(s)) {
fmt::println("U+{:04X}", c);
}
```
=== UTF-16 阵营
支持 Unicode 过早,误以为 0xFFFF 就是 Unicode 的上限。
一开始,人们错误地把 UTF-16 当成永远的定长编码,一劳永逸解决乱码问题,所以那段时期的软件都大举使用 UTF-16 作为内码。没想到后来 Unicode 又引入 0x10FFFF 范围的稀有字符,而现有的已经采用了 16 位内码的软件又已经无法根除,只好使用“代理对”机制,增量更新修复了现有的 16 位内码软件。UTF-16 既没有 UTF-8 兼容 ASCII 的好处,又没有 UTF-32 定长编码的好处,留下历史债。
#story[事实上,Unicode 已经无法继续扩容突破 0x10FFFF,就是因为双 `uint16_t` 编码的代理对最多只能容纳额外 0x100000 个字符的空间。本来 UTF-8 一开始的草案是打算最多支持 6 节列车,完全容纳高达 0x7FFFFFFF 范围的字符。为了让 Windows 还能继续用,Unicode 才被迫止步 0x10FFFF,UTF-8 也终结于 4 节列车。]
- 应用场景:通常认为,UTF-16 是纯粹的历史遗留糟粕,新软件不应该再使用 UTF-16。只有在和这些糟粕软件的 API 打交道时,才必须转换为 UTF-16。但也有人指出:UTF-16 是纯中文压缩率最高的编码格式,所以 UTF-16 还比较适合纯中文或以中文内容为主的文本数据压缩。
- 方法:始终以 UTF-16 编码存储和处理字符串。
- 优点:调用 Windows 系统 API 时无需任何转换,直接就能调用,最适合 Windows 本地开发,非跨平台。且对纯中文内容可比 UTF-8 额外节省 33% 空间。
- 缺点:对于 Windows 以外的系统就需要转换回 UTF-8,有少量性能开销。且如果存储的内容主要是纯英文,如 XML 代码等,内存占用会比 UTF-8 翻倍。而且 UTF-16 仍然是变长编码,虽然出现变长的概率较低,但不为 0,仍需要开发者做特殊处理。字符串的按码位反转会导致生僻字符出错,字符串以码点为单位的正确切片、求长度等操作的复杂度仍然是 $O(N)$ 而不是通常的 $O(1)$。并且 UTF-16 有大小端转换的问题。
- 代表作:Windows 系统 API、Java 语言、Windows 文件系统 (NTFS)、Qt、Word、JSON,他们都是 UTF-16 的受害者。
这相当于是把 UTF-16 当作了内码,但 UTF-16 依然是一种变长编码,对常见的中文处理没问题,生僻字就容易出问题,且因为出现概率低,很容易不发现,埋下隐患。
Java 就是受到了 UTF-16 历史债影响,`char` 是 16 位的码位,而不是完整的字符;真正的一个码点要用 32 位的 `int` 来表示(`codePointAt` 返回的就是 `int`)。
#table(
columns: 3,
inset: 3pt,
align: horizon,
[编程语言], [码点类型 (32 位)], [码位类型 (16 位)],
[Java], [`int`], [`char`],
)
而后续新出的 Kotlin 是 Java 的合法继承者,他果断放弃 UTF-16,加入了 UTF-32 阵营。可见,老软件坚持用 UTF-16 是因为他们积重难返,新软件再用 UTF-16 就是自作孽了!
*总结:不要支持 UTF-16 阵营,除非你被迫维护史山。*
#fun[例如小彭老师发微信朋友圈时,输入 Emoji 表情后剪切,再粘贴,就会发现一个 Emoji 被切断成了两个代理对,以乱码的形式显现。估计是因为微信基于 Java 编写,疼逊程序员对 UTF-16 代理对处理得不利索。]
Java 中以码点遍历一个字符串的写法:
```java
String s = "你好";
// 按码点遍历
for (int i = 0; i < s.length();) {
int c = s.codePointAt(i);
System.out.println(String.format("U+%04X", c));
i += Character.charCount(c);
}
// 按码位遍历
for (char c : s.toCharArray()) {
System.out.println(String.format("U+%04X", (int) c));
}
```
由于 JSON 是和 Java 一块发明的,对于超出 0xFFFF 范围的字符,它的 `\u` 转义也是基于 UTF-16 编码的。即同一个字会变成一个代理对(两个 `\u` 码位),以保证 JSON 文件总是 ASCII 格式,避免 Windows 的 GBK 编码乱做额外的干扰。
```json
// 以下两种写法等价
{"name": "𰻞"}
{"name": "\ud883\udede"}
```
在刚刚介绍的 C++ 库 `utfcpp` 中,UTF-16 相关的转换函数都以 UTF-8 为中转,如 `utf8::utf16to8` 和 `utf8::utf8to16`,组合一下就能完成 UTF-16 和 UTF-32 的互转:

```cpp
std::u16string s = u"你好";
std::u32string u32 = utf8::utf8to32(utf8::utf16to8(s));
fmt::println("U+{:04X}", (uint32_t)u32[0]);
fmt::println("U+{:04X}", (uint32_t)u32[1]);
u32[1] = U'𰻞';
s = utf8::utf8to16(utf8::utf32to8(u32));
fmt::println("{}", utf8::utf16to8(s)); // 你𰻞
fmt::println("{}", u32.size()); // 2
fmt::println("{}", s.size()); // 3
```
=== UTF-32 阵营
支持 Unicode,每个码点都用一个 `uint32_t` 或 `char32_t` 表示。
- 应用场景:适合需要经常处理文字的领域,如文本编辑器、浏览器等。但不适合存储和传输,因为浪费硬盘和网络带宽。字符串一般都长期以 UTF-8 存储,只有在需要频繁索引码点时,才需要转换为 UTF-32。
- 方法:始终以 UTF-32 编码存储和处理字符串。
- 优点:字符串的按码点索引、求长度等操作都是 $O(1)$ 的复杂度,反转、切片也可以当作普通数组一样,随意处理。例如你可以设想一个文本编辑框,需要支持“退格”操作,如果是 UTF-8 和 UTF-16 就需要繁琐的判断代理对、各种车厢,而 UTF-32 的字符串只需要一次 `pop_back` 就搞定了。
- 缺点:浪费空间大,通常在保存时,仍然需要转换回 UTF-8 后再写入文件,有一定性能开销。
*总结:要支持 UTF-32 阵营,请全部使用 `char32_t` 和 `std::u32string`。字面量全用 `U"你好"` 的形式书写,读文件时转为 UTF-32,写文件时转回 UTF-8。*
=== 善用第三方库
由于 C++26 前标准库对编码转换几乎没有支持,在 C++ 中转换编码格式,通常都需要第三方库。
=== 不同 UTF 之间互转:`utfcpp`
如果你只是需要不同 UTF 格式之间的转换,没有处理 GBK 等的需求:那么之前已经介绍了 `utfcpp` 这个方便的库,已经够用。
```cpp
// 快速回顾(示意):utfcpp 在不同 UTF 之间互转的几个常用函数
std::string s8 = "\xE4\xBD\xA0\xE5\xA5\xBD"; // "你好" 的 UTF-8 字节
std::u16string s16 = utf8::utf8to16(s8); // UTF-8 → UTF-16
std::u32string s32 = utf8::utf8to32(s8); // UTF-8 → UTF-32
std::string back = utf8::utf32to8(s32);  // UTF-32 → UTF-8
```
缺点是他不能处理 GBK、Shift-JIS 等非 Unicode 编码,也不能自动检测当前的 ANSI 区域设置。
=== 跨平台的任意编码转换:`boost::locale`
如果你还要支持其他编码格式,比如 GBK、Shift-JIS、Latin-1。
一种是 C 语言的 `iconv`,另一种是现代 C++ 的 `boost::locale`。
虽然功能差不多,底层都是调用 `icu` 的。`boost::locale` 的 API 更加友好,而且是现代 C++ 风格的。
```bash
# Ubuntu 用户安装 Boost.locale 方法:
$ sudo apt-get install libboost-locale-dev
# Arch Linux 用户安装 Boost 全家桶方法:
$ sudo pacman -S boost
```
#fun[不喜欢 Boost 的人有难了。]
==== UTF 之间互转
使用 `boost::locale::conv::utf_to_utf` 就能轻易做到。
```cpp
#include <boost/locale.hpp>
#include <iostream>
using boost::locale::conv::utf_to_utf;
int main() {
std::string s8 = u8"你好";
// UTF-8 转 UTF-32:
std::u32string s32 = utf_to_utf<char32_t>(s8);
// UTF-8 转 UTF-16:
std::u16string s16 = utf_to_utf<char16_t>(s8);
// UTF-32 转 UTF-8:
s8 = utf_to_utf<char>(s32);
std::cout << s8 << '\n';
return 0;
}
```
模板参数中,只需指定转换到的是什么类型就行,来自什么类型,他自己会重载的。
比如转到 `char16_t`,无论来源是 `char32_t` 还是 `char`,都只需要 `utf_to_utf<char16_t>` 就可以,非常方便。
#warn[`boost::locale` 有一个缺点 TODO]
编译:
```bash
$ g++ -std=c++17 -lboost_locale main.cpp
```
输出:
```
你好
```
建议用同样跨平台的 CMake 链接 Boost,否则 Windows 用户要有难了……
```cmake
find_package(Boost REQUIRED COMPONENTS locale)
target_link_libraries(你的程序 Boost::locale)
```
==== GBK 和 UTF 互转
使用 `boost::locale::conv::to/from_utf` 就能轻易做到。
```cpp
#include <boost/locale.hpp>
#include <iostream>
using boost::locale::conv::to_utf;
using boost::locale::conv::from_utf;
int main() {
std::string s = "你好"; // 注意:这里假设源文件以 GBK 编码(未开启 /utf-8)
// 从 GBK 转到 UTF-16
std::wstring ws = to_utf<wchar_t>(s, "GBK");
std::wcout << ws << '\n';
// 从 UTF-16 转回 GBK
s = from_utf(ws, "GBK");
std::cout << s << '\n';
return 0;
}
```
第二个参数可以是 `GBK`、`Shift-JIS`、`Latin1` 等其他编码格式,完整的列表可以在看到。
这里 `to_utf<wchar_t>` 会自动判断 `wchar_t` 的大小。如果是 2 字节(Windows 平台情况)会认为你要转为 UTF-16,如果是 4 字节(Linux 平台情况),会认为你要转为 UTF-32。
而 `to_utf<char16_t>` 则是无论什么平台,都会转为 UTF-16。
`from_utf` 不需要指定任何模板参数,因为他总是返回 `std::string`(ANSI 或 GBK 编码的字符串),参数是什么编码,会自动通过重载判断,例如 `from_utf(ws, "GBK")` 这里的参数是 `wchar_t`,那么在 Windows 上,他会检测到 `wchar_t` 是 2 字节,就认为是 UTF-16 到 GBK 的转换。
==== UTF 和 ANSI 互转
我们程序的用户不一定是中国用户(GBK),也可能是俄罗斯用户(CP1251)、日本用户(Shift-JIS)、西班牙用户(CP1252)等。
如果要采用用户的区域设置,即“ANSI”,可以把字符串留空(`""`)。
空字符串就表示当前系统区域设置了,在中国大区等价于 `"GBK"`,俄罗斯大区等价于 `"CP1251"` 等。
```cpp
#include <boost/locale.hpp>
#include <iostream>
using boost::locale::conv::from_utf;
using boost::locale::conv::to_utf;
int main() {
std::string u8s = u8"你好";
// UTF-8 转 ANSI
std::string s = from_utf(u8s, "");
// ANSI 转 UTF-8
u8s = to_utf<char>(s, "");
return 0;
}
```
==== 大总结
#table(
columns: 3,
inset: 3pt,
align: horizon,
[函数名称], [从], [到],
[`utf_to_utf<char>`], [UTF-x], [UTF-8],
[`utf_to_utf<char8_t>`], [UTF-x], [UTF-8],
[`utf_to_utf<char16_t>`], [UTF-x], [UTF-16],
[`utf_to_utf<char32_t>`], [UTF-x], [UTF-32],
[`utf_to_utf<wchar_t>`], [UTF-x], [Linux 上 UTF-32 \ Win 上 UTF-16],
)
#tip[UTF-x 表示取决于参数类型的大小,如果参数是 `char16_t` 的字符串 `std::u16string`,那 x 就是 16。]
#table(
columns: 3,
inset: 3pt,
align: horizon,
[函数名称], [从], [到],
[`to_utf<char>(string, "GBK")`], [GBK], [UTF-8],
[`to_utf<char8_t>(string, "GBK")`], [GBK], [UTF-8],
[`to_utf<char16_t>(string, "GBK")`], [GBK], [UTF-16],
[`to_utf<char32_t>(string, "GBK")`], [GBK], [UTF-32],
[`to_utf<wchar_t>(string, "GBK")`], [GBK], [Linux 上 UTF-32 \ Win 上 UTF-16],
[`to_utf<char>(string, "")`], [区域设置], [UTF-8],
[`to_utf<char8_t>(string, "")`], [区域设置], [UTF-8],
[`to_utf<char16_t>(string, "")`], [区域设置], [UTF-16],
[`to_utf<char32_t>(string, "")`], [区域设置], [UTF-32],
[`to_utf<wchar_t>(string, "")`], [区域设置], [Linux 上 UTF-32 \ Win 上 UTF-16],
)
#table(
columns: 3,
inset: 3pt,
align: horizon,
[函数名称], [从], [到],
[`from_utf(string, "GBK")`], [UTF-8], [GBK],
[`from_utf(u16string, "GBK")`], [UTF-16], [GBK],
[`from_utf(u32string, "GBK")`], [UTF-32], [GBK],
[`from_utf(wstring, "GBK")`], [Linux 上 UTF-32 \ Win 上 UTF-16], [GBK],
[`from_utf(string, "")`], [UTF-8], [区域设置],
[`from_utf(u16string, "")`], [UTF-16], [区域设置],
[`from_utf(u32string, "")`], [UTF-32], [区域设置],
[`from_utf(wstring, "")`], [Linux 上 UTF-32 \ Win 上 UTF-16], [区域设置],
)
==== 指定处理错误的方法
如果遇到无法编码的字符,该如何处置?
默认情况下 Boost 会忽视错误,编码失败的字符会被丢弃。
```cpp
#include <boost/locale.hpp>
#include <iostream>
using boost::locale::conv::from_utf;
int main() {
std::string utf8 = u8"我爱𰻞𰻞面";
// UTF-8 转 GBK
std::string gbk = from_utf(utf8, "GBK");
// 错误,“𰻞”无法用 GBK 表示!
std::cout << gbk << '\n';
// 在 Windows 的 GBK 终端上,只显示“我爱面”
return 0;
}
```
可以用 `method_type` 这个枚举来指定错误处理的方式。
默认是 `skip`,跳过所有解码出错的地方(导致“𰻞”丢失)。
我们可以切换到 `stop`,当遇到解码错误时,会直接抛出异常,终止程序执行。
```cpp
#include <boost/locale.hpp>
#include <iostream>
using boost::locale::conv::from_utf;
using boost::locale::conv::method_type;
int main() {
std::string utf8 = u8"我爱𰻞𰻞面";
// UTF-8 转 GBK
std::string gbk = from_utf(utf8, "GBK",
method_type::stop);
// 错误,“𰻞”无法用 GBK 表示!
// from_utf 会抛出 `conversion_error` 异常
std::cout << gbk << '\n';
return 0;
}
```
举例:尝试以 GBK 保存,如果失败,则改为带有 BOM 的 UTF-8。
```cpp
#include <boost/locale.hpp>
#include <fstream>
using boost::locale::conv::from_utf;
using boost::locale::conv::utf_to_utf;
using boost::locale::conv::method_type;
using boost::locale::conv::conversion_error;
void try_save(std::u32string content, std::wstring path) {
std::string binary;
try {
// 尝试将 UTF-32 转成 GBK 编码
binary = from_utf(content, "GBK",
method_type::stop);
} catch (conversion_error const &e) { // 若 GBK 无法表示
// 改用前面带有 BOM 的 UTF-8 编码
binary = "\xEF\xBB\xBF" + utf_to_utf<char>(content);
}
std::ofstream(path) << binary;
}
```
举例:支持 UTF-8 字符串(而不是 ANSI 字符串)的打印函数。
```cpp
#include <boost/locale.hpp>
#include <iostream>
using boost::locale::conv::from_utf;
using boost::locale::conv::utf_to_utf;
void u8print(std::string msg) {
std::cout << from_utf(msg, "");
// 或者:
// std::wcout << utf_to_utf<wchar_t>(msg);
}
```
#detail[更多细节详见官方文档:https://www.boost.org/doc/libs/1_81_0/libs/locale/doc/html/group__codepage.html]
==== 更多功能?!
编码转换只是 `boost::locale::conv` 这个子模块下的一个小功能而已!`boost::locale` 还提供了更多功能,如按照地域语言规范格式化数字、货币、日期、时间等,下一小节中我们继续介绍。完全是 `std::locale` 的上位替代。
#fun[Boost 哪里都好,你想要的功能应有尽有。而且不需要 C++20,很低版本的 C++ 也能用。唯一缺点可能就是太肥了,编译慢。]
=== Windows 用户:MultiByteToWideChar
如果你是 Windows 程序员,没有跨平台需求,不想用 Boost,且需要在 Windows 系统区域设置规定的 ANSI(在中国区是 GBK)编码和 UTF-16 之间转换:
可以用 Windows 官方提供的 `MultiByteToWideChar` 和 `WideCharToMultiByte` 函数。
这两个函数因为 C 语言特色的缘故,参数比较多而杂,建议自己动手封装成更易用的 C++ 函数:
```cpp
std::wstring ansi_to_wstring(const std::string &s) {
// ACP = ANSI Code Page,指定 s 里的是当前区域设置指定的编码(在中国区,ANSI 就是 GBK 了)
int len = MultiByteToWideChar(CP_ACP, 0,
s.c_str(), s.size(),
nullptr, 0);
std::wstring ws(len, 0);
MultiByteToWideChar(CP_ACP, 0,
s.c_str(), s.size(),
ws.data(), ws.size());
return ws;
}
std::string wstring_to_ansi(const std::wstring &ws) {
int len = WideCharToMultiByte(CP_ACP, 0,
ws.c_str(), ws.size(),
nullptr, 0,
nullptr, nullptr);
std::string s(len, 0);
WideCharToMultiByte(CP_ACP, 0,
ws.c_str(), ws.size(),
s.data(), s.size(),
nullptr, nullptr);
return s;
}
std::wstring utf8_to_wstring(const std::string &s) {
int len = MultiByteToWideChar(CP_UTF8, 0,
s.c_str(), s.size(),
nullptr, 0);
std::wstring ws(len, 0);
MultiByteToWideChar(CP_UTF8, 0,
s.c_str(), s.size(),
ws.data(), ws.size());
return ws;
}
std::string wstring_to_utf8(const std::wstring &ws) {
int len = WideCharToMultiByte(CP_UTF8, 0,
ws.c_str(), ws.size(),
nullptr, 0,
nullptr, nullptr);
std::string s(len, 0);
WideCharToMultiByte(CP_UTF8, 0,
ws.c_str(), ws.size(),
s.data(), s.size(),
nullptr, nullptr);
return s;
}
```
#detail[C 语言特色:所有要返回字符串的函数,都需要调用两遍,第一波先求出长度,第二波才写入。这是为了避免与内存分配器耦合,所有的 C 风格 API 都是这样。]
=== Linux 用户:`iconv`
如果你是 Linux 用户,且没有跨平台需求,不想用 Boost,可以使用 C 语言的 `iconv` 库。
#tip[`iconv` 也有 Windows 的版本,但安装比较困难。如果你连 `iconv` 都搞得定,没理由 Boost 搞不定。]
```cpp
#include <iconv.h>
#include <string>
std::string convert(std::string const &s,
char const *from, char const *to) {
iconv_t cd = iconv_open(to, from);
if (cd == (iconv_t)-1) {
throw std::runtime_error("iconv_open failed");
}
// iconv 的接口是 char **,不接受 const char **,需要 const_cast
char *in = const_cast<char *>(s.data());
size_t inbytesleft = s.size();
size_t outbytesleft = inbytesleft * 4; // 粗略估计:转换后最多膨胀 4 倍
std::string buffer(outbytesleft, 0);
char *out = buffer.data();
iconv(cd, &in, &inbytesleft, &out, &outbytesleft);
iconv_close(cd);
buffer.resize(buffer.size() - outbytesleft);
return buffer;
}
// 举例:UTF-8 转 GBK
std::string utf8_to_gbk(std::string const &s) {
return convert(s, "UTF-8", "GBK");
}
// 举例:GBK 转 UTF-8
std::string gbk_to_utf8(std::string const &s) {
return convert(s, "GBK", "UTF-8");
}
```
=== `iconv` 命令行工具
`iconv` 不仅是一个库,也是一个命令行工具(大多 Linux 发行版都自带了)。用法如下:
```bash
iconv -f 来自什么编码 -t 到什么编码 (输入文件名...) > 输出文件名
```
如不指定输入文件名,默认从终端输入流读取。
如不使用 `> 输出文件名` 重定向输出,则默认输出到终端。
可以用 `echo` 配合管道来创建输入流:
```bash
$ echo 我爱小彭老师 | iconv -f UTF-8 -t GBK
�Ұ�С����ʦ
```
#tip[此处显示乱码是因为我的终端是 UTF-8 格式,无法正确解析 iconv 输出的 GBK 格式数据。]
把“我爱小彭老师”转换为 GBK 格式写入 `gbk.txt`,然后再重新还原回 UTF-8 格式查看:
```bash
$ echo 我爱小彭老师 | iconv -f UTF-8 -t GBK > gbk.txt
$ cat gbk.txt
�Ұ�С����ʦ
$ iconv -f GBK -t UTF-8 gbk.txt
我爱小彭老师
```
#fun[Windows 可能也有类似的工具,比如 `iconv.exe`,但我没找到。]
=== Latin-1 神教
Latin-1 是一个 8 位编码,能表示 256 个字符,包括了拉丁字母、阿拉伯数字、标点符号、常用的西欧字符,以及一些特殊字符。
#image("pic/latin1.svg")
因此,如果你需要把一个 Latin-1 编码的 `char` 字符串转换为 `wchar_t` 字符串,无需查表,逐字符拓宽即可:Latin-1 的每个字节值正好就是对应的 Unicode 码点。注意不能把 `char *` 直接 `reinterpret_cast` 成 `wchar_t *`,那只是重新解释内存字节,并不会做任何拓宽;还要小心 `char` 可能是有符号类型,需要先转成 `unsigned char` 再拓宽,避免符号扩展出错。

```cpp
std::string latin1 = "I love P\xE9ng"; // é 在 Latin-1 中是 0xE9
std::wstring wstr;
for (unsigned char c : latin1)
    wstr.push_back(static_cast<wchar_t>(c)); // 码位即码点,逐个拓宽
std::wcout << wstr << '\n';
```
== 本地化
本地化是指根据用户的语言、地区等环境,显示不同的界面。比如说,同样是文件菜单,中文用户看到的是“文件”、英文用户看到的是“File”。
=== 区分字符类型
C 语言提供了 `<ctype.h>` 头文件,里面封装了大量形如 `isspace`、`isdigit` 这样的判断字符分类的函数。
```c
#include <ctype.h>
```
C++ 对其实施了再封装,改名为 `<cctype>`。若你导入的是该头文件,那么这些函数可以带有 `std` 名字空间前缀的方式 `std::isspace`,`std::isdigit` 访问了,看起来更加专业(确信)。
```cpp
#include <cctype>
```
函数清单:
#table(
columns: 2,
inset: 3pt,
align: horizon,
[函数名称], [判断的字符类型],
[isascii], [0 到 0x7F 的所有 ASCII 字符],
[isalpha], [大小写字母 A-Z a-z],
[isupper], [大写字母 A-Z],
[islower], [小写字母 a-z],
[isdigit], [数字 0-9],
[isxdigit], [十六进制数字 A-F a-f 0-9],
[isprint], [可打印字符,包括字母、数字和标点等],
[isgraph], [可打印字符,不包括空格],
[iscntrl], [控制字符,除可打印字符外的全部],
[isspace], [空白字符,如空格、换行、回车、制表符等],
[ispunct], [标点符号],
[isalnum], [字母或数字],
)
更详细的表格可以看:
https://en.cppreference.com/w/cpp/string/byte/isspace
#image("pic/cctype.png")
=== 区域设置与 `std::locale`
=== 字符串编码转换 `<codecvt>`
=== 时间日期格式化
=== 正则表达式匹配汉字?
- 狭义的汉字:0x4E00 到 0x9FA5(“一”到“龥”)
- 广义的汉字:0x2E80 到 0x9FFF(“⺀”到“鿿”)
广义的汉字包含了几乎所有中日韩使用的汉字字符,而狭义的汉字只是中文里最常用的一部分。
=== 根据编号输入 Unicode 字符
== 宽字符流
之所以把宽字符流放到最后,是因为:首先,`iostream` 本来就是一个失败的设计。
#fun[小彭老师在本书开头就多次强调过他是 `format` 孝子。]
而宽字符 `wchar_t` 本身就充斥着历史遗留糟粕(例如 Windows 被 UTF-16 背刺)。
现在 `iostream` 与 `wchar_t` 一起出现在我面前,不能说是梦幻联动吧,至少也可以说是答辩超人了。
总之,我个人还是推荐程序内部以 UTF-8(`char8_t`)或 UTF-32(`char32_t`)的字符串来处理万物。
#tip[UTF-8 或 UTF-32 的选择取决于你的中文处理需求是否旺盛,是否在乎空间,是否需要切片和索引等。]
当需要调用操作系统 API 读写文件时,再用 `boost::locale`、`utfcpp` 等工具转换成 ANSI(`char`)或 UTF-16(`wchar_t`)。
对于 Linux 用户,也可以检测如果是 Linux 系统,则什么转换都不做,因为 Linux 用户几乎都是 UTF-8,那么 `const char8_t *` 可以强转为 `const char *` 而不用任何额外开销。
```cpp
std::string to_os_string(std::string const &u8s) {
#if _WIN32
// UTF-8 到 ANSI
return boost::locale::conv::from_utf(u8s, "");
#elif __linux__
// 不转换
return u8s;
#else
#error "Unsupported system."
#endif
}
```
总之,如果你实在要学糟糕的宽字符流,那我也奉陪到底。
=== `wchar_t` 系列函数
=== `std::wcout` 的使用
=== `std::wfstream` 的使用
//=== 跨平台软件何去何从?
//
//理论上,跨平台软件都应该采用 `char{n}_t` 系列字符类型。
//
//然而,所有的操作系统 API,甚至标准库,都是基于 `char` 和 `wchar_t` 来构建的。例如标准库有 `std::cout` 和 `std::wcout`,却并没有 `std::u8cout` 和 `std::u32cout`。使用这些所谓的跨平台字符类型,相当于每次调用标准库和系统 API 时,都需要做一次编码转换(转换方法我们稍后介绍)。
//
//刚刚说了,任何文字处理软件都需要内码和外码两套。外码 (UTF-8) 是不能直接用于文字处理的,会出现码点截断问题,读到内存中后必然要转成定长的内码 (UTF-32) 再处理。
//
//为应对这种情况,有多种流派,以用他们采用的内码来命名。
//
//==== Unicode 派
//
//- `char` 作外码,ANSI
//- `wchar_t` 作内码,Unicode
//
//这似乎是 C++ 官方推荐的流派。
//
//典型案例:GCC、
//
//缺点是这样的软件会无法跨平台,因为 `wchar_t` 在 Linux 上是安全的内码 UTF-32。而 Windows 上是 UTF-16,是不定长的编码,如果存在“𰻞”和“😉”这样超过 0x10000 的生僻字,就会产生两个 `wchar_t`!如果文字处理涉及切片,就会出问题。概率很低,但不为零,软件仍然需要对可能存在的双 `wchar_t` 做特殊处理。若不处理,轻则乱码,重则留下漏洞,被黑客攻击,加重了 Windows 和 Java 程序员的心智负担。
//
//如果一个程序(例如 GCC)只适配了 `wchar_t` 是 UTF-32 的平台,想当然的把 `wchar_t` 当作安全的定长内码使用,那移植到 Windows 上后就会丧失处理“𰻞”和“😉”的能力。要么就需要对所有代码大改,把原本 $O(1)$ 的字符串求长度改成 $O(N)$ 的;要么出现乱码,被黑客攻击。
//
//当需要读写二进制文件时,使用 `fstream`,原封不动地按“字节”为单位读取。
//
//当需要读写文本文件时,使用 `wfstream`,`w` 系的流会自动把文本文件中的 ANSI 转换成 Unicode,存入 `wstring` 字符串。
//
//但是,程序启动前,必须加上这一行:
//
//C 和 C++ 标准库才能会读取 Linux 的环境变量,或 Windows 的“区域设置”,将其设为默认的 char 编码格式。
//
//```cpp
//int main() {
//setlocale(LC_ALL, "");
//std::wcout << L"你好,世界\n";
//}
//```
//
//上述代码会将 “你好,世界”
//
//==== ANSI 派
//
//- `char` 作外码,ANSI
//- `char` 作内码,ANSI
//
//==== TCHAR 派
//
//==== UTF-8 派
//
//=== 跨平台字符类型
//
//`char8_t` 是无符号 8 位整数类型,可用范围是 0 到 255。
//- `char8_t` 字符的编码格式固定是 UTF-8。
//- 相应的字符串类型是 `std::u8string`。
//
//`char16_t` 是无符号 16 位整数类型,可用范围是 0 到 65535。
//- `char16_t` 字符的编码格式固定是 UTF-16。
//- 相应的字符串类型是 `std::u16string`。
//
//`char32_t` 是无符号 32 位整数类型,可用范围是 0 到 1114111。
//- `char32_t` 字符的编码格式固定是 UTF-32。
//- 相应的字符串类型是 `std::u32string`。
//
//理论上,现代程序应该都采用 `char8_t` 和 `char32_t`,他们是跨平台的。
//
//=== 不跨平台字符类型
//
//`char` 字符的编码格式随 locale 而变,并不固定。
//- 如果你的环境变量 `LC_ALL` 设为 `zh_CN.UTF-8`,那他就是 UTF-8。如果你的 `LC_ALL` 设为 `zh_CN.GBK`,那他里面就是 GBK。
//
//`wchar_t` 是无符号 32 位整数类型,可用范围是 0 到 1114111。
//- `wchar_t` 字符的编码格式在 Linux 系统上固定是 UTF-32。
//
//虽然都保证是 Unicode,但不同操作系统影响,是系统 ABI 的一部分,非常麻烦,不跨平台。
//
//C 语言提供了大量针对 `char` 的字符串函数,`const char *` 成了事实上的字符串标准。
//
//=== 变长编码带来的问题
//
//如果把 UTF-8 编码的火车序列直接当普通数组来处理文字,会出现哪些问题?
//
//例如,当我们写下:
//
//```cpp
//std::string s = "我爱𰻞𰻞面!";
//```
//
//这段代码,实际上会被编译器解释为:
//
//```cpp
//std::string s = {
//0xE6, 0x88, 0x91, // 我
//0xE7, 0x88, 0xB1, // 爱
//0xF0, 0xB0, 0xAF, 0x9B, // 𰻞
//0xF0, 0xB0, 0xAF, 0x9B, // 𰻞
//0xE9, 0x9D, 0xA2, // 面
//0x21, // !
//};
//```
#lorem(100)
#lorem(100)
== Subsection
#lorem(100)
=== Subsubsection
#lorem(100)
==== Paragraph
#lorem(100)
===== Subparagraph
#lorem(100)
Crazy New Ideas
May 2021

There's one kind of opinion I'd be very afraid to express publicly.
If someone I knew to be both a domain expert and a reasonable person
proposed an idea that sounded preposterous, I'd be very reluctant
to say "That will never work."

Anyone who has studied the history of ideas, and especially the
history of science, knows that's how big things start. Someone
proposes an idea that sounds crazy, most people dismiss it, then
it gradually takes over the world.

Most implausible-sounding ideas are in fact bad and could be safely
dismissed. But not when they're proposed by reasonable domain
experts. If the person proposing the idea is reasonable, then they
know how implausible it sounds. And yet they're proposing it anyway.
That suggests they know something you don't. And if they have deep
domain expertise, that's probably the source of it. [1]

Such ideas are not merely unsafe to dismiss, but disproportionately
likely to be interesting. When the average person proposes an
implausible-sounding idea, its implausibility is evidence of their
incompetence. But when a reasonable domain expert does it, the
situation is reversed. There's something like an efficient market
here: on average the ideas that seem craziest will, if correct,
have the biggest effect. So if you can eliminate the theory that
the person proposing an implausible-sounding idea is incompetent,
its implausibility switches from evidence that it's boring to
evidence that it's exciting. [2]

Such ideas are not guaranteed to work. But they don't have to be.
They just have to be sufficiently good bets — to have sufficiently
high expected value. And I think on average they do. I think if you
bet on the entire set of implausible-sounding ideas proposed by
reasonable domain experts, you'd end up net ahead.

The reason is that everyone is too conservative. The word "paradigm"
is overused, but this is a case where it's warranted. Everyone is
too much in the grip of the current paradigm. Even the people who
have the new ideas undervalue them initially. Which means that
before they reach the stage of proposing them publicly, they've
already subjected them to an excessively strict filter. [3]

The wise response to such an idea is not to make statements, but
to ask questions, because there's a real mystery here. Why has this
smart and reasonable person proposed an idea that seems so wrong?
Are they mistaken, or are you? One of you has to be. If you're the
one who's mistaken, that would be good to know, because it means
there's a hole in your model of the world. But even if they're
mistaken, it should be interesting to learn why. A trap that an
expert falls into is one you have to worry about too.

This all seems pretty obvious. And yet there are clearly a lot of
people who don't share my fear of dismissing new ideas. Why do they
do it? Why risk looking like a jerk now and a fool later, instead
of just reserving judgement?

One reason they do it is envy. If you propose a radical new idea
and it succeeds, your reputation (and perhaps also your wealth)
will increase proportionally. Some people would be envious if that
happened, and this potential envy propagates back into a conviction
that you must be wrong.

Another reason people dismiss new ideas is that it's an easy way
to seem sophisticated. When a new idea first emerges, it usually
seems pretty feeble. It's a mere hatchling. Received wisdom is a
full-grown eagle by comparison. So it's easy to launch a devastating
attack on a new idea, and anyone who does will seem clever to those
who don't understand this asymmetry.

This phenomenon is exacerbated by the difference between how those
working on new ideas and those attacking them are rewarded. The
rewards for working on new ideas are weighted by the value of the
outcome. So it's worth working on something that only has a 10%
chance of succeeding if it would make things more than 10x better.
Whereas the rewards for attacking new ideas are roughly constant;
such attacks seem roughly equally clever regardless of the target.

People will also attack new ideas when they have a vested interest
in the old ones. It's not surprising, for example, that some of
Darwin's harshest critics were churchmen. People build whole careers
on some ideas. When someone claims they're false or obsolete, they
feel threatened.

The lowest form of dismissal is mere factionalism: to automatically
dismiss any idea associated with the opposing faction. The lowest
form of all is to dismiss an idea because of who proposed it.

But the main thing that leads reasonable people to dismiss new ideas
is the same thing that holds people back from proposing them: the
sheer pervasiveness of the current paradigm. It doesn't just affect
the way we think; it is the Lego blocks we build thoughts out of.
Popping out of the current paradigm is something only a few people
can do. And even they usually have to suppress their intuitions at
first, like a pilot flying through cloud who has to trust his
instruments over his sense of balance. [4]

Paradigms don't just define our present thinking. They also vacuum
up the trail of crumbs that led to them, making our standards for
new ideas impossibly high. The current paradigm seems so perfect
to us, its offspring, that we imagine it must have been accepted
completely as soon as it was discovered — that whatever the church thought
of the heliocentric model, astronomers must have been convinced as
soon as Copernicus proposed it. Far, in fact, from it. Copernicus
published the heliocentric model in 1532, but it wasn't till the
mid seventeenth century that the balance of scientific opinion
shifted in its favor. [5]

Few understand how feeble new ideas look when they first appear.
So if you want to have new ideas yourself, one of the most valuable
things you can do is to learn what they look like when they're born.
Read about how new ideas happened, and try to get yourself into the
heads of people at the time. How did things look to them, when the
new idea was only half-finished, and even the person who had it was
only half-convinced it was right?

But you don't have to stop at history. You can observe big new ideas
being born all around you right now. Just look for a reasonable
domain expert proposing something that sounds wrong.

If you're nice, as well as wise, you won't merely resist attacking
such people, but encourage them. Having new ideas is a lonely
business. Only those who've tried it know how lonely. These people
need your help. And if you help them, you'll probably learn something
in the process.

Notes

[1]
This domain expertise could be in another field. Indeed,
such crossovers tend to be particularly promising.

[2]
I'm not claiming this principle extends much beyond math,
engineering, and the hard sciences. In politics, for example,
crazy-sounding ideas generally are as bad as they sound. Though
arguably this is not an exception, because the people who propose
them are not in fact domain experts; politicians are domain experts
in political tactics, like how to get elected and how to get
legislation passed, but not in the world that policy acts upon.
Perhaps no one could be.

[3]
This sense of "paradigm" was defined by <NAME> in his
Structure of Scientific Revolutions, but I also recommend his
Copernican Revolution, where you can see him at work developing the
idea.

[4]
This is one reason people with a touch of Asperger's may have
an advantage in discovering new ideas. They're always flying on
instruments.

[5]
Hall, Rupert. From Galileo to Newton. Collins, 1963. This
book is particularly good at getting into contemporaries' heads.

Thanks to <NAME>, <NAME>, <NAME>, Daniel
Gackle, <NAME>, and <NAME> for reading drafts of this.
|
--- hyphenate ---
// Test hyphenating english and greek.
#set text(hyphenate: true)
#set page(width: auto)
#grid(
columns: (50pt, 50pt),
[Warm welcomes to Typst.],
text(lang: "el")[διαμερίσματα. \ λατρευτός],
)
--- hyphenate-off-temporarily ---
// Test disabling hyphenation for short passages.
#set page(width: 110pt)
#set text(hyphenate: true)
Welcome to wonderful experiences. \
Welcome to `wonderful` experiences. \
Welcome to #text(hyphenate: false)[wonderful] experiences. \
Welcome to wonde#text(hyphenate: false)[rf]ul experiences. \
// Test enabling hyphenation for short passages.
#set text(hyphenate: false)
Welcome to wonderful experiences. \
Welcome to wo#text(hyphenate: true)[nd]erful experiences. \
--- hyphenate-between-shape-runs ---
// Hyphenate between shape runs.
#set page(width: 80pt)
#set text(hyphenate: true)
It's a #emph[Tree]beard.
--- hyphenate-shy ---
// Test shy hyphens.
#set text(lang: "de", hyphenate: true)
#grid(
columns: 2 * (20pt,),
gutter: 20pt,
[Barankauf],
[Bar-?ankauf],
)
--- hyphenate-punctuation ---
// This sequence would confuse hypher if we passed trailing / leading
// punctuation instead of just the words. So this tests that we don't
// do that. The test passes if there's just one hyphenation between
// "net" and "works".
#set page(width: 60pt)
#set text(hyphenate: true)
#h(6pt) networks, the rest.
--- hyphenate-outside-of-words ---
// More tests for hyphenation of non-words.
#set text(hyphenate: true)
#block(width: 0pt, "doesn't")
#block(width: 0pt, "(OneNote)")
#block(width: 0pt, "(present)")
#set text(lang: "de")
#block(width: 0pt, "(bzw.)")
--- hyphenate-pt-repeat-hyphen-natural-word-breaking ---
// The word breaker naturally breaks arco-da-velha at arco-/-da-velha,
// so we shall repeat the hyphen, even that hyphenate is set to false.
#set page(width: 4cm)
#set text(lang: "pt")
Alguma coisa no arco-da-velha é algo que está muito longe.
--- hyphenate-pt-repeat-hyphen-hyphenate-true ---
#set page(width: 4cm)
#set text(lang: "pt", hyphenate: true)
Alguma coisa no arco-da-velha é algo que está muito longe.
--- hyphenate-pt-repeat-hyphen-hyphenate-true-with-emphasis ---
#set page(width: 4cm)
#set text(lang: "pt", hyphenate: true)
Alguma coisa no _arco-da-velha_ é algo que está muito longe.
--- hyphenate-pt-no-repeat-hyphen ---
#set page(width: 4cm)
#set text(lang: "pt", hyphenate: true)
Um médico otorrinolaringologista cuida da garganta do paciente.
--- hyphenate-pt-dash-emphasis ---
// If the hyphen is followed by a space we shall not repeat the hyphen
// at the next line
#set page(width: 4cm)
#set text(lang: "pt", hyphenate: true)
Quebabe é a -melhor- comida que existe.
--- hyphenate-es-repeat-hyphen ---
#set page(width: 6cm)
#set text(lang: "es", hyphenate: true)
Lo que entendemos por nivel léxico-semántico, en cuanto su sentido más
gramatical: es aquel que estudia el origen y forma de las palabras de
un idioma.
--- hyphenate-es-capitalized-names ---
// If the hyphen is followed by a capitalized word we shall not repeat
// the hyphen at the next line
#set page(width: 6.2cm)
#set text(lang: "es", hyphenate: true)
Tras el estallido de la contienda Ruiz-Giménez fue detenido junto a sus
dos hermanos y puesto bajo custodia por las autoridades republicanas, con
el objetivo de protegerle de las patrullas de milicianos.
--- costs-widow-orphan ---
#set page(height: 60pt)
#let sample = lorem(12)
#sample
#pagebreak()
#set text(costs: (widow: 0%, orphan: 0%))
#sample
--- costs-runt-avoid ---
#set par(justify: true)
#let sample = [please avoid runts in this text.]
#sample
#pagebreak()
#set text(costs: (runt: 10000%))
#sample
--- costs-runt-allow ---
#set par(justify: true)
#set text(size: 6pt)
#let sample = [a a a a a a a a a a a a a a a a a a a a a a a a a]
#sample
#pagebreak()
#set text(costs: (runt: 0%))
#sample
--- costs-hyphenation-avoid ---
#set par(justify: true)
#let sample = [we've increased the hyphenation cost.]
#sample
#pagebreak()
#set text(costs: (hyphenation: 10000%))
#sample
--- costs-invalid-type ---
// Error: 18-37 expected ratio, found auto
#set text(costs: (hyphenation: auto))
--- costs-invalid-key ---
// Error: 18-52 unexpected key "invalid-key", valid keys are "hyphenation", "runt", "widow", and "orphan"
#set text(costs: (hyphenation: 1%, invalid-key: 3%))
--- costs-access ---
#set text(costs: (hyphenation: 1%, runt: 2%))
#set text(costs: (widow: 3%))
#context test(text.costs, (hyphenation: 1%, runt: 2%, widow: 3%, orphan: 100%))
|
|
https://github.com/Anastasia-Labs/project-close-out-reports | https://raw.githubusercontent.com/Anastasia-Labs/project-close-out-reports/main/f10-design-patterns-closeout-report/design-patterns-close-out-report.typ | typst | #let image-background = image("images/Background-Carbon-Anastasia-Labs-01.jpg", height: 100%)
#set page(
background: image-background,
paper: "a4",
margin: (left: 20mm, right: 20mm, top: 40mm, bottom: 30mm)
)
#set text(15pt, font: "Barlow")
#v(3cm)
#align(center)[#box(width: 75%, image("images/Logo-Anastasia-Labs-V-Color02.png"))]
#v(1cm)
#set text(20pt, fill: white)
#align(center)[#strong[Design Patterns - Final Milestone]\
#set text(15pt); Project Close-out Report]
#v(5cm)
#set text(13pt, fill: white)
#table(
columns: 2,
stroke: none,
[*Project Number*], [1000012],
[*Project manager*], [<NAME>],
[*Date Started*], [2023, October],
[*Date Completed*], [2024, August],
)
#set text(fill: luma(0%))
#set page(
background: none,
header: [
#place(right, dy: 12pt)[#box(image(height: 75%,"images/Logo-Anastasia-Labs-V-Color01.png"))]
#line(length: 100%)
],
header-ascent: 5%,
footer: [
#set text(11pt)
#line(length: 100%)
#align(center)[*Anastasia Labs* \ Project Close-out Report]
],
footer-descent: 20%
)
#show link: underline
#show outline.entry.where(level: 1): it => {
v(12pt, weak: true)
strong(it)
}
#counter(page).update(0)
#set page(
footer: [
#set text(11pt)
#line(length: 100%)
#align(center)[*Anastasia Labs* \ Project Close-out Report]
#place(right, dy: -7pt)[#counter(page).display("1/1", both: true)]
]
)
#v(100pt)
#outline(depth: 2, indent: 1em)
#pagebreak()
#set terms(separator: [: ], hanging-indent: 40pt)
#v(150pt)
= Design Patterns
We would like to briefly take a look back and go over the implemented design patterns
#v(20pt)
/ Project Name: Streamlining Development: A User-Friendly Smart Contract Library \
for Plutarch and Aiken Design Patterns & Efficiency
/ URL: #link("https://projectcatalyst.io/funds/10/f10-development-and-infrastructure/anastasia-labs-streamlining-development-a-user-friendly-smart-contract-library-for-plutarch-and-aiken-design-patterns-and-efficiency")[Project Catalyst Proposal]
#pagebreak()
#v(20pt)
= Introduction
This project aimed to create user-friendly smart contract libraries for Plutarch and Aiken, addressing the challenge of unintuitive design patterns in Cardano development. Our goal was to abstract away complex patterns, making them accessible to developers across the ecosystem while maintaining code readability and efficiency.
As developers ourselves, we understood the frustration of dealing with complex patterns that hindered productivity and made it difficult for new developers to enter the ecosystem. We believe that by abstracting away these complexities, we could make Cardano development more accessible to developers across the ecosystem, without sacrificing code readability or efficiency.
#v(60pt)
= Objectives and Challenges
== Objectives
- Create comprehensive libraries for both Plutarch and Aiken
- Document key design patterns and efficiency tricks, helping developers avoid common pitfalls and optimize their code
- Develop wrapper functions to improve efficiency without sacrificing readability; in general, the aim is to make developers' lives easier
- Engage with the community to share knowledge and gather feedback
== Challenges Addressed
- Redundant efforts across projects: we saw that many developers were struggling with the same issues and reinventing the wheel
- Complex design patterns and a lack of standardization made it difficult to write secure and efficient code, increasing the risk of vulnerabilities
- Higher barriers to entry, due to a steep learning curve and a lack of user-friendly resources for new developers entering the ecosystem
#pagebreak()
#v(20pt)
= Planning
Our project was divided into three main phases:
#v(60pt)
=== Design and Documentation
We started by identifying the most unintuitive design patterns and documenting them in detail. We published this documentation on our GitHub repository and contributed to other resources as well
=== Library Development
We focused on creating reusable, efficient code and implemented key design patterns and wrapper functions for both Plutarch and Aiken, focusing on usability and performance optimization.
=== Testing
No implementation is complete without thorough testing. We developed comprehensive testing suites, including unit tests and property-based tests, to ensure the reliability and correctness of what we implemented.
#pagebreak()
#v(20pt)
= Recap - Design Patterns
#v(20pt)
1 - *Enhanced Enum Data Mapping Functions*
Streamlined implementation of simple redeemers, reducing complexity and lowering costs. This pattern directly maps enumeration cases to integer values, improving efficiency over standard mapping functions
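
As an illustration of the idea (the actual implementations live in the Plutarch and Aiken libraries; the Python names below are purely illustrative), the pattern boils down to branching on the constructor index directly:

```python
# Illustrative Python sketch, not the actual Plutarch/Aiken code: instead of
# decoding the redeemer into a full constructor-plus-fields structure and
# dispatching on it, the validator reads the constructor index directly and
# branches on the integer.

MINT, BURN, UPDATE = 0, 1, 2  # enumeration cases mapped straight to integers

def validate(redeemer_index: int) -> str:
    # One integer comparison per case: no structured decoding step.
    if redeemer_index == MINT:
        return "run minting checks"
    if redeemer_index == BURN:
        return "run burning checks"
    if redeemer_index == UPDATE:
        return "run update checks"
    raise ValueError("unknown redeemer")
```

On-chain, skipping the structured decoding step is what lowers the execution cost for simple redeemers.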
2 - *Stake Validator*
Optimized transaction-level validation using the "withdraw zero trick." This approach significantly reduced script size and ExUnits cost, with a theoretical 5-10x efficiency improvement for transaction-level validation compared to traditional implementations.
#box(height: 150pt,
columns(3, gutter: 11pt)[
#figure(
image(fit: "contain", height: 100%, width: 100%,"Stake-val1.png"),
)
#figure(
image(fit: "contain", height: 100%, width: 100%,"Stake-val2.png"),
)
#figure(
image(fit: "contain", height: 100%, width: 100%,"Stake-val3.png"),
)
])
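
The claimed gain can be sketched with a toy cost model (Python, with made-up cost numbers rather than real ExUnits):

```python
# Toy cost model (made-up numbers, not real ExUnits) of why delegating shared
# logic to a single stake-validator invocation pays off: each input's spending
# script shrinks to a cheap "is the zero-withdrawal present?" check, while the
# heavy global logic runs exactly once per transaction.

GLOBAL_LOGIC_COST = 100    # cost of the shared validation logic
FORWARD_CHECK_COST = 2     # per-input check that the withdrawal is present

def traditional_cost(n_inputs: int) -> int:
    # Every spending script repeats the global logic.
    return n_inputs * GLOBAL_LOGIC_COST

def stake_validator_cost(n_inputs: int) -> int:
    # Global logic once, plus a tiny forwarding check per input.
    return GLOBAL_LOGIC_COST + n_inputs * FORWARD_CHECK_COST

# e.g. traditional_cost(10) == 1000 while stake_validator_cost(10) == 120
```

The gap widens with the number of script inputs, which is why the improvement depends on transaction shape.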
3 - *Merkelized Validators*
Addressed script size limitations by leveraging reference scripts and the "withdraw zero trick." This pattern allows for powerful optimizations while keeping the main validator size "within limits", effectively creating smart contracts with near-infinite size potential.
#pagebreak()
#v(20pt)
4 - *Transaction Level Validation - Minting Policy*
Optimized batch processing of UTxOs by delegating validation to a minting script executed once per transaction. This significantly improves efficiency for high-throughput applications, potentially lowering transaction costs.
#box(height: 150pt,
columns(3, gutter: 11pt)[
#figure(
image(fit: "contain", height: 100%, width: 100%,"tx-level-val1.png"),
)
#figure(
image(fit: "contain", height: 100%, width: 100%,"tx-level-val2.png"),
)
#figure(
image(fit: "contain", height: 100%, width: 100%,"tx-level-val3.png"),
)
])
5 - *Strict && Checks*
Addressed inconsistencies in boolean operations across Plutus, Plutarch, and Aiken, providing predictable compilation outcomes and optimizing transaction costs
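
The distinction behind this pattern can be sketched in Python (how each of Plutus, Plutarch, and Aiken compiles `&&` differs; this only shows lazy versus strict conjunction):

```python
# Illustrative sketch: a short-circuiting AND may skip its second operand,
# while a strict AND always evaluates both before combining them.

evaluated = []

def check(name: str, result: bool) -> bool:
    evaluated.append(name)       # record that this check actually ran
    return result

# Lazy (short-circuit): `b` never runs when `a` is already False.
evaluated.clear()
lazy = check("a", False) and check("b", True)
lazy_ran = list(evaluated)       # ["a"]

# Strict: both operands run before being combined.
evaluated.clear()
a, b = check("a", False), check("b", True)
strict = a and b
strict_ran = list(evaluated)     # ["a", "b"]
```

Both forms return the same boolean here, but they differ in which checks execute, and hence in cost and in predictability of the compiled output.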
6 - *UTxO Indexer*
Introduced UTxO indices within the redeemer, allowing validators to efficiently sort and pair inputs with outputs, optimizing transactions with multiple inputs and outputs
#box(height: 150pt,
columns(1, gutter: 11pt)[
#figure(
image(fit: "contain", height: 100%, width: 100%,"utxo-indexer.png"),
)
])
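
A hypothetical Python sketch of the indexing idea (the real code lives in the Plutarch and Aiken libraries):

```python
# Hypothetical sketch: the redeemer carries (input_index, output_index) pairs,
# so the validator jumps straight to each paired input/output in O(1) instead
# of searching both lists for a match.

def paired_utxos(inputs, outputs, index_pairs):
    # index_pairs comes from the redeemer; the validator then only has to
    # check that each referenced pair satisfies the contract's conditions.
    return [(inputs[i], outputs[o]) for i, o in index_pairs]

inputs = ["in_A", "in_B", "in_C"]
outputs = ["out_X", "out_Y", "out_Z"]
pairs = paired_utxos(inputs, outputs, [(0, 2), (2, 0)])
# pairs == [("in_A", "out_Z"), ("in_C", "out_X")]
```

Avoiding the quadratic search-for-a-match step is what makes the pattern attractive for transactions with many inputs and outputs.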
#pagebreak()
#v(20pt)
7 - *TxInfoMint Normalization*
Addressed the challenge of the automatic 0 Lovelace value appended to the txInfoMint field, mitigating its unintended consequences
8 - *Validity Range Normalization*
Introduced a normalized representation of time ranges, reducing ambiguity and eliminating redundant or meaningless instances.
#box(height: 120pt,
columns(2, gutter: 11pt)[
#figure(
image(fit: "contain", height: 100%, width: 100%,"val-range-norm1.png"),
)
#figure(
image(fit: "contain", height: 100%, width: 100%,"val-range-norm2.png"),
)
])
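
A simplified Python sketch of the normalization idea (the on-chain version works on POSIX time bounds with open/closed flags; plain integer bounds are assumed here):

```python
# Simplified sketch: every range is mapped to a canonical closed form, and
# meaningless (empty) ranges are collapsed, so two ranges can be compared
# without handling redundant encodings of the same interval.

NEG_INF, POS_INF = object(), object()        # sentinel unbounded endpoints

def normalize(lower, lower_closed, upper, upper_closed):
    # Open integer bounds are equivalent to shifted closed ones.
    if lower not in (NEG_INF, POS_INF) and not lower_closed:
        lower = lower + 1
    if upper not in (NEG_INF, POS_INF) and not upper_closed:
        upper = upper - 1
    # A range whose lower bound exceeds its upper bound contains nothing.
    if lower is not NEG_INF and upper is not POS_INF and lower > upper:
        return None                          # canonical "empty range"
    return (lower, upper)

# (10, 20] normalizes to [11, 20]; [5, 5) is empty and normalizes to None.
```

Collapsing redundant encodings up front is what removes the ambiguity the pattern targets.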
#pagebreak()
#v(20pt)
= Detailed list of KPIs and references
#v(60pt)
#box(height: 360pt,
stroke: none,
columns(2, gutter: 21pt)[
== Challenge KPIs
=== Performance Optimization
- Optimized #link("https://github.com/Anastasia-Labs/design-patterns/blob/main/enum-redeemers/ENUM-REDEEMERS.md")[mapping functions] to reduce complexity and cost of smart contracts
- Managed #link("https://github.com/Anastasia-Labs/design-patterns/blob/main/merkelized-validators/merkelized-validators.md")[script size and execution budgets] to reduce transaction fees
- #link("https://github.com/Anastasia-Labs/design-patterns/blob/main/stake-validator/STAKE-VALIDATOR.md")[Reduced ExUnits cost] compared to traditional checks
=== Security Enhancement
- Measures against known exploits like #link("https://github.com/Anastasia-Labs/design-patterns/blob/main/stake-validator/STAKE-VALIDATOR.md")[double satisfaction]
- Comprehensive validation by incorporating #link("https://github.com/Anastasia-Labs/design-patterns/blob/main/utxo-indexers/UTXO-INDEXERS.md")[UTxO indices] within the redeemer
=== Consistency
- Predictable #link("https://github.com/Anastasia-Labs/design-patterns/blob/main/strict-and-checks/STRICT-AND-CHECKS.md")[compilation outcomes]
- Provided a normalized representation of #link("https://github.com/Anastasia-Labs/design-patterns/blob/main/validity-range-normalization/VALIDITY-RANGE-NORMALIZATION.md")[validity ranges]
== Project KPIs
=== Library Completeness
- Inclusion of key #link("https://github.com/Anastasia-Labs/design-patterns/tree/main/enum-redeemers")[design patterns] for #link("https://github.com/Anastasia-Labs/plutarch-design-patterns/tree/main")[Plutarch] and #link("https://github.com/Anastasia-Labs/aiken-design-patterns/tree/main")[Aiken]
=== Documentation Quality
- High-quality, detailed documentation for each smart contract library with detailed flow charts/images displaying solution architectures
=== Engagement
- Active participation in social networks, GitHub, and community events
]
)
#pagebreak()
#v(20pt)
= Key achievements <key-achievements>
#v(60pt)
=== Development of Comprehensive Libraries
User-friendly libraries for Plutarch and Aiken, simplifying complex design patterns without sacrificing readability and circumventing repetitive boilerplate.
A comprehensive testing suite has been developed utilizing unit and property-based tests. More on it can be found in our extensive #link("https://drive.google.com/file/d/1Oju4cMF7jrIjh5VbIueTyp45T45g1159/view?usp=sharing")[*Milestone-4 report*]

Example uses of these libraries can be found for 8 different design pattern scenarios: \
\
- #link("https://github.com/Anastasia-Labs/aiken-design-patterns/tree/main/validators")[*For Aiken*] \
- #link("https://github.com/Anastasia-Labs/plutarch-design-patterns/blob/main/src/Plutarch/MerkelizedValidator.hs")[*For Plutarch*]
#v(20pt)
=== Engagement
This year, presentations on our implemented design patterns were given at Buidlfest, a community event organized specifically for 100 developers on Cardano. Communication with the developer community is really important to us, as we create tools specifically to make development on Cardano easier day by day.
Examples of the #link("https://docs.google.com/document/d/1DV6hN0lrFCPdHLbMYQHwmUUv_VsRQYvKjSe-U7k1x9s/edit?usp=sharing")[feedback we received during our presentations/on-stage] (Toulouse, Buidlfest)
#pagebreak()
#v(20pt)
= Measurable Result Examples
#v(30pt)
*Transaction-Level Validation Efficiency*
Utilizing the "Withdraw" redeemer in staking scripts to run global logic once, rather than for each input. A significant efficiency improvement over current design patterns can be observed, depending on the number of script inputs in the transaction.
*Script Size Reduction*
Implementation of techniques like Merkelized Validators to address script size limitations. Potential for "near-infinite" script size while maintaining efficiency, enabling more complex on-chain logic.
*Transaction Cost Reduction*
Implementation of the UTxO Indexer Design Pattern for efficient sorting and pairing of inputs and outputs, providing a performance boost for transactions with multiple inputs and outputs. Another example is the introduction of Strict && Checks for more predictable compilation outcomes and optimized transaction costs.
#v(10pt)
= Key learnings <key-learnings>
#v(15pt)
=== User Feedback
Incorporated feedback from developers/users to improve the libraries
=== Process Improvements
The development process has been improved based on insights gained during the project
=== Best Practices
Documented best practices for smart contract development and future maintainability, and learned the importance of clear documentation and examples in promoting the adoption of advanced design patterns
#pagebreak()
#v(20pt)
= Next steps <next-steps>
#v(10pt)
=== Feature Enhancements
We will maintain and further optimize our existing libraries created for the developers.
Additional design pattern libraries that streamline the implementation process for other existing smart contract languages might come to life as the needs of our developer community require it. (Such as #link("https://github.com/Anastasia-Labs/scalus-design-patterns")[Scalus], Helios, Plu-ts ...)
=== Expansion
Targeting a wider developer audience through increased outreach. We are utilizing our design patterns in other tools we develop on Cardano too.
For example, for Lucid Evolution we want to display design patterns in our tutorial series via the evolution library. We strive to create value by making our tools complementary to each other.
#v(10pt)
= Final thoughts
#v(10pt)
The project successfully addresses its purpose by creating a freely accessible library of design patterns for Cardano developers. Initiatives like this help best practices and already-solved development puzzles on Cardano spread, creating ecosystem-wide returns.
We would like to believe our long-lasting open-source efforts have simplified design decisions and improved developer accessibility.
#v(15pt)
= Resources
#v(10pt)
#box(height: 50pt,
columns(3, gutter: 1pt)[
== Project
- #link("https://github.com/Anastasia-Labs/design-patterns")[GitHub Repository] \
- #link("https://projectcatalyst.io/funds/10/f10-development-and-infrastructure/anastasia-labs-streamlining-development-a-user-friendly-smart-contract-library-for-plutarch-and-aiken-design-patterns-and-efficiency")[Catalyst Proposal]
=== Aiken
- #link("https://github.com/Anastasia-Labs/aiken-design-patterns")[Aiken - Design Patterns] \
- #link("https://github.com/Anastasia-Labs/aiken-design-patterns/blob/main/assets/images/test_report.png")[Test Results] / #link("https://github.com/Anastasia-Labs/aiken-design-patterns/blob/main/assets/images/aiken-design-patterns.gif")[GIF]
=== Plutarch
- #link("https://github.com/Anastasia-Labs/plutarch-design-patterns")[Plutarch - Design Patterns] \
- #link("https://github.com/Anastasia-Labs/plutarch-design-patterns/blob/main/assets/images/test_report.png")[Test Results] / #link("https://github.com/Anastasia-Labs/plutarch-design-patterns/blob/main/assets/images/plutarch-design-patterns.gif")[GIF]
]
)
#v(15pt)
#align(center)[== Close-out Video <link-other>
- #link("https://youtu.be/k6ovQpRyUOM")[Youtube]]
|
|
https://github.com/Mouwrice/thesis-typst | https://raw.githubusercontent.com/Mouwrice/thesis-typst/main/introduction.typ | typst | #import "lib.typ": *
= Introduction
This introductory chapter provides the necessary context for this master's dissertation. It starts by presenting the concept of body pose estimation and motion capture. Next is an introduction to the demo application that will be developed as part of this project and the motivation behind it. Next, it discusses the research questions that will be addressed and the goals to be achieved. Finally, it outlines the structure of the dissertation and provides an overview of the chapters that follow.
Note that this master's dissertation is not a pure computer vision or machine learning research project. It is a project that aims to uncover some practical issues when using body pose estimation for interactive applications and find ways to mitigate these issues. The project is a combination of research and development, with a focus on the practical aspects of using body pose estimation for interactive applications.
It evaluates the MediaPipe Pose model, which is a body pose estimation model provided by the MediaPipe framework. The evaluation is done by measuring the accuracy and deviation of the model under different conditions, achieving an average accuracy of 5-10 mm. The measurements also reveal some limitations of the model, such as jitter and noise in the output. To mitigate these issues, a method is proposed based on predicting the output of the model. The method is evaluated and shown to reduce jitter and noise in the output. Finally, a demo application was developed that uses the MediaPipe Pose model for air drumming. The application allows users to play virtual drums by moving their hands and feet in the air. The application is evaluated in terms of user experience and performance, and some insights are provided for future work.
== On-device body pose estimation
Before diving into the goals and research questions of this project, it is important to provide some context on what is meant by on-device body pose estimation. Body pose estimation is the task of inferring the pose of a person from an image or video. The pose typically consists of the 2D or 3D locations of key body parts, such as the head, shoulders, elbows, wrists, hips, knees, and ankles. In recent developments, more and more key points are commonly found in these estimation tools, sometimes with the ability to achieve complete hand and finger tracking. Body pose estimation has a wide range of applications, including human-computer interaction, augmented reality, and motion capture @object-pose-survey. It can be considered a new form of motion capture based on computer vision.
On-device body pose estimation refers to the ability to perform body pose estimation directly on a device, such as a smartphone or tablet, without the need for specialized hardware or an internet connection. This is made possible by recent advancements in deep learning and computer vision, which have enabled the development of lightweight and efficient models that can run in real-time on mobile devices.
== Traditional motion capture systems
Traditional motion capture systems are used to track the movements of actors or performers either in real-time or offline. These systems typically consist of multiple cameras that capture the movements of reflective markers placed on the actor's body. The captured data is then processed to reconstruct the actor's movements in 3D space. Motion capture systems are widely used in the entertainment industry for creating realistic animations for movies, video games, and virtual reality experiences.
== Motivation
Currently traditional motion capture systems are still the most accurate and reliable way to capture human movements. However, they are expensive, require specialized equipment and expertise to set up and operate. On the other hand, on-device body pose estimation offers a more accessible and affordable alternative that can run in real-time on consumer devices. By developing a demo application that uses on-device body pose estimation for air drumming, we can explore the capabilities and limitations of this technology and its potential for interactive applications. This can help inform future research and development efforts in the field of computer vision and human-computer interaction as well as inspire new applications and use cases.
== Demo Application
The project aims to develop a demo application that uses on-device body pose estimation to enable air drumming. The application allows users to play virtual drums by moving their hands in the air, as well as use their feet to press down on virtual pedals. The goal is to provide a fun and interactive experience that showcases the capabilities and limitations of on-device body pose estimation.
The main inspiration came from an older sketch performed by <NAME> as part of his Rowan Atkinson stand up tours during the years 1981 to 1986. In the clip, Rowan bumps into, what appears to be, an invisible drum kit.
#footnote[
The clip is available on YouTube from the official "<NAME> Live" channel: #link("https://www.youtube.com/watch?v=A_kloG2Z7tU")[https://www.youtube.com/watch?v=A_kloG2Z7tU #link-icon]
]
There are no actual attributes on stage, the only thing standing on the stage is a drum stool. Various drum sounds can be heard which seem to perfectly match the movements performed by <NAME>. After the character played by <NAME> understands that he has stumbled upon an invisible drum kit, he starts playing the drums with his hands and feet. What follows is a neat trick of coordination and timing, as the sounds that we are hearing are obviously either prerecorded or performed by someone off-stage. The demo application aims to capture some of that magic by allowing users to actually play drums without the need for physical drumsticks or a drum kit.
The demo application will be developed using the MediaPipe framework, which provides a sufficiently accurate implementation of body pose estimation. The application will leverage the body pose estimation provided by MediaPipe to track the user's body movements in real-time. It will then use this information to generate drum sounds based on the user's hand and foot movements. The application will also include a graphical user interface that provides visual feedback to the user.
== Goals and research questions
As mentioned, one part of this project is to develop a demo application that uses on-device body pose estimation to enable air drumming. But that is not all. One aspect of this research is to evaluate the performance of the body pose estimation model provided by MediaPipe and identify its limitations. This will involve conducting experiments to measure the accuracy and robustness of the model under different conditions. Another goal, on top of the performance evaluation, is to provide a more pragmatic comparison when it comes to using body pose estimation versus traditional motion capture systems. During the development of the demo application, some properties of the body pose estimation have been identified that need to be considered when developing interactive applications. All of this addresses the lengthy research question: "What are the capabilities and limitations of on-device body pose estimation, specifically MediaPipe Pose, for interactive applications compared to traditional motion capture systems?"
During the measurements some signal stability issues were identified. These issues are caused by jitter and noise in the body pose estimation output. So another goal is to come up with a method that can reduce these issues. This leads to the second research question: "How can jitter and noise in the body pose estimation output be reduced or mitigated to improve the stability of interactive applications?"
== Structure of the dissertation
Following this introduction, the dissertation is structured as follows:
- @sota[Chapter] provides an overview of the state-of-the-art in body pose estimation and motion capture, focusing on recent developments and advancements in the field.
- @mediapipe-pose[Chapter] introduces the MediaPipe framework and its body pose estimation model, highlighting its key features and capabilities.
- @measuring-accuracy-and-deviation[Chapter] presents the measurements that were conducted to evaluate the performance of the MediaPipe Pose model and identify its limitations.
- @jitter-noise[Chapter] discusses the issues of jitter and noise in the body pose estimation output and proposes a method to reduce these issues.
- @drum-application[Chapter] describes the development of the demo application for air drumming, including the design and implementation of the application as well as some insights into user experience and performance. The capter also includes a comparison between body pose estimation and traditional motion capture systems for interactive applications.
- @future-work[Chapter] provides some insights into future work that could be done to improve the demo application and address the limitations of on-device body pose estimation.
- Finally, @conclusion[Chapter] concludes the dissertation by summarizing the key findings and contributions of this research.
|
|
https://github.com/RiccardoTonioloDev/Bachelor-Thesis | https://raw.githubusercontent.com/RiccardoTonioloDev/Bachelor-Thesis/main/chapters/results.typ | typst | Other | #pagebreak(to: "odd")
#import "../config/functions.typ": *
= Results <ch:risultati>
Nel seguente capitolo si farà un breve riepilogo sui risultati quantitativi ottenuti usando le metriche di valutazione usate in @eigen e le due metriche in più introdotte nella valutazione dei modelli PyXiNet, soffermandosi ad analizzare casistiche interessanti. Successivamente vengono esposti come risultati qualitativi, le mappe di profondità prodotte rispettivamente da PDV1, PDV2 (riscritti in @PyTorch) e i dai migliori modelli sperimentali $MM$ II e $beta" CBAM"$ I.
== Risultati quantitativi
I seguenti sono tutti i risultati ottenuti dai due PyDNet e dai tredici esperimenti effettuati:
#ext_eval_table(
(
(name: [PDV1], vals: (1971624.0,0.15,0.16,1.52,6.229,0.253,0.782,0.916,0.964)),
(name: [PDV2], vals: (716680.0,0.10,0.157,1.487,6.167,0.254,0.783,0.917,0.964)),
(name: [PyXiNet $alpha" I"$],vals:(429661.0,0.14,0.17,1.632,6.412,0.269,0.757,0.903,0.958)),
(name: [PyXiNet $alpha" II"$],vals:(709885.0,0.12,0.168,1.684,6.243,0.259,0.777,0.913,0.960)),
(name: [PyXiNet $beta" I"$],vals:(941638.0,0.16,0.156,1.546,6.259,0.251,0.791,0.921,0.965)),
(name: [PyXiNet $beta" II"$],vals:(481654.0,0.14,0.168,1.558,6.327,0.259,0.762,0.910,0.963)),
(name: [PyXiNet $beta" III"$],vals:(1246422.0,0.16,0.148,1.442,6.093,0.241,0.803,0.926,0.967)),
(name: [PyXiNet $beta" IV"$],vals:(1446014.0,0.18,0.146,1.433,6.161,0.241,0.802,0.926,0.967)),
(name: [PyXiNet $MM" I"$],vals:(1970643.0,0.36,0.147,1.351,5.98,0.244,0.8,0.926,0.967)),
(name: [PyXiNet $MM" II"$],vals:(2233197.0,0.38,0.14,1.289,5.771,0.234,0.814,0.933,0.969)),
(name: [PyXiNet $MM" III"$],vals:(1708499.0,0.35,0.141,1.279,5.851,0.239,0.808,0.927,0.968)),
(name: [PyXiNet $MM" IV"$],vals:(1839981.0,0.36,0.145,1.25,5.885,0.242,0.798,0.926,0.967)),
(name: [PyXiNet $beta"CBAM I"$],vals:(1250797.0,0.19,0.143,1.296,5.91,0.239,0.805,0.928,0.968)),
(name: [PyXiNet $beta"CBAM II"$],vals:(1450389.0,0.23,0.147,1.379,5.974,0.239,0.806,0.927,0.968)),
(name: [CBAM PyDNet],vals:(746673.0,0.28,0.167,1.722,6.509,0.251,0.776,0.916,0.965)),
),
2,
[Comparison of the results of all experiments],
res: 102pt
)
As already stated in the previous chapters, the results obtained with $MM$ II are the best. However, since that model is too heavy and slow to run in an @embedded context, $beta$CBAM I would certainly be preferable there.
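
For reference, the metrics reported in these tables are the standard depth-estimation metrics from @eigen; a minimal pure-Python sketch of their computation (real evaluations run on full depth maps, typically with NumPy, while `pred` and `gt` here are flat lists of strictly positive depths) is the following:

```python
# Minimal pure-Python sketch of the Eigen et al. depth metrics:
# Abs Rel, Sq Rel, RMSE, RMSE log, and the three delta accuracies.
import math

def depth_metrics(pred, gt):
    n = len(gt)
    abs_rel = sum(abs(p - g) / g for p, g in zip(pred, gt)) / n
    sq_rel = sum((p - g) ** 2 / g for p, g in zip(pred, gt)) / n
    rmse = math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)) / n)
    rmse_log = math.sqrt(
        sum((math.log(p) - math.log(g)) ** 2 for p, g in zip(pred, gt)) / n)

    # delta accuracy: fraction of pixels whose ratio error is below a threshold
    def delta(t):
        return sum(max(p / g, g / p) < t for p, g in zip(pred, gt)) / n

    return (abs_rel, sq_rel, rmse, rmse_log,
            delta(1.25), delta(1.25 ** 2), delta(1.25 ** 3))
```

With `pred == gt`, all error metrics are 0 and all delta accuracies are 1, which is a quick sanity check for the implementation.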
#block([
Possiamo notare però che il divario tra le _performance_ dei due modelli appena menzionati non è eccessivo, soprattutto considerando il divario nei tempi di inferenza e nel numero di parametri:
#ext_eval_table(
(
(name: [PyXiNet $MM" II"$],vals:(2233197.0,0.38,0.14,1.289,5.771,0.234,0.814,0.933,0.969)),
(name: [PyXiNet $beta"CBAM I"$],vals:(1250797.0,0.19,0.143,1.296,5.91,0.239,0.805,0.928,0.968)),
),
0,
[PyXiNet $MM" II"$ and PyXiNet $beta"CBAM I"$ compared],
res: 102pt
)
],breakable: false,width: 100%)
#block([
It is also worth noting that, although the CBAM module is a simpler attention mechanism than _self attention_, not all experiments using the latter technique led to better performance than the former:
#ext_eval_table(
(
(name: [PyXiNet $MM" I"$],vals:(1970643.0,0.36,0.147,1.351,5.98,0.244,0.8,0.926,0.967)),
(name: [PyXiNet $MM" IV"$],vals:(1839981.0,0.36,0.145,1.25,5.885,0.242,0.798,0.926,0.967)),
(name: [PyXiNet $beta"CBAM I"$],vals:(1250797.0,0.19,0.143,1.296,5.91,0.239,0.805,0.928,0.968)),
),
0,
[PyXiNet $MM" I and IV"$ vs. PyXiNet $beta"CBAM I"$],
res: 102pt
)
],breakable: false,width: 100%)
#block([
If we instead consider the use of XiNet as the @encoder, it is easy to deduce that it is directly responsible for part of the increase in inference time. This can be seen in the comparison between the $alpha$ models and the PDV1 and PDV2 models:
#ext_eval_table(
(
(name: [PDV1], vals: (1971624.0,0.15,0.16,1.52,6.229,0.253,0.782,0.916,0.964)),
(name: [PDV2], vals: (716680.0,0.10,0.157,1.487,6.167,0.254,0.783,0.917,0.964)),
(name: [PyXiNet $alpha" I"$],vals:(429661.0,0.14,0.17,1.632,6.412,0.269,0.757,0.903,0.958)),
(name: [PyXiNet $alpha" II"$],vals:(709885.0,0.12,0.168,1.684,6.243,0.259,0.777,0.913,0.960)),
),
0,
[PDV1 and PDV2 vs. PyXiNet $alpha" I and II"$],
res: 102pt
)
],breakable: false,width: 100%)
Indeed, even though in all its experiments the $alpha$ family has fewer parameters than PDV2, its inference time is still at least 20% higher than the latter's. This is due to the large number of _element wise_ tensor sums that the XiNets perform. Such operations, even though they do not directly increase the number of parameters, do increase the number of computations to be performed. Consequently, short of a radically different use of XiNet within the architectures compared to what was tried, it will not be possible to go below PDV2's inference time.
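
The point about element-wise sums can be made concrete with a toy parameter/operation count in Python (illustrative shapes, not profiled numbers):

```python
# Toy illustration: an element-wise tensor sum contributes zero learnable
# parameters, yet still costs one addition per element at inference time.

def conv3x3(c_in, c_out, h, w):
    params = c_out * (c_in * 9 + 1)          # learnable weights + biases
    ops = c_out * h * w * c_in * 9           # multiply-accumulates
    return params, ops

def elementwise_add(c, h, w):
    return 0, c * h * w                      # no parameters, one add per element

C, H, W = 64, 96, 160                        # a hypothetical feature-map shape
conv_params, conv_ops = conv3x3(C, C, H, W)
add_params, add_ops = elementwise_add(C, H, W)
# add_params == 0: skip-connection sums raise compute (and thus latency)
# without raising the parameter count.
```

This is why parameter count alone underestimates the inference time of architectures that rely heavily on such sums.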
#block([
== Risultati qualitativi
In seguito vengono elencati quattro risultati qualitativi dei modelli precedentemente citati.
],breakable: false,width: 100%)
#stack(dir: ltr,
align(left)[#figure(image("../images/Inferences/RisultatiQualitativi1.drawio.png",width: 200pt),caption:[Inference on the first image.])],
align(right)[#figure(image("../images/Inferences/RisultatiQualitativi2.drawio.png",width: 200pt),caption:[Inference on the second image.])]
)
#stack(dir: ltr,
align(left)[#figure(image("../images/Inferences/RisultatiQualitativi3.drawio.png",width: 200pt),caption:[Inference on the third image.])],
align(right)[#figure(image("../images/Inferences/RisultatiQualitativi5.drawio.png",width: 200pt),caption:[Inference on the fourth image.])]
)
Although the results are very similar to one another at a qualitative level, a more attentive eye can notice that the $beta$CBAM I and $MM$ II models manage to capture shapes better, at the same time reducing the artifacts and distortions present in the image.
|
https://github.com/teamdailypractice/pdf-tools | https://raw.githubusercontent.com/teamdailypractice/pdf-tools/main/typst-pdf/readme.md | markdown | # typst - how to generate pdf
* <https://typst.app/docs/tutorial/writing-in-typst/>
* filename - `filename.typ`
## Installations
* typst compiler
* vs code extension - **typst LSP**
## How to use?
```bash
# Creates `file.pdf` in working directory.
typst compile file.typ
# Creates PDF file at the desired path.
SET PROJECT_ROOT=D:\git\pdf-tools\typst-pdf
SET OUTPUT_PATH=D:\git\pdf-tools\typst-pdf\output
SET FILENAME=example-02
typst compile %FILENAME%.typ %OUTPUT_PATH%\%FILENAME%.pdf
typst compile %FILENAME%.typ %OUTPUT_PATH%\%FILENAME%.pdf --root %PROJECT_ROOT%
```
## images
By NASA / <NAME> - http://www.nasa.gov/mission_pages/icebridge/multimedia/spr13/DSCN3043.html, Public Domain, https://commons.wikimedia.org/w/index.php?curid=25778382
## thirukkural
Commentary (urai) by Mu. Varadarasanar
## commands used
* cmd
```batch
cd D:\git\pdf-tools\typst-pdf\examples
SET FILENAME=a51
SET OUTPUT_PATH=D:\git\pdf-tools\typst-pdf\output
SET PROJECT_ROOT=D:\git\pdf-tools\typst-pdf
typst compile %FILENAME%.typ %OUTPUT_PATH%\%FILENAME%.pdf --root %PROJECT_ROOT%
```
* bash `cd /d/git/spring-boot-learning/data-jpa-sqlite`
* bash tamil-vu - database: `/d/git/tamilvu-thirukkural/output`
* Check the count `ls -l muva_urai_1???.txt | wc -l`
* powershell `copy-item D:\git\spring-boot-learning\data-jpa-sqlite\data-out\thirukkural-muva-urai.typ D:\git\pdf-tools\typst-pdf\examples\a61.typ`
https://www.tamilvu.org/library/l2100/html/l2100ind.htm |
|
https://github.com/tfachada/thesist | https://raw.githubusercontent.com/tfachada/thesist/main/template/Chapters/Appendix-A.typ | typst | MIT License | #import "@preview/thesist:0.2.0": flex-caption, subfigure-grid
#import "@preview/glossarium:0.5.0": gls, glspl
= An appendix
#lorem(500)
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/par-bidi_01.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test that consecutive, embedded LTR runs stay LTR.
// Here, we have two runs: "A" and italic "B".
#let content = par[أنت A#emph[B]مطرC]
#set text(font: ("PT Sans", "Noto Sans Arabic"))
#text(lang: "ar", content)
#text(lang: "de", content)
|
https://github.com/typst-jp/typst-jp.github.io | https://raw.githubusercontent.com/typst-jp/typst-jp.github.io/main/docs/tutorial/welcome.md | markdown | Apache License 2.0 | ---
description: Typst tutorial
---
# Tutorial

Welcome to Typst's tutorial! In this tutorial, you will learn how to write and format documents in Typst.
We will start with everyday tasks and gradually introduce more advanced features.

This tutorial does not assume any prior knowledge of Typst, other markup languages, or programming. However, we do assume that you already know how to edit a text file.

The best way to start is to sign up for the Typst app for free and work through the chapters below in order.
The Typst app gives you a real-time preview, syntax highlighting, and powerful autocompletion. Alternatively, you can follow this tutorial in a local text editor using the [open-source CLI](https://github.com/typst/typst).

## When to use Typst { #when-typst }

Before we get started, let's check what Typst is and when to use it. Typst is a markup language for typesetting documents. It is designed to be easy to learn and versatile. Typst takes text files with markup and outputs PDFs.

Typst is well suited for writing any kind of long-form text such as essays, articles, scientific papers, books, reports, and homework assignments. Moreover, it is a great fit for documents containing mathematical notation, such as papers in mathematics, physics, and engineering. On top of that, thanks to its strong styling and automation features, it is an excellent choice for any set of documents that share a common style, such as a book series.

## What you will learn { #learnings }

This tutorial consists of four chapters. Each chapter builds on the previous one. Here is what you will learn in each of them:

1. [Writing in Typst]($tutorial/writing-in-typst): Learn how to write text and insert images, equations, and other elements.
2. [Formatting]($tutorial/formatting): Learn how to adjust the formatting of your document, including font size and heading styles.
3. [Advanced Styling]($tutorial/advanced-styling): Create a complex page layout for a scientific paper, using typographic features such as author lists and run-in headings.
4. [Making a Template]($tutorial/making-a-template): Build a reusable template based on the paper you created in the previous chapter.

We hope you will enjoy Typst!
|
https://github.com/max-niederman/MATH51 | https://raw.githubusercontent.com/max-niederman/MATH51/main/midterm_6-a_justification.typ | typst | #import "lib.typ": *
#set page(numbering: "1/1")
#let note(content) = text(
style: "italic",
weight: "bold",
size: 0.9em,
content
)
Suppose there is a function $f : RR^2 -> RR$ such that
$
f_x = e^x^2 sin y
#h(0.25in)
"and"
#h(0.25in)
f_y = e^x^2 cos y
$
Then, because $f$ is a scalar field,
we can use the gradient theorem to compute the difference between
the values of $f$ at $(0, 0)$ and any point $(x_1, y_1) in RR^2$.
In particular, let us examine two curves beginning at $(0, 0)$ and ending at $(x_1, y_1)$:
1. The curve (call it $cal(C)$) which goes in a straight line from $(0, 0)$ to $(x_1, 0)$ along the x-axis, and then straight from $(x_1, 0)$ to $(x_1, y_1)$ along the line $x = x_1$.
2. The curve (call it $cal(C)'$) which goes in a straight line from $(0, 0)$ to $(0, y_1)$ along the y-axis, and then straight from $(0, y_1)$ to $(x_1, y_1)$ along the line $y = y_1$.
For these paths, the line integral from the gradient theorem can be computed using regular integration (which is the only kind of integration I know how to do):
$
integral_cal(C) nabla f dot dif vname(r)
&= integral_0^x_1 f_x (x, 0) dif x + integral_0^y_1 f_y (x_1, y) dif y \
integral_(cal(C)') nabla f dot dif vname(r)
&= integral_0^y_1 f_y (0, y) dif y + integral_0^x_1 f_x (x, y_1) dif x
$
#note[
I want to note that at the time I took the exam,
I was not actually aware of either the gradient theorem or the line integral.
I was thinking of the problem purely in terms of the 3D graph of $f$,
and taking slices parallel to the $x$- and $y$-planes.
In those slices, I reasoned, $f$ should behave like a single-variable function of $y$ or $x$,
so I could just use the fundamental theorem of calculus in two perpendicular slices and add the results.
]
So by the gradient theorem,
$
f(x_1, y_1) - f(0, 0)
&= integral_cal(C) nabla f dot dif vname(r) \
&= integral_(cal(C)') nabla f dot dif vname(r) \
$
We equate the two line integrals and evaluate:
$
integral_cal(C) nabla f dot dif vname(r)
&= integral_(cal(C)') nabla f dot dif vname(r) \
integral_0^x_1 f_x (x, 0) dif x + integral_0^y_1 f_y (x_1, y) dif y
&= integral_0^y_1 f_y (0, y) dif y + integral_0^x_1 f_x (x, y_1) dif x \
integral_0^x_1 e^x^2 sin 0 dif x + integral_0^y_1 e^(x_1^2) cos y dif y
&= integral_0^y_1 e^0^2 cos y dif y + integral_0^x_1 e^(x^2) sin y_1 dif x \
integral_0^x_1 0 dif x + e^(x_1^2) integral_0^y_1 cos y dif y
&= integral_0^y_1 cos y dif y + sin y_1 integral_0^x_1 e^(x^2) dif x \
e^(x_1^2) (sin y_1 - sin 0)
&= (sin y_1 - sin 0) + sin y_1 integral_0^x_1 e^(x^2) dif x \
e^(x_1^2) sin y_1
&= (1 + integral_0^x_1 e^(x^2) dif x) sin y_1 \
$
#note[
This is where I stopped on the exam,
and simply wrote that they are unequal,
but I'll continue here.
]
Let us investigate the case where $y_1 = pi / 2$:
$
e^(x_1^2) sin pi/2
&= (1 + integral_0^x_1 e^(x^2) dif x) sin pi/2 \
e^(x_1^2)
&= 1 + integral_0^x_1 e^(x^2) dif x \
(dif) / (dif x_1) e^(x_1^2)
&= (dif) / (dif x_1) (1 + integral_0^x_1 e^(x^2) dif x) \
2 x_1 e^(x_1^2)
&= dif / (dif x_1) integral_0^x_1 e^(x^2) dif x \
2 x_1 e^(x_1^2)
&= e^(x_1^2) \
2 x_1
&= 1 \
$
And in the case that $x_1 = 0$:
$
2 (0) &= 1 \
0 &= 1
$
Which is a contradiction,
so the premise that $f$ exists is false. |
|
https://github.com/Lelidle/Q12-cs-scripts | https://raw.githubusercontent.com/Lelidle/Q12-cs-scripts/main/complexity.typ | typst | #import "template.typ": *
#import "@preview/truthfy:0.2.0": generate-table, generate-empty
#import "@preview/tablex:0.0.6": tablex, cellx, rowspanx, colspanx
#import "@preview/codelst:2.0.0": sourcecode
#show: setup
#set heading (
numbering: "1."
)
#let sc = sourcecode.with(
numbers-style: (lno) => text(
size: 10pt,
font: "Times New Roman",
fill: rgb(255,255,255),
str(lno)
),
frame: block.with(
stroke: 1pt + rgb("#a2aabc"),
radius: 2pt,
inset: (x: 10pt, y: 5pt),
fill: rgb("777777")
)
)
#let hm = h(1mm)
#v(1fr)
#align(center)[#text(32pt)[Time Complexity and the Limits of Computability \ #align(center)[#image("images/timecomplexity.png", width: 80%)] ] \ Stable Diffusion Art "Time Complexity"]
#v(1fr)
#pagebreak()
#set page(
header: align(right)[
Time Complexity - Course Notes 2inf1 \
],
numbering: "1"
)
#show outline.entry.where(
level: 1
): it => {
v(12pt, weak: true)
strong(it)
}
#outline(title: "Table of Contents", indent: auto)
#pagebreak()
= Time Complexity
== Timing Algorithms
=== Single measurements
Before we dive into the theory, we want to analyze some algorithms in a purely practical way, i.e. we measure the time these algorithms need to arrive at a result.
As so often, we use the Fibonacci numbers for this; here once again the recursive and the iterative implementation:
#sc[```java
public static long fibRek(int n) {
if (n < 3) {
return 1;
} else {
return fibRek(n-1) + fibRek(n-2);
}
}
```]
#sc[```java
public static long fibIt(int n){
long x = 1;
long y = 1;
long result = 1;
for (int i=3; i<=n; i++){
result = x + y;
x = y;
y = result;
}
return result;
}
```]
*As a reminder*: to carry out time measurements in Java we can use the command *System.nanoTime()*; a typical timing method therefore looks like this:
#sc[```java
public static void timeMeasurement() {
    long start = System.nanoTime();
    methodToMeasure();
    long end = System.nanoTime();
    // print the elapsed time (end - start), not the other way around
    System.out.println("The measurement took " + (end - start) + " nanoseconds");
}
```]
With this we can now measure the time for large inputs, e.g. compute the 5000th Fibonacci number.
Overall, however, these single measurements are not yet really meaningful, since we only obtain an absolute duration for a *single* case - possibly the operating system simply favored a different thread at that moment.
To get around this problem, one might come up with the idea of executing the method several times and then taking the mean, e.g.:
#sc[```java
private static final int RUNS = 100;
public static void main(String[] args) {
timeMeasurementMeans();
}
public static void timeMeasurementMeans() {
long sum = 0;
for(int i = 0; i < RUNS; i++) {
long start = System.nanoTime();
fibIt(5000);
long end = System.nanoTime();
sum += (end-start);
}
System.out.println("The measurement took " + (sum/RUNS) + " nanoseconds");
}
```]
Unfortunately, Java throws a wrench into our plans here, because after a certain time a single measurement suddenly takes a suspiciously short time, see #link(<measurementMean>)[here].
Java evidently refuses to recompute the whole thing again and again and, after a while, reads the result from a cache.
If RUNS is set to the value $10$, however, this does not yet seem to happen, and we can at least somewhat reduce the statistical variance.
=== Measurement series
Even more interesting than single measurements, however, are whole measurement series. Our ultimate goal will be to classify algorithms by their general running time (see the #link(<onotation>)[next chapter]).
The most important factor here is how the running time of an algorithm behaves when the problem size increases - in our case, when ever larger Fibonacci numbers are to be computed.
To be able to record a whole measurement series, only one more method has to be written that carries it out. _timeMeasurementMeans_ then also has to be rewritten to accept an input parameter; the whole thing then looks e.g. like this:
#sc[```java
private static final int RUNS = 10;
private static final int INPUT_SIZE = 45;
public static String timeMeasurementSeries(){
String result = "";
for(int i = 0; i < INPUT_SIZE; i++) {
result += timeMeasurementMeans(i);
}
return result;
}
public static String timeMeasurementMeans(int n) {
long sum = 0;
for(int i = 0; i < RUNS; i++) {
long start = System.nanoTime();
fibRek(n);
long end = System.nanoTime();
sum += (end-start);
}
return n + ";" + (sum/RUNS) + "\n";
}
```]
A second change that is useful at this point has also been incorporated: the results are no longer printed to the console but stored in a string.
The semicolon and the newline \\n already hint that we then want to store the values in a file - preferably in a .csv file, which can be read directly by e.g. Excel.
For the sake of completeness, here is the _save_ method with which we can then store the files:
#sc[```java
public static void main(String[] args) {
save(timeMeasurementSeries());
}
public static void save(String str) {
Path path
= Paths.get("measurement.csv");
try {
Files.writeString(path, str, StandardCharsets.UTF_8);
}
catch (IOException ex) {
System.out.print("Invalid Path");
}
}
```]
The results can then be displayed as a chart with a spreadsheet program, e.g. as a *scatter chart*.
#task[Use the provided code and carry out various measurement series with _fibIt_ and _fibRek_.
Then analyze your results with the help of charts.]
#merke[In Excel, a chart can be created in a .csv file but not saved! So copy your results into e.g. an .xlsx file and create the charts there.]
#pagebreak()
*Analysis of the recursive implementation*
If one tries to extend the range for the recursive implementation, one already notices at about the $45$th Fibonacci number that the computation takes a very long time.
#align(center)[#image("images/fibrek.png", width:70%)]
Looking at the first chart, one can already suspect an exponential relationship. This is confirmed when the y-axis is scaled logarithmically (second figure).
In a logarithmic representation, an exponential relationship leads to a straight line, which is clearly visible here (after a few outlier values at small sizes of $n$).
Evidently, a purely recursive computation of the Fibonacci numbers without further techniques such as *dynamic programming* is extremely inefficient.
#task(customTitle:"Food for thought")[Using the code, explain why an exponential relationship is observable here!]
#pagebreak()
*Analysis of the iterative implementation*
With the iterative implementation, much larger Fibonacci numbers can easily be computed; the following example shows the results for all numbers up to $400$ and then up to $10000$:
#align(center)[#image("images/fibiter.png", width: 70%)]
Fundamentally, the second plot nicely shows the linear trend that we would expect on account of the single *loop* in the code.
At small sizes something curious still seems to happen, which is why the first 400 values are plotted separately. During the closer analysis of these outliers, a small rabbit hole opened up. The results of this research are certainly not relevant for the Abitur, but possibly interesting, and can be found in the #link(<results>)[appendix].
#pagebreak()
<onotation>
== The O-Notation
We already dealt with the so-called $cal(O)$-notation in grade 11; as a reminder, here once again the excerpt from the course notes on trees.
Fundamentally, the $cal(O)$-notation is about how the resource requirements of an algorithm scale for large inputs - both the running time and the memory requirements can be considered.
Explicitly *not* considered are:
- programming language
- operating system
- processor performance
- memory configuration
- computer architecture
- etc.
It is purely about the fundamental _efficiency_ of the algorithm, not about its implementation on specific hardware. If the algorithm is too "bad", it may not be practically realizable on any hardware at all!
The analysis is usually about making a *worst case* or an *average case* estimate, i.e. how long does my algorithm take in the worst case, or in the "average case".
*Example*:
If an array is already sorted, a good sorting algorithm will not take as long as with an "average" array - that is, an array that contains values in a completely random order.
In the *best case* the algorithm would thus be finished "immediately"; in the other cases it takes longer.
#hinweis[The *best case* is rarely considered, since it also occurs rather rarely. It is usually a special case with little practical relevance, so we restrict ourselves to the other two cases.]
Roughly speaking, our goal is to find a function $T(n)$ that bounds the growth of our running time *from above*. We can determine this function e.g. through measurement series as in the previous chapter.
The input parameter $n$ then corresponds to the problem size, just as before.
If one finds such a function, one usually states it with the *Landau symbol* $cal(O)$; one writes e.g.:
$ T(n) in cal(O)(n^2) $
This would mean: the function that describes the running time of our algorithm behaves like a quadratic function.
#hinweis[We do not care at all whether it is $n^2$ or $1000n^2$. What is decisive is the _quadratic_ relationship, because in the long run e.g. $n^3$ will still always grow faster than any arbitrary quadratic function.]
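To make this precise (the following formal definition is a supplement to these notes, but it is the standard one): $T(n) in cal(O)(g(n))$ means that there are a constant $c > 0$ and a threshold $n_0$ such that
$ T(n) <= c dot g(n) quad "for all" n >= n_0 $
This is exactly why constant factors such as the $1000$ above do not matter - they can simply be absorbed into $c$.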
When an algorithm is designed, the goal accordingly is of course that the describing function is as "good" as possible, i.e. a function that grows as slowly as possible. In the best case it is even a constant function; one then writes $cal(O)(1)$
With the *O-notation* one also speaks of *complexity classes*. The following figure shows some further complexity classes.
#align(center)[#image("images/o.png")]
One can already see clearly that a logarithmic running time is always better than even a linear running time. The *exponential* behavior occurring with _fibRek_ is not depicted here at all, since it would almost "stick" to the y-axis.
#task[Measure the running time of various algorithms that you implemented over the course of grade 11, e.g.:
- computing the length of a list
- searching in a binary tree
- graph algorithms such as depth-first search
- sorting algorithms
Continue to use charts for this as well! Then state which complexity class these algorithms fall into. (Here we always consider the running time.)
]
#pagebreak()
Of course it is impractical to first write the algorithm and only then test its time complexity. It may take a very long time to develop a suitable algorithm for your problem, only to then discover that it does not terminate in sensible time at all.
Therefore it is common to estimate the running time from the *code* itself. This can of course become arbitrarily complicated, but there are a few *rules of thumb* for simpler programs, e.g.:
- "simple" operations such as arithmetic on numbers, checking a condition, an assignment, etc. have *constant* running time.
- if a loop is involved (e.g. for $i = 0$ to $n$), at least a *linear* running time is to be expected.
- even if the loop "only" runs up to e.g. $n/2$, one still speaks of a linear running time; expressed mathematically: multiplicative constants play no role (see above).
- if loops are nested inside each other, higher polynomials result as running times, since for example for every single list entry the entire list is traversed again (this then yields $n^2$), and so on.
- with structures such as the binary tree, logarithmic running times frequently occur, since we always restrict half of our problem space (e.g. the search space).
*An example*:
#sc[```java
int sum = 0;
for(int i = 0; i < n; i++) {
    for(int j = 0; j < n/2; j++) {
        sum += 1;
    }
}
for(int k = 0; k < n; k++) {
    sum -= 1;
}
```]
In the first block there are two nested loops, i.e. the increment is executed $n dot n/2 = 1/2 n^2$ times. In the second loop, by contrast, there are $n$ executions.
In total this gives: $ 1/2 n^2 + n $ operations. Constant multiplicative factors play no role, and neither does the $n$, because for $n -> infinity$ the square grows much faster than the linear part. Overall we thus obtain here: $cal(O)(n^2)$
Occasionally these analyses also have to be carried out in the Abitur; the program is then usually not recursive and laid out relatively simply - an example can be found in the next chapter.
#pagebreak()
== Brute-Force Attacks
As usual, Abitur tasks do not go as deep into the matter as presented above; explicit time measurements of course do not have to be carried out, and as a rule no code has to be written for them either. The analysis of running times does play a role, though.
The tasks frequently have to do with *brute-force attacks*, i.e. we simply try to "guess" solutions systematically, e.g. when determining a password.
If the password has length $n$ and the character set consists of $z$ characters, there are $z^n$ possibilities that have to be tried out; the time complexity therefore lies in $cal(O)(z^n)$
So once again there is an *exponential* relationship, which is not crackable in practice without applying further tricks.
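To make the $z^n$ behavior concrete, here is a small self-contained Java sketch (an illustration added here, not part of any exam task): it enumerates all words of length $n$ over a given alphabet until it hits the secret, so in the worst case it needs exactly $z^n$ attempts.

#sc[```java
public static long crack(String secret, char[] alphabet) {
    int n = secret.length();
    int z = alphabet.length;
    long total = (long) Math.pow(z, n);   // z^n candidates in total
    for (long code = 0; code < total; code++) {
        // decode the counter into a candidate word over the alphabet
        StringBuilder candidate = new StringBuilder();
        long rest = code;
        for (int i = 0; i < n; i++) {
            candidate.append(alphabet[(int) (rest % z)]);
            rest /= z;
        }
        if (candidate.toString().equals(secret)) {
            return code + 1;              // number of attempts needed
        }
    }
    return -1;                            // unreachable for words over the alphabet
}
```]

For example, `crack("999", "0123456789".toCharArray())` needs the full $10^3 = 1000$ attempts - and every additional character multiplies that by $z$.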
Two of the extremely complicated tasks from the 2023 Abitur can be found below and illustrate the basic level:
#task[#image("images/abi23lz3.png")]
#task[#image("images/abi23lz1.png") \ #image("images/abi23lz2.png")]
#pagebreak()
#set page(
header: align(right)[
Limits of Computability - Course Notes 2inf1 \
]
)
= Limits of Computability
Besides the question of how fast or how efficient an algorithm is, there is of course another, even more fundamental question:
#align(center)[*Does an algorithm even exist that solves a given problem in reasonable time?*]
Naturally, nobody wants to work on a problem for years or decades only to then find out that it is not solvable at all!
If one poses the question in this generality, one is at first somewhat lost as to how to approach the whole thing.
It takes a large amount of theoretical groundwork to really understand the following results, but we nevertheless dip a small toe into the deepest waters of *theoretical computer science* and shed light on a few statements, e.g. the following:
#satz(customTitle: "The Church-Turing Thesis")[The class of Turing-computable functions coincides with the class of intuitively computable functions.]
Here, first of all, two names appear without which the foundations of computer science are unthinkable: #link("https://de.wikipedia.org/wiki/Alan_Turing")[Alan Turing] and #link("https://de.wikipedia.org/wiki/Alonzo_Church")[Alonzo Church].
What is also remarkable: it is not a mathematical theorem but a *thesis* that was put forward here. So there is no formal proof, "only" a claim furnished with many justifications, which nevertheless finds general acceptance.
But now to the content; first some clarifications of terms:
- *intuitively computable*: here, Turing himself imagined everything that a human can calculate with pen and paper.
- *Turing-computable*: everything that can be computed by a *Turing machine*. A Turing machine is essentially likewise the prototype of a computer, though not in a technical but in a conceptual sense. Put simply, a Turing machine consists of a tape onto which data can be written, and a head that can move left or right and can write symbols onto the tape (and also erase them).
A Turing machine is essentially a generalization of the finite automata that we already considered in the theory of formal languages (one can show, for example, that the words of arbitrary type-0 languages can be recognized by Turing machines).
Accordingly, one of the many *models of computation* is based on these Turing machines, i.e. answers to the question:
#align(center)[
"Was kann ich überhaupt berechnen"?
]
The statement of the Church-Turing thesis now says: everything that a human could even potentially calculate, a Turing machine can calculate as well.
That sounds wonderful at first; so we simply construct Turing machines merrily and try to find one that solves our problem!
Via many mathematical detours, many of the problems one would like to solve can be reduced to *word problems*, i.e. problems of the form:
#align(center)[*Is a given word in a language $L$?*]
We have already answered this kind of question, e.g. with the automata, but only for the case of the *regular languages*.
For more general languages (which describe more general problems) one can thus ask which "answer" a given Turing machine *T* outputs on input of a particular word $omega$. (The answer can only be: "is part of the language" or "is not part of the language".)
Turing went one step further; he asked himself whether the Turing machine even always halts and gives an answer at all on input of a word; spelled out:
#definition(customTitle: "The Halting Problem")[The general _halting problem_ reads as follows:
- Given: a Turing machine $T$ and an input word $omega$
- Sought: does $T$ terminate (i.e. "halt") on input of $omega$?]
If one knows that the machine halts, there is at least hope of being able to classify it and possibly use it for a computation. *Turing*, however, completely dashed this hope by proving the following theorem (formally and mathematically, at that!):
#satz[The general halting problem is *undecidable*]
Unfortunately, that means exactly what it sounds like: when a Turing machine starts computing, we can *in principle not* predict whether it will ever halt. Once it starts running, it might stop after 100 years, but perhaps also never; there is no way to determine this. This fundamental unpredictability has consequences for computer science similarly tragic as e.g. the #link("https://de.wikipedia.org/wiki/G%C3%B6delscher_Unvollst%C3%A4ndigkeitssatz")[Gödel incompleteness theorem] for mathematics.
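The core of Turing's proof is a contradiction argument that can be sketched in a few lines of Java-flavored pseudocode (an illustration added here; `halts` is purely hypothetical and, as the argument shows, cannot exist):

#sc[```java
// HYPOTHETICAL: suppose someone hands us a method that decides,
// for every program and input, whether the program halts.
static boolean halts(String program, String input) { /* magic */ }

// Then we could build the following troublemaker:
static void paradox(String program) {
    if (halts(program, program)) {
        while (true) { }  // loop forever
    }
    // otherwise: halt immediately
}

// Now feed paradox its own source code:
// - if halts(paradox, paradox) returns true, paradox loops forever,
//   so halts was wrong;
// - if it returns false, paradox halts immediately, so halts was
//   wrong again.
// Contradiction - no such method halts can exist.
```]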
The consequences of the halting problem are monumental and make the above question about the existence of an algorithm considerably harder still. Besides, for the human spirit of inquiry it is of course extremely unsatisfying to have proven that we *cannot* know something.
With this sad insight, the material of grade 12 ends.
#align(center)[#text(24pt)[*From here on, only the Abitur remains*]
#text(8pt)[(after the appendix)]
#text(24pt)[*Good luck*]]
#pagebreak()
= Appendix
<measurementMean>
```terminal
The measurement took 51900 nanoseconds
The measurement took 46100 nanoseconds
The measurement took 45900 nanoseconds
The measurement took 43300 nanoseconds
The measurement took 69900 nanoseconds
The measurement took 46000 nanoseconds
The measurement took 68300 nanoseconds
The measurement took 70300 nanoseconds
The measurement took 46000 nanoseconds
The measurement took 45400 nanoseconds
The measurement took 46100 nanoseconds
The measurement took 42400 nanoseconds
The measurement took 57000 nanoseconds
The measurement took 17200 nanoseconds
The measurement took 46100 nanoseconds
The measurement took 24400 nanoseconds
The measurement took 7000 nanoseconds
The measurement took 35100 nanoseconds
The measurement took 23400 nanoseconds
The measurement took 6700 nanoseconds
The measurement took 32000 nanoseconds
The measurement took 6700 nanoseconds
The measurement took 24600 nanoseconds
The measurement took 31400 nanoseconds
The measurement took 6800 nanoseconds
The measurement took 26800 nanoseconds
The measurement took 6900 nanoseconds
The measurement took 31300 nanoseconds
The measurement took 27200 nanoseconds
The measurement took 7000 nanoseconds
The measurement took 44100 nanoseconds
The measurement took 10200 nanoseconds
The measurement took 34500 nanoseconds
The measurement took 6700 nanoseconds
The measurement took 22700 nanoseconds
The measurement took 6700 nanoseconds
The measurement took 21700 nanoseconds
The measurement took 6600 nanoseconds
The measurement took 20700 nanoseconds
The measurement took 6700 nanoseconds
The measurement took 23100 nanoseconds
The measurement took 6700 nanoseconds
The measurement took 1400 nanoseconds
The measurement took 1300 nanoseconds
The measurement took 1300 nanoseconds
```
#pagebreak()
<results>
== Interesting runtime behavior
The starting point was Java's curious behavior:
#align(center)[#image("images/fibiter.png", width: 70%)]
The working theory at this point is that Java's behavior changes: at first the JVM interprets; then - once it slowly becomes clear that this is a compute-intensive method - the compiler kicks in. Apparently in several stages.
#pagebreak()
To have comparison values from other languages, a comparable implementation in $C$ came first. Since compiling $C$ under Windows is not much fun, a WSL Ubuntu was used for this. This implementation yielded the following results:
#align(center)[#image("images/C.png", width: 70%)]
Here there are no start-up difficulties; however, here too there is an interesting phenomenon of essentially two straight lines. This phenomenon becomes weaker, though, if one lets the $C$ compiler optimize; it then also becomes noticeably faster:
#align(center)[#image("images/CO3.png", width: 70%)]
#pagebreak()
To restore comparability, the Java code was now also executed in WSL; the result can be seen here:
#align(center)[#image("images/JavaWSLCO3.png", width: 70%)]
The $C$ compiler and the Java compiler thus essentially agree on how this problem is to be optimized and deliver the same performance at larger values; at smaller values the discrepancy of the only slowly ramping-up Java compiler persists.
#align(center)[#image("images/JavaWSLCO3400.png", width: 70%)]
To check the hypothesis further and get Java to "warm up" earlier, the number of runs for the averaging was now increased. And indeed, Java here (in contrast to above) delivers reasonable values and shows itself willing to optimize properly earlier.
#align(center)[#image("images/Java1000.png", width: 70%)]
And in the decisive interval:
#align(center)[#image("images/Java1000smol.png", width: 70%)]
To conclude, something amusing: out of curiosity, the results of equivalent code in Python (at first not in WSL):
#align(center)[#image("images/paisn.png", width: 70%)]
Apparently Python's time-measurement module has problems measuring in the nanosecond range under Windows.
Striking in any case are the slowness itself (not necessarily surprising) and the seemingly non-linear relationship.
#pagebreak()
To restore comparability, one last run in WSL - and behold:
#align(center)[#image("images/paisnWSL.png", width: 70%)]
Reasonable values! Python's behavior looks quadratic rather than linear - and moreover remains several orders of magnitude larger.
A possible explanation for the nearly quadratic running time is the storing of the results in a string and the gluing together of these strings into one output string that is written to the file.
The string likewise grows with $n$, and this approximately yields a quadratic running time - Java and C handle this better internally and apparently do not build the string explicitly.
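This explanation can be made plausible with a small Java experiment (an addition of my own, not part of the original measurement code): building the result via repeated `+=` copies all previous characters on every append, so $n$ appends cost roughly $1 + 2 + ... + n$ character copies, i.e. $cal(O)(n^2)$, while a `StringBuilder` stays (amortized) linear.

#sc[```java
// builds a string of n 'x' characters via repeated += (quadratic copying)
public static String slowBuild(int n) {
    String s = "";
    for (int i = 0; i < n; i++) {
        s += "x"; // copies all previous characters again
    }
    return s;
}

// same result via StringBuilder (amortized linear appends)
public static String fastBuild(int n) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < n; i++) {
        sb.append('x');
    }
    return sb.toString();
}
```]

Timing both variants with the measurement code from the first chapter makes the difference directly visible for large $n$.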
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/text/numbers_06.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test the `repr` function with floats.
#repr(12.0) \
#repr(3.14) \
#repr(1234567890.0) \
#repr(0123456789.0) \
#repr(0.0) \
#repr(-0.0) \
#repr(-1.0) \
#repr(-9876543210.0) \
#repr(-0987654321.0) \
#repr(-3.14) \
#repr(4.0 - 8.0)
|
https://github.com/ivaquero/scibook | https://raw.githubusercontent.com/ivaquero/scibook/main/README.md | markdown | # SciBook
A simple book template for scientific books and manuals.
## Usage
### Clone Official Repository
To compile, please refer to the guide on [typst-packages](https://github.com/typst/packages) and clone this repository into the package path for your platform:

- Linux:
  - `$XDG_DATA_HOME/typst`
  - `~/.local/share/typst`
- macOS: `~/Library/Application Support/typst`
- Windows: `%APPDATA%/typst`
### Import the Template
Clone the [scibook](https://github.com/ivaquero/scibook) repository into the path above, then import it in your document:
```typst
#import "@local/scibook:0.1.0": *
```
|
|
https://github.com/jbirnick/typst-rich-counters | https://raw.githubusercontent.com/jbirnick/typst-rich-counters/main/README.md | markdown | MIT License | > [!NOTE]
> This is a [Typst](https://typst.app/) package. Click [here](https://typst.app/universe/package/rich-counters/) to find it in the Typst Universe.
# `rich-counters`
This package allows you to have **counters which can inherit from other counters**.
Concretely, it implements `rich-counter`, which is a counter that can _inherit_ one or more levels from another counter.
The interface is pretty much the same as the [usual counter](https://typst.app/docs/reference/introspection/counter/).
It provides a `display()`, `get()`, `final()`, `at()`, and a `step()` method.
An `update()` method will be implemented soon.
## Simple typical Showcase
In the following example, `mycounter` inherits the first level from `heading` (but not deeper levels).
```typ
#import "@preview/rich-counters:0.2.1": *
#set heading(numbering: "1.1")
#let mycounter = rich-counter(identifier: "mycounter", inherited_levels: 1)
// DOCUMENT
Displaying `mycounter` here: #context (mycounter.display)()
= First level heading
Displaying `mycounter` here: #context (mycounter.display)()
Stepping `mycounter` here. #(mycounter.step)()
Displaying `mycounter` here: #context (mycounter.display)()
= Another first level heading
Displaying `mycounter` here: #context (mycounter.display)()
Stepping `mycounter` here. #(mycounter.step)()
Displaying `mycounter` here: #context (mycounter.display)()
== Second level heading
Displaying `mycounter` here: #context (mycounter.display)()
Stepping `mycounter` here. #(mycounter.step)()
Displaying `mycounter` here: #context (mycounter.display)()
= Aaand another first level heading
Displaying `mycounter` here: #context (mycounter.display)()
Stepping `mycounter` here. #(mycounter.step)()
Displaying `mycounter` here: #context (mycounter.display)()
```

## Construction of a `rich-counter`
To create a `rich-counter`, you have to call the `rich-counter(...)` function.
It accepts three arguments:
- `identifier` (required)
Must be a unique `string` which identifies the counter.
- `inherited_levels`
Specifies how many levels should be inherited from the parent counter.
- `inherited_from` (Default: `heading`)
Specifies the parent counter. Can be a `rich-counter` or any key that is accepted by the [`counter(...)` constructor](https://typst.app/docs/reference/introspection/counter#constructor), such as a `label`, a `selector`, a `location`, or a `function` like `heading`.
If not specified, defaults to `heading` (and hence it will inherit from the counter of the headings).
If it's a `rich-counter` and `inherited_levels` is _not_ specified, then `inherited_levels` will default to one level higher than the given `rich-counter`.
For example, the following creates a `rich-counter` `foo` which inherits one level from the headings, and then another `rich-counter` `bar` which inherits two levels (implicitly) from `foo`.
```typ
#import "@preview/rich-counters:0.2.1": *
#let foo = rich-counter(identifier: "foo", inherited_levels: 1)
#let bar = rich-counter(identifier: "bar", inherited_from: foo)
```
## Usage of a `rich-counter`
- `display(numbering)` (needs `context`)
Displays the current value of the counter with the given numbering style. Giving the numbering style is optional, with default value `"1.1"`.
- `get()` (needs `context`)
Returns the current value of the counter (as an `array`).
- `final()` (needs `context`)
Returns the value of the counter at the end of the document.
- `at(loc)` (needs `context`)
Returns the value of the counter at `loc`, where `loc` can be a `label`, `selector`, `location`, or `function`.
- `step(depth: 1)`
Steps the counter at the specified `depth` (default: `1`).
That is, it steps the `rich-counter` at level `inherited_levels + depth`.
  **Due to a Typst limitation, you have to wrap the method access in parentheses before the argument list. (See below.)**
For example, the following steps `mycounter` (at depth 1) and then displays it.
```typ
#import "@preview/rich-counters:0.2.1": *
#let mycounter = rich-counter(...)
#(mycounter.step)()
#context (mycounter.display)("1.1")
```
## Limitations
Due to current Typst limitations, there is no way to detect manual updates or steps of Typst-native counters, like `counter(heading).update(...)` or `counter(heading).step(...)`.
Only occurrences of actual `heading`s can be detected.
So make sure that after you call e.g. `counter(heading).update(...)`, you place a heading directly after it, before you call any `rich-counter`s.
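
A minimal sketch of that advice (heading text is illustrative):

```typ
#counter(heading).update(3)
= A heading placed directly after the update
// Only this heading is observed by rich-counters, not the update itself,
// so any rich-counter displayed below now sees the updated value.
```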
## Roadmap
- implement `update()`
- use Typst custom types as soon as they become available
- adopt native Typst implementation of dependent counters as soon it becomes available
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/cetz-plot/0.1.0/src/plot/annotation.typ | typst | Apache License 2.0 | #import "/src/cetz.typ"
#import cetz: draw, process, util, matrix
#import "util.typ"
#import "sample.typ"
/// Add an annotation to the plot
///
/// An annotation is a sub-canvas that uses the plots coordinates specified
/// by its x and y axis.
///
/// #example(```
/// plot.plot(size: (2,2), x-tick-step: none, y-tick-step: none, {
/// plot.add(domain: (0, 2*calc.pi), calc.sin)
/// plot.annotate({
/// rect((0, -1), (calc.pi, 1), fill: rgb(50,50,200,50))
/// content((calc.pi, 0), [Here])
/// })
/// })
/// ```)
///
/// Bounds calculation is done naively, therefore fixed-size content _can_ grow
/// out of the plot. You can adjust the padding manually to compensate for that.
/// Solving the correct bounds for fixed-size elements might be added as a
/// feature in the future.
///
/// - body (drawable): Elements to draw
/// - axes (axes): X and Y axis names
/// - resize (bool): If true, the plots axes get adjusted to contain the annotation
/// - padding (none,number,dictionary): Annotation padding that is used for axis
/// adjustment
/// - background (bool): If true, the annotation is drawn behind all plots, in the background.
/// If false, the annotation is drawn above all plots.
#let annotate(body, axes: ("x", "y"), resize: true, padding: none, background: false) = {
((
type: "annotation",
body: {
draw.set-style(mark: (transform-shape: false))
body;
},
axes: axes,
resize: resize,
background: background,
padding: cetz.util.as-padding-dict(padding),
),)
}
// Returns the adjusted axes for the annotation object
//
// -> array Tuple of x and y axis
#let calc-annotation-domain(ctx, x, y, annotation) = {
if not annotation.resize {
return (x, y)
}
ctx.transform = matrix.ident()
let (ctx: ctx, bounds: bounds, drawables: _) = process.many(ctx, annotation.body)
if bounds == none {
return (x, y)
}
let (x-min, y-min, ..) = bounds.low
let (x-max, y-max, ..) = bounds.high
x-min -= annotation.padding.left
x-max += annotation.padding.right
y-min -= annotation.padding.bottom
y-max += annotation.padding.top
x.min = calc.min(x.min, x-min)
x.max = calc.max(x.max, x-max)
y.min = calc.min(y.min, y-min)
y.max = calc.max(y.max, y-max)
return (x, y)
}
|
https://github.com/sitandr/typst-examples-book | https://raw.githubusercontent.com/sitandr/typst-examples-book/main/src/basics/must_know/tables.md | markdown | MIT License | # Tables and grids
While tables are not strictly necessary to know if you don't plan to use them in your documents, grids can be very useful for _document layout_. We will use both of them later in this book.
Let's not bother with copying examples from official documentation. Just make sure to skim through it, okay?
## Basic snippets
### Spreading
Spreading operators (see [there](../scripting/arguments.md)) may be especially useful for the tables:
```typ
#set text(size: 9pt)
#let yield_cells(n) = {
for i in range(0, n + 1) {
for j in range(0, n + 1) {
let product = if i * j != 0 {
// math is used for the better look
if j <= i { $#{ j * i }$ }
else {
// upper part of the table
text(gray.darken(50%), str(i * j))
}
} else {
if i == j {
// the top right corner
$times$
} else {
        // one of them is zero, we are at the top/left
$#{i + j}$
}
}
// this is an array, for loops merge them together
// into one large array of cells
(
table.cell(
fill: if i == j and j == 0 { orange } // top right corner
else if i == j { yellow } // the diagonal
else if i * j == 0 { blue.lighten(50%) }, // multipliers
product,),
)
}
}
}
#let n = 10
#table(
columns: (0.6cm,) * (n + 1), rows: (0.6cm,) * (n + 1), align: center + horizon, inset: 3pt, ..yield_cells(n),
)
```
### Highlighting table row
```typ
#table(
columns: 2,
fill: (x, y) => if y == 2 { highlight.fill },
[A], [B],
[C], [D],
[E], [F],
[G], [H],
)
```
For individual cells, use
```typ
#table(
columns: 2,
[A], [B],
table.cell(fill: yellow)[C], table.cell(fill: yellow)[D],
[E], [F],
[G], [H],
)
```
### Splitting tables
Tables are split between pages automatically.
```typ
#set page(height: 8em)
#(
table(
columns: 5,
[Aligner], [publication], [Indexing], [Pairwise alignment], [Max. read length (bp)],
[BWA], [2009], [BWT-FM], [Semi-Global], [125],
[Bowtie], [2009], [BWT-FM], [HD], [76],
[CloudBurst], [2009], [Hashing], [Landau-Vishkin], [36],
[GNUMAP], [2009], [Hashing], [NW], [36]
)
)
```
However, if you want to make it breakable inside another element, you'll have to make that element breakable too:
```typ
#set page(height: 8em)
// Without this, the table fails to split upon several pages
#show figure: set block(breakable: true)
#figure(
table(
columns: 5,
[Aligner], [publication], [Indexing], [Pairwise alignment], [Max. read length (bp)],
[BWA], [2009], [BWT-FM], [Semi-Global], [125],
[Bowtie], [2009], [BWT-FM], [HD], [76],
[CloudBurst], [2009], [Hashing], [Landau-Vishkin], [36],
[GNUMAP], [2009], [Hashing], [NW], [36]
)
)
``` |
https://github.com/typst/templates | https://raw.githubusercontent.com/typst/templates/main/badformer/README.md | markdown | MIT No Attribution | # badformer
Reach the goal in this retro-inspired wireframing platformer. Play in 3
dimensions and compete for the lowest number of steps to win!
This small game is playable in the Typst editor and best enjoyed with the web
app or `typst watch`. It was first released for the 24 Days to Christmas
campaign in winter of 2023.
## Usage
You can use this template in the Typst web app by clicking "Start from template"
on the dashboard and searching for `badformer`.
Alternatively, you can use the CLI to kick this project off using the command
```
typst init @preview/badformer
```
Typst will create a new directory with all the files needed to get you started.
Move with WASD and jump with space. You can also display a minimap by pressing
E.
## Configuration
This template exports the `game` function, which accepts a positional argument
for the game input.
The template will initialize your package with a sample call to the `game`
function in a show rule. If you want to change an existing project to use this
template, you can add a show rule like this at the top of your file:
```typ
#import "@preview/badformer:0.1.0": game
#show: game(read("main.typ"))
// Move with WASD and jump with space.
```
|
https://github.com/andymeneely/examify.typst | https://raw.githubusercontent.com/andymeneely/examify.typst/master/examples/full/questions/lorem_ipsum.typ | typst | MIT License | #import "@local/exam:0.1.0": *
#question[
#points(10)
#lorem(15)
+ Incorrect
+ Incorrect
+ #correct[Correct]
] |
https://github.com/yongweiy/cv | https://raw.githubusercontent.com/yongweiy/cv/master/publications.typ | typst | // Imports
#import "@preview/brilliant-cv:2.0.2": cvSection, cvPublication
#let metadata = toml("./metadata.toml")
#let cvSection = cvSection.with(metadata: metadata, highlighted: false)
#cvSection("Publications")
#cvPublication(
bib: bibliography("./src/publications.bib"),
refStyle: "association-for-computing-machinery",
refFull: true,
)
|
|
https://github.com/maucejo/book_template | https://raw.githubusercontent.com/maucejo/book_template/main/template/chapters/conclusion.typ | typst | MIT License | #import "../../src/book.typ": *
#chapter("Conclusion et perspectives", toc: false, numbered: false)[
#lorem(100)
] |
https://github.com/RandomcodeDev/FalseKing-Design | https://raw.githubusercontent.com/RandomcodeDev/FalseKing-Design/main/game/audio.typ | typst | = Audio
== Music
There will be three types of track for each region: exploration music, combat music, and boss music. There might be a few varieties of each, and they'll be cycled between. This system isn't too dissimilar to how Faster Than Light handles music.
== Sound effects
There will be sounds for different attacks and actions, and sound cues when certain things happen, such as discovering a new area or defeating a combat encounter (much like Breath of the Wild).
|
|
https://github.com/soul667/typst | https://raw.githubusercontent.com/soul667/typst/main/PPT/typst-slides-fudan/themes/polylux/book/src/themes/gallery/bipartite.md | markdown | # Bipartite theme

This theme is inspired by
[Modern Annual Report](https://slidesgo.com/theme/modern-annual-report).
and a bit more opinionated.
It features a dominant partition of space into a bright and a dark side and is
rather on the "artsy" than functional side.
Use it via
```typ
#import "@preview/polylux:0.2.0": *
#import themes.bipartite: *
#show: bipartite-theme.with(...)
```
The `bipartite` theme cannot display content that exceeds one page, in general.
Note that, against the convention, `bipartite` offers no `#slide` function.
Use either `#west-slide` or #`east-slide` for regular content.
Also, this theme features no sections or slide numbers.
## Options for initialisation
`bipartite-theme` accepts the following optional keyword arguments:
- `aspect-ratio`: the aspect ratio of the slides, either `"16-9"` or `"4-3"`,
default is `"16-9"`
## Slide functions
`bipartite` provides the following custom slide functions:
```typ
#title-slide(...)
```
Displays presentation title on a large bright portion above the subtitle, the
author and the date.
If a date was given, separates it from the author by a central dot.
Accepts the following keyword arguments:
- `title`: title of the presentation, default: `[]`
- `subtitle`: subtitle of the presentation, default: `none`
- `author`: author of presentation, arbitrary content, default: `[]`
- `date`: date of the presentation, default: `none`
Does not accept additional content.
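
For example (all values illustrative):

```typ
#title-slide(
  title: [Modern Annual Report],
  subtitle: [A look back at the year],
  author: [Jane Doe],
  date: [April 2024],
)
```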
---
```typ
#west-slide(title: ...)[
...
]
```
Splits the slide into a larger bright section on the right where the content
goes and a smaller, darker, left section where the title is displayed.
Everything is left aligned.
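
For example:

```typ
#west-slide(title: [Motivation])[
  - a first point, left aligned on the bright side
  - a second point
]
```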
---
```typ
#east-slide(title: ...)[
...
]
```
Same as `#west-slide` but with the title and the content switching places, and
everything being right aligned.
---
```typ
#split-slide[
...
][
...
]
```
Splits the slide into two equal sections on the left and the right that both
contain content (`#split-slide` requires exactly two content blocks to be passed).
The left half is dark text on a bright background and right aligned, the right
half is bright text on dark background and left aligned.
Does not display a slide title.
## Example code
The image at the top is created by the following code:
```typ
#import "@preview/polylux:0.2.0": *
{{#include bipartite.typ:3:}}
```
|
|
https://github.com/mcanouil/generate-quarto-invoices | https://raw.githubusercontent.com/mcanouil/generate-quarto-invoices/main/_extensions/mcanouil/invoice/typst-template.typ | typst | MIT License | #let parse-date(date) = {
let date = date.replace("\\", "")
let date = str(date).split("-").map(int)
datetime(year: date.at(0), month: date.at(1), day: date.at(2))
}
#let format-date(date) = {
let day = date.day()
let ord = super(if 10 < day and day < 20 {
"th"
} else if calc.rem(day, 10) == 1 {
"st"
} else if calc.rem(day, 10) == 2 {
"nd"
} else if calc.rem(day, 10) == 3 {
"rd"
} else {
"th"
})
[the #day#ord of #date.display("[month repr:long]"), #date.year()]
}
#let count-days(x, y) = {
let duration = y - x
str(duration.days())
}
#let invoice(
logo: none,
title: none,
description: none,
sender: none,
recipient: none,
invoice: none,
bank: none,
fee: 2.28,
penalty: "€40",
paper: "a4",
margin: (x: 2.5cm, y: 2.5cm),
lang: "en_UK",
font: ("Alegreya Sans", "Alegreya Sans SC", "Alegreya Sans", "Alegreya Sans SC"),
fontsize: 12pt,
body
) = {
let issued = parse-date(invoice.at("issued"))
  // `let` inside an `if` block is block-scoped in Typst, so bind at the
  // function-body level to let the invoice values override the parameter defaults.
  let penalty = if invoice != none and "penalty" in invoice {
    invoice.at("penalty", default: penalty)
  } else {
    penalty
  }
  let fee = if invoice != none and "fee" in invoice {
    invoice.at("fee", default: fee)
  } else {
    fee
  }
set document(
title: "Invoice " + invoice.at("number").replace("\\", "") + " - " + recipient.at("name").replace("\\", ""),
author: sender.at("name").replace("\\", ""),
date: issued
)
set page(
paper: paper,
margin: margin,
)
set par(justify: true)
set text(
lang: lang,
font: font,
size: fontsize,
)
grid(
columns: (50%, 50%),
align(left, {
heading(level: 2, sender.at("name").replace("\\", ""))
if "address" in sender and sender != none {
v(fontsize * 0.5)
emph(sender.at("address").at("street").replace("\\", ""))
linebreak()
sender.at("address").at("zip").replace("\\", "") + " " + sender.at("address").at("city").replace("\\", "")
if "state" in sender.at("address") and not sender.at("address").at("state") in (none, "") {
", " + sender.at("address").at("state").replace("\\", "")
} else {
""
}
linebreak()
sender.at("address").at("country").replace("\\", "")
}
v(fontsize * 0.1)
if "email" in sender and sender != none {
link("mailto:" + sender.at("email").replace("\\", ""))
} else {
hide("a")
}
}),
align(right, {
heading(level: 2, recipient.at("name").replace("\\", ""))
if "address" in recipient and recipient != none {
v(fontsize * 0.5)
emph(recipient.at("address").at("street").replace("\\", ""))
linebreak()
recipient.at("address").at("zip").replace("\\", "") + " " + recipient.at("address").at("city").replace("\\", "")
if "state" in recipient.at("address") and not recipient.at("address").at("state") in (none, "") {
", " + recipient.at("address").at("state").replace("\\", "")
} else {
""
}
linebreak()
recipient.at("address").at("country").replace("\\", "")
}
})
)
v(fontsize * 1)
grid(
columns: (50%, 50%),
align(left, {
if "registration" in sender and sender != none and sender.at("registration") != "" {
"Registration number: " + sender.at("registration").replace("\\", "")
linebreak()
} else {
hide("a")
}
if "vat" in sender and sender != none and sender.at("vat") != "" {
"VAT number: " + sender.at("vat").replace("\\", "")
} else {
hide("a")
}
v(fontsize * 1)
if "number" in invoice and invoice != none and invoice.at("number") != "" {
"Invoice number: " + invoice.at("number").replace("\\", "")
linebreak()
} else {
hide("a")
}
if "issued" in invoice and invoice != none {
"Issued on: " + invoice.at("issued").replace("\\", "")
linebreak()
} else {
hide("a")
}
if "due" in invoice and invoice != none {
"Payment due date: " + invoice.at("due").replace("\\", "")
} else {
hide("a")
}
}),
align(center, {
if logo != "none" and logo != none {
image(logo, width: 3cm)
} else {
hide("a")
}
})
)
align(horizon, {
if title != none {
heading(level: 1, title.replace("\\", ""))
if description != none {
emph(description.replace("\\", ""))
}
}
body
align(right, if "exempted" in sender and sender != none and sender.exempted != "none" and sender.exempted != none {
text(luma(100), emph(sender.at("exempted").replace("\\", "")))
} else {
hide("a")
})
})
align(bottom, {
if "bic" in bank and "iban" in bank and bank != none {
heading(level: 3, "Payment information")
v(fontsize * 0.5)
"BIC: " + bank.at("bic").replace("\\", "")
linebreak()
"IBAN: " + bank.at("iban").replace("\\", "")
linebreak()
"Reference: " + strong(invoice.at("reference").replace("\\", ""))
linebreak()
text(luma(100), emph("To use as label on your bank transfer to identify the transaction."))
linebreak()
} else {
hide("a")
}
v(fontsize * 2)
text(luma(100),
emph(
sender.at("name").replace("\\", "")
+ " sent you this invoice on "
+ format-date(issued)
+ ". The invoice must be paid under "
+ count-days(issued, parse-date(invoice.at("due")))
+ " day(s), otherwise you will have to pay a late fee of "
+ str(fee)
+ " % and a "
+ str(penalty)
+ " penalty for recovery costs. "
+ "No discount will be granted for early settlement."
)
)
})
}
|
https://github.com/bchaber/typst-template | https://raw.githubusercontent.com/bchaber/typst-template/main/documents/ncn.typ | typst | #let ncn(
title: none,
bibliography-file: none,
body
) = {
set text(font: "Times New Roman", size: 11pt)
set par(justify: true, leading: 13pt)
set page(margin: (x: 2cm, y: 1.5cm))
if title != none {
v(3pt, weak: true)
align(center, text(18pt, title))
v(8.35mm, weak: true)
}
body
if bibliography-file != none {
show bibliography: set text(8pt)
bibliography(bibliography-file, title: text(10pt)[References], style: "ieee")
}
}
|
|
https://github.com/protohaven/printed_materials | https://raw.githubusercontent.com/protohaven/printed_materials/main/common-tools/mig_welder.typ | typst |
#import "../environment/env-protohaven-class_handouts.typ": *
= MIG Welder
(Overview paragraph(s))
== Notes
=== Safety
=== Common Hazards
=== Care
=== Use
=== Consumables
== Parts of the TOOL
===
== Basic Operation
=== Setting Up
+ Power the MIG Welder on.
+ Turn gas tank valve knob 2-3 turns counter-clockwise.\
_This will enable the flow of gas to the pressure valve._
+ Turn pressure valve a few turns clockwise until pressure is between 30-40.
+ Adjust wire feed speed and voltage according to chart recommendations for metal thickness.
+ Attach grounding cord to table or piece to be welded
=== Workholding
There are a variety of tools to hold the workpiece firmly to the welding table:
- Clamps
- Magnets
=== Making a Weld
=== Cleaning Up
+ Turn the gas tank knob clockwise until gas flow stops and the tank has a hand-tight seal.
+ Turn the pressure valve a few turns counter-clockwise to release pressure.
+ Turn both wire feed and voltage knobs to their lowest setting.
+ Depress trigger on welding wand to release gas pressure until gas pressure valve indicates pressure has dropped to less than 10.
+ Turn the MIG welder off.
+ Wind up cables for storage.
+ Stow the welding wand in the holder on the side of the NIG welder.
|
|
https://github.com/saecki/zig-grep | https://raw.githubusercontent.com/saecki/zig-grep/main/README.md | markdown | # zig-grep
## Build & Run
- Install the [zig toolchain](https://ziglang.org/download)
- Install the [rust toolchain](https://rustup.rs/)
- Initialize the rust regex (`rure`) submodule `git submodule init rure`
- Build the `rure` crate `cargo build --release --manifest-path=rure/regex-capi/Cargo.toml`
- Build and run the program `zig build -Doptimize=ReleaseFast run -- -c "rure([a-zA-Z_]*)" src`
- The executable should now be at `zig-out/bin/zig-grep`
## Build Paper
- Install [typst](https://typst.app)
- Build the paper `typst compile paper/paper.typ`
|