repo: https://github.com/rxt1077/it610
file: https://raw.githubusercontent.com/rxt1077/it610/master/markup/slides/automation.typ
language: typst

#import "/templates/slides.typ": *
#show: university-theme.with(
short-title: [Automation],
)
#title-slide(
title: [Automate the Boring Stuff],
)
#alternate(
title: [Python],
image: licensed-image(
file: "/images/python-logo.svg",
license: "fairuse",
title: [Python SVG Vector],
url: "https://www.svgrepo.com/svg/376344/python",
),
text: [
- Huge library base
- Easy to get started
- Binaries for everything
- Easy(ish) to maintain
- #link("https://github.com/rxt1077/it610/blob/master/markup/typst_make.py")[These slides are automated with Python]
- “I’m not really a coder”
],
)
#alternate(
title: [BASH Scripts],
image: licensed-image(
file: "/images/bash-logo.svg",
license: "fairuse",
title: [full_color_dark.svg],
url: "https://bashlogo.com/img/logo/svg/full_colored_dark.svg",
),
text: [
- They get the job done
- If you can type on the command line, you can understand it
- Consider migrating if they get huge
- Any Linux system has a BASH interpreter
- #link("https://medium.com/@viswa08/bash-script-weird-syntaxes-61b585ff1fb7")[Syntax can be pretty wonky]
],
)
#alternate(
title: [Cron / At],
image: licensed-image(
file: "/images/crontab.png",
license: "CC BY 2.0",
title: [Cron Job],
url: "https://www.flickr.com/photos/91795203@N02/16199272841",
author: [xmodulo],
author-url: "https://www.flickr.com/photos/91795203@N02",
),
text: [
- at - run once _at_ a later time
- cron - run repeatedly according to a schedule
- Every user (even root) can have a crontab that runs with their permissions
- man cron, crontab -e
- You better hope the cron daemon keeps running
- If your container has a cronjob, you're doing it wrong
]
)
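// Hedged sketch: a crontab entry one might demo with this slide (the path and
// schedule are hypothetical). The fields are minute, hour, day-of-month, month,
// day-of-week, then the command:
//   0 2 * * * /usr/local/bin/backup.sh
// runs backup.sh every day at 02:00; `crontab -e` edits the current user's table.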
#alternate(
title: [Push vs. Pull],
image: licensed-image(
file: "/images/push-pull.jpg",
license: "CC BY 2.0",
title: [Push and Pull Humour],
url: "https://www.flickr.com/photos/89165847@N00/49319453518",
author: [mikecogh],
author-url: "https://www.flickr.com/photos/89165847@N00",
),
text: [
- Hybrid solutions are probably best (pull, but push when needed)
- Some automation systems periodically check in (Puppet, Chef)
- #link("https://clusterlabs.org")[High availability software] can be used to check system status
]
)
#focus-slide()[What should I automate?]
#matrix-slide(columns: 3, rows: 3,
[Provisioning],
[Deployment (git)],
[Backups (rsync)],
[Configuration],
[Security],
[Orchestration],
[User notification],
[Testing],
[Coffee?],
)
#alternate(
title: [Ansible],
image: licensed-image(
file: "/images/ansible-logo.svg",
license: "fairuse",
title: [Ansible logo.svg],
url: "https://upload.wikimedia.org/wikipedia/commons/2/24/Ansible_logo.svg",
),
text: [
- Python-based, uses SSH
- Push (can also do pull)
- YAML (watch your spaces)
- Playbooks
- Roles
- Lots of Modules
- Declarative: define what you want to end up with
- Cloud-ready
- "I don't need Ansible, I'll just put all that in the Dockerfile"
],
)
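// Hedged sketch: the shape of a minimal Ansible playbook (the host group and
// task names are hypothetical):
//   - hosts: webservers
//     tasks:
//       - name: Ensure nginx is installed
//         ansible.builtin.package:
//           name: nginx
//           state: present
// Run with `ansible-playbook site.yml`; Ansible converges to the declared state.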
#alternate(
title: [Automation Mindset],
image: licensed-image(
file: "/images/mindset.jpg",
license: "CC BY 2.0",
title: [Mindset],
url: "https://www.flickr.com/photos/128573122@N05/18658685910",
author: [davis.steve32],
author-url: "https://www.flickr.com/photos/128573122@N05",
),
text: [
- Change the way you think, tools will always change
- What do I spend my time doing?
- What provides the best experience for your users?
- Can I automate and monetize something that others are doing by hand?
- #link("https://en.wikipedia.org/wiki/Microsoft_System_Center_Configuration_Manager")[Windows Options]
]
)

repo: https://github.com/Zuttergutao/Typstdocs-Zh-CN-
file: https://raw.githubusercontent.com/Zuttergutao/Typstdocs-Zh-CN-/main/Classified/Symbol.typ
language: typst

#[
#import "symm.typ":*
#set page(
paper:"a4",
margin: (
top:2mm,
bottom:2mm,
left:2mm,
right:2mm
),
numbering: none,
header: none,
)
#set text(size:18pt)
#set box(baseline:40%,fill:luma(230),height: 1.6em,inset:0.5em,outset:2pt,radius:5pt)
// Recursively walk the nested `symm` dictionary and collect one table cell per
// symbol: the symbol in a box, with its dotted key path underneath. A leaf key
// of "o" marks the default variant, whose label omits that final segment.
#let collect(dict, path) = {
  let cells = ()
  for (key, value) in dict {
    if type(value) == "dictionary" {
      cells += collect(value, path + (key,))
    } else {
      let segments = if key == "o" and path.len() > 0 { path } else { path + (key,) }
      cells.push(align(center + horizon)[#box[#value] \ #text(size: 10pt)[#segments.join(".")]])
    }
  }
  cells
}
#let arr = collect(symm, ())
#table(
columns: (1fr,1fr,1fr,1fr,1fr,1fr),
stroke:gray+1.0pt,
align:horizon,
..arr
)
]

repo: https://github.com/TOD-theses/paper-T-RACE
file: https://raw.githubusercontent.com/TOD-theses/paper-T-RACE/main/main.typ
language: typst

#import "@preview/definitely-not-tuw-thesis:0.1.0": flex-caption
#import "@preview/lovelace:0.3.0": *
#import "utils.typ": *
/*
Checklist:
- [ ] past vs present (in particular experiments)
- [ ] reference after or before dot?
*/
= Introduction
Ethereum is a blockchain that keeps track of a world state and updates this state by executing transactions. The transactions can execute so-called smart contracts, which are programs that are stored on the blockchain. As these programs are nearly Turing-complete, they can contain vulnerabilities and be exploited.
This thesis focuses on transaction order dependence (TOD), which is a prerequisite for a kind of attack called front-running. TOD means that the state changes performed by transactions depend on the order in which the transactions are executed. In a front-running attack, an attacker sees that someone is about to perform a transaction and then quickly inserts a transaction before it. Because of TOD, executing the attacker's transaction before the victim's transaction yields different state changes than executing the victim's transaction first.
This work proposes a method that takes a pair of transactions and simulates them in both orders. By comparing the behavior across the two orders, we can determine whether the pair is TOD and whether it exhibits characteristics of a front-running attack.
We use the state changes of transactions to calculate the world states used for transaction execution. This removes the need to execute intermediary transactions that were originally executed between the two transactions we analyze. Instead, we can use their state changes to maintain the effect they had on the second transaction, while updating the world states according to the different orders.
Moreover, we search the blockchain history for transaction pairs that are potentially TOD. We match transactions that access and modify the same state and define several filters to remove irrelevant transaction pairs. On these transaction pairs, we use our simulation method to check if they are TOD and if they have characteristics of a front-running attack. We check for the characteristics of the ERC-20 multiple withdrawal attack @rahimian_resolving_2019, the TOD properties implemented by Securify @tsankov_securify_2018, and financial gains and losses @zhang_combatting_2023.
We show that our concepts can be implemented with endpoints exposed by an archive node. We neither require custom modifications nor local access to an archive node.
Overall, our main contributions are:
- A method to simulate a pair of transactions in two different orders.
- A precise definition of TOD in the context of blockchain transaction analysis.
- An evaluation of an approximation for TOD.
- A compilation of EVM instructions that can cause TOD.
- A method to mine and filter transaction pairs that are potentially TOD.
== Related works
The studies by #cite(<zhang_combatting_2023>, form: "prose") and #cite(<torres_frontrunner_2021>, form: "prose") both detect and analyze front-running attacks that occurred on the Ethereum blockchain. We discuss potential inaccuracies in their simulation approaches. Contrary to these works, our study focuses on TOD, a prerequisite of front-running. However, we also implement the attack definition by #cite(<zhang_combatting_2023>, form: "prose") to compare our results with theirs.
#cite(<daian_flash_2020>, form: "prose") detect a specific kind of front-running attack by observing transaction executions. They measure so-called arbitrage opportunities, where a single transaction can make net revenue. While this is TOD, as only the first transaction that uses an arbitrage opportunity makes revenue, they do not need to simulate the different transaction orders for their analysis. Similarly, #cite(<wang_impact_2022>, form: "prose") also study a type of front-running attack without simulating different transaction orders.
#cite(<perez_smart_2021>, form: "prose") explicitly analyze transactions for TOD. They do so by recording for each transaction which storage it accessed and modified and then matching transactions where these overlap. Our work discusses the theoretical background of this approach and our method to detect potential TODs works in a similar way as theirs.
Several other works provide frameworks to analyze attack transactions in Ethereum @ferreira_torres_eye_2021@zhang_txspector_2020@wu_time-travel_2022@chen_soda_2020. None of these frameworks supports the simulation of transactions in different orders; therefore, we cannot directly use them to detect TOD. Regarding the use of archive nodes, an evaluation by #cite(<wu_time-travel_2022>, form: "prose") states that replaying transactions with them is slow, taking "[...] more than 47 min to replay 100 normal transactions". However, #cite(<ferreira_torres_eye_2021>, form: "prose") show that it is indeed feasible to use archive nodes for attack analysis. We follow this work and use archive nodes to implement our simulation method.
= Background
This chapter provides background knowledge on Ethereum that is helpful for following the remainder of the paper. We also introduce a notation for these concepts.
== Ethereum
Ethereum is a blockchain that can be characterized as a "transactional singleton machine with shared-state" @wood_ethereum_2024[p.1]. By using a consensus protocol, a decentralized set of nodes agrees on a globally shared state. This state contains two types of accounts: #emph[externally owned accounts] (EOA) and #emph[contract accounts] (also referred to as smart contracts). The shared state is modified by executing #emph[transactions] @tikhomirov_ethereum_2018.
== World State
Similar to @wood_ethereum_2024[p.3], we will refer to the shared state as #emph[world state]. The world state maps each 20-byte address to an account state, containing a #emph[nonce], #emph[balance], #emph[storage] and #emph[code]#footnote[Technically, the account state only contains hashes that identify the storage and code, not the actual storage and code. This distinction is not relevant in this paper, therefore we simply refer to them as storage and code.]. They store the following data @wood_ethereum_2024[p.4]:
- #emph[nonce]: For EOAs, this is the number of transactions submitted
by this account. For contract accounts, this is the number of
contracts created by this account.
- #emph[balance]: The amount of Wei (the smallest unit of Ether) this account owns.
- #emph[storage]: The storage allows contract accounts to persist information across transactions. It is a key-value mapping where both key and value are 256 bits long. For EOAs, the storage is empty.
- #emph[code]: For contract accounts, the code is a sequence of EVM
instructions.
We denote the world state as $sigma$ and the value at a specific #emph[state key] $K$ as $sigma(K)$. For the nonce, balance and code the state key denotes the state type and the account's address, written as $sigma(stateKey("nonce", a))$, $sigma(stateKey("balance", a))$ and $sigma(stateKey("code", a))$, respectively. For the value at a storage slot $k$ we use $sigma(stateKey("storage", a, k))$.
== EVM
The Ethereum Virtual Machine (EVM) is used to execute code in Ethereum. It executes instructions that can access and modify the world state. The EVM is Turing-complete, except that it is executed with a limited amount of #emph[gas], and each instruction costs some gas. When it runs out of gas, the execution will halt @wood_ethereum_2024[p.14]. This prevents infinite loops, as their execution exceeds the gas limit.
Most EVM instructions are formally defined in the Yellowpaper @wood_ethereum_2024[p.30-38]. However, the Yellowpaper currently does not include the changes from the Cancun upgrade @noauthor_history_2024, therefore we will also refer to the informal descriptions available on #link("https://www.evm.codes/")[evm.codes] @smlxl_evm_2024.
== Transactions
A transaction can modify the world state by transferring Ether and executing EVM code. It must be signed by the owner of an EOA and contains the following data relevant to our work:
- #emph[sender]: The address of the EOA that signed this transaction.#footnote[The sender is implicitly given through a valid signature and the transaction hash @wood_ethereum_2024[p.25-27]. We are only interested in transactions that are included in the blockchain, thus the signature must be valid, and the transaction’s sender can always be derived.]
- #emph[recipient]: The destination address.
- #emph[value]: The value of Wei that should be transferred from the sender to the recipient.
- #emph[gasLimit]: The maximum amount of gas that can be used for the execution.
If the recipient address is empty, the transaction will create a new contract account. These transactions also include an #emph[init] field that contains the code to initialize the new contract account.
When the recipient address is given and a value is specified, this will be transferred to the recipient. Moreover, if the recipient is a contract account, it also executes the recipient’s code. The transaction can specify a #emph[data] field to pass input data to the code execution @wood_ethereum_2024[p.4-5].
For every transaction, the sender must pay a #emph[transaction fee]. This fee is composed of a #emph[base fee] and a #emph[priority fee]. Every transaction must pay the base fee, which is withdrawn from the sender and not given to any other account. For the priority fee, the transaction can specify whether and how much it is willing to pay. This fee is taken from the sender and given to the block validator, which is explained in the next section @wood_ethereum_2024[p.8].
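To make the fee computation concrete, the following minimal Python sketch splits the fee of a plain Ether transfer; all concrete numbers are hypothetical, only the 21,000 gas cost of a simple transfer is standard.

```python
# Minimal sketch of the transaction fee split (numbers are hypothetical).
GWEI = 10**9  # 1 gwei = 10^9 Wei

gas_used = 21_000        # standard gas cost of a plain Ether transfer
base_fee = 10 * GWEI     # per-gas base fee: withdrawn, not given to any account
priority_fee = 2 * GWEI  # per-gas priority fee: given to the block validator

burned = gas_used * base_fee
validator_tip = gas_used * priority_fee
total_fee = burned + validator_tip  # withdrawn from the sender's balance
print(total_fee)  # 252000000000000 Wei = 0.000252 Ether
```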
We denote a transaction as $T$, sometimes adding a subscript $T_A$ to differentiate it from another transaction $T_B$.
== Blocks
The Ethereum blockchain consists of a sequence of blocks, where each block builds upon the state of the previous block. To achieve consensus about the canonical sequence of blocks in a decentralized network of nodes, Ethereum uses a consensus protocol. In this protocol, validators build and propose blocks to be added to the blockchain @noauthor_gasper_2023. It is the choice of the validator which transactions to include in a block. However, they are incentivized to include transactions that pay high transaction fees, as they receive the fee @wood_ethereum_2024[p.8].
Each block consists of a block header and a sequence of transactions that are executed in this block.
== Transaction submission
This section discusses how a transaction signed by an EOA ends up being included in the blockchain.
Traditionally, the signed transaction is broadcast to the network of nodes, which temporarily store it in a #emph[mempool], a collection of pending transactions. The current block validator then picks transactions from the mempool and includes them in the next block. With this submission method, the pending transactions in the mempool are publicly known to the nodes in the network, even before being included in the blockchain. This time window will be important for our discussion on front-running, as it gives nodes time to react to a transaction before it becomes part of the blockchain @eskandari_sok_2020.
A different approach, the Proposer-Builder Separation (PBS), has gained popularity recently. Here, the task of collecting transactions and building blocks with them is separated from the task of proposing blocks as a validator. A user submits their signed transaction or transaction bundle to a block builder. The block builder has a private mempool and uses it to create profitable blocks. Finally, the validator picks one of the created blocks and adds it to the blockchain @heimbach_ethereums_2023.
== Transaction execution
In Ethereum, transaction execution is deterministic @wood_ethereum_2024[p.9]. Transactions can access the world state and their block environment, therefore their execution can depend on these values. After executing a transaction, the world state is updated accordingly.
#let changesEqual = sym.tilde.op
#let changesDiffer = sym.tilde.not
We denote a transaction execution as $sigma ->^T sigma'$, implicitly letting the block environment correspond to the transaction’s block. Furthermore, we denote the state change by a transaction $T$ as $Delta_T$, with $pre(Delta_T) = sigma$ being the world state before execution and $post(Delta_T) = sigma'$ the world state after the execution of $T$.
For two state changes $Delta_T_A$ and $Delta_T_B$, we say that they are equivalent, $Delta_T_A changesEqual Delta_T_B$, if the relative change of the values is equal. Formally, let $Delta_T_A changesEqual Delta_T_B$ be true if and only if:
$
forall K: post(Delta_T_A)(K) - pre(Delta_T_A)(K) = post(Delta_T_B)(K) - pre(Delta_T_B)(K)
$
We extend this equivalence definition to sequences of state changes by summing up the differences of the state changes on both sides. We define that two sequences of state changes $angle.l Delta_T_A_0, ..., Delta_T_A_n angle.r$ and $angle.l Delta_T_B_0, ..., Delta_T_B_m angle.r$ are equivalent if:
$
forall K: sum_(i=0)^n post(Delta_T_A_i)(K) - pre(Delta_T_A_i)(K) = sum_(j=0)^m post(Delta_T_B_j)(
K
) - pre(Delta_T_B_j)(K)
$
For example, if both $Delta_T_A$ and $Delta_T_B$ increase the balance at address $a$ by 10 Wei and make no other state changes, then $Delta_T_A changesEqual Delta_T_B$. If one of them had modified it by e.g. 15 Wei or 0 Wei, or additionally modified some storage slot, we would have $Delta_T_A changesDiffer Delta_T_B$.
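The following Python sketch mirrors this equivalence check. It represents a state change by its pre- and post-state mappings; this dictionary encoding and the helper names are our own illustration, not part of the formal definition.

```python
# Sketch: a state change as a (pre, post) pair of mappings from state keys to
# integer values. Keys absent from a mapping are treated as unchanged.

def diff(change):
    """Relative change per state key: post(K) - pre(K), dropping zero entries."""
    pre, post = change
    keys = set(pre) | set(post)
    return {k: post.get(k, 0) - pre.get(k, 0)
            for k in keys if post.get(k, 0) != pre.get(k, 0)}

def equivalent(changes_a, changes_b):
    """Two sequences of state changes are equivalent if their summed diffs agree."""
    def total(changes):
        acc = {}
        for change in changes:
            for k, d in diff(change).items():
                acc[k] = acc.get(k, 0) + d
        return {k: d for k, d in acc.items() if d != 0}
    return total(changes_a) == total(changes_b)

# Both changes increase the balance at address a by 10 Wei:
a = ("balance", "0xa")
assert equivalent([({a: 100}, {a: 110})], [({a: 50}, {a: 60})])
```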
We define $sigma + Delta_T$ to be equal to the state $sigma$, except that every state that was changed by the execution of $T$ is overwritten with the value in $post(Delta_T)$. Similarly, $sigma - Delta_T$ is equal to the state $sigma$, except that every state that was changed by the execution of $T$ is overwritten with the value in $pre(Delta_T)$. Formally, these definitions are as follows:
$ changedKeys(Delta_T) colon.eq {K \| pre(Delta_T)(K) != post(Delta_T) (K)} $
$
(sigma + Delta_T) (
K
) & colon.eq cases(
post(Delta_T) (K) & "if" K in changedKeys(Delta_T),
sigma (K) & "otherwise"
)\
(sigma - Delta_T) (
K
) & colon.eq cases(
pre(Delta_T) (K) & "if" K in changedKeys(Delta_T),
sigma (K) & "otherwise"
)
$
For instance, if transaction $T$ changed the storage slot 1234 at address 0xabcd from 0 to 100, then we have $changedKeys(Delta_T) = {stateKey("storage", "0xabcd", "1234")}$. Further, we have $(sigma + Delta_T)(stateKey("storage", "0xabcd", 1234)) = 100$ and $(sigma - Delta_T)(stateKey("storage", "0xabcd", 1234)) = 0$. For all other storage slots $k$ we have $(sigma + Delta_T) (stateKey("storage", a, k)) = sigma(stateKey("storage", a, k)) = (sigma - Delta_T)(stateKey("storage", a, k))$.
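The same encoding extends to the $sigma + Delta_T$ and $sigma - Delta_T$ operators, as the following sketch shows (again our own illustration, reusing the dictionary representation from above):

```python
# Sketch: applying and reverting a state change on top of a world state sigma.

def changed_keys(change):
    pre, post = change
    return {k for k in set(pre) | set(post) if pre.get(k) != post.get(k)}

def add(sigma, change):
    """sigma + Delta_T: overwrite changed keys with their post-state values."""
    pre, post = change
    return {**sigma, **{k: post.get(k) for k in changed_keys(change)}}

def sub(sigma, change):
    """sigma - Delta_T: overwrite changed keys with their pre-state values."""
    pre, post = change
    return {**sigma, **{k: pre.get(k) for k in changed_keys(change)}}

slot = ("storage", "0xabcd", 1234)
delta_t = ({slot: 0}, {slot: 100})  # T changed the slot from 0 to 100
sigma = {slot: 0}
assert add(sigma, delta_t)[slot] == 100
assert sub(add(sigma, delta_t), delta_t)[slot] == 0
```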
== Nodes
A node consists of an #emph[execution client] and a #emph[consensus client]. The execution client keeps track of the world state and the mempool and executes transactions. The consensus client takes part in the consensus protocol. For this work, we will use an #emph[archive node], which is a node that allows reproducing the state and transactions at any block @noauthor_nodes_2024.
== RPC
Execution clients implement the Ethereum JSON-RPC specification @noauthor_ethereum_2024. This API gives remote access to an execution client, for instance, to inspect the current block number with `eth_blockNumber` or to execute a transaction without committing the state via `eth_call`. In addition to the standardized RPC methods, we will also make use of methods in the debug namespace, such as `debug_traceBlockByNumber`. While this namespace is not standardized, several execution clients implement these additional methods @noauthor_go-ethereum_2024@noauthor_rpc_2024@noauthor_reth_2024.
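As an illustration, the following sketch queries both kinds of methods over JSON-RPC using only the Python standard library; the endpoint URL is a placeholder for an archive node that exposes the debug namespace.

```python
# Sketch: calling standardized and debug RPC methods (endpoint is a placeholder).
import json
import urllib.request

RPC_URL = "http://localhost:8545"  # hypothetical archive-node endpoint

def rpc(method, params):
    payload = json.dumps(
        {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    ).encode()
    request = urllib.request.Request(
        RPC_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["result"]

block_number = rpc("eth_blockNumber", [])
# Prestate-style traces expose the state accessed by each transaction in a block.
traces = rpc("debug_traceBlockByNumber", [block_number, {"tracer": "prestateTracer"}])
```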
== Tokens
In Ethereum, tokens are assets managed by contract accounts @chen_tokenscope_2019. The contract account stores which address holds how many tokens. There are several token standards that a contract account can implement, allowing standardized methods to interact with the token. For instance, the ERC-20 standard defines a `transfer` method, which allows the holder of a token to transfer the token to someone else @vogelsteller_erc-20_nodate.
= Transaction order dependency
In this chapter we discuss our definitions of transaction order simulations and transaction order dependency (TOD). We first introduce the idea of TOD with a preliminary definition and then show several shortcomings of this simple definition. Based on these insights, we construct precise definitions of TOD and our transaction order simulation.
== Approaching TOD
Intuitively, a pair of transactions $(T_A , T_B)$ is transaction order dependent (TOD), if the result of sequentially executing the transactions depends on the order of their execution. As a preliminary TOD definition we use the following:
$
sigma ->^(T_A) sigma_1 ->^(T_B) sigma' \
sigma ->^(T_B) sigma_2 ->^(T_A) sigma'' \
sigma' != sigma''
$
So, starting from the same initial state, executing first $T_A$ and then $T_B$ results in a different state than executing first $T_B$ and then $T_A$.
We will refer to the execution order $T_A -> T_B$, the one that occurred on the blockchain, as the #emph[normal] execution order, and $T_B -> T_A$ as the #emph[reverse] execution order.
== Motivating examples
We provide two examples to illustrate how TOD can be exploited.
=== Password leaking <sec:password-leaking>
The first example is an attack from a dataset by @torres_frontrunner_2021#footnote[The attacker transaction is #link("https://etherscan.io/tx/0x15c0d7252fa93c781c966a98ab46a1c8c086ca2a0da7eb0a7a06c522818757da")[0x15c0d7252fa93c781c966a98ab46a1c8c086ca2a0da7eb0a7a06c522818757da], and the victim transaction is #link("https://etherscan.io/tx/0x282e4de019b59a50b89c1fdc2e70c4bbd45a7ad7f7a1a6d4807a587b5fcdcdf6")[0x282e4de019b59a50b89c1fdc2e70c4bbd45a7ad7f7a1a6d4807a587b5fcdcdf6].]. A simplified version of the vulnerable contract with added comments is presented below. It allows depositing some Ether and locking it with a password, and then anyone with the password can withdraw this Ether.
#figure(
kind: "code",
supplement: "Contract",
caption: flex-caption(
[The `PasswordEscrow` contract. This is a simplified version with added comments, based on the source code from Etherscan @etherscan_ethereum_nodate.],
[The `PasswordEscrow` contract.],
),
)[
```solidity
contract PasswordEscrow {
    struct Transfer {
        address from;
        uint256 amount;
    }

    mapping(bytes32 => Transfer) private transferToPassword;

    function deposit(bytes32 _password) public payable {
        // REMARK: this stores an entry for the password and saves the amount
        // of Ether that was sent along with the transaction
        bytes32 pass = sha3(_password);
        transferToPassword[pass] = Transfer(msg.sender, msg.value);
    }

    function getTransfer(bytes32 _password) public payable {
        // REMARK: this verifies that an entry for the password exists
        // and gets the amount of Ether that was deposited for the password
        require(
            transferToPassword[sha3(_password)].amount > 0
        );
        bytes32 pass = sha3(_password);
        uint256 amount = transferToPassword[pass].amount;
        transferToPassword[pass].amount = 0;

        // REMARK: this transfers the Ether to the transaction's sender
        msg.sender.transfer(amount);
    }
}
```
] <contract:password-escrew>
The victim previously interacted with the contract to deposit some Ether and lock it with a password. For the sake of the argument, we ignore that the password is already public at this step. This could be fixed, e.g. by directly submitting `sha3(password)` rather than the password itself, without resolving the TOD issue we discuss here.
Later, the victim tried to withdraw this Ether by creating a transaction that calls `getTransfer` with the password. However, in the time between the transaction submission and its inclusion in a block, an attacker saw this transaction and determined that they can perform the Ether withdrawal themselves. They copied the transaction data, including the password, and submitted their own transaction with a higher gas price than the victim's. The attacker's transaction ended up being executed first and withdrew all the Ether.
If we map this attack to our preliminary TOD definition above, the first transaction that executes will withdraw the Ether and thus increase the sender's balance. If the attacker's transaction executes first, we end up in a state where the attacker has more balance than if the victim's transaction is executed first. Therefore, $sigma' != sigma''$.
=== ERC-20 multiple withdrawal <sec:erc-20-multiple-withdrawal>
As a second example, we explain the ERC-20 multiple withdrawal attack @rahimian_resolving_2019. Contracts that implement the ERC-20 token standard must include an `approve` method @vogelsteller_erc-20_nodate. This method takes as parameters a `spender` and a `value` and allows the `spender` to spend `<value>` tokens from the caller's account. For instance, when some account $a$ calls `approve(b, 0x1234)`, then `b` can transfer up to `0x1234` tokens from $a$ to any other account. If the `approve` method is called another time, the currently approved value is overwritten with the new value, regardless of the previous value.
We illustrate that approvals and the spending of approved tokens can be TOD in @tab:erc20-multiple-withdrawal-example. In the benign scenario, $b$ spends one token and remains with two tokens that are still approved. However, in the attack scenario, $b$ spends one token and only afterwards does $a$ approve $b$ to spend three tokens. Therefore, $b$ remains with three approved tokens instead of two. As such, changing the order of the second and third transaction results in different states, hence the transactions are TOD.
From the perspective of $a$, they only wanted to allow $b$ to use three tokens. However, when $b$ reacts to a pending approval by executing a `transferFrom` before the approval is included in a block, then $b$ is able to use more than three tokens in total. This happened in the attack scenario, where the `transferFrom` is executed before the second `approve` got included in a block.
#figure(
grid(
columns: 2,
gutter: 1em,
[*Benign scenario*], [*Attack scenario*],
table(
columns: 2,
align: (left, center),
table.header([Action], [Approved tokens]),
[`approve(b, 1)`], [1],
[`approve(b, 3)`], [3],
[`transferFrom(a, b, 1)`], [2],
),
table(
columns: 2,
align: (left, center),
table.header([Action], [Approved tokens]),
[`approve(b, 1)`], [1],
[`transferFrom(a, b, 1)`], [0],
[`approve(b, 3)`], [3],
),
),
caption: flex-caption(
[Benign and attack scenario for ERC-20 approvals.],
[Benign and attack scenario for ERC-20 approvals.],
),
) <tab:erc20-multiple-withdrawal-example>
== Relation to previous works <sec:tod-relation-previous-works>
This section discusses how our preliminary TOD definition relates to previous works that detect front-running attacks.
In @torres_frontrunner_2021, the authors do not provide a formal definition of TOD or front-running attacks. Nevertheless, for displacement attacks, they include the following check to detect if two transactions fall into this category:
#quote(block: true)[
[...] we run in a simulated environment first $T_A$ before $T_V$ and then $T_V$ before $T_A$. We report a finding if the number of executed EVM instructions is different across both runs for $T_A$ and $T_V$, as this means that $T_A$ and $T_V$ influence each other.
]
Similar to our preliminary TOD definition, they execute $T_A$ and $T_V$ in different orders and check if it affects the result. In their case, they only check the number of executed instructions, instead of the resulting state. This check misses attacks where the same instructions are executed, but the operands of instructions in the second transaction change because of the first transaction.
In @zhang_combatting_2023, the authors define an attack as a triple $A = angle.l T_a , T_v , T_a^p angle.r$, where $T_a$ and $T_v$ are similar to $T_A$ and $T_B$ from our definition, and $T_a^p$ is an optional third transaction. They consider the execution orders $T_a -> T_v -> T_a^p$ and $T_v -> T_a -> T_a^p$ and check if the execution order influences financial gains, which we will discuss in more detail in @sec:gain-and-loss-property.
We note that if these two execution orders result in different states, this is not because of the last transaction $T_a^p$, but because of a TOD between $T_a$ and $T_v$. As we always execute $T_a^p$ last, and transaction execution is deterministic, it only gives a different result if the execution of $T_a$ and $T_v$ gave a different result. Therefore, if the execution order results in different financial gains, then $T_a$ and $T_v$ must be TOD.
== Imprecise definitions
Our preliminary definition of TOD, and the related definitions above, are not precise regarding the semantics of a reordering of transactions and their executions. This makes it impossible to apply exactly the same methodology without analyzing the source code related to the papers. We describe three issues where the definition is not precise enough, and show how these are differently interpreted by the two papers.
For the analysis of the tools by @zhang_combatting_2023 and @torres_frontrunner_2021, we will use the current versions of their source code, @zhang_erebus-redgiant_2023 and @torres_frontrunner_2022, respectively.
=== Intermediary transactions
To analyze a TOD $(T_A , T_B)$, we are interested in how $T_A$ affects $T_B$ in the normal order, and how $T_B$ affects $T_A$ in the reverse order. Our preliminary definition does not specify how to handle transactions that occur between $T_A$ and $T_B$, which we will name #emph[intermediary transactions].
Suppose that there is one transaction $T_X$ between $T_A$ and $T_B$: $sigma ->^(T_A) sigma_A ->^(T_X) sigma_(A X) ->^(T_B) sigma_(A X B)$. The execution of $T_B$ may depend on both $T_A$ and $T_X$. When we are interested in the effect of $T_A$ on $T_B$, we need to define what happens with $T_X$.
For executing in the normal order, we have two possibilities:
+ $sigma ->^(T_A) sigma_A ->^(T_X) sigma_(A X) ->^(T_B) sigma_(A X B)$, the same execution as on the blockchain, including the effects of $T_X$.
+ $sigma ->^(T_A) sigma_A ->^(T_B) sigma_(A B)$, leaving out $T_X$ and thus having a normal execution that potentially diverges from the results on the blockchain (as $sigma_(A B)$ may differ from $sigma_(A X B)$).
When executing the reverse order, we have the following choices:
+ $sigma ->^(T_B) sigma_B ->^(T_A) sigma_(B A)$, which ignores $T_X$ and thus may influence the execution of $T_B$.
+ $sigma ->^(T_X) sigma_X ->^(T_B) sigma_(X B) ->^(T_A) sigma_(X B A)$, which executes $T_X$ on $sigma$ rather than $sigma_A$ and now also includes the effects of $T_X$ for executing $T_A$.
+ $sigma ->^(T_B) sigma_B ->^(T_X) sigma_(B X) ->^(T_A) sigma_(B X A)$, which executes $T_X$ after $T_B$ and before $T_A$, thus potentially influencing the execution of both $T_A$ and $T_B$.
All of these scenarios are possible, but none of them provides a clean solution to solely analyze the effect of $T_A$ on $T_B$, as we always may have some indirect effect from the (non-)execution of $T_X$.
In @zhang_combatting_2023, this influence of intermediary transactions is acknowledged as causing a few false positives:
#quote(block: true)[
In blockchain history, there could be many other transactions between $T_a$, $T_v$, and $T_p^a$. When we change the transaction orders to mimic attack-free scenarios, the relative orders between $T_a$ (or $T_v$) and other transactions are also changed. Financial profits of the attack or victim could be affected by such relative orders. As a result, the financial profits in the attack-free scenario could be incorrectly calculated, and false-positively reported attacks may be induced, but our manual check shows that such cases are rare.
]
Nonetheless, it is not clear which of the above scenarios they applied for their analysis. The other work, @torres_frontrunner_2021, does not mention the issue of intermediary transactions.
==== Code analysis of @zhang_combatting_2023
In @zhang_combatting_2023, algorithm 1 takes all the executed transactions as its input. These transactions and their results are used in the `searchVictimGivenAttack` method, where `ar` represents the attack transaction with result and `vr` represents the victim transaction with result.
For the normal execution order ($T_a -> T_v$), the authors use `ar` and `vr` and pass them to their `CheckOracle` method, which then compares the resulting states. As `ar` and `vr` are obtained by executing all transactions, they also include the intermediary transactions for these results (similar to our $sigma ->^(T_A) sigma_A ->^(T_X) sigma_(A X) ->^(T_B) sigma_(A X B)$ case).
For the reverse order ($T_v -> T_a$), they take the state before $T_a$, i.e. $sigma$. They then execute all transactions obtained from the `SlicePrerequisites` method and finally execute $T_v$ and $T_a$.
The `SlicePrerequisites` method uses the `hbGraph`, which is built in `StartSession`. `hbGraph` seems to be a graph where each transaction points to the previous transaction from the same EOA. The `SlicePrerequisites` method uses this graph to obtain all transactions between $T_a$ and $T_v$ that are from the same sender as $T_v$. This interpretation matches the test case "should slide prerequisites correctly" from the source code. As the paper does not mention these prerequisite transactions, we do not know why this subset of intermediary transactions was chosen.
We can conclude that @zhang_combatting_2023 executes all intermediary transactions in the normal order. However, in the reverse order, they only execute intermediary transactions that are also sent by the victim, but do not execute any other intermediary transactions.
==== Code analysis of @torres_frontrunner_2021
In the file `displacement.py`, lines 154-155 replay the normal execution order, and lines 158-159 the reverse execution order. They only execute $T_A$ and $T_V$ (in normal and reverse order), but do not execute any intermediate transactions.
=== Block environments
When we analyze a pair of transactions $(T_A , T_B)$, it may happen that these are not part of the same block. The execution of the transactions may depend on the block environment they are executed in, for instance, if they access the current block number. Thus, executing $T_A$ or $T_B$ in a block environment different from the blockchain may alter their behavior. From our preliminary TOD definition, it is not clear which block environment(s) we use when replaying the transactions in normal and reverse order.
==== Code analysis of @zhang_combatting_2023
In the normal scenario, the block environments used are the same as originally used for the transaction.
For the reverse scenario, the block environment used to execute all transactions is contained in `ar.VmContext` and corresponds to the block environment of $T_a$. Therefore, $T_a$ is executed in the same block environment as on the blockchain, while $T_v$ and the intermediary transactions may be executed in a block environment different from the normal scenario.
==== Code analysis of @torres_frontrunner_2021
In the file `displacement.py` line 151, we see that the emulator uses the same block environment for both transactions. Therefore, at least one of them will be executed in a block environment different from the blockchain. However, it uses the same block environment for both scenarios, thus being consistently different from the execution on the blockchain.
=== Initial state $sigma$
While our preliminary TOD definition specifies that we start with the same $sigma$ in both execution orders, it is up to interpretation which world state $sigma$ actually designates.
==== Code analysis of @zhang_combatting_2023
In both the normal and the reverse scenario, it uses the state directly before executing $T_a$, including the state changes of previous transactions within the same block. In the reverse scenario, this is the case because it uses `ar.State`.
==== Code analysis of @torres_frontrunner_2021
The emulator is initialized with the block `front_runner["blockNumber"]-1`, and no individual transactions are executed prior to running the analysis. Therefore, the state cannot include transactions that were executed in the same block before $T_A$.
Similar to the case with the block environment, this could lead to differences between the emulation and the results from the blockchain when $T_A$ or $T_V$ are affected by a previous transaction in the same block.
== TOD simulation <sec:tod-simulation>
To address the issues above, we define a TOD simulation method that explicitly defines the used world states and block environments while also taking intermediary transactions into account:
#definition("Normal and reverse scenarios")[
Consider a sequence of transactions, with $sigma$ being the world state right before $T_A$ was executed on the blockchain:
$ sigma ->^(T_A) sigma_A ->^(T_X_1) dots.h ->^(T_X_n) sigma_X_n ->^(T_B) sigma_B $
Let $Delta_T_A$ and $Delta_T_B$ be the corresponding state changes from executing $T_A$ and $T_B$, and let all transactions be executed in the same block environment as they were executed on the blockchain.
Let $Delta'_T_B$ be the state change when executing $(sigma_X_n - Delta_T_A) ->^(T_B) sigma'_B$ and $Delta'_T_A$ be the state change when executing $(sigma + Delta'_T_B) ->^(T_A) sigma'_A$.
We call $Delta_T_A$ and $Delta_T_B$ the state changes from the normal scenario and $Delta'_T_A$ and $Delta'_T_B$ the state changes from the reverse scenario.
]
The normal scenario represents the order $T_A -> T_B$. The state changes $Delta_T_A$ and $Delta_T_B$ are equal to the ones observed on the blockchain, as we execute the transactions in their original block environment and with their original prestate.
The reverse scenario models the order $T_B -> T_A$, where $T_B$ occurs before $T_A$. Therefore, we execute $T_B$ on a state that does not contain the changes of $T_A$. We do so by taking the world state exactly before executing $T_B$, namely $sigma_X_n$, and then removing the state changes of $T_A$ by computing $sigma_X_n - Delta_T_A$. Executing $T_B$ on $sigma_X_n - Delta_T_A$ gives us the state change $Delta'_T_B$. To model the execution of $T_A$ after $T_B$, we take the state $sigma$ on which $T_A$ was originally executed and add the state changes $Delta'_T_B$.
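Assuming the dictionary encoding and the `add`/`sub` helpers sketched in the background chapter, the reverse scenario can be computed as follows; `execute` stands for a hypothetical helper that runs a transaction on a given world state in its original block environment and returns the resulting state change.

```python
# Sketch of the reverse scenario; add/sub as defined earlier, execute() is a
# hypothetical helper returning the state change of running a transaction.

def reverse_scenario(sigma, sigma_xn, delta_a, t_a, t_b, execute):
    # sigma:    world state right before T_A on the blockchain
    # sigma_xn: world state right before T_B on the blockchain
    # delta_a:  state change of T_A in the normal scenario
    delta_b_rev = execute(sub(sigma_xn, delta_a), t_b)   # T_B without T_A's changes
    delta_a_rev = execute(add(sigma, delta_b_rev), t_a)  # T_A after T_B's changes
    return delta_a_rev, delta_b_rev
```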
/*
Additionally, for the special case that $T_A$ and $T_B$ do not have intermediary transactions, we can compute the states we would get from the preliminary definition using the normal and reverse scenarios:
#proposition[
Consider a sequence of transactions, with $sigma$ being the world state right before $T_A$ and the following two execution orders:
$
sigma ->^(T_A) sigma_1 ->^(T_B) sigma'\
sigma ->^(T_B) sigma_2 ->^(T_A) sigma''
$
When $Delta_T_A$, $Delta_T_B$, $Delta'_T_A$ and $Delta'_T_B$ are the corresponding state changes of the normal and reverse order, we must have $sigma' = sigma + Delta_T_A + Delta_T_B$ and $sigma'' = sigma + Delta'_T_B + Delta'_T_A$.
]
#proof[
For the normal scenario our definition uses the original prestates, therefore we use $sigma$ for $T_A$ and $sigma_1$ for $T_B$. Because we use the same prestates as for $sigma ->^(T_A) sigma_1 ->^(T_B) -> sigma'$, we end up with the same poststates, therefore $sigma' = sigma + Delta_T_A + Delta_T_B$. For the reverse scenario, we also compute the same prestates as $sigma ->^(T_B) sigma_2 ->^(T_A) sigma''$ and therefore get the same result. To execute $T_B$ in the reverse scenario, we compute $sigma_1 - Delta_T_A = (sigma + Delta_T_A) - Delta_T_A = sigma$. We then execute $T_A$ on $sigma + Delta'_T_B = sigma_2$ and therefore end up with $sigma'' = sigma + Delta'_T_B + Delta'_T_A$.
]
*/
== TOD definition <sec:tod-definition>
Based on the definition of normal and reverse scenarios, we define TOD as follows:
#definition("TOD")[
Let $T_A$ and $T_B$ be two transactions with the corresponding state changes $Delta_T_A$ and $Delta_T_B$ from the normal scenario and $Delta'_T_A$ and $Delta'_T_B$ from the reverse scenario.
We say that $(T_A, T_B)$ is TOD if and only if $angle.l Delta_T_A, Delta_T_B angle.r changesDiffer angle.l Delta'_T_A, Delta'_T_B angle.r$.
]
Consider the example of the ERC-20 multiple withdrawal from @sec:erc-20-multiple-withdrawal, with $T_A$ being the attacker transaction that calls `transferFrom(a, b, 1)` and $T_B$ being the victim transaction that calls `approve(b, 3)`. In the normal scenario, we have shown that the attacker remains with three approved tokens, while in the reverse scenario, only two tokens would remain. Intuitively, this satisfies $angle.l Delta_T_A, Delta_T_B angle.r changesDiffer angle.l Delta'_T_A, Delta'_T_B angle.r$, as the change in approved tokens differs between the normal and the reverse scenario.
More formally, let $K$ be the state key that tracks how many tokens are approved by $a$ for $b$. Initially, one token is approved, therefore $sigma(K) = 1$. When executing $T_A$ in the normal scenario, where the attacker spends the one approved token, this changes to $sigma(K) = 0$. Therefore, we have a change of $post(Delta_T_A)(K) - pre(Delta_T_A)(K) = -1$. We then continue to execute $T_B$ in the normal scenario, which sets $sigma(K) = 3$, therefore $post(Delta_T_B)(K) - pre(Delta_T_B)(K) = 3$. When we add up these two state changes, we get an overall state change of $2$ for the state at key $K$. However, doing the same calculations for the reverse scenario results in an overall state change of $1$ for $K$, as $T_B$ first increases it by two and $T_A$ then reduces it by one. As the overall changes differ between the normal and reverse scenario, we have $angle.l Delta_T_A, Delta_T_B angle.r changesDiffer angle.l Delta'_T_A, Delta'_T_B angle.r$ and $(T_A, T_B)$ is TOD.
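Using the `equivalent` helper sketched in the background chapter, this calculation can be replayed directly; the state key below is a hypothetical stand-in for the approval slot.

```python
k = ("storage", "token", "approval(a,b)")  # hypothetical key of the approval slot

normal = [({k: 1}, {k: 0}), ({k: 0}, {k: 3})]   # T_A then T_B: overall change +2
reverse = [({k: 1}, {k: 3}), ({k: 3}, {k: 2})]  # T_B then T_A: overall change +1
assert not equivalent(normal, reverse)          # hence (T_A, T_B) is TOD
```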
Similarly, for the password leaking example in @sec:password-leaking we showed that the execution order determines who can withdraw the stored Ether. If the attacker's transaction is executed first, they withdraw the Ether. If it is executed second, the attacker does not withdraw any Ether. Therefore, the change at the state key $K = stateKey("balance", italic("attacker"))$ depends on the transaction order, and thus, the transactions are TOD.
== TOD approximation <sec:tod-approximation>
This paper focuses on detecting TOD attacks, in which the attacker inserts a transaction $T_A$ prior to some victim transaction $T_B$. We assume that $T_A$ tries to influence the execution of $T_B$, which implies that in every TOD attack, the state changes of $T_B$ depend on the transaction order. We use this assumption to define an approximation of TOD:
#definition("Approximately TOD")[
Let $T_A$ and $T_B$ be transactions with the state changes $Delta_T_B$ for the normal scenario and $Delta'_T_B$ for the reverse scenario.
We say that $(T_A, T_B)$ is approximately TOD if and only if $Delta_T_B changesDiffer Delta'_T_B$.
]
In principle, the assumption that an attack influences the transaction it front-runs need not hold. For example, suppose a transaction $T$ leaks a password that can be used to withdraw Ether, but at the same time, $T$ locks the contract that contains this Ether. An attacker may use the password to withdraw the Ether without necessarily influencing the execution of $T$, but still needs to front-run $T$ because of the contract locking.
== Definition strengths <sec:definition-strengths>
=== Performance
To check if two transactions, $T_A$ and $T_B$, are TOD, we need the initial world state $sigma$, and the state changes from $T_A$, $T_B$ and the intermediary transactions $T_(X_n)$. With the state changes we can compute $sigma_(X_n) - Delta_(T_A) = sigma + Delta_(T_A) + (sum_(i = 0)^n Delta_(T_(X_i))) - Delta_(T_A)$ and then execute $T_B$ on this state. With the recorded state changes, $Delta'_T_B$, we can compute $sigma + Delta'_T_B$ and execute $T_A$ on this state. As such, we need one transaction execution to check for the TOD approximation and two transaction executions to check for TOD. Despite including the effect of arbitrarily many intermediary transactions, we do not need to execute them to check for TOD.
When we want to check $n$ transactions for TOD, there are $frac(n^2 - n, 2)$ possible transaction pairs. Thus, if we want to test each pair for TOD we end up with a total of $frac(n^2 - n, 2)$ transaction executions for the approximation and $n^2 - n$ executions for the exact TOD check. Similar to @torres_frontrunner_2021 and @zhang_combatting_2023, we can filter irrelevant transaction pairs to reduce the search space.
Depending on the available world states and state changes, the exact number of required transaction executions and the method to compute world states may differ. For instance, the archive nodes Erigon 2 and Reth currently only store state changes for each block, but not on a transaction level @noauthor_erigon_2023@noauthor_reth_2024-1. We show the state calculations under such constraints in @sec:tod-detection. Other systems, such as EthScope @wu_time-travel_2022, and Erigon 3 @rebuffo_erigon_2024, store changes for every transaction. However, EthScope is not publicly available anymore and Erigon 3 is still in development.
=== Similarity to blockchain executions
With our definition, the state changes $Delta_T_A$ and $Delta_T_B$ from the normal execution are equivalent to the state changes that happened on the blockchain. Also, the reverse order is closely related to the state from the blockchain, as we start with the world states before $T_A$ and $T_B$ and only change state keys that were modified by $T_A$ and $T_B$, thus only the state keys relevant for TOD simulation. Furthermore, we prevent the effects of block environment changes by using the same environments as on the blockchain.
This contrasts with other implementations, where transactions are executed in different block environments than originally, a different world state is used for the first transaction or the effect of intermediary transactions is ignored. All three cases can alter the execution of $T_A$ and $T_B$, such that the result is not closely related to the blockchain.
== Definition weaknesses
<sec:weaknesses>
=== Approximation focuses on effect on $T_B$ <sec:weakness-focus-on-tb>
In some cases, the transaction order can affect the execution of the individual transactions, but does not affect the overall result of executing both transactions. The approximation does not consider the execution of $T_A$ after $T_B$ in the reverse order, which could lead to incorrect TOD classification.
For example, consider the case where both $T_A$ and $T_B$ multiply a value in a storage slot by 5. If the storage slot initially has the value 1, then executing both $T_A$ and $T_B$ will result in 25, regardless of the order. However, the state changes $Delta_T_B$ and $Delta'_T_B$ are different, as in one scenario the value changes from 5 to 25 and in the other from 1 to 5. Therefore, this would be classified as approximately TOD.
Note that the approximation is robust against cases where the absolute values differ but the change is constant. For instance, if both $T_A$ and $T_B$ increased the storage slot by 5 rather than multiplying it, the state changes $Delta_T_B$ and $Delta'_T_B$ would be from 6 to 11 and from 1 to 6, respectively. As our definition of state change equivalence uses the difference between the state before and after execution, we would compare the change $11 - 6 = 5$ against $6 - 1 = 5$, thus $Delta_T_B changesEqual Delta'_T_B$.
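Both cases can be checked with the `equivalent` helper from the background chapter:

```python
slot = ("storage", "0xabcd", 1)

# Multiplication by 5: T_B changes 5 -> 25 in the normal scenario but 1 -> 5 in
# the reverse scenario; the relative changes (20 vs. 4) differ.
assert not equivalent([({slot: 5}, {slot: 25})], [({slot: 1}, {slot: 5})])

# Addition of 5: T_B changes 6 -> 11 normally and 1 -> 6 in reverse; both
# relative changes equal 5, so the state changes are equivalent.
assert equivalent([({slot: 6}, {slot: 11})], [({slot: 1}, {slot: 6})])
```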
=== Indirect dependencies <sec:weakness-indirect-dependencies>
An intuitive interpretation of our TOD definition is that we compare $T_A -> T_X_i -> T_B$ with $T_X_i -> T_B -> T_A$, i.e. we consider what happens if $T_A$ is executed last instead of first. However, the definition we provide does not perfectly match this concept, because it does not consider interactions between $T_A$ and the intermediary transactions $T_(X_i)$. In the intuitive model, not executing $T_A$ before the intermediary transactions may influence them and thus indirectly change the behavior of $T_B$. Then, we do not know if $T_A$ directly influences $T_B$, or only through some interplay with intermediary transactions. Similarly, when executing $T_A$ last, we do not know whether $T_A$ behaves differently because of an interaction with $T_B$ or because of an intermediary transaction.
Therefore, our exclusion of interactions between $T_A$ and $T_(X_i)$ may be desirable to focus only on interactions between $T_A$ and $T_B$; however, it can cause divergences between our analysis results and what would have happened on the blockchain.
As an example, consider the three transactions $T_A$, $T_X$ and $T_B$:
+ $T_A$: sender $a$ transfers 5 Ether to address $x$.
+ $T_X$: sender $x$ transfers 5 Ether to address $b$.
+ $T_B$: sender $b$ transfers 5 Ether to address $y$.
When executing these transactions in the normal order, with $a$ initially holding 5 Ether and the others holding 0, all of these transactions succeed. If we remove $T_A$ and only execute $T_X$ and $T_B$, then first $T_X$ fails, as $x$ did not receive the 5 Ether from $a$, and consequently $T_B$ fails as well.
However, when using our TOD definition and computing $(sigma_(X_n) - Delta_(T_A))$, we would only modify the balances for $a$ and $x$, but not for $b$, because $b$ is not modified in $Delta_(T_A)$. Thus, $T_B$ would still succeed in the reverse order according to our definition, but would fail in practice due to the indirect effect. This shows how the concept of removing $T_A$ does not map exactly to our TOD definition.
In this example, we had a TOD for $(T_A , T_X)$ and $(T_X , T_B)$. However, we can also have an indirect dependency between $T_A$ and $T_B$ without a TOD for $(T_X , T_B)$. For instance, suppose $T_X$ and $T_B$ would be TOD, but $T_A$ causes $T_X$ to fail. When inspecting the normal order, $T_X$ failed, so there is no TOD between $T_X$ and $T_B$. However, when executing the reverse order without $T_A$, $T_X$ would succeed and could influence $T_B$.
== State collisions
We denote the state accesses by a transaction $T$ as a set of state keys $R_T = { K_1 , dots.h , K_n }$ and the state modifications as $W_T = { K_1 , dots.h , K_m }$.
Inspired by the definition of a transaction race in @ma_transracer_2023, we define the state collisions of two transactions as:
$
colls(T_A , T_B) = (W_(T_A) sect R_(T_B)) union (W_(T_A) sect W_(T_B))
$
For instance, if transaction $T_A$ modifies the balance of some address $a$, and $T_B$ accesses this balance, we have $colls(T_A, T_B) = ({ stateKey("balance", a) } sect {stateKey("balance", a)}) union ({stateKey("balance", a)} sect emptyset) = {stateKey("balance", a)}$.
With $W_(T_A) sect R_(T_B)$ we include write-read collisions, where $T_A$ modifies some state key and $T_B$ accesses the same state key. With $W_(T_A) sect W_(T_B)$ we include write-write collisions, where both transactions write to the same state location, for instance by writing to the same storage. Following the assumption of the TOD approximation, we do not include $R_(T_A) sect W_(T_B)$, as in this case $T_A$ does not influence the execution of $T_B$.
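As a set-based sketch, with state keys encoded as tuples like in the earlier examples:

```python
def colls(writes_a, reads_b, writes_b):
    # Write-read collisions plus write-write collisions.
    return (writes_a & reads_b) | (writes_a & writes_b)

balance_a = ("balance", "0xa")
# T_A modifies a balance that T_B accesses: a single write-read collision.
assert colls({balance_a}, {balance_a}, set()) == {balance_a}
# If T_A writes nothing, there is no collision, even if T_B writes a key T_A reads.
assert colls(set(), {balance_a}, {balance_a}) == set()
```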
== TOD candidates
We will refer to a transaction pair $(T_A , T_B)$, where $T_A$ was executed before $T_B$ and $colls(T_A , T_B) != nothing$, as a TOD candidate.
A TOD candidate is not necessarily TOD or approximately TOD. For instance, consider the case that $T_B$ only reads the value that $T_A$ wrote but never uses it for any computation. This would be a TOD candidate, as the transactions have a collision; however, the result of executing $T_B$ is not influenced by it.
If $(T_A , T_B)$ is approximately TOD, then $(T_A , T_B)$ must also be a TOD candidate. We can only have $Delta_T_B changesDiffer Delta'_T_B$ if the values at the state keys that $T_B$ accesses or modifies differ between the normal and reverse scenarios. As the only difference between the scenarios is the removal of $Delta_T_A$ in the reverse scenario, these differences must come from $Delta_T_A$. Therefore, $T_A$ modifies these state keys, and we have $(W_T_A sect R_T_B) union (W_T_A sect W_T_B) != nothing$. This is equivalent to $colls(T_A, T_B) != nothing$, showing that $(T_A, T_B)$ must be a TOD candidate.
Therefore, the set of all approximately TOD transaction pairs is a subset of all TOD candidates.
In the case that $(T_A, T_B)$ is TOD but not approximately TOD, the pair $(T_A, T_B)$ need not be a TOD candidate. By the definitions of TOD and approximately TOD, we have $angle.l Delta_T_A, Delta_T_B angle.r changesDiffer angle.l Delta'_T_A, Delta'_T_B angle.r$ and $Delta_T_B changesEqual Delta'_T_B$, which implies that $Delta_T_A changesDiffer Delta'_T_A$ must hold. Similar to the previous argument, $Delta_T_A changesDiffer Delta'_T_A$ implies $(R_T_A sect W_T_B) union (W_T_A sect W_T_B) != nothing$. However, in this case we cannot conclude $colls(T_A, T_B) != nothing$, because we excluded $R_T_A sect W_T_B$ from our collision definition.
As such, the definition of TOD candidates aligns with the approximation of TOD, but not necessarily the exact TOD definition.
== Causes of state collisions
This section discusses what can cause two transactions $T_A$ and $T_B$ to have state collisions. To do so, we show the ways a transaction can access and modify the world state.
=== Causes with code execution
When the recipient of a transaction is a contract account, it will execute the recipient’s code. The code execution can access and modify the state through several instructions. By inspecting the EVM instruction definitions @wood_ethereum_2024[p.30-38]@smlxl_evm_2024, we compiled a list of instructions that can access and modify the world state.
In @tab:state_reading_instructions, we see the instructions that can access the world state. For most, the reason for the access is clear; for instance, `BALANCE` needs to access the balance of the target address. Less obvious is the nonce access of several instructions, which is because the EVM uses the nonce (among other things) to check if an account already exists @wood_ethereum_2024[p.4]. For `CALL`, `CALLCODE` and `SELFDESTRUCT`, this is used to calculate the gas costs @wood_ethereum_2024[p.37-38]. For `CREATE` and `CREATE2`, this is used to prevent creating an account at an already active address @wood_ethereum_2024[p.11]#footnote[In the Yellowpaper, the check for the existence of the recipient for `CALL`, `CALLCODE` and `SELFDESTRUCT` is done via the `DEAD` function. For `CREATE` and `CREATE2`, this is done in the condition (113) named `F`.].
In @tab:state_writing_instructions, we see instructions that can modify the world state.
#figure(
table(
columns: 5,
align: (left, center, center, center, center),
table.header([Instruction], [Storage], [Balance], [Code], [Nonce]),
table.hline(),
[`SLOAD`], [$checkmark$], [], [], [],
[`BALANCE`], [], [$checkmark$], [], [],
[`SELFBALANCE`], [], [$checkmark$], [], [],
[`CODESIZE`], [], [], [$checkmark$], [],
[`CODECOPY`], [], [], [$checkmark$], [],
[`EXTCODECOPY`], [], [], [$checkmark$], [],
[`EXTCODESIZE`], [], [], [$checkmark$], [],
[`EXTCODEHASH`], [], [], [$checkmark$], [],
[`CALL`], [], [$checkmark$], [$checkmark$], [$checkmark$],
[`CALLCODE`], [], [$checkmark$], [$checkmark$], [$checkmark$],
[`STATICCALL`], [], [], [$checkmark$], [],
[`DELEGATECALL`], [], [], [$checkmark$], [],
[`CREATE`], [], [$checkmark$], [$checkmark$], [$checkmark$],
[`CREATE2`], [], [$checkmark$], [$checkmark$], [$checkmark$],
[`SELFDESTRUCT`], [], [$checkmark$], [$checkmark$], [$checkmark$],
),
caption: flex-caption(
[Instructions that access state. A checkmark indicates
that the execution of this instruction can depend on this state type.],
[State accessing instructions],
),
kind: table,
)<tab:state_reading_instructions>
#figure(
table(
columns: 5,
align: (left, center, center, center, center),
table.header([Instruction], [Storage], [Balance], [Code], [Nonce]),
table.hline(),
[`SSTORE`], [$checkmark$], [], [], [],
[`CALL`], [], [$checkmark$], [], [],
[`CALLCODE`], [], [$checkmark$], [], [],
[`CREATE`], [], [$checkmark$], [$checkmark$], [$checkmark$],
[`CREATE2`], [], [$checkmark$], [$checkmark$], [$checkmark$],
[`SELFDESTRUCT`], [$checkmark$], [$checkmark$], [$checkmark$], [$checkmark$],
),
caption: flex-caption(
[Instructions that modify state. A checkmark indicates
that the execution of this instruction can modify this state type.],
[State modifying instructions],
),
kind: table,
) <tab:state_writing_instructions>
=== Causes without code execution
Some state accesses and modifications are inherent to transaction execution. To pay the transaction fees, the sender's balance is accessed and modified. When a transaction transfers some Wei from the sender to the recipient, it also modifies the recipient’s balance. To check if the recipient is a contract account, the transaction also needs to access the code of the recipient. Finally, it also verifies the sender’s nonce and increments it by one @wood_ethereum_2024[p.9].
=== Relevant collisions for attacks
<sec:relevant-collisions>
The previous sections list possible ways to access and modify the world state. Many previous works have focused on storage and balance collisions; however, they did not discuss if or why code and nonce collisions are unimportant @tsankov_securify_2018@wang_etherfuzz_2022@kolluri_exploiting_2019@luu_making_2016@munir_todler_2023. Here, we argue why only storage and balance collisions are relevant for TOD attacks and why code and nonce collisions can be neglected.
Following the assumption we made in @sec:tod-approximation, in a TOD attack an attacker influences the execution of some transaction $T_B$ by placing a transaction $T_A$ before it. To have some effect, there must be a write-write or write-read collision between $T_A$ and $T_B$. Therefore, our scenario is that we start from some (victim) transaction $T_B$ and try to create impactful collisions with a new transaction $T_A$.
Let us first focus on the instructions that could modify the codes and nonces that $T_B$ accesses or modifies. As we see in @tab:state_writing_instructions, these are `SELFDESTRUCT`, `CREATE` and `CREATE2`. Since the EIP-6780 update @ballet_eip-6780_2023, `SELFDESTRUCT` only destroys a contract if the contract was created in the same transaction. Therefore, `SELFDESTRUCT` can only modify a code and nonce within the same transaction, but cannot be used to attack an already submitted transaction $T_B$. The instructions to create a new contract, `CREATE` and `CREATE2`, both fail when there is already a contract at the target address @wood_ethereum_2024[p.11]. Therefore, we can only modify the code if the contract previously did not exist. In the case that $T_B$ interacts with some address $a$ that contains no code, the attacker needs `CREATE` or `CREATE2` to create a contract at the address $a$ to force a collision. This is not possible for arbitrary addresses, as the address computation uses the sender's address as an input to a hash function in both cases @wood_ethereum_2024[p.11]. A similar argument can be made about contract creation directly via the transaction and some init code.
Apart from instructions, the nonce of an EOA can also be increased by transactions themselves. $T_B$ could make a `CALL` or `CALLCODE` to the address of an EOA and transfer some Ether. The gas costs for these instructions depend on whether the recipient account already exists or has to be newly created. As such, if $T_B$ makes a `CALL` or `CALLCODE` to a non-existent account, an attacker could create this account in $T_A$ to reduce the gas costs of the transfer by $T_B$. We do not consider this an attack, as it only reduces the gas costs of $T_B$, which likely has no adverse effects.
Therefore, the remaining attack vectors are `SSTORE`, which can modify the storage of an account, and Ether transfers of `CALL`, `CALLCODE`, `SELFDESTRUCT`, which modify the balance of an account.
== Everything is TOD
Our definition of TOD is very broad and marks many transaction pairs as TOD. For instance, if a transaction $T_B$ uses some storage value for a calculation, then the execution likely depends on the transaction that previously has set this storage value. Similarly, when someone wants to transfer Ether, they can only do so when they first received that Ether. Thus, they are dependent on some transaction that gave them this Ether previously.
#proposition[For every transaction $T_B$ after the London upgrade#footnote[We reference the London upgrade here, as this introduced the base fee for transactions.], there exists a transaction $T_A$ such that $(T_A , T_B)$ is TOD.]
#proof[
Consider an arbitrary transaction $T_B$ with the sender being some address $italic("sender")$. The sender must pay some upfront cost $v_0 > 0$, because they must pay a base fee @wood_ethereum_2024[p.8-9]. Therefore, we must have $sigma(stateKey("balance", italic("sender"))) gt.eq v_0$. This requires that a previous transaction $T_A$ increased the balance of $italic("sender")$ to be high enough to pay the upfront cost, i.e. $pre(Delta_(T_A))(stateKey("balance", italic("sender"))) < v_0$ and $post(Delta_(T_A))(stateKey("balance", italic("sender"))) gt.eq v_0$.#footnote[For block validators, their balance could have also increased from staking rewards, rather than a previous transaction. However, this would require that a previous transaction gave them enough Ether for staking in the first place. @noauthor_proof--stake_nodate]
When we calculate $sigma - Delta_(T_A)$ for our TOD definition, we would set the balance of $italic("sender")$ to $pre(Delta_(T_A))(stateKey("balance", italic("sender"))) < v_0$ and then execute $T_B$ based on this state. In this case, $T_B$ would be invalid, as the $italic("sender")$ would not have enough Ether to cover the upfront cost.
]
Given this property, it is clear that TOD alone is not a useful attack indicator, since every transaction would be considered as having been attacked. In @sec:tod-attack-characteristics, we discuss more restrictive definitions.
= TOD candidate mining <cha:mining>
In this chapter, we discuss how we search for potential TODs in the Ethereum blockchain. We use the RPC from an archive node to obtain transactions and their state accesses and modifications. Then we search for collisions between these transactions to find TOD candidates. Lastly, we filter out TOD candidates that are not relevant to our analysis.
== TOD candidate finding
We make use of the RPC method `debug_traceBlockByNumber`, which allows for replaying all transactions of a block the same way they were originally executed. With the `prestateTracer` config, this method also outputs which part of the state has been accessed, and, using the `diffMode` config, which part of the state has been modified#footnote[When running the prestateTracer in diffMode, several fields are only implicit in the response. We need to make these fields explicit for further analysis. Refer to the documentation or the source code for further details.].
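To illustrate, below is a minimal Python sketch of such an RPC call. The node URL and the use of the `requests` library are assumptions, and error handling is reduced to the bare minimum.

```python
import requests

RPC_URL = "http://localhost:8545"  # assumption: a local archive node

def trace_block_prestates(block_number: int) -> list:
    """Replay all transactions of a block and return, per transaction,
    the accessed state (prestate) and the modified state (diff mode)."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "debug_traceBlockByNumber",
        "params": [
            hex(block_number),
            {"tracer": "prestateTracer", "tracerConfig": {"diffMode": True}},
        ],
    }
    response = requests.post(RPC_URL, json=payload, timeout=300)
    response.raise_for_status()
    # one trace entry per transaction in the block
    return response.json()["result"]
```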
By inspecting the source code of the tracers for Reth @paradigm_revm-inspectors_2024 and the results of the RPC call, we found that for every touched account, the result always includes the account's balance, nonce and code in the prestate. For instance, even when only the balance was accessed, it will also include the nonce in the prestate#footnote[I opened a #link("https://github.com/ethereum/go-ethereum/pull/30081")[pull request] to clarify this behavior and now this is also reflected in the documentation@noauthor_go-ethereum_2024-1.]. Therefore, we do not know precisely which part of the state has been accessed, which can be a source of false positives for collisions.
We store all the accesses and modifications in a database and then query for accesses and writes that have the same state key. As in our definition of collisions, we only match state keys where the first transaction modifies the state. We then use the transactions that cause these collisions as a preliminary set of TOD candidates.
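As a sketch of this query, the following SQLite statement joins writes against later accesses on the same state key. The table layout and the encoding of state keys are hypothetical, and row-value comparisons require a reasonably recent SQLite version.

```python
import sqlite3

# assumption: tables writes(block_number, tx_index, tx_hash, key) and
# accesses(block_number, tx_index, tx_hash, key), where `key` encodes a
# state key such as ("storage", address, slot)
COLLISION_QUERY = """
SELECT w.tx_hash AS tx_a, a.tx_hash AS tx_b, w.key
FROM writes w
JOIN accesses a
  ON a.key = w.key
 AND (w.block_number, w.tx_index) < (a.block_number, a.tx_index)
"""

def find_tod_candidates(con: sqlite3.Connection) -> list:
    # write-read collisions; write-write collisions are found analogously
    # by joining `writes` with itself
    return con.execute(COLLISION_QUERY).fetchall()
```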
== TOD candidate filtering
Many of the TOD candidates from the previous section are not relevant for our further analysis. To prevent unnecessary computation and distortion of our results, we define which TOD candidates are not relevant and then filter them out.
A summary of the filters is given in @tab:tod_candidate_filters with detailed explanations in the following sections. The filters are executed in the order as presented in the table and always operate on the output of the previous filter. The only exception is the "Same-value collision" filter, which is directly incorporated into the initial collisions query for performance reasons.
The "Block windows", "Same senders" and "Recipient Ether transfer" filters have already been used in @zhang_combatting_2023. The filters "Nonce and code collision" and "Indirect dependency" follow directly from our discussion above. Furthermore, we also applied an iterative approach, where we searched for TOD candidates in a sample block range and manually analyzed whether some of these TOD candidates may be filtered. This approach led us to the "Same-value collisions" and the "Block validators" filter.
#figure(
table(
columns: 2,
align: (left, left),
table.header([Filter name], [Description of filter criteria]),
table.hline(),
[Same-value collision], [Drop collision if $T_A$ writes a different value than the value accessed or overwritten by $T_B$.],
[Block windows], [Drop candidate if $T_A$ and $T_B$ are 25 or more blocks apart.],
[Block validators], [Drop collisions on the block validator’s balance.],
[Nonce and code collision], [Drop nonce and code collisions.],
[Indirect dependency], [Drop candidates $(T_A, T_B)$ with an indirect dependency, e.g. when candidates $(T_A , T_X )$ and $(T_X , T_B)$ exist.],
[Same senders], [Drop candidate if $T_A$ and $T_B$ are from the same sender.],
[Recipient Ether transfer], [Drop candidate if $T_B$ does not execute code.],
),
caption: flex-caption(
[TOD candidate filters sorted by usage order. When a filter describes the removal of collisions, the TOD candidates will be updated accordingly.],
[TOD candidate filters],
),
kind: table,
) <tab:tod_candidate_filters>
=== Filters
==== Same-value collisions
When we have many transactions that modify the same state, e.g. the balance of the same account, they all have write-write conflicts with each other. The number of TOD candidates grows quadratically with the number of transactions modifying the same state. For instance, if 100 transactions modify the balance of address $a$, the first transaction has a write-write conflict with the other 99 transactions, the second transaction with the remaining 98 transactions, etc., leading to a total of $frac(n^2 - n, 2) = 4,950$ TOD candidates.
To reduce this growth of TOD candidates, we additionally require for a collision that $T_A$ writes exactly the value that is read or overwritten by $T_B$. Formally, the following condition must hold to pass this filter:
$
forall K in colls(T_A , T_B) :
post(Delta_(T_A)) (K) = pre(Delta_(T_B)) (K)
$
With the example of 100 transactions modifying the balance of address $a$, when the first transaction sets the balance to 1234, it only has a write-write conflict with transactions where the balance of $a$ is exactly 1234 before the execution. If all transactions write different balances, this reduces the number of TOD candidates to $n - 1 = 99$.
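A minimal sketch of this filter condition, assuming the state changes are given as dictionaries mapping state keys to values:

```python
def passes_same_value_filter(post_a: dict, pre_b: dict, collisions: set) -> bool:
    """Keep the TOD candidate only if, for every colliding state key,
    T_A wrote exactly the value that T_B later reads or overwrites.
    post_a: values after executing T_A; pre_b: values before executing T_B."""
    return all(post_a[key] == pre_b[key] for key in collisions)
```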
Apart from the performance benefit, this filter also removes many TOD candidates that are potentially indirectly dependent. For instance, let us assume that we removed the TOD candidate $(T_A , T_B)$. By definition of this filter, there must be some key $K$ with $post(Delta_(T_A)) (K) != pre(Delta_(T_B)) (K)$, thus some transaction $T_X$ must have modified the state at $K$ between $T_A$ and $T_B$. Therefore, we also have a collision (and TOD candidate) between $T_A$ and $T_X$, and between $T_X$ and $T_B$. This is a potential indirect dependency, which may lead to unexpected results, as argued in @sec:weakness-indirect-dependencies.
==== Block windows
According to a study of 24 million transactions from 2019 @zhang_evaluation_2021, the maximum observed time it took for a pending transaction to be included in a block was below 200 seconds. Therefore, when a transaction $T_B$ is submitted, and someone instantly attacks it by creating a new transaction $T_A$, the inclusion of them in the blockchain differs by at most 200 seconds. We currently add a new block to the blockchain every 12 seconds according to Etherscan @etherscan_ethereum_2024, thus $T_A$ and $T_B$ are at most $200 / 12 approx 17$ blocks apart from each other. As the study is already five years old, we use a block window of 25 blocks instead to account for a potential increase in latency since then.
Thus, we filter out all TOD candidates, where $T_A$ is in a block that is 25 or more blocks away from the block of $T_B$.
==== Block validators
In Ethereum, each transaction must pay a transaction fee to the block validator and thus modifies the block validator’s balance. This makes each transaction pair in a block a TOD candidate, as they all modify the balance of the block validator’s address.
We exclude TOD candidates, where the only collision is the balance of any block validator.
==== Nonce and code collisions
We showed in @sec:relevant-collisions that nonce and code collisions are not relevant for TOD attacks. Therefore, we ignore collisions of this state type.
==== Indirect dependency
As argued in @sec:weakness-indirect-dependencies, indirect dependencies can cause unexpected results in our analysis, therefore we filter out TOD candidates that have an indirect dependency. We only consider the case where the indirect dependency is already visible in the normal order and accept that we may miss some indirect dependencies. Alternatively, we could also remove a TOD candidate $(T_A , T_B)$ whenever there exists a TOD candidate $(T_A , T_X)$ for some intermediary transaction $T_X$, however this would remove many more TOD candidates.
We already have a model of all direct (potential) dependencies with the TOD candidates. We can build a transaction dependency graph $G = (V , E)$ with $V$ being all transactions and $E = { (T_A , T_B) divides (T_A , T_B) in "TOD candidates" }$. We then filter out all TOD candidates $(T_A , T_B)$ where there exists a path $T_A , T_(X_1) , dots.h , T_(X_n) , T_B$ with at least one intermediary node $T_(X_i)$.
@fig:tod_candidate_dependency shows an example dependency graph, where transaction $A$ influences both $X$ and $B$ and $B$ is influenced by all other transactions. We filter out the candidate $(A , B)$ as there is a path $A -> X -> B$, but keep $(X , B)$ and $(C , B)$. A sketch of the corresponding reachability check is given below the figure.
#figure(
[
#text(size: 0.8em)[
#diagram(
node-stroke: .1em,
mark-scale: 100%,
edge-stroke: 0.08em,
node((3, 0), `A`, radius: 1.2em),
edge("-|>"),
node((2, 2), `X`, radius: 1.2em),
edge("-|>"),
node((4, 3), `B`, radius: 1.2em),
edge((3, 0), (4, 3), "--|>"),
edge("<|-"),
node((5, 1), `C`, radius: 1.2em),
)
]
],
caption: flex-caption(
[ Indirect dependency graph. An arrow from x to y indicates that y depends on x. A dashed arrow indicates an indirect dependency. ],
[Indirect dependency graph],
),
)
<fig:tod_candidate_dependency>
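The following Python sketch implements the reachability check with an iterative graph search; the encoding of transactions as identifiers and of candidates as pairs is an assumption.

```python
def has_indirect_dependency(candidates: set, t_a: str, t_b: str) -> bool:
    """Return True if t_b is reachable from t_a via at least one
    intermediary transaction in the graph induced by the TOD candidates."""
    successors: dict = {}
    for x, y in candidates:
        successors.setdefault(x, set()).add(y)
    # start from the successors of t_a, skipping the direct edge t_a -> t_b
    frontier = [x for x in successors.get(t_a, set()) if x != t_b]
    seen = set(frontier)
    while frontier:
        node = frontier.pop()
        for nxt in successors.get(node, set()):
            if nxt == t_b:
                return True
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# drop every TOD candidate that has an indirect dependency:
# filtered = {(a, b) for (a, b) in candidates
#             if not has_indirect_dependency(candidates, a, b)}
```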
==== Same sender
If the sender of both transactions is the same, the victim would attack themselves.
To remove these TOD candidates, we use the `eth_getBlockByNumber` RPC method and compare the sender fields for $T_A$ and $T_B$.
==== Recipient Ether transfer
If a transaction sends Ether without executing code, it only depends on the balance of the EOA that signed the transaction. Other entities can only increase the balance of this EOA, which has no adverse effects on the transaction.
Thus, we exclude TOD candidates, where $T_B$ has no code access.
== Experiment
In this section, we discuss the results of applying the TOD candidate mining methodology to a randomly sampled sequence of 100 blocks, different from the block range we used for the filters' development. Refer to @cha:reproducibility for the experiment setup.
We mined the blocks from block 19,830,547 up to block 19,830,647, containing a total of 16,799 transactions.
=== Performance
The mining process took a total of 502 seconds, with 311 seconds used to fetch the data via RPC calls and store it in the database, 6 seconds used to query the collisions in the database, 17 seconds for filtering the TOD candidates and 168 seconds for preparing statistics. If we consider the running time as the total time excluding the statistics preparation, we analyzed an average of 0.30 blocks per second.
We also see that 93% of the running time was spent fetching the data via the RPC calls and storing it locally. This could be parallelized to significantly speed up the process.
=== Filters
In @tab:experiment_filters, we see the number of TOD candidates before and after each filter, showing how many candidates were filtered at each stage. This shows the importance of filtering as we reduce the number of TOD candidates to analyze from more than 60 million to only 8,127.
Note that this does not directly imply that "Same-value collision" filters out more TOD candidates than "Block windows", as the filters operate on different sets of TOD candidates. Because of the order of filter application, even if "Block windows" had filtered out every remaining TOD candidate, it would still have removed fewer candidates than "Same-value collision".
#figure(
table(
columns: 3,
align: (left, right, right),
table.header([Filter name], [TOD candidates after filtering], [Filtered TOD candidates]),
table.hline(),
[(unfiltered)], [(lower bound) 63,178,557], [],
[Same-value collision], [56,663], [(lower bound) 63,121,894],
[Block windows], [53,184], [3,479],
[Block validators], [39,899], [13,285],
[Nonce collision], [23,284], [16,615],
[Code collision], [23,265], [19],
[Indirect dependency], [16,235], [7,030],
[Same senders], [9,940], [6,295],
[Recipient Ether transfer], [8,127], [1,813],
),
caption: flex-caption(
[This table shows the application of all filters used to reduce the number of TOD candidates. Filters were applied from top to bottom. Each row shows how many TOD candidates remained and were filtered. The unfiltered value is a lower bound, as we only calculated this number afterwards, and the calculation does not include write-write collisions.],
[TOD candidate filters evaluation],
),
kind: table,
)
<tab:experiment_filters>
=== Transactions
After applying the filters, 7,864 transactions are part of at least one TOD candidate, which amounts to 46.8% of all transactions being marked as potentially TOD with some other transaction. Only 2,381 of these transactions are part of exactly one TOD candidate. At the other end, one transaction is part of 22 TOD candidates.
=== Block distance
In @fig:tod_block_dist, we see that most TOD candidates are within the same block. Moreover, the further two transactions are apart, the less likely we are to include them as a TOD candidate. A reason for this may be that having many intermediary transactions makes it more likely to be filtered by our "Indirect dependency" filter. Nonetheless, we can conclude that when using our filters, the block window can be reduced even further without missing many TOD candidates.
#figure(
image("charts/tod_candidates_block_dist.png", width: 80%),
caption: flex-caption(
[
The histogram and the empirical cumulative distribution function (eCDF) of the block distance for TOD candidates. The blue bars show how many TOD candidates have been found, where $T_A$ and $T_B$ are $n$ blocks apart. The orange line shows the percentage of TOD candidates that are at most $n$ blocks apart.
],
[Block distances of TOD candidates],
),
)
<fig:tod_block_dist>
=== Collisions
After applying our filters, we have 8,818 storage collisions and 5,654 balance collisions remaining. When we analyze how often each account is part of a collision, we see that collisions are concentrated around a small set of accounts. For instance, the five accounts with the most collisions#footnote[All of them are token accounts:
#link("https://etherscan.io/address/0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2")[WETH],
#link("https://etherscan.io/address/0x97a9a15168c22b3c137e6381037e1499c8ad0978")[DOP],
#link("https://etherscan.io/address/0xdac17f958d2ee523a2206206994597c13d831ec7")[USDT],
#link("https://etherscan.io/address/0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48")[USDC]
and
#link("https://etherscan.io/address/0xf<KEY>")[CHOPPY]]
contribute 43.0% of all collisions. In total, the collisions occur in only 1,472 different account states.
@fig:collisions_address_limit depicts how many collisions we get when we only consider the first $n$ collisions for each address. If we set the limit to one collision per address, we end up with 1,472 collisions, which is exactly the number of unique addresses where collisions happened. When we keep 10 collisions per address, we get 3,964 collisions. This criterion already reduces the number of collisions by 73%, while still retaining a sample of 10 collisions for each address.
This paper tries to obtain a diverse set of attacks. With such a strong imbalance towards a few contracts, analyzing the TOD candidates related to these frequent addresses would take a long time, and the resulting attacks would likely be related and not cover a wide range of attack types. To prevent this, we define additional deduplication filters in @sec:deduplication.
#figure(
image("charts/collisions_limited_per_address.png", width: 80%),
caption: flex-caption(
[
The chart shows how many collisions we have when we limit the number of collisions we include per address. For instance, if we only included 10 collisions for each address we would end up with about 4,000 collisions.
],
[Limit for collisions per address],
),
)
<fig:collisions_address_limit>
== Deduplication <sec:deduplication>
To reduce the prevalence of specific contracts among the TOD candidates, we randomly pick 10 collisions of each contract and drop the rest. We apply three mechanisms to group similar contracts:
Firstly, we group the collisions by the address where they happened and randomly select 10 collisions from each group. For instance, if many transactions access the balance and code of the same address, we would only retain 10 of these accesses.
Secondly, we also group collisions at different addresses if the addresses share exactly the same code. To do so, we group the collisions by the code hash and sample 10 collisions per code hash.
Finally, instead of matching for exactly the same code, we also group similar codes together. We use the grouping mechanism from @di_angelo_bytecode_2024, where the authors compute a "skeleton" for each code by removing the metadata and the values for `PUSH` instructions. They have shown that codes with the same skeleton mostly yield the same vulnerability detection results. Therefore, we only keep 10 collisions per code skeleton.
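Below is a simplified sketch of the skeleton computation, assuming the metadata trailer has already been stripped; the exact procedure is described in @di_angelo_bytecode_2024.

```python
def skeleton(code: bytes) -> bytes:
    """Zero out the immediate operands of PUSH1..PUSH32 (opcodes 0x60-0x7f),
    so that codes differing only in pushed constants map to the same skeleton."""
    out = bytearray()
    i = 0
    while i < len(code):
        op = code[i]
        out.append(op)
        i += 1
        if 0x60 <= op <= 0x7F:       # PUSH1 .. PUSH32
            n = op - 0x5F            # number of operand bytes (1..32)
            out.extend(b"\x00" * n)  # replace the pushed value with zeros
            i += n
    return bytes(out)
```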
=== Results
We ran the same experiment as in the previous section, but now with the additional deduplication filters. In @tab:experiment_deduplication, we see that from the initial 8,127 TOD candidates, only 2,320 remain after removing duplicates. Most TOD candidates are removed by limiting the number of collisions per address; the other group limits reduce the number further.
#figure(
table(
columns: 3,
align: (left, right, right),
table.header([Filter name], [TOD candidates after filtering], [Filtered TOD candidates]),
table.hline(),
[(previous filters)], [8,127], [],
[Limited collisions per address], [2,645], [5,482],
[Limited collisions per code hash], [2,435], [210],
[Limited collisions per skeleton], [2,320], [115],
),
caption: flex-caption(
[This table shows the application of the deduplication filters. We start with the TOD candidates from @tab:experiment_filters and then apply each deduplication filter.],
[TOD candidate deduplication evaluation],
),
kind: table,
)
<tab:experiment_deduplication>
= TOD detection <sec:tod-detection>
After mining a list of TOD candidates, we now check which of them are actually TOD. We first execute $T_A$ and $T_B$ according to the normal and reverse scenario defined in @sec:tod-simulation. Then we compare the state changes of the scenarios to apply the definitions for TOD and approximately TOD.
== Transaction execution via RPC <sec:transaction-execution-rpc>
Let $(T_A, T_B)$ be our TOD candidate. We split the block containing $T_B$ into three sections:
$ sigma ->^(T_X_0) dots ->^(T_X_n) sigma_X_n ->^(T_B) sigma_T_B ->^(T_Y_0) dots ->^(T_Y_m) sigma_B $
In the normal scenario, we want to execute $T_B$ on $sigma_X_n$ and in the reverse scenario on $sigma_X_n - Delta_T_A$. We use the `debug_traceCall` RPC method for these transaction executions. As parameters, it takes the transaction data, a block number that specifies the block environment and the initial world state, and state overrides that allow us to customize specific parts of the world state. Per default, the method uses the world state #emph[after] executing all transactions in this block, i.e. $sigma_B$. Therefore, we use the state overrides parameter to get from $sigma_B$ to $sigma_X_n$ and $sigma_X_n - Delta_T_A$.
For the normal scenario, we want to execute $T_B$ on $sigma_X_n$. Conceptually, we start from $sigma_B$ and then undo all transaction changes after $T_X_n$ in reverse order, to reach $sigma_X_n$. We do this with the state overrides $sum_(i=m)^0(-Delta_T_Y_i) - Delta_T_B$. For the reverse scenario, we also subtract $Delta_T_A$ from the state overrides, thus simulating how $T_B$ behaves without the changes from $T_A$, giving us the state change $Delta'_T_B$.
To execute $T_A$ in the normal scenario we use the same method as for $T_B$, except that we apply it on the block of $T_A$. For the reverse scenario, we take the state overrides from the normal scenario and add $Delta'_T_B$ to it, simulating how $T_A$ behaves after executing $T_B$. This yields the state changes $Delta'_T_A$.
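The following sketch shows the override computation under the assumption that each state change is a dictionary mapping state keys to (pre, post) value pairs.

```python
def state_overrides(deltas_to_undo: list) -> dict:
    """Undo a sequence of state changes by mapping every changed key to its
    pre-value. The list must be ordered from the latest to the earliest
    transaction, so that the earliest pre-value wins for shared keys."""
    overrides: dict = {}
    for delta in deltas_to_undo:
        for key, (pre, _post) in delta.items():
            overrides[key] = pre
    return overrides

# normal scenario for T_B: undo T_Y_m .. T_Y_0 and T_B itself
# overrides_normal = state_overrides(list(reversed(deltas_y)) + [delta_b])
# reverse scenario for T_B: additionally undo T_A
# overrides_reverse = state_overrides(list(reversed(deltas_y)) + [delta_b, delta_a])
```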
== Execution inaccuracy <sec:execution-inaccuracy>
While manually testing this method, we found that using `debug_traceCall` with state overrides can lead to incorrect gas cost calculations with Erigon#footnote[See https://github.com/erigontech/erigon/issues/11254.]. To account for these inaccuracies, we compare the state changes from the normal execution via `debug_traceCall` with the state changes from `debug_traceBlockByNumber`. As we do not provide state overrides to `debug_traceBlockByNumber`, this method should yield the correct state changes, and we can detect differences to our simulation.
If the state changes of a transaction only differ in the balances of the senders and the block validators, we keep TOD candidates containing this transaction. Such differences are to be expected when gas costs vary, as the gas costs affect the priority fee sent from the transaction sender to the block validator. However, if there are other differences, we exclude the transaction from further analysis, as the simulation does not reflect the actual behavior in such cases.
A drawback of this inaccuracy is that we do not detect Ether flows between the senders of $T_A$ and $T_B$ that are TOD. For instance, if the sender of $T_A$ sends one Ether to the sender of $T_B$ in the normal scenario, but two Ether in the reverse scenario, then $(T_A, T_B)$ is TOD. However, our analysis would assume that the Ether changes are due to incorrect gas cost calculations and exclude the TOD candidate from further analysis.
== TOD assessment
We use the state changes $Delta_T_A$ and $Delta_T_B$ from the normal scenario and $Delta'_T_A$ and $Delta'_T_B$ from the reverse scenario to check for TOD. For the approximation, we test $Delta_T_B changesDiffer Delta'_T_B$ and for the exact definition we test $angle.l Delta_T_A, Delta_T_B angle.r changesDiffer angle.l Delta'_T_A, Delta'_T_B angle.r$.
@alg:tod-assessment shows how we perform these state change comparisons. The changed keys, prestates and poststates are obtained from the RPC calls. The black lines show the calculation for the approximation and the blue lines the modifications for the exact definition. For each state key, we compute the change for this key in the normal scenario ($d_1$), and the change in the reverse scenario ($d_2$). If the changes differ between the scenarios, we have a TOD.
#figure(
kind: "algorithm",
caption: flex-caption(
[TOD assessment],
[TOD assessment],
),
pseudocode-list(hooks: 0.5em)[
+ *for* $K in changedKeys(Delta_T_B) union changedKeys(Delta'_T_B)$
+ #hide[*for* $K$] #text(fill: blue)[$union changedKeys(Delta_T_A) union changedKeys(Delta'_T_A)$]
+ $d_1 = post(Delta_T_B)(K) - pre(Delta_T_B)(K)$
+ $d_2 = post(Delta'_T_B)(K) - pre(Delta'_T_B)(K)$
+ #text(fill: blue)[$d_1 = d_1 + post(Delta_T_A)(K) - pre(Delta_T_A)(K)$]
+ #text(fill: blue)[$d_2 = d_2 + post(Delta'_T_A)(K) - pre(Delta'_T_A)(K)$]
+ *if* $d_1 != d_2$
+ *return* \<TOD\>
+ *return* \<not TOD\>
],
) <alg:tod-assessment>
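A direct Python translation of @alg:tod-assessment might look as follows, assuming each state change maps a state key to a (pre, post) pair of integer-encoded values.

```python
def is_tod(delta_a: dict, delta_b: dict,
           delta_a_rev: dict, delta_b_rev: dict,
           exact: bool = False) -> bool:
    """exact=False checks the TOD approximation, exact=True the exact
    definition (the blue lines in the algorithm)."""
    def change(delta: dict, key) -> int:
        pre, post = delta.get(key, (0, 0))  # missing key: no change
        return post - pre

    keys = set(delta_b) | set(delta_b_rev)
    if exact:
        keys |= set(delta_a) | set(delta_a_rev)
    for key in keys:
        d1 = change(delta_b, key)
        d2 = change(delta_b_rev, key)
        if exact:
            d1 += change(delta_a, key)
            d2 += change(delta_a_rev, key)
        if d1 != d2:
            return True
    return False
```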
== Experiment
We checked all 2,320 TOD candidates we found previously for TOD and approximately TOD. We then compare the results of these, to evaluate how well the approximation performs in practice.
=== Results
In @tab:experiment_check_definition, we see the results for both definitions. From the 2,320 TOD candidates we analyzed, slightly more than one third are TOD according to both definitions. For the approximation, 19 TOD candidates cannot be analyzed because of execution inaccuracies. For the exact definition, this number is higher, as we need to execute double the amount of transactions.
With both definitions, for 29% of the TOD candidates, $T_B$ fails because of insufficient funds to cover the transaction fee when it is executed without the state changes by $T_A$. This can happen when $T_A$ transfers Ether to the sender of $T_B$, and $T_B$ has less balance than the transaction fee without this transfer. Furthermore, if the execution of $T_B$ consumes more gas without the changes of $T_A$, it needs to pay a higher transaction fee which can also lead to insufficient funds. In both cases, the existence of $T_A$ enables the execution of $T_B$, therefore we do not consider these to be TOD attacks and ignore them from further analysis.
Finally, one error occurred when analyzing for the TOD approximation that did not occur with the exact definition. However, this error is not reproducible and was potentially a temporary fault of the RPC requests.
#figure(
table(
columns: 3,
align: (left, right, right),
table.header([Result], [Approximately TOD], [TOD]),
[TOD], [809], [775],
[not TOD], [819], [839],
[inaccurate execution], [19], [34],
[insufficient Ether], [672], [672],
[error], [1], [0],
),
caption: flex-caption(
[The results of analyzing TOD candidates for TOD and the approximation of TOD.],
[TOD checking with definition comparison.],
),
kind: table,
)
<tab:experiment_check_definition>
=== Analysis of differences <sec:analysis-of-differences>
To understand in which cases the two definitions lead to different results, we manually evaluate the cases where one result is TOD and the other is not. To assist the analysis, we let our tool output the relative changes of each transaction in both scenarios. In all cases, we verify that manually applying @alg:tod-assessment to these relative changes gives the same result as the automatic application, ensuring that the algorithm is implemented correctly.
Our analysis shows that 34 TOD candidates have been marked as approximately TOD but not TOD. As such, we have $Delta_T_B changesDiffer Delta'_T_B$ and $angle.l Delta_T_A, Delta_T_B angle.r changesEqual angle.l Delta'_T_A, Delta'_T_B angle.r$. In all these cases, the differences of $T_A$ between the normal and reverse scenario balance out the differences of $T_B$ between the normal and reverse scenario. One example is discussed in detail in @app:analysis-of-definition-differences.
Further 10 TOD candidates are TOD but not approximately TOD, i.e. $angle.l Delta_T_A, Delta_T_B angle.r changesDiffer angle.l Delta'_T_A, Delta'_T_B angle.r$ but $Delta_T_B changesEqual Delta'_T_B$. In these cases, $T_A$ creates different state changes depending on whether it was executed before or after $T_B$, thus being TOD. The execution of $T_B$ is not dependent on the transaction order.
A weakness of this comparison is that we use TOD candidates that are tailored for the TOD approximation and therefore TOD candidates that are TOD may be underrepresented. This could be why we found 34 TOD candidates that are approximately TOD but not TOD, while we only found 10 TOD candidates that are TOD but not approximately TOD.
Nonetheless, of the 1,628 TOD candidates labeled as TOD or not TOD according to our approximation, we obtained the same label with the exact TOD definition for 96.4% of these TOD candidates. In the case that TOD transaction pairs are underrepresented in our sample, this still demonstrates that most candidates labeled as approximately TOD are also TOD.
= TOD attack characteristics <sec:tod-attack-characteristics>
Previously, we noted that the TOD definition is too general to be directly used for attack or vulnerability detection. In this section, we discuss several characteristics of TOD attacks that cover more specific cases than the general TOD definition.
== Attacker gain and victim losses <sec:gain-and-loss-property>
In @sec:tod-relation-previous-works, we already discussed how the definition by #cite(<zhang_combatting_2023>, form: "prose") relates to our preliminary definition of TOD. We now present their definition in more detail.
Their definition considers two transaction orderings: $T_A -> T_B -> T_P$ and $T_B -> T_A -> T_P$. When an attack occurs, $T_A$ and $T_B$ are TOD. The transaction $T_P$ is an optional third transaction, which sometimes is required for the attacker to make financial profits. Our study only considers transaction pairs, therefore we adapt their definition and remove $T_P$ from it.
They define an attack to occur when both of the following properties hold:
+ Attacker Gain: "The attacker obtains financial gain in the attack scenario compared with the attack-free scenario."
+ Victim Loss: "The victim suffers from financial loss in the attack scenario compared with the attack-free scenario."
Their attack scenario corresponds to the normal order and the attack-free scenario to the reverse order.
For financial gains and losses, they consider Ether and ERC-20, ERC-721, ERC-777, and ERC-1155 tokens. As an attacker, they consider either the sender of $T_A$ or the contract that $T_A$ calls. The rationale for using the contract that $T_A$ calls is that it may be designed to conduct attacks and temporarily store the profits (see e.g. @torres_frontrunner_2021 for more details). The victim is the sender of $T_B$.
=== Formalization
The authors of @zhang_combatting_2023 do not provide a precise definition of attacker gain and victim loss, therefore we formalize these definitions. For simplicity, we do not explicitly mention $T_A$ and $T_B$ in all formulas, but assume that we inspect a specific TOD candidate $(T_A, T_B)$ and usages of the normal and reverse scenario refer to these two transactions.
==== Assets
#let assetsNormal = "assets_normal"
#let assets = $"Assets"(T_A, T_B)$
#let assetsReverse = "assets_reverse"
We use $assets$ to denote a set of assets that occur in $T_A$ and $T_B$ in any of the scenarios. As an asset, we consider Ether and the tokens that implement one of the standards ERC-20, ERC-721, ERC-777 or ERC-1155. Let $assetsNormal(C, a) in ZZ$ be the amount of assets $C$ that address $a$ gained or lost by executing both transactions in the normal scenario. Let $assetsReverse(C, a)$ be the counterpart for the reverse scenario.
For example, assume an address $a$ converts 1 Ether to 3,000 #link("https://etherscan.io/token/0xdac17f958d2ee523a2206206994597c13d831ec7")[USDT] tokens in the normal scenario, but in the reverse scenario converts 1 Ether to only 2,500 USDT. The assets that occur are $assets = {"Ether", "USDT"}$. The asset changes are $assetsNormal("Ether", a) = -1$, $assetsNormal("USDT", a) = 3,000$, $assetsReverse("Ether", a) = -1$ and $assetsReverse("USDT", a) = 2,500$.
For Ether, we use the `CALL` and `CALLCODE` instructions to compute which addresses gained and lost Ether in a transaction. We do not include the transaction value, as it stays the same regardless of the transaction order#footnote[In the course of the evaluation, we actually discover that it would make sense to include the transaction value. See @sec:attacker-gain-victim-loss-shortcomings.]. Furthermore, we ignore gas costs because of the inaccuracies described in @sec:execution-inaccuracy.
To track the gains and losses for tokens we use the following standardized events:
- ERC-20: `Transfer(address _from, address _to, uint256 _value)`
- ERC-721: `Transfer(address _from, address _to, uint256 _tokenId)`
- ERC-777: `Minted(address operator, address to, uint256 amount, bytes data, bytes operatorData)`
- ERC-777: `Sent(address operator,address from,address to,uint256 amount,bytes data,bytes operatorData)`
- ERC-777: `Burned(address operator, address from, uint256 amount, bytes data, bytes operatorData)`
- ERC-1155: `TransferSingle(address _operator, address _from, address _to, uint256 _id, uint256 _value)`
- ERC-1155: `TransferBatch(address _operator, address _from, address _to, uint256[] _ids, uint256[] _values)`
We only consider calls and event logs if their call context has not been reverted. In Ethereum, a reverted call context means that all changes except for the gas payment are discarded; therefore, reverted calls and logs do not influence the gained or lost assets.
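As an illustration, the sketch below aggregates ERC-20 `Transfer` events from the non-reverted logs into per-address asset changes. The log encoding follows the usual RPC format (hex-string topics and data), and the topic hash is the keccak-256 hash of the standard `Transfer(address,address,uint256)` signature.

```python
from collections import defaultdict

# keccak256("Transfer(address,address,uint256)")
ERC20_TRANSFER_TOPIC = (
    "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"
)

def erc20_changes(logs: list) -> dict:
    """Aggregate ERC-20 Transfer events into (token, address) -> net change.
    Note: ERC-721 Transfer events share this topic but index the tokenId;
    a full implementation must distinguish the two by the topic count."""
    changes: dict = defaultdict(int)
    for log in logs:
        if not log["topics"] or log["topics"][0] != ERC20_TRANSFER_TOPIC:
            continue
        token = log["address"]
        sender = "0x" + log["topics"][1][-40:]    # last 20 bytes of the topic
        receiver = "0x" + log["topics"][2][-40:]
        value = int(log["data"], 16)
        changes[(token, sender)] -= value
        changes[(token, receiver)] += value
    return dict(changes)
```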
==== Attacker gain and victim loss
#let gain = "Gain"
#let onlyGain = "OnlyGain"
#let loss = "Loss"
#let onlyLoss = "OnlyLoss"
#let attack = "Attack"
#let attacker = "attacker"
#let victim = "victim"
#let sender = "sender"
#let recipient = "recipient"
We use the following predicates to express the existence of some asset gain or loss for an address $a$:
$
gain(a) &<-> exists C in assets: assetsNormal(C, a) > assetsReverse(C, a)\
loss(a) &<-> exists C in assets: assetsNormal(C, a) < assetsReverse(C, a)\
$
Continuing the previous example of converting Ether to USDT tokens, we have $gain(a) = top$, as $a$ receives more USDT in the normal scenario than in the reverse scenario, and $loss(a) = bot$, as there is no asset for which $a$ ends up with less in the normal scenario than in the reverse scenario.
However, we also need to consider the case where both $gain(a)$ and $loss(a)$ are true. For instance, the attacker may gain more USDT tokens but also pay more Ether in the normal scenario. It is not trivial to compare arbitrary assets in Ether, therefore we cannot determine whether the lost Ether outweighs the gained tokens. To avoid such cases, we introduce the following two predicates:
$
onlyGain(a) &<-> gain(a) and not loss(a)\
onlyLoss(a) &<-> loss(a) and not gain(a)\
$
Note that this only considers assets we explicitly model. In the case that $a$ loses some asset that is not modeled, e.g. a token not implementing any of the above standards, $onlyGain(a)$ can be true despite having losses of an unmodeled asset. This is a limitation when not all relevant assets that occur in $T_A$ and $T_B$ are modeled.
With $onlyGain$ and $onlyLoss$ we define an attack to occur when the attacker has only advantages in the normal scenario compared to the reverse scenario, and the victim has only disadvantages:
$
attack <-> (&onlyGain(sender(T_A)) or onlyGain(recipient(T_A)))\
and &onlyLoss(sender(T_B))
$
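A minimal sketch of these predicates, assuming the asset changes are given as dictionaries from (asset, address) pairs to net amounts:

```python
from collections import defaultdict

def is_attack(assets_normal: dict, assets_reverse: dict, assets: set,
              sender_a: str, recipient_a: str, sender_b: str) -> bool:
    n = defaultdict(int, assets_normal)   # missing entries default to 0
    r = defaultdict(int, assets_reverse)

    def gain(a): return any(n[(c, a)] > r[(c, a)] for c in assets)
    def loss(a): return any(n[(c, a)] < r[(c, a)] for c in assets)
    def only_gain(a): return gain(a) and not loss(a)
    def only_loss(a): return loss(a) and not gain(a)

    return (only_gain(sender_a) or only_gain(recipient_a)) \
        and only_loss(sender_b)
```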
We note that the definition by @zhang_combatting_2023 is not explicit about how different kinds of assets are compared. As such, our attack formalization may differ from their intention and implementation. It is a best effort to match their implementation and also the definitions of the subsequent work @zhang_nyx_2024#footnote[We use the tests in `profit_test.go` @zhang_erebus-redgiant_2023 and Appendix A of @zhang_nyx_2024 to understand the intended definition.].
== Securify TOD properties
The authors of Securify describe three TOD properties @tsankov_securify_2018:
- *TOD Transfer*: "[...] the execution of the Ether transfer depends on transaction ordering".
- *TOD Amount*: "[...] the amount of Ether transferred depends on the transaction ordering".
- *TOD Receiver*: "[...] the recipient of the Ether transfer might change, depending on the transaction ordering".
For Ether transfers, they consider only `CALL` instructions. We also use `CALLCODE` instructions, as these can be used to transfer Ether similar to `CALL`s.
The properties can be applied by comparing the execution of a transaction in the normal scenario with the reverse scenario. We say that a property holds for a transaction pair $(T_A, T_B)$ if it holds for at least one of the transactions $T_A$ and $T_B$, i.e. at least one of the transactions shows attack characteristics.
=== Formalization
#let location = math.italic("Loc")
#let instruction = math.italic("Instruction")
#let inputs = math.italic("Inputs")
#let contextAddr = math.italic("ContextAddress")
#let pc = math.italic("ProgramCounter")
We denote the execution of an instruction as a tuple $(instruction, location, inputs)$. The location $location$ is a tuple $(contextAddr, pc)$, where $contextAddr$ is the address that is used for storage and balance accesses when executing the instruction, and $pc$ is the byte offset of the instruction in the executed code. Finally, $inputs$ is a sequence of stack values passed as arguments to the instruction.
#let normalCalls = $F_N$
#let reverseCalls = $F_R$
Let $normalCalls$ denote the set of `CALL` and `CALLCODE` instruction executions with a positive value (i.e. $inputs[2] > 0$) in the normal scenario and $reverseCalls$ the equivalent for the reverse scenario. We exclude calls that have been reverted. For a call execution $C in normalCalls$, we denote its location with $C_L$, the value it transfers with $C_v$ and the recipient of the transfer with $C_a$.
==== TOD Transfer
If there is a location where the number of `CALL`s differs between the normal and the reverse scenario, we say that TOD Transfer is fulfilled:
$
"TOD-Transfer" <-> exists l o c: |{C in normalCalls | C_L = l o c}| != |{C in reverseCalls | C_L = l o c}|
$
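Since the property compares the number of occurrences per location, a multiset comparison suffices. A sketch, assuming each call is encoded as a (location, value, recipient) tuple of a non-reverted `CALL` or `CALLCODE` with positive value:

```python
from collections import Counter

def tod_transfer(calls_normal: list, calls_reverse: list) -> bool:
    """TOD Transfer holds if the multiset of call locations differs
    between the normal and the reverse scenario."""
    return (Counter(loc for loc, _v, _to in calls_normal)
            != Counter(loc for loc, _v, _to in calls_reverse))

# TOD Amount is analogous with Counter((loc, value)) and the additional
# requirement that TOD Transfer does not already hold; TOD Receiver uses
# Counter((loc, recipient)).
```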
==== TOD Amount
If there is a location and a value for which the number of `CALL`s transferring that value differs between the normal and the reverse scenario, we say that TOD Amount is fulfilled:
$
"TOD-Amount" <-> ¬"TOD-Transfer"\
&and exists l o c, v:
|{C in normalCalls | C_L = l o c and C_v = v}| != |{C in reverseCalls | C_L = l o c and C_v = v}|
$
We exclude cases where TOD Transfer is fulfilled, as TOD Amount would always be fulfilled if TOD Transfer is fulfilled.
For the case that at most one call happens per location, we could directly compare the values used at each call between the normal and reverse scenario. However, with loops, multiple call executions can happen at the same location, which is why we choose a definition that compares the number of occurrences.
For example, consider a case where the normal scenario has three `CALL`s at the same location $l$: two with value 5 and one with value 6. In the reverse scenario, there is only one `CALL` with value 5 and one with value 6. For location $l$ and value 5, two `CALL`s exist in the normal scenario but only one in the reverse scenario, therefore TOD Amount is fulfilled.
==== TOD Receiver
We define TOD Receiver analogously to TOD Amount, except that we use the `address` input instead of the `value`:
$
"TOD-Receiver" <-> ¬"TOD-Transfer"\
&and exists l o c, a: |{C in normalCalls | C_L = l o c and C_a = a}| != |{
C in reverseCalls | C_L = l o c and C_a = a
}|
$
== ERC-20 multiple withdrawal
Finally, we also consider ERC-20 multiple withdrawal attacks, which we already discussed in @sec:erc-20-multiple-withdrawal. The ERC-20 standard defines that the following events must be emitted when an approval takes place and when tokens are transferred @vogelsteller_erc-20_nodate.
#let transfer = `Transfer`
#let approval = `Approval`
- `Approval(address _owner, address _spender, uint256 _value)`
- `Transfer(address _from, address _to, uint256 _value)`
As a pattern to detect ERC-20 multiple withdrawal attacks we require the following conditions to be true:
+ Executing $T_A$ in the normal scenario must emit an event $transfer(v, a, x)$ at address $t$.
+ Executing $T_B$ in the normal scenario must emit an event $approval(v, a, y)$ at address $t$.
+ Executing $T_B$ in the reverse scenario must emit an event $approval(v, a, y)$ at address $t$.
The variable $a$ represents the attacker address, $v$ the victim address, $x$ the transferred value, and $y$ the approved value. We require that the events are not reverted.
As shown in @tab:erc20-multiple-withdrawal-example, executions of `transferFrom` and `approve` can be TOD because `approve` overwrites the currently approved value with the newly approved value. While this behavior is standardized in @vogelsteller_erc-20_nodate, other methods may prevent ERC-20 multiple withdrawal attacks by making a relative increase of the approved value rather than overwriting it. To ensure that there is indeed an overwrite, we require that the approval in the normal scenario is equal to the one in the reverse scenario. If there were a relative change of the approval, the approved value $y$ would differ.
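A sketch of the pattern check, assuming the events are normalized to (token, victim, attacker, value) tuples and reverted events are already excluded:

```python
def multiple_withdrawal(transfers_a_normal: set,
                        approvals_b_normal: set,
                        approvals_b_reverse: set) -> bool:
    """Check conditions 1-3: a Transfer(v, a, x) at token t in T_A (normal)
    and the same Approval(v, a, y) at token t in T_B in both scenarios."""
    for token, victim, attacker, _x in transfers_a_normal:
        for t2, v2, a2, y in approvals_b_normal:
            if ((t2, v2, a2) == (token, victim, attacker)
                    and (t2, v2, a2, y) in approvals_b_reverse):
                return True
    return False
```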
== Trace analysis
To check for the TOD characteristics, we use the same approach to compute state overrides for the normal and reverse scenario as in @sec:transaction-execution-rpc. The `debug_traceCall` method allows the definition of a custom tracer in Javascript that can process each execution step. We use this tracer to track `CALL` and `CALLCODE` instructions and token events.
The Javascript tracer is described in @app:javascript-tracer. When executing a transaction, it returns all non-reverted `CALL`, `CALLCODE`, `LOG0`, `LOG1`, `LOG2`, `LOG3` and `LOG4` instructions and their inputs. We parse the call instructions to obtain Ether changes and the log instructions for token changes and ERC-20 `Approval` events. The results are used to check for the previously defined characteristics.
= Evaluation <sec:evaluation>
In this section, we evaluate the methods proposed above. We use a dataset from @zhang_combatting_2023 as a ground truth to evaluate our TOD candidate mining, the TOD detection and the detection of the attacker gain and victim loss characteristic. For the Securify and ERC-20 multiple withdrawal characteristics, we rely solely on a manual evaluation.
The ground truth dataset contains 6,765 attacks in the block range of 11,299,000 to 11,300,000. From these attacks, 5,601 contain no profit transaction $T_P$, matching our adapted definition of the attacker gain and victim loss property, which excludes $T_P$. The study by @torres_frontrunner_2021 also investigated this block range, and the attacks they found are a subset of the 6,765 attacks @zhang_combatting_2023. Therefore, showing that our method works well for this ground truth indirectly also shows that it works well for the results of @torres_frontrunner_2021.
First, we combine the TOD candidate mining, the TOD detection, and the TOD attack analysis method to analyze this block range. The results are discussed in @sec:overall-evaluation, where we evaluate our method for false positives. Afterwards, we compare the results of each step individually with the ground truth to check for false negatives.
== Evaluation limitations
We note that in our evaluation, we verify the correctness of the normal scenario; however, our verification of the reverse scenario is limited, as we do not have access to a ground truth for comparison.
For the normal scenario, we can directly compare it with data from the blockchain, as the executions in the normal scenario should equal the executions that happened on the blockchain. We will use Etherscan @etherscan_ethereum_nodate to access this blockchain data.
In contrast, for the reverse scenario, we simulate a transaction order that did not occur on the blockchain. We can verify that our normal and reverse scenarios are suitable for detecting TOD attacks by comparing our results to the ground truth dataset. However, we only have the results given in the dataset and not the exact executions of the reverse scenario. Therefore, when we encounter differences, we cannot conduct an in-depth analysis to understand why they occur between our method and the ground truth. To allow future research to make in-depth comparisons, we provide traces for all cases that we manually analyze (see @sec:data-availability), which contain each execution step of the normal and reverse scenarios.
We also compare our normal scenario with the reverse scenario and evaluate where these executions differ. We do so in @sec:evaluate-reverse-scenario, where we verify that the first difference between the normal and reverse scenario matches the state calculations we perform according to the definitions of the normal and reverse scenarios.
== Overall evaluation <sec:overall-evaluation>
We mined TOD candidates in the 1,000 blocks starting at 11,299,000, which resulted in 14,500 TOD candidates. From those, the TOD detection reported 2,959 as TOD. For 280 of these transaction pairs, we found an attacker gain and victim loss.
We compare the TOD candidates, TODs, and TOD attacks we found against the ground truth in @tab:eval-overall. Our mining procedure marks 115 of the attacks in the ground truth as TOD candidates. From the 115 TOD candidates, 95 are detected as TOD, and of those, 85 are marked as an attack.
When mining the TOD candidates we drop 98% of the ground truth attacks. The following steps drop another 26% of the attacks. We evaluate the reasons for this in the following sections, where we evaluate each component individually.
This section focuses on the 195 attacks we found that are not part of the ground truth.
#figure(
table(
columns: 4,
align: (left, right, right, right),
table.header([In ground truth], [TOD candidate], [TOD], [Attacker gain and victim loss]),
table.hline(),
[Yes], [115], [95], [85],
[No], [14,385], [2,864], [195]
),
caption: flex-caption(
[Comparison of results with the 5,601 attacks from the ground truth. The first row shows how many of the 5,601 attacks in the ground truth are also found by our analysis at the individual stages. The second row shows the results our method found, which are not in the ground truth.],
[Comparison of results with the ground truth.],
),
kind: table,
)<tab:eval-overall>
=== Block window filter
#cite(<zhang_combatting_2023>, form: "prose") only consider transactions within block windows of size three. If transactions are three or more blocks apart from each other, they are not part of their analysis. We use a block window of size 25, therefore finding more attacks.
Of the 195 attacks we find that are not in the ground truth, only 19 are within a block window of size 3.
=== Manual analysis of attacks
We manually evaluate the 19 attacks to check if the attacker gain and victim loss property holds. We perform the following steps for each attack:
+ We manually parse the execution traces of the normal and reverse scenario for calls and events related to the attacker and victim accounts.
+ We compute the attacker gain and victim loss property based on these calls and events.
+ For the normal scenario, we verify that the calls and logs for the attacker and victim accounts are equal to those that occurred on the blockchain.
In all 19 cases, the manual evaluation shows that the attacker gain and victim loss property holds and that the relevant calls and logs in the normal scenario match those on the blockchain. However, we notice two shortcomings in our definition of the attacker gain and victim loss property.
==== Definition shortcomings <sec:attacker-gain-victim-loss-shortcomings>
Firstly, we argued that the transaction value is independent of the transaction order, because it is part of the transaction itself. However, when a transaction is reverted, the value is not sent to the receiver. Therefore, the transfer of the transaction value may depend on the transaction order. If we considered the transaction value in the calculation, six of the 19 attacks would be false positives.
Secondly, in five cases, we have a loss for the sender of $T_A$ (the attacker's EOA), while we have only gains for the recipient of $T_A$ (considered the attacker's bot in this case). Our definition considers the attacker gain fulfilled for the attacker's bot and ignores the loss of the attacker's EOA. If we considered them together, we may have different results in such cases.
== Evaluation of Securify and ERC-20 multiple withdrawal characteristics
In the overall analysis, we also analyze the 2,959 transaction pairs that are TOD for the Securify and ERC-20 multiple withdrawal characteristics.
We find that 626 transaction pairs fulfill the TOD Transfer characteristic, 244 TOD Amount, and 1 TOD Receiver. Moreover, we have 15 that fulfill our definition of ERC-20 multiple withdrawal. As the ground truth does not cover these characteristics, we manually analyze samples of each.
=== Manual evaluation of TOD Transfer
We take a sample of 20 transaction pairs that fulfill TOD Transfer. Our tool outputs the locations at which the number of calls differs between the normal and reverse scenario. For each sample, we verify the first call location it shows for $T_A$ and $T_B$. To do so, we manually check the execution traces of the normal and reverse scenario at this location and extract the relevant calls. We further verify that the calls in the normal scenario are equal to those on the blockchain.
We find that in all cases, the TOD transfer property holds for $T_B$, and only in one case it holds additionally for $T_A$.
In 9 of the cases, $T_B$ makes a `CALL` in the normal scenario that is reverted in the reverse scenario. As our definition only considers calls that are not reverted, these fulfill TOD Transfer.
In 8 further cases, $T_B$ makes a `CALL` in the normal scenario but makes no `CALL` at this location in the reverse scenario. In the 3 remaining cases, $T_B$ makes a `CALL` in the reverse scenario but makes no `CALL` in the normal scenario at this location.
We also observe that the locations are often the same. For instance, in five of the cases, the location we analyze is the address `0x7a250d5630b4cf539739df2c5dacb4c659f2488d` at program counter `15784`. When inspecting all 626 transaction pairs that fulfill TOD Transfer we find this location 86 times. Considering that we limit similar collisions to a maximum of 10, this implies that different collisions affect the same functionality.
=== Manual evaluation of TOD Amount
We take a sample of 20 transaction pairs that fulfill TOD Amount. Similar to the TOD Transfer evaluation, we manually verify the first location reported by our tool. For TOD Amount, we verify that in both scenarios there exists a call at this location, but with different values.
The evaluation shows that the property holds in all cases for $T_B$, and in 3 cases also for $T_A$. In 12 cases, the amount of Ether sent is increased in the reverse scenario, and in 11 cases, it is decreased.
Again, we observe many calls happening at the same location. Of the 20 call locations we analyze, the location is 16 times at the address `0x7a250d5630b4cf539739df2c5dacb4c659f2488d` at program counter `15784`.
=== Manual evaluation of TOD Receiver
We evaluate the one transaction pair we found for TOD Receiver similarly to how we evaluate TOD Amount, except that we now verify whether the receiver of the Ether transfer changed. Our evaluation shows that this is indeed the case. By inspecting the traces, we can see that in the normal scenario the receiver address is loaded from a different storage slot than in the reverse scenario, resulting in different recipients of the Ether transfer.
=== Manual evaluation of ERC-20 multiple withdrawal
We evaluate all 15 transaction pairs where our tool reports an ERC-20 multiple withdrawal attack. Our tool outputs pairs of `Transfer` and `Approval` events that should fulfill the definition. For each case, we manually evaluate the first of these pairs by verifying that the `Transfer` event exists in $T_A$ in the normal scenario and the `Approval` event exists in $T_B$ in the normal and reverse scenario. We further verify that the logs in the normal scenario are equal to those on the blockchain.
While we confirm that all of them fulfill the definition we provide for the ERC-20 multiple withdrawal attack, none of them is actually an attack.
==== Definition shortcomings
Firstly, our definition does not require that the `Transfer` and `Approval` events have positive values. In nine cases we find an `Approval` event that approves 0 tokens and in one case we find a transfer of 0 tokens. These should be excluded from the definition.
Moreover, in 14 cases $T_A$ contains an `Approval` event for the tokens that are transferred in $T_A$. As such, $T_A$ does not use any previously approved tokens but performs the approval itself.
== Evaluation of TOD candidate mining
In this section, we analyze why 98% of the attacks in the ground truth are not reported as TOD candidates, and whether the TOD candidate filters work as intended.
We rerun the TOD candidate mining and count the number of attacks from the ground truth that are in the TOD candidates before and after each filter is applied. Therefore, we know how many of the attacks were removed by which filter.
In @tab:eval-mining, we see that most filters do not filter out any attack from the ground truth. However, they still filter out 500,141 other TOD candidates, thus significantly reducing the search space for further analysis without affecting the attacks we can find.
Furthermore, @tab:eval-mining shows that only one attack is filtered because there is no collision between the accessed and modified states of $T_A$ and $T_B$. This TOD candidate is filtered because its second transaction is part of block 11,300,000, which is not among the blocks we analyze#footnote[In @zhang_combatting_2023, the dataset is described as originating from an analysis of 1,000 blocks. Block 11,300,000 would be the 1,001st block, thus we assume an off-by-one error.].
The filters "Same-value collision" and "Indirect dependency" remove 4,275 TOD candidates with potential indirect dependencies. Finally, our deduplication filters remove another 1,210 TOD candidates. In the following subsections, we evaluate whether these filters fulfill their intention.
#figure(
table(
columns: 5,
align: (left, right, right, right, right),
table.header(
[Filter name],
[TOD candidates after filtering],
[Filtered TOD candidates],
[Ground truth attacks after filtering],
[Filtered ground truth attacks],
),
table.hline(),
[(unfiltered)], [], [], [5,601], [],
[Collision], [(unknown)], [], [5,600], [1],
[Same-value collision], [638,313], [(unknown)], [3,537], [2,063],
[Block windows], [422,384], [215,929], [3,537], [0],
[Block validators], [288,264], [134,120], [3,537], [0],
[Nonce collision], [220,687], [67,577], [3,537], [0],
[Code collision], [220,679], [8], [3,537], [0],
[Indirect dependency], [161,062], [59,617], [1,325], [2,212],
[Same senders], [100,690], [60,372], [1,325], [0],
[Recipient Ether transfer], [78,555], [22,135], [1,325], [0],
[Limited collisions per address], [17,300], [61,255], [199], [1,126],
[Limited collisions per code hash], [14,996], [2,304], [123], [76],
[Limited collisions per skeleton], [14,500], [496], [115], [8],
),
caption: flex-caption(
[Comparison of all filtered TOD candidates with filtered attacks from the ground truth. Each row shows how many TOD candidates and ground truth attacks are filtered by the respective filter. The number of TOD candidates before filtering for same-value collisions was not computed because of performance limitations.],
[Comparison of all filtered TOD candidates with filtered attacks from the ground truth],
),
kind: table,
)
<tab:eval-mining>
=== Evaluation of indirect dependency filters
The "Same-value collision" and "Indirect dependency" filters both target TOD candidates with indirect dependencies, as these may lead to unexpected analysis results (see @sec:weakness-indirect-dependencies).
We evaluate for how many of the removed attack TOD candidates $(T_A, T_B)$ there exists an intermediary transaction $T_X$, such that both $(T_A, T_X)$ and $(T_X, T_B)$ are TOD. In such cases, any reordering that moves $T_A$ after $T_X$ or $T_X$ after $T_B$ may influence how $T_A$ and $T_B$ execute. While our filters also remove indirect dependencies which require more than one intermediary transaction (e.g. $T_A -> T_X_1 -> T_X_2 -> T_B$), we limit our evaluation to one intermediary transaction for performance reasons.
We rerun the TOD candidate mining until the "Indirect dependency" filter would be executed. For 1,720 of the 4,275 TOD candidates $(T_A, T_B)$ we evaluate, we find another two TOD candidates $(T_A, T_X)$ and $(T_X, T_B)$. These TOD candidates show a potential indirect dependency of $(T_A, T_B)$ with the one intermediary transaction $T_X$. We do not evaluate the remaining 2,555 TOD candidates, which either have an indirect dependency with multiple intermediary transactions, or have an indirect dependency where one of the TOD candidates $(T_A, T_X)$ or $(T_X, T_B)$ has already been filtered.
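A minimal sketch of this search for one-intermediary dependencies (the set `candidates` of mined pairs and the predicate `is_tod` wrapping our TOD detection are illustrative names, not the tool's actual API):

```python
def find_intermediary(t_a, t_b, candidates, is_tod):
    """Find a T_X such that (T_A, T_X) and (T_X, T_B) are both mined
    TOD candidates and both pairs are confirmed to be TOD."""
    after_a = {x for (a, x) in candidates if a == t_a}
    before_b = {x for (x, b) in candidates if b == t_b}
    for t_x in after_a & before_b:  # potential one-transaction intermediaries
        if is_tod(t_a, t_x) and is_tod(t_x, t_b):
            return t_x
    return None  # no one-intermediary indirect dependency confirmed
```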
We run our TOD detection on the 1,720 $(T_A, T_X)$ TOD candidates and the 1,720 $(T_X, T_B)$ TOD candidates. We find that in 1,319 cases, both $(T_A, T_X)$ and $(T_X, T_B)$ are TOD. In 159 cases, at least one analysis failed. In the remaining 242 cases, at least one of the TOD candidates $(T_A, T_X)$ or $(T_X, T_B)$ is confirmed not to be TOD.
In summary, we show that in at least 1,319 of the 4,275 cases where we filtered out an attack from the ground truth, there exists a transaction that is TOD with both $T_A$ and $T_B$ of this attack and thus potentially interferes with the TOD simulation.
=== Evaluation of duplicate limits
The filters "Limited collisions per address", "Limited collisions per code hash" and "Limited collisions per skeleton" aim to reduce the amount of TOD candidates without reducing the diversity of the attacks we find.
For our evaluation, we do not directly measure the diversity of the attacks. Instead, we evaluate how well the attacks that were not filtered cover the attacks that were filtered. To measure the coverage, we use collisions. We say that a TOD candidate $(T_A, T_B)$ is covered by a set of TOD candidates ${(T_C_0, T_D_0), ..., (T_C_n, T_D_n)}$ if the following condition holds:
$
colls(T_A, T_B) subset.eq union.big_(0 <= i <= n) colls(T_C_i, T_D_i)
$
For this analysis, we only consider collisions that remain after applying all previous filters.
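A sketch of this coverage check, assuming `colls` returns the set of (post-filter) collisions of a pair as in the definition above:

```python
def is_covered(candidate, remaining_candidates, colls):
    """True if the candidate's collisions are a subset of the union of the
    collisions of the remaining (unfiltered) candidates."""
    union_of_collisions = set()
    for other in remaining_candidates:
        union_of_collisions |= colls(*other)
    return colls(*candidate) <= union_of_collisions
```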
From the 1,210 attacks that were removed by duplicate limits, we have 703 that are covered by the remaining attacks. Thus, if we combine the collisions of the 115 remaining attacks, we have the same collisions as if we included these 703 covered attacks. From the 703 covered attacks, we can match at least#footnote[We use a naive algorithm to detect collision coverage, which does not minimize the required attacks for coverage. Thus, the number of attacks covered by a single other attack is a lower bound.] 504 removed attacks $(T_A, T_B)$ with a remaining attack $(T_C, T_D)$, such that $colls(T_A, T_B) subset.eq colls(T_C, T_D)$. Thus, in 504 cases a removed attack is covered by exactly one remaining attack.
We also notice that attacks are concentrated around the same collisions. When we take the top three attacks that were not removed by duplicate limits, we already cover 373 of the 1,210 attacks we removed.
Our calculated coverages are lower bounds, as we only consider the 115 remaining attacks from the ground truth but not the 195 attacks we find that are not in the ground truth. All attacks we find are subject to the duplicate limit, therefore some ground truth attacks may have been removed while keeping an attack that is not in the ground truth but has similar collisions.
== Evaluation of TOD detection
To evaluate our TOD detection method, we run it on the attacks from the ground truth.
From the 5,601 attacks, our method finds that 4,827 are TOD and 4,857 are approximately TOD. We do not manually compare the differences between the TOD detection and the approximation, as we already did so in @sec:tod-detection. However, this shows that for the attacks from the ground truth, one can use the TOD approximation without losing attacks.
There are 774 attacks that our method misses. For 20 of those an error occurred while analyzing for TOD#footnote[18 of the errors are caused by a bug in Erigon, where it reports negative balances for accounts for some transactions (#link("https://github.com/erigontech/erigon/issues/9531")[fixed in v2.60.3]). 2 of them were caused by connection errors.] and for 296 we detected execution inaccuracies (see @sec:execution-inaccuracy) and stopped the analysis.
From the remaining 458 attacks, we find that most have the metadata "out of gas" in the ground truth dataset. Attacks with this "out of gas" label account for 97.6% of the attacks we do not find, while they only account for 19.1% of the 5,601 attacks in the ground truth.
=== Manual evaluation of attacks labeled "out of gas" <sec:evaluation-out-of-gas>
According to the dataset description, this label refers to a #emph[gas estimation griefing attack], which is described in @zhang_combatting_2023. The authors consider such an attack to occur when $T_B$ runs out of gas in the normal scenario but not in the reverse scenario.
We manually inspect a sample of 20 attacks and find that in 12 attacks, $T_B$ is indeed reverted according to Etherscan. In these cases, our simulation method further shows that $T_B$ is reverted in both scenarios. As $T_B$ also reverts in the reverse scenario, these cases are not gas estimation griefing attacks according to our simulation.
In the remaining 8 cases, our method reports no reverts in either scenario. For one case, Etherscan reports that $T_B$ had an internal out of gas error, which was caught without reverting the whole transaction. Therefore, at least the 7 cases where $T_B$ did not revert in the normal scenario are not gas estimation griefing attacks.
As such, it is unclear whether this label classifies these attacks as gas estimation griefing attacks. Furthermore, it appears that attacks with this label do not necessarily fulfill the attacker gain and victim loss property. The dataset usually describes the profits of an attacker and the losses of a victim for each attack. However, 347 of the 1,043 attacks with the "out of gas" label do not contain a description of the victim losses, while this description exists for all attacks without the "out of gas" label. In summary, it is unclear how we should interpret these attacks from the ground truth, and thus we exclude them from further analysis.
=== Manual evaluation of attacks not labeled "out of gas"
We manually check the remaining 11 attacks that our method does not report as TOD.
We check whether these are caused by bugs in the RPC method implementation by rerunning the analysis with a Reth archive node, in addition to the Erigon archive node we use for our experiments. In two cases, using Reth, our method reports them as TOD with the same balance changes as reported in the ground truth, showing the inaccuracies from @sec:execution-inaccuracy.
Furthermore, we compare the traces of the instruction executions between the scenarios. For 8 attacks, the traces in the normal scenario are equal to those in the reverse scenario, therefore no write-read or read-write TOD has occurred. By inspecting the state changes in Etherscan, we also rule out write-write TODs, where both transactions write to the same storage slot. As such, these are indeed not TOD according to the traces of our simulation.
Finally, for one attack $T_B$ reverts in both scenarios. The ground truth dataset reports token changes in the reverse scenario, therefore our execution must differ from theirs, which we do not further investigate.
== Evaluation of TOD attack analysis
We run our TOD attack analysis on the 5,601 attacks from the ground truth. Our analysis reports an attacker gain and victim loss in 4,524 of the cases. In 19 cases we encountered the same errors as for the TOD checking. In another 152 cases, we detect execution inaccuracies. In the remaining 907 cases, our analysis runs without failures but reports different results than in the ground truth.
From these 907 cases, 850 are labeled as "out of gas" in the ground truth. As discussed in @sec:evaluation-out-of-gas, it seems that these do not necessarily fulfill the attacker gain and victim loss property. Therefore, we do not investigate these cases. We manually evaluate 10 of the 56 cases without the "out of gas" label.
=== Manual evaluation of attacks
==== Evaluation of profit calculations
We verify that the transaction pair does not fulfill the attacker gain and victim loss property according to the execution traces of our normal and reverse scenarios. In each case, we disprove the property by manually parsing the calls and logs and calculating the profits and losses.
In five cases, there is a victim gain according to our traces. In three cases, we calculate an attacker loss. In the two other cases, the traces of the attacker's transaction behave identically in the normal and reverse scenarios. We show that one of these cases is not TOD in @app:analysis-TOD.
==== Evaluation of reverse scenario <sec:evaluate-reverse-scenario>
We further want to verify that our tool correctly executes the reverse scenario.
For each case, we pick one of the transactions. For this transaction, we compare the $n$-th executed instruction of the normal scenario with the $n$-th executed instruction of the reverse scenario. We start with $n = 0$ and continue until we find a difference between the executions. For the comparison, we use the current EVM stack, memory, program counter, gas, and call context depth.
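A sketch of this comparison, assuming each scenario is available as a sequence of per-instruction records containing the compared fields (an assumed record format, not the tracer's actual output):

```python
from itertools import zip_longest

COMPARED_FIELDS = ("stack", "memory", "pc", "gas", "depth")

def first_difference(normal_trace, reverse_trace):
    """Index of the first instruction at which the two scenarios differ,
    or None if the traces are identical."""
    for n, (a, b) in enumerate(zip_longest(normal_trace, reverse_trace)):
        if a is None or b is None:
            return n  # one trace ended before the other
        if any(a[field] != b[field] for field in COMPARED_FIELDS):
            return n
    return None
```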
In one case, we do not find any difference, as the transactions are not TOD. In the other nine cases, the first difference is after executing the `SLOAD` instruction, which loads a value from the storage.
When we analyze the execution of the attacker's transaction ($T_A$), we execute it on the states $sigma$ and $sigma + Delta'_T_B$. Because we observe that `SLOAD` returns different values for $sigma$ and $sigma + Delta'_T_B$, the accessed storage slot should be modified by $Delta'_T_B$. We verify that in the normal scenario, the result of the `SLOAD` is equal to the value this storage slot had before executing $T_A$ according to Etherscan. For the reverse scenario, we compare it against the last `SSTORE` of the execution of $T_B$ in the reverse scenario, i.e. the value that $Delta'_T_B$ changes it to.
We approach the verification of the accessed storage values of $T_B$ similarly. For $T_B$ we use the states $sigma_X_n$ for the normal scenario and $sigma_X_n - Delta_T_A$ for the reverse scenario. We compare the result of the `SLOAD` in the normal scenario with the value it had before executing $T_B$ according to Etherscan. For the reverse scenario, we compare it with the value it had before executing $T_A$ according to Etherscan. We notice that in all these cases, $T_A$ wrote a value to this storage slot that is different from the one that $T_B$ reads in the normal scenario. Therefore, there must be intermediary transactions that changed this storage value between $T_A$ and $T_B$, possibly causing an indirect dependency.
In summary, we verify that at least the first difference between the normal and reverse scenarios is in accordance with the definition of the normal and reverse scenarios.
==== Evaluation of indirect dependencies
As our previous evaluation shows, there are several attacks where an intermediary transaction modifies a storage slot that is written by $T_A$ and accessed by $T_B$, potentially creating an indirect dependency. In this section, we additionally check for a specific kind of indirect dependency.
When we simulate an attack $(T_A, T_B)$, we execute $T_B$ in the reverse scenario on the state $sigma_X_n - Delta_T_A$. Therefore, for all state keys $K in changedKeys(Delta_T_A)$ we use the value at $pre(Delta_T_A)(K)$ and for the other state keys $K in.not changedKeys(Delta_T_A)$ we use $sigma_X_n (K)$.
We now consider an intermediary transaction $T_X$ with the state changes $Delta_T_X$, where $changedKeys(Delta_T_X)$ contains some keys that are changed by $T_A$ and also other keys that $T_A$ does not change. When we execute $T_B$ on $sigma_X_n - Delta_T_A$, we only overwrite some of the changes of $T_X$ and keep the other changes. Therefore, $T_B$ executes on a state where state changes of $T_X$ are only partially included.
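The partial inclusion can be illustrated with a sketch of the state arithmetic, modeling a change set as a mapping from state keys to `(pre, post)` value pairs (a simplified representation of the world state, not our tool's internal one):

```python
def subtract(state, delta):
    """Compute state - delta: reset every key changed by delta to its
    pre-value; all other keys keep their current value."""
    result = dict(state)
    for key, (pre, _post) in delta.items():
        result[key] = pre
    return result

# If T_X changed the keys k1 and k2 but T_A only changed k1, then
# subtract(sigma_X_n, delta_T_A) reverts k1 (also undoing T_X's change there)
# while k2 keeps T_X's value, so T_B executes on a partially reverted T_X.
```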
Our evaluation shows that in 2 of the 10 cases, we can indeed find an intermediary transaction $T_X$, such that $T_B$ accesses at least one change of $T_X$ that is overwritten by computing $sigma_X_n - Delta_T_A$, and one change of $T_X$ that is not overwritten. Thus, in these cases, $T_B$ uses a possibly incoherent state. We further verify that $(T_A, T_X)$ and $(T_X, T_B)$ are both TOD. However, we do not investigate how the partial revert of $T_X$ influences the transaction execution.
==== Unmodeled token events
In one case, the attacker's transaction emits a `Deposit` event. This event is not part of the token standards we model in our definition, therefore our profit calculations ignore this event.
This `Deposit` event is emitted by the #link("https://etherscan.io/token/0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2#code")[WETH] token when someone converts Ether to WETH tokens. Our analysis only assesses a loss of Ether, but not a gain of WETH tokens. If we modeled the `Deposit` event, we would also mark this transaction pair as an attack.
By inspecting the source code at @zhang_erebus-redgiant_2023, we find that they also detect `Deposit` and `Withdrawal` events when they are emitted by the address of the WETH token.
== Performance evaluation
The evaluation of the 1,000 blocks took a total of 75 minutes, averaging 4.5 seconds per block.
For the TOD candidate mining, we spent 41 minutes fetching the state changes of the 1,000 blocks and inserting them into a database, another 13 minutes filtering the TOD candidates, and 3 minutes for other tasks.
For the TOD detection and TOD attack analysis, we fetch the state changes and transactions and store them in memory for faster access. Because the state changes are already in our RPC cache, these two steps combined only took 5 minutes.
After fetching the state changes and transactions, we ran the TOD detection and TOD attack analysis using 16 threads, enabling us to make multiple RPC requests in parallel. Checking the 14,500 TOD candidates for TOD took 11 minutes, an average of 44 milliseconds per TOD candidate. The attack analysis of the 2,959 TOD transaction pairs took 4 minutes, averaging 77 milliseconds per analysis.
Compared with @zhang_combatting_2023, our analysis took 4.5 seconds per block, while they report an average of 7.5 seconds per block. However, we cannot directly compare this, as their hardware specifications differ from our setup and in our case the transaction execution is outsourced to an archive node of which we do not know the hardware specifications. Moreover, @zhang_combatting_2023 only reports an average for their whole analysis, and it is not clear if e.g. the vulnerability localization performed in their work is included in this time measurement.
= Discussion
This thesis proposes a method to simulate transaction order dependencies. We precisely define this simulation process and discuss its advantages and disadvantages. Our evaluation shows that it can be used to detect TOD and several attack characteristics, finding more than 80% of the attacks from a previous work.
Nonetheless, we note that our simulation method and the methods of two related studies have drawbacks that can lead to analysis results that do not match the execution that happened on the blockchain or are distorted by the influence of intermediary transactions. The method by @torres_frontrunner_2021 removes intermediary transactions for the simulation. On the downside, this may create results that differ from the blockchain even in the normal transaction order. On the upside, the different orderings can then be compared without the potential influence of intermediary transactions. The method by @zhang_combatting_2023 and ours produce results that are equal to the blockchain in the normal order, but can suffer influences from intermediary transactions in the reverse order.
We discuss when influences from intermediary transactions can occur with our method, and thus, we are able to avoid such cases. However, future work may continue to reduce the influence of intermediary transactions on TOD simulations or analyze the tradeoffs between existing methods.
= Data availability and reproducibility
<cha:reproducibility>
== Tool
The program used to run the experiments is available at https://github.com/TOD-theses/t_race. It can be run with Python or Docker and requires an archive node that supports the debug namespace, including JS tracing for the attack analysis. Refer to the documentation on the repository for more details on using it.
The TOD candidate deduplication relies on randomness. To allow reproducibility, the program sets a seed for the randomness before executing the randomized deduplication step.
== Data availability <sec:data-availability>
The experiment results and evaluation artifacts produced by this thesis are available at https://github.com/TOD-theses/t_race_results. This includes the outputs of the tool executions, post-processed data and the evaluation samples with corresponding notes.
== Experiment setup
The experiments were performed on Ubuntu 22.04.4, using an AMD Ryzen 5 5500U CPU with 6 cores and 2 threads per core and an SN530 NVMe SSD. We used 16 GB of RAM with an additional 32 GB swap file.
For the RPC requests, we did not use our own archive node, but relied on a free service by @nodies_web3_nodate, which uses Erigon 2.59.3 @noauthor_erigon_2024 according to the `web3_clientVersion` RPC method. In the evaluation, we also refer to the usage of a Reth instance for a few TOD checks. We use a public Reth 1.0.4 instance for this @paradigm_reth_nodate. We used a local cache to prevent repeating slow RPC requests @fuzzland_eth_2024. The cache was initially empty for experiments that measure the running time.
https://github.com/fkasatt/myTypstConf | https://raw.githubusercontent.com/fkasatt/myTypstConf/main/thesis/0.1.0/thesis.typ | typst | #let pageSettings(doc, title, author) = {
set document(author: author, title: title)
set page("a4")
set par(justify: true)
show par: set block(spacing: 0.65em) // paragraph spacing
show list: set block(spacing: 0.65em)
show emph: set text(font: ("Nimbus Roman", "Noto Sans CJK JP"))
show strong: set text(font: ("Nimbus Roman", "Noto Sans CJK JP"), weight: 200)
show raw: set text(font: "PlemolJP Console NF")
show "、": ","
show "。": "."
show "　": h(1em) // full-width space -> 1em
doc
}
#let setupFrontPage(
title: none, enTitle: none,
id: "", author: "", supervisor: "",
abst: none,
sdgs: "", date: ""
) = {
set page("a4")
set text(font: ("Nimbus Roman", "IPAmjMincho"), lang: "ja")
align(center)[
#set text(size: 18pt)
#v(4em)
Academic Year 2023 (Reiwa 5) \
██ College of Technology#h(0.25em)Department of Information Engineering \
Graduation Thesis
#v(2em)
*#title* \
*#enTitle*
#v(1em)
#set text(size: 15pt)
#text(weight: "semibold")[Abstract]
#v(-0.5em)
#block(
width: 80%,
align(left)[#text(size: 10pt)[#abst]]
)
#v(1em)
SDGs Goal Number #sdgs
#v(1em)
Researcher: #author (Student ID: #id) \
Supervisor: #supervisor
#v(2em)
#date
]
}
#let to-string(content) = {
if content.has("text") {
content.text
} else if content.has("children") {
content.children.map(to-string).join("")
} else if content.has("body") {
to-string(content.body)
} else if content == [ ] {
" "
}
}
#let toc() = {
set page(
footer: [#align(center)[#counter(page).display("i")]]
)
counter(page).update(1)
set text(font: ("Nimbus Roman", "Noto Sans CJK JP"), lang: "ja")
set text(size: 15pt)
set par(leading: 1.25em)
v(20pt)
align(left)[#text(size: 18pt, weight: "semibold")[Table of Contents]]
v(20pt)
locate(loc => {
let elements = query(heading.where(outlined: true), loc)
for el in elements {
let before_toc = query(heading.where(outlined: true).before(loc), loc).find((one) => {one.body == el.body}) != none
let page_num = if before_toc {
numbering("i", counter(page).at(el.location()).first())
} else {
counter(page).at(el.location()).first()
}
link(el.location())[#{
let chapt_num = if el.numbering != none {
numbering(el.numbering, ..counter(heading).at(el.location()))
} else {none}
if el.level == 1 {
set text(weight: "medium")
if chapt_num == none {} else {
chapt_num
" "
}
let rebody = to-string(el.body)
rebody
} else if el.level == 2 {
set text(size: 13pt)
h(2em)
chapt_num
" "
let rebody = to-string(el.body)
rebody
} else {
h(5em)
chapt_num
" "
let rebody = to-string(el.body)
rebody
}
}]
box(width: 1fr, h(0.5em) + box(width: 1fr, repeat[.]) + h(0.5em))
[#page_num]
linebreak()
}
})
}
#let mainText(
title: none, enTitle: none,
authors: (), date: "",
abst: none,
doc
) = {
set text(size: 10pt, lang: "ja", font: ("Nimbus Roman", "IPAmjMincho"))
set enum(numbering: "(1a)", body-indent: 0.3em)
set list(body-indent: 0.3em)
set par(first-line-indent: 1em)
show heading: it => {
it
par(text(size: 0pt, ""))
v(-0.35em)
}
show figure: it => {
it
par(text(size: 0pt, ""))
v(-0.65em)
}
show enum: it => {
it
par(text(size: 0pt, ""))
v(-0.65em)
}
show list: it => {
it
par(text(size: 0pt, ""))
v(-0.65em)
}
set heading(numbering: "1.")
show heading: set text(font: ("Nimbus Roman", "Noto Sans CJK JP"))
show heading.where(level: 1): set text(size: 12pt)
show heading.where(level: 2): set text(size: 10pt)
show figure.where( // table caption position
kind: table
): set figure.caption(position: top)
set page(
footer: [
#align(center)[#counter(page).display("1")]
]
)
counter(page).update(1)
set text(size: 10pt)
align(center)[
#v(1em)
#text(size: 15.5pt)[*#title*]
#v(0.5em)
#text(size: 16pt)[*#enTitle*]
#v(2em)
#text(size: 12pt)[#grid(
columns: (1fr,) * 2,
row-gutter: 24pt,
..authors.map(author => author.name),
)]
#v(0.35em)
#date
#v(1em)
*Abstract*
#block(
width: 80%,
align(left)[#abst]
)
]
v(1em)
align(left)[#columns(2, doc)]
}
#let bib(title: "References", body) = {
set heading(numbering: none)
align(center)[= #title]
pad(top: -10pt, bottom: -5pt, line(length: 100%, stroke: 0.5pt))
set enum(numbering: "1)")
text(size: 7pt)[#body]
}
#let code(body) = {
set raw(tab-size: 2)
show raw.where(block: true): block.with(
fill: rgb("f6f8fa"), inset: 8pt, radius: 5pt, width: 100%,
)
body
}
#let tbl(body, title: none) = {
set text(size: 0.9em)
figure(
caption: title,
body
)
par(text(size: 0pt, ""))
}
#let img(path, cap: "", width: 100%) = {
set text(size: 0.9em)
figure(
image(path, width: width),
caption: [#cap],
kind: "image",
supplement: [Figure]
)
}
https://github.com/Lulliter/cv_typst | https://raw.githubusercontent.com/Lulliter/cv_typst/master/cv_ita.typ | typst | #import "@preview/fontawesome:0.1.0": *
//------------------------------------------------------------------------------
// Style
//------------------------------------------------------------------------------
// const color
#let color-darknight = rgb("#131A28")
#let color-darkgray = rgb("#333333")
#let color-middledarkgray = rgb("#414141")
#let color-gray = rgb("#5d5d5d")
#let color-lightgray = rgb("#999999")
#let color-darklue = rgb("#004980")
#let color-accent = rgb("#0088cc") // deciso in YAML
// Default style
#let color-accent-default = rgb("#dc3522")
#let font-header-default = ("Roboto", "Arial", "Helvetica", "Dejavu Sans")
#let font-text-default = ("Source Sans Pro", "Arial", "Helvetica", "Dejavu Sans")
#let align-header-default = center
// // Lula LINK style
// #let link-style = (content) => {
// box(
// highlight: rgb("#f5ff90"), // Apply the highlight color as a background
// text(
// color: rgb("#0054cc"), // Apply the font color
// underline: true // Apply underline
// )
// )[content] // Apply the styles to the content
// }
// User defined style
#let color-accent = rgb("7c1c2d")
#let font-header = font-header-default
#let font-text = font-text-default
//------------------------------------------------------------------------------
// Helper functions
//------------------------------------------------------------------------------
// icon string parser
#let parse_icon_string(icon_string) = {
if icon_string.starts-with("fa ") [
#let parts = icon_string.split(" ")
#if parts.len() == 2 {
fa-icon(parts.at(1), fill: color-darknight)
} else if parts.len() == 3 and parts.at(1) == "brands" {
fa-icon(parts.at(2), fa-set: "Brands", fill: color-darknight)
} else {
assert(false, message: "Invalid fontawesome icon string")
}
] else if icon_string.ends-with(".svg") [
#box(image(icon_string))
] else {
assert(false, message: "Invalid icon string")
}
}
// contaxt text parser
#let unescape_text(text) = {
// This is not a perfect solution
text.replace("\\", "").replace(".~", ". ")
}
// layout utility
#let __justify_align(left_body, right_body) = {
block[
#box(width: 4fr)[#left_body]
#box(width: 1fr)[
#align(right)[
#right_body
]
]
]
}
#let __justify_align_3(left_body, mid_body, right_body) = {
block[
#box(width: 1fr)[
#align(left)[
#left_body
]
]
#box(width: 1fr)[
#align(center)[
#mid_body
]
]
#box(width: 1fr)[
#align(right)[
#right_body
]
]
]
}
/// Right section for the justified headers
/// - body (content): The body of the right header
#let secondary-right-header(body) = {
set text(
size: 10pt,
weight: "light",
style: "italic",
fill: color-accent,
)
body
}
/// Right section of a tertiaty headers.
/// - body (content): The body of the right header
#let tertiary-right-header(body) = {
set text(
weight: "light", // weight: "light",
size: 10pt,
style: "italic",
fill: color-darklue,// fill: color-gray,
)
body
}
/// Justified header that takes a primary section and a secondary section. The primary section is on the left and the secondary section is on the right.
/// - primary (content): The primary section of the header
/// - secondary (content): The secondary section of the header
#let justified-header(primary, secondary) = {
set block(
above: 0.7em,
below: 0.7em,
)
pad[
#__justify_align[
#set text(
size: 12pt,
weight: "bold",
fill: color-darkgray,
)
#primary
][
#secondary-right-header[#secondary]
]
]
}
/// Justified header that takes a primary section and a secondary section. The primary section is on the left and the secondary section is on the right. This is a smaller header compared to the `justified-header`.
/// - primary (content): The primary section of the header
/// - secondary (content): The secondary section of the header
#let secondary-justified-header(primary, secondary) = {
__justify_align[
#set text(
size: 10pt,
weight: "regular",
fill: color-gray,
)
#primary
][
#tertiary-right-header[#secondary]
]
}
//------------------------------------------------------------------------------
// Header
//------------------------------------------------------------------------------
#let create-header-name(
firstname: "",
lastname: "",
) = {
pad(bottom: 7pt)[
#block[
#set text(
size: 26pt,
style: "normal",
font: (font-header),
)
#text(weight: "bold")[#firstname]
//#text(fill: color-gray, weight: "thin")[#firstname]
#text(weight: "bold")[#lastname]
]
]
}
#let create-header-position(
position: "",
) = {
set block(
above: 0.75em,
below: 0.75em,
)
set text(
color-accent,
size: 10pt,
weight: "regular",
)
smallcaps[
#position
]
}
#let create-header-address(
address: ""
) = {
set block(
above: 0.75em,
below: 0.75em,
)
set text(
color-lightgray,
size: 10pt,
style: "italic",
)
block[#address]
}
#let create-header-contacts(
contacts: (),
) = {
let separator = box(width: 2pt)
if(contacts.len() > 1) {
block[
#set text(
size: 10pt,
weight: "regular",
style: "normal",
)
#align(horizon)[
#for contact in contacts [
#set box(height: 10pt)
#box[#parse_icon_string(contact.icon) #link(contact.url)[#contact.text]]
#separator
]
]
]
}
}
#let create-header-info(
firstname: "",
lastname: "",
position: "",
address: "",
contacts: (),
align-header: center
) = {
align(align-header)[
#create-header-name(firstname: firstname, lastname: lastname)
#create-header-position(position: position)
#create-header-address(address: address)
#create-header-contacts(contacts: contacts)
]
}
#let create-header-image(
profile-photo: ""
) = {
if profile-photo.len() > 0 {
block(
above: 15pt,
stroke: none,
radius: 9999pt,
clip: true,
image(
fit: "contain",
profile-photo
)
)
}
}
#let create-header(
firstname: "",
lastname: "",
position: "",
address: "",
contacts: (),
profile-photo: "",
) = {
if profile-photo.len() > 0 {
block[
#box(width: 5fr)[
#create-header-info(
firstname: firstname,
lastname: lastname,
position: position,
address: address,
contacts: contacts,
align-header: left
)
]
#box(width: 1fr)[
#create-header-image(profile-photo: profile-photo)
]
]
} else {
create-header-info(
firstname: firstname,
lastname: lastname,
position: position,
address: address,
contacts: contacts,
align-header: center
)
}
}
//------------------------------------------------------------------------------
// Resume Entries
//------------------------------------------------------------------------------
#let resume-item(body) = {
set text(
size: 10pt,
style: "normal",
weight: "light",
fill: color-darknight,
)
set par(leading: 0.65em)
set list(indent: 1em)
body
}
#let resume-entry(
title: none,
location: "",
date: "",
description: ""
) = {
pad[
#justified-header(title, location)
#secondary-justified-header(description, date)
]
}
//------------------------------------------------------------------------------
// Resume Template
//------------------------------------------------------------------------------
#let resume(
title: "CV",
author: (:),
// date: datetime.today().display("[month repr:long] [day], [year]"),
date: datetime.today().display("[day]/[month]/[year]"),
profile-photo: "",
body,
) = {
set document(
author: author.firstname + " " + author.lastname,
title: title,
)
set text(
font: (font-text),
size: 11pt,
fill: color-darkgray,
fallback: true,
)
set page(
paper: "a4",
margin: (left: 15mm, right: 15mm, top: 10mm, bottom: 10mm),
footer: [
#set text(
fill: gray,
size: 8pt,
)
#__justify_align_3[
#smallcaps[#date]
][
#smallcaps[
#author.firstname
#author.lastname
#sym.dot.c
CV
]
][
#counter(page).display()
]
],
)
// set paragraph spacing
set heading(
numbering: none,
outlined: false,
)
show heading.where(level: 1): it => [
#set block(
above: 1.5em,
below: 1em,
)
#set text(
size: 16pt,
weight: "regular",
)
#align(left)[
// #text[#strong[#text(color-accent)[#it.body.text.slice(0, 3)]#text(color-darkgray)[#it.body.text.slice(3)]]]
#text[#strong[#text(color-darklue)[#it.body.text.slice(0, 3)]#text(color-darklue)[#it.body.text.slice(3)]]]
#box(width: 1fr, line(length: 100%))
]
]
show heading.where(level: 2): it => {
set text(
color-middledarkgray,
size: 12pt,
weight: "thin"
)
it.body
}
show heading.where(level: 3): it => {
set text(
size: 10pt,
weight: "regular",
fill: color-gray,
)
smallcaps[#it.body]
}
// Contents
create-header(firstname: author.firstname,
lastname: author.lastname,
position: author.position,
address: author.address,
contacts: author.contacts,
profile-photo: profile-photo,)
body
}
// Typst custom formats typically consist of a 'typst-template.typ' (which is
// the source code for a typst template) and a 'typst-show.typ' which calls the
// template's function (forwarding Pandoc metadata values as required)
//
// This is an example 'typst-show.typ' file (based on the default template
// that ships with Quarto). It calls the typst function named 'article' which
// is defined in the 'typst-template.typ' file.
//
// If you are creating or packaging a custom typst template you will likely
// want to replace this file and 'typst-template.typ' entirely. You can find
// documentation on creating typst templates here and some examples here:
// - https://typst.app/docs/tutorial/making-a-template/
// - https://github.com/typst/templates
#show: resume.with(
title: [<NAME>],
author: (
firstname: unescape_text("<NAME>."),
lastname: unescape_text("Mimmi"),
address: unescape_text("Pavia, Italy"),
position: unescape_text("Economist | Public policy analyst | Freelance consultant"),
contacts: ((
text: unescape_text("<EMAIL>"),
url: unescape_text("mailto:<EMAIL>"),
icon: unescape_text("fa envelope"),
), (
text: unescape_text("luisamimmi.org"),
url: unescape_text("https:\/\/luisamimmi.org"),
icon: unescape_text("assets/icon/bi-house-fill.svg"),
), (
text: unescape_text("<NAME>"),
url: unescape_text("https:\/\/www.linkedin.com/in/luisa-m-mimmi"),
icon: unescape_text("fa brands linkedin"),
), (
text: unescape_text("lulliter"),
url: unescape_text("https:\/\/github.com/lulliter"),
icon: unescape_text("fa brands github"),
)),
),
)
= Professional Experience
<esperienza-professionale>
#resume-entry(title: "Consultant in economics and public policy evaluation",location: "Milan, Italy & remote",date: "Jan-2022 - Oct-2024",description: "Self-employed",)
#resume-item[
- Consulting and training on data management and statistics/machine learning for public institutions, universities, and research centers
]
#resume-entry(title: "Expert in data analysis on EU funds",location: "Rome, Italy",date: "Feb-2023 - Aug-2024",description: "Presidenza del Consiglio dei Ministri",)
#resume-item[
- Contributed to the monitoring service for interventions financed by the PNRR under mission M5-C3 (territorial cohesion)
]
#resume-entry(title: "Sr. Economic Advisor - infrastructure dossier",location: "Rome, Italy",date: "Feb-2020 - Dec-2021",description: "Ministero dell'Economia e delle Finanze",)
#resume-item[
- Supported the preparation of the Italian G20 Presidency (2021) in the Infrastructure Working Group
]
#resume-entry(title: "Advisor",location: "Milan, Italy",date: "Feb-2020 - Mar-2020",description: "CSIL",)
#resume-item[
- Contributed to the preparation of a research proposal on 'EU Lagging Regions: state of play and future challenges'
]
#resume-entry(title: "Research Fellow",location: "Washington DC, USA",date: "May-2018 - Oct-2019",description: "Banca Inter-Americana dello Sviluppo",)
#resume-item[
- Multi-country statistical surveys for the analysis of supply and demand of water and sanitation services in Latin America
]
#resume-entry(title: "Sr. Monitoring & Evaluation Specialist",location: "Washington DC, USA",date: "Mar-2009 - Apr-2018",description: "Banca Mondiale",)
#resume-item[
- Designed and managed M&E systems for the EU donors of 2 trust funds for infrastructure development
]
#resume-entry(title: "Research Assistant",location: "Washington DC, USA",date: "Oct-2008 - Feb-2009",description: "Banca Inter-Americana dello Sviluppo",)
#resume-item[
- Supported a technical assistance project for the Municipality of Fortaleza (Brazil) on a 'Youth Social Inclusion Program'
]
#resume-entry(title: "Intern",location: "Belo Horizonte, Brazil",date: "Jun-2007 - Aug-2007",description: "AVSI",)
#resume-item[
- Fieldwork and data collection on the 'Conviver' project for safe access to electricity in several favelas
]
#resume-entry(title: "Business Intelligence Analyst",location: "Milan, Italy",date: "Oct-2001 - Jun-2006",description: "Value Partners S.p.A.",)
#resume-item[
- Competitive positioning assessments and business intelligence for public and private clients operating in Italy and abroad
]
= Skills
<competenze>
#resume-entry(title: "Languages",description: "Italian (native), English (C2), Spanish (C2), Portuguese (B1)",)
#resume-entry(title: "Office suites",description: "MS Office, GSuite, LibreOffice",)
#resume-entry(title: "Programming languages",description: "R, Stata, SQL",)
#resume-entry(title: "Other tools",description: " git, zsh, Markdown, RStudio, VSCode, Quarto, HTML & CSS",)
#resume-entry(title: "Personal skills",description: " Excellent critical thinking and data analysis skills; Determination and initiative in complex projects; Effective communication skills; Passion and aptitude for teaching",)
#pagebreak()
= Education
<istruzione>
#resume-entry(title: "Master in Public Policy (2 years)",location: "Georgetown University",date: "May-2008",description: "International Policy and Development",)
#resume-entry(title: "Bachelor's degree in Economics and Business (4 years)",location: "Università di Pavia",date: "Apr-2001",description: "Industrial Economics",)
#resume-entry(title: "EU Erasmus Program (2 semesters)",location: "Universidad Autonoma de Madrid",date: "Jul-1998",description: "EU Economics",)
#resume-entry(title: "High school diploma (5 years)",location: "Liceo Scientifico T. Olivelli",date: "Jun-1994",description: "Scientific studies track",)
= Peer-Reviewed Publications
<pubblicazioni-sottoposte-a-revisione-paritaria>
#resume-entry(title: "Italy in Front of the Challenge of Infrastructure Maintenance: Existing Issues and Promising Responses",location: "Public Works Management and Policy",date: "Apr-2024",description: "<NAME>",)
#resume-item[
- https://journals.sagepub.com/doi/10.1177/1087724X231164648
]
#resume-entry(title: "Predicting Housing Deprivation from Space in the Slums of Dhaka",location: "Environment and Planning B: Urban Analytics and City Science",date: "Sep-2022",description: "<NAME> and <NAME> and <NAME> and <NAME> and <NAME>",)
#resume-item[
- https://journals.sagepub.com/doi/10.1177/23998083221123589
]
#resume-entry(title: "From Informal to Authorized Electricity Service in Urban Slums: Findings from a Household Level Survey in Mumbai",location: "Energy for Sustainable Development",date: "Aug-2014",description: "<NAME>",)
#resume-item[
- http://linkinghub.elsevier.com/retrieve/pii/S0973082614000507
]
#resume-entry(title: "An Econometric Study of Illegal Electricity Connections in the Urban Favelas of Belo Horizonte, Brazil",location: "Energy Policy",date: "Sep-2010",description: "<NAME> and <NAME>",)
#resume-item[
- http://linkinghub.elsevier.com/retrieve/pii/S0301421510003113
]
= Other Publications
<altre-pubblicazioni>
See the full list on the #link("https://scholar.google.com/citations?user=OBYla5gAAAAJ&hl=en&oi=ao")[#strong[Google Scholar profile];]
#box(height: 20%) // #box(height: 60pt)
#block(
fill:luma(221),
inset:8pt,
radius:4pt,
[
#set text(size: 8pt, weight: "medium", fill: rgb("#85144b"))
I authorize the processing of my personal data pursuant to Legislative Decree 101/2018, n. 196 and the GDPR (EU Regulation 2016/679)
])
https://github.com/RandomcodeDev/FalseKing-Design | https://raw.githubusercontent.com/RandomcodeDev/FalseKing-Design/main/engine/libraries.typ | typst | = Libraries
#table(
columns: 2,
[*Library*], [*Use*],
[#link("https://github.com/winsiderss/phnt")[phnt]], [Internal Windows APIs],
[#link("https://github.com/microsoft/mimalloc")[mimalloc]], [malloc replacement],
[#link("https://github.com/NVIDIAGameWorks/nvrhi")[nvrhi]], [Graphics API abstraction, makes renderer much easier],
[#link("https://github.com/nfrechette/rtm")[rtm]], [Linear algebra and other math],
)
https://github.com/RaphGL/ElectronicsFromBasics | https://raw.githubusercontent.com/RaphGL/ElectronicsFromBasics/main/DC/chap4/4_metric_prefix_conversions.typ | typst | Other | #import "../../core/core.typ"
=== Metric prefix conversions
To express a quantity in a different metric prefix than what it was
originally given, all we need to do is skip the decimal point to the
right or to the left as needed. Notice that the metric prefix \"number
line\" in the previous section was laid out from larger to smaller, left
to right. This layout was purposely chosen to make it easier to remember
which direction you need to skip the decimal point for any given
conversion.
Example problem: express 0.000023 amps in terms of microamps.
$ 0.000023 "amps" "(has no prefix, just plain unit of amps)" $
From UNITS to micro on the number line is 6 places (powers of ten) to
the right, so we need to skip the decimal point 6 places to the right:
$ 0.000023 "amps" = 23 "microamps" (mu A) $
Example problem: express 304,212 volts in terms of kilovolts.
304,212 volts (has no prefix, just plain unit of volts)
From the #emph[(none)] place to #emph[kilo] place on the number line is
3 places (powers of ten) to the left, so we need to skip the decimal
point 3 places to the left:
$ 304,212. = 304.212 "kilovolts" ("kV") $
Example problem: express 50.3 Mega-ohms in terms of milli-ohms.
$ 50.3 "M ohms" ("mega" = 10^6) $
From mega to milli is 9 places (powers of ten) to the right (from 10 to
the 6th power to 10 to the -3rd power), so we need to skip the decimal
point 9 places to the right:
$ 50.3 "M ohms" = 50,300,000,000 "milli-ohms" (m Omega) $
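Skipping the decimal point is the same as multiplying by the difference in powers of ten; a short illustrative sketch that reproduces the three examples above:

```python
# Powers of ten for some metric prefixes (the plain unit is 10^0).
PREFIX_EXPONENT = {"G": 9, "M": 6, "k": 3, "": 0, "m": -3, "u": -6, "n": -9}

def convert(value, from_prefix, to_prefix):
    """Skip the decimal point by the difference in powers of ten."""
    return value * 10 ** (PREFIX_EXPONENT[from_prefix] - PREFIX_EXPONENT[to_prefix])

print(convert(0.000023, "", "u"))  # ~23.0 microamps
print(convert(304212, "", "k"))    # ~304.212 kilovolts
print(convert(50.3, "M", "m"))     # ~5.03e10 milli-ohms
```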
#core.review[
- Follow the metric prefix number line to know which direction you skip
the decimal point for conversion purposes.
- A number with no decimal point shown has an implicit decimal point to
the immediate right of the furthest right digit (i.e. for the number
436 the decimal point is to the right of the 6, as such: 436.)
]
https://github.com/jamesrswift/frackable | https://raw.githubusercontent.com/jamesrswift/frackable/main/tests/example/test.typ | typst | The Unlicense | #import "/src/lib.typ" as frackable: frackable, generator
#set page(
width: auto,
height: auto,
margin: 0.25cm,
background: none
)
#frackable(1, 2)
#frackable(1, 3)
#frackable(3, 4, whole: 9)
#frackable(9, 16)
#frackable(31, 32)
#frackable(0, "000")
https://github.com/ukihot/igonna | https://raw.githubusercontent.com/ukihot/igonna/main/articles/algo/sort.typ | typst | #import "@preview/codly:0.2.0": *
#let ruby(rt, rb, size: 0.4em, alignment: "between") = {
let gutter = if (alignment=="center" or alignment=="start") {h(0pt)} else if (alignment=="between" or alignment=="around") {h(1fr)}
let chars = if(alignment=="around") {
[#h(0.5fr)#rt.clusters().join(gutter)#h(0.5fr)]
} else {
rt.clusters().join(gutter)
}
style(st=> {
let bodysize = measure(rb, st)
let rt_plain = text(size: size, rt)
let width = if(measure(rt_plain, st).width > bodysize.width) {measure(rt_plain, st).width} else {bodysize.width}
let rubytext = box(width: width, align(if(alignment=="start"){left}else{center}, text(size: size, chars)))
let textsize = measure(rubytext, st)
let x = if(alignment=="start"){0pt}else{bodysize.width/2-textsize.width/2}
box[#place(top+left, dx: x, dy: -size*1.2, rubytext)#rb]
})
}
#let icon(codepoint) = {
box(
height: 0.8em,
baseline: 0.05em,
image(codepoint)
)
h(0.1em)
}
#show: codly-init.with()
#codly(languages: (
rust: (name: "Rust", icon: icon("brand-rust.svg"), color: rgb("#CE412B")),
))
== Bubble Sort
The worst-case time complexity of bubble sort is $cal(O)(n^2)$.
Because it compares all elements and performs swap operations as needed, an input array that is sorted in reverse order requires the maximum number of comparisons and swaps.
```rust
fn bubble_sort<T: Ord>(arr: &mut [T]) {
if arr.is_empty() {
return;
}
let mut sorted = false;
let mut n = arr.len();
while !sorted {
sorted = true;
for i in 0..n - 1 {
if arr[i] > arr[i + 1] {
arr.swap(i, i + 1);
sorted = false;
}
}
n -= 1;
}
}
```
== Selection Sort
== Insertion Sort
== Shell Sort
== Bucket Sort
== Counting Sort
== Radix Sort
== Merge Sort
== Heap Sort
== Quick Sort
https://github.com/elipousson/typstdoc | https://raw.githubusercontent.com/elipousson/typstdoc/main/_extensions/typstdoc/typst-template.typ | typst | Creative Commons Zero v1.0 Universal | // 2023-10-09: #fa-icon("fa-info") is not working, so we'll eval "#fa-info()" instead
// 2024-01-29: copied from quarto-cli and revised to use em units
#let callout(
body: [],
title: "Callout",
background_color: rgb("#dddddd"),
icon: none,
icon_color: black,
) = {
block(
breakable: false,
fill: background_color,
stroke: (paint: icon_color, thickness: 0.04em, cap: "round"),
width: 100%,
radius: 0.16em,
block(
inset: 0.25em,
width: 100%,
below: -0.1em,
block(
fill: background_color,
width: 100%,
inset: 0.25em,
)[#text(icon_color, baseline: -0.1em, size: 0.8em, weight: 700)[#icon] #title],
) + block(
inset: 0.25em,
width: 100%,
block(fill: white, width: 100%, inset: 0.8em, body),
),
)
}
#let ifnone(x, default) = {
if x == none {
return default
}
if x == () {
return default
}
x
}
#let rgb-color(x, default) = {
if type(x) == array {
x = default
}
if type(x) == color {
return x
}
if x in ("black", "gray", "silver", "white",
"navy", "blue", "aqua", "teal",
"eastern", "purple", "maroon", "red",
"orange", "yellow", "olive", "green", "lime") {
return eval(x)
}
if x.starts-with("\#") {
x = x.replace("\#", "")
}
rgb(x)
}
#let running-text-block(
font: (),
fontsize: 10pt,
fontfill: "black",
width: 100%,
inset: 20pt,
text-align: left,
content,
) = {
if content == none {
return none
}
align(text-align, block(
width: width,
inset: inset,
[#text(fill: fontfill, size: fontsize, font: font, content)],
))
}
#let typstdoc(
// Document attributes
title: none,
authors: (),
keywords: (),
date: none,
abstract: none,
abstract-label: none,
lang: "en",
region: "US",
// Page layout, fill, and numbering
paper: "us-letter",
cols: 1,
gutter: 4%,
margin: (x: 1.25in, y: 1.25in),
flipped: false,
fill: none,
page-numbering: "1",
page-number-align: right + bottom,
// Typography
font: ("Roboto", "Arial", ),
fontsize: 11pt,
fontweight: "regular",
fontfill: "black",
slashed-zero: false,
monofont: ("Roboto Mono", "Courier", ),
// Body text typography
justify: false,
linebreaks: "optimized",
first-line-indent: 0pt,
hanging-indent: 0pt,
leading: 0.65em,
spacing: 1.25em,
// Title typography
title-font: (),
title-fontsize: 1.5em,
title-weight: "bold",
title-fontfill: (),
title-align: left,
title-inset: 0pt,
// Section numbering
sectionnumbering: none,
heading-font: (),
heading-fontsize: 1.2em,
heading-fontfill: (),
// Table of contents
toc: false,
toc_title: none,
toc_depth: none,
toc_indent: 1.5em,
lof: false,
lof_title: "Figures",
lot: false,
lot_title: "Tables",
// Header and footer
header: none,
header-font: (),
header-fontsize: (),
header-fontfill: (),
header-align: left,
header-ascent: 30%,
footer: none,
footer-font: (),
footer-fontsize: (),
footer-fontfill: (),
footer-align: left,
footer-descent: 30%,
// List numbering and indent
list-numbering: "1.",
list-indent: 0pt,
list-body-indent: 0.5em,
// list-tight: false,
// list-spacing: auto,
// Bibliography
bibliography-file: none,
// blockquote-fontsize: 11pt,
doc,
) = {
// Formats the author's names in a list with commas and a
// final "and".
let names = authors.map(author => author.name)
let author-string = if authors.len() == 2 {
names.join(" and ")
} else {
names.join(", ", last: ", and ")
}
if fill != none {
fill = rgb-color(fill, "white")
}
// Set font fill colors with default
fontfill = rgb-color(fontfill, "black")
header-fontfill = rgb-color(header-fontfill, fontfill)
footer-fontfill = rgb-color(footer-fontfill, fontfill)
title-fontfill = rgb-color(title-fontfill, fontfill)
heading-fontfill = rgb-color(heading-fontfill, fontfill)
// Set document metadata
set document(title: title, author: names, keywords: keywords,)
// Set page layout
set page(
paper: paper,
flipped: flipped,
margin: margin,
fill: fill,
numbering: page-numbering,
number-align: page-number-align,
// Set header defaults from other variables
header: running-text-block(
font: ifnone(header-font, font),
fontsize: ifnone(header-fontsize, fontsize),
fontfill: header-fontfill,
text-align: header-align,
header,
),
header-ascent: header-ascent,
// Set footer defaults from other variables
footer: running-text-block(
font: ifnone(footer-font, font),
fontsize: ifnone(footer-fontsize, fontsize),
fontfill: footer-fontfill,
text-align: footer-align,
footer,
),
footer-descent: footer-descent,
)
// Set overall text defaults
set text(
lang: lang,
region: region,
font: font,
weight: fontweight,
size: fontsize,
fill: fontfill,
slashed-zero: slashed-zero,
)
// Set font for inline code and blocks
show raw: set text(font: monofont)
// Configure heading typography
set heading(numbering: sectionnumbering)
show heading: set text(
font: ifnone(heading-font, font),
size: ifnone(heading-fontsize, fontsize),
fill: heading-fontfill,
)
// Display the bibliography, if supplied
if bibliography-file != none {
show bibliography: set text(fontsize * 0.8)
show bibliography: pad.with(x: fontsize * 0.4)
bibliography(bibliography-file)
}
if title != none {
align(title-align)[#block(inset: title-inset)[
#text(
font: ifnone(title-font, font),
weight: title-weight,
size: title-fontsize,
fill: title-fontfill,
)[#title]
]]
}
if authors != none {
align(title-align)[#block(inset: title-inset)[
#text()[#author-string]
]]
}
if date != none {
align(title-align)[#block(inset: title-inset)[
#date
]]
}
if abstract != none {
if abstract-label == none {
abstract-label = "$labels.abstract$"
}
block(inset: title-inset)[
#text(weight: "semibold")[#abstract-label] #h(0.6em) #abstract
]
}
// Configure paragraph properties.
set par(
justify: justify,
first-line-indent: first-line-indent,
hanging-indent: hanging-indent,
linebreaks: linebreaks,
leading: leading,
)
show par: set block(spacing: spacing)
// Configure table of contents
if toc {
let toc_title = if toc_title == none {
auto
} else {
toc_title
}
block(above: 1.5em, below: 3em)[
#outline(title: toc_title, indent: toc_indent, depth: toc_depth);
]
}
// List of figures
if lof {
let lof_title = if lof_title == none {
auto
} else {
lof_title
}
block(above: 1em, below: 2em)[
#outline(title: lof_title, target: figure.where(kind: "quarto-float-fig"))
]
}
// List of tables
if lot {
let lot_title = if lot_title == none {
auto
} else {
lot_title
}
block(above: 1em, below: 2em)[
#outline(title: lot_title, target: figure.where(kind: "quarto-float-tbl"))
]
}
// Configure lists
set enum(
indent: list-indent,
numbering: list-numbering,
body-indent: list-body-indent,
)
set list(
indent: list-indent,
// tight: list-tight,
// spacing: list-spacing,
body-indent: list-body-indent,
)
// Configure columns
if cols == 1 {
doc
} else {
columns(cols, gutter: (gutter), doc)
}
}
#set table(
inset: 6pt,
stroke: none
)
https://github.com/OverflowCat/BUAA-Digital-Image-Processing-Sp2024 | https://raw.githubusercontent.com/OverflowCat/BUAA-Digital-Image-Processing-Sp2024/master/expt03/findContours.typ | typst | 假设输入图像为 $F= f_(i j)$,将初始 NBD(Next Boundary
Descriptor)设为 1(将 F
的#strong[框架];看作第一个边界)。使用光栅扫描法扫描图像
F,当扫描到某个像素点 $(i , j)$ 的灰度值 $f_(i j) eq.not 0$
时执行下面的步骤。每次当我们扫描到图像的新行的起始位置时,将
#strong[LNBD];(Last Next Boundary Descriptor)重置为 1。
+ 根据以下情况选择一种操作:
- 如果 $f_(i j) = 1$ 并且 $f_(i , j - 1) = 0$,则 $(i , j)$
是#strong[外边界的开始点];,NBD 增加 1,并且将
$(i_2 , j_2) arrow.l (i , j - 1)$。
- 如果 $f_(i j) gt.eq 1$ 并且 $f_(i , j + 1) = 0$,则 $(i , j)$
是#strong[孔边界的开始点];,NBD 增加 1,并且将
$(i_2 , j_2) arrow.l (i , j + 1)$。如果 $1 < f_(i j)$,则
$L N B D arrow.l f_(i j)$。
- 其他情况,跳到步骤 4。
+ 根据上一个边界 $B prime$ 和当前新遇到边界 $B$ 的类型,选择 B
的父边界。
+ 从#strong[边界开始点] $(i , j)$ 开始,按以下步骤进行边界跟踪:
- 以 $(i , j)$ 为中心,$(i_2 , j_2)$ 为起始点,按顺时针方向查找
$(i , j)$ 的 4(或
8)邻域是否存在非零像素点。如果找到非零像素点,则令 $(i_1 , j_1)$
是顺时针方向的第一个非零像素点;否则令 $f_(i j) = - N B D$,跳到步骤
4。
- 更新
$(i_2 , j_2) arrow.l (i_1 , j_1)$,$(i_3 , j_3) arrow.l (i , j)$。
- 以 $(i_3 , j_3)$ 为中心,从 $(i_2 , j_2)$
的下一个点开始,按逆时针方向查找 $(i_3 , j_3)$ 的 4(或
8)邻域是否存在非零像素点。找到的第一个非零像素点为 $(i_4 , j_4)$。
- 更新 $f_(i_3 , j_3)$ 的值:
- 如果 $(i_3 , j_3 + 1)$ 是在上述步骤中已经检查过的像素点并且是 0
像素点,则 $f_(i_3 , j_3) arrow.l - N B D$。
- 如果 $(i_3 , j_3 + 1)$ 不是上述步骤中已经检查过的 0 像素点,并且
$f_(i_3 , j_3) = 1$,则 $f_(i_3 , j_3) arrow.l N B D$。
- 其他情况,不改变 $f_(i_3 , j_3)$。
- 如果 $(i_4 , j_4) = (i , j)$ 并且
$(i_3 , j_3) = (i_1 , j_1)$(即回到了边界开始点),跳到步骤
4;否则更新
$(i_2 , j_2) arrow.l (i_3 , j_3)$,$(i_3 , j_3) arrow.l (i_4 , j_4)$,返回上述步骤。
+ 如果 $f_(i j) eq.not 1$,则 $L N B D arrow.l lr(|f_(i j)|)$。继续从点
$(i , j + 1)$ 进行光栅扫描。当扫描到图像的右下角顶点时结束。
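This border-following procedure is the algorithm underlying OpenCV's `findContours`; a minimal usage sketch (assuming OpenCV 4.x, where the function returns the contours and the border hierarchy):

```python
import cv2
import numpy as np

# Binary input image: nonzero pixels are foreground.
img = np.zeros((100, 100), dtype=np.uint8)
cv2.rectangle(img, (20, 20), (80, 80), 255, -1)  # filled square -> outer border
cv2.rectangle(img, (40, 40), (60, 60), 0, -1)    # punched hole -> hole border

contours, hierarchy = cv2.findContours(img, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
# hierarchy[0][i] = [next, previous, first_child, parent] encodes the parent
# relation determined in step 2 of the algorithm above.
print(len(contours), hierarchy)
```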
https://github.com/caro3dc/typststuff | https://raw.githubusercontent.com/caro3dc/typststuff/main/jx-date.typ | typst | #let sadave = datetime.today().display("[day padding:none] [month repr:short] [year repr:full]");
#let daycolours = (
rgb("#93C5FD50"),
rgb("#FCA5A550"),
rgb("#86EFAC50"),
rgb("#FDE04750"),
rgb("#67E8F950"),
rgb("#F0ABFC50"),
rgb("#D4D4D450"),
)
#let wdn = datetime.today().weekday() - 1;
#let weekdays = ("M", "T", "W", "H", "F", "R", "S");
#let sadaveVerbose = sadave + " " + weekdays.at(wdn);
https://github.com/dldyou/Operation-System | https://raw.githubusercontent.com/dldyou/Operation-System/main/typst/main.typ | typst | #import "template.typ": *
#show: project.with(title: title, authors: authors,)
#let img(src, width:100%) = {
figure(
image("img/" + src + ".png", width: width)
)
}
= 14-Concurrency and Threads
== Threads
- Multi-threaded 프로그램
- 스레드 하나의 상태는 프로세스의 상태와 매우 비슷하다.
- 각 스레드는 그것의 PC(Program Counter)와 private한 레지스터를 가지고 있다.
- 스레드 당 하나의 스택을 가지고 있다.
- 같은 address space를 공유하므로 같은 데이터를 접근할 수 있다.
- Context Switch
- Thread Control Block (TCB)
- 같은 address space에 남아있다. (switch를 하는데 page table이 필요하지 않음)
#img("image")
- 사용하는 이유
- 병렬성
- Multiple CPUs
- Blocking 회피
- 느린 I/O
- 프로그램에서 하나의 스레드가 기다리는 동안(I/O 작업을 위해 blocked 되어), CPU 스케줄러가 다른 스레드를 실행시킬 수 있다.
- 많은 현대 서버 기반 어플리케이션은 멀티스레드를 사용하도록 구현되어 있다.
- 웹 서버, 데이터베이스 관리 시스템, ...
=== Thread Create
#prompt(```c
void *mythread(void *arg)
{
printf("%s\n", (char *) arg);
return NULL;
}
int main(int argc, char *argv[])
{
pthread_t p1, p2;
int rc;
printf("main: begin\n");
rc = pthread_create(&p1, NULL, mythread, "A"); assert(rc == 0);
rc = pthread_create(&p2, NULL, mythread, "B"); assert(rc == 0);
// join waits for the threads to finish
rc = pthread_join(p1, NULL); assert(rc == 0);
rc = pthread_join(p2, NULL); assert(rc == 0);
printf("main: end\n");
return 0;
}
```)
- 실행 가능한 순서
#img("image-1")
- 공유 데이터
#prompt(```c
static volatile int counter = 0;
void * mythread(void *arg)
{
int i;
printf("%s: begin\n", (char *) arg);
for (i = 0; i < 1e7; i++) {
counter = counter + 1;
}
printf("%s: done\n", (char *) arg);
return NULL;
}
int main(int argc, char *argv[])
{
pthread_t p1, p2;
printf("main: begin (counter = %d)\n", counter);
pthread_create(&p1, NULL, mythread, “A”);
pthread_create(&p2, NULL, mythread, "B");
pthread_join(p1, NULL);
pthread_join(p2, NULL);
printf("main: done with both (counter = %d)\n", counter);
return 0;
}
```)
- 실행 결과
- counter 값이 2e7이 아닌 다른 값이 나올 수 있다.
#prompt(```bash
main: done with both (counter = 20000000)
main: done with both (counter = 19345221)
main: done with both (counter = 19221041)
```)
=== Race Condition
#img("image-2")
=== Critical Section
- Critical Section
- 공유된 자원에 접근하는 코드 영역 (공유 변수)
- 둘 이상의 스레드에 의해 동시에 실행되어서는 안 된다.
- Mutual Exclusion
- 한 스레드가 critical section에 들어가면 다른 스레드는 들어갈 수 없다.
=== Atomicity
- Atomic
- 한 번에 실행되어야 하는 연산
- 하나의 명령이 시작되었다면 해당 명령이 종료될 때까지 다른 명령이 시작되어서는 안 된다.
- synchronizaion을 어떻게 보장하는지
- 하드웨어 지원 (atomic instructions)
- Atomic memory add -> 있음
- Atomic update of B-tree -> 없음
- OS는 이러한 명령어들에 따라 일반적인 동기화 primitive 집합을 구현한다.
=== Mutex
위의 Atomicity를 보장하기 위해 Mutex를 사용한다.
- Initialization
- Static: `pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;`
- Dynamic: `pthread_mutex_init(&lock, NULL);`
- Destory
- `pthread_mutex_destroy();`
- Condition Variables
- `int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);`
- 조건이 참이 될 때까지 대기하는 함수
- `pthread_mutex_lock`으로 전달할 mutex을 잠근 후에 호출되어야 한다.
- `int pthread_cond_signal(pthread_cond_t *cond);`
- 대기 중인 스레드에게 signal을 보내는 함수
- `pthread_cond_wait`로 대기 중인 스레드 중 하나를 깨운다.
- 외부를 lock과 unlock으로 감싸줘야 한다.
- 두 스레드를 동기화
#prompt(```c
while (read == 0)
; // spin
```)
#prompt(```c
ready = 1;
```)
- 오랜 시간 spin하게 되어 CPU 자원을 낭비하게 된다.
- 오류가 발생하기 쉽다.
- 현대 하드웨어의 메모리 consistency 모델 취약성
- 컴파일러 최적화
#prompt(```c
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
pthread_mutex_lock(&lock);
while (ready == 0)
pthread_cond_wait(&cond, &lock);
pthread_mutex_unlock(&lock);
```)
#prompt(```c
pthread_mutex_lock(&lock);
ready = 1;
pthread_cond_signal(&cond);
pthread_mutex_unlock(&lock);
```)
- `#include <pthread.h>` 컴파일 시 `gcc -o main main.c -Wall -pthread` 와 같이 진행
= 15-Locks
== Pthread Locks
#prompt(```c
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
...
pthread_mutex_lock(&lock);
counter = counter + 1; // critical section
pthread_mutex_unlock(&lock);
```)
- Lock을 어떻게 설계해야 할까?
- 하드웨어 / OS 차원에서의 지원이 필요한가?
=== Evaluting Locks
- 상호 배제(Mutual Exclution)
- 둘 이상의 스레드가 동시에 critical section에 들어가는 것을 방지
- 공평(Fairness)
- lock을 두고 경쟁할 때, lock이 free가 되었을 때, lock을 얻는 기회가 공평함
- 성능(Performance)
- lock을 사용함으로써 생기는 오버헤드
- 스레드의 수
- CPU의 수
=== Controlling Interrupts
#prompt(```c
void lock() {
DisableInterrupts();
}
void unlock() {
EnableInterrupts();
}
```)
- 이러한 모델은 간단하지만 많은 단점이 있음
- thread를 호출하는 것이 반드시 privileged operation으로 수행되어야 함
- 멀티프로세서 환경에서 작동하지 않음
- 인터럽트가 손실될 수 있음
- 한정된 contexts에서만 사용될 수 있음
- 지저분한 인터럽트 처리 상황을 방지하기 위해
== Support for Locks
=== Hardware Support
- 간단한 방법으로는 `yield()`(본인이 ready큐로, 즉 CPU자원을 포기한다고 함)를 사용할 수 있음
- 그러나 여전히 비용이 높고 공평하지 않음
- RR에 의해 스케줄 된 많은 스레드가 있는 상황을 고려해보자
#prompt(```c
void lock(lock_t *lock) {
while (TestAndSet(&lock->flag, 1) == 1)
yield();
}
```)
- 하드웨어 만으로는 상호 배제 및 공평성만 해결 할 수 있었음
- 성능 문제는 여전히 존재 -> OS의 도움이 필요
==== Spin Locks
===== Loads / Stores
#prompt(```c
typedef struct __lock_t { int flag; } lock_t;
void init(lock_t *mutex) {
// 0 -> lock is available, 1 -> held
mutex->flag = 0;
}
void lock(lock_t *mutex) {
while (mutex->flag == 1) // TEST the flag
; // spin-wait (do nothing)
mutex->flag = 1; // now SET it!
}
void unlock(lock_t *mutex) {
mutex->flag = 0;
}
```)
- 상호 배제가 없음
- thread 1에서 lock()을 호출하고 while(flag == 1)에서 1이 아니구나 하고 빠져나갈 때 context switch가 일어남
- thread 2에서 lock()을 호출하고 while(flag == 1)에서 1이 아니구나 하고 빠져나가서 flag = 1로 만듦
- context switch가 일어나 thread 1이 다시 돌아와서 flag = 1이 됨
- 두 스레드 모두 lock을 얻게 됨
- 성능 문제
- spin-wait으로 인한 CPU 사용량이 많아짐
===== Test-and-Set
- Test-and-Set atomic instruction
#prompt(```c
int TestAndSet(int *old_ptr, int new) {
int old = *old_ptr; // fetch old value at old_ptr
*old_ptr = new; // store ’new’ into old_ptr
return old; // return the old value
}
typedef struct __lock_t { int flag; } lock_t;
void init(lock_t *lock) {
lock->flag = 0;
}
void lock(lock_t *lock) {
while (TestAndSet(&lock->flag, 1) == 1);
}
void unlock(lock_t *mutex) {
mutex->flag = 0;
}
```)
- 공평하지 않음 (starvation이 발생할 수 있음)
- 단일 CPU에서 오버헤드가 굉장히 클 수 있음
===== Compare-and-Swap
- Compare-and-Swap atomic instruction
#prompt(```c
int CompareAndSwap(int *ptr, int expected, int new) {
int actual = *ptr;
if (actual == expected)
*ptr = new;
return actual;
}
void lock(lock_t *lock) {
while (CompareAndSwap(&lock->flag, 0, 1) == 1);
}
```)
- Test-and-Set과 동일하게 동작함
==== Ticket Locks
- Fetch-and-Add atomic instruction
- 번호표 발급으로 생각하면 됨
#prompt(```c
int FetchAndAdd(int *ptr) {
int old = *ptr;
*ptr = old + 1;
return old;
}
typedef struct __lock_t {
int ticket;
int turn;
} lock_t;
void lock_init(lock_t *lock) {
lock->ticket = 0;
lock->turn = 0;
}
void lock(lock_t *lock) {
int myturn = FetchAndAdd(&lock->ticket);
while (lock->turn != myturn);
}
void unlock(lock_t *lock) {
lock->turn = lock->turn + 1;
}
```)
- fairness 하게 됨
=== OS Support
- spin을 하는 대신 sleep을 함
- Solaris
- `park()`: 호출한 스레드를 sleep 상태로 만듦
- `unpark(threadID)`: `threadID`의 스레드를 깨움
- Linux
- `futex_wait(address, expected)`: address가 expected랑 같다면 sleep 상태로 만듦
- `futex_wake(address)`: queue에서 스레드 하나를 깨움
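- 아래는 위의 futex 인터페이스만으로 만든 단순화한 lock 스케치이다 (실제 Linux 구현은 lock 상태와 대기자 수를 하나의 정수에 함께 인코딩하므로, 이 코드는 개념 설명을 위한 가정이다)
#prompt(```c
int flag = 0; // 0: free, 1: held

void lock() {
    while (TestAndSet(&flag, 1) == 1)
        futex_wait(&flag, 1); // flag가 여전히 1이면 잠듦
}

void unlock() {
    flag = 0;
    futex_wake(&flag); // 대기 중인 스레드 하나를 깨움
}
```)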
==== Locks with Queues (Hardware + OS Support)
#prompt(```c
typedef struct __lock_t {
int flag; // lock
int guard; // spin-lock around the flag and
// queue manipulations
queue_t *q;
} lock_t;
void lock_init(lock_t *m) {
m->flag = 0;
m->guard = 0;
queue_init(m->q);
}
void lock(lock_t *m) {
while (TestAndSet(&m->guard, 1) == 1);
if (m->flag == 0) {
m->flag = 1; // lock is acquired
m->guard = 0;
}
else {
queue_add(m->q, gettid());
m->guard = 0;
park(); // wakeup/waiting race
}
}
void unlock(lock_t *m) {
while (TestAndSet(&m->guard, 1) == 1);
if (queue_empty(m->q))
m->flag = 0;
else
unpark(queue_remove(m->q));
m->guard = 0;
}
```)
setpark를 미리 불러주는 모습을 볼 수 있음
#prompt(```c
void lock(lock_t *m) {
while (TestAndSet(&m->guard, 1) == 1);
if (m->flag == 0) {
m->flag = 1; // lock is acquired
m->guard = 0;
}
else {
queue_add(m->q, gettid());
setpark(); // another thread calls unpark before
m->guard = 0; // park is actually called, the
park(); // subsequent park returns immediately
}
}
void unlock(lock_t *m) {
while (TestAndSet(&m->guard, 1) == 1);
if (queue_empty(m->q))
m->flag = 0;
else
unpark(queue_remove(m->q));
m->guard = 0;
}
```)
= 16-Lock-Based Concurrent Data Structures
- Correctness
- 올바르게 작동하려면 lock을 어떻게 추가해야 할까? (어떻게 thread safe하게 만들 수 있을까?)
- Concurrency
- 자료구조가 높은 성능을 발휘하고 많은 스레드가 동시에 접근할 수 있도록 하려면 lock을 어떻게 추가해야 할까?
== Counter
=== Concurrent Counters
#prompt(```c
typedef struct __counter_t {
int value;
pthread_mutex_t lock;
} counter_t;
void init(counter_t *c) {
c->value = 0;
pthread_mutex_init(&c->lock, NULL);
}
void increment(counter_t *c) {
pthread_mutex_lock(&c->lock);
c->value++;
pthread_mutex_unlock(&c->lock);
}
void decrement(counter_t *c) {
pthread_mutex_lock(&c->lock);
c->value--;
pthread_mutex_unlock(&c->lock);
}
int get(counter_t *c) {
pthread_mutex_lock(&c->lock);
int rc = c->value;
pthread_mutex_unlock(&c->lock);
return rc;
}
```)
- 간단하게 생각해보면 이렇게 구현할 수 있을 것이다. 그러나 매 count마다 lock을 걸어줘야 하므로 concurrency가 떨어진다.
=== Sloppy Counters
- Logical counter
- Local counter가 각 CPU 코어마다 존재
- Global counter
- Locks (각 local counter마다 하나, global counter에도 하나)
- 기본 아이디어
- 각 CPU 코어마다 local counter를 가지고 있다가 global counter에 값을 옮기는 방식
- 이는 일정 주기마다 이루어짐
- global counter에 값을 옮기는 동안 lock을 걸어서 다른 코어가 접근하지 못하도록 함
#prompt(```c
typedef struct __counter_t {
int global;
pthread_mutex_t glock;
int local[NUMCPUS];
pthread_mutex_t llock[NUMCPUS];
int threshold; // update frequency
} counter_t;
void init(counter_t *c, int threshold) {
c->threshold = threshold;
c->global = 0;
pthread_mutex_init(&c->glock, NULL);
int i;
for (i = 0; i < NUMCPUS; i++) {
c->local[i] = 0;
pthread_mutex_init(&c->llock[i], NULL);
}
}
void update(counter_t *c, int threadID, int amt) {
int cpu = threadID % NUMCPUS;
pthread_mutex_lock(&c->llock[cpu]); // local lock
c->local[cpu] += amt; // assumes amt>0
if (c->local[cpu] >= c->threshold) {
pthread_mutex_lock(&c->glock);// global lock
c->global += c->local[cpu];
pthread_mutex_unlock(&c->glock);
c->local[cpu] = 0;
}
pthread_mutex_unlock(&c->llock[cpu]);
}
int get(counter_t *c) {
pthread_mutex_lock(&c->glock); // global lock
int val = c->global;
pthread_mutex_unlock(&c->glock);
return val; // only approximate!
}
```)
== Concurrent Data Structures
=== Linked Lists
#img("image-3", width: 50%)
#img("image-4", width: 50%)
#prompt(```c
typedef struct __node_t {
int key;
struct __node_t *next;
} node_t;
typedef struct __list_t {
node_t *head;
pthread_mutex_t lock;
} list_t;
void List_Init(list_t *L) {
L->head = NULL;
pthread_mutex_init(&L->lock, NULL);
}
int List_Insert(list_t *L, int key) {
pthread_mutex_lock(&L->lock);
node_t *new = malloc(sizeof(node_t));
if (new == NULL) {
perror("malloc");
pthread_mutex_unlock(&L->lock);
return -1; // fail
}
new->key = key;
// mutex lock은 여기로 옮겨지는 것이 좋음 (critical section이 여기부터)
new->next = L->head;
L->head = new;
pthread_mutex_unlock(&L->lock);
return 0; // success
}
int List_Lookup(list_t *L, int key) {
pthread_mutex_lock(&L->lock);
node_t *curr = L->head;
while (curr) {
if (curr->key == key) {
pthread_mutex_unlock(&L->lock);
return 0; // success (그러나 ret = 0을 저장해놓고 break한 다음에 마지막에 return ret을 하는 것이 좋음 -> 버그 찾기 쉬움)
}
curr = curr->next;
}
pthread_mutex_unlock(&L->lock);
return -1; // failure
}
```)
==== Scaling Linked Lists
- Hand-over-hand locking (lock coupling)
- 각 노드에 대해 lock을 추가 (전체 list에 대한 하나의 lock을 갖는 대신)
- list를 탐색할 때, 다음 노드의 lock을 얻고 현재 노드의 lock을 해제
- 각 노드에 대해 lock을 얻고 해제하는 오버헤드 존재
- Non-blocking linked list
- compare-and-swap(CAS) 이용
#prompt(```c
void List_Insert(list_t *L, int key) {
...
RETRY: next = L->head;
new->next = next;
if (CAS(&L->head, next, new) == 0)
goto RETRY;
}
```)
=== Queues
#prompt(```c
typedef struct __node_t {
int value;
struct __node_t *next;
} node_t;
typedef struct __queue_t {
node_t *head; // out
node_t *tail; // in
pthread_mutex_t headLock;
pthread_mutex_t tailLock;
} queue_t;
void Queue_Init(queue_t *q) {
node_t *tmp = malloc(sizeof(node_t)); // dummy node (head와 tail 연산의 분리를 위해)
tmp->next = NULL;
q->head = q->tail = tmp;
pthread_mutex_init(&q->headLock, NULL);
pthread_mutex_init(&q->tailLock, NULL);
}
```)
#img("image-5", width: 50%)
#prompt(```c
void Queue_Enqueue(queue_t *q, int value) {
node_t *tmp = malloc(sizeof(node_t));
assert(tmp != NULL);
tmp->value = value;
tmp->next = NULL;
pthread_mutex_lock(&q->tailLock);
q->tail->next = tmp;
q->tail = tmp;
pthread_mutex_unlock(&q->tailLock);
}
```)
#img("image-6", width: 50%)
- 길이가 제한된 큐에서는 제대로 작동하지 않음, 조건 변수에 대해서는 다음 장에서 다루게 될 예정
#prompt(```c
int Queue_Dequeue(queue_t *q, int *value) {
pthread_mutex_lock(&q->headLock);
node_t *tmp = q->head;
node_t *newHead = tmp->next;
if (newHead == NULL) {
pthread_mutex_unlock(&q->headLock);
return -1; // queue was empty
}
*value = newHead->value;
q->head = newHead;
pthread_mutex_unlock(&q->headLock);
free(tmp);
return 0;
}
```)
#img("image-7")
=== Hash Table
#prompt(```c
#define BUCKETS (101)
typedef struct __hash_t {
list_t lists[BUCKETS]; // 앞에서 본 list_t를 사용
} hash_t;
void Hash_Init(hash_t *H) {
int i;
for (i = 0; i < BUCKETS; i++)
List_Init(&H->lists[i]);
}
int Hash_Insert(hash_t *H, int key) {
int bucket = key % BUCKETS;
return List_Insert(&H->lists[bucket], key);
}
int Hash_Lookup(hash_t *H, int key) {
int bucket = key % BUCKETS;
return List_Lookup(&H->lists[bucket], key);
}
```)
= 17-Condition Variables
스레드를 계속 진행하기 전에 특정 조건이 true가 될 때까지 기다리는 것이 유용한 경우가 많다. 그러나, condition이 true가 될 때까지 그냥 spin만 하는 것은 CPU cycle을 낭비하게 되고 이것은 부정확할 수 있다.
#prompt(```c
volatile int done = 0;
void *child(void *arg) {
printf("child\n");
done = 1;
return NULL;
}
int main(int argc, char *argv[]) {
pthread_t c;
printf("parent: begin\n");
pthread_create(&c, NULL, child, NULL); // create child
while (done == 0); // spin
printf("parent: end\n");
return 0;
}
```)
== Condition Variable
- condition 변수는 명시적인 대기열과도 같다.
- 스레드는 일부 상태(즉, 일부 condition)가 원하는 것과 다를 때 대기열에 들어갈 수 있다.
- 몇몇 스레드는 상태가 변경되면, 대기열에 있는 스레드 중 하나(또는 그 이상)를 깨워 진행되도록 할 수 있다.
- `pthread_cond_wait();`
- 스레드가 자신을 sleep 상태로 만들려고 할 때 사용
- lock을 해제하고 호출한 스레드를 sleep 상태로 만든다. (atomic하게)
- 스레드가 깨어나면 반환하기 전에 lock을 다시 얻는다.
- `pthread_cond_signal();`
- 스레드가 프로그램에서 무언가를 변경하여 sleep 상태인 스레드를 깨우려고 할 때 사용
#prompt(```c
int done = 0;
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t c = PTHREAD_COND_INITIALIZER;
void *child(void *arg) {
printf("child\n");
thr_exit();
return NULL;
}
int main(int argc, char *argv[]) {
pthread_t p;
printf("parent: begin\n");
pthread_create(&p, NULL, child, NULL);
thr_join();
printf("parent: end\n");
return 0;
}
```)
#prompt(```c
void thr_exit() {
pthread_mutex_lock(&m);
done = 1;
pthread_cond_signal(&c);
pthread_mutex_unlock(&m);
}
void thr_join() {
pthread_mutex_lock(&m);
while (done == 0)
pthread_cond_wait(&c, &m);
pthread_mutex_unlock(&m);
}
```)
- 만약 여기서 상태 변수인 `done`이 없으면?
- child가 바로 실행되고 thr_exit()을 호출하면?
- child가 signal을 보내지만 그 상태에서 잠들어 있는 스레드가 없다.
- 만약 lock이 없다면?
- child가 parent가 wait을 실행하기 직전에 signal을 보내면?
- waiting 상태에 있는 스레드가 없으므로 깨어나는 스레드가 없다.
=== Producer / Consumer Problem
- Producers
- 데이터를 생성하고 그들을 (제한된) 버퍼에 넣는다.
- Consumers
- 버퍼에서 데이터를 가져와서 그것을 소비한다.
- 예시
- Pipe
- `grep foo file.txt | wc -l`
- Web server
- 제한된 버퍼가 공유 자원이기에 당연히 이에 대한 동기화된 접근이 필요하다.
#prompt(```c
int buffer; // single buffer
int count = 0; // initially, empty
void put(int value) {
assert(count == 0);
count = 1;
buffer = value;
}
int get() {
assert(count == 1);
count = 0;
return buffer;
}
```)
#prompt(```c
cond_t cond;
mutex_t mutex;
void *producer(void *arg) {
int i;
for (i = 0; i < loops; i++) {
pthread_mutex_lock(&mutex); // p1
if (count == 1) // p2
pthread_cond_wait(&cond, &mutex); // p3
put(i); // p4
pthread_cond_signal(&cond); // p5
pthread_mutex_unlock(&mutex); // p6
}
}
```)
#prompt(```c
void *consumer(void *arg) {
int i;
for (i = 0; i < loops; i++) {
pthread_mutex_lock(&mutex); // c1
if (count == 0) // c2
pthread_cond_wait(&cond, &mutex); // c3
int tmp = get(); // c4
pthread_cond_signal(&cond); // c5
pthread_mutex_unlock(&mutex); // c6
printf("%d\n", tmp);
}
}
```)
- 단일 producer와 단일 consumer로 진행한다고 하자.
#img("image-8")
- 위 그림에서 알 수 있듯이 $T_(c 1)$가 다시 깨어나 실행될 때 state가 여전히 원하던 값이라는 보장이 없다.
- count를 체크하는 부분을 if문에서 while문으로 바꾸어주면 아래와 같이 돌아간다.
#img("image-9")
- consumer는 다른 consumer를 깨우면 안 되고, producer만 깨우면 되고, 반대의 경우도 마찬가지이다. 위의 경우는 그것이 안 지켜져서 모두가 잠들어버린 상황이다.
- 이는 condition 변수를 하나를 사용하기에 발생하는 문제이다. (같은 큐에 잠들기에 producer를 깨우고자 했으나 다른 결과를 야기할 수 있음)
- `p3`의 cv를 `&empty`로 `p5`의 cv를 `&fill`로
- `c3`의 cv를 `&fill`로 `c5`의 cv를 `&empty`로
#prompt(```c
int buffer[MAX];
int fill_ptr = 0;
int use_ptr = 0;
int count = 0;
void put(int value) {
buffer[fill_ptr] = value;
fill_ptr = (fill_ptr + 1) % MAX;
count++;
}
int get() {
int tmp = buffer[use_ptr];
use_ptr = (use_ptr + 1) % MAX;
count--;
return tmp;
}
```)
- 이와 같이 버퍼를 만들고 producer에서 `count == MAX`로 바꾸어주면 동시성과 효율성을 챙길 수 있다.
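- 위 수정 사항(두 개의 condition 변수, while 재검사, MAX 크기 버퍼)을 모두 반영한 producer 스케치는 다음과 같다 (OSTEP의 최종 해법을 단순화한 것으로, consumer는 대칭적으로 `fill`을 기다리고 `empty`에 signal을 보낸다)
#prompt(```c
cond_t empty, fill;
mutex_t mutex;

void *producer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        pthread_mutex_lock(&mutex);
        while (count == MAX)                // if가 아닌 while로 재검사
            pthread_cond_wait(&empty, &mutex);
        put(i);
        pthread_cond_signal(&fill);         // consumer만 깨움
        pthread_mutex_unlock(&mutex);
    }
}
```)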
- Covering Conditions
- `pthread_cond_broadcast()`
- 대기 중인 모든 스레드를 깨운다.
= 18-Semaphores
- 세마포어는 lock이나 condition 변수를 통해 사용할 수 있다.
- POSIX Semaphores
- `int sem_init(sem_t *s, int pshared, unsigned int value);`
- pshared가 0이면 프로세스 내에서만 사용 가능하고, 1이면 프로세스 간에도 사용 가능하지만, 공유 메모리에 있어야 한다.
- `int sem_wait(sem_t *s);`
- 세마포어 값을 감소시키고, 값이 0보다 작으면 대기한다.
- `int sem_post(sem_t *s);`
- 세마포어 값을 증가시킨다.
- 만약 대기 중인 스레드가 있다면 하나를 깨운다.
- Binary Semaphores (lock이랑 비슷함)
#prompt(```c
sem_t m;
sem_init(&m, 0, 1);
sem_wait(&m);
// critical section here
sem_post(&m);
```)
#img("image-10")
- Semaphores for Ordering
세마포어를 사용해 스레드간의 순서를 정할 수 있다.
#prompt(```c
sem_t s;
void * child(void *arg) {
printf("child\n");
sem_post(&s);
return NULL;
}
int main(int argc, char *argv[]) {
pthread_t c;
sem_init(&s, 0, X); // what should X be?
printf("parent: begin\n");
pthread_create(&c, NULL, child, NULL);
sem_wait(&s);
printf("parent: end\n");
return 0;
}
```)
- X는 0이어야 한다. 그래야 다음 `sem_wait`이 바로 실행되더라도 세마포어 값이 음수가 되며 잠들 수 있고, child가 먼저 실행되어 post를 실행하여 세마포어 값이 1이 되고 `sem_wait`가 실행되더라도 잠에 들지 않아 deadlock이 발생하지 않는다.
#img("image-11")
== Producer / Consumer Problem
#prompt(```c
int buffer[MAX]; // bounded buffer
int fill = 0;
int use = 0;
void put(int value) {
buffer[fill] = value;
fill = (fill + 1) % MAX;
}
int get() {
int tmp = buffer[use];
use = (use + 1) % MAX;
return tmp;
}
sem_t empty, sem_t full;
void *producer(void *arg) {
int i;
for (i = 0; i < loops; i++) {
sem_wait(&empty);
put(i);
sem_post(&full);
}
}
void *consumer(void *arg) {
int i, tmp = 0;
while (tmp != -1) {
sem_wait(&full);
tmp = get();
sem_post(&empty);
printf("%d\n", tmp);
}
}
int main(int argc, char *argv[]) {
// ...
sem_init(&empty, 0, MAX); // MAX are empty
sem_init(&full, 0, 0); // 0 are full
// ...
}
```)
- Race Condition이 발생한다.
- 생산자와 소비자가 여럿인 경우 `put()`과 `get()`에서 race condition이 발생한다.
#prompt(```c
void *producer(void *arg) {
int i;
for (i = 0; i < loops; i++) {
sem_wait(&mutex); // 2
sem_wait(&empty);
put(i);
sem_post(&full);
sem_post(&mutex);
}
}
void *consumer(void *arg) {
int i;
for (i = 0; i < loops; i++) {
sem_wait(&mutex);
sem_wait(&full); // 1
int tmp = get();
sem_post(&empty);
sem_post(&mutex);
}
}
```)
- 이렇게 mutex를 추가하면 deadlock이 발생한다.
- 소비자가 먼저 실행되어 wait에 의해 mutex를 0으로 감소시키고 1까지 실행되어 sleep을 하게 된다.
- 생산자가 실행되고, wait에 의해 mutex가 -1이 되어 잠들게 된다.
- 둘 다 잠들어버리게 되어 deadlock이 발생한다.
- *mutex를 모두 안쪽으로 옮겨주면 해결된다.*
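- mutex를 안쪽으로 옮긴 producer는 다음과 같다 (consumer도 같은 방식으로 `sem_wait(&full)` 이후에 mutex를 잡으면 된다)
#prompt(```c
void *producer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        sem_wait(&empty);
        sem_wait(&mutex);  // lock을 critical section 안쪽으로
        put(i);
        sem_post(&mutex);
        sem_post(&full);
    }
}
```)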
== Reader / Writer Locks
- Reader
- `rwlock_acquire_readlock()`
- `rwlock_release_readlock()`
- Writer
- `rwlock_acquire_writelock()`
- `rwlock_release_writelock()`
#prompt(```c
typedef struct _rwlock_t {
// binary semaphore (basic lock)
sem_t lock;
// used to allow ONE writer or MANY readers
sem_t writelock;
// count of readers reading in critical section
int readers;
} rwlock_t;
void rwlock_init(rwlock_t *rw) {
rw->readers = 0;
sem_init(&rw->lock, 0, 1);
sem_init(&rw->writelock, 0, 1);
}
void rwlock_acquire_writelock(rwlock_t *rw) {
sem_wait(&rw->writelock);
}
void rwlock_release_writelock(rwlock_t *rw) {
sem_post(&rw->writelock);
}
void rwlock_acquire_readlock(rwlock_t *rw) {
sem_wait(&rw->lock);
rw->readers++;
if (rw->readers == 1)
// first reader acquires writelock
sem_wait(&rw->writelock);
sem_post(&rw->lock);
}
void rwlock_release_readlock(rwlock_t *rw) {
sem_wait(&rw->lock);
rw->readers--;
if (rw->readers == 0)
// last reader releases writelock
sem_post(&rw->writelock);
sem_post(&rw->lock);
}
```)
- reader에게 유리함 (writer가 굶을 수 있음)
== How To Implement Semaphores
#prompt(```c
typedef struct __Sem_t {
int value;
pthread_cond_t cond;
pthread_mutex_t lock;
} Sem_t;
// only one thread can call this
void Sem_init(Sem_t *s, int value) {
s->value = value;
Cond_init(&s->cond);
Mutex_init(&s->lock);
}
void Sem_wait(Sem_t *s) {
Mutex_lock(&s->lock);
while (s->value <= 0)
Cond_wait(&s->cond, &s->lock);
s->value--;
Mutex_unlock(&s->lock);
}
void Sem_post(Sem_t *s) {
Mutex_lock(&s->lock);
s->value++;
Cond_signal(&s->cond);
Mutex_unlock(&s->lock);
}
```)
- 원래 구현: 값이 음수인 경우 대기 중인 스레드의 수를 반영
- Linux: 값은 0보다 낮아지지 않음
= 19-Common Concurrency Problems
== Concurrency Problems
- Non-deadlock 버그
- Atomicity 위반
- 순서 위반
- deadlock bug
=== Atomicity-Violation
- 메모리 영역에 대해 여러 개의 스레드가 동시에 접근할 때 serializable 해서 race condition이 발생하지 않을 것이라 예상하지만 그렇지 않은 경우가 있다.
- MySQL 버그
#prompt(```c
Thread 1:
if (thd->proc_info) {
...
fputs(thd->proc_info, ...);
...
}
Thread 2:
thd->proc_info = NULL;
```)
- Thread 1의 if문을 확인하고 들어왔으나 Thread 2가 값을 NULL로 바꾸어버리면서 fputs에서 비정상 종료가 된다.
- 해결 방법
#prompt(```c
pthread_mutex_t proc_info_lock = PTHREAD_MUTEX_INITIALIZER;
Thread 1:
pthread_mutex_lock(&proc_info_lock);
if (thd->proc_info) {
...
fputs(thd->proc_info, ...);
...
}
pthread_mutex_unlock(&proc_info_lock);
Thread 2:
pthread_mutex_lock(&proc_info_lock);
thd->proc_info = NULL;
pthread_mutex_unlock(&proc_info_lock);
```)
=== Order-Violation
- A -> B 스레드 순서로 실행되기를 바랬으나 다르게 실행되는 경우
- Mozilla 버그
#prompt(```c
Thread 1:
void init() {
...
mThread = PR_CreateThread(mMain, ...);
...
}
Thread 2:
void mMain(...) {
...
mState = mThread->State;
...
}
```)
- Thread 2가 생성되자마자 mState를 읽어버리면서 mThread가 초기화되기 전에 읽어버리는 문제가 발생한다. (Null 포인터를 접근하게 됨)
- 해결 방법
#prompt(```c
pthread_mutex_t mtLock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t mtCond = PTHREAD_COND_INITIALIZER;
int mtInit = 0;
Thread 1:
void init() {
...
mThread = PR_CreateThread(mMain, ...);
// signal that the thread has been created...
pthread_mutex_lock(&mtLock);
mtInit = 1;
pthread_cond_signal(&mtCond);
pthread_mutex_unlock(&mtLock);
...
}
Thread 2:
void mMain(...) {
...
// wait for the thread to be initialized...
pthread_mutex_lock(&mtLock);
while (mtInit == 0)
pthread_cond_wait(&mtCond, &mtLock);
pthread_mutex_unlock(&mtLock);
mState = mThread->State;
...
}
```)
=== Deadlock Bugs
- Circular Dependencies
#prompt(```c
Thread 1:
pthread_mutex_lock(L1);
pthread_mutex_lock(L2);
Thread 2:
pthread_mutex_lock(L2);
pthread_mutex_lock(L1);
```)
#img("image-12", width:50%)
- Thread 1은 L1을 먼저 잡고, Thread 2는 L2를 먼저 잡은 상태에서 서로를 기다리게 되어 deadlock이 발생한다.
- 왜 deadlock이 발생할까?
- 큰 코드 베이스에서는 컴포넌트 간의 의존성이 복잡함
- 캡슐화의 특징
- `Vector v1, v2`
- `Thread 1: v1.addAll(v2)`
- `Thread 2: v2.addAll(v1)`
=== Conditions for Deadlock
- Mutual Exclusion
- 한 번에 하나의 스레드만이 자원을 사용할 수 있음
- Hold and Wait
- 스레드가 자원을 가지고 있는 상태에서 다른 자원을 기다림
- No Preemption
- 스레드가 자원을 강제로 뺏을 수 없음
- Circular Wait
- 스레드 A가 스레드 B가 가지고 있는 자원을 기다리고, 스레드 B가 스레드 A가 가지고 있는 자원을 기다림
==== Deadlock Prevention
- Circular Wait
- lock acqustition 순서를 정함
- Hold and Wait
- 모든 자원을 한 번에 요청 (전체를 lock으로 한 번 감싸기)
- critical section이 커지는 문제가 발생할 수 있었음
- 미리 lock을 알아야 함
#prompt(```c
pthread_mutex_lock(prevention); // begin lock acquisition
pthread_mutex_lock(L1);
pthread_mutex_lock(L2);
...
pthread_mutex_unlock(prevention); // end
```)
- No Preemption
- `pthread_mutex_trylock()`: lock을 얻을 수 없으면 바로 반환
- 아래처럼 구현하면 livelock(deadlock처럼 모든 스레드가 lock을 얻지 못하고 멈췄는데, 코드는 돌아가고 있는 상태)이 발생할 수 있음
- random delay를 추가해 누군가는 acquire에 성공하도록 할 수 있음
- 획득한 자원이 있다면 반드시 해제해야 함
- lock이나 메모리...
#prompt(```c
top:
pthread_mutex_lock(L1);
if (pthread_mutex_trylock(L2) != 0) {
pthread_mutex_unlock(L1);
goto top;
}
```)
- Mutual Exclusion
- race condition을 없애기 위해 mutual exclusion을 사용함
- 그런데 이거를 없애야 하나? (X) -> lock을 안 쓴다는 것으로 이해하면 됨
- lock free 접근법 (atomic operation을 이용)
#prompt(```c
int CompareAndSwap(int *address, int expected, int new) {
if (*address == expected) {
*address = new;
return 1; // success
}
return 0; // failure
}
void AtomicIncrement(int *value, int amount) {
do {
int old = *value;
} while (CompareAndSwap(value, old, old + amount) == 0);
}
```)
#prompt(```c
void insert(int value) {
node_t *n = malloc(sizeof(node_t));
assert(n != NULL);
n->value = value;
n->next = head;
head = n;
}
void insert(int value) {
node_t *n = malloc(sizeof(node_t));
assert(n != NULL);
n->value = value;
pthread_mutex_lock(listlock);
n->next = head;
head = n;
pthread_mutex_unlock(listlock);
}
void insert(int value) {
node_t *n = malloc(sizeof(node_t));
assert(n != NULL);
n->value = value;
do {
n->next = head;
} while (CompareAndSwap(&head, n->next, n) == 0);
}
```)
= 20-I/O Devices and HDD
== System Architecture
- CPU / Main Memory
- (Memory Bus)
- (General I/O Bus(PCI))
- Graphics
- (주변기기 I/O Bus(SCSI, SATA, USB))
- HDD
== I/O Devices
- 인터페이스
- 시스템 소프트웨어로 작동을 제어할 수 있도록 함
- 모든 장치는 일반적인 상호작용을 위한 특정 인터페이스와 프로토콜이 있음
- 내부 구조
- 시스템에 제공하는 추상화된 구현
#img("image-13")
== Interrupts
- Interrupt로 CPU 오버헤드를 낮춤
- 장치를 반복적으로 polling 하는 대신 OS 요청을 날리고, 호출한 프로세스를 sleep 상태로 만들고 다른 작업으로 context switch를 함
- 장치가 최종적으로 작업을 마치면 하드웨어 interrupt를 발생시켜 CPU가 미리 결정된 interrupt service routine(ISR)에서 OS로 넘어가게 함
- Interrupts는 I/O 연산을 하는 동안 CPU를 다른 작업에 사용할 수 있게 함
== Direct Memory Access (DMA)
- DMA를 사용하면 더 효율적인 데이터 이동을 할 수 있다.
- DMA 엔진은 CPU 개입 없이 장치와 주메모리 간의 전송을 조율할 수 있는 장치이다.
- OS는 데이터가 메모리에 있는 위치와 복사할 위치를 알려주어 DMA 엔진을 프로그래밍한다.
- DMA가 완료되면 DMA 컨트롤러는 interrupt를 발생시킨다.
#img("image-14")
== Methods of Device Interaction
- I/O instructions
- `in` / `out` (x86)
- 장치에 데이터를 보내기 위해 호출자는 데이터가 포함된 레지스터와 장치 이름을 지정하는 특정 포트를 지정한다.
- 일반적으로 privileged instruction이다.
- Memory-mapped I/O
- 하드웨어는 마치 메모리 위치인 것처럼 장치 레지스터를 사용할 수 있게 만든다.
- 특정 레지스터에 접근하기 위해 OS는 주소를 읽거나 쓴다.
#img("image-15")
== HDD
- 기본 요소
- Platter
- 데이터가 지속적으로 저장되는 원형의 단단한 표면
- 디스크는 하나 또는 그 이상의 platters를 가진다. 각 platter는 `surface`라고 불리는 두 면을 가진다.
- Spindle
- platters를 일정한 속도로 회전시키는 모터를 연결
- 회전 속도는 RPM으로 측정된다. (7200 \~ 15000 RPM)
- Track
- 데이터는 각 구역(sector)의 동심원으로 각 표면에 인코딩된다. (512-byte blocks)
- Disk head and disk arm
- 읽기 및 쓰기는 디스크 헤드에 의해 수행된다. 드라이브 표면 당 하나의 헤드가 있다.
- 디스크 헤드는 단일 디스크 암에 부착되어 표면을 가로질러 이동하여 원하는 track 위에 헤드를 배치한다.
#img("image-16")
=== I/O Time
$T_(I \/ O) = T_("seek") + T_("rotation") + T_("transfer")$
- Seek time
- 디스크 암을 올바른 트랙으로 옮기는데 걸리는 시간
- Rotational delay
- 디스크가 올바른 섹터로 회전하는데 걸리는 시간
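- 예: 평균 seek이 7ms인 7200 RPM 디스크를 가정하면(수치는 설명을 위한 가정), 한 바퀴 회전에 $60 \/ 7200 approx 8.3 "ms"$가 걸리므로 평균 rotational delay는 그 절반인 약 $4.2 "ms"$이고, 4KB를 100MB/s로 전송하면 약 $0.04 "ms"$가 걸린다. 따라서 $T_(I \/ O) approx 7 + 4.2 + 0.04 approx 11.2 "ms"$가 된다.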
=== Disk Scheduling
- OS가 디스크로 날릴 I/O 요청들의 순서를 결정한다.
- I/O 요청의 집합이 주어지면, 디스크 스케줄러는 요청을 검사하고 다음에 무엇을 실행해야 하는지 결정한다.
- 요청: 98, 183, 37, 122, 14, 124, 65, 67 (Head: 53)
- *FCFS (First Come First Serve)*
- 98 -> 183 -> 37 -> 122 -> 14 -> 124 -> 65 -> 67
- *Elevator (SCAN or C-SCAN)*
- *SCAN*: 맨 앞으로 가면서 훑고 다시 순차로 가는 방식
- 37 -> 14 -> 65 -> 67 -> 98 -> 122 -> 124 -> 183
- *C-SCAN*: 현 위치부터 뒤로 쭉 가서 앞으로 나오는 원형 방식
- 65 -> 67 -> 98 -> 122 -> 124 -> 183 -> 14 -> 37
- *SPTF (Shortest Positioning Time First)*
- track과 sector를 고려하여 가장 가까운 것을 먼저 처리
- 현대 드라이브는 seek과 rotation 비용이 거의 동일하다.
- 아래 그림에서 rotation이 중요하면 8을 먼저 접근함 (디스크의 하드웨어 특성에 따라 달라짐)
#img("image-17", width: 70%)
= 21-Assignment 2: KURock
= 22-Files and Directions
== Abstractions for Storage
- 파일
- bytes의 선형 배열
- 각 파일은 low-level 이름을 가지고 있음 (`inode`)
- OS는 파일의 구조에 대해 별로 알지 못함 (그 파일이 사진인지, 텍스트인지, C인지)
- 디렉토리
- (user-readable name, low-level name)쌍의 리스트를 포함한다.
- 디렉토리 또한 low-level 이름을 가지고 있음 (`inode`)
#img("image-18")
== Interface
=== Creating
- `O_CREAT`를 같이 사용한 `open()` system call
#prompt(```c int fd = open("foo", O_CREAT|O_WRONLY|O_TRUNC, S_IRUSR|S_IWUSR);```)
- `O_CREAT`: 파일이 없으면 생성
- `O_WRONLY`: 쓰기 전용
- `O_TRUNC`: 파일이 이미 존재하면 비우기
- `S_IRUSR | S_IWUSR`: 파일 권한 (user에 대한 읽기, 쓰기 권한)
- File descriptor
- An integer
- 파일을 읽거나 쓰기 위해 file descriptor 사용(그 작업을 할 수 있는 권한이 있다고 가정)
- 파일 형식 객체를 가리키는 포인터라고 생각할 수 있음
- 각 프로세스끼리 독립적이다. (private하다)
- 각 프로세스는 file descriptors의 리스트를 유지함 (각각은 system-wide하게 열린 파일 테이블에 있는 항목을 가리킨다)
=== Accessing
==== Sequential
#prompt(```bash
prompt> echo hello > foo
prompt> cat foo
hello
prompt>
```)
#prompt(```bash
prompt> strace cat foo
...
open("foo", O_RDONLY|O_LARGEFILE) = 3
read(3, "hello\n", 4096) = 6
write(1, "hello\n", 6) = 6
hello
read(3, "", 4096) = 0
close(3) = 0
...
prompt>
```)
- `strace`는 프로그램이 실행되는 동안 만드는 모든 system call 을 추적한다. 그리고 그 결과를 화면에 보여준다.
- file descriptors 0, 1, 2는 각각 stdin, stdout, stderr를 가리킨다.
==== Random
- OS는 "현재" offset을 추적한다.
- 다음 읽기 또는 쓰기가 어디서 시작할지는 파일을 읽고 있는 혹은 쓰고 있는 것이 결정한다.
- 암묵적인 업데이트
- 해당 위치에서 $N$바이트를 읽거나 쓰면 현재 offset에 $N$만큼 추가된다.
- 명시적인 업데이트
- `off_t lseek(int fd, off_t offset, int whence);`
- `whence`
- `SEEK_SET`: 파일의 시작부터
- `SEEK_CUR`: 현재 위치부터
- `SEEK_END`: 파일의 끝부터
- 임의로 offset의 위치를 변경할 수 있다.
==== Open File Table
- 시스템에서 현재 열린 모든 파일을 보여준다.
- 테이블의 각 항목은 descriptor가 참조하는 기본 파일, 현재 offset 및 파일 권한과 같은 기타 관련 정보를 추적한다.
- 파일은 기본적으로 open 파일 테이블에 고유한 항목을 가지고 있다.
- 다른 프로세스가 동시에 동일한 파일을 읽는 경우에도 각 프로세스는 open 파일 테이블에 자체적인 항목을 갖는다.
- 파일의 논리적 읽기 또는 쓰기는 각각 독립적이다.
==== Shared File Entries
- `fork()`로 file entry 공유
#prompt(```c
int main(int argc, char *argv[]) {
int fd = open("file.txt", O_RDONLY);
assert(fd >= 0);
int rc = fork();
if (rc == 0) {
rc = lseek(fd, 10, SEEK_SET);
printf(“C: offset % d\n", rc);
}
else if (rc > 0) {
(void)wait(NULL);
printf(“P: offset % d\n", (int) lseek(fd, 0, SEEK_CUR));
}
return 0;
}
```)
#prompt(```bash
prompt> ./fork-seek
child: offset 10
parent: offset 10
prompt>
```)
#img("image-19")
- `dup()`으로 file entry 공유
- `dup()`은 프로세스가 기존 descriptor와 동일한 open file을 참조하는 새 file descriptor를 생성한다.
- 새 file descriptor에 대해 가장 작은 사용되지 않는 file descriptor를 사용해 file descriptor의 복사본을 만든다.
- output redirection에 유용함
#prompt(```c
int fd = open(“output.txt", O_APPEND|O_WRONLY);
close(1);
dup(fd); //duplicate fd to file descriptor 1
printf(“My message\n");
```)
- `dup2()`, `dup3()`
==== Writing Immediately
- `write()`
- 파일 시스템은 한동안 쓰기 작업을 하는 것을 버퍼에 집어넣고, 나중에 특정 시점에 쓰기가 디스크에 실제로 실행된다.
- `fsync()`
- 파일 시스템이 모든 dirty 데이터(아직 쓰이지 않은)를 강제로 디스크에 쓴다.
=== Removing
- `unlink()`
=== Functions
- `mkdir()`
- 디렉토리를 생성할 때, 빈 디렉토리를 생성한다.
- 기본 항목
- `.`: 현재 디렉토리
- `..`: 상위 디렉토리
- `ls -a`로 확인하면 위 2개가 나옴
- `opendir()`, `readdir()`, `closedir()`
#prompt(```c
int main(int argc, char *argv[]) {
DIR *dp = opendir(".");
struct dirent *d;
while ((d = readdir(dp)) != NULL) {
printf("%lu %s\n", (unsigned long)d->d_ino, d->d_name);
}
closedir(dp);
return 0;
}
```)
#prompt(```c
struct dirent {
char d_name[256]; // filename
ino_t d_ino; // inode number
off_t d_off; // offset to the next dirent
unsigned short d_reclen; // length of this record
unsigned char d_type; // type of file
};
```)
- `rmdir()`
- 빈 디렉토리를 삭제한다.
- 빈 디렉토리가 아니면 삭제되지 않는다.
- `ln` command, `link()` system call (Hard Links)
#prompt(```bash
prompt> echo hello > file
prompt> cat file
hello
prompt> ln file file2
prompt> cat file2
hello
prompt> ls -i file file2
67158084 file
67158084 file2
prompt>
```)
- 디렉토리에 다른 이름을 생성하고 그것이 원본 파일의 같은 `inode`를 가리키게 한다.
- `rm` command, `unlink()` system call
#prompt(```bash
prompt> rm file
removed ‘file’
prompt> cat file2
hello
```)
- user-readable name와 inode number 사이의 link를 제거한다.
- reference count를 감소시키고 0이 되면 파일이 삭제된다.
== Mechanisms for Resource Sharing
- 프로세스의 추상화
- CPU 가상화 -> private CPU
- 메모리 가상화 -> private memory
- 파일 시스템
- 디스크 가상화 -> 파일과 디렉토리
- 파일들은 일반적으로 다른 유저 및 프로세스와 공유되므로 private하지 않다.
- Permission bits
=== Permission Bits
#prompt(```bash
prompt> ls -l foo.txt
-rw-r--r-- 1 remzi wheel 0 Aug 24 16:29 foo.txt
```)
- 파일의 타입
- `-`: 일반 파일
- `d`: 디렉토리
- `l`: symbolic link
- Permission bits
- owner, group, other 순서로 읽기, 쓰기, 실행 권한을 나타낸다.
- `r`: 읽기
- `w`: 쓰기
- `x`: 실행
- 디렉토리의 경우 `x` 권한을 주면 사용자가 디렉토리 변경(`cd`)으로 특정 디렉토리로 이동할 수 있다.
=== Making a File System
- `mkfs` command
- 해당 디스크 파티션에 루트 디렉토리부터 시작하여 빈 파일 시스템을 만든다.
#prompt(```bash mkfs.ext4 /dev/sda1```)
- 균일한 파일 시스템 트리 내에서 접근 가능해야 한다.
=== Mounting a File System
- `mount` command
- 기존 디렉토리를 대상 마운트 지점으로 사용하고, 기본적으로 해당 지점의 디렉토리 트리에 새로운 파일 시스템을 연결한다.
#prompt(```bash mount -t ext4 /dev/sda1 /home/users```)
- 경로 `/home/users`는 이제 새롭게 마운트된 파일 시스템의 루트를 가리킨다.
= 23-File System Implementation
파일 시스템은 순수한 소프트웨어이다.
- CPU와 메모리 가상화와는 달리 파일 시스템이 더 좋은 성능을 내기 위한 측면으로 하드웨어 기능을 추가하지는 않는다.
== Overall Organization
- Blocks
- 디스크를 블록으로 나누어 관리한다.
- Data region
- 이 블록들을 위해 디스크의 고정된 부분을 예약한다.
#img("image-20")
- Metadata
- 각 파일에 대한 정보를 추적한다.
- 파일을 구성하는 데이터 블록 (데이터 영역 내), 파일 크기, 소유자 및 접근 권한, 접근 및 수정 시간 등
- `inode` (index node)
- 메타데이터를 저장한다.
- 디스크의 일부 공간에 `inode` 테이블을 예약한다.
#img("image-21")
- Allocation structures
- `inode` 또는 데이터 블록이 free인지 할당되어 있는지를 추적한다.
- free list, bitmap으로 구현
#img("image-22")
- Superblock
- 이 특정 파일 시스템에 대한 정보를 포함한다.
- `inode`와 데이터 블록이 파일 시스템에 얼마나 있는지, `inode` 테이블의 시작점이 어딘지와 같은 정보를 포함한다.
- 파일 시스템이 마운트될 때, OS가 superblock을 먼저 읽고 다양한 파라미터를 초기화하고 나서 파일 시스템 트리에 볼륨을 할당한다.
#img("image-23")
- Block size: 4 KB
- 256 KB partition (64-block partition)
- `inode` size: 256 B
- 16 `inode`s per block
- 총 80 `inode`s
== `inode`
- i-number
- 각 `inode`는 암묵적으로 숫자로 참조된다.
- i-number가 주어지면 디스크에서 해당 `inode`가 어디에 있는지 바로 계산할 수 있다.
#img("image-24")
- `inode` number 32를 읽으려고 한다.
- `inode` region에서의 offset: $32 times "sizeof(inode)" = 8"KB"$
- `inode` size = 256B
- 디스크에서 `inode` 테이블의 시작 주소: 12KB
- 12KB + 8KB = 20KB
- 디스크는 바이트 주소 지정이 가능하지 않지만, 주소 지정이 가능한 다수의 구간으로 구성되어 있다.
- 일반적으로 512B
- sector address: $(20 times 1024) / 512 = 40$
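- 위 계산을 일반화하면 다음과 같다 (변수 이름은 설명을 위한 것)
#prompt(```c
blk    = (inumber * sizeof(inode_t)) / blockSize;
sector = ((blk * blockSize) + inodeStartAddr) / sectorSize;
```)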
- 그렇다면 `inode`가 데이터 블록이 어디에 있는지를 참조하는 방법은?
- Multi-level index
- `inode`는 고정된 수의 direct pointer와 하나의 indirect pointer를 가지고 있다.
- indirect pointer
- 사용자 데이터가 포함된 블록을 가리키는 대신, 사용자 데이터를 가리키는 포인터가 더 많이 포함된 블록을 가리킨다.
- 파일이 충분히 커지면 indirect block이 할당된다.
- double indirect pointer
- 더 큰 파일들을 지원할 수 있다.
- indirect block을 가리키는 포인터들을 포함한 블록을 참조한다.
- 예시
- 12개의 direct pointer
- 1개의 indirect pointer
- Block size: 4KB
- 4B disk address
- $(12 + 1024) times 4"KB" = 4144"KB"$ 크기의 파일 수용 가능
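- 같은 가정에서 double indirect pointer를 하나 더 두면 $1024 times 1024$개의 블록이 추가되어 약 $(12 + 1024 + 1024^2) times 4"KB" approx 4"GB"$ 크기의 파일까지 수용 가능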
== Directory Organization
- 디렉토리
- 디렉토리는 파일의 특별한 형태이다.
- `inode`의 type 필드는 "regular file" 대신 "directory"로 마킹한다.
- 데이터 블록에 (entry, i-number)쌍들의 배열을 포함한다.
#img("image-25")
- 파일을 제거하는 것은 디렉토리의 중간에 빈 공간을 남길 수 있다.
- i-number (inum) -> 0 (reserved)
- reclen
- 새로운 항목은 오래되거나 더 큰 항목을 덮어쓸 수 있으므로 안에 extra 공간을 가진다.
#img("image-26")
== Free Space Management
파일을 생성할 때를 예로 들어보자.
- 파일 시스템은 비트맵을 탐색하며 `inode`가 free인 것을 찾고 그것을 파일에 할당한다. (1을 적으면 사용 중)
- 데이터 블록에서와 유사함
- 몇몇 파일 시스템은 새로운 파일이 생성되어 데이터 블록을 필요로 할 때, 순차적으로 블록을 탐색하여 free한지를 본다.
== System Calls (FILE)
- `open("/foo/bar", O_RDONLY)`
- `/foo/bar` 파일을 찾기 위해 `inode`를 먼저 찾는다.
- root 디렉토리의 `inode`를 읽는다.
- `/`의 i-number는 일반적으로 2이다.
- 0: `inode`가 없음
- 1: `inode`가 올바르지 않은 블록에 있음
- 하나 또는 그 이상의 디렉토리 데이터 블록을 읽으므로서 `foo` 항목을 찾을 수 있다. (`foo`의 i-number도)
- `foo`의 `inode`를 포함한 블록을 읽고나서 그것의 디렉토리 데이터인 `bar`의 `inode` number를 찾는다.
- `bar`의 `inode`를 메모리로 읽어온다.
- 권한을 확인한다.
- per-process open-file 테이블에 있는 이 프로세스에 file descriptor를 할당한다.
- 유저에게 이것을 반환한다.
- `read()`
- 파일의 첫 번째 블록을 읽고, 해당 블록의 위치를 찾기 위해 `inode`를 참조한다.
- 마지막 접근 시간으로 `inode`가 새롭게 업데이트 될 수 있다.
- file offset을 업데이트 한다.
#img("image-27")
- `close()`
- file descriptor 할당을 해제한다.
- disk I/O가 발생하지 않는다.
#img("image-28")
- `write()`
- 새 파일을 쓸 때, 각 쓰기 작업은 디스크에 데이터를 써야 할 뿐만 아니라 먼저 파일에 할당할 블록을 결정하고 이에 따라 디스크의 다른 구조(data bitmap과 `inode`)를 업데이트 해야 한다.
- 각 쓰기 작업은 논리적으로 5개의 I/O를 발생시킨다.
- 데이터 비트맵을 읽는데 1개
- 비트맵을 쓰는데 1개
- `inode`를 읽고 쓰는데 2개 이상
- 실제 블록에 쓰는데 1개
#img("image-29")
== Caching and Buffering
- 디스크에 많은 I/O가 있으면 파일 입출력 비용이 클 수 있다.
- 파일을 열 때마다 디렉토리 계층 구조의 모든 level에 대해 최소 2번의 읽기가 필요하다.
- 하나는 쿼리를 한 디렉토리의 `inode`를 읽는 것에, 하나는 그것의 데이터를 최소 하나라도 읽어놓는 것에 필요하다.
- Page cache
- 처음 open에는 디렉토리에 `inode`와 데이터를 읽는데 많은 I/O 트래픽을 생성할 수 있다.
- 동일한 파일(또는 동일한 디렉토리에 있는 파일)의 후속 파일을 열 때에는 대부분 캐시에 hit된다.
- Write buffering
- 쓰기 작업에 딜레이를 줘서 파일 시스템은 작은 집합의 I/O들에 대해 일괄적으로 업데이트 할 수 있다.
- 동일한 I/O들을 스케줄링 할 수 있다.
- 일부 쓰기 작업은 딜레이를 통해 완전히 방지될 수 있다.
= 24-FSCK and Journaling
== How to Update the Disk despite Crashes
충돌이 일어나도 디스크를 업데이트 하는 방법
- 두 번의 쓰기 작업 사이에 시스템이 충돌하거나 전원이 꺼질 수 있다.
- 충돌은 임의의 시점에 발생할 수 있다.
- On-disk 상태는 일부분만 업데이트 되어 있을 수 있다.
- 충돌 후 시스템이 부팅되고, 파일 시스템을 다시 마운트하려고 한다.
- 파일 시스템이 On-disk 이미지를 적절한 상태로 유지하는 것을 어떻게 보장할까?
- File System Checker (FSCK)
- Journaling
- 예시
#img("image-30")
- 기존 파일에 하나의 데이터 블록을 추가하려고 한다.
- 파일을 연다
- `lseek()`으로 file offset을 파일의 끝으로 옮긴다.
- 파일을 닫기 전에 파일에 4KB 쓰기 작업 하나를 요청한다.
#img("image-31")
- Data bitmap, `inode`, data block을 쓴다.
- *충돌 시나리오 (오직 한 번의 쓰기만 성공한 경우)*
- 데이터 블록(Db)만 쓰인 경우
- 파일 시스템 충돌 일관성의 관점에서 문제가 없다.
- `inode`(l[v2])만 쓰인 경우
- 디스크로부터 쓰레기 값을 읽어온다.
- File-system inconsistency
- bitmap(B[v2])만 쓰인 경우
- 공간 누수를 발생시킬 수 있다.
- File-system inconsistency
- *충돌 시나리오 (2번의 쓰기가 성공, 하나는 실패한 경우)*
- `inode`와 bitmap이 쓰인 경우
- 파일 시스템의 메타데이터 관점에서는 괜찮아 보이지만, 데이터 블록은 쓰레기 값이다.
- `inode`와 데이터 블록이 쓰인 경우
- File-system inconsistency
- bitmap과 데이터 블록이 쓰인 경우
- File-system inconsistency
== File System Checker (FSCK)
- fsck
- 파일 시스템 불일치를 찾아 복구하기 위한 UNIX 도구
- 불일치가 발생하도록 두고 나중에(재부팅 시) 수정한다.
- 모든 문제를 고칠 순 없다.
- 파일 시스템 메타데이터가 내부적으로 일관성을 유지하는지 확인하는 것이 목표이다.
- Superblock
- 먼저 superblock이 reasonable하게 보이는지 확인한다.
- Free blocks
- 다음으로 `inode`, indirect blocks, double indirect blocks 등을 스캔하여 파일 시스템 내에 현재 할당된 블록을 읽는다.
- 비트맵과 `inode` 사이에 불일치가 있는 경우 `inode`내의 정보를 신뢰하여 문제를 해결한다.
- `inode` state
- 할당된 각 `inode`에 유효한 유형의 필드가 있는지 확인한다.
- 예: 일반 파일, 디렉토리, symbolic link 등
- `inode`필드에 쉽게 해결되지 않는 문제가 있는 경우 해당 `inode`는 의심스러운 것으로 간주되어 삭제된다.
- 이에 따라 `inode` 비트맵이 업데이트 된다.
- `inode` link
- 각 `inode`에 대한 참조 수를 계산한다.
- Duplicates
- duplicate pointer를 확인하다. 즉, 두 개의 서로 다른 `inode`가 동일한 블록을 참조하는 경우이다.
- 올바르지 않은 `inode`를 제거하고 블록을 복사한다.
- Bad block pointers
- 포인터가 유효 범위 밖의 무언가를 가리키는 것이 확실하면 포인터는 "올바르지 않은 것"으로 간주된다.
- 포인터를 지운다.
- Directory checks
- 각 디렉토리의 내용에 대해 무결성 검사를 수행한다.
- `/.`과 `/..`이 첫 번째 항목이다.
- 디렉토리 항목에서 참조되는 각 `inode`가 할당된다.
- 단점
- 너무 느리다.
- 매우 큰 용량의 디스크에서 모든 디스크를 읽는 것은 몇 분에서 몇 시간까지 걸린다.
- 작동하지만 낭비가 많다.
- 몇 개의 블록을 업데이트하는 동안 발생한 문제를 해결하기 위해 전체 디스크를 읽는 비용이 크다.
== Journaling (Write-ahead Logging)
- Journaling
- 디스크를 업데이트 할 때, 구조를 제자리에 덮어쓰기 전에 먼저 수행할 작업을 설명하는 작은 메모(디스크의 다른위치에)를 적어 둔다.
- Checkpointing
- pending 메타데이터 및 데이터 업데이트를 파일 시스템의 최종 위치에 쓴다.
- Ext3
- On-disk 구조
- 디스크는 블록 그룹으로 나누어진다.
- 각 블록 그룹은 `inode` 비트맵, 데이터 비트맵, `inode`와 데이터 블록을 포함한다.
- Journal
#img("image-32")
=== Data Journaling
#img("image-33")
- journal 쓰기
- 한 가지 간단한 방법은 한 번에 하나씩 요청을 하고 각각이 완료될 때까지 기다린 후 다음을 요청하는 것이다.
- 느리다.
- 한 번에 5개의 블록 쓰기를 모두 실행한다.
- 디스크 스케줄링 -> 재정렬 필요
- journal을 쓰는 동안의 충돌
#img("image-34")
- 2-step으로 트랜잭션 쓰기를 요청한다.
- Step 1: TxE 블록을 제외한 모든 블록을 쓴다.
- Step 2: Step 1이 완료되면, TxE 블록의 쓰기를 요청한다.
#img("image-35")
- TxE 쓰기가 atomic하게 이루어지도록 하려면 이를 단일 512-byte 구간으로 만들어야 한다.
- 디스크는 512-byte 쓰기 작업 발생 여부를 보장한다.
=== Recovery
- 만약 충돌이 트랜잭션이 로그에 안전하게 쓰이기 전에 일어나면?
- pending 업데이트는 무시된다.
- 만약 충돌이 트랜잭션이 log에 커밋된 후 체크포인트가 완성되기 전에 일어나면?
- log를 읽어서 디스크에 커밋된 트랜잭션을 찾는다.
- 트랜잭션이 (순서대로) 재생되어 트랜잭션의 블록을 디스크의 최종 위치에 쓰려고 시도한다.
- 일부 업데이트는 복구 중에 다시 중복 수행된다.
=== Batching Log Updates
- 문제점: 추가적인 디스크 트래픽을 많이 발생시킬 수 있다.
- 예: 같은 디렉토리에 두 개의 파일을 생성할 때
- 만약 이 파일들의 `inode`들이 같은 `inode` 블록에 있으면 같은 블록을 계속해서 쓰게 된다.
- 해결법: 버퍼링 업데이트
- 파일 시스템은 일정 시간 동안 메모리에 업데이트를 버퍼링한다. (고의적인 딜레이를 주는 것)
- 디스크에 대한 과도한 쓰기 트래픽을 방지할 수 있다.
=== Making the Log Finite
- 문제점: 로그가 가득 찼을 때
- 로그를 더 키우면, 복구에 더 많은 시간이 걸린다.
- 더 이상 트랜잭션이 커밋될 수 없다.
- 해결법: Circular log
- 트랜잭션이 체크포인트가 되면, 파일 시스템은 공간을 free시킬 수 있다.
#img("image-36")
=== Ordered Journaling (=Metadata Journaling)
- 문제점: Data journaling
- 디스크에 각 쓰기 작업에서 journal에 먼저 쓰게 되므로 쓰기 트래픽이 2배가 된다.
- 해결법: Metadata journaling
- 유저 데이터는 journal에 쓰지 않는다.
#img("image-37")
- 데이터 블록을 언제 디스크에 쓸까?
- 일부 파일 시스템(예: Linux ext3)은 관련 메타데이터가 디스크에 쓰이기 전에 먼저 데이터 블록을 디스크에 기록한다.
- Basic Protocol
+ Data write
+ Journal metadata write
+ Journal commit
+ Checkpoint metadata
+ Free
= 25-Log-Structured File Systems
- 시스템 메모리가 늘어나고 있다.
- 더 많은 데이터가 캐싱 됨에 따라 디스크 트래픽은 점점 더 쓰기 작업으로 구성되고 읽기 작업은 캐시에 의해 처리된다.
- 랜덤 I/O와 순차 I/O는 큰 성능적 차이가 있다.
- 그러나 탐색 및 회전 지연 비용은 천천히 감소해왔다.
- 기존 파일 시스템은 여러 일반적인 워크로드에서 제대로 작동하지 않는다.
- 예를 들어, 파일 시스템은 한 블록 크기의 새 파일을 생성하기 위해 많은 수의 쓰기 작업을 수행한다.
- *LFS*
- 디스크에 쓸 때, LFS는 먼저 모든 업데이트(메타데이터 포함)를 in-memory 세그먼트에 버퍼링한다.
- 세그먼트가 가득 차면, 디스크의 사용되지 않은 부분에 하나의 긴 순차적 전송으로 디스크에 쓴다.
- 기존 데이터를 덮어쓰지 않고 항상 빈 위치에 세그먼트를 쓴다.
- 최근 연구에 따르면 플래시 기반 SSD의 고성능을 위해서는 대규모 I/O가 필요하다.
- LFS 스타일의 파일 시스템은 다음 수업에서 플래시 기반 SSD에도 탁월한 선택이 될 수 있다는 것을 보일 것이다.
== Writing to Disk Sequentially
- 파일 시스템 상태에 대한 모든 업데이트를 디스크에 순차적인 쓰기 작업의 일환으로 어떻게 변환할까?
#img("image-38")
- 유저가 데이터 블록을 쓸 때, 디스크에 기록되는 것은 데이터 뿐만이 아니고 업데이트 해야 하는 다른 메타데이터도 있다.
#img("image-39")
== Writing Sequentially and Effectively
- 단순히 순차적으로 디스크에 쓰는 것만으로는 최고 성능을 달성하기에 충분하지 않다.
- 시간 $T$: 단일 블록을 주소 A에 쓰는 것
- 시간 $T + delta$: 디스크 주소 A + 1에 쓰는 것
- 디스크는 두 번째 쓰기를 디스크 표면에 커밋할 수 있기 전까지 $T_"rotation" - delta$만큼 기다릴 것이다.
- 오히려, 드라이브에 다수에 연속 쓰기(또는 하나의 대규모 쓰기)를 실행해야 한다.
- 쓰기 버퍼링
- 디스크에 쓰기 전에, LFS는 메모리에 업데이트의 track을 유지한다.
- 충분한 수의 업데이트를 받으면 디스크에 한 번에 쓴다.
- 세그먼트
- LFS가 한 번에 쓰는 큰 청크 (예: 몇 MB)
#img("image-40")
- 얼마나 많은 업데이트를 LFS가 디스크에 쓰기 전에 버퍼링해야 할까?
- 디스크 자체에 의존한다. (뒤에 내용을 보자)
== `inode` Map (imap)
- 단순한 파일 시스템
#img("image-41")
- LFS
- `inode`는 디스크 전체에 흩어져 있다.
- 절대 덮어쓰지 않으므로 최신 버전의 `inode`는 계속해서 이동한다. (어떻게 찾지..?)
- imap
- `inode` 번호를 입력으로 사용하고 `inode`의 최신 버전의 디스크 주소를 생성하는 구조이다.
- `inode`가 디스크에 쓰일 때마다 imap은 새로운 위치로 업데이트 된다.
- persistent를 유지해야 한다.
- imap은 디스크의 어디에 위치해야 할까?
- 디스크의 고정된 위치 -> 성능이 저하됨 (다시 이거 찾으려고 회전해야 함)
- 다른 모든 새로운 정보가 기록되는 바로 옆에 위치
#img("image-42")
- 근데 imap은 그러면 또 어떻게 찾을까?
- Checkpoint Region (CR)
- `inode` 맵의 최신 버전에 대한 포인터를 포함하므로 CR을 읽어 `inode` 맵 위치를 찾을 수 있다.
#img("image-43")
- 주기적으로 업데이트 됨 (일반적으로 매 30초 정도에 1번)
- 이렇게 안 하면 저렇게 뒤에 순차적으로 붙이는 의미가 없음..
- 파일을 디스크로부터 읽어올 때
- checkpoint region을 먼저 읽는다.
- 그리고 나서 `inode` 맵 전체를 읽고 메모리에 캐싱한다.
- 파일로부터 block을 읽으려면 여기서 LFS는 일반적인 파일 시스템과 동일하게 진행된다.
- 전체 imap이 캐싱되므로 LFS가 읽는 동안 수행하는 추가 작업은 imap에서 `inode`의 주소를 찾는 것이다.
== What About Directories?
- 파일 `dir/f`를 생성할 때
- imap에는 디렉토리 파일 `dir`과 새로 생성된 파일 `f`의 위치에 대한 정보가 포함되어 있다.
#img("image-44")
- 파일 `f`를 접근할 때
- 먼저 `inode` 맵(보통 메모리에 캐싱됨)을 보고 디렉토리 `dir`(A3)의 `inode` 위치를 찾는다.
- 그런 다음 디렉토리 데이터의 위치를 제공하는 디렉토리 `inode`를 읽는다. (A2)
- 이 데이터 블록을 읽으면 (f, k)의 (파일 이름, `inode` 번호)쌍을 알 수 있다.
- 그런 다음 `inode` 맵을 다시 참조하여 `inode` 번호 k(A1)의 위치를 찾는다.
- 마지막으로 주소 A0에서 원하는 데이터 블록을 읽는다.
== Garbage Collection
- 파일 데이터, `inode` 및 기타 구조의 오래된 버전을 주기적으로 찾아서 정리한다.
- 이어지는 쓰기에 사용할 수 있도록 디스크의 블록을 다시 사용 가능하게 만든다.
- 이어지는 쓰기를 위해 큰 청크를 정리한다.
- LFS 클리너는 주기적으로 M개의 오래된(부분적으로 사용된) 세그먼트를 읽는다.
- 해당 세그먼트 내에서 어떤 블록이 활성화되어 있는지 결정한다.
- 해당 내용을 N개의 새 세그먼트(N < M)로 압축한 다음, N개의 세그먼트를 디스크의 새 위치에 쓴다.
- 이전 M개의 세그먼트는 해제되어 이어지는 쓰기를 위해 파일 시스템에서 사용할 수 있다.
- 블록의 생명 결정
- LFS는 세그먼트 내의 어떤 블록이 활성 상태이고 어떤 블록이 죽은 상태인지 어떻게 알 수 있을까?
- Segment summary block
- 디스크 주소 A에 있는 각 데이터 블록 D에 대해 해당 `inode` 번호 N과 오프셋 T를 포함한다.
#img("image-45")
- 어떤 블록을 지워야하고 언제 지워야 할까?
- cleaner가 얼마나 자주 실행되어야 하고, 어떤 세그먼트를 골라서 지워야 할까?
- 언제 지울지 결정
- 주기적으로, 유휴 시간 동안, 디스크가 가득 차면
- 어떤 블록을 지울지 결정
- Hot Segments
- 내용이 자주 덮어쓰이는 것
- Cold Segments
- 죽은 블록이 몇 개 있을 수 있지만 나머지 내용은 비교적 안정적인 것
- cold segment 먼저, hot segment 나중에
== Crash Recovery
- LFS가 디스크에 쓰는 동안 시스템 충돌이 일어나면 journaling을 다시 호출한다.
- CR에 쓰는 동안 충돌이 일어나면
- LFS는 두 개의 CR을 유지하고 교대로 쓴다.
+ 먼저 헤더(타임스탬프 포함)를 작성한다.
+ 그런 다음 CR의 body를 작성한다.
+ 마지막으로 마지막 블록 하나(타임스탬프 포함)를 작성한다.
- 항상 일관된 타임스탬프가 있는 가장 최근 CR을 사용하도록 선택한다.
- 세그먼트에 쓰는 동안 충돌이 일어나면
- LFS는 약 30초마다 CR을 작성하므로 파일 시스템의 마지막 일관된 스냅샷은 꽤 오래되었을 수 있다.
- Roll Forward
- 로그(CR에 포함된)의 끝을 찾은 후 이를 사용하여 다음 세그먼트를 읽고 그 안에 유효한 업데이트가 있는지 확인한다.
- 있는 경우 LFS는 그에 따라 파일 시스템을 업데이트하여 마지막 체크포인트 이후에 기록된 많은 데이터와 메타데이터를 복구한다.
= 26-Flash-based SSDs
- Solid-State Storage (SSD)
- 기계적으로 움직이는 부분들이 없음 (arm이 돌아가는 것)
- 메모리 같은 형태로 동작하지만, 파워가 없어진 후에도 정보가 남아 있음
- Flash (NAND-based flash)
- 플래시 칩은 단일 트랜지스터에 하나 이상의 비트를 저장하도록 설계되었다.
- 트랜지스터 내에 비트 수준은 이진 값으로 매핑된다.
- single-level cell(SLC) 플래시에서는 트랜지스터 내에 단일 비트(예: 1 또는 0)만 저장된다.
- multi-level cell(MLC) 플래시를 사용하면 서로 다른 비트가 다른 level(예: 00, 01, 10, 11)로 인코딩된다.
- 셀당 3비트를 인코딩하는 triple-level cell(TLC) 플래시도 있다.
== Basic Flash Operations
- 단순한 예제
#img("image-46")
- 디테일한 예제
- 8-bit 페이지, 4-page 블록
#img("image-47")
- 블록에 있는 어떠한 페이지를 덮어쓰기 전에, 먼저 중요한 데이터는 다른 블록에 복사해놓아야 한다. (덮어쓰려면 블록 단위로 erase를 진행해야 하기 때문)
- 기본 플래시 칩을 일반적인 저장 장치처럼 보이는 것으로 바꾸는 방법은 무엇일까?
- SSD
- 몇 개의 플래시 칩이 있다.
- 일정량의 휘발성 메모리(예: SRAM)가 있다.
- 캐싱 및 버퍼링에 유용하다.
- control logic
- Flash Translation Layer(FTL): 논리 블록에 대한 읽기 및 쓰기 요청을 받아 이를 low-level의 읽기, 삭제 및 program(쓰기) 명령으로 변환한다.
#img("image-48")
- Direct mapped (가장 간단하나 성능은 좋지 않은 방식 - bad approach)
- 논리적 페이지 N개 읽기는 물리적 페이지 N개 읽기에 직접 매핑된다.
- 논리적 페이지 N개에 쓰기는 더 복잡하다.
- 먼저 페이지 N개가 포함된 전체 블록을 먼저 읽는다.
- 그런 다음 블록을 지운다.
- 마지막으로 이전 페이지와 새 페이지를 program(쓰기)한다.
- 성능 문제
- 쓰기 증폭: FTL이 플래시 칩에 날린 총 쓰기 트래픽(byte)을 클라이언트가 날린 총 쓰기 트래픽(byte)으로 나눈 것 (아래의 예 참고)
- 신뢰성 문제
- 마모: 단일 블록을 너무 자주 지우고 program(쓰기)하면 더 이상 사용이 불가능하다. (소자의 특성 상 블록을 지우는 횟수에 제한이 있다.)
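- 예 (쓰기 증폭): 위의 16KB 블록 / 4KB 페이지 구성에서 페이지 하나(4KB)를 덮어쓰려면 블록 전체를 읽고, 지우고, 16KB를 다시 program해야 하므로 쓰기 증폭은 $16"KB" \/ 4"KB" = 4$가 된다.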
== A Log-Structured FTL
- 오늘날 대부분의 FTL들은 log structured를 사용하고 있다.
- 논리적 블록 N개를 쓰려고 하면, 장치는 현재 쓰고 있는 블록에서 다음 free 공간에 쓴다.
- 예시
- 가정
- 클라이언트는 4-KB 사이즈에 대해 읽기 또는 쓰기를 요청한다.
- SSD는 4개의 4-KB 페이지로 구성된 16-KB 크기의 블록이 있다.
- 100번 논리 주소에 a1을 쓰려고 한다.
#img("image-49")
- 101번 논리 주소에 a2를 쓰려고 한다.
- 2000번 논리 주소에 b1을 쓰려고 한다.
- 2001번 논리 주소에 b2를 쓰려고 한다.
#img("image-50")
- 장점
- 로그 기반 접근법은 특성상 성능을 향상시킨다.(삭제를 덜 함)
- 모든 페이지에 쓰기를 분산시켜(wear leveling) 장치의 수명을 늘린다.
- 단점
- Garbage collection
- 논리 블록의 덮어쓰기는 garbage를 만든다.
- in-memory mapping table의 높은 비용
- 매핑 테이블(Out-Of-Band(OOB) area)의 지속성 처리 (전원이 꺼졌을 때 날라가지 않는 곳에 저장해놓아야 함)
- 장치가 클수록 해당 테이블에 더 많은 메모리가 필요하다.
- 예시 (Garbage Collection)
- 블록 100, 101에 다시 쓰기를 하려고 한다.
#img("image-51")
- 하나 이상의 garbage page를 포함하는 블록 찾기
- 해당 블록의 live(non-garbage) 페이지를 읽는다.
- 해당 live 페이지를 로그에 기록하고 마지막으로 쓰기에 사용하기 위해 전체 블록을 회수한다.
#img("image-52")
- 매핑 테이블의 크기
- 1TB SSD에서 page가 4KB이고 각 엔트리가 4B인 경우
- 매핑을 위해 1GB 메모리가 필요하다. (1TB / 4KB x 4B = 1GB)
- 페이지 단위 FTL 체계는 실용적이지 않다.
- Block-based 매핑
- 페이지 단위가 아닌 장치의 블록 단위로만 포인터를 유지한다.
- 논리 블록 주소: #text("chunk number", fill: blue) + #text("offset", fill: red) (4개의 페이지로 구성되어 있으므로 offset bit는 2개)
- 논리 블록 2000: #text("0111 1101 00", fill: blue)#text("00", fill: red)
- 논리 블록 2001: #text("0111 1101 00", fill: blue)#text("01", fill: red)
- 가장 큰 문제는 작은 쓰기가 발생할 때 생긴다.
- 기존 블록에서 대량의 실시간 데이터를 읽어서 새 블록에 복사해야 함
- 예시 (Block-based mapping)
- 기존 방식: 2000 -> 4, 2001 -> 5, 2002 -> 6, 2003 -> 7에 매핑되어 있음
#img("image-53")
- 블록 전체를 옮겨야 할 수 있는 단점이 있음
- 하이브리드 매핑
- 로그 테이블: 작은 단위의 페이지별 매핑
- 데이터 테이블: 더 큰 블록별 매핑
- 특정 논리 블록을 찾을 때, 먼저 로그 테이블을 참조한 다음 데이터 테이블을 참조
- 로그 블록의 수를 작게 유지하기 위해 FTL은 주기적으로 로그 블록을 검사하여 블록으로 전환해야 한다.
- Wear Leveling
- 여러 번의 삭제 / program 주기로 인해 플래시 블록이 마모된다. FTL은 해당 작업을 장치의 모든 블록에 균등하게 분산시키기 위해 최선을 다해야 한다.
- 모든 블록은 거의 동시에 마모된다.
- 기본 로그 구조 접근법은 쓰기 부하를 분산시키는 초기 작업을 잘 수행하며 garbage collection에도 도움이 된다.
- 덮어쓰지 않는 수명이 긴 데이터로 블록이 채워지면 garbage collection은 블록을 회수하지 않는다.
- 쓰기 부하를 공평하게 분배 받지 못한다.
- 이 문제를 해결하려면 FTL은 주기적으로 해당 블록에서 모든 live 데이터를 읽고 다른 곳에 써야 한다. |
|
https://github.com/Gekkio/gb-ctr | https://raw.githubusercontent.com/Gekkio/gb-ctr/main/chapter/peripherals/p1.typ | typst | Creative Commons Attribution Share Alike 4.0 International | #import "../../common.typ": *
== Port P1 (Joypad, Super Game Boy communication)
#reg-figure(
caption: [#hex("FF00") - P1 - Joypad/Super Game Boy communication register]
)[
#reg-table(
[U], [U], [W-0], [W-0], [R-x], [R-x], [R-x], [R-x],
unimpl-bit(), unimpl-bit(), [P15], [P14], [P13], [P12], [P11], [P10],
[bit 7], [6], [5], [4], [3], [2], [1], [bit 0]
)
#set align(left)
#grid(
columns: (auto, 1fr),
gutter: 1em,
[*bit 7-6*], [*Unimplemented*: Ignored during writes, reads are undefined],
[*bit 5*], [*P15*],
[*bit 4*], [*P14*],
[*bit 3*], [*P13*],
[*bit 2*], [*P12*],
[*bit 1*], [*P11*],
[*bit 0*], [*P10*],
)
]
|
https://github.com/dismint/docmint | https://raw.githubusercontent.com/dismint/docmint/main/networks/lec6-7.typ | typst | #import "template.typ": *
#show: template.with(
title: "Lecture 6-7",
subtitle: "14.15"
)
= Random Graphs
A reason to study random graphs is that once we go beyond a handful of nodes, it is often neither meaningful nor feasible to analyze the exact structure of a network. Taking the graph as a whole, however, random graph models make it possible to understand large graphs and the trends within them.
= Erdos-Renyi (ER) Random Graphs
#define(
title: "Erdos-Renyi"
)[
The Erdos-Renyi model studies a graph with $n$ nodes, where each possible undirected edge forms *independently* with probability $p$
]
#define(
title: "Bernoulli Random Variable"
)[
A random variable that either takes on the value $1$ with probability $p$, or $0$ with probability $1-p$
]
The expected number of links is equal to:
$ EE[sum_(i < j) I_(i j)] = [(n (n - 1)) / 2]p $
Where $I$ is the Bernoulli random variable. As the graph gets larger and larger, the number of edges, although random, becomes very tightly centered around its mean (the expected value).
The mean degree of an ER graph is $(n-1) p approx n p = lambda$. The graph density is then $lambda / (n-1) = p$
Usually, we will fix $lambda$ in ER graphs. We can think of this as people have a similar number of Facebook friends, regardless of whether they live in a small or large country.
#define(
title: "Poisson Limit Theorem"
)[
As $n -> infinity$, the distribution of degrees, $D$, converges to a Poisson random variable:
$ PP(D = d) = (e^(-lambda) lambda^d) / (d !) $
This degree distribution falls off *faster* than exponential.
]
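As a quick sanity check, the following small C simulation (sizes and seed chosen arbitrarily) samples $G(n, p)$ with $p = lambda slash n$ and reports the empirical mean degree, which concentrates near $lambda$ for large $n$:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n = 10000;              /* number of nodes (arbitrary) */
    double lambda = 3.0;        /* target mean degree */
    double p = lambda / n;      /* edge probability */
    long edges = 0;
    srand(42);
    /* flip one independent coin per unordered pair of nodes */
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if ((double)rand() / RAND_MAX < p)
                edges++;
    /* each edge contributes to the degrees of two nodes */
    printf("empirical mean degree = %.3f (lambda = %.3f)\n",
           2.0 * (double)edges / n, lambda);
    return 0;
}
```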
Interestingly, consider the expected degree of one of our friends. Conditioned on being our friend, it is $1 + (n - 2) p$: the link back to us plus each of the other $n - 2$ nodes with probability $p$. As $n$ grows this converges to $1 + lambda$, slightly more than our own expected degree - another instance of the friendship paradox.
The branching approximation previously mentioned is very good in this type of scenario, since the graph ends up being rather sparse. Under this assumption of a tree-like structure, the average path length and diameter of the graph end up being on the order of $"log"(n) slash "log"(lambda)$
#define(
title: "Threshold Function"
)[
If both of the following are true, then we have a threshold function:
$ PP(A) -> 0 "if" lim_(n -> infinity) p(n) / t(n) = 0 $
$ PP(A) -> 1 "if" lim_(n -> infinity) p(n) / t(n) = infinity $
Where:
- $p(n)$ is a function for $p$
- $t(n)$ is the threshold function.
- $A$ is the property that we desire to track.
The point of the threshold is called a *phase transition*.
]
#example(
title: "Phase Transition: Edges"
)[
$ A = {"number of edges" > 0} $
The claim is that the function $t(n) = 1 / n^2$ is a threshold function for this property of having at least one edge.
Thus we need to show that:
+ If $n^2 p(n) -> 0$ then $PP(A) -> 0$
+ If $n^2 p(n) -> infinity$ then $PP(A) -> 1$
We can notice that the expected number of edges is roughly $(n^2 slash 2) p(n)$. The first limit then follows from Markov's inequality, since $PP(A) <= EE["number of edges"]$, and the second from a second-moment (Chebyshev) argument showing that the edge count concentrates around its growing mean.
]
It's actually more likely that we see a tree of arbitrary (fixed) size than a cycle. This is because closing a cycle requires stepping back to an already visited node, while growing a tree only ever requires reaching new nodes, a much weaker restriction. This leads to the intuition that the threshold for a cycle emerging is the same as that for seeing a *giant component*.
#define(
title: "Giant Component"
)[
A component with a positive fraction of all the nodes in the network.
]
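For ER graphs this common threshold is $t(n) = 1 / n$: with $p(n) = lambda slash n$, a giant component (and cycles) emerge with probability approaching one exactly when the mean degree $lambda$ exceeds $1$.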
|
|
https://github.com/miliog/typst-penreport | https://raw.githubusercontent.com/miliog/typst-penreport/master/typst-penreport/content/title_page.typ | typst | MIT No Attribution | #let titlePage(title, subtitle, date) = {
set align(horizon + center)
v(-50%)
line(length: 100%)
text(30pt, title + "\n")
v(-23pt)
text(16pt, subtitle + "\n")
line(length: 100%, stroke: 2pt)
date.display("[month repr:long] [day], [year]")
set align(bottom + right)
pagebreak()
} |
https://github.com/Quaternijkon/notebook | https://raw.githubusercontent.com/Quaternijkon/notebook/main/content/数据结构与算法/.chapter-算法/滑动窗口与双指针/移动零.typ | typst | #import "../../../../lib.typ":*
=== #Title(
title: [移动零],
reflink: "https://leetcode.cn/problems/move-zeroes/description/",
level: 1,
)<移动零>
#note(
title: [
移动零
],
description: [
给定一个数组 nums,编写一个函数将所有 0 移动到数组的末尾,同时保持非零元素的相对顺序。
请注意 ,必须在不复制数组的情况下原地对数组进行操作。
],
examples: ([
输入: nums = [0,1,0,3,12]
输出: [1,3,12,0,0]
],[
输入: nums = [0]
输出: [0]
]
),
tips: [
$1 <= "nums.length" <= 10^4$
$-2^31 <= "nums"[i] <= 2^31 - 1$
],
solutions: (
( name:[双指针],
text:[
我们使用两个指针 `left` 和 `right` 来遍历数组:
- `right` 指针用于遍历数组的每个元素。
- `left` 指针用于记录非零元素的位置。
*算法步骤:*
1. 初始化两个指针 `left` 和 `right`,都指向数组的起始位置。
2. 遍历数组,直到 `right` 指针到达数组的末尾:
- 如果 `nums[right]` 不是零,就将 `nums[left]` 和 `nums[right]` 交换,并将 `left` 指针右移一位。
- 不管 `nums[right]` 是否为零,`right` 指针都右移一位。
3. 继续这个过程,直到 `right` 指针遍历完数组。
],code:[
```cpp
class Solution {
public:
void moveZeroes(vector<int>& nums) {
int n = nums.size();
int left = 0, right = 0;
while (right < n) {
if (nums[right] != 0) {
swap(nums[left], nums[right]);
left++;
}
right++;
}
}
};
```
]),
),
gain:none,
)
|
|
https://github.com/MatheSchool/typst-g-exam | https://raw.githubusercontent.com/MatheSchool/typst-g-exam/develop/test/questions/test-002-subquestion.typ | typst | MIT License | #import "../../src/lib.typ": *
#show: g-exam.with()
#g-question(points: 1)[Question 1]
#g-subquestion(points: 1)[Subquestion 1]
#g-subquestion(points: 1.5)[Subquestion 2] |
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/cetz-plot/0.1.0/src/plot/bar.typ | typst | Apache License 2.0 | #import "/src/cetz.typ": draw, util
#import "errorbar.typ": draw-errorbar
#let _transform-row(row, x-key, y-key, error-key) = {
let x = row.at(x-key)
let y = if y-key == auto {
row.slice(1)
} else if type(y-key) == array {
y-key.map(k => row.at(k, default: 0))
} else {
row.at(y-key, default: 0)
}
let err = if error-key == none {
0
} else if type(error-key) == array {
error-key.map(k => row.at(k, default: 0))
} else {
row.at(error-key, default: 0)
}
if type(y) != array { y = (y,) }
if type(err) != array { err = (err,) }
(x, y.flatten(), err.flatten())
}
// Get a single items min and maximum y-value
#let _minmax-value(row) = {
let min = none
let max = none
let y = row.at(1)
let e = row.at(2)
for i in range(0, y.len()) {
let i-min = y.at(i) - e.at(i, default: 0)
if min == none { min = i-min }
else { min = calc.min(min, i-min) }
let i-max = y.at(i) + e.at(i, default: 0)
if max == none { max = i-max }
else { max = calc.max(max, i-max) }
}
return (min: min, max: max)
}
// Functions for max value calculation
#let _max-value-fn = (
basic: (data, min: 0) => {
calc.max(min, ..data.map(t => _minmax-value(t).max))
},
clustered: (data, min: 0) => {
calc.max(min, ..data.map(t => _minmax-value(t).max))
},
stacked: (data, min: 0) => {
calc.max(min, ..data.map(t => t.at(1).sum()))
},
stacked100: (.., min: 0) => {min + 100}
)
// Functions for min value calculation
#let _min-value-fn = (
basic: (data, min: 0) => {
calc.min(min, ..data.map(t => _minmax-value(t).min))
},
clustered: (data, min: 0) => {
calc.min(min, ..data.map(t => _minmax-value(t).min))
},
stacked: (data, min: 0) => {
calc.min(min, ..data.map(t => t.at(1).sum()))
},
stacked100: (.., min: 0) => {min}
)
#let _prepare(self, ctx) = {
return self
}
#let _get-x-offset(position, width) = {
if position == "start" { 0 }
else if position == "end" { width }
else { width / 2 }
}
#let _draw-rects(filling, self, ctx, ..args) = {
let x-axis = ctx.x
let y-axis = ctx.y
let bars = ()
let errors = ()
let w = self.bar-width
for d in self.data {
let (x, n, len, y-min, y-max, err) = d
let w = self.bar-width
let gap = self.cluster-gap * if w > 0 { -1 } else { +1 }
w += gap * (len - 1)
let x-offset = _get-x-offset(self.bar-position, self.bar-width)
x-offset += gap * n
let left = x - x-offset
let right = left + w
let width = (right - left) / len
if self.mode in ("basic", "clustered") {
left = left + width * n
right = left + width
}
if (left <= x-axis.max and right >= x-axis.min and
y-min <= y-axis.max and y-max >= y-axis.min) {
left = calc.max(left, x-axis.min)
right = calc.min(right, x-axis.max)
y-min = calc.max(y-min, y-axis.min)
y-max = calc.min(y-max, y-axis.max)
draw.rect((left, y-min), (right, y-max))
if not filling and err != 0 {
let y-whisker-size = self.whisker-size * ctx.x-scale
draw-errorbar(((left + right) / 2, y-max),
0, err, 0, y-whisker-size / 2, self.style + self.error-style)
}
}
}
}
#let _stroke(self, ctx) = {
_draw-rects(false, self, ctx, fill: none)
}
#let _fill(self, ctx) = {
_draw-rects(true, self, ctx, stroke: none)
}
/// Add a bar- or column-chart to the plot
///
/// A bar- or column-chart is a chart where values are drawn as rectangular boxes.
///
/// - data (array): Array of data items. An item is an array containing an x and one or more y values.
/// For example `(0, 1)` or `(0, 10, 5, 30)`. Depending on the `mode`, the data items
/// get drawn as either clustered or stacked rects.
/// - x-key: (int,string): Key to use for retrieving a bar's x-value from a single data entry.
/// This value gets passed to the `.at(...)` function of a data item.
/// - y-key: (auto,int,string,array): Key to use for retrieving a bar's y-value. For clustered/stacked
///   data, this must be set to a list of keys (e.g. `range(1, 4)`). If set to `auto`, all but the first
///   array-values of a data item are used as y-values.
/// - error-key: (none,int,string): Key to use for retrieving a bar's y-error.
/// - mode (string): The mode on how to group data items into bars:
/// / basic: Add one bar per data value. If the data contains multiple values,
/// group those bars next to each other.
/// / clustered: Like "basic", but take into account the maximum number of values of all items
/// and group each cluster of bars together having the width of the widest cluster.
/// / stacked: Stack bars of subsequent item values onto the previous bar, generating bars
/// with the height of the sum of all of an item's values.
/// / stacked100: Like "stacked", but scale each bar to height $100$, making the different
/// bars percentages of the sum of an item's values.
/// - labels (none,content,array): A single legend label for "basic" bar-charts, or a
/// list of legend labels per bar category, if the mode is one of "clustered", "stacked" or "stacked100".
/// - bar-width (float): Width of one data item on the x axis
/// - bar-position (string): Positioning of data items relative to their x value.
/// - "start": The lower edge of the data item is on the x value (left aligned)
/// - "center": The data item is centered on the x value
/// - "end": The upper edge of the data item is on the x value (right aligned)
/// - cluster-gap (float): Spacing between bars inside a cluster.
/// - whisker-size (float): Size of the error bar whiskers.
/// - error-style (dictionary): Style of the error bars.
/// - style (dictionary): Plot style
/// - axes (axes): Plot axes. To draw a horizontal growing bar chart, you can swap the x and y axes.
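///
/// Example: a minimal usage sketch (assumes this function is called from within
/// a plot body with suitable axes; the data values are made up for illustration):
/// ```typc
/// add-bar(((0, 2), (1, 4), (2, 3)),
///   mode: "basic",
///   bar-width: .8,
///   labels: [Measurement])
/// ```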
#let add-bar(data,
x-key: 0,
y-key: auto,
error-key: none,
mode: "basic",
labels: none,
bar-width: 1,
bar-position: "center",
cluster-gap: 0,
whisker-size: .25,
error-style: (:),
style: (:),
axes: ("x", "y")) = {
assert(mode in ("basic", "clustered", "stacked", "stacked100"),
message: "Mode must be basic, clustered, stacked or stacked100, but is " + mode)
assert(bar-position in ("start", "center", "end"),
message: "Invalid bar-position '" + bar-position + "'. Allowed values are: start, center, end")
assert(bar-width != 0,
message: "Option bar-width must be != 0, but is " + str(bar-width))
if error-key != none {
assert(y-key != auto,
message: "Bar value-key must be set != auto if error-key is set")
assert(mode in ("basic", "clustered"),
message: "Error bars are supported for basic or clustered only, got " + mode)
}
// Transform data to (x, y, error) triplets
let data = data.map(row => _transform-row(row, x-key, y-key, error-key))
let n = util.max(..data.map(d => d.at(1).len()))
let x-offset = _get-x-offset(bar-position, bar-width)
let x-domain = (util.min(..data.map(d => d.at(0))) - x-offset,
util.max(..data.map(d => d.at(0))) - x-offset + bar-width)
let y-domain = (_min-value-fn.at(mode)(data),
_max-value-fn.at(mode)(data))
// For stacked 100%, multiply each column/bar
if mode == "stacked100" {
data = data.map(((x, y, err)) => {
let f = 100 / y.sum()
return (x, y.map(v => v * f), err)
})
}
// Transform data from (x, ..y) to (x, n, len, y-min, y-max) per y
let stacked = mode in ("stacked", "stacked100")
let clustered = mode == "clustered"
let bar-data = if mode == "basic" {
range(0, data.len()).map(_ => ())
} else {
range(0, n).map(_ => ())
}
let j = 0
for (x, y, err) in data {
let len = if clustered { n } else { y.len() }
let sum = 0
for (i, y) in y.enumerate() {
let err = err.at(i, default: 0)
if stacked {
bar-data.at(i).push((x, i, len, sum, sum + y, err))
} else if clustered {
bar-data.at(i).push((x, i, len, 0, y, err))
} else {
bar-data.at(j).push((x, i, len, 0, y, err))
}
sum += y
}
j += 1
}
let labels = if type(labels) == array { labels } else { (labels,) }
range(0, bar-data.len()).map(i => (
type: "bar",
label: labels.at(i, default: none),
axes: axes,
mode: mode,
data: bar-data.at(i),
x-domain: x-domain,
y-domain: y-domain,
style: style,
bar-width: bar-width,
bar-position: bar-position,
cluster-gap: cluster-gap,
whisker-size: whisker-size,
error-style: error-style,
plot-prepare: _prepare,
plot-stroke: _stroke,
plot-fill: _fill,
plot-legend-preview: self => {
draw.rect((0,0), (1,1), ..self.style)
}
))
}
|
https://github.com/herbhuang/utdallas-thesis-template-typst | https://raw.githubusercontent.com/herbhuang/utdallas-thesis-template-typst/main/layout/acknowledgement.typ | typst | MIT License | #let acknowledgement(body) = {
set page(
margin: (left: 30mm, right: 30mm, top: 40mm, bottom: 40mm),
numbering: none,
number-align: center,
)
let body-font = "New Computer Modern"
let sans-font = "New Computer Modern Sans"
set text(
font: body-font,
size: 12pt,
lang: "en"
)
set par(leading: 1em)
// --- Acknowledgements ---
align(left, text(font: sans-font, 2em, weight: 700,"Acknowledgements"))
v(15mm)
body
} |
https://github.com/Lypsilonx/boxr | https://raw.githubusercontent.com/Lypsilonx/boxr/main/demo.typ | typst | MIT License | #import "@preview/boxr:0.1.0": *
#set page(
"a3",
margin: 0mm
)
#set align(center + horizon)
#render-structure(
"box",
width: 100pt,
height: 100pt,
depth: 100pt,
tab-size: 20pt
)
// #let size = get-structure-size(
// "box",
// width: 100pt,
// height: 100pt,
// depth: 100pt,
// tab-size: 20pt
// )
// #render-structure(
// "ramp",
// width: 100pt,
// height: 50pt,
// depth: 200pt,
// tab-size: 20pt
// )
// #render-structure(
// "step",
// width: 100pt,
// height-1: 50pt,
// height-2: 30pt,
// depth-1: 100pt,
// depth-2: 80pt,
// tab-size: 20pt
// ) |
https://github.com/tiankaima/typst-notes | https://raw.githubusercontent.com/tiankaima/typst-notes/master/2bc0c8-2024_spring_TA/lecture_9.typ | typst | #set text(
font: ("linux libertine", "Source Han Serif SC", "Source Han Serif"),
size: 10pt,
)
#show math.equation: set text(11pt)
#show math.equation: it => [
#math.display(it)
]
#let dcases(..args) = {
let dargs = args.pos().map(it => math.display(it))
math.cases(..dargs)
}
#show image: it => [
#set align(center)
#it
]
#let Q_A(Q_it, A_it) = [
#let blue = rgb("#0000bb")
#box(width: 100%)[
#Q_it
]
#rect(width: 100%, stroke: 0.005em + blue, height: 0em)
#pad(x: 1.5em)[
#set text(fill: blue)
*Answer*
#A_it
]
#rect(width: 100%, stroke: 0.005em + blue, height: 0em)
#v(4em)
]
#align(center)[
= Recitation 9 Handout
2024 Spring Mathematical Analysis (B2)
<NAME>
]
== Homework Solutions
#Q_A([
=== P193 1(3)
$
x=a cos t,y=a sin t,z=a ln(cos t) space (0<=t<=pi / 4)
$
])[
$
&r(t)&=&(a cos t,y=a sin t, a ln(cos t))\
&r'(t)&=&(-a sin t,a cos t,-a tan t)\
&abs(r'(t))^2&=&a^2(1+tan^2 t)=a^2 sec^2 t\
&abs(r'(t))&=&a sec t\
$
$
integral_L dif s&=integral_0^(pi / 4) a sec t dif t=a ln(sqrt(2)+1)
$
Notice that $1/2 ln(3+2sqrt(2))=ln(sqrt(2)+1)$, so the two forms of the answer are equivalent.
*Recall:* $integral sec t dif t=ln(sec t+tan t)+C=1/2 ln((1+sin t)/(1-sin t))+C$
Do memorize the formula; here's a quick proof in case you forget it:
$
integral sec t dif t=integral (dif t) / (cos t) = integral (cos t dif t) / (cos^2 t)=integral (dif (
sin t
)) / (1-sin^2t) \
= integral 1 / 2 (1 / (1-sin t) + 1 / (1+sin t)) dif t=1 / 2 ln((1+sin t)/(1-sin t))+C
$
]
#Q_A([
=== 1(4)
$
z^2=2a x, 9y^2=16x z quad O(0,0,0) -> A(2a,8 / 3 a, 2a)
$
])[
The key here is to turn the implicit function into a parametric one.
Since the goal is to make the result come out as simple as possible, we can pick $x=2a t^2$ to make the first equation easier to solve:
$
x=2a t^2, z^2=2a x=4a^2 t^2 quad &=> quad z=2a t space (z>0)\
9y^2=16x z= 64 a^2 t^3 quad &=> quad y=8 / 3 a t^(3 / 2)
$
Then:
$
&r(t)&=&(2a t^2,8 / 3 a t^(3 / 2),2a t)\
&r'(t)&=&(4a t,4 a t^(1 / 2),2a)\
&abs(r'(t))^2&=&16a^2 t^2+16a^2 t+4a^2=4a^2(4t^2+4t+1)=4a^2(2t+1)^2\
&abs(r'(t))&=&2a(2t+1)
$
$
integral_L dif s=integral_0^1 2a(2t+1) dif t=4a
$
]
#Q_A([
=== 1(5)
$
4a x=(y+z)^2, 4x^2+3y^2=3z^2 quad O(0,0,0) -> A(x,y,z)
$
])[
As mentioned in the previous question, we can turn the implicit function into a parametric one. Take $x=a t^2$:
$
x=a t^2, (y+z)^2=4a^2t^2 quad &=> quad y+z=2a t\
3(y-z)(y+z)=-4a^2 t^4 quad &=> quad y-z=-2 / 3 a t^3\
$
$
&r(t)&=&(a t^2,a t-1 / 3 a t^3,a t+1 / 3 a t^3)\
&r'(t)&=&(2a t,a-a t^2,a+a t^2)\
&abs(r'(t))^2&=&4a^2 t^2+a^2(1+t^4)+a^2(1+t^4)=2a^2(t^4+2t^2+1)=2a^2(t^2+1)^2\
&abs(r'(t))&=&sqrt(2)a(t^2+1)
$
$
integral_L dif s=integral_0^t sqrt(2)a(t^2+1) dif t=sqrt(2)a(1 / 3 t^3+t)
$
]
#Q_A([
=== 2(2)
$
integral_L z^2 / (x^2+y^2) dif s, L: x=a cos t, y=a sin t, z=a t space (0<=t<=2pi)
$
])[
$
&r(t)&=&(a cos t,a sin t,a t)\
&r'(t)&=&(-a sin t,a cos t,a)\
// &abs(r'(t))^2&=&a^2 sin^2 t+a^2 cos^2 t+a^2=a^2(1+1)=2a^2\
&abs(r'(t))&=&sqrt(2)a
$
$
integral_L z^2 / (x^2+y^2) dif s=sqrt(2)a integral_0^(2pi) t^2 dif t= 8 / 3 sqrt(2) pi^3 a
$
]
#Q_A([
=== 2(5)
$
&integral_L (x+y+z) dif s\ L: &A(1,1,0)->B(1,0,0)\ &B C: x=cos t, y=sin t, z=t space (0<=t<=2pi)
$
])[
$L_1$:
$
integral_(L_1) (x+y+z) dif s=integral_0^1 (1+t) dif t=3 / 2
$
$L_2$:
$
integral_(L_2) (x+y+z) dif s=integral_0^(2pi) (cos t+sin t+t) sqrt(2) dif t=2 sqrt(2) pi^2
$
$
=> integral_L (x+y+z) dif s=integral_(L_1) + integral_(L_2)=3 / 2 + 2 sqrt(2) pi^2
$
]
#Q_A([
=== 2(9)
$
integral_L x sqrt(x^2-y^2)dif s, L: (x^2+y^2)^2=a^2(x^2-y^2) space (x>=0)
$
])[
Here's an example of bad parametrization:
$
x^2-y^2= t^2, space x^2+y^2=a t\
=> x^2=1 / 2 (a t+t^2), y^2=1 / 2 (a t-t^2)\
r(t) = (sqrt(2) / 2 sqrt(a t+t^2), sqrt(2) / 2 sqrt(a t-t^2))\
r'(t) = (sqrt(2) / 2 (a+2t) / (2sqrt(a t+t^2)), sqrt(2) / 2 (a-2t) / (2sqrt(a t-t^2)))\
$
This is going to take forever to integrate. Instead, we can use polar coordinates:
$
x=r cos theta, y=r sin theta space (-pi / 4<=theta<=pi / 4)\
r^4=a^2r^2 cos 2 theta => r=a sqrt(cos 2 theta)\
$
$
&r(theta) &=& (a cos theta sqrt(cos 2 theta), a sin theta sqrt(cos 2 theta))\
&r'(theta) &=& (-(a sin 3 theta) / (sqrt(cos 2theta)), (a cos 3 theta) / (sqrt(cos 2theta)))\
&abs(r'(theta)) &=& a / sqrt(cos 2 theta)
$
Then,
$
integral_L x sqrt(x^2-y^2)dif s=&integral_(-pi / 4)^(pi / 4) a cos theta sqrt(cos 2 theta) dot (
a cos 2 theta
) dot a / sqrt(cos 2 theta) dif theta\
=&integral_(-pi / 4)^(pi / 4) a^3 cos theta cos 2 theta dif theta\
=&(2sqrt(2)) / 3 a^3
$
]
#Q_A([
=== 2(10)
$
integral_L (x^2+y^2+z^2)^n dif s quad L: x^2+y^2=a^2, z=0
$
])[
This is a simple one. We don't need to parametrize the curve: the integrand always equals $a^(2n)$, so we just multiply by $2pi a$ (the length of the curve):
$
integral_L (x^2+y^2+z^2)^n dif s=(a^2)^n dot 2pi a=2pi a^(2n+1)
$
]
#Q_A([
=== 2(11)
$
integral_L x^2 dif s quad L: x^2+y^2+z^2=a^2, x+y+z=0
$
])[
Just for demonstration, you can still parametrize the curve: being the intersection of a sphere and a plane, it's obviously a circle.
With a little imagination, we can find the center of the circle $P$ is $(0,0,0)$, and the radius is $a$. We find two perpendicular unit vectors on the plane, $v_1=1/sqrt(2) (1,-1,0)$ and $v_2=1/sqrt(6) (1,1,-2)$ (note that $(1,-1,0)$ and $(1,0,-1)$ are _not_ orthogonal, so they would not trace a circle), then the parametric equation of the circle is:
$
r(theta) &= a cos theta dot v_1 + a sin theta dot v_2\
&= a ((cos theta) / sqrt(2) + (sin theta) / sqrt(6), -(cos theta) / sqrt(2) + (sin theta) / sqrt(6), -(2 sin theta) / sqrt(6))
$
The rest is just plugging into the formula:
$
r'(theta) &= a (-sin theta dot v_1 + cos theta dot v_2)\
abs(r'(theta)) &= a
$
I'll leave the rest to you.
But we can also use the symmetry of the curve to solve this problem:
$
integral_L x^2 dif s = 1 / 3 integral_L (x^2+y^2+z^2) dif s = 1 / 3 a^2 dot 2pi a = 2 / 3 pi a^3
$
This is something we've already shown multiple times in previous homeworks.
]
#Q_A([
=== 2(12)
$
integral_L (x y + y z+ x z) dif s quad L: x^2+y^2+z^2=a^2, x+y+z=0
$
])[
Since $(x+y+z)^2=x^2+y^2+z^2+2(x y + y z + x z)$
$
integral_L (x y + y z+ x z) dif s = -1 / 2 a^2 dot 2pi a = -pi a^3
$
]
#Q_A([
=== P200 1(1)
$
"surface area of" space z=sqrt(x^2+y^2) quad "inside" x^2+y^2=2x
$
])[
I actually recommend *against* reciting the textbook style of calculating $E=r'_u^2 ...$. It's time-consuming and error-prone.
_But if you do use them, DO NOT CHANGE SYMBOLS. The textbook uses $r(u,v), r_u, r_v, E, F, G$; they come from the field of differential geometry, so use the exact same symbols to avoid confusion._
Here are two formulas you should memorize to make your life easier:
1. For a general surface $r=r(u,v)$: $dif S = abs(r_u times r_v) dif u dif v$.
2. Given a $z=f(x,y)$, $dif S = sqrt(1+f_x^2+f_y^2) dif x dif y$.
For this question, we can use the second formula:
$
f_x = x / sqrt(x^2+y^2), f_y = y / sqrt(x^2+y^2)\
dif S = sqrt(1+x^2/(x^2+y^2)+y^2/(x^2+y^2)) dif x dif y = sqrt(2) dif x dif y
$
So the surface area would just be $sqrt(2)$ times the area of the projection of the surface on the $x y$ plane:
$
sigma(S) = sqrt(2) sigma(S_(x y)) = sqrt(2) pi
$
To make your life more troublesome, you can parametrize the surface:
$
&(x-1)^2+y^2=1 => x=1+r cos theta, y=r sin theta
$
$
z&=sqrt(x^2+y^2)=sqrt(1+2r cos theta+r^2)\
hat(r)(r,theta)&=(1+r cos theta,r sin theta,sqrt(1+2r cos theta+r^2))\
hat(r)'_r&=(cos theta,sin theta,(cos theta+r) / sqrt(1+2r cos theta+r^2))\
hat(r)'_theta&=(-r sin theta,r cos theta,0)\
$
I really must stop here before I have a heart attack, but you can continue to calculate the surface area by $abs(hat(r)'_r times hat(r)'_theta)$. Then integrate it over the region of $r,theta$.
The $hat(r)$ is introduced to avoid confusion with the $r$ in the polar coordinate. here $hat(r)$ means nothing more than a "collection of $x,y,z$".
Another way to parametrize the surface without a heart attack is to use a real polar coordinate system, with the origin at $(0,0)$. You get a better $z(r,theta)$ this time, but you now have to integrate over a varying $r(theta)$. *It's recommended you try it yourself using this method.*
_The real idea behind this question is to generalize $dif S$ to $(r, theta, z)$, much as the textbook did with $(r, theta, phi)$ on page 197; let's discuss it here:_
Assuming we're fixing it on $r=r_0$:
$
hat(r)&=(r_0 cos theta, r_0 sin theta, z)\
hat(r)'_theta&=(-r_0 sin theta, r_0 cos theta, 0)\
hat(r)'_z&=(0,0,1)\
hat(r)'_theta times hat(r)'_z&=(r_0 cos theta, r_0 sin theta, 0)\
abs(hat(r)'_theta times hat(r)'_z)&=r_0
$
Try drawing a picture of these vectors on a cylinder; you'll get a better understanding of $hat(r)'_theta, hat(r)'_z$ and the cross product. Try to imagine the meaning behind the directions of these vectors.
For this problem, it's really $z=z(r,theta)$ not $r=r_0$:
$
hat(r)&=(r cos theta, r sin theta, r)\
hat(r)'_r&=(cos theta, sin theta, 1)\
hat(r)'_theta&=(-r sin theta, r cos theta, 0)\
hat(r)'_r times hat(r)'_theta&=(r cos theta, r sin theta, r)\
abs(hat(r)'_r times hat(r)'_theta)&=r sqrt(1+cos^2 theta+sin^2 theta)=sqrt(2) r
$
So the surface area could be calculated as:
$
integral.double_S dif S &= integral_(-pi / 2)^(pi / 2) dif theta integral_0^(r(theta)) sqrt(2) r dif r\, space r(theta)=2 cos theta\
&= sqrt(2) pi
$
Note it's much worse if you calculate it the other way around:
$
integral.double_S dif S = integral_0^2 dif r integral_(-theta(r))^(theta(r)) sqrt(2) r dif theta, space theta(r)=arccos(r/2)
$
In this chapter and later ones, many more calculation techniques are introduced to make your life easier. Use them wisely; that's my personal advice.
]
#Q_A([
=== 1(2)
$
"surface area of" space x^2+y^2=a^2 quad "intersected by" x+z=0, x-z=0
$
])[
With the discussion above, it's a "$r=r_0$ fixed type":
$
hat(r)(theta, z)&=(a cos theta, a sin theta, z)\
hat(r)'_theta&=(-a sin theta, a cos theta, 0)\
hat(r)'_z&=(0,0,1)\
hat(r)'_theta times hat(r)'_z&=(a cos theta, a sin theta, 0)\
abs(hat(r)'_theta times hat(r)'_z)&=a
$
The upper and lower bound for $z$ are determined by the two planes, so the surface area is:
$
integral.double_S dif S = integral_0^(2pi) dif theta integral_(-a abs(cos theta))^(a abs(cos theta)) a dif z = integral_0^(2pi) 2a^2 abs(cos theta) dif theta = 8 a^2
$
The absolute value is easily noticed during calculation; if you drop it, you'll be confused by getting a $0$ result. These are the checks you should "unconsciously" perform during calculation.
]
#Q_A([
=== 1(5)
$
"surface area of" space x=1 / 2(2y^2+z^2) quad "inside" 4y^2+z^2=1
$
])[
This is nothing more than changing $x y$ to $y z$... use the second formula:
$
dif S&=sqrt(1+f_y^2+f_z^2) dif y dif z\
&=sqrt(1+(2y)^2+z^2) dif y dif z\
&=sqrt(4y^2+z^2+1) dif y dif z\
&= sqrt(r^2 + 1) dot r / 2 dif r dif theta
$
If you have doubts about the last step, you can parametrize the surface:
$
hat(r)(r,theta)&=(1 / 2 r cos theta, r sin theta, 1 / 4 r^2 cos^2 theta+1 / 2 r^2 sin^2 theta)\
hat(r)'_r&=(1 / 2 cos theta, sin theta, 1 / 2 r cos^2 theta+ r sin^2 theta)\
hat(r)'_theta&=(-1 / 2 r sin theta, r cos theta, 1 / 2 r^2 sin theta cos theta)\
hat(r)'_theta times hat(r)'_r&=(1 / 2 r^2 cos theta, 1 / 2 r^2 sin theta, -1 / 2 r)\
abs(hat(r)'_theta times hat(r)'_r)&=sqrt(1 / 4 r^4 cos^2 theta+1 / 4 r^4 sin^2 theta+1 / 4 r^2)=r / 2 dot sqrt(r^2+1)
$
$
integral.double_S dif S = integral_0^(2pi) dif theta integral_0^1 sqrt(r^2+1) dot r / 2 dif r = (2sqrt(2)-1) / 3 pi
$
]
#Q_A([
=== 2(2)
$
integral.double_S x y z dif S quad S: x+y+z=1, x,y,z>0
$
])[
$
dif S = sqrt(1+f_x^2+f_y^2) dif x dif y = sqrt(3) dif x dif y\
integral.double_S dif S = sqrt(3) integral_0^1 dif x integral_0^(1-x) x y (1-x-y) dif y = sqrt(3) / 120
$
]
#Q_A([
=== 2(3)
$
integral.double_S (x^2+y^2) dif S quad S: "surrounded by" z=sqrt(x^2+y^2), z=1
$
])[
$S_1: {(x,y) mid(|) x^2+y^2<=1} times {1}$
$
integral.double_(S_1) (x^2+y^2) dif S = integral_0^(2pi) dif theta integral_0^1 r^3 dif r = pi / 2
$
$S_2: {(x,y,z) mid(|) x^2+y^2<=1, z=sqrt(x^2+y^2)}$
$
dif S = sqrt(1+f_x^2+f_y^2) dif x dif y = sqrt(2) dif x dif y\
integral.double_(S_2) (...) dif S = sqrt(2) integral.double_(S_1) (...) dif S = sqrt(2) / 2 pi
$
Adding them up:
$
integral.double_S dif S = integral.double_(S_1) + integral.double_(S_2) = (sqrt(2)+1) / 2 pi
$
]
#Q_A([
=== 2(6)
$
integral.double_S (dif S) / (r^2) quad S: x^2+y^2=R^2, z in [0,H]
$
])[
$
hat(r)(theta, z)&=(R cos theta, R sin theta, z)\
hat(r)'_theta&=(-R sin theta, R cos theta, 0)\
hat(r)'_z&=(0,0,1)\
hat(r)'_theta times hat(r)'_z&=(R cos theta, R sin theta, 0)\
abs(hat(r)'_theta times hat(r)'_z)&=R
$
$
integral.double_S (dif S) / (r^2) = integral_0^(2pi) dif theta integral_0^H (R dif z) / (R^2+z^2) = 2pi arctan(H/R)
$
If you recall, you might have already done a similar exercise in your electrodynamics course, where you calculate the electric field of an infinite cylinder.
]
#Q_A([
=== 2(7)
$
integral.double_S abs(x y z) dif S quad S: z=x^2+y^2, z in [0,1]
$
])[
$
dif S = sqrt(1+f_x^2+f_y^2) dif x dif y = sqrt(4x^2+4y^2+1) dif x dif y
$
Further, transform this to cylinder coordinate:
$
dif S = sqrt(4 r^2 + 1) dot r dif r dif theta
$
This requires proof, though:
$
hat(r)(r,theta)&=(r cos theta, r sin theta, r^2)\
hat(r)'_r&=(cos theta, sin theta, 2r)\
hat(r)'_theta&=(-r sin theta, r cos theta, 0)\
hat(r)'_r times hat(r)'_theta&=(2 r^2 cos theta, 2 r^2 sin theta, r)\
abs(hat(r)'_r times hat(r)'_theta)&=r sqrt(1+ 4r^2 cos^2 theta+4r^2 sin^2 theta)=sqrt(4 r^2+1) dot r
$
Then the integral is:
$
integral.double_S abs(x y z) dif S = integral_0^(2pi) dif theta integral_0^1 abs(cos theta sin theta) dot r^5 sqrt(4 r^2+1) dif r = 1 / 420 (
125 sqrt(5) - 1
)
$
]
#Q_A([
=== 3(1)
$
integral.double_S (x^2+y^2) dif S quad S: x^2+y^2+z^2=R^2
$
])[
$
integral.double_S (x^2+y^2) dif S = integral.double_S (y^2+z^2) dif S = integral.double_S (z^2+x^2) dif S
$
$
=> integral.double_S (x^2+y^2) dif S = 2 / 3 integral.double_S (x^2+y^2+z^2) dif S = 2 / 3 (
4 pi R^2 dot R^2
) = 8 / 3 pi R^4
$
]
#Q_A([
=== 3(2)
$
integral.double_S (x+y+z) dif S, quad S: x^2+y^2+z^2=a^2 (z>=0)
$
])[
Reversing $S->S'$ essentially means we take a new parametrization of the upper half of the sphere, namely $(x,y)->(-x,-y,sqrt(a^2-x^2-y^2))$
Changing the parametrization shouldn't result in a change of the integral value, so:
$
integral.double_S (x+y+z) dif S = integral.double_S' (-x-y+z) dif S = integral.double_S z dif S
$
Then it's the good old formula 2:
$
z&=sqrt(a^2-x^2-y^2)\
dif S&=sqrt(1+f_x^2+f_y^2) dif x dif y\
&=sqrt(1+x^2/(a^2-x^2-y^2)+y^2/(a^2-x^2-y^2)) dif x dif y\
&=a / z dif x dif y
$
This cancels out the $z$ in the integral, so the result is:
$
integral.double_S z dif S = integral.double_S_(x y) z dot a / z dif x dif y = a dot pi a^2 = pi a^3
$
]
#Q_A([
=== 4
$G$ is a bounded, closed region on plane $A x+B y+C z+D=0 space (C!=0)$, its projection on $O x y$ is $G_1$. Show that
$
sigma(G) / sigma(G_1)=sqrt((A^2+B^2+C^2)/C^2)
$
])[
$
z=-A / C x-B / C y-D / C\
dif S=sqrt(1+f_x^2+f_y^2) dif x dif y=sqrt(1+A^2/C^2+B^2/C^2) dif x dif y
$
$
integral.double_G dif S = integral.double_(G_1) sqrt(1+A^2/C^2+B^2/C^2) dif x dif y = sqrt((A^2+B^2+C^2)/C^2) integral.double_(G_1) dif x dif y\
=> sigma(G) = sqrt((A^2+B^2+C^2)/C^2) sigma(G_1)
$
]
#Q_A([
=== P215 1(3)
$
integral_L (-x dif y+y dif x) / (x^2+y^2) quad L:x^2+y^2=a^2 "counter-clockwise"
$
])[
$
x=a cos t, y=a sin t quad t: 0->2pi
$
$
integral.cont.ccw_L (-x dif y+y dif x) / (x^2+y^2) = integral_0^(2pi) (-a cos t a cos t+a sin t a sin t) / a^2 dif t = integral_0^(2pi) - cos(2t) dif t = 0
$
Notice you can't use Green's theorem here, since the vector field is undefined at the origin, so its domain (the punctured plane) is not simply connected.
// $
// arrow(v)&=(-x / (x^2+y^2), y / (x^2+y^2))\
// gradient times v &= (diff / (diff x), diff / (diff y)) times (P,Q)\
// &= (diff Q) / (diff x) - (diff P) / (diff y)\
// &= (-2x y) / (x^2+y^2) + (2x y) / (x^2+y^2)\
// & = 0
// $
]
#Q_A([
=== (4)
$
integral_L y^2 dif x + x y dif y + x z dif z quad L: O(0,0,0)->A(1,0,0)->B(1,1,0)->C(1,1,1)
$
])[
$
integral_(L_1) y^2 dif x = 0\
integral_(L_2) x y dif y=1 / 2\
integral_(L_3) x z dif z=1 / 2\
integral_L arrow(v) dot dif arrow(r) = integral_(L_1) + integral_(L_2) + integral_(L_3) = 1
$
]
#Q_A([
=== 2(6)
$
integral_L y dif x+z dif y + x dif z quad L: x+y=2, x^2+y^2+z^2=2(x+y)
$
])[
The key again falls back onto the parametrization:
We treat $L$ as the intersection between $x+y=2$ and $x^2+y^2+z^2=4$ (on the curve, $2(x+y)=4$).
So the center is $(1,1,0)$ and the radius is $sqrt(2)$. Two perpendicular vectors on the plane are $v_1=(1,-1,0)$ and $v_2=(0,0,1)$, so the parametrization is:
$
r(t) &= cos t dot (1,-1,0)+sin t dot (0,0,sqrt(2)) + (1,1,0)\
&= (cos t+1, -cos t+1, sqrt(2) sin t)\
r'(t) &= (-sin t, sin t, sqrt(2) cos t)\
$
$
&quad integral_L y dif x+z dif y+x dif z\
= &- integral_0^(2pi) (-cos t+1)(- sin t)+(sqrt(2) sin t)(sin t)+(cos t+1)(sqrt(2) cos t) dif t\
= &-2 sqrt(2) pi
$
]
#Q_A([
=== 2
$
bold(v)=(y+z)bold(i)+(z+x)bold(j)+(x+y)bold(k)\
L: x=a sin^2t,y=2a sin t cos t,z=a cos^2 t space (0<=t<=pi)
$
])[
$
integral_L bold(v)dot dif bold(r) &= integral (y+z)dif x+(z+x)dif y+(x+y)dif z\
&=integral dif(x y +y z+x z)\
&=integral dif (a sin^2 t dot 2a sin t cos t+2a sin t cos t dot a cos^2 t+a cos^2 t dot a sin^2 t)\
&=0\
$
]
#Q_A([
=== 4
Use Green's theorem to calculate the following integrals:
(2) $
integral.cont_L (x y+x+y)dif x+(x y+x-y)dif y quad L:x^2+y^2=1 "counter-clockwise"
$
(3) $
integral.cont_L (y x^3+e^y)dif x+(x y^3+x e^y-2y)dif y quad L:"symmetric with respect to x and y axis"
$
(4) $
integral.cont sqrt(x^2+y^2)dif x+y[x y+ln(x+sqrt(x^2+y^2))]dif y quad L: y^2=x-1, x=2
$
(6) $
integral_(A M O)(e^x sin y-m y)dif x+(e^x cos y-m)dif y\
"AMO": A(a,0) -> O(0,0), x^2+y^2=a x (a>0) "upper half"
$
])[
(2) $
integral_L bold(v) dot dif bold(r) = integral.double_S ((diff Q)/(diff x)-(diff P)/(diff y)) dif S = integral.double_S (y+1)-(x+1) dif x dif y = 0
$
(3) $
integral_L bold(v) dot dif bold(r) = integral.double_S ((diff Q)/(diff x)-(diff P)/(diff y)) dif S = integral.double_S (y^3+e^y)-(x^3+e^y) dif x dif y = 0
$
(4) $
integral_L bold(v) dot dif bold(r) = integral.double_S ((diff Q)/(diff x)-(diff P)/(diff y)) dif S &= integral.double_S y(y+1/sqrt(x^2+y^2)) - y/sqrt(x^2+y^2) dif x dif y\
&= integral_1^2 dif x integral_(-sqrt(x-1))^(sqrt(x-1)) y^2 dif y\
&= integral_0^1 2/3 x^(3/2) dif x\
&=4/15
$
(6) $
integral.cont = integral.double_S (e^x cos y)-(e^x cos y - m) dif x dif y = m/8 pi a^2\
integral_0^a (e^x sin y - m y) dif x = 0\
=> integral_(A M O) = m/8 pi a^2
$
]
#Q_A([
=== 5
Calculate the following areas:
(1) $
x=a cos^3 t, y=a sin^3 t space (0<=t<=2pi)
$
(2) $
x=a(t-sin t), y=a(1-cos t) space (0<=t<=2pi)
$
])[
(1) $
sigma(D)=integral_0^(2pi) x dif y=integral_0^(2pi) a cos^3 t dot 3 a sin^2 t cos t dif t=3/8 pi a^2
$
(2) $
sigma(D)=-integral_(2pi)^0 y dif x + integral_0^(2pi a) 0 dif x=integral_0^(2pi) a^2 (1-cos t)^2 dif t = 3 pi a^2
$
]
#Q_A([
=== 6
Calculate the following integral: $
integral_L (-y dif x+ x dif y)/(x^2+y^2)
$
(1) From $A(-a,0)->B(a,0)$, $y=sqrt(a^2-x^2), space a>0$
(2) From $A(-1,0)->B(3,0)$, $y=4-(x-1)^2$
])[
This is still something we have already encountered multiple times:
$
gradient(arctan(y/x)) = (-y / (x^2+y^2), x / (x^2+y^2)) quad x>0
$
The scalar field can be extended to $RR^2\\{(0,y)mid(|)y<=0}$ as follows:
$
&phi(x,y) = arctan(y/x) quad &x>0\
&phi(0,y) = pi / 2 quad &y>0\
&phi(x,y) = arctan(y/x)+pi quad &x<0\
$
Given that the two paths don't cross the negative y-axis, we just need to evaluate $phi(x,y)$ at the following points:
$
A_1(-a,0) quad B_1(a,0) quad A_2(-1,0) quad B_2(3,0)
$
Thus,
$
phi(A_1) = pi, phi(B_1) = 0 => integral_L_1 = -pi\
phi(A_2) = pi, phi(B_2) = 0 => integral_L_2 = -pi
$
]
|
|
https://github.com/jeffa5/typst-todo | https://raw.githubusercontent.com/jeffa5/typst-todo/main/todo.typ | typst | Apache License 2.0 | #let todo_counter = counter("todos")
#let todo(content, inline: false, fill: orange, note: "", numbers: none, border_radius: 4pt, border_stroke: 1pt, inline_width:100%, line_stroke: orange + 0.1em, underline_content: true) = {
locate(loc => {
let heading_counter = counter(heading).at(loc)
let todo_count = todo_counter.at(loc)
let note_counter = (..heading_counter, ..todo_count)
let count_display = if numbers != none { [#numbering(numbers, ..note_counter)] }
if inline {
assert(note.len() == 0, message: "inline notes cannot have separate note text")
[#box(rect(width:inline_width, fill:fill, radius:border_radius, stroke:border_stroke, [#count_display #content])) <todos>]
} else {
let content = if underline_content {underline(stroke:line_stroke, evade: false, content)}else{content}
[#box(rect(fill:fill, radius:border_radius, stroke:border_stroke, [#count_display #note])) <todos> #content]
}
todo_counter.step()
})
}
#let missing_figure(content, width: 100%, fill: gray, stroke: none, prefix: [*Missing Figure*:]) = {
rect(width: width, fill: fill, stroke: stroke, [#prefix #content])
}
#let list_of_todos(title: "List of Todos", outlined: true, numbers: none) = {
heading(title, numbering: none, outlined: outlined)
locate(loc => {
let todos = query(<todos>, loc)
for todo in todos {
let location = todo.location()
let todo_counter = todo_counter.at(location)
let heading_counter = counter(heading).at(location)
let note_counter = (..heading_counter, ..todo_counter)
let counter_display = if numbers != none { numbering(numbers, ..note_counter) }
let page = counter(page).at(location)
let body = todo.body.body.children.last()
let body_func = body.func()
let text_slug = if body_func == text {
let text = body.at("text")
text.slice(0, calc.min(90,text.len()))
} else {
body.children.first()
}
[
#link(todo.location())[
#box(width: 1em, height:1em, fill: todo.body.fill, stroke: todo.body.stroke)
#counter_display #text_slug
]
#box(width: 1fr, repeat[.])
#link(todo.location(), [
#numbering("1", ..page)
]) \
]
}
})
}
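// Usage sketch (hypothetical document; adjust the import path to your setup):
// #import "todo.typ": todo, missing_figure, list_of_todos
// #todo[Verify this claim against the latest spec.]
// #todo(inline: true, fill: yellow)[Rewrite this paragraph.]
// #missing_figure[Architecture overview diagram]
// #list_of_todos()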
|
https://github.com/TJ-CSCCG/tongji-undergrad-thesis-typst | https://raw.githubusercontent.com/TJ-CSCCG/tongji-undergrad-thesis-typst/main/paddling-tongji-thesis/tongjithesis.typ | typst | MIT License | #import "elements.typ": *
#set pagebreak(weak: true)
#let thesis(
school: "某学院", major: "某专业", id: "0000000", student: "某某某", teacher: "某某某", title: "某标题", subtitle: "某副标题", title-english: "Some Title", subtitle-english: "Some Subtitle", date: datetime.today(), abstract: "慧枫尚萍氢,驳展妙棚端梦称委竞励。绘象臂淬人壳闭营风混仓、问抬兽村蜡胡锹挤污艰烃伏惧派宝既抓章住蓟棒褶均谭穿谴属;羟贮银…钓郭曾牙记氢硝巍仰蒲邀趟。革旅剑撞压单施宵饼狼将售烷贸问术粮洞魔。却烟陕倍且隘框糟秩板商,宙刚疮顿表羽楞景哺驯邮戒歌溜著聪峻忙劈左绩卖卫萨讯完读百釉好仔帜纽龟玉炒脂衍蛴瓦副冯查索桐梁;轴派?蝗丸朝保岂搅搞燕挫品休礼倾玻黑李宽列邮苦仔汛鳙物己弱寸栓孝哄俭牙敬厄搬吨楞干捧原趋息…善!", keywords: ("关键词1", "关键词2", "关键词3"), abstract-english: lorem(300), keywords-english: ("Keyword1", "keyword2", "keyword3"), doc,
) = {
set document(author: id + " " + student, title: title)
set page(
paper: "a4", margin: (top: 4.2cm, bottom: 2.7cm, left: 3.3cm, right: 1.8cm), binding: left,
)
set text(font-size.at("5"), font: font-family.song, lang: "zh", region: "cn")
make-cover(
(
"课题名称", title, "副标题", subtitle, "学院", school, "专业", major, "学生姓名", student, "学号", id, "指导老师", teacher, "日期", date.display("[year]年[month]月[day]日"),
),
)
pagebreak()
set par(justify: true, first-line-indent: 2em, leading: 0.9em)
show par: set block(spacing: 0.9em)
set math.equation(numbering: none) // not implemented yet: (1.1)
show strong: it => text(font: font-family.hei, weight: "bold", it.body)
show emph: it => text(font: font-family.kai, style: "italic", it.body)
show raw: set text(font: font-family.code)
show math.equation: set text(font: font-family.math)
show raw.where(block: true): block.with(fill: luma(250), inset: 10pt, radius: 4pt)
set underline(offset: 3pt, stroke: 0.6pt) // to make latin and CJK characters have the same underline offset
set list(indent: 2em, spacing: 0.9em)
set heading(numbering: (..nums) =>
if nums.pos().len() <= 3 {
nums.pos().map(str).join(".")
} else if nums.pos().len() == 4 {
"ABCDEFGHIJKLMNOPQRSTUVWXYZ".at(nums.pos().at(-1) - 1) + ". "
} else if nums.pos().len() == 5 {
"abcdefghijklmnopqrstuvwxyz".at(nums.pos().at(-1) - 1) + ". "
})
show heading: it => locate(
loc => {
if it.level == 1 {
set align(center)
set text(font: font-family.hei, size: font-size.at("4"), weight: "bold")
if it.numbering != none {
numbering(it.numbering, ..counter(heading).at(loc))
h(1em)
it.body
} else {
it
}
v(1.5em)
} else if it.level == 2 {
set text(font: font-family.hei, size: font-size.at("5"), weight: "bold")
v(0.5em)
if it.numbering != none {
h(-2em)
numbering(it.numbering, ..counter(heading).at(loc))
h(1em)
it.body
} else {
it
}
v(1em)
} else if it.level == 3 {
set text(font: font-family.hei, size: font-size.at("5"), weight: "bold")
v(0.5em)
if it.numbering != none {
numbering(it.numbering, ..counter(heading).at(loc))
h(1em)
it.body
} else {
it
}
v(1em)
} else if it.level == 4 {
set text(font: font-family.hei, size: font-size.at("5"), weight: "bold")
v(-0.5em)
grid(columns: (2em, 1fr), [], it)
v(0.5em)
} else if it.level == 5 {
set text(font: font-family.hei, size: font-size.at("5"), weight: "bold")
v(-0.5em)
grid(columns: (2em, 1fr), [], it)
v(0.5em)
} else {
it
}
},
) + empty-par()
show list: it => it + empty-par()
show enum: it => it + empty-par()
show figure: it => v(0.5em) + it + v(0.5em) + empty-par()
show figure: set block(breakable: true)
show table: it => it + empty-par()
show math.equation.where(block: true): it => it + empty-par()
show raw.where(block: true): it => it + empty-par()
show heading: i-figured.reset-counters.with(extra-kinds: ("algo",))
show figure: i-figured.show-figure.with(extra-prefixes: (algo: "algo:"))
show math.equation.where(block: true): i-figured.show-equation
show figure.where(kind: table): set figure.caption(position: top)
set page(
numbering: "I", header: {
set text(font: font-family.song, font-size.at("-4"))
grid(
columns: (0.5em, 1fr, auto, 0.5em), [], image("figures/tongji.svg", height: 1cm), block(height: 0.7cm, [#set align(right); 毕业设计(论文)]), [],
)
v(-0.5em)
line(length: 100%, stroke: 1.8pt)
draw-binding()
}, header-ascent: 20%, footer: locate(loc => {
set align(center)
set text(font: font-family.song, size: font-size.at("-4"))
numbering("I", counter(page).at(loc).first())
}),
)
counter(page).update(1)
make-abstract(
title: title, abstract: abstract, keywords: keywords, prompt: ("摘要", "关键词:"),
)
pagebreak()
make-abstract(
title: title-english, abstract: abstract-english, keywords: keywords-english, prompt: ("ABSTRACT", "Key words: "), is-english: true,
)
pagebreak()
make-outline()
pagebreak()
set page(footer: locate(loc => {
line(stroke: 1.8pt, length: 100%)
set align(right)
set text(font: font-family.song, size: font-size.at("-4"))
v(-0.6em)
[
共#h(1em)
#counter(page).final(loc).at(0)#h(1em)
页#h(1em)
第#h(1em)
#counter(page).display()
#h(1em)页
]
}))
counter(page).update(1)
doc
}
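// Usage sketch (illustrative values; every parameter has a default, see above):
// #import "tongjithesis.typ": thesis
// #show: thesis.with(
// title: "论文标题",
// student: "学生姓名",
// id: "1234567",
// )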
|
https://github.com/noahjutz/CV | https://raw.githubusercontent.com/noahjutz/CV/main/body/section.typ | typst | #let section(title) = block[
#text(
size: 20pt,
font: "Roboto Slab",
title
)
] |
|
https://github.com/wuespace/delegis | https://raw.githubusercontent.com/wuespace/delegis/main/README.md | markdown | MIT License | # delegis
<table>
<tr>
<td><img src="demo-1.png" alt="Page containing a logo at the top-right and a geric (example) title"></td>
<td><img src="demo-2.png" alt="Page containing an outline"></td>
<td><img src="demo-3.png" alt="Page containing a German-style legislative content including a preamble, division titles, sections, paragraph and sentence numbering, etc."></td>
</tr>
</table>
A package and template for drafting legislative content with a German-style structure, such as for bylaws, etc.
While the template is designed to be used in German documents, all strings are customizable. You can have a look at `delegis.typ` to see all available parameters.
## General Usage
While this `README.md` gives you a brief overview of the package's usage, we recommend that you use the template (in the `template` folder) as a starting point instead.
### Importing the Package
```typst
#import "@preview/delegis:0.3.0": *
```
### Initializing the template
```typst
#show: delegis.with(
// Metadata
title: "Vereinsordnung zu ABCDEF", // title of the law/bylaw/...
abbreviation: "ABCDEFVO", // abbreviation of the law/bylaw/...
resolution: "3. Beschluss des Vorstands vom 24.01.2024", // resolution number and date
in-effect: "24.01.2024", // date when it comes into effect
draft: false, // whether this is a draft
// Template
logo: image("wuespace.jpg", alt: "WüSpace e. V."), // logo of the organization, shown on the first page
)
```
### Sections
Sections are auto-detected as long as they follow the pattern `§ 1 ...` or `§ 1a ...` in their own paragraph:
```typst
§ 1 Geltungsbereich
(1)
Diese Ordnung gilt für alle Mitglieder des Vereins.
(2)
Sie regelt die Mitgliedschaft im Verein.
§ 2 Mitgliedschaft
(1)
Die Mitgliedschaft im Verein ist freiwillig.
(2)
Sie kann jederzeit gekündigt werden.
§ 2a Ehrenmitgliedschaft
(1)
Die Ehrenmitgliedschaft wird durch den Vorstand verliehen.
```
Alternatively (or if you want to use special characters otherwise not supported, such as `*`), you can also use the `#section[number][title]` function:
```typst
#section[§ 3][Administrator*innen]
```
### Hierarchical Divisions
If you want to add more structure to your sections, you can use normal Typst headings. Note that only the level 6 headings are reserved for the section numbers:
```typst
= Allgemeine Bestimmungen
§ 1 ABC
§ 2 DEF
= Besondere Bestimmungen
§ 3 GHI
§ 4 JKL
```
Delegis will automatically use a numbering scheme for the divisions that is in line with the "Handbuch der Rechtsförmlichkeit", Rn. 379 f. If you want to customize the division titles, you can do so by setting the `division-prefixes` parameter in the `delegis` function:
```typst
#show: delegis.with(
division-prefixes: ("Teil", "Kapitel", "Abschnitt", "Unterabschnitt")
)
```
### Sentence Numbering
If a paragraph contains multiple sentences, you can number them by adding a `#s~` at the beginning of the sentences:
```typst
§ 3 Mitgliedsbeiträge
#s~Die Mitgliedsbeiträge sind monatlich zu entrichten.
#s~Sie sind bis zum 5. des Folgemonats zu zahlen.
```
This automatically adds corresponding sentence numbers in superscript.
### Referencing other Sections
Referencing works manually by specifying the section number. While automation would be feasible, we have found that in practice, it's not as useful as it might seem for legislative documents.
In some cases, referencing sections using `§ X` could be misinterpreted as a new section. To avoid this, use the non-breaking space character `~` between the `§` and the number:
```typst
§ 5 Inkrafttreten
Diese Ordnung tritt am 24.01.2024 in Kraft. §~4 bleibt unberührt.
```
## Changelog
### v0.3.0
#### Features
- Adjust numbered list / enumeration numbering to be in line with "Handbuch der Rechtsförmlichkeit", Rn. 374
- Make division titles (e.g., "Part", "Chapter", "Division") customizable and conform to the "Handbuch der Rechtsförmlichkeit", Rn. 379 f.
### v0.2.0
#### Features
- Add `#metadata` fields for usage with `typst query`. You can now use `typst query file.typ "<field>" --field value --one` with `<field>` being one of the following to query metadata fields in the command line:
- `<title>`
- `<abbreviation>`
- `<resolution>`
- `<in-effect>`
- Add `#section[§ 1][ABC]` function to enable previously unsupported special chars (such as `*`) in section headings. Note that this was previously possible using `#unnumbered[§ 1\ ABC]`, but the new function adds a semantically better-fitting alternative to this fix.
- Improve heading style rules. This also fixes an incompatibility with `pandoc`, meaning it's now possible to use `pandoc` to convert delegis documents to HTML, etc.
- Set the footnote numbering to `[1]` to not collide with sentence numbers.
#### Bug Fixes
- Fix a typo in the `str-draft` variable name that led to draft documents resulting in a syntax error.
- Fix hyphenation issues with the abbreviation on the title page (hyphenation between the parentheses and the abbreviation itself)
### v0.1.0
Initial Release
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/in-dexter/0.1.0/sample-usage.typ | typst | Apache License 2.0 | #import "./in-dexter.typ": *
// This typst file demonstrates the usage of the in-dexter package.
#set text(lang: "en", font: "Arial", size: 10pt)
#set heading(numbering: "1.1")
// Index-Entry hiding : this rule makes the index entries in the document invisible.
#show figure.where(kind: "jkrb_index"): it => {}
// Front Matter
#align(center)[
#text(size: 23pt)[in-dexter]
#linebreak() #v(1em)
#text(size: 16pt)[An index package for Typst]
#linebreak() #v(.5em)
#text(size: 12pt)[Version 0.1.0 (7. January 2024)]
#linebreak() #v(.5em)
#text(size: 10pt)[<NAME>, <NAME>]
#linebreak() #v(.5em)
#text(size: 10pt)[Contributors: \@epsilonhalbe, \@sbatial]
#v(4em)
]
= Sample Document to Demonstrate the in-dexter package
Using the in-dexter package in a typst document consists of some simple steps:
+ Importing the package `in-dexter`.
+ Marking the words or phrases to include in the index.
+ Generating the index page by calling the `make-index()` function.
== Importing the Package
The in-dexter package is currently available on GitHub in its home repository
(https://github.com/RolfBremer/in-dexter). It is still in development and may have
breaking changes #index[Breaking Changes] in its next iteration.
#index[Iteration]#index[Development]
```typ
#import "./in-dexter.typ": *
```
The package is also available via Typst's built-in Package Manager:
```typ
#import "@preview/in-dexter:0.1.0": *
```
Note that the version number of the Typst package has to be adapted to get the desired version.
== Marking of Entries
We have marked several words to be included in an index page at the end of the document. The markup
for the entry stays invisible#index[Invisible]. Its location in the text gets recorded, and later it
is shown as a page reference in the index page.#index([Index Page])
```typ
#index[The Entry Phrase]
```
or
```typ
#index([The Entry Phrase])
```
or
```typ
#index("The Entry Phrase")
```
== Advanced entries
=== Symbols
Symbols can be indexed under the initial `"Symbols"` and sorted at the top of the index, like this
```typ
#index(initial: (letter: "Symbols", sort-by: "#"), [$(rho)$])
```
=== Nested entries
Entries can be nested. The `index` function takes multiple arguments - one for each nesting level.
```typ
#index("Sample", "medical", "blood")
#index("Sample", "medical", "tissue")
#index("Sample", "musical", "piano")
```
#index("Sample", "medical", "blood")
#index("Sample", "medical", "tissue")
#index("Sample", "musical", "piano")
=== Formatting Entries
#index(fmt: strong, [Formatting Entries])
Entries can be formatted with arbitrary functions that map `content` to `content`.
```typ
#index(fmt: it => strong(it), [The Entry Phrase])
```
or shorter
```typ
#index(fmt: strong, [The Entry Phrase])
```
For convenience in-dexter exposes `index-main`, which formats the entry in bold. It is
semantically named to decouple the markup from the actual style. One can decide to have
the main entries slanted or color formatted, which makes it clear that the style should
not be part of the function name in markup. Naming markup functions according to their
purpose (semantically) also eases the burden on the author, because she doesn't have to remember
the currently valid styles for her intent.
Another reason to use semantic markup functions is to have them defined in a central
place. Changing the style becomes very easy this way.
```typ
#index-main[The Entry Phrase]
```
It is predefined in in-dexter like this:
```typ
#let index-main = index.with(fmt: strong)
```
Here we define another semantic index marker, which adds an "ff." to the page number.
```typ
#let index-ff = index.with(fmt: it => [#it _ff._])
```
#let index-ff = index.with(fmt: it => [#it _ff._])
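Since `fmt` accepts any function mapping content to content, further semantic markers can be defined in the same central place. Here is a sketch (the marker names are hypothetical; pick whatever fits your document):
```typ
#let index-api = index.with(fmt: emph) // API names, set in italics
#let index-term = index.with(fmt: smallcaps) // defined terms, in small caps
```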
== The Index Page
#index[Index Page]
To actually create the index page, the `make-index()` function has to be called. Of course,
it can be embedded into an appropriately formatted #index[Formatting]
environment#index[Environment], like this:
```typ
#columns(3)[
#make-index()
]
```
= Why Having an Index in Times of Search Functionality?
#index(fmt: strong, [Searching vs. Index])
A _hand-picked_#index[Hand Picked] or _handcrafted_#index[Handcrafted] Index in times of
search functionality#index[Search Functionality] seems a bit
old-fashioned#index[Old-fashioned] at first glance. But such an index allows the
author to direct the reader, who is looking for a specific topic#index-main("Topic",
"specific") (using index-main ), to exactly the right places.
Especially in larger documents#index[Large Documents] and books#index[Books] this becomes
very useful, since search engines#index[Search Engines] may provide#index[Provide] too
many locations of specific words. The index#index[Index] is much more
comprehensive,#index[Comprehensive] assuming that the author#index[Authors responsibility]
has its content#index[Content] selected well. Authors know best where a certain
topic#index("Topic", "certain") is explained#index[Explained] thoroughly#index[Thoroughly]
or merely noteworthy #index[Noteworthy] mentioned (using the `index` function).
Note that this document is not necessarily a good example of an index. Here we just need
to have as many index entries#index[Entries] as possible to
demonstrate#index-ff([Demonstrate]) (using a custom made `index-ff` function) the
functionality #index[Functionality] and have a properly#index[Properly] filled index at
the end.
Even for symbols like `(ρ)`.#index([$(rho)$], initial: (letter: "Symbols", sort-by: "#"))
Indexing should work for any Unicode string like Cyrillic (Скороспелка#index(initial:
(letter: "С", sort-by: "Ss"), "Скороспелка")) or German
(Ölrückstoßabdämpfung).#index(initial: (letter: "Ö", sort-by: "Oo"),
"Ölrückstoßabdämpfung") - though we need to add initials `#index(initial: (letter: "С",
sort-by: "Ss"), "Скороспелка")` or `#index(initial: (letter: "Ö", sort-by: "Oo"),
"Ölrückstoßabdämpfung")`.
#line(length: 100%, stroke: .1pt + gray)
#pagebreak()
= Index
Here we generate the Index page in three columns:
#columns(3)[
#make-index()
]
|
https://github.com/piepert/typst-seminar | https://raw.githubusercontent.com/piepert/typst-seminar/main/Beispiele/UniHausaufgabe/template.typ | typst | #set text(size: 12pt)
#let tasks_points_state = state("tasks_points")
#let subtask_counter = counter("subtask_counter")
#let st_color = rgb("#e1e1e1")
#let st_color_border = rgb("#8e8e8e")
#let init_task_points() = {
tasks_points_state.update(k => {
if k == none {
return (points: (0,), tasks: 0)
}
k
})
}
#let increase_task_points(p) = {
init_task_points()
tasks_points_state.update(k => {
k.points.at(k.tasks) += p
k
})
}
#let add_task_task_points() = {
init_task_points()
tasks_points_state.update(k => {
let _ = k.points.push(0)
k.tasks += 1
k
})
}
#let get_task_points() = {
init_task_points()
locate(loc => tasks_points_state
.final(loc)
.points
.at(tasks_points_state
.at(loc)
.tasks))
}
#let pointed(points, content) = {
table(columns: (1fr, auto),
inset: 0pt,
stroke: 0cm + white,
align: bottom,
[#content],
text(size: 12pt,
style: "normal",
block(
inset: (left: 1.5em),
[(#points P.)])
)
)
increase_task_points(points)
}
#let subtask(border: false, task, points, solution) = {
if border {
block(stroke: (bottom: 0.25mm + black, left: 0.25mm + black),
inset: (rest: 1em), [
#grid(columns: (auto, 1fr, auto),
block(inset: (right: 1em), strong(locate(loc => numbering("(a)", subtask_counter.at(loc).at(0)+1)))),
task,
[#if points >= 1 { block(inset: (left: 1.5em), [(#points P.)]) }]
)
]
)
par(block(inset: (left: 1em), solution))
} else {
table(columns: (auto, 1fr, auto),
stroke: white,
inset: 0pt,
strong(
block(breakable: true,
inset: (right: 0.5em),
locate(loc => numbering("(a)", subtask_counter.at(loc).at(0)+1))
)
),
task,
if points > 0 { block(inset: (left: 1.5em),
[(#points P.)]) }
)
par(solution)
}
v(1.5em, weak: true)
subtask_counter.update(k => k+1)
increase_task_points(points)
}
#let task(title, content) = {
add_task_task_points()
subtask_counter.update(0)
heading(
table(columns: (1fr, auto),
inset: 0pt,
stroke: 0cm + white,
align: bottom,
[Aufgabe #locate(loc => tasks_points_state.at(loc).tasks) -- #title],
text(size: 12pt,
style: "normal",
block(
inset: (left: 1.5em),
[(#get_task_points() P.)])
)
)
)
content
v(1.5em, weak: true)
}
#let adt(types, operants, variable_line, rules) = block(inset: (left: 0.5em), block(breakable: true,
inset: 0em,
stroke: (left: 0.25mm), [
#block(inset: (left: 1em, bottom: 1em, top: 1em, right: 1em),
stroke: (bottom: 0.25mm), {
if types.len() > 0 {
"["
let i = 0
while i < types.len() {
types.at(i)
i += 1
if i < types.len() {
", "
}
}
"]"
v(0.5em)
}
par({
for o in operants {
par(o)
}
})
}
)
#block(inset: (top: 0em, left: 1em, bottom: 1em, right: 1em), {
variable_line
set par(first-line-indent: 0em, hanging-indent: 0em)
v(0.5em)
block(inset: (left: 1.5em),
for r in rules {
if r == "" {
v(0.5em)
} else {
par(r)
}
}
)
})
]
)
)
#let project(title: "", authors: (), date: none, body) = {
let _ = subtask_counter.update(0)
let _ = init_task_points()
/* ------------- PAGE SETUP START ------------- */
set document(author: authors.map(a => a.name), title: title)
set par(first-line-indent: 0em, justify: true, linebreaks: "optimized")
set page(footer: align(center, counter(page).display() + " / " + locate(l => counter(page).final(l).at(0))))
set text(size: 12pt, lang: "de")
show par: set block(below: 1em)
show heading: it => block(inset: (top: 0.5em, bottom: 0.5em), par(justify: false, it.body))
show raw: it => if it.block {
v(1.5em, weak: true)
it
} else {
it
}
show raw.where(block: true): it => {
set par(justify: false)
block(inset: (left: 1em), grid(columns: (100%, 100%), column-gutter: -100%,
block(width: 100%, inset: 1em, {
let lines = it.text.split("\n").rev()
let _i = 0
while _i < lines.len() and lines.at(_i).trim() == "" and _i < 2 {
_i += 1
}
let lines = lines.slice(_i).rev()
for (i, line) in lines.enumerate() {
hide(box(width: 0pt, align(right, str(i + 1) + h(2em))))
set text(fill: rgb("#5e5e5e"), size: 0.75em)
box(width: 0pt, align(right, str(i + 1) + h(2em)))
hide(line)
linebreak()
}
}),
block(stroke: (left: 0.025cm + rgb("#8e8e8e")),
fill: luma(246),
width: 100%,
inset: 1em,
it),
))
}
show raw.where(block:true): block.with(
width: 100%,
)
/* ------------- PAGE SETUP END ------------- */
table(columns: (50%, 50%),
inset: 0pt,
stroke: white,
table(columns: (auto),
inset: 4pt,
stroke: white,
[Universität Rostock,],
[Fakultät für Informatik und\ Elektrotechnik]),
align(right, table(columns: (auto),
inset: 4pt,
stroke: white,
[Algorithmen und Datenstrukturen],
[Sommersemester 2023],
date
))
)
align(center, [
#locate(loc => {
text(size: 2em, strong("Aufgabenblatt "+str(title)))
if tasks_points_state.final(loc) == none {
return
}
par(text(size: 1em, [Erreichbare Punktzahl: ] + str(tasks_points_state.final(loc).points.fold(0, (i,j) => i+j))) + [ P.])
})
])
pad(
top: 0.5em,
bottom: 0.5em,
x: 2em,
grid(columns: (1fr,) * calc.min(3, authors.len()),
gutter: 1em,
..authors.map(author => align(center)[
*#author.name* \
Mat.-Nr.: #author.matnr
]),
),
)
body
}
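// Usage sketch (illustrative values):
// #show: project.with(
// title: "3",
// authors: ((name: "Max Mustermann", matnr: "1234567"),),
// date: "24.04.2023",
// )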
#let todo(body) = block(fill: rgb("#fff0f0"), stroke: (left: 2pt + red, rest: none), inset: 1em, width: 100%, strong[To Do] + par(body)) |
|
https://github.com/ysthakur/PHYS121-Notes | https://raw.githubusercontent.com/ysthakur/PHYS121-Notes/main/Notes/Ch07.typ | typst | MIT License | = Chapter 7
== Describing Circular and Rotational Motion
/ Rotational motion: Motion of objects that spin about an axis.
/ Angular position: $theta$ is the angular position of a particle, measured counterclockwise from the positive x-axis. Uses radians.
/ Arc length: The arc length $s = r theta$ is the distance a particle has traveled along its circular path.
/ Angular velocity: $omega = (Delta theta)/(Delta t)$ \
Every point on a rotating body has the same angular velocity. \
Relationship between speed and angular speed: $v = omega r$
/ Angular acceleration: $alpha = (Delta omega)/(Delta t)$ (units are $upright("rad"/s^2)$)
$Delta theta = omega_0 t + 1/2 alpha t^2$, just like with linear motion with constant acceleration.
Distance traveled along the arc: $s = r theta$
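A quick worked example (illustrative numbers): a point $r = 0.5 upright(m)$ from the axis of a wheel spinning at $omega = 4 upright("rad"/s)$ moves at $v = omega r = 2 upright(m/s)$, and one full revolution covers $s = r theta = 0.5 dot 2 pi approx 3.1 upright(m)$.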
== The Rotation of a Rigid Body
/ Rigid body: An extended object whose shape and size do not change as it moves.
== Torque
The ability of a force to cause rotation depends on:
- The magnitude $F$ of the force
- The distance $r$ from the pivot to the point at which force is applied
- The angle at which the force is applied
/ Equation for torque: $tau = r F_(perp) = r F sin phi$ (units: $upright(N dot m)$).
$phi$ is measured from the radial line to the direction of the force.
/ Radial line: Line starting at the pivot and going through the point where force is applied.
/ Line of action: Line that is in the direction of the force and passes through the point at which the force acts.
/ Moment arm/lever arm: Perpendicular distance from line of action to pivot.
/ Alternative equation for torque: $tau = r_(perp)F$
#align(center)[
#image("../Images/7-torque.png", width: 40%)
]
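A quick worked example (illustrative numbers): pushing with $F = 20 upright(N)$ on a wrench at $r = 0.25 upright(m)$ from the bolt, perpendicular to the radial line ($phi = 90 degree$), gives $tau = r F sin phi = 0.25 dot 20 dot 1 = 5 upright(N dot m)$.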
== Gravitational Torque and the Center of Gravity
Every particle in an object experiences torque due to the force of gravity. The gravitational torque can be calculated by assuming that the net force of gravity (the object's weight) acts at a single point. \
This single point is the *center of gravity*.
== Rotational Dynamics and Moment of Inertia
=== Relationship between torque and angular acceleration
Torque causes angular acceleration.
The tangential acceleration is $a_t = F/m$.
Tangential and angular acceleration are related by $a_t = alpha r$, so we can rewrite the equation as $alpha = F/(m r)$
We can connect this angular acceleration to torque: $tau = r F$
Relationship between torque and angular acceleration: $alpha = tau/(m r^2)$
#align(center)[
#image("../Images/7-torque-to-angular.png", width: 40%)
]
=== Newton's Second Law for Rotational Motion
For a rigid body rotating about a fixed axis, we can think of the object as consisting of multiple particles. \
Can calculate torque on each particle.
*Each particle has the same angular acceleration* because the object rotates together.
Net torque: $ tau_("net") = tau_1 + tau_2 + ... = m_1 r_1^2 alpha + m_2 r_2^2 alpha + ... = alpha sum m_i r_i^2 $
/ Moment of Inertia ($I$): The proportionality constant between angular acceleration and net torque. \
Units are $upright("kg" dot m)^2$ \
$ I = sum m_i r_i^2 $
Moment of inertia *depends on the axis of rotation*. It depends on how the mass is distributed around the rotation axis, not just how much mass there is.
*The moment of inertia is the rotational equivalent of mass*, i.e., $F_"net" = m a$, $tau_"net" = I alpha$
/ Newton's second law for rotation: An object that experiences a net torque $tau_"net"$ about the axis of rotation undergoes an angular acceleration of: $ alpha = (tau_"net")/I $
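A quick worked example (illustrative numbers): a net torque $tau_"net" = 2 upright(N dot m)$ on a body with $I = 0.5 upright("kg" dot m^2)$ produces $alpha = (tau_"net")/I = 4 upright("rad"/s^2)$.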
|
https://github.com/peterpf/modern-typst-resume | https://raw.githubusercontent.com/peterpf/modern-typst-resume/main/lib.typ | typst | The Unlicense | #let colors = (
primary: rgb("#313C4E"),
secondary: rgb("#222A33"),
accentColor: rgb("#449399"),
text-primary: black,
text-secondary: rgb("#7C7C7C"),
text-tertiary: white,
)
#let page-margin = 16pt
#let text-size = (
super-large: 24pt,
large: 14pt,
normal: 11pt,
small: 9pt,
)
// assets contains the base paths to folders for icons, images, ...
#let assets = (
icons: "assets/icons"
)
// joinPath joins the arguments to a valid system path.
#let joinPath(..parts) = {
let pathSeparator = "/"
let path = ""
for part in parts.pos() {
// Insert a separator between parts, but avoid duplicate and trailing separators.
if path != "" and not path.ends-with(pathSeparator) {
path += pathSeparator
}
path += part
}
return path
}
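// A quick usage note (values for illustration):
// joinPath("assets/icons", "globe.svg") => "assets/icons/globe.svg"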
// Load an icon by 'name' and set its color.
#let icon(
name,
color: white,
baseline: 0.125em,
height: 1.0em,
width: 1.25em) = {
let svgFilename = name + ".svg"
let svgFilepath = joinPath(assets.icons, svgFilename)
let originalImage = read(svgFilepath)
let colorizedImage = originalImage.replace(
"#ffffff",
color.to-hex(),
)
box(
baseline: baseline,
height: height,
width: width,
image.decode(colorizedImage)
)
}
// infoItem returns a content element with an icon followed by text.
#let infoItem(iconName, msg) = {
text(colors.text-tertiary, [#icon(iconName, baseline: 0.25em) #msg])
}
// circularAvatarImage returns a rounded image with a border.
#let circularAvatarImage(img) = {
block(
radius: 50%,
clip: true,
stroke: 4pt + colors.accentColor,
width: 2cm
)[
#img
]
}
#let headline(name, title, bio, avatar: none) = {
grid(
columns: (1fr, auto),
align(bottom)[
#text(colors.text-tertiary, name, size: text-size.super-large)\
#text(colors.accentColor, title)\
#text(colors.text-tertiary, bio)
],
if avatar != none {
circularAvatarImage(avatar)
}
)
}
// contact-details returns a grid element with neatly organized contact details.
#let contact-details(contact-options-dict) = {
if contact-options-dict.len() == 0 {
return
}
let contactOptionKeyToIconMap = (
linkedin: "linkedin",
email: "envelope",
github: "github",
mobile: "mobile",
location: "location-dot",
website: "globe",
)
// Evenly distribute the contact options among two columns.
let contactOptionDictPairs = contact-options-dict.pairs()
let midIndex = calc.ceil(contact-options-dict.len() / 2)
let firstColumnContactOptionsDictPairs = contactOptionDictPairs.slice(0, midIndex)
let secondColumnContactOptionsDictPairs = contactOptionDictPairs.slice(midIndex)
let renderContactOptions(contactOptionDictPairs) = [
#for (key, value) in contactOptionDictPairs [
#infoItem(contactOptionKeyToIconMap.at(key), value)\
]
]
grid(
columns: (.5fr, .5fr),
renderContactOptions(firstColumnContactOptionsDictPairs),
renderContactOptions(secondColumnContactOptionsDictPairs),
)
}
#let headerRibbon(color, content) = {
block(
width: 100%,
fill: color,
inset: (
left: page-margin,
right: 8pt,
top: 8pt,
bottom: 8pt,
),
content
)
}
#let header(author, job-title, bio: none, avatar: none, contact-options: ()) = {
grid(
columns: 1,
rows: (auto, auto),
headerRibbon(
colors.primary,
headline(author, job-title, bio, avatar: avatar)
),
headerRibbon(colors.secondary, contact-details(contact-options))
)
}
#let pill(msg, fill: false) = {
let content
if fill {
content = rect(
fill: colors.primary.desaturate(1%),
radius: 15%)[
#text(colors.text-tertiary)[#msg]
]
} else {
content = rect(
stroke: 1pt + colors.text-secondary.desaturate(1%),
radius: 15%)[#msg]
}
[
#box(content)~
]
}
#let experience(
title: "",
subtitle: "",
facility-description: "",
task-description: "",
date-from: "Present",
date-to: "Present",
label: "Courses") = [
#text(size: text-size.large)[*#title*]\
#subtitle\
#text(style: "italic")[
#text(colors.accentColor)[#date-from - #date-to]\
#if facility-description != "" [
#set text(colors.text-secondary)
#facility-description\
]
#text(colors.accentColor)[#label]\
]
#task-description
]
// experience-edu renders a content block for educational experience.
#let experience-edu(..args) = {
experience(..args, label: "Courses")
}
// experience-work renders a content block for work experience.
#let experience-work(..args) = {
experience(..args, label: "Achievements/Tasks")
}
// project renders a content block for a project.
#let project(title: "", description: "", subtitle: "", date-from: "", date-to: "") = {
let date = ""
if date-from != "" and date-to != "" {
date = text(style: "italic")[(#date-from - #date-to)]
} else if date-from != "" {
date = text(style: "italic")[(#date-from)]
}
text(size: text-size.large)[#title #date\ ]
if subtitle != "" {
set text(colors.text-secondary, style: "italic")
text()[#subtitle\ ]
}
if description != "" {
[#description]
}
}
#let modern-resume(
// The person's full name as a string.
author: "<NAME>",
// A short description of your profession.
job-title: [Data Scientist],
// A short description about your background/experience/skills, or none.
bio: none,
// An avatar that is pictured in the top-right corner of the resume, or none.
avatar: none,
// A list of contact options, defaults to an empty set.
contact-options: (),
// The resume's content.
body
) = {
// Set document metadata.
set document(title: "Resume of " + author, author: author)
// Set the body font.
set text(font: "Roboto", size: text-size.normal)
// Configure the page.
set page(
paper: "a4",
margin: (
top: 0cm,
left: 0cm,
right: 0cm,
bottom: 1cm,
),
)
// Set the marker color for lists.
set list(marker: (text(colors.accentColor)[•], text(colors.accentColor)[--]))
// Set the heading.
show heading: it => {
set text(colors.accentColor)
pad(bottom: 0.5em)[
#underline(stroke: 2pt + colors.accentColor, offset: 0.25em)[
#upper(it.body)
]
]
}
// A typical icon for outbound links. Use for hyperlinks.
let linkIcon(..args) = {
icon("arrow-up-right-from-square", ..args, width: 1.25em / 2, baseline: 0.125em * 3)
}
// Header
{
show link: it => [
#it #linkIcon()
]
header(author, job-title, bio: bio, avatar: avatar, contact-options: contact-options)
}
// Main content
{
show link: it => [
#it #linkIcon(color: colors.accentColor)
]
pad(
left: page-margin,
right: page-margin,
top: 8pt
)[#columns(2, body)]
}
}
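// A minimal usage sketch (hypothetical values; the file name in the import is
// an assumption, not part of this template):
//
//   #import "modern-resume.typ": modern-resume
//   #show: modern-resume.with(
//     author: "Jane Doe",
//     job-title: [Data Scientist],
//     contact-options: (email: "jane@example.com", github: "janedoe"),
//   )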
|
https://github.com/paugarcia32/CV | https://raw.githubusercontent.com/paugarcia32/CV/main/modules/skills.typ | typst | Apache License 2.0 | #import "../brilliant-CV/template.typ": *
#cvSection("Skills")
#cvSkill(
type: [Languages],
info: [Spanish #hBar() Catalan #hBar() English #hBar() German]
)
#cvSkill(
type: [Soft Skills],
info: [Willingness to learn #hBar() Teamwork #hBar() Agile & SCRUM methodologies #hBar() Problem Solving]
)
#cvSkill(
type: [Personal Interests],
info: [Martial Arts #hBar() Electronics #hBar() Reading #hBar() Music]
)
|
https://github.com/MultisampledNight/diagram | https://raw.githubusercontent.com/MultisampledNight/diagram/main/source/linux-audio/linux-audio.typ | typst | Other | #import "../template.typ": *
#show: template
#canvas(length: 1em, {
import draw: *
content((0, 30), text(2em)[Linux audio system overview])
let inner-base = oklch(58.31%, 0.175, 299.73deg)
let outer-base = oklch(48.81%, 0.125, 309.26deg)
let pw = inner-base
let pa = inner-base.rotate(120deg).darken(5%)
let jack = inner-base.rotate(240deg).darken(10%)
let indirect = outer-base
let direct = outer-base.rotate(90deg).lighten(5%)
let alsa = outer-base.rotate(180deg).lighten(10%)
let oss = outer-base.rotate(270deg).lighten(15%)
let pw-pa = pa.mix(pw)
let pw-jack = jack.mix(pw)
let pa-jack = jack.mix(pa)
let padsp = oss.mix(pa)
let alsa-oss = oss.mix(alsa)
let nodes = (
program: (
x: -1.75,
desc: [
*Program*
Would like to play audio. \
Can't access hardware directly though.
],
parts: (
indirect: (
y: 10,
long: [Indirectly through wrappers],
accent: indirect,
),
direct: (
y: 3,
long: [Directly to the kernel],
accent: direct,
),
),
),
api: (
x: -1,
desc: [
*API*
Just the _protocol_ that is spoken. \
Not necessarily what _processes_ it.
],
parts: (
pw: (
y: 10,
long: [PipeWire],
accent: pw,
),
pa: (
y: 9,
long: [PulseAudio],
accent: pa,
),
jack: (
y: 7,
long: [JACK],
accent: jack,
),
oss: (
y: 3,
long: [OSS],
accent: oss,
),
alsa: (
y: 0,
long: [ALSA],
accent: alsa,
),
),
),
adapter: (
x: 0,
desc: [
*Adapter*
Speaks one API, actually \
sends to a _different_ server.
],
parts: (
pw-pa: (
y: 8,
long: [pipewire-pulse],
accent: pw-pa,
),
pw-jack: (
y: 5,
long: [pipewire-jack],
accent: pw-jack,
),
pa-jack: (
y: 4,
long: [pulseaudio-jack],
accent: pa-jack,
),
padsp: (
y: 2,
long: [padsp],
accent: padsp,
),
alsa-oss: (
y: 1,
long: [alsa-oss],
accent: alsa-oss,
),
),
),
server: (
x: 1,
desc: [
*Server*
Juggles codecs, mixes and \
decides what is the final output.
],
parts: (
pw: (
y: 10,
long: [PipeWire],
accent: pw,
),
pa: (
y: 9,
long: [PulseAudio],
accent: pa,
),
jack2: (
y: 7,
long: [JACK2],
accent: jack,
),
jack1: (
y: 6,
long: [JACK1],
accent: jack,
),
),
),
kernel: (
x: 1.75,
desc: [
*Kernel*
Takes buffer and sends \
it to the hardware.
],
parts: (
oss: (
y: 3,
long: [OSS],
accent: oss,
),
alsa: (
y: 0,
long: [ALSA],
accent: alsa,
),
),
),
)
let connectors = (
program: (
indirect: ("jack", "pa", "pw"),
direct: ("oss", "alsa"),
),
api: (
oss: ("kernel.oss", "alsa-oss", "padsp"),
alsa: "kernel.alsa",
jack: ("server.jack1", "server.jack2", "pa-jack", "pw-jack"),
pa: ("server.pa", "pw-pa"),
pw: "server.pw",
),
adapter: (
pw-pa: "pw",
pw-jack: "pw",
pa-jack: "pa",
padsp: "pa",
alsa-oss: "kernel.alsa",
),
server: (
pw: "alsa",
pa: "alsa",
jack2: "alsa",
jack1: "alsa",
),
)
// scale the positions so they're not super tight
for (name, layer) in nodes {
layer.x *= 15
for (name, node) in layer.parts {
node.y *= 2.5
layer.parts.at(name) = node
}
nodes.at(name) = layer
}
for (layer-idx, layer) in connectors.pairs().enumerate() {
let (source-layer, connectors) = layer
let source-layer = nodes.at(source-layer)
let node-count = connectors.len()
for (node-idx, outgoing) in connectors.pairs().enumerate().rev() {
let (source-node, targets) = outgoing
if type(targets) != array {
targets = (targets,)
}
for target in targets {
let (target-layer, target-node) = if "." in target {
target.split(".")
} else {
(nodes.keys().at(layer-idx + 1), target)
}
// now that we got all text reprs, let's look them up
let source = source-layer.parts.at(source-node)
let target-layer = nodes.at(target-layer)
let target = target-layer.parts.at(target-node)
let source-pos = (source-layer.x, source.y)
let target-pos = (target-layer.x, target.y)
// and onto rendering them
// we'd like the y traverser to be on a different x position for every node in a layer
// so one can still differentiate between them
// hence this node specific offset
let node-specific-offset = (-node-idx + node-count / 2 - 0.5) * 1.75
let mid-bottom = (
to: (source-pos, 50%, (source-pos, "-|", target-pos)),
rel: (node-specific-offset, 0),
)
let mid-top = (
to: mid-bottom,
rel: (0, target.y - source.y),
)
let accent = gradient.linear(
source.accent,
target.accent,
)
line(
source-pos,
mid-bottom,
mid-top,
target-pos,
stroke: 2pt + accent,
)
}
}
}
for (i, layer) in nodes.values().enumerate() {
let side = calc.ceil(i / (nodes.len() - 2))
let anchor = ("east", none, "west").at(side)
let alignment = (right, center, left).at(side)
// description
content(
(layer.x, -4),
anchor: if anchor == none { "north" } else { "north-" + anchor },
align(alignment, layer.desc),
)
line(
(layer.x, -2.5),
(rel: (0, 30)),
stroke: (
paint: gamut.sample(35%),
dash: "loosely-dotted",
),
)
for part in layer.parts.values() {
let pos = (layer.x, part.y)
// actual node
content(
pos,
anchor: anchor,
box(
fill: bg,
inset: 0.25em,
radius: 0.5em,
text(
part.accent,
strong(part.long),
),
)
)
}
}
})
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/show-node-11.typ | typst | Other | // Error: 7-10 expected function, label, string, regular expression, symbol, or selector, found color
#show red: []
|
https://github.com/han0126/MCM-test | https://raw.githubusercontent.com/han0126/MCM-test/main/2024校赛typst/chapter/chapter2.typ | typst | #import "../template/template.typ":*
= Problem Analysis
== Analysis of Problem 1
For Problem 1, because the raw data set contains too many samples, which makes the computation difficult, 14 data samples were first selected systematically for analysis. Suitable indicators were then chosen for analysis and evaluation. The rank-sum ratio (RSR) comprehensive evaluation method was used for the assessment, and the 14 selected competition events were graded into three tiers.
== Analysis of Problem 2
This problem asks for a scientific method of evaluating the competitions so that they can be comprehensively evaluated and ranked. Analyzing the problem shows that it is a multi-indicator decision-analysis problem, so we choose the entropy weight method, a mathematical method commonly used for multi-indicator decision analysis. Its core principle is that, for each indicator in the data set, the uncertainty of its distribution is computed to measure its importance in the overall decision. This method accounts for the correlation among indicators and the allocation of weights, while also being stable and easy to apply. The data are first made positively oriented and then standardized; the information entropy is computed to determine the weights, which allows decision makers to better compare and evaluate the different options. The evaluated objects are then ranked according to the computed weights, achieving the goal of this problem. The analysis workflow is as follows:
#img(
image("../figures/图片1.png", width:80%),
caption: "Flowchart of the analysis of Problem 2"
)<1>
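As a brief sketch of the weighting step described above (the notation is ours, for illustration, and not the paper's own formulas): with $n$ evaluated objects and standardized values $p_(i j)$ for indicator $j$ out of $m$ indicators, the information entropy and the derived weight are
$ e_j = -1/(ln n) sum_(i=1)^n p_(i j) ln p_(i j), quad w_j = (1 - e_j)/(sum_(k=1)^m (1 - e_k)) $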
== Analysis of Problem 3
For Problem 3, we investigate the influence of each of the three factors on the number of participants. Since the three factors do not have a simple linear relationship, we first examine the relationship between each factor and the number of participants and establish regression equations. Three specific predictive regression models are built under different constraints (the concrete model formulas are given later in the paper). Problem 1 of this paper analyzed the influence of the three indicators on participation separately, constructing a linear regression model and a rational-function model; Problem 2 aims to uncover the interactions and intrinsic connections among the three indicators, so the model is now rebuilt to find the mutual influence among the factors and their joint contribution to the dependent variable. We therefore choose to construct a polynomial nonlinear regression model.
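As an illustrative sketch only (these symbols are assumptions; the paper's concrete formulas appear later), a polynomial nonlinear regression model of degree $k$ takes the form
$ y = beta_0 + beta_1 x + beta_2 x^2 + dots + beta_k x^k + epsilon $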
|
https://github.com/xdoardo/co-thesis | https://raw.githubusercontent.com/xdoardo/co-thesis/master/thesis/chapters/imp/related.typ | typst | #import "/includes.typ": *
== Related works<section-related>
The central aspect of this thesis is the use of coinduction and sized types to
express properties of the semantics of a language. Of course, this is not a new
theoretical breakthrough, as it draws on a plethora of previous works, such as
@danielsson-operational-semantics and @leroy-coinductive-bigstep.
The general objective is to come up with a representation of the semantics of a
language, be it functional or imperative, that allows a uniform treatment of
both the diverging and the fallible behaviour of the execution. Even though the
idea surely appears earlier in the literature, we choose to cite
@leroy-coinductive-bigstep, where the author uses coinduction to model the
diverging behaviour of the semantics of the untyped λ-calculus, but does so
using a relational definition and not an equational one, making proofs
concerning the semantics significantly more involved.
With the innovations proposed by Capretta's `Delay` monad, a new attempt to
obtain such a representation was that of Danielsson in
@danielsson-operational-semantics; nonetheless, Agda's instrumentation for
coinduction was not mature enough: it used the so-called _musical notation_,
which suffered from the same limitations that regular induction has when using
a syntax-based termination or productivity checker, and it is also worth noting
that musical notation is potentially unsound when used together with sized
types @agda-docs. It would be unfair, however, not to mention that recent
updates to the code related to @danielsson-operational-semantics indeed use
sized types and go further, employing concepts from cubical type theory.
In @concrete-semantics, the authors explore methods to apply transformations to
programs of an imperative language and prove the equivalence of the semantics
before and after such transformations; they do so using relational semantics
without the use of coinduction, thus not considering the "effect" of
non-termination. As noted, @concrete-semantics is the work we followed to come
up with transformations to explore.
In @nakata-coinductive-while, the authors show four semantics: big-step and
small-step relational semantics and big-step and small-step functional
semantics. They do so using Coq, which has no concept analogous to sizes.
In @hutton-compilers, the authors show how to implement correct-by-construction
compilers targeting the Delay monad.
|
|
https://github.com/duskmoon314/THU_AMA | https://raw.githubusercontent.com/duskmoon314/THU_AMA/main/docs/ch1/3-例题与应用.typ | typst | Creative Commons Attribution 4.0 International | #import "/book.typ": *
#show: thmrules
#show: book-page.with(title: "Examples and Applications")
https://github.com/The-Notebookinator/notebookinator | https://raw.githubusercontent.com/The-Notebookinator/notebookinator/main/themes/linear/components/pro-con.typ | typst | The Unlicense | #import "../colors.typ": *
#import "/utils.typ"
#let pro-con = utils.make-pro-con((pros, cons) => {
table(
columns: (50%, 50%),
inset: 0.75em,
fill: (col, row) => if row == 0 {
if col == 0 {
pro-green
}
if col == 1 {
con-red
}
},
align(
center,
text(size: 14pt, weight: "semibold", [Pros]),
),
align(
center,
text(size: 14pt, weight: "semibold", [Cons]),
),
pros,
cons,
)
})
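// A minimal usage sketch (assuming make-pro-con exposes the two blocks as the
// named arguments suggested by the closure above; the content is hypothetical):
//
//   #pro-con(pros: [Simple to build], cons: [Hard to tune])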
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/051%20-%20March%20of%20the%20Machine/005_Arcavios%3A%20A%20Radiant%20Heart.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Arcavios: A Radiant Heart",
set_name: "March of the Machine",
story_date: datetime(day: 20, month: 04, year: 2023),
author: "<NAME>",
doc
)
In the tunnels beneath Strixhaven, where dwelled the relics of bygone eras, Quintorius said, "I believe we're lost."
Groans met his announcement.
"It's our own school," Rootha growled, "we can't be #emph[lost] . It should be a straight shot from the dormitories to the Biblioplex!"
"Think of it as practice for when we're actually #emph[in] the Biblioplex and searching for the Invocation of the Founders," Dina said. "Remind me, how many expeditions have been formed to rescue lost students? About a hundred?" The pests in her shoulder bag squeaked and squirmed, and she slapped its side to silence them.
From the rear of the group, Zimone said, "I wonder if the invocation will really slow the Phyrexian invasion, like Professor Vess said."
Killian's head snapped up. "Better finding it late than not at all. My father can help. He was in the Biblioplex when the invasion began."
Quint flapped his ears but held his tongue. He wanted desperately to believe that professors other than Liliana had escaped the Phyrexian stranglehold—that there were others who #emph[hadn't] been overtaken and forged onto the Phyrexian intelligence—but in the dusty, lantern-lit darkness, with red sinew snaking across the walls and shadows stinking of black oil, hope was thin. The invasion had burned through Strixhaven's defenses. With their professors captured and compleated, it was hard to believe that the school could repel their assault. The dormitories still stood, the students within protected by Professor Vess and an army of undead—yet she couldn't last forever.
#figure(image("005_Arcavios: A Radiant Heart/01.png", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none)
But Quint took a deep breath and said, "Killian is correct. There's little to be gained from despair." Lantern held high, he led the way down the tunnel. Really, it wasn't too different from exploring any other ruins, though prior expeditions hadn't carried the threat of capture and compleation.
It was difficult going. The devastation caused when the Phyrexian portals ruptured the sky and the Invasion Tree's branches stabbed into the earth had caved in many underground tunnels. In some places, the ceiling had collapsed. The students were forced to heave aside debris to continue or backtrack and find new paths.
Resting beside a branch of the Invasion Tree after breaking through a particularly nasty blockage, Quint spotted a statue behind a crumbling block.
"Ah!" Quint said. "I should have considered this sooner."
"Considered what?" Killian asked.
Quint crouched beside the statue and traced white-gold sigils in the air, for conjuring, calling, and regenerating. "Who better to guide us than Strixhaven's earliest professors? The statue is clearly venerable, based on weathering and discoloration patterns, so if we ask—"
The sigils trembled as Quint's spell caught. Dust and pebbles whirled around the stone figure, a whirlwind in miniature that grew increasingly solid. Glowing white stone accreted from dust. Limbs stretched through the whirl; luminous eyes blinked as the professor's spirit incorporated into its statue.
The spirit looked around, scowled, and said, "Strixhaven's gone downhill since my day."
"Almost as if we're actively being invaded," Dina said.
The professor's spirit glared at her. Before it could voice displeasure, Quint said, "I'm terribly sorry for the inconvenience, professor—"
"#emph[Dean] , thank you. <NAME> the Second of—"
"Please," Quint interrupted to a sputter of disapproval, "we're in a hurry. Do you know the way to the Biblioplex? We're somewhat—um—"
"Lost," Rootha said.
"Lost!" <NAME> yelped. "How can you be #emph[lost] ?" Glowering, Killian snapped, "It's a long story, and we don't have time to explain."
"I should say not! How can you be lost when you're #emph[under] the Biblioplex?" Quint's eyes widened. Then, in unison, the students looked up at the Invasion Tree's branch, plated in carapace and pulsating with nauseating warmth, and the hole through which it had plunged.
"I'd rather we were lost again," Zimone muttered.
And Dina, grinning, asked, "Who's going first?"
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#emph[Never again will I scale a structure without expeditionary gear] , Quint thought as he climbed. Four pairs of hands grabbed his coat; four backs bent and heaved him out of the hole onto the Biblioplex floor. He wasn't the only one tired of losing his grip and slipping.
One glance, and he wished he could sink back down.
His lovely, luminous center of learning was gone. Red-edged portals squirmed overhead and bled lifeless, ruddy light. The Invasion Tree's branches cut through the air and walls alike, disrupting existing structures. And here was more red sinew, overrunning the furnishings in knotty columns, hand in hand with porcelain plates segmented like spinal columns. It seemed to feed off the very walls, dulling them, drinking in everything that gave the Biblioplex luster, and spitting out black oil and more tendrils of itself.
None of them spoke. The air felt so thick it choked the words in their throats. And yet light danced nearby, somehow, not the redness of Phyrexia, but motes like dust through a sunbeam, pale blue and frail. Without thinking, Quint reached for a mote~ and his eyes widened as the mote seemed to melt into his skin. A sensation of quiet, nascent discovery settled over him.
#emph[The invocation should feel new] , Professor Liliana had said. #emph[It should emit traces of itself . . . what kind, I don't know. But Strixhaven originated with the Invocation of the Founders, and the spell will seek to oppose any invasion. Find it. Cast it. Help it drive Phyrexia from our school.]
Quint glanced at his fellow students. More motes drifted around them, and their expressions shone as well with the same realization. These were traces of the invocation, awakened and struggling against the Phyrexian gloom.
#emph[Onward] , Quint thought, and followed the dancing lights.
Even though the Biblioplex appeared deserted, the presence of the portals stifled Quint's desire to talk and turned his thoughts gray with uncertainty. Which was odd, he thought as he inched along a book-lined, sinew-bound aisle. The Invasion Tree's branches themselves hummed; the air throbbed with the heartbeat thrum of malignant expansion. Yet the red-lit Biblioplex reminded Quint of a sepulcher. Even the ruins he'd studied had felt livelier.
Then, creeping at Rootha's heels, with Dina, Zimone, and Killian behind, Quint looked up—and almost jumped. Another student, a dark-haired dwarf in Lorehold red and white, stared at them from atop a bookcase, wide-eyed and still very much uncompleated. Their breath shuddered from lips bloodless with dread, barely rippling the staleness.
The student caught Quint's gaze, and their eyes welled up with relief. "Help me," they whispered.
As if the student's voice were a thrown pebble and the Biblioplex a pond, the red sinew #emph[rippled] .
#emph[Something] coiled around the bookcase and snared the student's leg. They had time for one terrified shriek before the thing whipped them down a dark passage—and Quint glimpsed a figure covered in steel-bright feathers and razored talons. Where tongue and beak should be, there was instead a spreading web of metal filaments.
"No!" Quint shouted, unable to stop himself.
At once Killian grabbed his shoulder in warning and Rootha slapped a hand over his mouth, but his cry raced through the red sinew with the same awful, spreading ripple. Dean Shaile's head abruptly swiveled around.
They didn't even speak. They just ran.
The Biblioplex howled around them in a voice that made Quint want to tear off his ears. The shadows themselves seemed to grasp at him with blood-red fingers. Down the labyrinthine ways—past shelves monstrous with carapace and chitin—over moats murky, reeking, veined with black oil—silent no longer, for silence was gone, and what chased them now was fear and fury. The edges of Quint's vision skittered with too many limbs and proportions too twisted for recognition. Magic flashed: darkly green bursts as Dina ripped life from the pests in her bag and flung patches of treacherous, moss-slick footing behind them; Killian's lashes of ink that snared at their pursuers' limbs; Rootha whirling to fling needle-sharp ice spikes or fiery blasts. Zimone panted as she tried to keep up, and Quint hauled her along as gently as possible. On they ran, trampling over ancient tomes, and even with his heart stuttering in his chest, he felt a stab of remorse—
"They're slowing," Rootha panted, and hope churned through Quint. They were almost at an atrium, where they could dive into one of a dozen branching halls and lose the pursuit—
Feather and metal and web-mouth fluttered overhead. Before Quint could react, <NAME> dove. Her talons hooked into Killian's collar, and then she was pulling up from the dive, Killian thrashing in her grip and <NAME> Zimone grabbing at his feet, trying to pull him back. <NAME> simply flapped her wings and rose higher. The web of her mouth wormed over Killian's skull and tried to creep beneath his eyelids, squeezed shut with horror.
"Where is your father?" <NAME> asked, and the web pulsated gently, almost lovingly, around Killian's head. White and black flashed along his hands, but whatever spells he was trying to cast fizzled inches from his fingertips.
Quint shook. He saw all too clearly the fate that awaited Killian, fetid with metal and oil. He couldn't let that happen. Voice high and cracked, he croaked, "We don't know. And since it's Dean Lu you want, you shouldn't waste your time on Killian. You don't need him. He won't be beneficial to you. To—to Phyrexia."
Dean Shaile's web-mouth fluttered wide for a second, showing Killian's terrified, ashen face, then closed again. "While I would have preferred the elder Lu over his inferior progeny, you're mistaken. Killian isn't wasting my time. All are welcome in Phyrexia's better, unalloyed Multiverse."
Quint cried out and reached for Killian, futile, distant, even as Dean Shaile screeched in triumph and more Phyrexianized professors rolled toward them in a terrible gleaming tide—
Then someone whispered, "#emph[You left yourself vulnerable] ," and Quint reeled as, in a whirl of ink-black flutters, Dean Embrose Lu landed beside them. Dean Shaile's web-mouth #emph[writhed] ; she spat Killian onto the floor and launched at Embrose as the other professors leaped—and the whirl exploded outward, passing over Quint and his fellow students with the lightness of silk scraps—but where they touched the professors, flesh bubbled and metal cracked or melted.
The professors' screams of rage and pain echoed through the Biblioplex as the ink stormed around them.
"#emph[Father, behind you!] " Killian shouted. He staggered upright, ink spilling from his hands and surging toward Dean Shaile, who arrowed at Embrose's back.
Then another curve of black knocked Killian aside—and a second later, a scythe-limb pierced the storm, slashing where Killian's head had been just a moment ago.
"#emph[You are so much weaker than you know] ," Embrose spat. The bookcases rocked back, books flying open as the words and wisdom of a hundred thousand writers ripped free and flew to Embrose's command. Arrows and darts pierced some of the attacking professors; shrouds of the stuff swarmed over others, choking them; still more ink flashed upward to shred the filaments of Dean Shaile's mouth. The onetime-owlin fell—
A scrap of darkness slapped over Killian's mouth as he summoned another slash of ink. As he clawed at it, Embrose said, "Run, Killian."
Killian's eyes flashed; he tore the ink aside. "I can help!"
"Yes. You would also distract me."
Dina snapped, "If you couldn't stop Phyrexia before, what makes you think you can defeat them now?"
Embrose's gaze flicked toward Killian—and to Quint's surprise, he caught the faintest wavering of the dean's normally stoic expression.
"I need not explain myself," Embrose said. "Now #emph[go] ."
And without warning lashes of black whipped around the students' torsos and flung them through a split-second hole in the vicious ink storm. Dean Shaile swiveled toward them—but another burst of ink ripped through her wings, and with a scream she refocused on Embrose, a shadow carved in human shape.
The ink dumped them unceremoniously several rows away. Quint scrambled to his feet as Killian leaped up, fire in his eyes—and Dina seized his arm.
"Let go!"
"You'll die," she said flatly, and the pests in her bag chirruped as though in agreement.
Killian's eyes narrowed. "I can't let my father be taken by Phyrexia."
"Right, sorry, you won't die. You'll only #emph[wish] you'd died." Then, as Killian drew breath, Dina added, "What's more important, throwing your life away here or finding the invocation?"
Quint thought of the lashing ink. Though he knew it would hurt Killian, he said, "Dina's right."
"My father could help!"
Zimone reached for Killian's hand, then drew back, as though afraid to touch him. "Dean Embrose #emph[is] helping."
"By making himself #emph[bait] ?"
"By giving us the time and space needed to find the invocation. You'd know best, but—would he sacrifice himself if he didn't believe you would succeed?"
Killian's jaw clenched, and Quint saw Rootha tense, ready to restrain him—then he nodded, just once.
But as they marched forward, Quint caught flashes of white magic—words to strengthen and support—flitting from Killian's fingertips in his father's direction.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
The sounds of battle faded as they advanced through the Biblioplex. To Quint, the silence felt thicker now, as though punishing them for speaking even briefly. Even the invocation's wafting lights grew weak at times, forcing them to search until they found another drift of motes. The only consolation, if it could be called such, was that Embrose had attracted the professors' attention. The corridors were now clear—#emph[mostly] clear.
Quint paused mid-stride. A metallic shape hung from the arch at the end of the corridor. The five students exchanged glances. Then, without speaking, they chose another path.
Unfortunately, it became clear that the upside-down professor was not the only one to have abstained from battling Embrose. The Biblioplex whispered with slithering and clacked with metal against stone floors. Twice—three times—#emph[four] —more times than Quint could count—they scrambled behind stacks or wedged themselves into alcoves as professors stalked past, eyes sweeping the shadows and shelves. Strange shapes, limbs cracking and bending in ways that should incapacitate, but somehow didn't. Quint knew those appearances would haunt him forever.
#emph[How much of the Biblioplex have we searched by now?] Quint wondered, trying to recall the layout as he scrunched behind a column of red sinew. The Biblioplex was vast and convoluted. Even Professor Vess, Planeswalker and scholar, had yet to explore its full extent. Still, thinking about the mazelike ways was better than listening to the professor shuffling nearby. The sound of its feet—however many of those there now were—scraped directly across Quint's brain~
#figure(image("005_Arcavios: A Radiant Heart/02.png", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none)
The shuffling faded as the professor moved away. Across the aisle, Quint caught Dina's eye, and she nodded. It was safe to move again.
They were in a wholly unfamiliar part of the Biblioplex now, with dust thick on statues and books, and oil-slimed cobwebs twined with delicate strands of sinew. The students had to split up to pass through the narrower ways. They regrouped—the red, breathing silence made solitude repugnant—only to be forced by the aisles to diverge once more. Quint clung to those moments together, brief though they were, as the reddish shadows bore down upon them.
Then, as he slipped with Dina between two bookcases, the bookcase on his right trembled, and he spotted long, needle-like fingers curving over the top. He froze. A professor hung on the opposite side, waiting—watching. Quint exchanged a glance with Dina. It would spot them the instant they emerged.
Then he heard a gasp, somewhere to his right.
Zimone.
The bookcase creaked as the professor whirled around—
Quint stumbled to the end of his aisle and saw Zimone's terrified eyes barely peeking out from behind a waist-high bookcase, and Rootha and Killian's hands on her shoulders drawing her back—saw, in all its bizarre, twisted glory, the scythe-limbed professor slinking in their direction—but he also saw the statue against a far wall, shrouded in red sinew. His fingers flew through the motions, spilling out in one second a spell that normally took thirty. The statue's spirit coalesced in a flurry of dust and stone. It shot Quint a brief, fierce smile, raised its shimmering hands, and screamed, "No talking in the library!"
A shriek blasted through the air.
With a stiff, sweeping clatter, the professor spun and skittered toward the statue, now gleefully smashing every carapace plate and shredding every skein of red sinew within reach. Even dead, it seemed, Strixhaven's onetime professors could not abide the Phyrexian intrusion.
The students exchanged the briefest of glances—relief and terror and surprise all wrapped up together—then flew past the Phyrexianized professor's back, their footsteps obscured by the statue's defiant bellows.
Deeper they delved through the Biblioplex, always deeper, chasing the puffs of light, but they couldn't last. Quint could see everyone faltering under the brutal, unceasing dread. Killian kept flicking courage and hope to the others, the words flashing in Quint's eyes, but the redness of the invasion portals turned the white magic thin. He stumbled and barely kept himself from tripping over a chair. They were on the right path—the pale, downy motes glittered more intensely—but how far they'd have to go, he didn't know and didn't want to imagine—
Then his steps faltered; the other students slowed as well. Fear and fatigue seemed to slough off him like old bandages.
The light was stronger here: not just feebly withstanding the heavy Phyrexian murkiness but throwing it off completely in spots. Pockets of radiance hung between the bookcases, and when Quint passed through one, the freshness of the air itself was almost euphoric after so long plodding through darkness and distress.
#emph[Almost there.]
Revitalized, they had to restrain themselves from rushing headlong. Patch to pale-lit patch they moved, each stretch becoming stronger, broader, brighter. To Quint, it felt like sunlight on unearthed ruins, or old words copied onto clean paper—
The aisles opened up, revealing a circular platform surrounded by a moat. Bobbing at the platform's center was a tangle of light like no spell Quint had ever seen—#emph[and no red sinew] , Quint realized with a thrill. The platform was clean. It had to be the invocation. No other spell he knew of could defy Phyrexia's grasp.
With a sweep of her arms, Rootha crystallized the water into an instant, icy bridge. They dashed across, without a professor in sight.
Which was good, because as Quint neared the invocation, its glow wrapped around him in a soft, comforting blanket, and he could think of almost nothing else.
The tangle wasn't just light, but a prismatic confusion of letters so dazzling they warded off the red gloom entirely. Sentences looped out, sank back down, and reformed with new clauses and phrases. Single words burst like bubbles on the surface. Quint leaned forward, squinting to try and make out individual words—and a shining tendril coiled around his wrist. He almost jumped in shock. He'd thought the words would be intangible constructs of pure magic, but they felt like warm silk threads against his skin.
"It's alive," Quint breathed. The invocation pulsed faintly. His eyes widened. "Did you see—"
"It's #emph[responding] to us?" Rootha asked, and the invocation pulsed again.
"Not just responding, I think." Zimone paced a slow circle around the tangle. #emph[Pulse, pulse, pulse] , it went, in time with her speech. "Do you hear that?"
Rootha looked around. "Hear what?"
"Exactly. #emph[Nothing] ."
Nothing. No screams, no screeches, no scuttling, clicking limbs.
Killian let out a slow breath. "It's protecting us from Phyrexian attention."
They fell silent. A little shiver ran through Quint. Awe or fear, he couldn't tell. The invocation, Professor Vess had said, held the power of Strixhaven's five elder dragons, all meshed and melded together to construct the school and safeguard it from harm. Until now, though, he hadn't realized that to do so, it had become partially #emph[alive] .
"How do we begin?" he asked, half to himself. It was easier to imagine raising a mountain than casting the spell that had built Strixhaven itself.
"Maybe—" Dina began, but before she even finished, the invocation unknotted, rearranging itself into neat segments. Not a tangle, Quint realized, but a five-petaled flower, each petal a seamless blend of two colors. The muddled words reformed into recognizable sentences.
"Five elder dragons," Rootha said, touching a blue and red petal. "Five parts to the spell. I have a hunch we need to follow the elder dragons' example and read all five parts together."
Zimone stood on tiptoe to examine the very heart of the invocation. "Outside, too. You see this conditional? We have to be able to see what we're affecting."
"We likely can't return the way we entered," Quint said. Even the thought of creeping among the sinew-strewn stacks again made him shudder.
Dina's eyes sparkled. "There's more than one way to catch some fresh air. We can always blame it on the Phyrexians."
"Oh, no. You're planning something destructive," Killian said, then added, with awful emphasis, "#emph[again] ."
"Depends on your definition of 'destructive.' Watch my back." Crouching, Dina pulled small pots of unidentifiable goo from her bag and began scrawling symbols across the platform.
Zimone knelt beside her. "I #emph[see] . How are you powering it?"
"With my pests."
"That won't provide enough energy."
"Unless you're volunteering—"
"Let me add to the growth factors." Zimone's fingers, trailing bluish light, dabbed through Dina's scrawls, pocking the muddy-green sigils with spots of brightness. "The imaginary spaces between discrete physical features theoretically extend forever, the same way an infinity of numbers exists between discrete digits. If we apply Thale's Expansion Hypothesis to flip #emph[imaginary] into #emph[real] ~"
The air above Dina and Zimone's rapidly expanding ritual shimmered, blue-green dark blending in a way that ought to have been muddy but instead looked animated. Patterns like twisting ladders twined between symbols gnarled like willow roots. The sense of energy trapped and waiting redoubled.
Then Quint heard #emph[movement] .
He spun and flung out his hand, hot-white symbols striking nearby statuary and scrolls, but even as seven spirit-statues scraped themselves together, the professor leaped from the shadows. Its claws stretched toward Quint; its metal sides opened, and in the center of the gaping ribs a red-beating #emph[thing] glared—
An inky needle shot past Quint and tore through the red-eyed thing. The professor reared back, ribs shuddering wide, and a spike of ice screamed from the moat, piercing its leg with a tremendous crack. Quint's spirits charged the professor in a grinding crash of stone; the professor reeled under their assault. Quint's heart leaped. They only had to last until Zimone and Dina completed their ritual.
Then the professor's ribs stretched open again. With another hiss, Killian speared ink at the professor—too late. The ribs swelled out in ribbons, fast as thought, and cleaved through Quint's spirits. Three of them dissolved; the other four reeled back, torn almost to nothing.
Fire thundered from Rootha as Quint grabbed the invocation and begged it to condense itself. The single thought ramped through his head: above all else, #emph[Phyrexia must not take Strixhaven's heart] . Around him seethed ink darts and crackling ice—at the corners of his eyes flared the white of encouragement and the blaze of fire—as the invocation furled its petals and knotted down to the diameter of a soup tureen, a dinner plate, a teacup, and he grabbed it and rammed it into his pocket, out of sight—
Then, "Done!" Dina cried. She upended her bag of pests—and the professor bulled past Rootha and Killian, sweeping the pests aside as it lunged for Dina and Zimone. But its foot landed in the ritual circle. A shriek tore from its ribs, but it might as well have tried to shout the suns out of the sky. Its flesh was #emph[melting] , shriveling down to leather on bone, and the red-eyed heart flared as the ritual absorbed its life energy.
Then Dina screamed.
#emph[It's too much energy!] Quint realized as Killian scrambled to her side. She writhed, body burning with a dark-green fire—which erupted from her pores and splashed every bookcase nearby.
Quint never knew growth could sound like violence incarnate.
Polished planks splintered into razor-edged branches, lengthening so rapidly they slashed the professor's remnants in half. Leaves erupted with sounds like blades being unsheathed. In the span of a breath, roots thicker than Quint's body churned the floor into pebbles. The thunder of unshackled life drowned Quint's senses as bookcases exploded and tangled into a single tree above the invocation platform—#emph[and kept growing] , boughs forming a perfect helix of steps and leaves thinning into infinity. The tree's crown shoved against the Biblioplex's ceiling, paused—and broke right through. Light and air and masonry showered onto the platform.
Quint's mouth fell open.
Then Dina crumpled.
"Not bad for overloading, huh, Zimone?" she panted as Killian and Rootha helped her up with awed expressions.
Zimone's smile was quiet but fierce. "Not bad at all. But be careful not to go too far from the main trunk. #emph[Theoretically] , the branches have converted from imaginary to physical space—but past a certain length, they become more imaginary than real."
Dina laughed breathlessly. "And don't look down."
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
On the roof at last and wishing he hadn't looked down, Quint braced his hands on his knees, wheezed through his trunk, and thought, #emph[This time—I mean it—I will never again climb anything without expeditionary gear] . After a few moments, he caught enough breath to straighten up and take the Invocation of the Founders from his pocket. In the open air, its petals unfolded, brightened, grew. Behind him came a sound like shuffling cards as Zimone released the imaginary boughs and the tree shrank back to a plausible height.
Killian, still supporting Dina, eyed the invocation. "We can't be disrupted while casting. That could lead to any manner of unintended results. The invocation #emph[probably] won't create a massive swamp creature if that happens, but—"
"No guarantees," Dina cackled weakly.
"No disruptions," Rootha said, "got it," and she swept around the edge of the roof. Ice erupted in her wake, enclosing the rooftop and dampening the red, ruptured sky with its chilly purity.
Then, with only faint hesitation, they each grabbed a petal and began reading.
#figure(image("005_Arcavios: A Radiant Heart/03.png", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none)
Shock ran through Quint. The words were so #emph[prosaic] . The invocation simply described Strixhaven. Here, the invocation stated, the ground had this consistency; it sloped in this manner and contained these types of stones. The sky shuddered as Zimone defined the way the clouds moved and the air ebbed around the school. Rootha told the sun how it heated the school's roofs and lawns and the aquifers and springs where to flow. Dina grinned as she chronicled the flora: where they grew, how they died, the new life they fed. Through it all twined Killian's portion, cajoling the separate parts together as their words rose in pillars of light. They told Strixhaven what it was, and in that telling, there was no room for Phyrexia.
And Strixhaven listened. Even expecting it, the sight almost made Quint stutter. The portals overhead puckered as they fought being described out of reality, but they could no more resist than could water, wind, fire, earth, and light. Five voices rose as the invocation neared completion, and the pillars flared brighter—
The glacial wall shattered.
The explosion knocked Quint to his knees, and Zimone, Dina, and Killian flung themselves against the rooftop, hands still grasping their petals, mouths still reciting. But Rootha faced the person standing at the roof's edge, an elegant figure despite the way its body appeared to be one giant mechanical heart.
"Rootha," the figure sighed. "You always found flaws in your work that no one else could see~ but somehow, you missed the weakness in your ice. I'm disappointed."
#emph[Don't!] Quint tried to say; but he couldn't utter a sound without interrupting the invocation.
Rootha's voice faltered. "<NAME>?"
Her petal went dark.
The other students read frantically, trying to make up for Rootha as she flung flare after flare, spike after spike of ice, but Nassari evaded everything. Harsh words slithered from their lips—criticism without critique—and Rootha flinched and paled with each barb. The light overhead dimmed. The invocation was failing—
But Quint smiled.
Odd, how excited he felt. Almost like he had when finding the lost city of Zantafar. There was that same sense of bridging the lost knowledge of the past with the scholars of the future.
In this case, he was ensuring Strixhaven #emph[had] a future.
Quint took a single moment to bask in this school and the glory of its existence. Then he reached over and grabbed Rootha's petal.
The others' eyes widened, but he couldn't spare them a thought, because every scrap of himself was focused on the invocation. It was impossible to speak two parts at once. Instead, he poured magic directly into Rootha's petal. The earth was his voice; the seas and suns were his bones; he powered the invocation with his life alone. The pillars of light flared brighter than ever. Even as his life drained into the invocation, he thought, #emph[I've never seen anything so magnificent.]
A shock ripped through his core.
Quint gasped. The invocation~ ? No. This light shone from #emph[within] . Quint screamed as it tore through muscle and bone in savage coils. He tried to reach for his fellow students—his friends—but the invocation roared in response. The twin lights whipped about in fervent dance, cleaving through stone and steel alike. Dean Nassari was flung from the rooftop; the Biblioplex buckled and broke, crashing down as the light held Quint captive in midair; the stones of Strixhaven's outlying buildings came apart like sugar cubes in tea; the portals crumpled; the Invasion Tree's branches thrashed as the sky tried to close in on itself. And through it all, Quint #emph[burned] —
Amid the conflagration, Quint's thoughts raced to Will and Rowan. His friends, too. He hadn't seen them since the invasion began. As the burning grew unbearable, he could only hope they were all right.
#figure(image("005_Arcavios: A Radiant Heart/04.png", width: 100%), caption: [Art by: Eelis Kyttanen], supplement: none, numbering: none)
The light swallowed Quint whole.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Strixhaven's students emerged from the dormitory to find not a siege—not their former professors, ready to whisk them to compleation—but ruin. Some cried, but not for long, because the sky still bulged with invasion portals trying to force back through, and metallic figures still gleamed in the distance. Under Liliana's instructions, they built up what defenses they could, dug through rubble, pulled out survivors, and tried to identify any professors they uncovered, as Witherbloom students tended to the wounded.
"A poor effort, Merrow," Liliana said, examining the contents of a cauldron. "The blood-restoration potion requires #emph[powdered] blackcrest pods. You haven't even sieved out the hulls. You're focusing too much on the obvious injuries, Frena. That poor boy's going to suffocate long before the arm you're splinting mends. What's this? The #emph[Sorlian Theorem] ? Really, Rinne? She's an owlin, not a loxodon, the Sorlian Theorem is hardly applicable~"
There was a crash; then multiple voices shouted, "They're down here!" Liliana had to force herself not to run. She was the one who'd sent them to find the invocation; she was the reason they were now injured, possibly dead. They had granted Strixhaven this reprieve, however brief. She owed them her attention and much more besides~
By the time she reached the remains of the Biblioplex, the students working there had dug out the injured. Despite her stern facade, Liliana's heart beat rapid as a drum as she looked over Dina, Killian, Zimone, and Rootha. Broken bones, contusions, gaping wounds, no doubt a myriad of interesting infections—it would be faster to consider the injuries they #emph[didn't ] have.
And she was impressed. Even bleeding from multiple gashes, Killian was staggering through the rubble. Ink snarled around him as he tore through stonework and carapace alike.
Trust Embrose's son to be a nuisance. "Sedate him," Liliana said, and a Witherbloom student descended with an ominously smoking potion.
But before the student could get within force-feeding distance, Killian yelled, "#emph[Father!] "
Liliana drew in a sharp breath and peered into the hole Killian had excavated. There was Embrose, dusty and disheveled, bloody and blemished, surrounded by the remnants of a number of Phyrexianized professors—but alive, and himself.
"Well, Lu," Liliana said.
"Well, Vess," he returned, curt and dignified as ever. His attention shifted to Killian, standing stunned at the edge of the hole. "Help me up."
Liliana beckoned another student over. "Lend a hand to—"
"I don't need help," Embrose interrupted, but when Killian reached down, his father grasped his hand.
Liliana turned aside, her own heart twisting uncomfortably at the look on Killian's face. The others still needed her attention, anyway. Zimone had, sensibly, not tried to stand, with her broken leg and eyes glassy from sedation. Still, she grasped Liliana's arm and croaked, "Nimiroti~ you have to save her~"
As gently as possible, Liliana unhooked Zimone's hand. She couldn't spare anyone to check on Zimone's grandmother. And Rootha—one look told her more than enough. The girl wasn't even attempting to move. She simply lay on her stretcher, broken as a child's doll, and stared blankly at the sky.
"Not bad for students, right?" Dina croaked.
Liliana glanced at the fourth stretcher. "I'd suggest there's no way you could have done #emph[worse] ."
Dina shrugged, then winced. What few patches of skin remained unbloodied bore large, painful-looking bruises. "Now we have plenty of room to remodel."
Liliana shook her head—then straightened, eyes widening. "Where's Quint?"
A cloud passed over Dina's face. "We don't know. There was a burst of light, and he just—disappeared."
#emph[Dead] , Liliana thought; then she frowned as Dina's words echoed in her mind. #emph[Dead . . . or a spark? Kasmina suggested there was an ember among them, and it's clearly not one of these four. If Quint's spark ignited, he could still be alive . . .]
Dina was saying something. Liliana shook her head. "What was that?"
"We should expand the swamp. I've always thought it needed to be bigger."
Liliana looked up. She looked at the sky, brightness warring with murky redness, and the pulsing, squirming, black-edged scars that used to be invasion portals. She looked at the branches puncturing the ground, bent and battered but still standing. She looked at her colleagues' bodies, some crushed by collapsing buildings, others with their metal parts torn away by the incomplete invocation, and knew that more were still alive, and they would never stop. She looked at the ruins of #emph[her] home, #emph[her] sanctuary, #emph[her] respite disrupted by Phyrexia.
Then she stared up at the portals like open wounds, with the maggots of the Invasion Tree's branches already breaking through the invocation's incomplete banishment.
Liliana's hands dropped to her sides. Her fingers opened. Light spilled from her palms: not the murky blood-color thrashing overhead, or even the clear brightness that fought against it. This was #emph[her] light, dim and grim. It sank like water into the ground. Far below, in the corpse-strewn ruins of the school—in the catacombs where ancient professors moldered—beneath even that, where the bones of unnamed, unknown thousands leached into the bedrock—her magic found bodies and gave them new life.
Skeletons and zombies erupted from the ground, and students screamed and scrambled to get out of their way. Empty sockets smoldered with purple fire as Lilliana's army arranged itself around the rubble of Strixhaven in obedience to her silent command, forming a barrier to hold off Phyrexia. It would stand as long as she had breath in her body.
"Remodeling will have to wait," Liliana said.
|
|
https://github.com/darkMatter781x/OverUnderNotebook | https://raw.githubusercontent.com/darkMatter781x/OverUnderNotebook/main/entries/pure_pursuit/pure_pursuit.typ | typst | #import "/packages.typ": notebookinator, gentle-clues
#import notebookinator: *
#import themes.radial.components: *
#import gentle-clues: *
#import "/util.typ": qrlink
#show: create-body-entry.with(
title: "Concept: Pure Pursuit",
type: "concept",
date: datetime(year: 2024, month: 3, day: 6),
author: "<NAME>",
)
= Pure Pursuit
Pure pursuit is an algorithm for making a robot follow predetermined curves. To
do this, the algorithm takes a list of points that form our curve and some
lookahead distance. Then, the robot tries to follow the curve by steering
towards the point furthest along the path but within a circle whose radius is
the lookahead distance.
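To make "the point furthest along the path but within the circle" concrete, here
is a minimal sketch of the usual segment-circle intersection, written in Typst's
scripting syntax purely for illustration (the name `lookahead-point` is ours,
and this is not our actual robot code):
```typst
// Find the lookahead point on the segment p1 -> p2 for a robot centered at c
// with lookahead radius r. Returns none if the circle misses the segment.
#let lookahead-point(p1, p2, c, r) = {
  let d = (p2.at(0) - p1.at(0), p2.at(1) - p1.at(1)) // segment direction
  let f = (p1.at(0) - c.at(0), p1.at(1) - c.at(1)) // center -> segment start
  let a = d.at(0) * d.at(0) + d.at(1) * d.at(1)
  let b = 2 * (f.at(0) * d.at(0) + f.at(1) * d.at(1))
  let disc = b * b - 4 * a * (f.at(0) * f.at(0) + f.at(1) * f.at(1) - r * r)
  if disc < 0 { return none } // the circle does not reach this segment's line
  let t = (-b + calc.sqrt(disc)) / (2 * a) // larger root = furthest along
  if t < 0 or t > 1 { return none } // intersection lies off the segment
  (p1.at(0) + t * d.at(0), p1.at(1) + t * d.at(1)) // the steering target
}
```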
= Lookahead
== Analogy
Imagine you're mountain biking up a really annoying rocky uphill. You can't just
go straight, because the rocks will push you around. If you attempt to look
right in front of your front tire you'll end up doing a wobbly zigzag up the
trail. But if you instead focus on a point in the distance, then you're not
correcting for little changes caused by the rocks, and thus you will take a much
smoother path.
Same thing goes for robots, but for robots the “rocks” are anything from field
tile variance, to hot motors, to a low battery.
== So just make it big?
In the analogy, we saw that a small lookahead can be problematic, but what about
a large lookahead?
// excalidraw: https://excalidraw.com/#json=epm3bvKs1YoqG6nPTF_6t,njizcY1d2WAajrVdqjtZww
#figure(
image("./lookahead.svg", width: 100%),
caption: [
Example illustration of the effect of lookahead on the actual path the robot
takes.
],
)
Here we see that any amount of lookahead causes the robot to cut the corners of
the path, so it never exactly follows our path. As lookahead increases, the
robot follows the path less and less closely, until eventually it just goes
straight. This illustrates (quite literally) that there is a balance to be
struck between too much lookahead and too little.
But how do we get the path to follow in the first place? This is where Bézier
curves come in.
= Bézier Curve
A Bézier curve is a method of taking a list of points and forming a curve
between them.
#figure(
stack(
dir: ltr, // left-to-right
spacing: 2mm, // space between contents
{
set align(horizon)
box(image("./cubic-Bézier.svg", width: 70%), height: 37%, clip: true)
},
{
set align(horizon)
figure(
qrlink("https://www.desmos.com/calculator/mm72eduq5w", size: 0.25em),
caption: [
interactive desmos
],
)
},
),
caption: [
A cubic Bézier
],
)
== Linear Interpolation
A Bézier curve is a method of taking a list of points to form a curve. Its
simplest form uses an algorithm known as linear interpolation or lerp for short.
A lerp takes a time $t$ between 0 and 1 and two values $a$ and $b$ to lerp
between. At $t=0$, the lerp function's output should equal $a$ and at $t=1$, the
output is $b$. But what happens between $t=0$ and $t=1$? In between these two
times, we linearly interpolate between $a$ and $b$. For example at $t=0.1$, the
output is expected to be 90% $a$ and 10% $b$.
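As a quick sketch (in Typst's own scripting syntax, purely for illustration),
lerp is a one-liner:
```typst
// Linearly interpolate between a and b at time t in [0, 1].
#let lerp(a, b, t) = a + (b - a) * t
// lerp(0, 10, 0.1) == 1: the output is 90% a and 10% b.
```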
#grid(
columns: 2,
gutter: 2mm,
[
The awesome thing about lerps is that we are not limited to normal numbers for $a$ and $b$,
we can also use points. For example, if you take 2 points and find all points
output by our function lerp over the entire domain of $t$, you will have found
all points on the line between $a$ and $b$. This is what we would call a linear
Bézier curve in the world Béziers.
],
figure({
image("./lerp.png", width: 100%)
}, caption: [
A linear Bézier curve. $t$ is represented by color
]),
)
== Quadratic Bézier
#grid(
columns: 2,
gutter: 2mm,
figure({
image("./quadratic.png", width: 100%)
}, caption: [
A quadratic Bézier curve. $t$ is represented by color
]),
[
Hey, that's not a curve, that's a line!! Well, yes, but this is the simplest form
of a Bézier curve; we can make it more interesting by adding more points. We
will start with a quadratic Bézier curve, which requires 3 points. To do this we
make two lerps, one between $a$ and $b$ and one between $b$ and $c$. We then
lerp between the outputs of those two lerps to form our quadratic Bézier curve.
],
)
== Recursive Bézier
OK, but what if we want to make a Bézier curve with more than 3 points? Well, we
can form a recursive process to create an n-degree Bézier. In our quadratic
(2-degree) Bézier example, we took two linear (1-degree) Béziers and lerped
between them. We can do the same thing for a cubic (3-degree) Bézier, but
instead of lerping between two linear Béziers (degree 1), we lerp between two
quadratic Béziers (degree 2).
Hopefully this helps you to see how a general recursive n-degree Bézier curve
can be formed. But, just to drive the point home: to create an n-degree Bézier
curve, we take two (n-1)-degree Bézier curves and lerp between them. And if you
have a 0-degree Bézier curve, you just have a point. A minimal sketch of this
recursion is shown below.
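Here is that recursion sketched in Typst's scripting syntax (the names
`lerp-pt` and `bezier` are ours, for illustration only):
```typst
// Lerp each coordinate of two points (points are (x, y) arrays).
#let lerp-pt(p, q, t) = range(p.len()).map(i => p.at(i) + (q.at(i) - p.at(i)) * t)
// Evaluate the Bézier curve defined by n control points at time t:
// an n-point Bézier reduces to an (n-1)-point Bézier via pairwise lerps.
#let bezier(points, t) = {
  if points.len() == 1 { return points.first() }
  let reduced = range(points.len() - 1).map(i => lerp-pt(points.at(i), points.at(i + 1), t))
  bezier(reduced, t)
}
// bezier(((0, 0), (1, 2), (2, 0)), 0.5) gives (1, 1) for this quadratic.
```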
#grid(
columns: 2,
gutter: 2mm,
[
= Tooling
Now we have a way to form a curve, but we don't want to manually generate each
path. Luckily we have amazing tooling like #link("path.jerryio.com"). This tool
allows us to visualize, create, and modify a path. The path is then exported as
a list of points which we can then put in our code. What is this path formed by?
Splines!
Splines are simply multiple Bézier curves put together. There are many
special types of splines, but that's another subject that we will not be getting
into. In the case of #link("path.jerryio.com"), we use cubic Bézier curves and
lines/linear Bézier curves to form the spline.
],
figure(
{
image("./path.jerryio.png", width: 80%)
},
caption: [
A screenshot of path.jerryio.com showing a path for an old six ball. The intent
of this path is to remove the triball in the matchload zone.
],
),
)
|
|
https://github.com/barddust/Kuafu | https://raw.githubusercontent.com/barddust/Kuafu/main/src/Logic/intro.typ | typst | = Introduction
In mathematical logic, we are mainly concerned with three questions:
+ What does it mean for one sentence to "follow logically" from certain others? (What is *logical reasoning*?)
+ If a sentence does follow logically from certain others, what methods of proof might be necessary to establish this fact? (How do we prove that a sentence *truly* follows logically from others?)
+ Is there a gap between what we can prove in an axiomatic system and what is true in the system? (Does _proved_ mean _true_ in an axiomatic system?)
Sentential logic is too simple and cannot accommodate deduction in its more general sense; first-order logic, on the other hand, is mathematically compatible with deduction.
#v(3em)
See also:
- _A Mathematical Introduction to Logic_, by <NAME>.
- _Analysis I_, by <NAME>. Chapter A Appendix: the basics of mathematical logic.
|
|
https://github.com/8LWXpg/jupyter2typst | https://raw.githubusercontent.com/8LWXpg/jupyter2typst/master/convert_list.md | markdown | MIT License | # KaTeX Convert List
Full list in [KaTeX](https://katex.org/docs/support_table).
## TODOs
- affect following - 18
- begin - 4
- binary - 6
- no alternative - 12
- not sure - 9
- spacing - 7
- scripting - 14
- overlap - 8
- TeX - 3
## References
Entries too long to fit in the tables:
- `boxed`, `colorbox`, `fbox`, `fcolorbox`:
`#box(inset: (left: 3pt, right: 3pt), outset: (top: 3pt, bottom: 3pt))`
- `rule`: `$1` is optional
`#box(inset: (bottom: $1), box(fill: black, width: $2, height: $3))`
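For instance, a hypothetical `\rule[1pt]{2pt}{3pt}` would map, by direct substitution into the pattern above, to:

```typst
#box(inset: (bottom: 1pt), box(fill: black, width: 2pt, height: 3pt))
```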
## Environments
Details at [KaTeX](https://katex.org/docs/supported.html#environments).

Content inside the environment is denoted by `{}`.
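For instance, a hypothetical conversion of a 2×2 `pmatrix` (matching the table below):

```typst
// LaTeX: \begin{pmatrix} a & b \\ c & d \end{pmatrix}
$ mat(delim: "(", a, b; c, d) $
```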
| LaTeX | Typst |
| ------------- | ------------------------------- |
| `align` | `$${}$$` |
| `align*` | `$${}$$` |
| `aligned` | `$${}$$` |
| `alignat` | `$${}$$` |
| `alignat*` | `$${}$$` |
| `alignedat` | `$${}$$` |
| `array` | `mat(delim: #none, {})` |
| `Bmatrix` | `mat(delim: "{", {})` |
| `Bmatrix*` | `mat(delim: "{", {})` |
| `bmatrix` | `mat(delim: "[", {})` |
| `bmatrix*` | `mat(delim: "[", {})` |
| `cases` | `cases({})` |
| `CD` | TODO#not sure |
| `darray` | `mat(delim: #none, {})` |
| `dcases` | `cases({})` |
| `equation` | `$${}$$` |
| `equation*` | `$${}$$` |
| `gather` | `$${}$$` |
| `gathered` | `$${}$$` |
| `matrix` | `mat(delim: #none, {})` |
| `matrix*` | `mat(delim: #none, {})` |
| `pmatrix` | `mat(delim: "(", {})` |
| `pmatrix*` | `mat(delim: "(", {})` |
| `rcases` | `cases(reverse: #true, {})` |
| `smallmatrix` | `inline(mat(delim: #none, {}))` |
| `split` | `$${}$$` |
| `Vmatrix` | `mat(delim: "\|\|", {})` |
| `Vmatrix*` | `mat(delim: "\|\|", {})` |
| `vmatrix` | `mat(delim: "\|", {})` |
| `vmatrix*` | `mat(delim: "\|", {})` |
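For example, a `bmatrix` environment maps onto `mat` with a `[` delimiter; this
is a hand-written sketch of the correspondence, not converter output:
```typ
// LaTeX input: \begin{bmatrix} a & b \\ c & d \end{bmatrix}
$ mat(delim: "[", a, b; c, d) $
```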
## Symbols
| LaTeX | Typst |
| -------- | ---------------- |
| `!` | `!` |
| `\!` | `#h(-1em/6)` |
| `#` | TODO#scripting |
| `\#` | `\#` |
| `%` | `//` |
| `\%` | `%` |
| `&` | `&` |
| `\&` | `\&` |
| `'` | `'` |
| `\'` | `acute($1)` |
| `(` | `(` |
| `)` | `)` |
| `\(…\)` | TODO#not sure |
| `\` | `space` |
| `\"` | `dot.double($1)` |
| `\$` | `$` |
| `\,` | `space.sixth` |
| `\:` | `#h(2em/9)` |
| `\;` | `#h(5em/18)` |
| `_` | `_` |
| `\_` | `\_` |
| `` \` `` | `grave($1)` |
| `<` | `<` |
| `\=` | `macron($1)` |
| `>` | `>` |
| `\>` | `#h(2em/9)` |
| `[` | `[` |
| `]` | `]` |
| `{}` | ignored |
| `\{` | `{` |
| `\}` | `}` |
| `\|` | `\|` |
| `\\|` | `\|\|` |
| `~` | `space.nobreak` |
| `\~` | `tilde($1)` |
| `^` | `^` |
| `\^` | `hat($1)` |
## A
| LaTeX | Typst |
| ------------------- | -------------------- |
| `\AA` | `circle(A)` |
| `\aa` | `circle(a)` |
| `\above` | TODO#binary |
| `\acute` | `acute($1)` |
| `\AE` | `Æ` |
| `\ae` | `æ` |
| `\alef` | `alef` |
| `\alefsym` | `alef` |
| `\aleph` | `aleph` |
| `\allowbreak` | TODO#not sure |
| `\Alpha` | `Alpha` |
| `\alpha` | `alpha` |
| `\amalg` | `product.co` |
| `\And` | `\&` |
| `\angl` | no alternative |
| `\angln` | no alternative |
| `\angle` | `angle` |
| `\approx` | `approx` |
| `\approxeq` | `approx.eq` |
| `\approxcolon` | `approx:` |
| `\approxcoloncolon` | `approx::` |
| `\arccos` | `arccos` |
| `\arcctg` | `#math.op("arcctg")` |
| `\arcsin` | `arcsin` |
| `\arctan` | `arctan` |
| `\arg` | `arg` |
| `\argmax` | `arg max` |
| `\argmin` | `arg min` |
| `\arraystretch` | TODO#begin |
| `\ast` | `*` |
| `\asymp` | `≍` |
| `\atop` | TODO#binary |
## B
| LaTeX | Typst |
| ----------------------- | --------------------------- |
| `\backepsilon` | `in.rev.small` |
| `\backprime` | `prime.rev` |
| `\backsim` | `tilde.rev` |
| `\backsimeq` | `tilde.eq.rev` |
| `\backslash` | `\\` |
| `\bar` | `macron($1)` |
| `\barwedge` | `⊼` |
| `\Bbb` | `bb($1)` |
| `\bcancel` | `cancel(inverted: #true)` |
| `\begin` | see [begins](#Environments) |
| `\begingroup` | ignored |
| `\Beta`                 | `Beta`                      |
| `\beta` | `beta` |
| `\beth` | `beth` |
| `\between` | `≬` |
| `\bf` | TODO#affect following |
| `\big` and its variants | TODO#font |
| `\bigcap` | `sect.big` |
| `\bigcirc` | `circle.stroked.big` |
| `\bigcup` | `union.big` |
| `\bigdot` | `dot.circle.big` |
| `\bigplus` | `plus.circle.big` |
| `\bigtimes` | `times.circle.big` |
| `\bigsqcup` | `union.square.big` |
| `\bigstar` | `star.stroked` |
| `\bigtriangledown` | `triangle.stroked.b` |
| `\bigtriangleup` | `triangle.stroked.t` |
| `\biguplus` | `union.plus.big` |
| `\bigvee` | `or.big` |
| `\bigwedge` | `and.big` |
| `\binom` | `binom($1, $2)` |
| `\blacklozenge` | `lozenge.filled` |
| `\blacksquare` | `square.filled` |
| `\blacktriangle` | `triangle.filled.t` |
| `\blacktriangledown` | `triangle.filled.b` |
| `\blacktriangleleft` | `triangle.filled.l` |
| `\blacktriangleright` | `triangle.filled.r` |
| `\bm` | `bold($1)` |
| `\bmod` | `mod` |
| `\bold` | `bold($1)` |
| `\boldsymbol` | `bold($1)` |
| `\bot` | `bot` |
| `\bowtie` | `⋈` |
| `\Box` | `square.stroked` |
| `\boxdot` | `dot.square` |
| `\boxed` | `#box(stroke: 0.5pt)[$$1$]` |
| `\boxminus` | `minus.square` |
| `\boxplus` | `plus.square` |
| `\boxtimes` | `times.square` |
| `\Bra` | `lr(angle.l $1 \|)` |
| `\bra` | `lr(angle.l $1 \|)` |
| `\Braket` | `lr(angle.l $1 angle.r)` |
| `\braket` | `lr(angle.l $1 angle.r)` |
| `\brace` | TODO#binary |
| `\brack` | TODO#binary |
| `\breve` | `breve($1)` |
| `\bull` | `circle.filled.small` |
| `\bullet` | `circle.filled.small` |
| `\Bumpeq` | `≎` |
| `\bumpeq` | `≏` |
## C
| LaTeX | Typst |
| ------------------- | ----------------------- |
| `\cal` | TODO#affect following |
| `\cancel` | `cancel($1)` |
| `\Cap` | `sect.double` |
| `\cap` | `sect` |
| `\cdot` | `dot.op` |
| `\cdotp` | `dot.op` |
| `\cdots` | `dots.h.c` |
| `\ce` | not supported in ipynb |
| `\centerdot` | `dot.op` |
| `\cfrac` | `display(frac($1, $2))` |
| `\char` | `\u{$1}` in hex |
| `\check` | `caron($1)` |
| `\ch` | not supported in ipynb |
| `\Chi` | `Chi` |
| `\chi` | `chi` |
| `\choose` | TODO#binary |
| `\circ` | `compose` |
| `\circeq` | `≗` |
| `\circlearrowleft` | `arrow.ccw` |
| `\circlearrowright` | `arrow.cw` |
| `\circledast` | `ast.circle` |
| `\circledcirc` | `circle.nested` |
| `\circleddash` | `dash.circle` |
| `\circledR` | `®` |
| `\circledS` | `Ⓢ` |
| `\clubs` | `suit.club` |
| `\clubsuit` | `suit.club` |
| `\cnums` | `CC` |
| `\colon` | `colon` |
| `\Colonapprox` | `::approx` |
| `\colonapprox` | `:approx` |
| `\coloncolon` | `::` |
| `\coloncolonapprox` | `::approx` |
| `\coloncolonequals` | `::=` |
| `\coloncolonminus` | `"::−"` |
| `\coloncolonsim` | `::tilde.op` |
| `\Coloneq` | `"::−"` |
| `\coloneq` | `":−"` |
| `\colonequals` | `:=` |
| `\Coloneqq` | `::=` |
| `\coloneqq` | `:=` |
| `\colonminus` | `":−"` |
| `\Colonsim` | `::tilde.op` |
| `\colonsim` | `:tilde.op` |
| `\color` | TODO#affect following |
| `\colorbox` | `#box(fill: $1)[$2]` |
| `\complement` | `complement` |
| `\Complex` | `CC` |
| `\cong` | `tilde.equiv` |
| `\coprod` | `product.co` |
| `\copyright` | `copyright` |
| `\cos` | `cos` |
| `\cosec` | `#math.op("cosec")` |
| `\cosh` | `cosh` |
| `\cot` | `cot` |
| `\cotg` | `#math.op("cotg")` |
| `\coth` | `coth` |
| `\cr` | `;` |
| `\csc` | `csc` |
| `\ctg` | `ctg` |
| `\cth` | `#math.op("cth")` |
| `\Cup` | `union.double` |
| `\cup` | `union` |
| `\curlyeqprec` | `eq.prec` |
| `\curlyeqsucc` | `eq.succ` |
| `\curlyvee` | `or.curly` |
| `\curlywedge` | `and.curly` |
| `\curvearrowleft` | `arrow.ccw.half` |
| `\curvearrowright` | `arrow.cw.half` |
## D
| LaTeX | Typst |
| ------------------- | ------------------------ |
| `\dag` | `dagger` |
| `\Dagger` | `dagger.double` |
| `\dagger` | `dagger` |
| `\daleth` | `ℸ` |
| `\Darr` | `arrow.b.double` |
| `\dArr` | `arrow.b.double` |
| `\darr` | `arrow.b` |
| `\dashleftarrow` | `arrow.l.dash` |
| `\dashrightarrow` | `arrow.r.dash` |
| `\dashv` | `tack.l` |
| `\dbinom` | `display(binom($1, $2))` |
| `\dbcolon` | `::` |
| `\ddag` | `dagger.double` |
| `\ddagger` | `dagger.double` |
| `\ddot` | `dot.double($1)` |
| `\ddots` | `dots.down` |
| `\def` | TODO#scripting |
| `\deg` | `deg` |
| `\degree` | `degree` |
| `\Delta` | `Delta` |
| `\delta` | `delta` |
| `\det` | `det` |
| `\digamma` | `ϝ` |
| `\dfrac` | `display(frac($1, $2))` |
| `\diagdown` | `╲` |
| `\diagup` | `╱` |
| `\Diamond` | `lozenge.stroked` |
| `\diamond` | `diamond.stroked.small` |
| `\diamonds` | `♢` |
| `\diamondsuit` | `♢` |
| `\dim` | `dim` |
| `\displaystyle` | `display($1)` |
| `\div` | `div` |
| `\divideontimes` | `times.div` |
| `\dot` | `dot($1)` |
| `\Doteq` | `≑` |
| `\doteq` | `≐` |
| `\doteqdot` | `≑` |
| `\dotplus` | `plus.dot` |
| `\dots` | `dots.h.c` |
| `\dotsb` | `dots.h.c` |
| `\dotsc` | `dots.h.c` |
| `\dotsi` | `dots.h.c` |
| `\dotsm` | `dots.h.c` |
| `\dotso` | `...` |
| `\doublebarwedge` | `⩞` |
| `\doublecap` | `sect.double` |
| `\doublecup` | `union.double` |
| `\Downarrow` | `arrow.b.double` |
| `\downarrow` | `arrow.b` |
| `\downdownarrows` | `arrows.bb` |
| `\downharpoonleft` | `harpoon.bl` |
| `\downharpoonright` | `harpoon.br` |
## E
| LaTeX | Typst |
| ------------------- | --------------------------- |
| `\edef` | TODO#scripting |
| `\ell` | `ell` |
| `\empty` | `emptyset` |
| `\emptyset` | `emptyset` |
| `\end` | see [begins](#Environments) |
| `\endgroup` | TODO#scripting |
| `\enspace` | `space.en` |
| `\Epsilon` | `Epsilon` |
| `\epsilon` | `epsilon.alt` |
| `\eqcirc` | `≖` |
| `\Eqcolon` | `"−::"` |
| `\eqcolon` | `dash.colon` |
| `\Eqqcolon` | `"=::"` |
| `\eqqcolon` | `=:` |
| `\eqsim` | `minus.tilde` |
| `\eqslantgtr` | `⪖` |
| `\eqslantless` | `⪕` |
| `\equalscolon` | `=:` |
| `\equalscoloncolon` | `"=::"` |
| `\equiv` | `equiv` |
| `\Eta` | `Eta` |
| `\eta` | `eta` |
| `\eth` | `ð` |
| `\exist` | `exists` |
| `\exists` | `exists` |
| `\exp` | `exp` |
| `\expandafter` | TODO#scripting |
## F
| LaTeX | Typst |
| ---------------- | -------------------------------- |
| `\fallingdotseq` | `≒` |
| `\fbox` | `#box(stroke: 0.5pt)[$1]` |
| `\fcolorbox` | `#box(stroke: $1, fill: $2)[$3]` |
| `\Finv` | `Ⅎ` |
| `\flat` | `♭` |
| `\footnotesize` | TODO#affect following |
| `\forall` | `forall` |
| `\frac` | `frac($1, $2)` |
| `\frak` | `frak($1)` |
| `\frown` | `⌢` |
| `\futurelet` | TODO#scripting |
## G
| LaTeX | Typst |
| ------------ | -------------------- |
| `\Game` | `⅁` |
| `\Gamma` | `Gamma` |
| `\gamma` | `gamma` |
| `\gcd` | `gcd` |
| `\ge` | `>=` |
| `\genfrac` | TODO#not sure |
| `\geq` | `>=` |
| `\geqq` | `ge.equiv` |
| `\geqslant` | `gt.eq.slant` |
| `\gets` | `arrow.l` |
| `\gg` | `>>` |
| `\ggg` | `>>>` |
| `\gggtr` | `>>>` |
| `\gimel` | `gimel` |
| `\global` | TODO#scripting |
| `\gnapprox` | `⪊` |
| `\gneq` | `⪈` |
| `\gneqq` | `gt.nequiv` |
| `\gnsim` | `gt.ntilde` |
| `\grave` | `grave($1)` |
| `\gt` | `>` |
| `\gtapprox`   | `⪆`                  |
| `\gtreqless`  | `gt.eq.lt`           |
| `\gtreqqless` | `⪌`                  |
| `\gtrless`    | `gt.lt`              |
| `\gtrsim`     | `gt.tilde`           |
| `\gvertneqq`  | not found in unicode |
## H
| LaTeX | Typst |
| ----------------------------- | ------------------------------- |
| `\H` | `acute.double($1)` |
| `\Harr` | `<=>` |
| `\hArr` | `<=>` |
| `\harr` | `<->` |
| `\hat` | `hat($1)` |
| `\hbar` | `planck.reduce` |
| `\hbox` | `$1` |
| `\hdashline` | TODO#begin |
| `\hearts` | `♡` |
| `\heartsuit` | `♡` |
| `\hline` | TODO#begin |
| `\hom` | `hom` |
| `\hookleftarrow` | `arrow.l.hook` |
| `\hookrightarrow` | `arrow.r.hook` |
| `\hphantom` | `#box(height: 0pt, hide[$$1$])` |
| `\href` | not supported in ipynb |
| `\hskip` | TODO#TeX |
| `\hslash` | `planck.reduce` |
| `\hspace` | `#h($1)` |
| `\htmlClass` and its variants | not supported in ipynb |
| `\huge` | TODO#affect following |
| `\Huge` | TODO#affect following |
## I
| LaTeX | Typst |
| ------------------ | ------------------------------------------ |
| `\i` | `dotless.i` |
| `\iff` | `<==>` |
| `\iiint` | `integral.triple` |
| `\iint` | `integral.double` |
| `\Im` | `Im` |
| `\image` | `Im` |
| `\imageof` | `⊷` |
| `\imath` | `dotless.i` |
| `\impliedby` | `<==` |
| `\implies` | `==>` |
| `\in` | `in` |
| `\includegraphics` | not supported in ipynb |
| `\inf` | `inf` |
| `\infin` | `infinity` |
| `\infty` | `infinity` |
| `\injlim` | `#math.op("inj\u{2009}lim", limits: true)` |
| `\int` | `integral` |
| `\intercal` | `⊺` |
| `\intop` | `integral` |
| `\Iota` | `Iota` |
| `\iota` | `iota` |
| `\isin` | `in` |
| `\it` | TODO#affect following |
## JK
| LaTeX | Typst |
| -------- | ------------------- |
| `\j` | `dotless.j` |
| `\jmath` | `dotless.j` |
| `\Join` | `⋈` |
| `\Kappa` | `Kappa` |
| `\kappa` | `kappa` |
| `\KaTeX` | `"KaTeX"` |
| `\ker` | `ker` |
| `\kern` | TODO#TeX |
| `\Ket` | `lr(\| $1 angle.r)` |
| `\ket` | `lr(\| $1 angle.r)` |
## L
| LaTeX | Typst |
| ------------------------- | --------------------- |
| `\Lambda` | `Lambda` |
| `\lambda` | `lambda` |
| `\land` | `and` |
| `\lang` | `angle.l` |
| `\langle` | `angle.l` |
| `\Larr` | `arrow.l.double` |
| `\lArr` | `arrow.l.double` |
| `\larr` | `<-` |
| `\large` and its variants | TODO#affect following |
| `\LaTeX` | `"LaTeX"` |
| `\lBrace` | `⦃` |
| `\lbrace` | `{` |
| `\lbrack` | `[` |
| `\lceil` | `⌈` |
| `\ldotp` | `.` |
| `\ldots` | `...` |
| `\le` | `<=` |
| `\leadsto` | `arrow.r.squiggly` |
| `\left` | `lr($1 ...)` |
| `\leftarrow` | `<-` |
| `\Leftarrow` | `arrow.l.double` |
| `\leftarrowtail` | `<-<` |
| `\leftharpoondown` | `harpoon.lb` |
| `\leftharpoonup` | `harpoon.lt` |
| `\leftleftarrows` | `arrows.ll` |
| `\Leftrightarrow` | `<=>` |
| `\leftrightarrow` | `<->` |
| `\leftrightarrows` | `arrows.lr` |
| `\leftrightharpoons` | `harpoons.ltrb` |
| `\leftrightsquigarrow` | `arrow.l.r.wave` |
| `\leftthreetimes` | `times.three.l` |
| `\leq` | `<=` |
| `\leqq` | `lt.equiv` |
| `\leqslant` | `lt.eq.slant` |
| `\lessapprox` | `⪅` |
| `\lessdot` | `lt.dot` |
| `\lesseqgtr` | `lt.eq.gt` |
| `\lesseqqgtr` | `⪋` |
| `\lessgtr` | `lt.gt` |
| `\lesssim` | `lt.tilde` |
| `\let` | TODO#scripting |
| `\lfloor` | `⌊` |
| `\lg` | `lg` |
| `\lgroup` | `⟮` |
| `\lhd` | `lt.tri` |
| `\lim` | `lim` |
| `\liminf` | `liminf` |
| `\limits` | ignored |
| `\limsup` | `limsup` |
| `\ll` | `<<` |
| `\llap` | TODO#overlap |
| `\llbracket` | `bracket.l.double` |
| `\llcorner` | `⌞` |
| `\Lleftarrow` | `arrow.l.triple` |
| `\lll` | `<<<` |
| `\llless` | `<<<` |
| `\ln` | `ln` |
| `\lnapprox` | `⪉` |
| `\lneq` | `⪇` |
| `\lneqq` | `lt.nequiv` |
| `\lnot` | `not` |
| `\lnsim` | `lt.ntilde` |
| `\log` | `log` |
| `\long` | TODO#scripting |
| `\Longleftarrow` | `<==` |
| `\longleftarrow` | `<--` |
| `\Longleftrightarrow` | `<==>` |
| `\longleftrightarrow` | `<-->` |
| `\longmapsto` | `arrow.r.long.bar` |
| `\Longrightarrow` | `==>` |
| `\longrightarrow` | `-->` |
| `\looparrowleft` | `arrow.l.loop` |
| `\looparrowright` | `arrow.r.loop` |
| `\lor` | `or` |
| `\lozenge` | `lozenge.stroked` |
| `\lparen` | `(` |
| `\Lrarr` | `<=>` |
| `\lrArr` | `<=>` |
| `\lrarr` | `<->` |
| `\lrcorner` | `⌟` |
| `\lq` | `quote.l.single` |
| `\Lsh` | `↰` |
| `\lt` | `<` |
| `\ltimes` | `times.l` |
| `\lVert` | `parallel` |
| `\lvert` | `divides` |
| `\lvertneqq` | not found in unicode |
## M
| LaTeX | Typst |
| ------------------ | ----------------------------- |
| `\maltese` | `maltese` |
| `\mapsto` | `arrow.r.bar` |
| `\mathbb` | `bb($1)` |
| `\mathbf` | `bold($1)` |
| `\mathbin` | `#math.op("$1")` |
| `\mathcal` | `cal($1)` |
| `\mathchoice`     | TODO#spacing                  |
| `\mathclap` | `#box(width: 0pt, $1)` |
| `\mathclose` | `#h(0pt) $1` |
| `\mathellipsis` | `...` |
| `\mathfrak` | `frak($1)` |
| `\mathinner` | TODO#spacing |
| `\mathit` | `italic($1)` |
| `\mathllap` | TODO#overlap |
| `\mathnormal` | `$1` |
| `\mathop` | `$1` |
| `\mathopen` | `$1 #h(0pt)` |
| `\mathord` | TODO#spacing |
| `\mathpunct` | TODO#spacing |
| `\mathrel` | TODO#spacing |
| `\mathrlap` | TODO#overlap |
| `\mathring` | `circle($1)` |
| `\mathrm` | `upright($1)` |
| `\mathscr` | TODO#font |
| `\mathsf` | `sans($1)` |
| `\mathsterling` | `pound` |
| `\mathstrut` | `#hide(box(width: 0pt, ")"))` |
| `\mathtt` | `mono($1)` |
| `\max` | `max` |
| `\measuredangle` | `angle.arc` |
| `\medspace` | `#h(2em/9)` |
| `\mho` | `ohm.inv` |
| `\mid` | `\|` |
| `\middle` | `mid($1)` |
| `\minuscolon` | `dash.colon` |
| `\minuscoloncolon` | `"−::"` |
| `\minuso` | `⦵` |
| `\mkern`           | TODO#TeX                      |
| `\mod` | `mod` |
| `\models` | `tack.r.double` |
| `\mp` | `minus.plus` |
| `\mskip` | `#h($1)` |
| `\Mu` | `Mu` |
| `\mu` | `mu` |
| `\multimap` | `multimap` |
## N
| LaTeX | Typst |
| ------------------- | ---------------------- |
| `\N` | `NN` |
| `\nabla` | `nabla` |
| `\natnums` | `NN` |
| `\natural` | `♮` |
| `\negmedspace` | `#h(-2em/9)` |
| `\ncong` | `tilde.equiv.not` |
| `\ne` | `!=` |
| `\nearrow` | `arrow.tr` |
| `\neg` | `not` |
| `\negthickspace` | `#h(-5em/18)` |
| `\negthinspace`    | `#h(-1em/6)`           |
| `\neq` | `!=` |
| `\newcommand` | TODO#scripting |
| `\newline` | `\` |
| `\nexist`           | `exists.not`           |
| `\ngeq` | `gt.eq.not` |
| `\ngeqq` | not found in unicode |
| `\ngeqslant` | not found in unicode |
| `\ngtr` | `gt.not` |
| `\ni` | `in.rev` |
| `\nLeftarrow` | `arrow.l.double.not` |
| `\nleftarrow` | `arrow.l.not` |
| `\nLeftrightarrow` | `arrow.l.r.double.not` |
| `\nleftrightarrow` | `arrow.l.r.not` |
| `\nleq` | `lt.eq.not` |
| `\nleqq` | not found in unicode |
| `\nleqslant` | not found in unicode |
| `\nless` | `lt.not` |
| `\nmid` | `divides.not` |
| `\nobreak` | TODO#spacing |
| `\nobreakspace` | `space.nobreak` |
| `\noexpand` | TODO#scripting |
| `\nolimits` | ignored |
| `\nonumber` | TODO#begin |
| `\normalsize` | TODO#affect following |
| `\notin` | `in.not` |
| `\notni` | `in.rev.not` |
| `\nparallel` | `parallel.not` |
| `\nprec` | `prec.not` |
| `\npreceq` | `prec.eq.not` |
| `\nRightarrow` | `arrow.r.double.not` |
| `\nrightarrow` | `arrow.r.not` |
| `\nshortmid` | not found in unicode |
| `\nshortparallel` | not found in unicode |
| `\nsim` | `tilde.not` |
| `\nsubseteq` | `subset.eq.not` |
| `\nsubseteqq` | not found in unicode |
| `\nsucc` | `succ.not` |
| `\nsucceq` | `succ.eq.not` |
| `\nsupseteq` | `supset.eq.not` |
| `\nsupseteqq` | not found in unicode |
| `\ntriangleleft` | `lt.tri.not` |
| `\ntrianglelefteq` | `lt.tri.eq.not` |
| `\ntriangleright` | `gt.tri.not` |
| `\ntrianglerighteq` | `gt.tri.eq.not` |
| `\Nu` | `Nu` |
| `\nu` | `nu` |
| `\nVDash` | `⊯` |
| `\nVdash` | `⊮` |
| `\nvDash` | `tack.r.double.not` |
| `\nvdash` | `tack.r.not` |
| `\nwarrow` | `arrow.tl` |
## O
| LaTeX | Typst |
| ------------------------- | ----------------------------------- |
| `\O` | `Ø` |
| `\o` | `ø` |
| `\odot` | `dot.circle` |
| `\OE` | `Œ` |
| `\oe` | `œ` |
| `\oiiint` | `integral.vol` |
| `\oiint` | `integral.surf` |
| `\oint` | `integral.cont` |
| `\Omega` | `Omega` |
| `\omega` | `omega` |
| `\Omicron` | `Omicron` |
| `\omicron` | `omicron` |
| `\ominus` | `minus.circle` |
| `\operatorname` | `#math.op("$1")` |
| `\operatorname*` | `#math.op("$1", limits: true)` |
| `\operatornamewithlimits` | `#math.op("$1", limits: true)` |
| `\oplus` | `plus.circle` |
| `\origof` | `⊶` |
| `\oslash` | `⊘` |
| `\otimes` | `times.circle` |
| `\over` | TODO#binary |
| `\overbrace` | `overbrace($1)` `overbrace($1, $2)` |
| `\overgroup` | `accent($1, \u{0311})` |
| `\overleftarrow` | `arrow.l($1)` |
| `\overleftharpoon` | `accent($1, \u{20d0})` |
| `\overleftrightarrow` | `accent($1, \u{20e1})` |
| `\overline` | `overline($1)` |
| `\overlinesegment` | `accent($1, \u{20e9})` |
| `\Overrightarrow` | TODO#no alternative |
| `\overrightarrow` | `arrow.r($1)` |
| `\overrightharpoon` | `accent($1, \u{20d1})` |
| `\overset` | TODO#overlap |
| `\owns` | `in.rev` |
## P
| LaTeX | Typst |
| ----------------- | ------------------------------------------- |
| `\P` | `pilcrow` |
| `\parallel` | `parallel` |
| `\partial` | `diff` |
| `\perp` | `bot` |
| `\phantom` | `#hide[$$1$]` |
| `\phase` | TODO#no alternative |
| `\Phi` | `Phi` |
| `\phi` | `phi.alt` |
| `\Pi` | `Pi` |
| `\pi` | `pi` |
| `\pitchfork` | `⋔` |
| `\plim` | `#math.op("plim", limits: true)` |
| `\plusmn` | `plus.minus` |
| `\pm` | `plus.minus` |
| `\pmb` | `bold($1)` maybe works |
| `\pmod` | `mod` |
| `\pod` | TODO#spacing |
| `\pounds` | `pound` |
| `\Pr` | `Pr` |
| `\prec` | `prec` |
| `\precapprox` | `prec.approx` |
| `\preccurlyeq` | `prec.eq` |
| `\preceq` | `⪯` |
| `\precnapprox` | `prec.napprox` |
| `\precneqq` | `prec.nequiv` |
| `\precnsim` | `prec.ntilde` |
| `\precsim` | `prec.tilde` |
| `\prime` | `prime` |
| `\prod` | `product` |
| `\projlim` | `#math.op("proj\u{2009}lim", limits: true)` |
| `\propto` | `prop` |
| `\providecommand` | TODO#scripting |
| `\Psi` | `Psi` |
| `\psi` | `psi` |
| `\pu` | not supported in ipynb |
## QR
| LaTeX | Typst |
| -------------------- | -------------------------- |
| `\qquad` | `#h(2em)` |
| `\quad` | `space.quad` |
| `\R` | `RR` |
| `\r` | `circle($1)` |
| `\raisebox` | `#text(baseline: -$1)[$2]` |
| `\rang` | `angle.r` |
| `\rangle` | `angle.r` |
| `\Rarr` | `=>` |
| `\rArr` | `=>` |
| `\rarr` | `->` |
| `\ratio` | `:` |
| `\rBrace` | `⦄` |
| `\rbrace` | `}` |
| `\rbrack` | `]` |
| `\rceil` | `⌉` |
| `\Re` | `Re` |
| `\real` | `Re` |
| `\Reals` | `RR` |
| `\reals` | `RR` |
| `\renewcommand` | TODO#scripting |
| `\restriction` | `harpoon.tr` |
| `\rfloor` | `⌋` |
| `\rgroup` | `turtle.r` |
| `\rhd` | `gt.tri` |
| `\Rho` | `Rho` |
| `\rho` | `rho` |
| `\right` | `lr(... $1)` |
| `\Rightarrow` | `=>` |
| `\rightarrow` | `->` |
| `\rightarrowtail` | `>->` |
| `\rightharpoondown` | `harpoon.rb` |
| `\rightharpoonup` | `harpoon.rt` |
| `\rightleftarrows` | `arrows.rl` |
| `\rightleftharpoons` | `harpoons.rtlb` |
| `\rightrightarrows` | `arrows.rr` |
| `\rightsquigarrow` | `arrow.r.squiggly` |
| `\rightthreetimes` | `times.three.r` |
| `\risingdotseq` | `≓` |
| `\rlap` | TODO#overlap |
| `\rm` | TODO#affect following |
| `\rmoustache` | `⎱` |
| `\rparen` | `)` |
| `\rq` | `'` |
| `\rrbracket`         | `bracket.r.double`         |
| `\Rrightarrow` | `arrow.r.triple` |
| `\Rsh` | `↱` |
| `\rtimes` | `times.r` |
| `\rule` | [ref](#references) |
| `\rVert` | `parallel` |
| `\rvert` | `divides` |
## S
| LaTeX | Typst |
| -------------------- | --------------------------- |
| `\S` | `section` |
| `\scriptscriptstyle` | TODO#affect following |
| `\scriptsize` | TODO#affect following |
| `\scriptstyle` | TODO#affect following |
| `\sdot` | `dot.op` |
| `\searrow` | `arrow.br` |
| `\sec` | `sec` |
| `\sect` | `section` |
| `\Set` | `{$1}` |
| `\set` | `{$1}` |
| `\setminus` | `without` |
| `\sf` | TODO#affect following |
| `\sharp`             | `♯`                         |
| `\shortmid` | TODO#no alternative |
| `\shortparallel` | TODO#no alternative |
| `\Sigma` | `Sigma` |
| `\sigma` | `sigma` |
| `\sim` | `tilde.op` |
| `\simcolon` | `tilde.op:` |
| `\simcoloncolon` | `tilde.op::` |
| `\simeq` | `tilde.eq` |
| `\sin` | `sin` |
| `\sinh` | `sinh` |
| `\sixptsize` | TODO#affect following |
| `\sh` | `#math.op("sh")` |
| `\small` | TODO#affect following |
| `\smallint` | `inline(integral)` |
| `\smallsetminus` | `without` |
| `\smallsmile` | `⌣` |
| `\sout` | `cancel(angle: #90deg, $1)` |
| `\space` | `space` |
| `\spades` | `suit.spade` |
| `\spadesuit` | `suit.spade` |
| `\sphericalangle` | `angle.spheric` |
| `\sqcap` | `sect.sq` |
| `\sqcup` | `union.sq` |
| `\square` | `square.stroked` |
| `\sqrt` | `sqrt($1)` `root($1, $2)` |
| `\sqsubset` | `subset.sq` |
| `\sqsubseteq` | `subset.eq.sq` |
| `\sqsupset` | `supset.sq` |
| `\sqsupseteq` | `supset.eq.sq` |
| `\ss` | `ß` |
| `\stackrel` | TODO#overlap |
| `\star` | `star.op` |
| `\sub` | `subset` |
| `\sube` | `subset.eq` |
| `\Subset` | `subset.double` |
| `\subset` | `subset` |
| `\subseteq` | `subset.eq` |
| `\subseteqq` | `⫅` |
| `\subsetneq` | `subset.neq` |
| `\subsetneqq` | `⫋` |
| `\substack` | TODO#overlap |
| `\succ` | `succ` |
| `\succapprox` | `succ.approx` |
| `\succcurlyeq` | `succ.eq` |
| `\succeq` | `⪰` |
| `\succnapprox` | `succ.napprox` |
| `\succneqq` | `succ.nequiv` |
| `\succnsim` | `succ.ntilde` |
| `\sum` | `sum` |
| `\sup` | `sup` |
| `\supe` | `supset.eq` |
| `\Supset` | `supset.double` |
| `\supset` | `supset` |
| `\supseteq` | `supset.eq` |
| `\supseteqq` | `⫆` |
| `\supsetneq` | `supset.neq` |
| `\supsetneqq` | `⫌` |
| `\surd` | `√` |
| `\swarrow` | `arrow.bl` |
## T
some command here is text mode only
| LaTeX | Typst |
| -------------------- | ----------------------- |
| `\tag` | TODO#not sure |
| `\tag*` | TODO#not sure |
| `\tan` | `tan` |
| `\tanh` | `tanh` |
| `\Tau` | `Tau` |
| `\tau` | `tau` |
| `\tbinom` | `inline(binom($1, $2))` |
| `\TeX` | `"TeX"` |
| `\text` | `#[$1]` |
| `\textasciitilde` | `~` |
| `\textasciicircum` | `\^` |
| `\textbackslash` | `\\` |
| `\textbar` | `\|` |
| `\textbardbl` | `‖` |
| `\textbf` | `bold(#[$1])` |
| `\textbraceleft` | `{` |
| `\textbraceright` | `}` |
| `\textcircled` | TODO#not sure |
| `\textcolor` | `text(fill: $1)[$2]` |
| `\textdagger` | `#sym.dagger` |
| `\textdaggerdbl` | `#sym.dagger.double` |
| `\textdegree` | `#sym.degree` |
| `\textdollarsign` | `\$` |
| `\textellipsis` | `...` |
| `\textemdash` | `---` |
| `\textendash` | `--` |
| `\textgreater` | `#sym.gt` |
| `\textit` | `italic(#[$1])` |
| `\textless` | `#sym.lt` |
| `\textmd` | `#[$1]` |
| `\textnormal` | `#[$1]` |
| `\textquotedblleft` | `#sym.quote.l.double` |
| `\textquotedblright` | `#sym.quote.r.double` |
| `\textquoteleft` | `#sym.quote.l.single` |
| `\textquoteright` | `#sym.quote.r.single` |
| `\textregistered` | `®` |
| `\textrm` | `#[$1]` |
| `\textsf` | `sans(#[$1])` |
| `\textsterling` | `#sym.pound` |
| `\textstyle`         | `inline($1)`            |
| `\texttt` | `mono(#[$1])` |
| `\textunderscore` | `\_` |
| `\textup` | `#[$1]` |
| `\tfrac` | `inline(frac($1, $2))` |
| `\tg` | `tg` |
| `\th` | `#math.op("th")` |
| `\therefore` | `therefore` |
| `\Theta` | `Theta` |
| `\theta` | `theta` |
| `\thetasym` | `theta.alt` |
| `\thickapprox` | `bold(approx)` |
| `\thicksim` | `bold(tilde)` |
| `\thickspace` | `#h(5em/18)` |
| `\thinspace` | `space.sixth` |
| `\tilde` | `tilde($1)` |
| `\times` | `times` |
| `\tiny` | TODO#affect following |
| `\to` | `->` |
| `\top` | `top` |
| `\triangle` | `triangle.stroked.t` |
| `\triangledown` | `triangle.stroked.b` |
| `\triangleleft` | `triangle.stroked.l` |
| `\trianglelefteq` | `lt.tri.eq` |
| `\triangleq` | `eq.delta` |
| `\triangleright` | `triangle.stroked.r` |
| `\trianglerighteq` | `gt.tri.eq` |
| `\tt` | TODO#affect following |
| `\twoheadleftarrow` | `<<-` |
| `\twoheadrightarrow` | `->>` |
## U
| LaTeX | Typst |
| ---------------------- | ------------------------------------- |
| `\u` | `breve($1)` |
| `\Uarr` | `arrow.t.double` |
| `\uArr` | `arrow.t.double` |
| `\uarr` | `arrow.t` |
| `\ulcorner` | `⌜` |
| `\underbar` | `underline($1)` |
| `\underbrace` | `underbrace($1)` `underbrace($1, $2)` |
| `\undergroup` | `accent($1, \u{032e})` |
| `\underleftarrow` | TODO#no alternative |
| `\underleftrightarrow` | `accent($1, \u{034d})` |
| `\underline` | `underline($1)` |
| `\underlinesegment` | TODO#no alternative |
| `\underrightarrow` | TODO#no alternative |
| `\underset` | TODO#overlap |
| `\unlhd` | `lt.tri.eq` |
| `\unrhd` | `gt.tri.eq` |
| `\Uparrow` | `arrow.t.double` |
| `\uparrow` | `arrow.t` |
| `\Updownarrow` | `arrow.t.b.double` |
| `\updownarrow` | `arrow.t.b` |
| `\upharpoonleft` | `harpoon.tl` |
| `\upharpoonright` | `harpoon.tr` |
| `\uplus` | `union.plus` |
| `\Upsilon` | `Upsilon` |
| `\upsilon` | `upsilon` |
| `\upuparrows` | `arrows.tt` |
| `\urcorner` | `⌝` |
| `\url` | not supported in ipynb |
| `\utilde` | TODO#no alternative |
## V
| LaTeX | Typst |
| ------------------- | ------------------------------ |
| `\v` | `caron($1)` |
| `\varDelta` | `italic(Delta)` |
| `\varepsilon` | `italic(epsilon)` |
| `\varGamma` | `italic(Gamma)` |
| `\varinjlim` | TODO#no alternative |
| `\varkappa` | `italic(kappa)` |
| `\varLambda` | `italic(Lambda)` |
| `\varliminf` | TODO#no alternative |
| `\varlimsup` | TODO#no alternative |
| `\varnothing` | `italic(nothing)` |
| `\varOmega` | `italic(Omega)` |
| `\varPhi` | `italic(Phi)` |
| `\varphi` | `italic(phi)` |
| `\varPi` | `italic(Pi)` |
| `\varpi` | `italic(pi.alt)` |
| `\varprojlim` | TODO#no alternative |
| `\varpropto` | `prop` |
| `\varPsi` | `italic(Psi)` |
| `\varrho` | `italic(rho.alt)` |
| `\varSigma` | `italic(Sigma)` |
| `\varsigma` | `italic(sigma.alt)` |
| `\varsubsetneq` | `subset.neq` |
| `\varsubsetneqq` | `⫋` |
| `\varsupsetneq` | `supset.neq` |
| `\varsupsetneqq` | `⫌` |
| `\varTheta` | `italic(Theta)` |
| `\vartheta` | `italic(theta)` |
| `\vartriangle` | `triangle.stroked.t` |
| `\vartriangleleft` | `lt.tri` |
| `\vartriangleright` | `gt.tri` |
| `\varUpsilon` | `italic(Upsilon)` |
| `\varXi` | `italic(Xi)` |
| `\vcentcolon` | `:` |
| `\vcenter` | TODO#not sure |
| `\Vdash` | `⊩` |
| `\vDash` | `tack.r.double` |
| `\vdash` | `tack.r` |
| `\vdots` | `dots.v` |
| `\vec` | `arrow($1)` |
| `\vee` | `or` |
| `\veebar` | `⊻` |
| `\verb` | TODO#not sure |
| `\Vert` | `parallel` |
| `\vert` | `divides` |
| `\vphantom`         | `#box(width: 0pt, hide[$$1$])` |
| `\Vvdash` | `⊪` |
## W
| LaTeX | Typst |
| ------------ | ----------- |
| `\wedge` | `and` |
| `\weierp` | `℘` |
| `\widecheck` | `caron($1)` |
| `\widehat` | `hat($1)` |
| `\widetilde` | `tilde($1)` |
| `\wp` | `℘` |
| `\wr` | `wreath` |
## X
| LaTeX | Typst |
| --------------------- | ----------------------------------- |
| `\xcancel` | `cancel(cross: #true, $1)` |
| `\xdef` | TODO#scripting |
| `\Xi` | `Xi` |
| `\xi` | `xi` |
| `\xhookleftarrow` | `xarrow(sym: arrow.l.hook, $1)` |
| `\xhookrightarrow` | `xarrow(sym: arrow.r.hook, $1)` |
| `\xLeftarrow` | `xarrow(sym: arrow.l.double, $1)` |
| `\xleftarrow` | `xarrow(sym: arrow.l, $1)` |
| `\xleftharpoondown` | `xarrow(sym: harpoon.lb, $1)` |
| `\xleftharpoonup` | `xarrow(sym: harpoon.lt, $1)` |
| `\xLeftrightarrow` | `xarrow(sym: arrow.l.r.double, $1)` |
| `\xleftrightarrow` | `xarrow(sym: arrow.l.r, $1)` |
| `\xleftrightharpoons` | `xarrow(sym: harpoons.ltrb, $1)` |
| `\xlongequal` | `xarrow(sym: eq, $1)` |
| `\xmapsto` | `xarrow(sym: arrow.r.bar, $1)` |
| `\xRightarrow` | `xarrow(sym: arrow.r.double, $1)` |
| `\xrightarrow` | `xarrow(sym: arrow.r, $1)` |
| `\xrightharpoondown` | `xarrow(sym: harpoon.rb, $1)` |
| `\xrightharpoonup` | `xarrow(sym: harpoon.rt, $1)` |
| `\xrightleftharpoons` | `xarrow(sym: harpoons.rtlb, $1)` |
| `\xtofrom` | `xarrow(sym: arrow.l.r, $1)` |
| `\xtwoheadleftarrow` | `xarrow(sym: arrow.l.twohead, $1)` |
| `\xtwoheadrightarrow` | `xarrow(sym: arrow.r.twohead, $1)` |
## YZ
| LaTeX | Typst |
| ------- | ------ |
| `\yen` | `yen` |
| `\Z` | `ZZ` |
| `\Zeta` | `Zeta` |
| `\zeta` | `zeta` |
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/fauxreilly/0.1.0/lib.typ | typst | Apache License 2.0 | #let orly(
font: "",
color: blue,
top-text: "",
pic: "",
title: "",
title-align: left,
subtitle: "",
publisher: "",
publisher-font: ("Noto Sans", "Arial Rounded MT"),
signature: "",
margin: (top: 0in)
) = {
page(
margin: margin,
[
/**************
* VARIABLES
***************/
// Layout
#let top-bar-height = 0.33em // how tall to make the colored bar at the top of the page
// Title block
#let title-text-color = white
#let title-text-leading = 0.5em
#let title-block-height = 12em
// Subtitle
#let subtitle-margin = 0.5em // space between title block and subtitle text
#let subtitle-text-size = 1.4em
// "Publisher" / signature
#let publisher-text-size = 2em
#let signature-text-size = 0.9em
// *********************************************************
#set text(font: font) if font != ""
#grid(
rows: (
top-bar-height,
1em, // top text
1fr, // pre-image spacing
auto, // image
1fr, // spacing between image and title block
title-block-height,
subtitle-margin, // spacing between title and subtitle
subtitle-text-size, // subtitle
1fr, // spacing between subtitle and "publisher"
publisher-text-size
),
rect(width: 100%, height: 100%, fill: color), // color bar at top
align(center + bottom)[#emph[#top-text]], // top text
[], // pre-image spacing
image(pic, width: 100%, fit: "contain"), // image
[], // spacing between image and title block
block(width: 100%, height: title-block-height, inset: (x: 2em), fill: color)[ // title block
#set text(fill: title-text-color, size: 3em)
#set par(leading: title-text-leading)
#set align(title-align + horizon)
#title
],
[], // spacing between title block and subtitle
align(right)[
#set text(size: subtitle-text-size)
#emph[#subtitle]
],
[],
[
#text(font: publisher-font, weight: "bold", size: publisher-text-size)[
#if publisher == "" {
[O RLY#text(fill: color)[#super[?]]]
} else {
publisher
}
]
#if signature != "" {
box(width: 1fr, height: 100%)[
#set align(right + bottom)
#set text(size: signature-text-size)
#emph[#signature]
]
}
]
)
]
)
}
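// A hypothetical usage sketch (not part of the package); the import path,
// "cover.png", and all text values below are illustrative placeholders:
//
// #import "@preview/fauxreilly:0.1.0": orly
//
// #orly(
//   color: rgb("#3d6b99"),
//   top-text: "Definitely not an official publication",
//   pic: "cover.png",
//   title: "Parody Covers with Typst",
//   subtitle: "One function, one page",
//   signature: "A. Hacker",
// )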
|
https://github.com/GZTimeWalker/GZ-Typst-Templates | https://raw.githubusercontent.com/GZTimeWalker/GZ-Typst-Templates/main/templates/homework.typ | typst | MIT License | #import "shared.typ": *
#let report(subject: "课程", title: "作业一", name: "张三", stdid: "11223344", body) = {
set document(title: title)
show: shared
let fieldname(name) = [
#set align(right + horizon)
#set text(font: fonts.serif)
#name
]
let cell = rect.with(width: 100%, radius: 6pt, stroke: none)
let fieldvalue(value) = [
#set align(left + horizon)
#set text(font: fonts.serif, weight: "medium", size: 13pt)
#cell(value)
]
set page(header: align(center)[
#grid(
columns: (auto, auto, auto, auto),
gutter: 1em,
fieldvalue(subject),
fieldvalue(title),
fieldvalue(name),
fieldvalue(stdid),
)
])
show par: set block(spacing: line_height)
set align(left + top)
set par(justify: true, first-line-indent: 0pt, leading: line_height)
body
}
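// A hypothetical usage sketch (assumed entry file living next to this
// template; the import path and all field values are placeholders):
//
// #import "homework.typ": report
//
// #show: report.with(
//   subject: "Linear Algebra",
//   title: "Homework 3",
//   name: "San Zhang",
//   stdid: "11223344",
// )
//
// Everything after the show rule becomes the homework body.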
|
https://github.com/hitszosa/universal-hit-thesis | https://raw.githubusercontent.com/hitszosa/universal-hit-thesis/main/README.md | markdown | MIT License | # 哈尔滨工业大学论文模板
A Typst template for degree theses at Harbin Institute of Technology

> [!WARNING]
> This template is under active development and still has some formatting issues; for now it is best suited for trying out Typst's features
>
> This is an unofficial community template that **may not be accepted by the university**. If you use it for a real thesis, be prepared to migrate your content to Word or LaTeX at any time
## About This Project
[Typst](https://typst.app/) is a new document typesetting system developed in Rust. It aims to deliver LaTeX-level typesetting power with Markdown-level syntax simplicity and compilation speed: you write a plain-text document that follows Typst's syntax rules and run a compile command to produce a PDF document in the target format.
**universal-hit-thesis** is a simple, easy-to-use set of Typst templates for Harbin Institute of Technology degree theses. Inspired by [hithesis](https://github.com/hithesis/hithesis), it aims to cover the bachelor, master, and doctoral thesis formats of all three campuses.
**Preview**
- Universal bachelor: [universal-bachelor.pdf](https://github.com/hitszosa/universal-hit-thesis/blob/build/universal-bachelor.pdf)
## Usage
### Local Editing Ⅰ (Recommended)
This approach suits most users.
First, install Typst. You can download the latest installer from the [Release page](https://github.com/typst/typst/releases/) of the Typst GitHub repository and add the `typst` executable to your `PATH` environment variable; if you use the Scoop package manager, you can install it directly with `scoop install typst`.
Once Typst is installed, pick a directory you like and run the following command in it:
```sh
typst init @preview/universal-hit-thesis:0.2.1
```
Typst will create a folder named `universal-hit-thesis`. Enter that directory, edit `universal-bachelor.typ` directly, and then run the following command to compile it into a `.pdf` document:
```sh
typst compile universal-bachelor.typ
```
Or use the following command for live preview:
```sh
typst watch universal-bachelor.typ
```
For live preview, we recommend editing in Visual Studio Code; extensions such as [Tinymist Typst](https://marketplace.visualstudio.com/items?itemName=nvarner.typst-lsp) and [Typst Preview](https://marketplace.visualstudio.com/items?itemName=mgt19937.typst-preview) can greatly improve your editing experience.
### Local Editing Ⅱ
This approach suits Typst developers.
First, clone this project with `git clone`, or download the source of a specific version directly from the Release page. Pick the template you need under `templates/`, edit it directly or make a copy, and run the following command from the project root to compile:
```sh
typst compile ./templates/<template-name>.typ --root ./
```
Or use the following command for live preview:
```sh
typst watch ./templates/<template-name>.typ --root ./
```
> [!TIP]
> This template is under active development and updated frequently. Although it has already been published to Typst Universe, you can use Typst local packages to try the latest version locally before it is synced to Typst Universe. To do so:
> - Download the source archive of the corresponding version from the Release page and extract it to `{data-dir}/typst/packages/local/universal-hit-thesis/{version}`, where `{data-dir}` is:
>   - `$XDG_DATA_HOME` or `~/.local/share` on Linux
>   - `~/Library/Application Support` on macOS
>   - `%LOCALAPPDATA%` on Windows
>
> `{version}` is the value of the `version` field in `typst.toml`.
>
> After extraction, the `typst.toml` file should be located directly under `{data-dir}/typst/packages/local/universal-hit-thesis/{version}`.
>
> - Then, in your thesis, change `#import "@preview/universal-hit-thesis:0.2.1"` to `#import "@local/universal-hit-thesis:{version}"` to switch to the local template.
### Online Editing
This template has been published to Typst Universe, so you can edit it using Typst's official Web App.
Specifically, after signing in to the Typst Web App, click `Start from template` and select `universal-hit-thesis` in the dialog to create a project from the template.


> [!NOTE]
>
> The Typst Web App renders documents locally in the browser, so the live-preview experience is almost identical to local editing.
>
> By default, the template's fonts may not display as expected in the Web App, because the Web App does not ship with fonts commonly used in Chinese typesetting, such as `SimSun` and `Times New Roman`. To solve this, you can search for the following font files:
>
> - `TimesNewRoman.ttf` (including the `Bold`, `Italic`, and `Bold-Italic` variants)
> - `SimSun.ttf`
> - `SimHei.ttf`
> - `Kaiti.ttf`
> - `Consolas.ttf`
> - `Courier New.ttf`
>
> and upload these files manually to the root directory of your Web App project, or, to keep the project tidy, create a `fonts` folder and place the fonts inside it. The Typst Web App will load these fonts automatically and render them correctly in the preview window.
>
> Because the fonts have to be downloaded again every time you open the project in the Typst Web App, and Chinese fonts are generally large and slow to load, we recommend **local editing** instead.
---
> [!NOTE]
> Note that the official Microsoft Word bachelor thesis template `本科毕业论文(设计)书写范例(理工类).doc` is shared across all three campuses, which means the bachelor part of this Typst template should in principle also be usable at all three campuses. We therefore export a bachelor-thesis template module for each campus, and the following four ways of importing the module are equivalent:
> ```typ
> #import "@preview/universal-hit-thesis:0.2.1": harbin-bachelor
> #import harbin-bachelor: * // Harbin campus, bachelor
> ```
> ```typ
> #import "@preview/universal-hit-thesis:0.2.1": weihai-bachelor
> #import weihai-bachelor: * // Weihai campus, bachelor
> ```
> ```typ
> #import "@preview/universal-hit-thesis:0.2.1": shenzhen-bachelor
> #import shenzhen-bachelor: * // Shenzhen campus, bachelor
> ```
> ```typ
> #import "@preview/universal-hit-thesis:0.2.1": universal-bachelor
> #import universal-bachelor: * // universal bachelor for all three campuses
> ```
## Dependencies
### Optional Dependencies
To write and reference pseudocode, you can use `algorithm-figure`; for that, you need to import the `algorithmic` or `lovelace` package.
```typ
#import "@preview/algorithmic:0.1.0"
#import algorithmic: algorithm
#import "@preview/lovelace:0.2.0": *
```
See the "伪代码" (pseudocode) section of the [template](https://github.com/chosertech/HIT-Thesis-Typst/blob/main/templates/universal-bachelor.typ) for usage details.
## Known Issues
### Typesetting
Although the font and font-size settings of every part of this Typst template match the original Word template, the paragraph layout still differs visually from the Word template in places; this is partly because the character spacing, line spacing, and paragraph spacing of the Word template were tuned by eye.
### Bibliography
- The university's requirements for bibliography formatting differ from the standard `GB/T 7714-2015 numeric` style. We have modified the relevant CSL file into `gb-t-7714-2015-numeric-hit.csl` to fix issues such as author-name capitalization, but the following known behaviors are not yet supported:
  - Showing the access date and URL only for purely electronic resources (such as web pages and software)
  - Omitting the DOI
- Entries for degree theses from other universities are formatted incorrectly on the bibliography page, because Typst does not yet support CSL fields such as `school`.
- The extent of CSL support in the current version of Typst is unclear; for more issues, see the [known bibliography issues of SEU-Typst-Template](https://github.com/csimide/SEU-Typst-Template/?tab=readme-ov-file#%E5%8F%82%E8%80%83%E6%96%87%E7%8C%AE).
## Acknowledgements
+ Thanks to [HUST-typst-template](https://github.com/werifu/HUST-typst-template) for providing ideas for the framework of early versions of this template.
+ Thanks to [@csimide](https://gist.github.com/csimide) and [@OrangeX4](https://github.com/OrangeX4) for the bilingual Chinese-English bibliography implementation.
+ Thanks to [modern-nju-thesis](https://github.com/nju-lug/modern-nju-thesis) for implementation ideas behind several of this template's features.
|
https://github.com/rwblickhan/resume | https://raw.githubusercontent.com/rwblickhan/resume/main/resume.typ | typst | #import "template.typ": *
#let asana = {
experience_item(
"Asana",
none,
"Jul 2019 - Present",
)[
#set list(indent: 1em)
    #experience_subheader[Senior Software Engineer][Jan 2023 - Present]
- Took ownership of Smart Status, one of Asana's first LLM-powered features,
working across frontend, backend, and prompt engineering; led two other
      engineers and a product manager to prototype cutting-edge RAG techniques to
      improve output
- Led a team of three to add Chrome-style tabbed interface to Asana's
Electron/React-based Desktop app
- Embedded on a partner team to unblock a major initiative to bring Gantt views to
the webapp; recognized by team's manager as a model and mentor for other
engineers due to my high velocity
#experience_subheader[Software Engineer][Jul 2019 - Jan 2023]
- Worked across frontend (React), backend (TypeScript/Scala), and mobile
(iOS/Android) to build features like:
- "Recently online" indicators in profile pictures
- Unread task indicators in My Tasks
- Reorganized My Tasks view with new "group by due dates" feature
- Starred notifications in inbox
- Mentored two interns, both of whom accepted return offers, and an apprentice
- Introduced monthly tech talks to an organization of 30 engineers
- Managed mobile on-call and release processes
]
}
#let snowmobile = {
experience_item(
"AWS Snowmobile",
"Software Engineer Intern",
"May - Aug 2018",
)[
- Implemented metrics aggregation pipeline via AWS Kinesis, with automated
deployment
]
}
#let thinkbox = {
experience_item(
"AWS Thinkbox",
"Software Engineer Intern",
"May - Aug 2017",
)[
- Implemented various computer graphics algorithms in C++ for experimental point
cloud mesher
]
}
#let t2 = {
experience_item(
"T2 Systems",
"Embedded Software Engineer Intern",
"Sep 2016 - May 2017",
)[
- Implemented UI and business logic in C++ for parking meter payment software
rewrite
]
}
#let linty = {
personal_project_item(
"Linty",
"Fall 2023",
"https://github.com/rwblickhan/linty",
)[
- Released Rust-based command-line tool for linting for regexes across a codebase
]
}
#let tag_search = {
personal_project_item(
"Obsidian Tag Search Plugin",
"Spring 2023",
"https://github.com/rwblickhan/obsidian-tag-search",
)[
- Published tag search plugin in TypeScript for Obsidian note-taking app, with
>2000 users
]
}
#resume[#asana #snowmobile #thinkbox #t2][#linty #tag_search] |
|
https://github.com/Clamentos/FabRISC | https://raw.githubusercontent.com/Clamentos/FabRISC/main/README.md | markdown | Creative Commons Attribution Share Alike 4.0 International | # FabRISC ISA
**This is the official repository for the FabRISC project.**
FabRISC is an open-source, modular, RISC-like instruction set architecture for 32 and 64-bit microprocessors (8 and 16-bit are also possible). Some features of this specification are:
- Variable length instructions of 2, 4 and 6 bytes.
- Scalar and vector capabilities.
- Atomic and transactional memory support.
- Privileged architecture with user and machine modes.
- Classically virtualizable.
- Performance monitoring counters and more!
## Compiling from source
This repository simply holds the [Typst](https://github.com/typst/typst) source code used to generate the PDF documents for the privileged and unprivileged specifications. Currently, only the unprivileged specification is close to complete. Compiling is simple: go to the `./src/unprivileged` directory and compile the `Main.typ` file by running the following command:
```sh
typst compile Main.typ --root ../
```
#### Disclaimer
This is just a personal project used as a learning experience, so don't expect anything groundbreaking; however, any kind of feedback is absolutely welcome and needed!
|
https://github.com/goshakowska/Typstdiff | https://raw.githubusercontent.com/goshakowska/Typstdiff/main/tests/test_working_types/link/link_updated.typ | typst | https://typst.app/docs/
https://pl.wikipedia.org/ |
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-1D400.typ | typst | Apache License 2.0 | #let data = (
("MATHEMATICAL BOLD CAPITAL A", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL B", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL C", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL D", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL E", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL F", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL G", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL H", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL I", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL J", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL K", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL L", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL M", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL N", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL O", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL P", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL Q", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL R", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL S", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL T", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL U", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL V", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL W", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL X", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL Y", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL Z", "Lu", 0),
("MATHEMATICAL BOLD SMALL A", "Ll", 0),
("MATHEMATICAL BOLD SMALL B", "Ll", 0),
("MATHEMATICAL BOLD SMALL C", "Ll", 0),
("MATHEMATICAL BOLD SMALL D", "Ll", 0),
("MATHEMATICAL BOLD SMALL E", "Ll", 0),
("MATHEMATICAL BOLD SMALL F", "Ll", 0),
("MATHEMATICAL BOLD SMALL G", "Ll", 0),
("MATHEMATICAL BOLD SMALL H", "Ll", 0),
("MATHEMATICAL BOLD SMALL I", "Ll", 0),
("MATHEMATICAL BOLD SMALL J", "Ll", 0),
("MATHEMATICAL BOLD SMALL K", "Ll", 0),
("MATHEMATICAL BOLD SMALL L", "Ll", 0),
("MATHEMATICAL BOLD SMALL M", "Ll", 0),
("MATHEMATICAL BOLD SMALL N", "Ll", 0),
("MATHEMATICAL BOLD SMALL O", "Ll", 0),
("MATHEMATICAL BOLD SMALL P", "Ll", 0),
("MATHEMATICAL BOLD SMALL Q", "Ll", 0),
("MATHEMATICAL BOLD SMALL R", "Ll", 0),
("MATHEMATICAL BOLD SMALL S", "Ll", 0),
("MATHEMATICAL BOLD SMALL T", "Ll", 0),
("MATHEMATICAL BOLD SMALL U", "Ll", 0),
("MATHEMATICAL BOLD SMALL V", "Ll", 0),
("MATHEMATICAL BOLD SMALL W", "Ll", 0),
("MATHEMATICAL BOLD SMALL X", "Ll", 0),
("MATHEMATICAL BOLD SMALL Y", "Ll", 0),
("MATHEMATICAL BOLD SMALL Z", "Ll", 0),
("MATHEMATICAL ITALIC CAPITAL A", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL B", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL C", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL D", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL E", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL F", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL G", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL H", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL I", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL J", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL K", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL L", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL M", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL N", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL O", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL P", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL Q", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL R", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL S", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL T", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL U", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL V", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL W", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL X", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL Y", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL Z", "Lu", 0),
("MATHEMATICAL ITALIC SMALL A", "Ll", 0),
("MATHEMATICAL ITALIC SMALL B", "Ll", 0),
("MATHEMATICAL ITALIC SMALL C", "Ll", 0),
("MATHEMATICAL ITALIC SMALL D", "Ll", 0),
("MATHEMATICAL ITALIC SMALL E", "Ll", 0),
("MATHEMATICAL ITALIC SMALL F", "Ll", 0),
("MATHEMATICAL ITALIC SMALL G", "Ll", 0),
(),
("MATHEMATICAL ITALIC SMALL I", "Ll", 0),
("MATHEMATICAL ITALIC SMALL J", "Ll", 0),
("MATHEMATICAL ITALIC SMALL K", "Ll", 0),
("MATHEMATICAL ITALIC SMALL L", "Ll", 0),
("MATHEMATICAL ITALIC SMALL M", "Ll", 0),
("MATHEMATICAL ITALIC SMALL N", "Ll", 0),
("MATHEMATICAL ITALIC SMALL O", "Ll", 0),
("MATHEMATICAL ITALIC SMALL P", "Ll", 0),
("MATHEMATICAL ITALIC SMALL Q", "Ll", 0),
("MATHEMATICAL ITALIC SMALL R", "Ll", 0),
("MATHEMATICAL ITALIC SMALL S", "Ll", 0),
("MATHEMATICAL ITALIC SMALL T", "Ll", 0),
("MATHEMATICAL ITALIC SMALL U", "Ll", 0),
("MATHEMATICAL ITALIC SMALL V", "Ll", 0),
("MATHEMATICAL ITALIC SMALL W", "Ll", 0),
("MATHEMATICAL ITALIC SMALL X", "Ll", 0),
("MATHEMATICAL ITALIC SMALL Y", "Ll", 0),
("MATHEMATICAL ITALIC SMALL Z", "Ll", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL A", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL B", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL C", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL D", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL E", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL F", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL G", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL H", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL I", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL J", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL K", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL L", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL M", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL N", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL O", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL P", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL Q", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL R", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL S", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL T", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL U", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL V", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL W", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL X", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL Y", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL Z", "Lu", 0),
("MATHEMATICAL BOLD ITALIC SMALL A", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL B", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL C", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL D", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL E", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL F", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL G", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL H", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL I", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL J", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL K", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL L", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL M", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL N", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL O", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL P", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL Q", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL R", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL S", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL T", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL U", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL V", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL W", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL X", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL Y", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL Z", "Ll", 0),
("MATHEMATICAL SCRIPT CAPITAL A", "Lu", 0),
(),
("MATHEMATICAL SCRIPT CAPITAL C", "Lu", 0),
("MATHEMATICAL SCRIPT CAPITAL D", "Lu", 0),
(),
(),
("MATHEMATICAL SCRIPT CAPITAL G", "Lu", 0),
(),
(),
("MATHEMATICAL SCRIPT CAPITAL J", "Lu", 0),
("MATHEMATICAL SCRIPT CAPITAL K", "Lu", 0),
(),
(),
("MATHEMATICAL SCRIPT CAPITAL N", "Lu", 0),
("MATHEMATICAL SCRIPT CAPITAL O", "Lu", 0),
("MATHEMATICAL SCRIPT CAPITAL P", "Lu", 0),
("MATHEMATICAL SCRIPT CAPITAL Q", "Lu", 0),
(),
("MATHEMATICAL SCRIPT CAPITAL S", "Lu", 0),
("MATHEMATICAL SCRIPT CAPITAL T", "Lu", 0),
("MATHEMATICAL SCRIPT CAPITAL U", "Lu", 0),
("MATHEMATICAL SCRIPT CAPITAL V", "Lu", 0),
("MATHEMATICAL SCRIPT CAPITAL W", "Lu", 0),
("MATHEMATICAL SCRIPT CAPITAL X", "Lu", 0),
("MATHEMATICAL SCRIPT CAPITAL Y", "Lu", 0),
("MATHEMATICAL SCRIPT CAPITAL Z", "Lu", 0),
("MATHEMATICAL SCRIPT SMALL A", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL B", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL C", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL D", "Ll", 0),
(),
("MATHEMATICAL SCRIPT SMALL F", "Ll", 0),
(),
("MATHEMATICAL SCRIPT SMALL H", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL I", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL J", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL K", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL L", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL M", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL N", "Ll", 0),
(),
("MATHEMATICAL SCRIPT SMALL P", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL Q", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL R", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL S", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL T", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL U", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL V", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL W", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL X", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL Y", "Ll", 0),
("MATHEMATICAL SCRIPT SMALL Z", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL A", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL B", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL C", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL D", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL E", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL F", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL G", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL H", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL I", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL J", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL K", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL L", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL M", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL N", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL O", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL P", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL Q", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL R", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL S", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL T", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL U", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL V", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL W", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL X", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL Y", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT CAPITAL Z", "Lu", 0),
("MATHEMATICAL BOLD SCRIPT SMALL A", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL B", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL C", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL D", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL E", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL F", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL G", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL H", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL I", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL J", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL K", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL L", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL M", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL N", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL O", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL P", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL Q", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL R", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL S", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL T", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL U", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL V", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL W", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL X", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL Y", "Ll", 0),
("MATHEMATICAL BOLD SCRIPT SMALL Z", "Ll", 0),
("MATHEMATICAL FRAKTUR CAPITAL A", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL B", "Lu", 0),
(),
("MATHEMATICAL FRAKTUR CAPITAL D", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL E", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL F", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL G", "Lu", 0),
(),
(),
("MATHEMATICAL FRAKTUR CAPITAL J", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL K", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL L", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL M", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL N", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL O", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL P", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL Q", "Lu", 0),
(),
("MATHEMATICAL FRAKTUR CAPITAL S", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL T", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL U", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL V", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL W", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL X", "Lu", 0),
("MATHEMATICAL FRAKTUR CAPITAL Y", "Lu", 0),
(),
("MATHEMATICAL FRAKTUR SMALL A", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL B", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL C", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL D", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL E", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL F", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL G", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL H", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL I", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL J", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL K", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL L", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL M", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL N", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL O", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL P", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL Q", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL R", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL S", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL T", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL U", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL V", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL W", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL X", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL Y", "Ll", 0),
("MATHEMATICAL FRAKTUR SMALL Z", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL A", "Lu", 0),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL B", "Lu", 0),
(),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL D", "Lu", 0),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL E", "Lu", 0),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL F", "Lu", 0),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL G", "Lu", 0),
(),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL I", "Lu", 0),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL J", "Lu", 0),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL K", "Lu", 0),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL L", "Lu", 0),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL M", "Lu", 0),
(),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL O", "Lu", 0),
(),
(),
(),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL S", "Lu", 0),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL T", "Lu", 0),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL U", "Lu", 0),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL V", "Lu", 0),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL W", "Lu", 0),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL X", "Lu", 0),
("MATHEMATICAL DOUBLE-STRUCK CAPITAL Y", "Lu", 0),
(),
("MATHEMATICAL DOUBLE-STRUCK SMALL A", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL B", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL C", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL D", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL E", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL F", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL G", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL H", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL I", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL J", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL K", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL L", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL M", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL N", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL O", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL P", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL Q", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL R", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL S", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL T", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL U", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL V", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL W", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL X", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL Y", "Ll", 0),
("MATHEMATICAL DOUBLE-STRUCK SMALL Z", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL A", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL B", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL C", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL D", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL E", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL F", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL G", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL H", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL I", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL J", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL K", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL L", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL M", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL N", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL O", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL P", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL Q", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL R", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL S", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL T", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL U", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL V", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL W", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL X", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL Y", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR CAPITAL Z", "Lu", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL A", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL B", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL C", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL D", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL E", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL F", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL G", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL H", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL I", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL J", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL K", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL L", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL M", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL N", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL O", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL P", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL Q", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL R", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL S", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL T", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL U", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL V", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL W", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL X", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL Y", "Ll", 0),
("MATHEMATICAL BOLD FRAKTUR SMALL Z", "Ll", 0),
("MATHEMATICAL SANS-SERIF CAPITAL A", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL B", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL C", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL D", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL E", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL F", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL G", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL H", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL I", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL J", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL K", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL L", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL M", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL N", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL O", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL P", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL Q", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL R", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL S", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL T", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL U", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL V", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL W", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL X", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL Y", "Lu", 0),
("MATHEMATICAL SANS-SERIF CAPITAL Z", "Lu", 0),
("MATHEMATICAL SANS-SERIF SMALL A", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL B", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL C", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL D", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL E", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL F", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL G", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL H", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL I", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL J", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL K", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL L", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL M", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL N", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL O", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL P", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL Q", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL R", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL S", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL T", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL U", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL V", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL W", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL X", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL Y", "Ll", 0),
("MATHEMATICAL SANS-SERIF SMALL Z", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL A", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL B", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL C", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL D", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL E", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL F", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL G", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL H", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL I", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL J", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL K", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL L", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL M", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL N", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL O", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL P", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL Q", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL R", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL S", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL T", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL U", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL V", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL W", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL X", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL Y", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL Z", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL A", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL B", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL C", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL D", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL E", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL F", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL G", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL H", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL I", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL J", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL K", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL L", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL M", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL N", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL O", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL P", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL Q", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL R", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL S", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL T", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL U", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL V", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL W", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL X", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL Y", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL Z", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL A", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL B", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL C", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL D", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL E", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL F", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL G", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL H", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL I", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL J", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL K", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL L", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL M", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL N", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL O", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL P", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL Q", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL R", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL S", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL T", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL U", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL V", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL W", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL X", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL Y", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC CAPITAL Z", "Lu", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL A", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL B", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL C", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL D", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL E", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL F", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL G", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL H", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL I", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL J", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL K", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL L", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL M", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL N", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL O", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL P", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL Q", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL R", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL S", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL T", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL U", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL V", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL W", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL X", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL Y", "Ll", 0),
("MATHEMATICAL SANS-SERIF ITALIC SMALL Z", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL A", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL B", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL C", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL D", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL E", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL F", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL G", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL H", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL I", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL J", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL K", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL L", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL M", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL N", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL O", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL P", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL Q", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL R", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL S", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL T", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL U", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL V", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL W", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL X", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL Y", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL Z", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL A", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL B", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL C", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL D", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL E", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL F", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL G", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL H", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL I", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL J", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL K", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL L", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL M", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL N", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL O", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL P", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL Q", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL R", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL S", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL T", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL U", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL V", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL W", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL X", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL Y", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL Z", "Ll", 0),
("MATHEMATICAL MONOSPACE CAPITAL A", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL B", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL C", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL D", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL E", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL F", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL G", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL H", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL I", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL J", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL K", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL L", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL M", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL N", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL O", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL P", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL Q", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL R", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL S", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL T", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL U", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL V", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL W", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL X", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL Y", "Lu", 0),
("MATHEMATICAL MONOSPACE CAPITAL Z", "Lu", 0),
("MATHEMATICAL MONOSPACE SMALL A", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL B", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL C", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL D", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL E", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL F", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL G", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL H", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL I", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL J", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL K", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL L", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL M", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL N", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL O", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL P", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL Q", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL R", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL S", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL T", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL U", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL V", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL W", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL X", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL Y", "Ll", 0),
("MATHEMATICAL MONOSPACE SMALL Z", "Ll", 0),
("MATHEMATICAL ITALIC SMALL DOTLESS I", "Ll", 0),
("MATHEMATICAL ITALIC SMALL DOTLESS J", "Ll", 0),
(),
(),
("MATHEMATICAL BOLD CAPITAL ALPHA", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL BETA", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL GAMMA", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL DELTA", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL EPSILON", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL ZETA", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL ETA", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL THETA", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL IOTA", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL KAPPA", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL LAMDA", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL MU", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL NU", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL XI", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL OMICRON", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL PI", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL RHO", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL THETA SYMBOL", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL SIGMA", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL TAU", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL UPSILON", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL PHI", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL CHI", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL PSI", "Lu", 0),
("MATHEMATICAL BOLD CAPITAL OMEGA", "Lu", 0),
("MATHEMATICAL BOLD NABLA", "Sm", 0),
("MATHEMATICAL BOLD SMALL ALPHA", "Ll", 0),
("MATHEMATICAL BOLD SMALL BETA", "Ll", 0),
("MATHEMATICAL BOLD SMALL GAMMA", "Ll", 0),
("MATHEMATICAL BOLD SMALL DELTA", "Ll", 0),
("MATHEMATICAL BOLD SMALL EPSILON", "Ll", 0),
("MATHEMATICAL BOLD SMALL ZETA", "Ll", 0),
("MATHEMATICAL BOLD SMALL ETA", "Ll", 0),
("MATHEMATICAL BOLD SMALL THETA", "Ll", 0),
("MATHEMATICAL BOLD SMALL IOTA", "Ll", 0),
("MATHEMATICAL BOLD SMALL KAPPA", "Ll", 0),
("MATHEMATICAL BOLD SMALL LAMDA", "Ll", 0),
("MATHEMATICAL BOLD SMALL MU", "Ll", 0),
("MATHEMATICAL BOLD SMALL NU", "Ll", 0),
("MATHEMATICAL BOLD SMALL XI", "Ll", 0),
("MATHEMATICAL BOLD SMALL OMICRON", "Ll", 0),
("MATHEMATICAL BOLD SMALL PI", "Ll", 0),
("MATHEMATICAL BOLD SMALL RHO", "Ll", 0),
("MATHEMATICAL BOLD SMALL FINAL SIGMA", "Ll", 0),
("MATHEMATICAL BOLD SMALL SIGMA", "Ll", 0),
("MATHEMATICAL BOLD SMALL TAU", "Ll", 0),
("MATHEMATICAL BOLD SMALL UPSILON", "Ll", 0),
("MATHEMATICAL BOLD SMALL PHI", "Ll", 0),
("MATHEMATICAL BOLD SMALL CHI", "Ll", 0),
("MATHEMATICAL BOLD SMALL PSI", "Ll", 0),
("MATHEMATICAL BOLD SMALL OMEGA", "Ll", 0),
("MATHEMATICAL BOLD PARTIAL DIFFERENTIAL", "Sm", 0),
("MATHEMATICAL BOLD EPSILON SYMBOL", "Ll", 0),
("MATHEMATICAL BOLD THETA SYMBOL", "Ll", 0),
("MATHEMATICAL BOLD KAPPA SYMBOL", "Ll", 0),
("MATHEMATICAL BOLD PHI SYMBOL", "Ll", 0),
("MATHEMATICAL BOLD RHO SYMBOL", "Ll", 0),
("MATHEMATICAL BOLD PI SYMBOL", "Ll", 0),
("MATHEMATICAL ITALIC CAPITAL ALPHA", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL BETA", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL GAMMA", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL DELTA", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL EPSILON", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL ZETA", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL ETA", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL THETA", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL IOTA", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL KAPPA", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL LAMDA", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL MU", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL NU", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL XI", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL OMICRON", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL PI", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL RHO", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL THETA SYMBOL", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL SIGMA", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL TAU", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL UPSILON", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL PHI", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL CHI", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL PSI", "Lu", 0),
("MATHEMATICAL ITALIC CAPITAL OMEGA", "Lu", 0),
("MATHEMATICAL ITALIC NABLA", "Sm", 0),
("MATHEMATICAL ITALIC SMALL ALPHA", "Ll", 0),
("MATHEMATICAL ITALIC SMALL BETA", "Ll", 0),
("MATHEMATICAL ITALIC SMALL GAMMA", "Ll", 0),
("MATHEMATICAL ITALIC SMALL DELTA", "Ll", 0),
("MATHEMATICAL ITALIC SMALL EPSILON", "Ll", 0),
("MATHEMATICAL ITALIC SMALL ZETA", "Ll", 0),
("MATHEMATICAL ITALIC SMALL ETA", "Ll", 0),
("MATHEMATICAL ITALIC SMALL THETA", "Ll", 0),
("MATHEMATICAL ITALIC SMALL IOTA", "Ll", 0),
("MATHEMATICAL ITALIC SMALL KAPPA", "Ll", 0),
("MATHEMATICAL ITALIC SMALL LAMDA", "Ll", 0),
("MATHEMATICAL ITALIC SMALL MU", "Ll", 0),
("MATHEMATICAL ITALIC SMALL NU", "Ll", 0),
("MATHEMATICAL ITALIC SMALL XI", "Ll", 0),
("MATHEMATICAL ITALIC SMALL OMICRON", "Ll", 0),
("MATHEMATICAL ITALIC SMALL PI", "Ll", 0),
("MATHEMATICAL ITALIC SMALL RHO", "Ll", 0),
("MATHEMATICAL ITALIC SMALL FINAL SIGMA", "Ll", 0),
("MATHEMATICAL ITALIC SMALL SIGMA", "Ll", 0),
("MATHEMATICAL ITALIC SMALL TAU", "Ll", 0),
("MATHEMATICAL ITALIC SMALL UPSILON", "Ll", 0),
("MATHEMATICAL ITALIC SMALL PHI", "Ll", 0),
("MATHEMATICAL ITALIC SMALL CHI", "Ll", 0),
("MATHEMATICAL ITALIC SMALL PSI", "Ll", 0),
("MATHEMATICAL ITALIC SMALL OMEGA", "Ll", 0),
("MATHEMATICAL ITALIC PARTIAL DIFFERENTIAL", "Sm", 0),
("MATHEMATICAL ITALIC EPSILON SYMBOL", "Ll", 0),
("MATHEMATICAL ITALIC THETA SYMBOL", "Ll", 0),
("MATHEMATICAL ITALIC KAPPA SYMBOL", "Ll", 0),
("MATHEMATICAL ITALIC PHI SYMBOL", "Ll", 0),
("MATHEMATICAL ITALIC RHO SYMBOL", "Ll", 0),
("MATHEMATICAL ITALIC PI SYMBOL", "Ll", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL ALPHA", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL BETA", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL GAMMA", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL DELTA", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL EPSILON", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL ZETA", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL ETA", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL THETA", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL IOTA", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL KAPPA", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL LAMDA", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL MU", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL NU", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL XI", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL OMICRON", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL PI", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL RHO", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL THETA SYMBOL", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL SIGMA", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL TAU", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL UPSILON", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL PHI", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL CHI", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL PSI", "Lu", 0),
("MATHEMATICAL BOLD ITALIC CAPITAL OMEGA", "Lu", 0),
("MATHEMATICAL BOLD ITALIC NABLA", "Sm", 0),
("MATHEMATICAL BOLD ITALIC SMALL ALPHA", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL BETA", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL GAMMA", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL DELTA", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL EPSILON", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL ZETA", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL ETA", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL THETA", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL IOTA", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL KAPPA", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL LAMDA", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL MU", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL NU", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL XI", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL OMICRON", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL PI", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL RHO", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL FINAL SIGMA", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL SIGMA", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL TAU", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL UPSILON", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL PHI", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL CHI", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL PSI", "Ll", 0),
("MATHEMATICAL BOLD ITALIC SMALL OMEGA", "Ll", 0),
("MATHEMATICAL BOLD ITALIC PARTIAL DIFFERENTIAL", "Sm", 0),
("MATHEMATICAL BOLD ITALIC EPSILON SYMBOL", "Ll", 0),
("MATHEMATICAL BOLD ITALIC THETA SYMBOL", "Ll", 0),
("MATHEMATICAL BOLD ITALIC KAPPA SYMBOL", "Ll", 0),
("MATHEMATICAL BOLD ITALIC PHI SYMBOL", "Ll", 0),
("MATHEMATICAL BOLD ITALIC RHO SYMBOL", "Ll", 0),
("MATHEMATICAL BOLD ITALIC PI SYMBOL", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL ALPHA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL BETA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL GAMMA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL DELTA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL EPSILON", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL ZETA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL ETA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL THETA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL IOTA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL KAPPA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL LAMDA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL MU", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL NU", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL XI", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL OMICRON", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL PI", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL RHO", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL THETA SYMBOL", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL SIGMA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL TAU", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL UPSILON", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL PHI", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL CHI", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL PSI", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD CAPITAL OMEGA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD NABLA", "Sm", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL ALPHA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL BETA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL GAMMA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL DELTA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL EPSILON", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL ZETA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL ETA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL THETA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL IOTA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL KAPPA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL LAMDA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL MU", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL NU", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL XI", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL OMICRON", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL PI", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL RHO", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL FINAL SIGMA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL SIGMA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL TAU", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL UPSILON", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL PHI", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL CHI", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL PSI", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD SMALL OMEGA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD PARTIAL DIFFERENTIAL", "Sm", 0),
("MATHEMATICAL SANS-SERIF BOLD EPSILON SYMBOL", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD THETA SYMBOL", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD KAPPA SYMBOL", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD PHI SYMBOL", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD RHO SYMBOL", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD PI SYMBOL", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL ALPHA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL BETA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL GAMMA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL DELTA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL EPSILON", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL ZETA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL ETA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL THETA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL IOTA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL KAPPA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL LAMDA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL MU", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL NU", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL XI", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL OMICRON", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL PI", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL RHO", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL THETA SYMBOL", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL SIGMA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL TAU", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL UPSILON", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL PHI", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL CHI", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL PSI", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL OMEGA", "Lu", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC NABLA", "Sm", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL ALPHA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL BETA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL GAMMA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL DELTA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL EPSILON", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL ZETA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL ETA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL THETA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL IOTA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL KAPPA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL LAMDA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL MU", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL NU", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL XI", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL OMICRON", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL PI", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL RHO", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL FINAL SIGMA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL SIGMA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL TAU", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL UPSILON", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL PHI", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL CHI", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL PSI", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL OMEGA", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC PARTIAL DIFFERENTIAL", "Sm", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC EPSILON SYMBOL", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC THETA SYMBOL", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC KAPPA SYMBOL", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC PHI SYMBOL", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC RHO SYMBOL", "Ll", 0),
("MATHEMATICAL SANS-SERIF BOLD ITALIC PI SYMBOL", "Ll", 0),
("MATHEMATICAL BOLD CAPITAL DIGAMMA", "Lu", 0),
("MATHEMATICAL BOLD SMALL DIGAMMA", "Ll", 0),
(),
(),
("MATHEMATICAL BOLD DIGIT ZERO", "Nd", 0),
("MATHEMATICAL BOLD DIGIT ONE", "Nd", 0),
("MATHEMATICAL BOLD DIGIT TWO", "Nd", 0),
("MATHEMATICAL BOLD DIGIT THREE", "Nd", 0),
("MATHEMATICAL BOLD DIGIT FOUR", "Nd", 0),
("MATHEMATICAL BOLD DIGIT FIVE", "Nd", 0),
("MATHEMATICAL BOLD DIGIT SIX", "Nd", 0),
("MATHEMATICAL BOLD DIGIT SEVEN", "Nd", 0),
("MATHEMATICAL BOLD DIGIT EIGHT", "Nd", 0),
("MATHEMATICAL BOLD DIGIT NINE", "Nd", 0),
("MATHEMATICAL DOUBLE-STRUCK DIGIT ZERO", "Nd", 0),
("MATHEMATICAL DOUBLE-STRUCK DIGIT ONE", "Nd", 0),
("MATHEMATICAL DOUBLE-STRUCK DIGIT TWO", "Nd", 0),
("MATHEMATICAL DOUBLE-STRUCK DIGIT THREE", "Nd", 0),
("MATHEMATICAL DOUBLE-STRUCK DIGIT FOUR", "Nd", 0),
("MATHEMATICAL DOUBLE-STRUCK DIGIT FIVE", "Nd", 0),
("MATHEMATICAL DOUBLE-STRUCK DIGIT SIX", "Nd", 0),
("MATHEMATICAL DOUBLE-STRUCK DIGIT SEVEN", "Nd", 0),
("MATHEMATICAL DOUBLE-STRUCK DIGIT EIGHT", "Nd", 0),
("MATHEMATICAL DOUBLE-STRUCK DIGIT NINE", "Nd", 0),
("MATHEMATICAL SANS-SERIF DIGIT ZERO", "Nd", 0),
("MATHEMATICAL SANS-SERIF DIGIT ONE", "Nd", 0),
("MATHEMATICAL SANS-SERIF DIGIT TWO", "Nd", 0),
("MATHEMATICAL SANS-SERIF DIGIT THREE", "Nd", 0),
("MATHEMATICAL SANS-SERIF DIGIT FOUR", "Nd", 0),
("MATHEMATICAL SANS-SERIF DIGIT FIVE", "Nd", 0),
("MATHEMATICAL SANS-SERIF DIGIT SIX", "Nd", 0),
("MATHEMATICAL SANS-SERIF DIGIT SEVEN", "Nd", 0),
("MATHEMATICAL SANS-SERIF DIGIT EIGHT", "Nd", 0),
("MATHEMATICAL SANS-SERIF DIGIT NINE", "Nd", 0),
("MATHEMATICAL SANS-SERIF BOLD DIGIT ZERO", "Nd", 0),
("MATHEMATICAL SANS-SERIF BOLD DIGIT ONE", "Nd", 0),
("MATHEMATICAL SANS-SERIF BOLD DIGIT TWO", "Nd", 0),
("MATHEMATICAL SANS-SERIF BOLD DIGIT THREE", "Nd", 0),
("MATHEMATICAL SANS-SERIF BOLD DIGIT FOUR", "Nd", 0),
("MATHEMATICAL SANS-SERIF BOLD DIGIT FIVE", "Nd", 0),
("MATHEMATICAL SANS-SERIF BOLD DIGIT SIX", "Nd", 0),
("MATHEMATICAL SANS-SERIF BOLD DIGIT SEVEN", "Nd", 0),
("MATHEMATICAL SANS-SERIF BOLD DIGIT EIGHT", "Nd", 0),
("MATHEMATICAL SANS-SERIF BOLD DIGIT NINE", "Nd", 0),
("MATHEMATICAL MONOSPACE DIGIT ZERO", "Nd", 0),
("MATHEMATICAL MONOSPACE DIGIT ONE", "Nd", 0),
("MATHEMATICAL MONOSPACE DIGIT TWO", "Nd", 0),
("MATHEMATICAL MONOSPACE DIGIT THREE", "Nd", 0),
("MATHEMATICAL MONOSPACE DIGIT FOUR", "Nd", 0),
("MATHEMATICAL MONOSPACE DIGIT FIVE", "Nd", 0),
("MATHEMATICAL MONOSPACE DIGIT SIX", "Nd", 0),
("MATHEMATICAL MONOSPACE DIGIT SEVEN", "Nd", 0),
("MATHEMATICAL MONOSPACE DIGIT EIGHT", "Nd", 0),
("MATHEMATICAL MONOSPACE DIGIT NINE", "Nd", 0),
)
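# ---------------------------------------------------------------------------
# Usage sketch (an illustrative addition, not part of the generated table
# above). Each entry in the table is a (character name, general category,
# canonical combining class) triple, and the empty tuples preserve the
# offsets of unassigned code points -- e.g. the slot after MATHEMATICAL
# FRAKTUR CAPITAL Y stays empty because Fraktur Z is already encoded at
# U+2128 in the Letterlike Symbols block. The names `MATH_BLOCK_START` and
# `lookup` below are assumptions for illustration; the real table may be
# bound and indexed differently.
# ---------------------------------------------------------------------------

MATH_BLOCK_START = 0x1D400  # U+1D400 MATHEMATICAL BOLD CAPITAL A, assuming
                            # the table starts at the Mathematical
                            # Alphanumeric Symbols block.

def lookup(table, codepoint, block_start=MATH_BLOCK_START):
    """Return the (name, category, combining) triple for `codepoint`,
    or None when the slot is an unassigned placeholder."""
    entry = table[codepoint - block_start]
    return entry or None

# Tiny self-contained demo with a two-entry excerpt of such a table:
_excerpt = (
    ("MATHEMATICAL BOLD CAPITAL A", "Lu", 0),
    ("MATHEMATICAL BOLD CAPITAL B", "Lu", 0),
)
assert lookup(_excerpt, 0x1D400)[1] == "Lu"
assert lookup(_excerpt, 0x1D401)[0].endswith("CAPITAL B")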
https://github.com/DawnEver/typst-academic-cv | https://raw.githubusercontent.com/DawnEver/typst-academic-cv/main/main_en.typ | typst

#import "template.typ": *
#show: project.with(
)
#info_en(
name: "<NAME>",
phone:"+86 19551570317",
email:"<EMAIL>",
github:"github.com/DawnEver",
// youtube:"youtu.be/LordBaobao",
orcid:"0009-0009-6694-2782",
blog:"www.baomingyang.site"
)
= Education Background
#event(
date:"2021 Sep. - 2025 Jun.",
title:"Huazhong University of Science and Technology(HUST)",
event:"Bachelor of Engineering",
)[
#h(2em)
*GPA:*
#h(1em)
84.7/100\
*College:*
// #h(1em)
School of Electrical and Electronic Engineering (SEEE)\
*Major:*
// #h(1em)
Electrical and Electronic Engineering\
*Courses:*
// #h(1em)
Electrical Machinery Theory,
Electric Drive and Control Systems,
Power Electronics
]
= Skills
#grid(columns:(1fr,2fr,2fr,2fr,2fr),
strong[English:],
[#strong[IELTS] 6.5],
)
#skills()
= Research Experience
#event(
date:"2022 Mar. - now",
title:"Hi-Motor Series",
event:"Leader/Fullstack Developer",
)[\
- Lead an 18-member undergraduate team in software development, related research, and\ business collaboration.
- Develop _hi-motor designer_ for the design and optimization of high-performance motors,\ especially synchronous reluctance motors, based on Python and FEMM.
]
#event(
date:"2023 Aug. - 2023 Sep.",
title:[Design and Optimization of Flux-Barrier End Shape in\ Synchronous Reluctance Motor Based on B-splines],
event:"Primary Person",
)[\
- Propose a novel design method of flux-barrier end shape based on B-spline curves.
- Achieve an effective electro-mechanical co-optimization workflow with sensitivity\ analysis, surrogate models, intelligent algorithms, and multi-level optimization.
- Provide optimized motor designs with reduced torque ripple and maximum stress,\ without significant effect on other machine performance.
]
#event(
date:"2023 May. - 2024 May.",
title:"New Energy Forecast and Consumption Platform",
event:"Developer",
)[Approved ¥20000 funding\
- Propose time series forecast algorithms based on an attention mechanism,\ a TCN-BiLSTM network, and decomposition-based error correction.
- Develop a web platform for new energy forecasting and consumption warning.
]
#pagebreak()
#event(
date:"2023 Jul. - 2023 Aug. 2024 Jun. - 2024 Aug.",
title:"Strategic Internship, Bosch (China) Investment Ltd.",
event:"Fullstack Developer",
)[CR/RMD-AP, Shanghai, China\
- Design and optimization of switched reluctance motors used in power tools,\ including structure optimization, PFC circuits, and sensorless control.
]
#event(
date:"2024 Jun. - 2025 Jun.",
title:[Fundamental Research Funds for the Central Universities, HUST],
event:"Primary Person",
)[Approved ¥50000 funding
- Design and optimization of a permanent magnet assisted synchronous reluctance\ motor based on unequal-turn windings.
]
= Honors and Awards
#event(
date:"2024 Oct. 12 - 16",
title:"China International College Students’ Innovation Competition",
event:"Gold Award",
)[Shanghai Jiao Tong University]
#event(
date:"2023 Dec. 7 - 9",
title:"IEEE Student Conference on Electric Machines and Systems",
event:"Best Presenter Award",
)[Huzhou, China]
#event(
date:"2024 Feb. 2 - 5",
title:"Mathematical Contest In Modeling",
event:"Finalist(2%)",
)[Student Advisor]
#grid(columns: (auto,auto),
gutter: 5em,
[#box(baseline: -20%)[#sym.triangle.filled]
#strong[Sieyuan Scholarship] (8/412)],
[#box(baseline: -20%)[#sym.triangle.filled]
#strong[Self-improvement Student] (7/412)],
)
= Extracurricular Activities
#event(
date:"2022 Oct. - 2023 Sep.",
title:"Association for Mathematical Modeling, HUST",
event:"Vice President"
)[Mathematical Modeling/Event Planning\
- Organize school-wide and cross-school lectures for contests like MCM/ICM.
- Participate in textbook and video course development in mathematical modeling.
]
#event(
date:"2022 Sep. - 2023 Aug.",
title:"Publicity Department, Student Union of SEEE, HUST",
event:"Minister"
)[Writing/Graphic Design\
- Generate positive publicity and media coverage of students and major events,\ such as the 70th anniversary celebration.
]
= Publications
#publication_legend()
#publication(
authors:(strong[<NAME>], [<NAME>], [<NAME>], [<NAME>], [<NAME>], [<NAME>], [<NAME>], [<NAME>]),
title:"Novel Design Method of Flux-Barrier End Shape of Synchronous Reluctance Motor Based on B-spline Curves",
booktitle:"2023 IEEE 6th Student Conference on Electric Machines and Systems (SCEMS)",
location:"Huzhou, China",
number:"",
page:"1--8",
date:"Dec. 2023",
doi:"10.1109/SCEMS60579.2023.10379317",
type:"conference",
)
#publication(
authors:([<NAME>], [<NAME>], strong[<NAME>], [<NAME>], [<NAME>], [<NAME>]),
title:"Design and Validation of a High-Efficiency Synchronous Reluctance Motor",
booktitle:"2023 IEEE 26th International Conference on Electric Machines and Systems (ICEMS)",
location:"Zhuhai, China",
number:"",
page:"1--8",
date:"Nov. 2023",
doi:"10.1109/ICEMS59686.2023.10345091",
type:"conference",
)
#publication(
authors:([<NAME>], [<NAME>], strong[M. Bao], [<NAME>], [<NAME>]),
title:"Multi-step Short-term Load Forecasting Based on Attention Mechanism, TCN-BiLSTM Network and Decomposition-based Error Correction",
booktitle:"2024 IEEE 7th Asia Conference on Energy and Electrical Engineering (ACEEE)",
page:"224-231",
date:"July. 2024",
doi:"10.1109/ACEEE62329.2024.10651918",
type:"conference",
)
#publication(
authors:([<NAME>], [<NAME>], [<NAME>], strong[M. Bao], [<NAME>]),
title:"Rotor with Adjacent Pole Mirror Image of Synchronous Reluctance Motor and Permanent Magnet Assisted Synchronous Reluctance Motor",
location:"Invention Patent, Publication",
number:"CN116722678A",
date:"Sep. 2023",
type:"patent",
)
#publication(
authors:([<NAME>], [<NAME>], [<NAME>], strong[M. Bao], [R. Qu]),
title:"A Permanent Magnet Assisted Synchronous Reluctance Motor of Low Torque Ripple",
location:"Invention Patent, Publication",
number:"CN116505683B",
date:"Apr. 2023",
type:"patent",
)
#publication(
authors:([<NAME>], strong[<NAME>], [<NAME>], [<NAME>], [<NAME>], [<NAME>], [R. Qu]),
title:"Design Method of Flux-Barrier End Shape of Synchronous Reluctance Motor Based on B-spline Curves",
location:"Invention Patent, Applying",
number:"",
date:"Aug. 2024",
type:"patent",
)
#publication(
authors:(strong[<NAME>], [<NAME>], [<NAME>]),
title:"Hi-Motor Hub: intelligent Selection Tool for High-efficiency Motors V1.0",
location:"China Software Copyright, Publication",
number:"2023SR1417580",
date:"Nov. 2023",
type:"software",
)
#publication(
authors:(strong[<NAME>], [<NAME>], [<NAME>], [<NAME>]),
title:"Hi-Motor Designer: intelligent Software for Design and Optimization of Synchronous Reluctance Motor V1.0",
location:"China Software Copyright, Publication",
number:"2023SR0446741",
date:"Apr. 2023",
type:"software",
)
#publication(
authors:([<NAME>], strong[<NAME>], [<NAME>], [<NAME>], [<NAME>]),
title:"Intelligent Analysis Platform for New Energy Consumption",
location:"China Software Copyright, Publication",
number:"2024SR0786617",
date:"June. 2024",
type:"software",
)
= Referees
#box(baseline: -20%)[#sym.triangle.filled] <NAME> (supervisor), Professor at Huazhong University of Science and Technology,\ <EMAIL>\
#box(baseline: -20%)[#sym.triangle.filled] <NAME> (supervisor), Associate Professor at Huazhong University of Science and Technology,\ <EMAIL>\
#box(baseline: -20%)[#sym.triangle.filled] <NAME> (mentor), Senior Engineer at Bosch (China) Investment Ltd.,\ <EMAIL>
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/043%20-%20Innistrad%3A%20Midnight%20Hunt/001_Episode%201%3A%20The%20Witch%20of%20the%20Woods.typ | typst

#import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Episode 1: The Witch of the Woods",
set_name: "Innistrad: Midnight Hunt",
story_date: datetime(day: 02, month: 09, year: 2021),
author: "<NAME>",
doc
)
#emph[Don't hunt in Kessig], they said. #emph[The dogs will find you.]
Maybe that had been true right after The Travails, when you couldn't swing a cat without a wolfir devouring it, but it isn't true anymore. #emph[Those ] dogs are dying off, and the woods are keen for the taking. They say you can always tell a Falkenrath by their awful stubbornness about hunting; their ravenous nature; their unending quest to close their talons around the food that least wants to be eaten. To be a Falkenrath is to make your home in the heights, such that everyone else sees you hunt.
#figure(image("001_Episode 1: The Witch of the Woods/01.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none)
Klaus is no different. His feet slap against the brush; blood drips from his slick chin onto the reddening leaves of the Ulvenwald; bolts whistle past his ears. Despite all this, he grins. They saw him, all right. Perhaps the traveling monk disguise was a bit of an affront—the pack of hunters at his back are worked into a holy froth. He didn't even know they carried so many bolts on them at once—the #emph[thunk thunk thunk ] of them is like the rap of a giant's knuckles on the trees around him.
A fallen trunk bars his path; he leaps it and chances a look at his pursuers. Five of them: two big and broad, armed with crossbows more akin to ballistae than anything portable. Cute. But soon enough it won't matter what they carry.
The thought pulls a laugh deep from the depths of his chest. Every drop of his alchemically refined blood calls out for dusk, and dusk has finally answered. A chorus rises within him, a cult begging the arrival of their unseen god, and he knows deliverance is near.
Because the truth is that he's never really been in danger. Dogs could trouble him, holy men can trouble him, other vampires can trouble him—but these humans are no issue at all. Falcons don't fear mice—even if the mice have sharp claws.
"I wasn't aware so many of you had a death wish," he calls over his shoulder. The elder's blood has him feeling bolder than ever. Will the villagers ever sleep again knowing how easily he wormed his way into their hearts? Likely not, likely not, but that won't stop him from coming back in a few weeks to see. It's important to follow up on your investments.
And sowing fear is always an investment.
Rather than answering him, the hunters fire their bolts, the two largest shooting through the air lightning quick and loud as thunder. Both aim straight for his head. They're good shots, but he's no stag, no bear, no simpering forest creature. Unnatural speed sees him dodge one and pluck the other right out of the air. A #emph[stake] ? My, but they were getting bold, weren't they?
But Klaus, he's in a good mood. Magnanimous even, glutted as he is, the blood still staining the talismans he sold to the unwitting villagers. Stake in hand, he leaps up to an overhanging branch. With its secure weight beneath his feet, he turns to face the hunters beneath him.
"Gentlemen, ladies," he says. "I must thank you for the exercise. Truly."
Look at them. Look at the fear written across their faces, the deep lines carved by worry. Pitiful.
"However, if you fine little morsels will look to the sky, you'll realize what time it is," he says. Already he can feel it, his body straining beneath the glamour, his teeth growing longer and sharper. In times like these, a human form is only a hindrance. The other vampiric lines don't seem to realize that, but the Falkenrath do. Strength is the only thing that matters. That strength comes from the blood, and the blood forever separates them from the dregs of human life. Isn't it best to take advantage of it? Isn't it best to see where your blood can take you?
His is just starting to boil.
"#emph[Setting sun, hunting's done] ," he says, but his mouth has already changed, already become something inhuman; his body elongating to a form more monstrous and fearsome than any these peasants have ever seen. It comes out in a growl deep and hungry.
Their fear is delicious to him. Their dilated pupils, their breath ragged as they behold him! The moon parts the veil of the clouds for an instant; its silver light renders his terrifying visage even more apparent, even more horrendous. The air is thick with anticipation.
Klaus bares his teeth.
This is only natural.
And, perhaps, it is also natural what happens next—the hunters sharing knowing looks, their mouths growing into grins just as grisly as Klaus's. One by one, they drop their weapons. The largest of them, a man resembling a slab of wood more than a human, laughs in a tone just as deep, just as hungry.
He hardly has time to prepare for it: the moonlight's caress on the awaiting hunters, their bodies exploding from their fleshy constraints into their real forms: towering beasts, their tongues lapping at their muzzles, their fur doing nothing to conceal the dense slabs of muscle that make up their wild bodies. The two largest are more like a stitcher's dream than any dog he's seen before, their chests like the kegs of beer he once brewed with his father, their arms thick as the trunk of the tree upon which he perches.
His throat closes.
"That rhyme," snarls the leader, "only applies to humans."
Klaus knows well when to run, when to flee, when to take to the skies like the falcons he so strives to emulate. He leaps from the branch. If he can change his form quickly enough—
But he can't.
Dogs, after all, can pluck anything out of the air if they put their minds to it.
Jaws crush his chest. He's on the ground before he knows what's happened, the wolves circling, looking down on him as if a two-hundred-year-old vampire is no more to them than a bag of meat.
"You can't do this to me," he stammers. "This isn't how it works. The night—"
"The night belongs to those who take it," says the leader just before his mouth changes to muzzle.
It is the last thing Klaus ever hears.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
She watches her breath puff up into a cloud in front of her.
If she tries, she can see all sorts of shapes as it dissipates: the wings of a watchful angel, wolves baying, bats circling. Someone, somewhere, might even try to figure out who she is based on those images. She's heard of that sort of thing—priests who ask you what you see in the sky and use it to determine what it is you're afraid of.
<NAME> knows who she is, but she wouldn't mind having someone to talk to about it. Especially these days. Innistrad's home to her and always has been, but it's never looked like this. There's frost everywhere she looks. Ice clings to the great trees she scrabbled up as a child, a light layer of white dusts the cloaks and coats of the mourning villagers, the familiar crunch of leaves beneath her feet have changed to something else. Sundials tell her it is nearly six in the evening, but the clock in the center of the village says it's half past four. Sundown comes sooner and sooner.
And with it, the moon.
Always the moon.
#figure(image("001_Episode 1: The Witch of the Woods/02.jpg", width: 100%), caption: [Arlinn, the Pack's Hope | Art by: <NAME>], supplement: none, numbering: none)
She can feel it even now as she sits in the elder's old home, even now as she tells his wife that she'll do her best to investigate these murders.
"It's every night, isn't it?" the woman says to her. Her voice is hardly more than a creak. "At night, I hear them calling to each other. My Finneas always says that if we mind our symbols, we'll be safe from them, but last night~"
In the other room, her Finneas's blood paints the walls red. Arlinn swallows. Her eyes fall on the Avacynian symbol right above the fireplace—half solid stone, half wire, and straw. The Travails took much from Innistrad, but faith is a hard thing to break. Even when the object of that faith falls as hard and far as Avacyn fell.
"It just doesn't make sense," says the woman—Agatha. "She was supposed to protect us. Everything seemed~for a little while, it was~"
Arlinn covers Agatha's hand with her own. Sometimes, in the face of unspeakable odds, a simple human connection can find its voice. Agatha sniffs. She looks up at the symbol herself, her gaze falling to the ground the moment she beholds it.
"We aren't alone," Arlinn says. "No matter how dark it may seem, the dawn will come—one way or another."
"Easy for you to say."
But it isn't easy to say at all. Especially not for Arlinn, who remembers so vividly the angel's raised spear. For weeks after The Travails, her wolves wanted nothing to do with human society, and she could hardly blame them. To walk among man was to breathe in their sorrows and bear their weight. The woods brought life, the roads and churches and villages only endless death.
Yet death is everywhere on Innistrad, and to turn from it is to turn away from the beauty of human endeavor. Living in the woods is easier, yes; simpler, yes; but the triumph of a hunt is a distant second to the triumph of a village against the encroaching night. To build a place where children do not fear the dark takes many years, but its rewards last generations.
So she visits the villages and towns of Kessig, doing what she can to fortify them against the darkness.
Agatha throws another log on the fire. As she sits back down into the old, thread-worn chair, she pulls her husband's cloak more tightly around herself. Her breath is misting, too. Arlinn considers asking her what she sees in it.
"<NAME>," she says.
"Yes?"
"It's getting darker, isn't it?"
Arlinn swallows. A glance out the window is all that's needed to confirm Agatha's fears. They both know what the answer is. That she'd even ask speaks to how raw her husband's death the previous night has left her; Kessigers so often rely on their superstitions to keep them safe from things they'd rather not name.
Best not to lie. "Yes, I think it is."
Agatha draws up her knees. "Gustav and Klein say their crops aren't growing the way they ought to. The chill's been bad for them, but they aren't getting any light either."
"Harvest isn't far," Arlinn says. "You'll have to increase stores, but there should be enough to feed everyone this season. The hunters can make up the rest."
"This season," Agatha repeats. "What about the next? And what happens when all of our hunters are~"
She gestures to the other room, to the blood Arlinn can taste at the back of her throat. The scent of it calls to a primal part of her—a part that wants to say that the hunters will find more meat than ever when there are so many wolves among them.
"They say it was a vampire. Can you believe that? A vampire out here?" Agatha says. "The watch, they tracked him down. Asked me if I wanted to see his heart. They said it was an easy thing to kill him."
"I think I saw them on my way here," Arlinn says. "Working on~it looked a bit like a scarecrow, but much larger. And with fangs. Some kind of effigy."
There's a weak smile on Agatha's face. It feels like progress. "That's the witch's doing. Finneas thinks it's a good idea, that she can help us. Thought it was."
Arlinn pours her another cup of tea. In the chill air, steam curls out of the cups, reaching higher and higher. The bright scent of herbs makes the gray den brighter.
"Here," she says. "All those tears are going to leave you thirsty, whether you know it or not."
She smiles again and tips the cup to her lips. "It's good. I don't know what you put in it, but the spices feel warm."
"It's an old family recipe," Arlinn says. Really, it's mixed largely based on what felt right to her nose the last time she was out in the woods. "If I shared it, they'd haunt me."
There's something like a laugh from Agatha—a short breath, a longer one. "Can't have that."
"No, we can't," says Arlinn. She pours herself a cup, too. "So—here's an idea. As long as we're drinking these two cups, we'll talk about our families. I'll tell you all about my brothers, and you can tell me about Finneas."
Her nod is half-hidden in that oversized wool sweater. "All right. I-I can do that."
"I'm glad," says Arlinn. "And afterward, you can tell me more about that witch."
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Arlinn knows these woods, and they know her. Everywhere her eye lands, a memory waits to greet her. Here—the scratches in an oak from an old hunt. For two days, she and her wolves tracked a white stag through the forest. You'd think it'd be easier to find than it was, but there was something about that stag, something that bewitched her whenever she caught its scent. By the time she and the wolves cornered it at the foot of a cliff, they let it go. Sometimes just laying your eyes on something's gift enough.
That isn't what the wolf tells her, though. She remembers seeing it before her: its eyes the pink of blood and water, its fur bright as the snow she so often dreamed about. She remembers the hunger growing in the pit of her stomach. When you're on all fours, it's so #emph[easy ] to taste something, so #emph[easy ] to bite and tear and claw. The timber wolves at her side made their intentions known in low growls and gnashing teeth. They were hungry, too.
But there was something of the moon about that stag, something that told her that it was not for their stomachs. Innocent beauty was a rare thing on Innistrad, as rare as innocence, and she'd not be the one to strike it down. Arlinn shifted back to her human form. The wolves sat, grumpy though they were, and said no more as she whispered a blessing.
Away ran the white stag.
Back to the hunt, the wolves.
In the end, finding another meal hadn't been so difficult. They'd gone to bed, the five of them, curled up around each other with their stomachs filled with less holy meat.
And when in the morning they awoke, there was a skull before them, resting atop a sword driven into the ground. White fur clung to the bone. She knew the sword, she knew the scent that clung to the deer's flesh, she knew the message.
Tovolar never liked her softer tendencies.
Wherever he is and whatever he's doing are no longer her problem. They chose their paths long ago. He found his pack, and she found hers.
The wolves are eager to meet her and eager to play. #emph[Find witches], she told them, and they're happy to help in whatever way they can. Every few minutes as she runs through the woods, she'll hear one call and run over only to find an oddly shaped bough waiting for her, and the wolf looking at her expectantly. She thanks them, of course; even these strange boughs have their own clues waiting.
#figure(image("001_Episode 1: The Witch of the Woods/03.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none)
The further into the woods they go, the more the scent of the place changes. An astringent odor burns the inside of her nostrils; a warm, cinnamon perfume soon follows. When she shifts back to her human shape, she can see the bough more clearly: there #emph[is ] a clue here. A series of crescents and rounds line it, shaped by a careful hand. On the end—hanging from a branch—is a polished piece of opal. She squints. Are those shapes carved into it decorative, or are they... Agatha said Finneas followed secret signs to find the enclave.
Arlinn gives her companion a ruffle between the ears. "Good work," she says. "Let's fan out—that way."
He hops up and sinks back down and then he's off like a bolt. It takes her only a moment to shift and follow after him. He is the fastest of her pack. Wolves don't have names in the human sense, but it feels wrong to spend so much time with someone without lending them one. The white line along this one's flank, along with his impressive speed, led her to name him Streak. His mate, Redtooth, follows behind them at a reasonable pace, always alert for any potential dangers. Patience—so named when she waited outside the cathedral doors for her every day—raced Redtooth for third. Sometimes she'd even pull ahead. Boulder, the largest and friendliest of them, brings up the rear, his tongue flapping every which way.
Now that she knows what to look for, following the symbols is easy enough. She can give herself over to the hunt—the leaves underfoot, the chill forest air, her senses alight with life. Running on four feet feels so much more natural than jogging on two. Sometimes, she thinks that she might not even be running in her human form at all.
Boulder's excited howl is only the first. They all feel it, the thrill of this untamed wild, the dangers of Innistrad made distant by the joy of the moment. Arlinn joins them. For this moment, at least, she wants to feel free.
But no sooner has the howl left her than she sees it: a stag, pure white, beneath a branch festooned with carved silver. His pale pink eyes lock with hers.
Arlinn skids to a halt. Hackles rise along her back; she growls to the others to stop. Something's wrong. There can't be two of them, and to encounter it #emph[here ] of all places...someone must be trying to trick them.
She isn't going to be fooled. A deep breath of the air lends her some insight—as does the stag simply pacing its way around the gathered group. First—it doesn't smell anything at all like a stag. Sweat, yes; dye, yes; even the scent of magic, but nothing like a stag. Second, it isn't acting like one either. Everything in the forest runs from a pack of wolves. The only exceptions are other werewolves. But this isn't like that, either.
The stag paces around them. Redtooth lowers her muzzle and snarls as he draws near. The stag draws away, locking eyes once more with Arlinn. The way he's tilting his head is the last clue she needs.
Arlinn barks an order at the others to stay put. She slinks behind a tree and shifts back to her human form. Patience bounds over with her leather pack—she reaches in for her clothes.
"Katilda, isn't it?" she calls. "I hope you'll give me a moment to make myself decent."
The woods around her seem to laugh—she feels the thrill of it against her back as she changes. It's only as she looks around that she realizes they're beneath one of the massive stone arches of the Celestus. Something about the structure had always reminded her of a clock's inner workings. At times, it was said the arms shifted about the central platform—which itself was as large as any village green. Arlinn had never seen it happen herself, but she'd had all sorts of ideas about what ancient rituals must drive it.
They must have gone deep into the woods; Arlinn's mother always used to caution her to turn back whenever she saw the broken rings rising from the earth. As a child, she wondered what it would be like to climb their wide flat surfaces—if the people in Thraben woke up every day with that sort of view. Maybe if she got up there herself, she could pretend to be some pampered noble. Now, as an adult, she eyes the carvings along its pitted surface with worry, the lenses with distinct unease. Her mother had been right to warn her of the Celestus. Whatever purpose it once served is best left to the past.
#figure(image("001_Episode 1: The Witch of the Woods/04.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none)
"If you'll forgive my little trick, I'll forgive you getting dressed," comes the answer. Her voice is at once charming and distant. She sounds, Arlinn thinks, like the sort of village matron who figured out a long time ago that you were the one stealing her pies. "The wolves in this forest aren't typically so well behaved. Most of them would have attacked."
Arlinn rounds the trunk. Where there had once been nothing but trees and underbrush, she now sees an enclave: branches and hides fashioned into tents, decorated with the same crescents and spheres she'd seen earlier. Floating candles lend the place an eerie light, as do the strange scarecrows throughout. Arlinn frowns. Candleguides—that's what her mother used to call them. There's an old story about one saving a boy lost in the woods and walking with him all the way to the Harvesttide festival. Another story told of hunters stalking through the Ulvenwald in search of pelts. One year, none returned. The next, these guides sprang up, born of their families' anxieties. She never expected she'd see one in person, let alone so many. The grins carved into their dripping-wax faces...only on Innistrad could those things be comforting.
But there are people, too, in the enclave—perhaps two dozen of them. Some women, some men, some seeming to eschew such labels. Clad in elaborate headdresses, they mutter spells before the candleguides. A dark-skinned man carves a grinning pumpkin, the swinging moonstones on his headdress winking in the light. Two women surround a cauldron bubbling and broiling. Perhaps it's the chill in the air, but Arlinn can see the smoke rising from a few yards away. And so, too, can she smell the delicious brew.
And there is one woman sitting before them on a mossy stump with a staff across her lap. Her white hair is looped through the many branches of her headdress; the pale crescent and sphere painted on her dusky skin only serve to make her features blend in. It's hard to tell, exactly, whether the wolves or Arlinn herself catch more of her attention—but she finds this all very amusing.
"We aren't most wolves," Arlinn says. Looking out at the enclave, she narrows her eyes. "And I take it you aren't most witches."
They can't be—Arlinn smells no evil in the air here. However frightening the shadows their headdresses lend them, however strange the paint renders their features, there's no mistaking they're human. That in itself is some comfort—even if she has no idea what they're up to. The magic here doesn't smell like typical magic. Like something that's been left to ferment, it has the scent of age to it.
"That depends on who you ask," says Katilda. "Before the Archangel's arrival, we were most witches. With her arrival, we took to the shadows, and now that she has departed, we are once again in the light."
Arlinn tilts her head. "You don't seem that old."
"It need not have been in this form, with this name," Katilda says. She points with her staff at the tree Arlinn's standing near. "An acorn is not, in itself, an oak. Given time, water, #emph[sun—] it may become one. So it is with us."
"Then you're regrowing something," Arlinn says. "Who are you?"
#figure(image("001_Episode 1: The Witch of the Woods/05.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none)
"We are what was once and will be. We are what the dark cannot kill. We are the Dawnhart Coven." The woman speaks with the voice of three, her eyes flashing with every syllable. The tip of her staff glows. She taps it to the earth. The brush surrounding them springs to life, growing rapidly, taking strange shape. In a matter of seconds, Arlinn recognizes it: the proud head of the white stag. "But who are you, wolf?"
"My name is <NAME>," she answers. She doesn't look the branch-stag in the eye, not even when the eyes flower. She knows well enough the scent of nightshade. "There won't be any Dawnhart Coven if there isn't any dawn—and at this rate, there won't be one for long. I'm here for answers."
"You didn't give me one." Another tap from the staff—vines flow to fill in the gaps of the stag's head. It walks two steps, then bows to Katilda: a supplicant before a strange sovereign. "But we'll leave that aside for now. My answers for you are as clear as the forest around you and the beating of your own human heart."
Streak's beating his tail against the earth. Arlinn's not feeling much more patient herself. This witch, this Katilda—what is it with people like her and dragging things out? "Could you make them a little clearer?" she says. "My eyesight isn't what it used to be."
The witch touches the staff to the stag's head. From it, a crown of branches and flowers. "There is a ritual for just this."
Arlinn doesn't watch the stag bound away—she keeps her eyes on Katilda. "If there's anything I've learned, it's that rituals are never easy."
"Therein lies their power—a ritual centers the community and its traditions. Over time, hundreds add their faith to its power, far surpassing anything a single mage might dream of doing," says Katilda. "The Archangel distracted us from these traditions. We must return to them—to Harvesttide."
Avacyn didn't distract anyone from anything—but now isn't the time for that fight. No matter how much it burns in Arlinn's chest. "Harvesttide? Like in the old stories?"
"The very same," answers Katilda.
"Spiced tea and pies?" says Arlinn. The fire burns hotter. As an Avacynian priest, Arlinn knew well how strong the Archangel's protection had been. "How are those supposed to save us?"
"Harvesttide is more than that," she says. "The sun and moon both have their turn in the sky. Harvesttide is humanity's turn—our celebration of living another year in defiance. Too long we've lived in fear, too long we've relied on external forces to save us. We must save each other. By gathering together—"
"Wait," says Arlinn, holding up her hands. "You're planning on gathering #emph[how ] many people?"
"As many as will come," says Katilda, with all the patience of a village priest. "Together, we can draw on our collective strength beneath the Celestus and, through it, restore the balance."
Arlinn shakes her head, her exasperation breaking through. "Might as well send a letter to every night stalker in Innistrad. Putting that many humans in one place is begging for an attack. We've seen enough death already; we don't need to stake more lives on some old story you read in a book somewhere—"
"I didn't read it in a book," Katilda answers. Now she's gone sharp, too, standing from her stump. To Arlinn's surprise she's a towering woman—sturdy as the oaks she praised. A faint scent of loam hits Arlinn's nose—but it doesn't make sense. Katilda's no ghoul. "There will be wards, <NAME>. Guardians, who can now exchange what they've learned to drive off the dark. You want to bring back the dawn? Very well. But you cannot do it without bringing back the hope we've lost."
Redtooth growls. So does Streak. Their discomfort echoes in Arlinn's chest: there's no way this is going to go well. Yet as she stares the old witch down, there's no sign of a break.
"You haven't even told me how this ritual works," Arlinn says, "assuming we don't all get killed first."
"We?" answers the witch—but she does not linger over the barb. Instead, she points with her staff to the arch of the Celestus. "The answer, as I told you, is right here. We use the Celestus. At its center is a lock of bright gold—we need the Moonsilver Key to activate it. Haven't you ever wondered what it's for? Our ancestors used it for just this—righting the balance of day and night."
"In the Kessig Woods, surrounded by the enemy."
"Yes. To stoke the fires—"
"—of Hope," Arlinn cuts in. "And if we don't? If we find some other way—"
"There is no other way," Katilda says, just as firm. "If the Celestus isn't activated—and if it isn't activated #emph[properly] —then the night will overtake the day. Geists, ghouls, vampires, werewolves—you will feast on us until—"
"I'm not—"
But a sound cutting through the woods stops her voice in her throat. A howl, rough and deep. A sound that sets the wolf within her alight. Her pack answers, and she can feel their joy, feel their eagerness for the hunt.
For she knows that howl well. She heard it for the first time years ago, huddled in her room, staring up at the symbol meant to keep her safe. And she had torn from her family's home, feet and hands against the damp midnight earth, running toward it with all she had—because it spoke of a fearless Innistrad.
The first time she heard that howl was twenty years ago—the first night she tasted blood, the first night she tasted freedom.
It still stirs her, even now.
Tovolar.
|
|
https://github.com/dainbow/MatGos | https://raw.githubusercontent.com/dainbow/MatGos/master/themes/28.typ | typst | #import "../conf.typ": *
= Linear ordinary differential equations with constant coefficients and a quasi-polynomial right-hand side
#definition[
An equation of the form
#eq[
$F(x, y, ..., y^((n))) = 0$
]
is called an *ordinary differential equation* of order $n$.
]
#definition[
A function $phi(x)$, defined on $I$ together with its first $n$ derivatives,
is called a solution of the ODE if
+ $phi$ and all $n$ of its derivatives are continuous on $I$
+ $forall x in I : space (x, phi(x), phi'(x), ..., phi^((n))(x)) in Omega$, where $Omega$ is the domain
of definition of $F$
+ $forall x in I : F(x, phi(x), phi'(x), ..., phi^((n))(x)) = 0$
]
#definition[
Let $n >= 2$ and let $f_1, ..., f_n$ be continuous functions defined on a domain $G subset.eq RR_(x, bold(y))^(n + 1)$.
We call the following system a *normal system of first-order differential equations*:
#eq[
$bold(y') = bold(f)(x, bold(y)) <=> cases(
y'_1 (x) = f_1 (x\, y_1 (x)\, ...\, y_n (x)) = f_1 (x\, bold(y)),
..., y'_n (x) = f_n (x\, y_1 (x)\, ...\, y_n (x)) = f_n (x\, bold(y)),
)$
]
]
#definition[
The *Cauchy problem* is the problem
#eq[
$cases(bold(y') = bold(f)(x\, bold(y)), bold(y)(x_0) = bold(y_0))$
]
]
#theorem(
"Existence and uniqueness for a system",
)[
Suppose the vector function $bold(f)(x, bold(y))$ is continuous in the domain $G$ together with
its derivatives with respect to $y_j, j in overline("1, n")$, and that the point $(x_0, bold(y_0))$ also
lies in $G$.
Then the Cauchy problem is locally uniquely solvable:
+ there exists $delta > 0$ such that a solution of the Cauchy problem
exists on $[x_0 - delta, x_0 + delta]$
+ the solution is unique in the following sense: if $bold(y_1) equiv bold(phi)(x)$ is a solution
of the Cauchy problem in a $delta_1$-neighbourhood of the point $x_0$, and $bold(y_2) equiv bold(psi)(x)$ is a solution
of the Cauchy problem in a $delta_2$-neighbourhood of the point $x_0$, then in the neighbourhood of $x_0$ of
radius $delta = min(delta_1, delta_2)$ we have $bold(phi)(x) equiv bold(psi)(x)$
]
#theorem(
"Existence and uniqueness for an equation",
)[
Suppose the function $f(x, y, p_1, ..., p_(n - 1))$ is defined and continuous in all
of its variables, together with its partial derivatives with respect to $y, p_1, ..., p_(n - 1)$, in
some domain $G subset.eq RR^(n + 1)$, and that the point $(x_0, y_0, y'_0, ..., y^((n - 1))_0) in G$.
Then there exists a closed $delta$-neighbourhood of the point $x_0$ in which a unique
solution of the Cauchy problem exists.
]
#definition[
A *linear ordinary differential equation with constant coefficients and a
quasi-polynomial right-hand side* has the form
#eq[
$y^((n)) + a_1 y^((n - 1)) + ... + a_n y = f(x)$
]
where $f(x)$ is a *quasi-polynomial*:
#eq[
$f(x) = e^(mu x) P_m (x); quad mu in CC, P_m (x) - "a polynomial of degree" m$
]
]
#note[
The existence and uniqueness of a solution of this equation follow immediately by applying
the corresponding theorem.
]
#definition[
The *characteristic polynomial* $L(lambda)$ is the polynomial
#eq[
$L(lambda) = lambda^n + a_1 lambda^(n - 1) + ... + a_n$
]
]
#definition[
If the number $mu$ from the quasi-polynomial formula is a root of the characteristic equation
#eq[
$L(lambda) = 0$
]
then the equation is said to be in the *resonant* case.
If $mu$ is not a root, we have the *non-resonant* case.
]
#definition[
The *differential polynomial* is the polynomial
#eq[
$L(D) = (D - lambda_1)^(k_1) (D - lambda_2)^(k_2) ... (D - lambda_s)^(k_s)$
]
where the $k_i$ are the multiplicities of the roots of the characteristic equation, and $D$ is the
formal differentiation operator.
]
#note[
Solving the equation under consideration is equivalent to solving the equation
#eq[
$L(D)y = f(x)$
]
]
#theorem(
"On the structure of solutions of a linear inhomogeneous ODE with a quasi-polynomial right-hand side",
)[
The equation under consideration has a unique solution of the form
#eq[
$y(x) = x^k e^(mu x) Q_m (x)$
]
where $Q_m (x)$ is a polynomial of the same degree $m$ as $P_m (x)$, and the number $k$ equals
the multiplicity of the root $mu$ of the equation $L(lambda) = 0$ in the resonant case, while $k = 0$ in the
non-resonant case.
]
#proof[
If $mu != 0$, the substitution $y = z e^(mu x)$ always removes $e^(mu x)$ from the
right-hand side.
Indeed, after the substitution, the shift identity for $L(D)$ gives
#eq[
$L(D)y = L(D)(e^(mu x)z) = e^(mu x)L(D + mu)z = e^(mu x) P_m (x)$
]
Dividing by the exponential, we obtain
#eq[
$L(D + mu)z = P_m (x)$
]
Thus it suffices to prove the theorem for an equation of the form
(WLOG $mu = 0$):
#eq[
$L(D)y = P_m (x)$
]
Consider the non-resonant case $L(mu) != 0$. Let
#eq[
$P_m (x) = p_m x^m + ... + p_0 ; quad Q_m (x) = q_m x^m + ... + q_0$
]
Substituting and equating the coefficients of like powers of $x$, we obtain
a linear algebraic system for the unknown
coefficients $q_0, ..., q_m$.
The matrix of this system is triangular with the numbers $a_n = L(mu) != 0$ on the diagonal, so
the coefficients are determined from it uniquely.
In the resonant case we have
#eq[
$L(lambda) = lambda^k (lambda^(n - k) + a_1 lambda^(n - k - 1) + ... + a_(n - k))$
]
Consequently,
#eq[
$L(D) = cases(
D^n + a_1 D^(n - 1) + ... + a_(n - k) D^k\,space k < n,
D^n \, space k = n,
)$
]
In the first case the substitution $D^k y = z$ reduces the equation to the
non-resonant case.
Otherwise we obtain the equation
#eq[
$D^n y = P_m (x)$
]
which is obviously solved by integrating $n$ times.
]
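#note[
A quick illustration of both cases. Non-resonant: for $y'' - y = e^(2 x)$ we have $L(lambda) = lambda^2 - 1$ and $L(2) = 3 != 0$, so $k = 0$; substituting $y = a e^(2 x)$ gives $3 a = 1$, i.e. $y = 1/3 e^(2 x)$.
Resonant: for $y'' - y = e^(x)$ the number $mu = 1$ is a root of $L$ of multiplicity $k = 1$, so we look for $y = a x e^(x)$; since $(a x e^x)'' - a x e^x = 2 a e^x$, we get $a = 1/2$ and $y = 1/2 x e^(x)$, in accordance with the theorem.
]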
|
|
https://github.com/Harkunwar/attractive-typst-resume | https://raw.githubusercontent.com/Harkunwar/attractive-typst-resume/main/README.md | markdown | MIT License | This is an Attractive Resume Template built with Typst, an open-source LaTeX alternative written in Rust that compiles to PDF.
To compile it to PDF, make sure Typst is installed. The provided flake.nix and .envrc are useful if you have Nix and direnv installed. This template uses the Mulish Google Font, which is provided in the `assets/fonts` directory.
Mirror link at Typst.app https://typst.app/project/rLlknWbYc8XMaZc45BHlMl
Preview:\
<img src="assets/images/attractive-typst-resume-blue.png" width="400px" />
<img src="assets/images/attractive-typst-resume-green.png" width="400px" />
|
https://github.com/BrainTmp/MetaNote | https://raw.githubusercontent.com/BrainTmp/MetaNote/main/Typst/Notations/BMFSymbols.typ | typst | // This is the notations for writing Bird-Meertens Formalism
#let opl = math.plus.circle // oplus
#let omi = math.minus.circle // ominus
#let oti = math.times.circle // otimes
#let odo = math.dot.circle // odot
#let m2 = math.arrow.t // Up arrow, Max of two operator
#let cat = math.op([$plus$ #h(-0.5em) $plus$]) // concat ++
#let sp = math.space // space
#let st = math.ast.op // astar
#let rd = math.slash // Reduce / Slash
#let lrd = math.op([$arrow.r$ #h(-0.7em) $slash$]) // Left-to-right Reduce
#let rrd = math.op([$slash$ #h(-0.7em) $arrow.l$]) // Right-to-left Reduce
#let lacc = math.op([$arrow.r$ #h(-0.8em) $slash$ #h(-0.3em) $slash$]) // Left-to-right Accumulate
#let racc = math.op([$slash$ #h(-0.3em) $slash$ #h(-0.8em) $arrow.l$]) // Right-to-left Accumulate
#let reas(x) = {
math.quad; math.brace; math.space; x; math.space; math.brace.r;
} // Equation Reasoning
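// Example (hypothetical usage, in math mode):
// $opl rd (f st x)$ -- i.e. "⊕/ (f* x)", a reduce by ⊕ over a map of f.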
|
|
https://github.com/ParaN3xus/numblex | https://raw.githubusercontent.com/ParaN3xus/numblex/main/manual.typ | typst | MIT License | #import "@preview/pinit:0.1.4": *
#import "@preview/showybox:2.0.1": showybox as sb
#import "@preview/badgery:0.1.1" as bgy
// Copied from https://github.com/typst-doc-cn/tutorial/blob/main/src/mod.typ
#let exec-code(cc, res: none, scope: (:), eval: eval) = {
rect(
width: 100%,
inset: 10pt,
{
// Don't corrupt normal headings
set heading(outlined: false)
if res != none {
res
} else {
eval(cc.text, mode: "markup", scope: scope)
}
},
)
}
// al: alignment
#let code(cc, code-as: none, res: none, scope: (:), eval: eval, exec-code: exec-code, al: left) = {
let code-as = if code-as == none {
cc
} else {
code-as
}
let vv = exec-code(cc, res: res, scope: scope, eval: eval)
if al == left {
layout(lw => style(styles => {
let width = lw.width * 0.5 - 0.5em
let u = box(width: width, code-as)
let v = box(width: width, vv)
let u-box = measure(u, styles)
let v-box = measure(v, styles)
let height = calc.max(u-box.height, v-box.height)
stack(
dir: ltr,
{
set rect(height: height)
u
},
1em,
{
rect(height: height, width: width, inset: 10pt, vv.body)
},
)
}))
} else {
code-as
vv
}
}
#let package = toml("./typst.toml").package
#let version = package.version
#let entry(name, type: "function") = {
set box(inset: 0.3em, radius: 0.3em)
box(fill: purple.transparentize(80%))[#raw(type)]
box[#name]
}
#let nblx = [numblex]
#let positional = [#set text(size: 8pt);#box(bgy.badge-red("positional"), baseline: 30%)]
#let named = [#set text(size: 8pt);#box(bgy.badge-gray("named"), baseline: 30%)]
#let warning(doc) = {
let warning_mark = [
#place(horizon + center)[
#polygon.regular(fill: none, stroke: 0.05em, vertices: 3, size: 1em)
]
#place(horizon + center)[
#v(0.16em)
#set text(font: "Avenir Next", weight: "semibold", size: 0.5em)
!
]
]
sb(
frame: (
border-color: red.darken(50%),
title-color: red.lighten(60%),
body-color: red.lighten(80%),
),
title-style: (
color: black,
weight: "regular",
align: left,
),
shadow: (
offset: 3pt,
),
title: stack(
dir: ltr,
spacing: 5pt,
align(horizon, [#set text(size: 15pt);#h(10pt)#warning_mark]),
[
#set text(size: 14pt)
Warning
],
),
doc,
)
}
// Manual content
#align(center)[#text(size: 24pt)[Numblex #version Manual]]
// #outline()
= Concepts
== Numbering
Numbering is just a function that takes a list of numbers and returns a string, and such a function can be used as the value of the `numbering` option in Typst.
For example, the numbering function could return a string like this:
#align(center)[
#set text(size: 16pt)
#"<1-(3).4.>"
]
We can split the string into several parts, and each part is an element.
#align(center)[
#let colors = (blue, green, yellow, red, purple, orange)
#set text(size: 16pt)
#(
"< 1 - (3). 4. >".split(" ").zip(colors).map(x => box(
inset: (bottom: 0.4em, top: 0.2em),
fill: x.at(1).transparentize(60%),
)[#x.at(0)])
).join([])
]
== Element
The elements are categorized into two types: ordinals and constants. Here "<", "-", ">" are constants, and "1", "(3).", "4." are ordinals.
We use the following format to represent the whole numbering.
== Numbering String
The official Typst numbering string is not powerful enough; we usually need to set the numbering to a custom function to implement more complex numbering. However, this leads to redundancy, so we define a new format to represent the numbering.
#align(center)[
#set text(size: 16pt)
`{<} {[1]} {-} {([3]).} {[4].} {>}`
]
Each element is enclosed in a pair of curly braces (`{}`), and anything else is ignored. An element can be a constant (with no `[]` in it) or an ordinal (with `[]` in it). An ordinal element is only displayed when the depth is sufficient.
== Patterns
Patterns are used to represent the ordinal or constant. The ordinal will be replaced by the final ordinal string in the output numbering, and anything outside the `[]` will be kept as it is.
#align(center)[
#set text(size: 16pt)
`{Chapter [1].}` $=>$ Chapter 1.
]
Of course, this is designed to avoid the following problem:
#code(```Typst
#set heading(numbering: "Chapter 1.")
= Once Upon a Time
#set heading(numbering: "Ch\apter 1.")
= Once Upon a Time
```)
== Ordinals
Most of the time, you can just put the character corresponding to the ordinal you want in the `[]`. #nblx passes the character to the `numbering` function in Typst to get the ordinal.
However, #nblx has also modified and extended the ordinal definition.
- `[]`: an empty string; it takes the number but generates nothing.
- `[(1)]`: shorthand for circled numbers (①, ②, ③, ...). If you want the ordinal (1), (2), (3), ..., please use `{([1])}` instead.
_Use a single character to represent the ordinal if possible, since Typst has complicated rules for handling the prefix and suffix of the numbering._
An element can differ between contexts. For example, we might need elements to show up in different forms according to the depth.
== Conditions
Conditions are functions that take the current numbering configuration and return a boolean value. The conditions are matched sequentially, and the first match is used.
Currently, the following configuration contexts are defined:
=== depth: `int` (short: `d`)
The depth of the numbering.
From Typst v0.11.1 on, we can use heading.depth to get the depth of the heading. Similarly, we introduce the numbering depth, which is the length of the list of numbers passed to the numbering function.
#warning[
The condition here is implemented using `eval`, which is not safe and might cause other problems.
]
You may represent a conditional element using the following format:
#align(center)[
#set text(size: 16pt)
`{PAT_1:COND_1;PAT_2:COND_2;...}`
]
Leaving the condition empty is equivalent to `true`.
#align(center)[
#set text(size: 16pt)
`{PAT_1:COND_1;PAT_2}` $equiv$ `{PAT_1:COND_1;PAT_2:true}`
]
If no condition is matched, the element returns an empty string. _Notice that `[]` must appear in none (constant element) or all (ordinal element) of the patterns._
== Examples
#import "./lib.typ": numblex
#code(
```Typst
#set heading(numbering: numblex("{Section [A].:d==1;[A].}{[1].}{[1])}"))
= Electrostatic Field
== Introduction
=== Gauss's Law
```,
scope: (numblex: numblex),
)
#code(
```Typst
#set heading(numbering: numblex("{[一]、:d==1}{[1].:d==2}{[(1)]}"))
= 保角变换
== 介绍
=== 调和函数的保角性
```,
scope: (numblex: numblex),
)
#code(
```Typst
#let example = numblex("{<} {[1]} {-:d>=2} {([1]).} {[1].} {>}")
#numbering(example, 1) \
#numbering(example, 1, 3) \
#numbering(example, 1, 3, 4)
```,
scope: (numblex: numblex),
)
= Reference
#show heading.where(level: 2): entry
The package provides a function `numblex` to generate the numbering function for Typst.
== `numblex`
=== Parameters
- #positional `s`
- #named `..options`
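A minimal usage sketch (`s` is the numbering string described in the Concepts section; the named `..options` are left at their defaults here):

```Typst
#set heading(numbering: numblex("{[1].}{[1].}{[(1)]}"))
```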
= Known Issues & Limitations
+ Automatic repeating (which Typst's official numbering supports) is not supported yet.
+ Character escaping for "{}[]:;" is not supported yet.
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/par-justify_04.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test that the last line can be shrunk
#set page(width: 155pt)
#set par(justify: true)
This text can be fitted in one line.
|
https://github.com/bchaber/typst-template | https://raw.githubusercontent.com/bchaber/typst-template/main/documents/example.typ | typst | #import "ncn.typ": *
#show: ncn.with(
bibliography-file: "bibliography.bib",
)
= Introduction
This document will be finished soon @bezanson2017.
|
|
https://github.com/VisualFP/docs | https://raw.githubusercontent.com/VisualFP/docs/main/SA/design_concept/content/introduction/tool_research/enso.typ | typst | #import "../../../../acronyms.typ": *
= Enso
Enso is a functional programming language designed for data science, created by Enso International Inc @enso-language.
There are a text editor and a visual editor for creating programs.
The visual editor allows a user to define components that can be connected, symbolizing the data flow from one component to another.
The editor also offers previews for a component's data, which, e.g., allows a user to see a modified picture like in @enso_screenshot.
#figure(
image("../../../static/enso_screenshot.png", width: 80%),
caption: [Example program in Enso @enso-language]
)<enso_screenshot>
Enso is visually impressive and largely intuitive regarding data flow.
For example, downloading data from a public #ac("API") and aggregating it is super easy.
However, some operations, such as dividing a number by another number, are quite complicated.
Based on that, Enso seems to be an excellent tool for working with datasets, but not so much for creating programs with complex logic.
|
https://github.com/Mouwrice/thesis-typst | https://raw.githubusercontent.com/Mouwrice/thesis-typst/main/sota.typ | typst | #import "lib.typ": *
= A brief overview of the State Of The Art <sota>
In this section, we provide a brief overview of some recently published body pose estimation tools and papers. For every tool or paper, we also give a short summary of its key features and limitations.
Some rather strict requirements for this specific demo application were set at the beginning of the project.
The tool should be able to run on-device in real-time, and should be able to provide 3D pose estimation.
By on-device, we mean that the tool should be able to run on consumer-grade hardware, such as a smartphone or a laptop. Another important requirement is the amount of detail that can be tracked. For a drumming demo application, it is essential that the hands and feet are properly detected and tracked. Many of the tools discussed in this section fail to meet at least one of these requirements. Of the tools we will discuss, only RTMPose and OpenPose meet all the requirements except 3D pose estimation; only MediaPipe Pose was able to meet them all.
== Human Motion
The Human Library is an open-source tool, based on web technologies, that provides a wide range of detection tasks, one of which is body pose tracking. Its full name is "Human: AI-powered 3D Face Detection & Rotation Tracking, Face Description & Recognition, Body Pose Tracking, 3D Hand & Finger Tracking, Iris Analysis, Age & Gender & Emotion Prediction, Gaze Tracking, Gesture Recognition" @human. It also comes with a library, human-motion, which focuses on the 3D motion visualisation of the face, body, and hands @human-motion.
Many of the requirements for this project are met by the Human Library. It has an excellent amount of detail that can be tracked, and it can run on-device. However, during testing, we could not get the performance to be high enough for real-time applications. The body pose was only updated every second or so. Even if some setup optimizations could be made on our side, we do not believe that the performance could be improved enough to be used in a real-time application.
== Volumetric Capture
This tool is quite different from the other tools in this section. It is not really a body pose estimation tool, as it does not explicitly provide any precise markers or a skeleton. Instead, it provides a volumetric representation, i.e. a 3D model, of the human body @volumetric-capture. It does so with a set of calibrated depth cameras, such as the Intel RealSense camera. It requires quite a lot of specific hardware and multiple cameras, so it clearly does not conform to the set requirements. This tool is also not really suitable for our application, as we need precise tracking of the hands and feet using traditional markers. However, it is an interesting tool that could be used in other applications; for example, it has potential for use in Mixed Reality applications. An overview of the Volumetric Capture tool can be seen in @volumetric-capture-overview.
#figure(caption: ["The Volumetric Capture tool overview taken from the official documentation @volumetric-capture."])[
#image("images/volumetric-capture.jpg")
] <volumetric-capture-overview>
Volumetric Capture was developed by the 3D vision team of the Visual Computing Lab.
#footnote[#link("https://vcl.iti.gr/")[https://vcl.iti.gr/ #link-icon]]
Unfortunately, according to the documentation and releases on GitHub, the project seems to be abandoned.
#footnote[
#link("https://github.com/VCL3D/VolumetricCapture/releases")[https://github.com/VCL3D/VolumetricCapture/releases #link-icon]
]
The last release was in 2020 and is only available on Windows 10. This limited compatibility and support appears to be common for many of the tools in this section. Many tools go along with some research paper, but once the research is done, the tool is abandoned. This lack of updates and maintenance is less than ideal for use in an actual application.
== MMPose: RTMPose
#figure(caption: [RTMPose official logo.])[
#image("images/rtmpose.jpg")
]
To quote from the official GitHub page: "MMPose is an open-source toolbox for pose estimation based on PyTorch. It is a part of the OpenMMLab project."
#footnote[The OpenMMLab project is a collection of "open source projects for academic research and industrial applications. OpenMMLab covers a wide range of research topics of computer vision, e.g., classification, detection, segmentation and super-resolution." #link("https://openmmlab.com/")[https://openmmlab.com/ #link-icon]] @mmpose
The model of interest in this collection is RTMPose @rtmpose, a pose estimation toolkit that works in real time with multiple persons in the frame. The model is quite fast and can run on consumer-grade hardware: its authors report frame rates of 90+ FPS on an Intel i7-11700 CPU and 430+ FPS on an NVIDIA GTX 1660 Ti GPU. However, it does not provide 3D pose estimation; it is limited to two dimensions. Despite this, it is available on many different platforms and devices, such as Windows, Linux, and ARM-based processors. This tool ticks many of the boxes for our project, but the lack of 3D pose estimation was a dealbreaker. However, as the measurements of MediaPipe Pose in the next section show, 3D pose estimation is not always as accurate as one might hope; in fact, the depth (the third dimension) is not used in the final application. Given the limitations uncovered in the MediaPipe Pose tool, RTMPose might be a better alternative for our use case, especially since it achieves a much higher frame rate than MediaPipe Pose.
== AlphaPose
#figure(caption: [The AlphaPose logo.])[
#image(width: 80%, "images/alphapose.jpg")
]
AlphaPose is yet another open-source, multi-person pose estimator @alphapose, based on the research paper "RMPE: Regional Multi-person Pose Estimation" @alphapose-paper. It is one of the earlier body pose estimation tools, originating from 2017. Being an older tool, it does not reach the accuracy and performance of some of the newer tools, such as RTMPose. Moreover, it is one of those tools that stopped being maintained once the research paper was published, making it unfit for actual use.
== OpenPose
#figure(caption: [OpenPose keypoints example.])[
#image("images/openpose.gif")
] <openpose-example>
OpenPose, released in 2018, is the first real-time multi-person system to jointly detect human body, hand,
facial, and foot keypoints (in total 135 keypoints) on single images (@openpose-example). It is based on the research paper "OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields" @openpose-paper. OpenPose is a popular tool that is still maintained to this day. It supports all major operating systems and can run on consumer-grade hardware. OpenPose supports the detection and tracking of the poses of multiple people in the frame. As we do not need to track multiple individuals, this feature can be disabled, leading to an increase in performance according to the documentation. OpenPose stands out as it also provides 3D keypoint detection using triangulation from multiple views. This is a feature that is not present in many of the other tools in this section. OpenPose is a strong candidate for our application, but the lack of 3D pose estimation using a single camera is a dealbreaker.
== MindPose
MindPose is the last tool that we will discuss in this section. It is the result of an open-source project jointly developed by the MindSpore
#footnote[
MindSpore is an open-source AI framework developed by Huawei. It is a deep learning and machine learning framework that is used for training and inference of AI models. #link("https://www.mindspore.cn/")[https://www.mindspore.cn/ #link-icon]
]
team @mindpose. MindPose is a toolbox, or Python framework, for the training and inference of pose estimation models. Unfortunately, the project seems to be abandoned and unfinished. It also appears rather bare-bones, with no examples and little documentation. It supports three body pose estimation models from 2018 and 2019: HRNet, SimpleBaseline, and UDP @hrnet @simplebaseline @udp. These models do not seem to focus on real-time applications, and they do not provide 3D pose estimation. The project is not suitable for our application, but it is interesting to see that Huawei is also working on pose estimation tools.
|
|
https://github.com/tiankaima/typst-notes | https://raw.githubusercontent.com/tiankaima/typst-notes/master/2bc0c8-2024_spring_TA/function_analysis_intro.typ | typst | #set text(
font: ("linux libertine", "Source Han Serif SC", "Source Han Serif"),
size: 10pt,
)
#show math.equation: set text(11pt)
#show math.equation: it => [
#math.display(it)
]
#let dcases(..args) = {
let dargs = args.pos().map(it => math.display(it))
math.cases(..dargs)
}
#let blue_note(it) = [
#rect(stroke: blue + 0.02em, width: 100%, inset: 1em)[
#set text(fill: blue)
#it
]
]
#show image: it => [
#set align(center)
#it
]
#align(right + horizon)[
= Function Analysis Introduction
2024 Spring Mathematical Analysis B2
<NAME>
2024.05.20 Lecture Notes
]
#pagebreak()
In Chapter 12, on Fourier series, we actually only need to consider the $L_2$ space, i.e. the space of square-integrable functions.
The main reason for discussing it is that in B1 we studied the following integrals:
$
&integral_(-pi)^(pi) cos(n x) cos(m x) dif x = 0, quad n != m\
&integral_(-pi)^(pi) sin(n x) sin(m x) dif x = 0, quad n != m\
&integral_(-pi)^(pi) cos(n x) sin(m x) dif x = 0
$
In linear algebra we have the notion of an orthogonal basis, whose discussion looks roughly like this:
$
{e_1, e_2, ... e_n}, quad forall i!=j quad angle.l e_i, e_j angle.r = 0
$
These two things may not seem related at all, but what if we replace the space $RR^n$ of linear algebra with a function space? That is, instead of viewing a function as a map $f: A->B$, can we treat the functions themselves along the lines of linear algebra?
#v(2em)
== Function Space
We define the function space $F(A)$ to be the set of functions defined on $A$:
$
F(A) = {f: A -> RR}
$
That is:
$
forall f in F(A), quad f: A -> RR
$
#blue_note[
Note: in the discussion below I will work with $F(RR)$ by default. For the Fourier series chapter, following the book's notation, all functions are in fact taken on $F([-pi, pi])$; the two discussions differ only by a factor of $1_[-pi,pi]$, and changing the limits of integration yields the same conclusions.
]
This is not the first time we meet the notion of a function space: in B1 we already wrote a continuous function $f$ as $f in C^0(RR)$. Here we state the definition of $C^k (RR)$ as follows:
$
C^k (RR) = {f in F(RR): f, f', f'', ..., f^(k) in C^0(RR)}
$
#v(2em)
== Vector Space
We naturally regard $F(RR)$ as linear:
$
forall f, g in F(RR), alpha, beta in RR, quad alpha f + beta g in F(RR)
$
#blue_note[
Strictly speaking this is not rigorous; we need to supply two additional definitions,
1. Addition: the definition of $f + g$
$
f+g: RR -> RR, quad (f+g)(x) := f(x) + g(x)
$
2. Scalar multiplication: the definition of $alpha f$
$
alpha f: RR -> RR, quad (alpha f)(x) := alpha f(x)
$
]
#blue_note[
For a more general space $F$, linearity is stated with respect to a field $KK$; the full statement reads:
$
forall f, g in F, alpha, beta in KK, quad alpha f + beta g in F
$
Naturally, we also need to define the operations of addition and scalar multiplication, which correspond to two binary operations:
$
+ quad& (dot,dot): F times F -> F\
dot quad& (dot, dot): KK times F -> F
$
That is, linearity is a property of the space $F$ with respect to these two operations $(+, dot)$.
A space satisfying the above properties is called a vector space.
]
#v(2em)
== Inner Product Space
// We jump straight to the definition of the inner product; for understanding Fourier series, nothing else is as important:
We define the inner product $angle.l dot, dot angle.r$ on $F(RR)$ as:
$
angle.l f, g angle.r = &integral_(-oo)^(oo) f(x) g(x) dif x
$
#blue_note[
In some places you may see this written as:
$
angle.l f, g angle.r = &integral_(-oo)^(oo) f(x) overline(g(x)) dif x
$
This is because they discuss the problem over the complex numbers. Here we only consider $f : RR -> RR$, but the definition can be extended to the complex field, in which case this conjugate must be kept (why?).
]
#blue_note[
Just as for the inner product in linear algebra, we can verify:
- Symmetry:
$angle.l f, g angle.r = angle.l g, f angle.r$
- Linearity:
$angle.l alpha f + beta g, h angle.r = alpha angle.l f, h angle.r + beta angle.l g, h angle.r$ (in the first argument)
$angle.l h, alpha f + beta g angle.r = alpha angle.l h, f angle.r + beta angle.l h, g angle.r$ (in the second argument)
- Positive definiteness:
$angle.l f, f angle.r >= 0$
$angle.l f, f angle.r = 0 <=> f = 0$
#blue_note[
More generally, we require conjugate symmetry plus linearity in one argument; linearity in the other can then be derived.
Even when we pass to the complex field, the positive-definiteness requirement $angle.l x,x angle.r >=0$ remains, which implies $angle.l x,x angle.r in RR$
]
In general, a space satisfying the above properties is called an inner product space.
]
#box(width: 100%)[
Let us now return to those functions defined on $[-pi,pi]$:
$
f_n = cos (n x), quad g_m = sin (m x)
$
The properties of their pairwise products under integration now become properties of the inner product:
$
angle.l f_n, f_m angle.r = &integral_(-pi)^(pi) cos (n x) cos (m x) dif x = 0, quad n != m\
angle.l g_n, g_m angle.r = &integral_(-pi)^(pi) sin (n x) sin (m x) dif x = 0, quad n != m\
angle.l f_n, g_m angle.r = &integral_(-pi)^(pi) cos (n x) sin (m x) dif x = 0
$
]
#v(0.2em)
== Normed Space
*Next, let us derive the notion of length on a function space:*
In linear algebra, the definition of the inner product has a very geometric formulation:
$
angle.l a, b angle.r = abs(a) dot abs(b) dot cos angle.l a, b angle.r
$
So it is natural that:
$
angle.l a, a angle.r = abs(a) dot abs(a) dot cos angle.l a, a angle.r = abs(a)^2
$
Geometrically, this is because we first defined $abs(a) = (sum_(i=1)^n abs(a_i)^2)^(1/2)$ and then defined $angle.l a, b angle.r = abs(a) dot abs(b) dot cos angle.l a, b angle.r$. But we can also go the other way round: first define $angle.l a, b angle.r$, then define $abs(a) = angle.l a, a angle.r^(1/2)$; this definition is just as reasonable.
Extending to function spaces follows the same idea: we first defined the inner product, and now define the "length":
$
abs(f) = angle.l f, f angle.r^(1 / 2) = (integral f^2 dif x)^(1 / 2)
$
The notion of "length", generalized to arbitrary vector spaces, is called a norm, written more formally as $norm(dot)$, that is:
$
norm(f) = angle.l f, f angle.r^(1 / 2) = (integral f^2 dif x)^(1 / 2)
$
We call such a norm the norm induced by the inner product. A norm can also be defined without an inner product; it only needs to satisfy the following:
#blue_note[
As a generalization of length, a norm is required to have the following properties:
- Positive definiteness:
$norm(f) >= 0$
$norm(f) = 0 <=> f = 0$
- Homogeneity:
$norm(alpha f) = abs(alpha) norm(f)$
- Triangle inequality:
$norm(f + g) <= norm(f) + norm(g)$
In general, a space satisfying the above properties is called a normed space.
]
A norm induced by an inner product satisfies an additional property, namely:
$
norm(f+g)^2 + norm(f-g)^2 = 2 norm(f)^2 + 2 norm(g)^2
$
which is usually called the parallelogram law.
#blue_note[
Here we have skipped over something rather important, *the question of existence*: on the full space $F(RR)$, do $angle.l f, g angle.r$ and $norm(f)$ always exist? The answer is clearly no.
How, then, do we define a "largest" $V subset F(RR)$ such that for all $f,g$, both $angle.l f, g angle.r$ and $norm(f)$ always exist?
Recall an integral inequality used in B1:
$
(integral f dot g dif x)^2 <= (integral f^2 dif x) (integral g^2 dif x)
$
As I mentioned in the B1 exercise class, this is in fact the Cauchy inequality; rewriting it in terms of the inner product makes this clear:
$
(angle.l f, g angle.r)^2 &<= angle.l f, f angle.r dot angle.l g, g angle.r\
&<= norm(f)^2 dot norm(g)^2
$
So how should we restrict $V subset F(RR)$ so that $angle.l f,g angle.r$ always exists? It suffices to require $norm(f) < oo$, that is:
$
integral f^2 dif x < oo
$
Then we naturally have:
$
angle.l f, g angle.r = integral f dot g dif x <= norm(f) dot norm(g) < oo
$
In other words, as long as every function in the space is square-integrable, $angle.l f, g angle.r$ always exists.
In this way we obtain a new space, denoted $L^2 subset F(RR)$, where
$
L^2 = {f in F(RR): norm(f) < oo}
$
]
#blue_note[
The definition of the norm on the $L_2$ space suggests a generalization: we can analogously define the $L_p$ spaces and their norms, extending the concept:
$
norm(f)_p := (integral abs(f)^p dif x)^(1 / p)
$
Correspondingly, the $L_p$ space is:
$
L_p = {f in F(RR): norm(f)_p < oo}
$
It is worth noting that for $p != 2$ the $L_p$ norm is in general not induced by any inner product. (How would you prove this?)
]
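#blue_note[
A hint for that proof: a norm induced by an inner product must satisfy the parallelogram law above. Take $p = 1$ on $[0, 1]$, and let $f$ and $g$ be the indicator functions of $[0, 1/2]$ and $[1/2, 1]$ respectively. Then
$
norm(f + g)_1 = norm(f - g)_1 = 1, quad norm(f)_1 = norm(g)_1 = 1/2
$
so $norm(f+g)_1^2 + norm(f-g)_1^2 = 2$ while $2 norm(f)_1^2 + 2 norm(g)_1^2 = 1$, and the parallelogram law fails. A similar computation works for every $p != 2$.
]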
#blue_note[
With the $L_2$ norm in hand, note that $norm(f)_2 = 0 arrow.double.not f eq.triple 0$; we only have:
$
integral f^2 dif x = 0
$
from which we can deduce
$
m({x: f(x) != 0}) = 0
$
but we cannot conclude that $f$ is $0$ pointwise.
This relation affects some other views we hold about the $L_2$ space, most importantly:
$
f = g quad <=> quad norm(f-g)_2 = 0
$
If we understand $f=g$ on the $L_2$ space from this angle, it becomes:
$
f = g quad <=> quad m({x: f(x) != g(x)}) = 0
$
// In fact this yields an equivalence relation on $L_2$: $L_2\/tilde: quad f tilde g <=> norm(f-g)_2 = 0$
We write this relation as $f = g quad "a.e."$ (almost everywhere); it is more like an equivalence relation than strict equality.
]
#v(2em)
== Summary
Having derived all this, we can pause for a moment and review which concepts we have introduced so far, and which new methods we now have for handling problems:
// let's backtrack:
- Function space
- Vector space
- Inner product space
- Normed space
- The $L_2$ space
Of these, the vector space was introduced quite naturally and matches our intuition. The inner product space is the result of our deliberate definition $angle.l f, g angle.r = integral f dot g dif x$. The normed space was derived from the norm of the inner product space.
After a rigorous discussion of the question of existence, we introduced the $L_2$ space, i.e. the space of square-integrable functions. The inner product $angle.l f, g angle.r$ should really be defined on the $L_2$ space; only then is $angle.l f, g angle.r$ guaranteed to always exist.
Returning to the question posed at the beginning: can we now handle problems about function spaces with the methods of linear algebra?
#v(2em)
Let me offer a few examples:
*Linear dependence:* Consider $L_2 (RR)$: are the three functions ${1,x,2-x}$ linearly dependent? What about ${0,x,2-x}$? And ${1, x, x^2}$? If we restrict to $L_2[0,1]$, do the answers change?
*Orthonormal basis:* With the notions of inner product and norm, can we find an orthonormal basis of $L_2[0,1]$? Do ${cos(n x), sin(m x), ...}$ form an orthogonal basis? How do we normalize them?
*Gram-Schmidt process:* Can we use the Gram-Schmidt process to construct an orthogonal basis of $L_2[0,1]$? Knowing that ${1, x, x^2, ...}$ are linearly independent, how do we construct an orthogonal basis from them?
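As a concrete check of the last question, here are the first steps of Gram-Schmidt on ${1, x, x^2}$ in $L_2[0,1]$. Take $p_0 = 1$; then
$
p_1 = x - (angle.l x, 1 angle.r) / (angle.l 1, 1 angle.r) dot 1 = x - 1/2, quad
p_2 = x^2 - (angle.l x^2, 1 angle.r) / (angle.l 1, 1 angle.r) dot 1 - (angle.l x^2, p_1 angle.r) / (norm(p_1)^2) dot p_1 = x^2 - x + 1/6
$
using $angle.l x, 1 angle.r = 1/2$, $angle.l x^2, 1 angle.r = 1/3$, and $angle.l x^2, p_1 angle.r = norm(p_1)^2 = 1/12$. One can check directly that $angle.l p_2, 1 angle.r = angle.l p_2, x angle.r = 0$; up to scaling these are the shifted Legendre polynomials.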
#pagebreak(weak: true)
== One More Thing
In a functional analysis course, we study more general spaces, characterized mainly in the following ways:
- Metric space
$
d(dot, dot): X times X -> RR
$
- Normed space
$
norm(dot): X -> RR
$
- Inner product space
$
angle.l dot, dot angle.r: X times X -> RR
$
These three generalize the notions of distance, length, and inner product, respectively. One induces the next, $angle.l dot, dot angle.r => norm(dot) => d(dot, dot)$, that is:
$
norm(f) := angle.l f, f angle.r^(1 / 2)\
d(f, g) := norm(f-g)
$
#v(2em)
On top of this, the notion of completeness is introduced via Cauchy sequences, namely: every Cauchy sequence converges.
//
// - Cauchy sequence
// $
// forall epsilon > 0, exists N, forall m, n > N, d(x_m, x_n) < epsilon
// $
// - Convergence
// $
// exists x, forall epsilon > 0, exists N, forall n > N, d(x_n, x) < epsilon
// $
// - Completeness:
// #v(2em)
The $L_2$ space discussed in this lecture is precisely a complete inner product space (a Hilbert space).
Another thing worth mentioning is that all separable infinite-dimensional Hilbert spaces are isomorphic; that is, they share the same properties and differ only in their bases (underlying spaces).
#v(2em)
Besides the $L_p$ spaces discussed here, another very important family is the $cal(l)_p$ spaces:
$
cal(l)_p := {(x_n) in RR^oo :quad sum_(n=1)^oo abs(x_n)^p < oo}
$
Their elements
$
(x_1, x_2, ... x_n, ...) in cal(l)_p
$
are sequences, whose norm is defined by:
$
norm((x_n))_p := (sum_(n=1)^oo abs(x_n)^p)^(1 / p)
$
In particular, the inner product on $cal(l)_2$ is defined as:
$
angle.l (x_n), (y_n) angle.r = sum_(n=1)^oo x_n y_n
$
If this area interests you, you can repeat the discussion of $L_2$ above and prove that $cal(l)_2$ is a complete inner product space.
|
|
https://github.com/7sDream/fonts-and-layout-zhCN | https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/05-features/shaping/flags.typ | typst | Other | #import "/template/template.typ": web-page-template
#import "/template/components.typ": note
#import "/lib/glossary.typ": tr
#show: web-page-template
// ### Lookup Flags
=== #tr[lookup] Flags
// One more thing about the lookup application process - each lookup can have a set of *flags* which alters the way that it operates. These flags are *enormously* useful in controlling the shaping of global scripts.
One more thing worth mentioning about the #tr[lookup] application process: every #tr[lookup] can enable a set of flags. These flags change how the #tr[lookup] operates and are enormously useful for controlling the #tr[shaping] of text in #tr[global scripts].
// For example, in Arabic, there is a required ligature between the letters lam and alef. We could try implementing this with a simple ligature, just like our `f_f` ligature:
Taking Arabic as our example again: it has a required #tr[ligature] between the letters lam and alef. Let us now try to implement this as a simple #tr[ligature], just like our `f_f` one:
```fea
feature liga {
lookup lamalef-ligature {
sub lam-ar alef-ar by lam_alef-ar;
} lamalef-ligature;
} liga;
```
// However, this would not work in all cases! It's possible for there to be diacritical marks between the letters; the input glyph stream might be `lam-ar kasra-ar alef-ar`, and our rule will not match. No problem, we think; let's create another rule:
But this code does not work in all cases, because there may be #tr[diacritic]s between the two letters. For example, the input #tr[glyph] stream might be `lam-ar kasra-ar alef-ar`, in which case our rule no longer matches. No problem, we can simply create another rule:
```fea
feature liga {
lookup lamalef-ligature {
sub lam-ar alef-ar by lam_alef-ar;
sub lam-ar kasra-ar alef-ar by lam_alef-ar kasra-ar;
} lamalef-ligature;
} liga;
```
// Unfortunately, we find that this refuses to compile; it isn't valid AFDKO syntax. As we'll see in the next chapter, while OpenType supports more-than-one match glyphs and one replacement glyph (ligature), and one match glyph and more-than-one replacement glyphs (multiple substitution), it rather annoyingly *doesn't* support more-than-one match glyphs and more-than-one replacement glyphs (many to many substitution).
Unfortunately, this code fails to compile: it is not valid AFDKO syntax. As the next chapter will explain in detail, OpenType supports many-to-one #tr[glyph] replacement (such as #tr[ligature]s) and one-to-many replacement (#tr[multiple substitution]), but annoyingly it does not support many-to-many replacement.
// However, there's another way to deal with this situation. We can tell the shaper to skip over diacritical marks in when applying this lookup.
Fortunately, there is another way to handle this situation: we can tell the #tr[shaper] to skip all marks when applying this #tr[lookup].
```fea
feature liga {
lookup lamalef-ligature {
lookupFlag IgnoreMarks;
sub lam-ar alef-ar by lam_alef-ar;
} lamalef-ligature;
} liga;
```
// Now when this lookup is applied, the shaper only "sees" the part of the glyph stream that contains base characters - `lam-ar alef-ar` - and the kasra glyph is "masked out". This allows the rule to apply.
Now, when this #tr[lookup] is applied, the #tr[shaper] only looks at the base-#tr[character] part of the #tr[glyph] stream, namely `lam-ar alef-ar`, while the `kasra` #tr[glyph] is masked out. Our rule can therefore be applied.
// XXX image here.
// TODO: an image is needed here
// How does the shaper know which are mark glyphs are which are not? We tell it! The `GDEF` table contains a number of *glyph definitions*, metadata about the properties of the glyphs in the font, and one of which is the glyph category. Each glyph can either be defined as a *base*, for an ordinary glyph; a *mark*, for a non-spacing glyph; a *ligature*, for glyphs which are formed from multiple base glyphs; or a *component*, which isn't used because nobody really knows what it's for. Glyphs which aren't explicitly in any category go into category zero, and never get ignored. The category definitions are normally set in your font editor, so if your `IgnoreMarks` lookups aren't working, check your categories in the font editor - in Glyphs, for example, you not only have to set the glyph to category `Mark` but also to subcategory `Nonspacing` for it to be placed in the mark category. You can also [specify the GDEF table](http://adobe-type-tools.github.io/afdko/OpenTypeFeatureFileSpecification.html#9.b) in feature code.
How does the #tr[shaper] know which #tr[glyph]s count as marks? We tell it! Fonts contain a `GDEF` table holding #tr[glyph] definition information, i.e. metadata about the #tr[glyph]s in the font. One item of this metadata is the category a #tr[glyph] belongs to. A #tr[glyph] can belong to one of the following categories: `base`, for ordinary glyphs; `mark`, for non-spacing #tr[glyph]s; `ligature`, for #tr[ligature]s composed of several base #tr[glyph]s; and a `component` category that nobody really knows how to use. #tr[glyph]s not explicitly assigned to any category fall into category `zero`, and they can never be ignored. You can usually set these categories in your font editor, so if the `IgnoreMarks` flag does not work, open the font in an editor and check its category settings. In Glyphs, for example, you must not only put the #tr[glyph] into the `Mark` category but also into its `Nonspacing` subcategory for it to end up in the font's `mark` class. You can also define the `GDEF` table directly in feature code @Adobe.AFDKO.Fea.9.b, as sketched below.
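For instance, a minimal `GDEF` sketch using this chapter's glyph names (the class assignments here are purely illustrative):

```fea
table GDEF {
    GlyphClassDef [lam-ar alef-ar], # base glyphs
        [lam_alef-ar],              # ligature glyphs
        [kasra-ar],                 # mark glyphs
        ;                           # component glyphs (none)
} GDEF;
```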
// Other flags you can apply to a lookup (and you can apply more than one) are:
Several flags can be enabled on a #tr[lookup] at the same time, including:
/*
* `RightToLeft` (Only used for cursive attachment lookups in Nastaliq fonts. You almost certainly don't need this.)
* `IgnoreBaseGlyphs`
* `IgnoreLigatures`
* `IgnoreMarks`
* `MarkAttachmentType @class` (This has been effectively superceded by the next flag; you almost certainly don't need this.)
* `UseMarkFilteringSet @class`
`UseMarkFilteringSet` ignores all marks *except* those in the specified class. This will come in useful when you are, for example, repositioning glyphs with marks above them but you don't really care too much about marks below them.
*/
- `RightToLeft`: used only for #tr[cursive attachment] #tr[lookup]s in Nastaliq fonts; you almost certainly do not need it.
- `IgnoreBaseGlyphs`
- `IgnoreLigatures`
- `IgnoreMarks`
- `MarkAttachmentType @class`: effectively superseded by the next flag, so you almost certainly do not need this either.
- `UseMarkFilteringSet @class`: ignores all marks *except* those in the specified `@class`. This comes in useful when, for example, you are adjusting the #tr[positioning] of #tr[glyph]s with marks above them but do not really care about the marks below them; a sketch follows.
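A sketch of such a filtered lookup (the glyph and class names here are hypothetical):

```fea
@MARKS_ABOVE = [fatha-ar damma-ar];

lookup tighten-above {
    lookupFlag UseMarkFilteringSet @MARKS_ABOVE;
    # Only the marks in @MARKS_ABOVE remain visible to this lookup;
    # every other mark is skipped while matching the rule below.
    pos lam-ar @MARKS_ABOVE -30;
} tighten-above;
```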
|
https://github.com/piepert/philodidaktik-hro-phf-ifp | https://raw.githubusercontent.com/piepert/philodidaktik-hro-phf-ifp/main/src/parts/ephid/rahmenplaene/fragen_kant.typ | typst | Other | #import "/src/template.typ": *
== Basis: The Four #ix("Kantian Questions", "Kantische Fragen")
The content of the framework area is based on Kant's four fundamental questions of philosophy from his lectures on logic:#ens[Cf. Kant, Immanuel: Log, AA 9, 25.][Cf. @MBWKMV1996_RP56[p. 12]][Cf. @MBWKMV2002_RP710[p. 14]]
#orange-list-with-body[*What can I know?* #h(1fr) Epistemology, Metaphysics][
#strong[Grades 5/6] \
Philosophy investigates how we can orient ourselves in reality through cognition.
#strong[Grades 7-10] \
Is an orienting cognition of reality possible when I suffer from overabundance and specialization? On the other hand, the constant growth of knowledge enriches me.
][*What should I do?* #h(1fr) Ethics][
*Grades 5/6* \
Philosophy investigates how human beings order their lives with themselves, with others, and with nature.
*Grades 7-10* \
How can I find action-guiding rules in a world in which I experience arbitrariness and individualization? On the other hand, my radius of action has grown in many ways.
][*What may I hope?* #h(1fr) Philosophy of religion][
*Grades 5/6* \
Philosophy investigates the self-understanding of human beings, which manifests itself in knowing, acting, and hoping.
*Grades 7-10* \
How do I find a fundamental and comprehensive plan for my life when there are people who experience a lack of prospects and meaning? On the other hand, I have the chance of a long and fulfilled life.
][*What is the human being?* #h(1fr) Anthropology][
*Grades 5/6* \
Philosophy investigates the self-understanding of human beings, which manifests itself in knowing, acting, and hoping.
*Grades 7-10* \
Is it worth asking about the self-understanding of the human being in the face of the disregard and manipulability that I experience and see? On the other hand, our view of the human being has become complex and differentiated -- this applies to me as well.
]
At the upper secondary level, the four #ix("Kantian questions", "Kantische Fragen") are used as the basis for discussing the individual philosophical disciplines.#en[Cf. @MBWKMV2019_RP1112[p. 11]]
#task[Framework Plan Basis: Kant][
Name the four #ix("Kantian questions", "Kantische Fragen")!
][
- What can I know?
- What may I hope?
- What should I do?
- What is the human being?
]
https://github.com/omarcospo/resume | https://raw.githubusercontent.com/omarcospo/resume/main/README.md | markdown | MIT License | # Typst Resume
A resume template based on https://github.com/wusyong/resume.typ with minor
modifications.

|
https://github.com/yuenalaw/languageappreport | https://raw.githubusercontent.com/yuenalaw/languageappreport/main/main.typ | typst | #set align(
center
)
#import "template.typ": *
#show: ams-article.with(
title: [],
bibliography-file: "refs.bib",
)
#let formatText = (header) => {
set align(center)
set text(size: huge-size, weight: 700)
smallcaps[
#v(15pt, weak: true)
#header
#v(normal-size, weak: true)
]
}
#import "@preview/wordometer:0.1.1": word-count, total-words
#show: word-count
#align(center + top, text(15pt)[
*LANGUAGE LEARNING APP FOR INTERMEDIATE LEARNERS*
])
#align(center + top, text(12pt)[
*<NAME>*
210002926
])
#align(center + horizon)[
#image("documents/uni/uniemblem.png", width: 70%)
]
#align(center + bottom, text(12pt)[
Supervisor: <NAME>
])
#align(center + bottom)[
School of Computer Science
University of St Andrews
22nd March 2024
]
#pagebreak()
#set align(center + horizon)
#formatText("Dedication")
To my grandma, mum and dad.
#set align(left + horizon)
#pagebreak()
#formatText("Abstract")
This project presents the design, development, and evaluation of a full-stack language application curated for intermediate Chinese learners.
Research was conducted through surveys and interviews with language learners and teachers to identify the most important aspects of language learning. The final result is an app that uses natural language processing to segment video transcripts from YouTube into vocabulary words for students to study. Students can watch YouTube videos in real-time with the transcripts and their translations, create multimedia flashcards enhanced with images from Google, personal notes, audio pronunciations, and stroke order animations, as well as play short, 5-minute games to reinforce learning based on a spaced-repetition algorithm. These games include exercises such as matching words to images, testing speech pronunciation, translating sentences, fill-in-the-blank sentences, and stroke order practice.
Lastly, the app was evaluated by 37 students; the results indicated that the app was engaging, fun, and effective, with 24.3% of students rating the app a 10/10.
#pagebreak()
#formatText("Declaration")
I declare that the material submitted for assessment is
my own work except where credit is explicitly given to
others by citation or acknowledgement. This work was
performed during the current academic year except where
otherwise stated. The main text of this project report is #total-words words long, including project specification and plan. In submitting this project report to the University of St Andrews, I give permission for it to be made available for use in accordance with the regulations of the University Library. I also give permission for the title and abstract
to be published and for copies of the report to be made and
supplied at cost to any bona fide library or research worker,
and to be made available on the World Wide Web. I
retain the copyright in this work.
#pagebreak()
#set align(left + top)
#outline(
indent: auto,
)
#pagebreak()
= Introduction
Many existing language apps today cater to beginners, leaving intermediate learners struggling to find effective and personalised resources. This project aims to identify the most crucial aspects of language learning and create a full-stack application that leverages YouTube content, flashcards, multimedia, spaced repetition, and game-based learning to aid intermediate learners. Personal investigations, such as surveys and interviews, will be conducted to complement existing research.
I am currently studying Mandarin Chinese myself, and I am struggling to find resources to aid my learning. More specifically, while beginner-friendly content is abundant, there is a lack of resources tailored to intermediate learners. The content topics are generic and not personalised to my interests. Through asking other language learners, I have found that many face the same issue.
I, therefore, propose an app solution based on language learning research that seeks to integrate best language learning practices for intermediate learners of Mandarin Chinese using YouTube. Users can create flashcards for new vocabulary found in these YouTube videos, and by utilising a spaced repetition system (SRS), they can review these words through review questions provided by the app. Additionally, research will encompass my investigations through surveys and interviews, research papers, and analysis on the success of language apps such as Duolingo.
The objectives of this project are as follows:
#table(
columns: (auto, auto),
inset: 10pt,
align: horizon,
[*Objective*], [*Objective type*],
text("Create a minimal viable product of a language learning app"),
text("Primary"),
text("Transcripts can be generated from the app"),
text("Primary"),
text("Flashcards can be created from the transcripts"),
text("Primary"),
text("App can generate review questions relevant to the user"),
text("Secondary"),
text("A user evaluation form sent out to obtain user feedback at the end of the project"),
text("Secondary")
)
#pagebreak()
= Software engineering process
For this project, the Agile methodology was employed as the software engineering approach.
Agile allows for the continuous delivery of valuable software. As the end goal is to create a full-stack application with iterative feedback from interviews throughout the building process, Agile is useful for producing numerous iterations of a software solution quickly while involving a wide range of stakeholders at all phases of the development process @ieeeagile.
This approach means that a product can be tested, examined, and adjusted on an ongoing basis, rather than built as a single deliverable at the very end. Agile consists of the following practices (some are omitted here because they presume a team rather than a solo project) @ieeepaper:
+ Daily meetings (short meetings should be introduced to keep everyone up to date with progress)
+ Demo (at the end of an iteration, a working product should be demonstrated to other stakeholders)
+ Iteration planning (for example a sprint backlog, to break down requirements into smaller work items, and planning what features should be included in coming releases)
+ Iterative and incremental development (development is done in sprints. In each sprint, increments are added to a working piece of software)
+ Retrospectives (a way to reflect on what went well each sprint)
+ Task board (where the progress of tasks is visualized)
Since this project is undertaken individually, the only meetings required to achieve (1) are weekly meetings with my supervisor. In this meeting, I will discuss my progress as well as the setbacks I have faced that week, achieving (5), where I reflect on each weekly sprint.
To achieve (2), after the development of a minimal viable product (MVP), interviews will be conducted per week to obtain user feedback. After the feedback, the MVP will be re-iterated in the next sprint.
For (3), Notion, a note-taking app, will be used to track and plan new features brought up by interviewees. Each larger feature will be broken down into atomic levels so that each task is more manageable.
(4) will be achieved through weekly sprints, where the Notion page is reviewed and upcoming features planned for the sprint are tracked.
Finally, (6) will be achieved through a progress log on Notion that will track the work done every time there is a new feature or bug fixed.
#figure(
image("documents/swe/gantt.png", width: 120%),
caption: [
Initial project plan
],
)<projectplan>
The initial project plan is presented as a Gantt chart (see @projectplan). As shown, there is a lot of leeway to allow for project flexibility, because the app's development was planned to evolve depending on user feedback during the building process. As discussed below in the design and implementation sections, the project deviated from the initial plan and extended to contain many more features than initially anticipated.
The blank weeks between weeks 1 and 8 were allocated to understanding the problem space and becoming familiar with Flask and Flutter.
You can read the progress log in Appendix @progress.
#pagebreak()
= Ethics
Since this project involves the use of YouTube content, it is important to be aware of YouTube's developer policies @youtubepolicies and to consider both the ethical and legal aspects, including copyright.
I will ensure that:
+ Content queried from YouTube is used only for language learning purposes, and the app will not misuse the content in any way.
+ Content queried from YouTube will be explicitly stated, as the video will be embedded into the app.
+ To protect the rights of content creators, the app will credit the original creators of the video.
+ Videos will stop being shown if a content creator does not want their video shown on the app.
+ My app will transform the copyrighted material by incorporating it into a language-learning context.
+ My app will not allow videos to be downloaded or temporarily stored. The video will be transformed into transcripts designed just for language learning.
+ My app will only use publicly available content through the YouTube API.
This project will also require surveys and interviews from participants. Surveys will be conducted on Qualtrics and only be accessible to students within the University of St Andrews. Every participant involved will be given a consent form to fill out as well as a Participant Information sheet, which will describe how their data will be used and for what purposes. This data will be anonymized and deleted after the project submission date.
The signed ethical approval document can also be seen in the Appendix (@ethicsapproval).
#pagebreak()
= Context survey
It has been argued that languages did not necessarily evolve from speech but from the innate human instinct for communication. At first, humans did not have words; they expressed themselves with body gestures and hand movements. Only as they innovated, creating fire and inventing tools, did they begin to communicate with their mouths, and with that came the need for words @OEL. Language therefore emerged from mimicking others, which we can see from studying the brains of monkeys: the same areas of their brains light up when watching another monkey perform a set of grasping movements @MMN. Interestingly, the brain region involved when monkeys mimic each other is the same region that lights up for human language.
What we can distill from this research is that we learn languages by copying and observing each other. By just observing another’s movements, our brain can help us infer their goals and intentions. This therefore gives us meaning behind their movements. If we relate this to language, by listening to a lot of content in that language and watching their body movements, we can unconsciously infer the meaning of the words.
A video summarizes language learning into four principles @HTL:
1. Seek relevance
2. Obtain the content’s basic meaning
3. Focus on only what you can understand
4. Build it into memory
For example, you may be fluent in English, but if you were thrown into a Ph.D. study of physics and did not come from a physics background, you would also be lost in its terminology. The content has no personal relation to you; to build this knowledge into memory, you would first need to filter out the principles of the content and understand the basics. After that, you would have to work out how you can fit this new content into your existing knowledge.
Our brains work unconsciously and are constantly seeking out new patterns. To learn, our brains categorise content into groups, then abstract it away so that we can form relationships. This works because related words are stored close to each other in the brain. To exploit it, we need to keep seeing and using the content in different contexts (a method called 'interleaving') and keep creating analogies @ILL. Analogies link back to how we move knowledge into long-term memory: by creating relevance around the content.
If we take this through the lens of computer science, we can understand this idea of abstraction through how we code. If we have new words such as 'car', 'motorbike', and 'truck', we can categorize these into a class called 'vehicle'. As we build up these classes and abstract them away, we can more easily draw high-level relationships between them.
How can we apply this to achieving fluency? Fluency occurs when words fit together automatically. We do not necessarily have to think about the next word in a sentence, because our brain has already found the patterns and intuitively knows what words should come next.
<NAME>, the author of Fluent Forever @FF, discusses how we can actively use language learning principles and apply them.
1. Pronunciation
2. Get the most frequent words to learn
3. Use comprehensible input
4. Output
With pronunciation, it is important to learn the rhythm of the words and the flow of the words. At the same time, we should be training our ears and our mouths to learn how to differentiate similar-sounding words, and how to pronounce these words with our mouths. By getting the flow of the language, this applies to principle (2) of 'obtaining the basic meaning'. Here, we are understanding the gist of the language.
The next step is getting the most frequent words to learn. This helps with principles (1) and (3) - seeking relevance and focusing learning on what you can understand. Before we read a whole chunk of text, we need to create anchors in our brains to latch on to. We have to give our brain context before learning new things, which can let the brain draw connections between ideas and new words. It is also important that learners do not get bogged down by the nitty-gritty of the language; they should instead try and understand the context and overall meaning of the text. This principle is used in most textbooks; students are given the most important keywords to learn before the article, helping them prioritize certain words.
The third is using comprehensible input, relating to principle (1) (seeking relevance) and (4) (building it into memory). By choosing content that we like, we can draw connections in our brains. Wyner talks about how when we get new words, we can make them comprehensible by turning them into stories. We have previously discussed how analogies (and therefore stories) help turn new information into long-term memory. Wyner stresses how comprehensible input does not mean simply translating the words to understand it. He emphasises how we should directly link stories to images. For example, to learn the phrase 'she is', we can use the phrase 'she is a doctor' and have a picture of a woman in a doctor's suit.
Fourth, is output. Outputting is important because this is where we use the new words in different contexts. We can start to use the words in sentences that have a personal relation to us, reinforcing the words in our minds. Playing with the words in our minds also lets us deepen connections with words in that context, helping us draw links between words and start speaking fluently.
== Analogical thinking
To further investigate the importance of creating analogies in our brain, a study @APHT explored how analogical processes in human thinking and learning improved a person’s learning relational retrieval, and transfer.
The paper focuses on mapping. In simplified terms, two situations or concepts are aligned to find commonalities and make inferences between them @MOAL. This theory was previously studied in a paper published in 1989 @TCM, which aimed to create a Structure-Mapping Engine (SME), a cognitive simulation for analogical matching. There are two important aspects: support, which measures how much an inference is grounded in the analogy being made (more support from the analogy is better), and extrapolation, which measures how far the inferences go beyond what the analogy directly provides.
Only after mapping does ‘directionality’ emerge, where the meat of the understanding takes place as we explore the analogy further. An example given from the paper @MOAL is the word 'jail'. A similar word could be 'prison'. An analogy could be 'job'. Thus, if the learner was learning 'jail', instead of just showing them the word countless times, we can ask them to generate metaphors with the word and talk about their experiences or feelings with that word.
A paper on structural alignment @SAIC takes this further, exploring why analogies are so effective for memorization. As we have discussed, our mental representations are hierarchical (we prioritize certain things) and are made up of categories with relations to each other. By comparing two ideas, there is a structural alignment to find a maximal structurally consistent match between representations. The system favors interpretations that preserve a maximal connected related structure; in other words, we remember new information easier if we can draw many links to another idea. Taking this idea into language learning, we can see this in similar sentence structures, or the negation of certain words. For example, 'I like' and 'I don’t like' are different, but we can remember them due to their alignable differences. Antonyms also work the same way; 'up' is related to 'down' even though they have opposite meanings.
Intermediate learners can benefit from analogical thinking, because they have a larger vocabulary range and a better understanding of grammar, enabling them to draw better connections. A current issue intermediate learners face is the lack of speaking and listening practice students get in a classroom setting @PIL. However, it is through listening that learners can create analogies as they see the words in more contexts, and it is through speaking that learners can develop their self-confidence in that language. The self-confidence ties in with pronunciation, as students are worried that they sound too foreign.
== Context based learning and self-evaluation
The most common methods of language learning taught in schools are usually through reading. To investigate the effectiveness of reading in the target language, I explored a paper on the effect of exposure frequency on vocabulary acquisition @EEF. The research confirms that reading does serve as a significant source of vocabulary development, but in quite a surprising manner. Although the vocabulary growth is modest, it highlights that reading creates cumulative knowledge, and has a long-term positive impact on adult vocabulary growth. What is paramount is the exposure of the new word in different contexts, which allows the learners to infer word meanings. We can link this back to the idea of how our brains remember better if they work through finding patterns themselves, which is why learning languages through pure translating back to your native language is not the most effective.
If reading only contributes a modest impact on vocabulary, what strategies have been proven effective for vocabulary retention? As already discussed, vocabulary learning is enhanced when we encounter words in context. Flashcards, mnemonics, and translations are very common approaches for this, but to test this assumption, we should assess vocabulary learning through immersion. VocabulARy, a study on learning vocabulary through augmented reality (AR) @AR, is an AR application that visually annotates objects in the user's surroundings with the corresponding words. The study compared two groups: one used the VocabulARy prototype, and the other used an alternate AR system that did not show any additional visualisation of the keyword. The group shown the visualisations outperformed the other in short-term retention, mental effort, and task-completion time, and also scored significantly higher in delayed recall and learning efficiency.
Vygotsky, a plurilingual speaker, emphasises the importance of learning in social contexts, where knowledge is acquired from interacting with each other and environments @VTCD. Through this, he developed a social constructivist theory, where he believed community plays a central role in the process of 'making meaning'. This links back to our previous research where we discovered the importance of comprehensible input and using context clues to learn new vocabulary. Furthermore, utilising the community aspect of language learning enables constant assessment of one's performance. A community can help with self-evaluation, where learners can ask for feedback on their pronunciation, grammar, and vocabulary.
Interestingly, the advantages of self-evaluation can especially be seen with speaking. Speaking is an important aspect of language learning where learners focus on outputting different words in different contexts.
Speaking is known for its difficulty due to message conceptualization and articulation @ISS. In this paper, they studied the strategies students took when practicing speaking. Participants used extensive planning to structure their arguments and to select appropriate vocabulary, then continuously monitored their performance by assessing their grammar, vocabulary, and pronunciation. After, they completed a self-evaluation. What we can gauge is that a good speaker comes from the planning and thought behind their sentences and arguments, rather than just constantly outputting speech. Bound with this, is the constant assessment of their performance.
Therefore, having the ability to evaluate and assess one’s performance is crucial in language learning, as it helps a learner self-correct and identify their own mistakes.
== Mobile learning
Many language resources today are available through mobile apps. In language learning, mobile devices are great for quickly looking up new words, translating sentences, or finding answers to questions. They also provide a convenient means to access videos, music, and other content applicable to language learning. With billions of pieces of content available online, learners can be exposed to different cultures and languages whenever they want. In learning, there is a 'forgetting curve' @FC: memory of new material decays over time, but the decay slows with each well-timed repetition of the content. Thus, having access to content at any time means that learners can harness the power of repetition and use it to their advantage. This contrasts with the learning environment in schools, where students must wait each week for a lesson to review their knowledge.
A case study we can use is X (previously Twitter), which is a microblogging platform. A paper looked especially at using X to foster foreign language learning @FFLL, aiming to gain insights about learners’ perceptions of the use of X in language learning and how they feel about tweeting as an extracurricular activity throughout four weeks. X helps educators and learners benefit because the platform enhances student collaboration and interaction. Students can engage in meaningful communication and get immediate feedback, which as discussed previously, can help students learn more effectively.
== The current language apps
With this in mind, we can assess the current language-learning apps on the market. In particular, I will be discussing the big three language learning apps, Babbel, Duolingo, and Rosetta Stone.
Babbel focuses on achieving fluency through immersion in real-life dialogues @Babbel. It uses the idea of how a native learner would learn, by teaching vocabulary and grammar through practical dialogue examples in conversation. It guides the brain to connect the dots passively by learning new information based on the dialogue context. At the same time, it trains the learner’s pronunciation skills. It also emphasises the use of a spaced-repetition system, where you revisit the words in different contexts, spaced out over time.
Cognitive psychology has repeatedly shown the benefits of using short repetition practices to put new knowledge into long-term memory @EQSDP. Ebbinghaus stated that 'with any considerable number of repetitions a suitable distribution of them over a space of time is decidedly more advantageous than the massing of them at a single time'. The umbrella term for this phenomenon is the 'distributed practice effect' or 'spacing effect'.
Very early on, Ebbinghaus conducted experiments on himself to determine how to minimize the amount of time it took to relearn a set of materials. He discovered that spacing the study of simple verbal material across several days, rather than massing it in one day, resulted in fewer relearning trials. From this initial study, many others followed, covering words, sentences, and text passages.
In Glenberg's experiments, he discovered that increasing the amount of time between recall sessions benefitted retention up to a point; beyond that point, additional spacing leads to poorer retention. In other words, learners should gradually lengthen their interstudy intervals up to an optimum, and then maintain it.
Duolingo is a game-based learning app that stresses the importance of 'learning-by-doing' through interactive lessons @Duolingo. Similarly to Babbel, it uses the idea of your brain picking up patterns passively; thus, Duolingo does not teach grammar rules. It instead pushes learners to figure out the conjugation rules by themselves. By utilising AI, Duolingo adapts lessons to individuals, where the AI model tracks and adjusts the order and difficulty of exercises. The topics chosen to teach are based on school and institution standards. However, Duolingo is most known for its high engagement, due to its bite-sized lessons and gamification streaks, helping learners to stay motivated.
Finally, Rosetta Stone relies on dynamic immersion @Rosetta. The idea is to use human senses to move new words into long-term memory. For example, learners are not given translations but are instead encouraged to learn the words through the pictures. To achieve grammar, Rosetta Stone gives a few examples of a grammatical concept, and then the words the learner should focus on get highlighted. To achieve speech, they offer services that allow learners to read aloud, and as they do so, their pronunciation gets corrected.
Each language app has its unique advantages and disadvantages. Forcing your brain to draw its own connections and recognize different patterns aligns with the research discussed above, and building content based on personal relevance (as Babbel and Duolingo do) has been proven to help retain information. Babbel's spaced repetition system is also very effective, especially as it reminds you of the words in a different context every time. A common misconception of learning is to constantly repeat information with flashcards; however, not only is this boring, but the brain also starts to recognize the cards themselves and tricks itself into thinking it has a card remembered when it has not. Babbel's spaced repetition system means that the cards for the same word are not always the same, which forces the brain to recognize the words in different contexts.
Duolingo’s strength comes with the gamification aspect. Its bite-sized lessons take less than 5 minutes to do, and learners can undergo 'quests' with their friends, allowing them to push each other and support each other in learning. It also makes them feel more accountable when they miss their lessons because their progress can be shown to their friends. As a personal user of Duolingo, I like how each exercise brings in new vocab to learn, but at the same time is very easy to complete. Furthermore, Duolingo consists of badge systems and rewards, in the hopes that students become more motivated and engaged in their content. Students should be intrinsically motivated to learn @IOG (where the desire to learn comes from the student, rather than from external factors such as parental pressure), as this leads to better information retention. However, a substantial body of research suggests that the way we attempt to increase intrinsic motivation should be cautioned @EOG because tangible rewards (such as badges) can shift a student’s motivation from intrinsic to extrinsic. What they instead conclude is that gamification should be used for fast feedback to the students.
Duolingo is also famous for its microlearning aspect. Microlearning is similar to gamification, but the critical difference lies in learning goals being masked as game-like activities. The gamification encourages students to participate more in the learning activities by providing more enjoyable approaches. Duolingo is one of the most popular language apps, but how effective is it? We can back up Duolingo through a study by Fang @ASCE, which looks at English learning. The study begins by emphasising that despite the exponential growth of knowledge and information with the internet, the traditional 'classroom + textbook' learning mode has failed to satisfy people's need to seek knowledge. The micro-learning approach has become popular among college students due to the ubiquity of mobile handheld devices, such as mobile phones, and because micro-learning concentrates on brief, independent messages. College students are therefore provided with the information they need at any time.
A paper on microlearning @MLT shows that gamification, infographics, videos, apps, and social media may all be leveraged to provide this. Microlearning allows lessons to be given in a short length of time and can be accessible at any time and from anywhere. It further explores how microlearning can increase student comprehension and retention, especially when lessons are broken down into digestible pieces. In the modern day, TikTok (a platform for short videos) is extremely easy to remember because of its short, effective nature, where all important information is concisely condensed into a few seconds. This is a great example of microlearning; the only downside is that microlearning is not effective when dealing with a narrow, intricate, and complicated issue requiring an in-depth discussion.
Rosetta Stone’s strength comes from the fact it uses sensory input. Different languages may contain words that do not exist in the learner’s primary language. Thus, there is no direct translation. Therefore, images are incredibly useful for this, because it strips away the intermediary step of having your brain translate to a different language domain, where sometimes the word does not exist. Its stress on pronunciation and having a direct feedback loop is also useful in achieving fluency because many learners struggle with pronunciation the most. Learning new languages requires you to train new mouth muscles, muscles that may not be used in the learner’s primary language.
== Utilising social media
When designing a language app, we can also stretch beyond what other language apps do and take a look at social media. Social media is addictive; of the 7.91 billion people in the world as of 2022, internet users averaged 6 hours 58 minutes online per day, including 2 hours 27 minutes on social media platforms @GOR. What sets social media apart is the personalisation aspect, human connection, and the fact that people are free to exchange ideas that build on top of each other. We can see this through the most popular apps of January 2022 (TikTok, Instagram, Facebook, and WhatsApp), whose basic goal is to enable users to share and create content with each other.
The idea of social media is simple: help humans establish relationships @PPAS. As establishing relationships interferes with necessary life activities such as sleep, nutrition, and work, its overuse can be seen as any other addiction as it can dominate a person’s life. To establish relationships, people need to be able to connect and engage with others through the sharing of experiences and ideas. The definition of social media by Merriam-Webster states social media is a 'form of electronic communication (such as websites for social networking and microblogging) through which users create online communities to share information, ideas, personal messages, and other content (such as videos)'.
Drawing aspects of social media with language learning is an interesting field to explore; social media gives opportunities for students to understand more about other cultures through videos and blogs. On the other hand, there are many criticisms of using social media, one of the main reasons being addiction. In the context of language learning though, a study in Algeria investigated the effect of social media in writing @IESM. In particular, a discussion was drawn between writing formally and informally, where on social media 90% highlighted their use of slang words and abbreviations. When asked about the reasons for informality in writing, the students gave reasons ranging from the fast and easy-to-use nature, as well as it helping them to express themselves. It gave them more freedom in writing in contrast to the formal way that obliged them to follow strict rules. The paper concluded that social media did indeed have an overall negative impact on the way students write, as the relaxing environment of social media encouraged students to write through abbreviations, symbols, and slang words. In a social media context, this writing style is accepted, but inappropriate from an academic perspective.
However, I would criticize this paper for evaluating writing solely through an academic lens. There are many goals within the field of language learning; not everyone learns for the sake of academic writing. Social media's strength lies in getting quick ideas across and sparking conversation. Its personalisation also means learners can choose what and who they want to learn from; there is plentiful content for those in the academic space, and for others in entertainment.
== Summary
Many of the language learning applications today are focused on beginners, whose aim is to just familiarise themselves with the language. This means there is more emphasis on creating content around general, surface-level topics. Basic vocabulary and grammar structures are not enough to make students confident in a language. Improvement comes from constant self-evaluation and having an expansive vocabulary range and grammar bank, as well as a fast, personalised feedback loop which does not come from traditional class lessons, as many students only see their teachers once a week.
The solution to this problem is thus a mobile app (given the advantages of handheld devices for language learning) that caters to intermediate learners by utilising social media's personalisation and community aspects to generate custom content for the user; by applying techniques such as microlearning, spaced repetition, gamification, sensory learning methods, and self-evaluation (especially in the context of speaking); and by offering flashcards that promote context-based learning and analogical thinking through the ability to add images and personal notes.
#pagebreak()
= Design
To understand the language learning space, we must first identify the main pain points of learners and what is currently missing from the apps available.
I sent out a survey using Qualtrics to language learners of Asian languages and teachers in this area. The survey assessed the most commonly used language apps and identified the features users liked the most. I also wanted to identify how certain apps kept loyal users and contrast them to other language apps that people use inconsistently.
However, these questions give a false sense that creating language apps is the only solution to learning a language. There are many other resources available, such as using textbooks, attending in-person classes, and more. Thus, the questions in the survey were not just limited to apps, but also learning methods students liked to use and why.
The results are shown below. Note also that the bullet points in the infographics are comments provided by the survey-takers, who were allowed to elaborate on an opinion in writing.
Based on my survey, Duolingo emerged as the most used language learning app, followed by Anki and Memrise. Anki is a general flashcard platform, while Memrise is a language app famous for its gamification. Students mentioned that the most useful functionalities were the ability to listen to pronunciations, the gamification aspects (such as the capability to compete against friends through leaderboards and the short lessons which give a feeling of progression), as well as spaced repetition. These findings match with research seen in the context survey, but what was emphasised in the survey was the importance of User Experience (UX) and the app's User Interface (UI).
#figure(
image("documents/survey1/1.png", width:100%),
caption: [
Most popular apps
],
)
Compared to the interfaces of competing language apps, Duolingo's design is attractive due to its simple animations, clean and intuitive design, and game-like features. Those who preferred Anki generally prioritized their spaced-repetition system and its practicality, however, the percentage of those who preferred Duolingo to Anki was 75% to 17%. The customisability aspect of Anki comes from how users can create their cards, and fill them with their sentences, audio, and words. When done well, this provides an optimized experience for long-term memory retention due to its multi-sensory capabilities.
#figure(
image("documents/survey1/Vocabulary-card-Anki-2.jpg", width:100%),
caption: [
Example card creation in Anki
]
)
However, Anki may be challenging for beginners due to its complexity @AnkiProsCons. It requires a long setup time to understand the interface if a user is to design their own review deck. Thus, many learners opt to browse and find pre-made decks online instead. Yet the majority of the learning experience comes from creating your own flashcards, re-emphasised in a study aimed at promoting active learning. The strategy it introduces is called 'Flashcards-Plus' (FP) @FlashcardsPlus where, like traditional flashcards, students identify bold-faced terms from a textbook and write them on one side, with the textbook definition on the other. The FP strategy then goes further: students write a definition for the same keyword in their own words and also generate a realistic example of the key term from their own lives, which increases retention.
The study concluded that students who used FP improved their exam performance more than those who did not use the strategy. The downside of this study lies within the possibility that those being exposed to the FP strategy are already actively searching for ways to improve their grades and study habits, rather than their improved grades stemming from this new FP strategy alone.
On the other hand, this study aligns with the idea of relating new learnings to our pre-existing knowledge.
From my survey findings, users appreciated the listening features of the apps the most. Interestingly, this matches with one of the challenges identified by teachers who took the survey, which was pronunciation.
#figure(
image("documents/survey1/5.png", width:100%),
caption: [
Challenges of language learning
]
)
In the context survey, <NAME> (the author of 'Fluent Forever') emphasised that pronunciation is one of the four main pillars of language learning. It is therefore essential to include features in our app that facilitate pronunciation practice. In the language apps listed above, a common functionality is the ability to press on a word and listen to an automated recording of it. Whilst many apps stop there, Duolingo goes further by incorporating speaking exercises into its games that assess a user's pronunciation as they repeat a sentence shown on their screen.
#figure(
image("documents/survey1/duolingospeech.png", width:80%, height: 70%),
caption: [
Duolingo's speech exercise
]
)
The teachers and students also align when talking about tackling consistency. 50% of the students surveyed mentioned discipline as being one of the main obstacles in language learning. I took this further, conducting semi-structured interviews where I asked Duolingo app users with streaks over 100 days what made them consistent with the app.
They answered that they liked the competitive nature where they could compete against friends, the short lessons, and the feeling of progression. In the context survey, I discussed the advantages and disadvantages competitive learning has for students; whilst my survey results illustrate more of the benefits of competitiveness in learning, they do not give us the whole picture. Duolingo's gamification encompasses both types of learners: those who want to compete, and those who want to work together to unlock chests and experience points (XP) through a feature where XP can be combined with friends. This feature was reinforced by another survey-taker in a separate interview, who stated it was their favourite attribute of the app.
According to @MotivationDiscipline, autonomy is one of the basic human needs, which contributes to a student's motivation for learning and achievement. It is a strong predictor of student engagement. To achieve autonomy, students need to find some degree of meaningful choice and purpose in their learning.
One method to achieve autonomy is acknowledging a student's interests and building choices in a school's curricula. To understand students' interests, I invited the students to talk about their motivations for learning a language in the survey.
#figure(
image("documents/survey1/4.png", width:100%),
caption: [
Language motivations
]
)
67% of the students are motivated by travel, speaking to new people, and deepening relationships. 33% went into 'other'; they expanded on this as music, meme culture, and general knowledge.
Current language apps do touch upon these topics; however, they provide very general vocabulary. The topic of 'travel' is expansive, and it is impossible for an app to devise a curriculum relatable to every language learner. As mentioned in the context survey section, cultivating stories that are personal to each learner enables deeper thinking and memorization. Therefore, learning words with no personal value may seem 'irrelevant' and 'random'.
The question 'What do you want to improve the most in?' also gave interesting results:
#figure(
image("documents/survey1/mainlanguagegoals.png", width:100%),
caption: [
Language goals
]
)
Students were also allowed to expand on this and were questioned on what features they wanted to see on this app that would help them progress in their language goals. At the time of reporting this survey, 3 responses were given:
1. Listening to words in a sentence. It's easy to just learn the words, but [not] how the words are used
2. Spaced repetition system
3. Drills, practice and games
In further questions, students emphasised how they wanted more flexibility with their listening tasks and speech speed or converse with the app and let it correct their pronunciation. The idea of lacking sufficient real-speaking practice came up multiple times, which went hand-in-hand with how the sentences taught by the apps were irrelevant and the words they suggested to learn did not identify with them. This feedback highlights the importance of designing the app in a way that allows for highly personalised content tailored to each user's needs and preferences, rather than forcing them to follow a one-size-fits-all syllabus created by someone else.
One of my survey questions asked students what other resources they use in language learning.
#figure(
image("documents/survey1/3.png", width:100%),
caption: [
External resources
]
)
Improving speaking requires a foundational level of vocabulary in the relevant topics; this may be why app-based learners struggle to achieve 'fluency': the vocabulary taught by the app is not sufficient for their daily-life conversations or for understanding the content they consume, which is unique to every person.
Duolingo's course follows the school system, which provides a limited set of vocabulary. For example, a footballer is more likely to want to say 'I play football' than a basketball player is. The survey results show that students learn from online resources and content rather than textbooks, gaining unlimited access to niche, personalised topics. Therefore, an app that utilises this content can make learning more enjoyable, improve consistency, and help learners achieve fluency by widening their vocabulary towards the specialized words unique to them.
On the other hand, teachers were asked about what language apps could do better that help students. Some suggestions include:
1. Being able to correct when one gets it wrong
2. Using free resources such as Google
While teachers focused on the functionalities, students who were consistent with language apps mentioned more UI-centric capabilities they would like to see improved, including an easy-to-navigate, minimally distracting UI (minimal colour usage; large, intuitive icons) and the ability to customize flashcards.
Building on this, we can combine Duolingo's gamification techniques (short lessons and a sense of progression) with Anki's spaced repetition system and custom sentences to provide a more personalised learning experience. To keep the user experience fun and motivational, which assists consistency, the content provided should be unique to each user and offer relevant vocabulary. Through online video platforms such as YouTube, students can listen to content from native speakers and see how words are applied in relevant contexts.
Regarding teachers' comments on being able to self-correct, a feature should be included so that exercises are repeated if they were previously answered incorrectly. The accuracy is used to calculate a 'score' for each word, which is then fed as input to the spaced repetition system.
The survey questions thus far have been biased toward learning languages from apps, but this may not be the best approach. Therefore, I later asked the user about their favourite learning methods.
#figure(
image("documents/survey1/2.png", width:100%),
caption: [
Best learning methods
]
)
By far, in-person classes are the favourite language-learning approach. The ability to ask questions, interact with others, and get instant feedback (such as error explanations) directly from speaking the language in class is the most advantageous when learning languages. Unfortunately, many language apps fall flat here, which is why many platforms are starting to emerge where users can open question threads.
On the other hand, in-person classes can be costly and your curriculum is limited. The advantage of mobile apps is that you can learn in your own time and anywhere.
The question, therefore, is how to merge the advantages of in-person classes into the app.
Looking back at previous responses, students emphasised their wish for pronunciation correction. Like most existing language learning apps, the app should have the functionality to listen to how words should be pronounced. It can further be expanded to also prompt the user to output in that language, and assess their pronunciation with an accuracy score. Unfortunately, this will not be the same as an in-person environment where an experienced teacher can correct the learner, but it gives instantaneous feedback. The app can also facilitate a custom-learning experience by providing relevant content and vocabulary to the user that is not restricted by a syllabus.
Issues not highlighted by the survey-takers include the drawbacks of in-person learning. Students who do not contribute in class do not get to exercise their speaking muscles and thus do not reap the benefits stated above. Furthermore, students may pick up bad habits or mistakes from other students during interaction practice.
Overall, the survey has highlighted the importance of:
1. Gamification in learning with short lessons (improves motivation and consistency due to the feeling of progression)
2. Relevant content that does not just follow a general syllabus (improves autonomy and thus motivation and consistency)
3. Responsive UI that corrects users instantaneously, is easy to use, and has a clean design
4. Listening exercises to improve pronunciation and speaking
5. Interaction with native speakers
While my app currently may not achieve (5), a future iteration could use chatbots to role-play as a language buddy.
#pagebreak()
= Requirements specification
So far, we have seen the importance of personalised, relevant content, micro-learning, spaced repetition, multimedia learning through images, audio, animations, self-evaluation for speaking, and gamification. Gamification and relevant content have also been re-iterated through the initial surveys sent to language learners and teachers.
One of the main issues of language learning is not being able to find appropriate content. Content in textbooks and apps follows a particular school system, which contains some topics that may not be of interest to the user. Therefore, social media platforms can be leveraged to provide content that is more relevant to the user, due to their personalisation algorithms. A popular social media video-sharing platform is YouTube, where users can find content that is more niche and specific to their interests.
However, merely obtaining transcripts from YouTube videos does not amount to language learning. Users should be able to make flashcards out of sentences and words of interest to learn from. Flashcards should incorporate multimedia such as images, audio, and animations to support a student's analogical thinking. These flashcards would then be resurfaced to the user by a spaced repetition system, which studies have shown to be effective for long-term memory retention. Interactive learning, gamification, and microlearning have also been shown to keep up engagement and motivation and provide a sense of progression. Consequently, the app should incorporate short games with fun exercises to test the user's flashcards.
Those who took the survey also mentioned the importance of a responsive UI that corrects users instantaneously. Games can help achieve this through exercises, including listening exercises that enhance a user's understanding and speaking, as highlighted by the survey results. Self-evaluation techniques would be utilised in the speaking exercises, where users record themselves speaking and the app provides feedback on their pronunciation.
Therefore, the functional requirements for the implementation are as follows:
#table(
columns: (auto, auto, auto),
inset: 10pt,
align: horizon,
[*Functional Requirement*], [*Features required*], [*Priority*],
text("As a user, I want to obtain transcripts from interesting videos to study from."),
list(
[Connect to the YouTube application programming interface (API).],
[Transcribe the YouTube video.],
[Translate the selected YouTube video.]
),
text("Should"),
text("As a user, I want to identify certain sentences and words to make flashcards from."),
list(
[Incorporate NLP models to do word segmentation on the transcript.],
[Create a database to store the user created flashcards.],
),
text("Should"),
text("As a user, I want to be able to review the flashcards in a spaced repetition system."),
list(
[Create a spaced repetition system where the tested vocabulary gets surfaced to the user at optimal times.],
[Allow the user to create their flashcard on the app.],
),
text("Could"),
text("As a user, I want to be able to improve listening and speaking skills."),
list(
[Games that allow users to listen to the sentence pronunciation to test listening skills.],
[Games that allow users to speak the sentence to test speaking skills.],
),
text("Could"),
text("As a user, I want to be engaged with the app and have fun while learning."),
list(
[Incorporate progression aspects into the app, such as streaks.],
),
text("Could"),
)
#pagebreak()
= Implementation (Backend)
Flask, a lightweight Python-based microframework, was chosen for the backend implementation due to its simplicity and flexibility, active community, and updated documentation. Flask also utilises Python, which contains many useful libraries for word segmentation and other natural language processing (NLP) tasks.
@dbschema is a UML diagram displaying the relationships between my Flask models. A SQL database was used because it can be accessed from Flask through SQLAlchemy, a Python SQL toolkit and object-relational mapper (ORM). Flask-SQLAlchemy provides ways to interact with and gain access to the database's SQL functionalities @sqlalchemy. The ORM aspect allows for easy querying using simple Python objects and methods rather than hand-written SQL statements.
#figure(
image("documents/appdesign/db/dbschema.png", width:100%),
caption: [
UML diagram
]
) <dbschema>
@dbschema displays the relationships between these tables. The UserStudyDate table is in charge of user streaks: whenever a user finishes a game lesson, a new study_date entry is added.
The VideoDetails table contains the video information from YouTube. The model saves the YouTube video ID; a dictionary of the video's keywords and their respective images (the implementation is discussed below); the lesson_data, consisting of the video's transcript segmented into Chinese words with their pronunciations, translations, and similar sounding words; the source of the video (assumed here to be YouTube, although an expanded app could take videos from other mediums); and finally the video's title, channel, and thumbnail.
Beneath is an example JSON dictionary containing the 'keywords_img' and the 'lesson_data'. A video gets separated into different 'lessons' because transcripts can be hours long (for example, when users transcribe long-form content such as podcasts), which Stanza cannot process in one pass. Therefore, the transcript is segmented into smaller chunks before being processed and appended to the 'lessons' array.
#set align(center)
```
{
  "keywords_img": [
    {
      "img": <imgurl>,
      "keyword": "亚洲"
    },
    ...
  ],
  "lessons": [
    {
      "segment": {
        "duration": 1.291,
        "segment": "加州留学生的生活",
        "sentences": {
          "entries": [
            {
              "pinyin": <pinyin>,
              "similarsounds": ["甲胄", ...],
              "translation": [
                ["加", ["to add", "plus", ...]],
                ...
              ],
              "upos": "PROPN",
              "word": "加州"
            },
            ...
          ],
          "sentence": "加州留学生的生活"
        },
        "start": <starttime>
      }
    },
    ...
  ],
  "video_id": <videoid>,
  "source": "YouTube",
  "title": <title>,
  "thumbnail": <thumbnail>,
  "channel": <channel>
}
```
#set align(left)
There is a one-to-many relationship from VideoDetails to UserSentence, as a video contains many sentences. This table contains fields such as an auto-incremented sentence ID, the line number of the sentence within the video, the ID of the video the sentence belongs to, and finally the sentence itself, again stored as JSON containing each word's translation, pronunciation, similar sounding words, and part-of-speech (POS).
#set align(center)
```
{
  'sentence': '加州留学生的生活',
  'entries': [
    {
      'word': '加州',
      'upos': 'PROPN',
      'pinyin': <pinyin>,
      'translation': <translation>,
      'similarsounds': <otherwords>
    },
    ...
    {
      'word': '生活',
      'upos': 'NOUN',
      'pinyin': <pinyin>,
      'translation': <translation>,
      'similarsounds': <otherwords>
    }
  ]
}
```
#set align(left)
When a video gets downloaded, the sentences are processed using Stanza (discussed in more detail below, under the 'word processing' section) before being uploaded into the UserSentence table.
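For illustration, below is a minimal sketch of a Stanza pipeline for this kind of segmentation and part-of-speech tagging; the exact configuration used by the app is described in the 'word processing' section:
```
import stanza

# Download the simplified-Chinese models once, then build a pipeline
# that segments (tokenizes) text and tags parts of speech.
stanza.download("zh-hans")
nlp = stanza.Pipeline("zh-hans", processors="tokenize,pos")

doc = nlp("加州留学生的生活")
for sentence in doc.sentences:
    for word in sentence.words:
        # Prints each segmented word with its universal POS tag,
        # e.g. 加州 PROPN (matching the JSON example above).
        print(word.text, word.upos)
```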
The UserWordSentence table is used to create flashcards. A 'Word' is a character or set of characters in Chinese that the user saves to be tested later using a spaced-repetition algorithm (an algorithm that resurfaces the Word at optimal intervals to reinforce learning). In this schema, we have a foreign-key relationship to the Word being tested, the video the sentence is part of, the line of the video the sentence comes from, a personal note to help with recall, the actual sentence string to keep the Word's context, and finally an image URL that the user chooses from the frontend as the flashcard's multimedia. The relationship with Word is many-to-one, because the same Word can be tested through many different sentences, and a user can likewise use the same sentence to test many different words (there are many words in a sentence).
The Word model contains strings for the actual Chinese characters, their pronunciation (pinyin), a JSON list of similar sounding words (to help with pronunciation), and a JSON list of possible translations.
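For illustration, the Word model might be defined in Flask-SQLAlchemy roughly as follows (the column names are illustrative assumptions, not necessarily those of the actual schema):
```
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Word(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    word = db.Column(db.String, nullable=False)    # the Chinese characters
    pinyin = db.Column(db.String, nullable=False)  # pronunciation
    similar_words = db.Column(db.JSON)             # similar sounding words
    translations = db.Column(db.JSON)              # possible translations
```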
Finally, the UserWordReview model stores information for the spaced repetition algorithm.
This algorithm is inspired by Anki's implementation, in which a flashcard's recall quality is ranked from 0-5, with 5 being a perfect response. In my implementation, the quality is calculated from how many of the exercises per Word are answered correctly (as discussed later for the frontend, each Word is reviewed five times through five different exercises, so a perfect 5/5 score gives a quality of 5). The calculation also requires the number of previous repetitions of the flashcard, its previous ease factor (a floating point number produced by the last iteration of the algorithm that determines the number of days before the next review), and the previous interval at which the user saw the Word. Additionally, I added a field called 'next_review' to make querying easier: all reviews whose next_review is less than or equal to today can be fetched and immediately displayed to the user.
The algorithm outputs a new interval (the number of days until the next review), increments the number of repetitions, and calculates a new ease factor adjusted by how well the flashcard was remembered. All of this information is sent to the server after every 'lesson' is completed.
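Below is a sketch of an SM-2-style update of this kind. The constants are the classic SuperMemo-2 values that Anki builds on; they are an assumption, not necessarily the exact values used in the app:
```
from datetime import date, timedelta

def update_review(quality, repetitions, ease_factor, interval):
    # quality: 0-5 recall ranking, 5 being a perfect response.
    if quality < 3:
        # Poorly remembered: restart the repetition schedule.
        repetitions = 0
        interval = 1
    else:
        if repetitions == 0:
            interval = 1
        elif repetitions == 1:
            interval = 6
        else:
            interval = round(interval * ease_factor)
        repetitions += 1
    # Adjust the ease factor by recall quality, never letting it
    # drop below 1.3.
    ease_factor = max(
        1.3, ease_factor + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
    )
    next_review = date.today() + timedelta(days=interval)
    return repetitions, ease_factor, interval, next_review
```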
== Endpoints
In this section we discuss the URL routes that the frontend application can query.
=== Processing a video
The aim of this endpoint is to let users fetch a video's transcript from YouTube. The request is handed to a worker thread, which then performs the word processing described in the next section.
This is a POST request to `http://projectvm05.cs.st-andrews.ac.uk:8080/vid` requiring the following fields:
#set align(center)
```
{
"video_id":<videoid>,
"source":"YouTube",
"forced":<booleantruefalse>,
"title": <title>,
"channel": <channelname>,
"thumbnail":<thumbnailurl>
}
```
#set align(left)
The 'forced' field is required if a user wants to re-download a video and overwrite the existing one in the database.
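As a rough sketch, the endpoint can hand this long-running work to a background thread so the request returns immediately (the worker function below is hypothetical, standing in for the actual transcription and word-processing code):
```
import threading
from flask import Flask, request, jsonify

app = Flask(__name__)

def process_video(video_id, source, forced):
    # Hypothetical worker: fetch the transcript, segment it with
    # Stanza, and store the results in the database.
    ...

@app.route("/vid", methods=["POST"])
def vid():
    data = request.get_json()
    # Transcription and word processing run on a worker thread so
    # this POST request can return straight away.
    worker = threading.Thread(
        target=process_video,
        args=(data["video_id"], data["source"], data.get("forced", False)),
    )
    worker.start()
    return jsonify({"status": "processing", "video_id": data["video_id"]}), 202
```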
=== Obtain a video
This endpoint aims to allow users to see the JSON data for a particular video. This will query the VideoDetails table to obtain the relevant row with a particular video_id. At the same time, it will also query the UserSentence table to get all the relevant sentences related to this video_id.
By combining these two queries, we obtain the mapping for the video's keywords and their image URLs, and all the sentence data for the video, where each word contains information about their pronunciation, translation, POS, and similar sounding words.
This is a GET request to `http://projectvm05.cs.st-andrews.ac.uk:8080/getlesson/<video_id>`.
=== Add a study date
This endpoint aims to update the UserStudyDate table so that we can later calculate the study streak of the user. Whenever this endpoint is called, the server will calculate the current time in Europe/London time, to maintain consistency. Then, it will check if the study_date already exists in the table; if not, it will add a new entry to the table.
This is a GET request to the endpoint `http://projectvm05.cs.st-andrews.ac.uk:8080/addstudydate`.
=== Get study streak
This endpoint aims to motivate a user by supplying the current study streak of the user. To achieve this, we first query all the unique study dates from the UserStudyDate table and order them descending from today.
In the event the user just opened the app, they would expect to see their streak starting from yesterday. Thus, we start from yesterday's date and iterate through the study dates until we find a gap larger than one day (the dates are no longer consecutive). When done, we check whether a study_date equal to today exists; if so, we add one to the current count.
This is a GET request to the endpoint `http://projectvm05.cs.st-andrews.ac.uk:8080/getstreak`.
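A minimal sketch of the streak logic just described, assuming the query returns unique `date` objects in descending order (the function name is illustrative):

```python
from datetime import date, timedelta

def get_streak(study_dates):
    """study_dates: unique dates, ordered descending from today."""
    today = date.today()
    streak = 0
    expected = today - timedelta(days=1)  # start counting from yesterday
    for d in study_dates:
        if d == today:
            continue  # today is handled separately below
        if d != expected:
            break  # gap larger than one day: no longer consecutive
        streak += 1
        expected -= timedelta(days=1)
    if today in study_dates:
        streak += 1  # the user has already studied today
    return streak
```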
=== Get video library
This endpoint aims to query all of the previously processed videos for the user. It is a simple query that obtains all videos from VideoDetails and returns this to the user.
This is a GET request to the endpoint `http://projectvm05.cs.st-andrews.ac.uk:8080/getlibrary`.
=== Get cards to review today
This endpoint aims to obtain all the flashcards to be tested depending on the spaced repetition algorithm (as previously discussed).
To achieve this, we join the UserWordReview and Word tables on word_id and filter all reviews where the next_review field is less than or equal to today.
When done, we loop through each review and its word. Here, the word is isolated and does not come with the sentence it is part of, giving the user no context. Therefore, logic is required to query the UserWordSentence table for all the sentences containing the specific word_id being tested and pick a random sentence out of all those options. The result is used to obtain the relevant UserSentence entry.
Now, we can obtain all relevant information to test the user with - the image related to that word, a sentence the word appears in, and the personal note aligned with that word. Along with this is the review information, such as the last_reviewed fields, repetitions (the number of times this word has been used to test the user), its ease factor, and other details regarding the spaced-repetition algorithm.
This is a GET request to the endpoint `http://projectvm05.cs.st-andrews.ac.uk:8080/getcardstoday`.
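The query could look roughly like the following SQLAlchemy-style sketch, reusing the model names from the schema section; the exact query shape in the app may differ:

```python
import random
from datetime import date

from models import UserWordReview, UserWordSentence, Word  # the models described above

def get_cards_today(session):
    # Join reviews to their words and keep only cards due today or earlier.
    due = (session.query(UserWordReview, Word)
                  .join(Word, UserWordReview.word_id == Word.id)
                  .filter(UserWordReview.next_review <= date.today())
                  .all())
    cards = []
    for review, word in due:
        # A word alone has no context: pick one of its saved sentences at random.
        sentences = (session.query(UserWordSentence)
                            .filter(UserWordSentence.word_id == word.id)
                            .all())
        sentence = random.choice(sentences) if sentences else None
        cards.append({'word': word, 'review': review, 'sentence': sentence})
    return cards
```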
=== Update the spaced repetition (SRS) system
When a user completes a game lesson consisting of five words, the review information for each of those cards needs to be updated. Instead of sending five separate network requests to the server, one per word, it is more efficient to batch them: multiple card updates are sent in a single network request, minimising overhead and improving performance.
On the server side, this is achieved by iterating through all the words, obtaining each one's word_id, number of repetitions, previous ease factor, previous word interval, and the quality of how well the user recalled the word, calculated by the frontend. These parameters are passed to the `update_user_word_review` function in the ModelService class, which feeds them into the spaced repetition algorithm; each word obtains a new ease factor, repetition count, interval, and next review date. Finally, this gets updated in the UserWordReview table.
This endpoint is called via a POST request to `http://projectvm05.cs.st-andrews.ac.uk:8080/batchupdatereviews` with the fields:
#set align(center)
```
[
{
"word_id": <wordid>,
"last_repetitions": <lastrepetitions>,
"last_ease_factor": <lasteasefactor>,
"word_interval": <wordinterval>,
"quality": <quality>
},
...
]
```
#set align(left)
where quality is a score from 1-5 based on how many of the exercises the user got correct.
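Continuing the Flask sketch above, the batching route might look as follows; the document specifies the `update_user_word_review` function in ModelService, while the keyword-argument names and the `model_service` handle are assumptions:

```python
@app.route('/batchupdatereviews', methods=['POST'])
def batch_update_reviews():
    items = request.get_json()  # the list of card updates shown above
    for item in items:
        # Feed each card's previous state into the spaced repetition algorithm.
        model_service.update_user_word_review(
            word_id=item['word_id'],
            repetitions=item['last_repetitions'],
            ease_factor=item['last_ease_factor'],
            interval=item['word_interval'],
            quality=item['quality'],
        )
    return jsonify({'status': 'ok'})
```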
=== Creating flashcards
When creating a flashcard, a new record must be inserted into the Word table with its relevant pinyin (pronunciation), similar-sounding words and translations. This is done after checking that the word does not already exist.
After this, we must create a new UserWordSentence where the word_id is equal to the id of the Word just created, and initialise the review information for this word with the default values of last_reviewed being the current date, repetitions as 0, ease_factor as 2.5 to follow the spaced repetition algorithm, word_interval as 1 and the next_review set to be the same day.
This is a POST request to `http://projectvm05.cs.st-andrews.ac.uk:8080/addnewreview` with the following fields:
#set align(center)
```
{
"word": <word>,
"pinyin": <pinyin>,
"similar_words": <similarsoundingwords>,
"translation": <translation>,
"video_id": <videoid>,
"line_changed": <linechanged>,
"sentence": <sentence>,
"note": <personalnote>,
"image_path": <imageurl>
}
```
#set align(left)
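Sketched with the same SQLAlchemy-style models, the creation logic might look as follows; the column names are inferred from the payload fields above and are assumptions:

```python
from datetime import date

def add_new_review(session, data):
    # Reuse the Word row if these characters were saved before.
    word = session.query(Word).filter_by(word=data['word']).first()
    if word is None:
        word = Word(word=data['word'], pinyin=data['pinyin'],
                    similar_words=data['similar_words'],
                    translation=data['translation'])
        session.add(word)
        session.flush()  # populate word.id before it is referenced below
    session.add(UserWordSentence(word_id=word.id, video_id=data['video_id'],
                                 line_changed=data['line_changed'],
                                 sentence=data['sentence'], note=data['note'],
                                 image_path=data['image_path']))
    # Default review state: due again today, SM-2 starting ease factor of 2.5.
    session.add(UserWordReview(word_id=word.id, last_reviewed=date.today(),
                               repetitions=0, ease_factor=2.5,
                               word_interval=1, next_review=date.today()))
    session.commit()
```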
=== Updating flashcards
There are two ways to update a flashcard. The personal aspects of a flashcard are its image and the note written by the user; both are updated via simple POST requests. For updating the image URL, the endpoint is `http://projectvm05.cs.st-andrews.ac.uk:8080/updateimagepath`. Similarly, the endpoint for updating the note is `http://projectvm05.cs.st-andrews.ac.uk:8080/updatenote`.
The fields required for `updateimagepath` are as follows:
#set align(center)
```
{
"video_id": <videoid>,
"word_id": <wordid>,
"line_changed": <linenumber>,
"image_path": <updatedimagepath>
}
```
#set align(left)
The fields required for `updatenote` are as follows:
#set align(center)
```
{
"video_id": <videoid>,
"word_id": <wordid>,
"line_changed": <linenumber>,
"note": <updatednote>
}
```
#set align(left)
The fields 'video_id', 'word_id' and 'line_changed' are required to uniquely identify a row in the UserWordSentence table.
== Worker threads
The worker threads are in charge of processing the video transcripts.
An interesting aspect of Chinese is that there are no spaces between words. Word segmentation is thus difficult, as we cannot simply split sentences on the whitespace character. Additionally, Chinese characters can carry meaning by themselves or combine with other characters to form different words.
One example is given in an article on Chinese word segmentation @chinesewordseg. The phrase '你们研究所有十个图书馆' can have multiple meanings depending on which characters you combine.
One interpretation is:
你们('you')/研究('to study')/所有('all')/十('ten')/个(classifier)/图书馆('library'), meaning 'you go study all the 10 libraries!'.
Another interpretation could be:
你们('you')/研究所('institute')/有('to have')/十('ten')/个(classifier)/图书馆('library'), meaning 'your institute has 10 libraries!'
Therefore, I decided to use a library for Chinese segmentation. I tried both Jieba and Stanza, but settled on Stanza due to its more advanced features, such as part-of-speech (POS) tagging (categorising a word as an adjective, adverb, etc.), lemmatisation (finding the word's root), as well as segmentation.
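For illustration, a minimal Stanza pipeline segmenting the ambiguous sentence above; which of the two readings is produced depends on the model:

```python
import stanza

stanza.download('zh-hans')  # one-off model download
nlp = stanza.Pipeline('zh-hans', processors='tokenize,pos,lemma')

doc = nlp('你们研究所有十个图书馆')
for word in doc.sentences[0].words:
    # Each token comes with a universal POS tag and a lemma.
    print(word.text, word.upos, word.lemma)
```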
=== Video processing
Stanza @stanzapackage was incorporated into the app through a YouTubeHelper class, whose functions initialise Stanza, translate the YouTube transcript into simplified Chinese, process each segmented word to obtain its POS, pinyin (pronunciation of the Chinese word), possible translations and similar sounding words, and split the transcript into manageable chunks for Stanza to compute.
Chinese characters can be written in two forms: traditional and simplified. While the majority of Chinese speakers use simplified characters, some transcripts from YouTube are only available in traditional Chinese, as seen in videos from Taiwanese speakers. Therefore, I used a library called HanziConv @hanziconv to convert all Chinese characters between the two forms. Once the whole transcript is in simplified form, the phrases in the transcript are traversed and Stanza is used to segment each word. Each word then gets passed to a function called 'process_words', which uses a library called hanzidentifier @hanzidentifier to check that the word is valid Chinese, before calling a pinyin library @pinyinlib to obtain the word's pronunciation and possible translations. Obtaining similar sounding words is done in a similar manner, using the dimsim library @dimsim. All of these libraries are readily available as Python modules.
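A hedged sketch of how these libraries might compose into a 'process_words'-style helper; the guards and the return shape are assumptions (dimsim, notably, only supports two-character words):

```python
import dimsim
import hanzidentifier
import pinyin
import pinyin.cedict
from hanziconv import HanziConv

def process_word(word):
    word = HanziConv.toSimplified(word)
    # Skip tokens with no Chinese characters (numbers, punctuation, etc.).
    if not hanzidentifier.has_chinese(word):
        return None
    return {
        'word': word,
        'pinyin': pinyin.get(word),                        # e.g. 'nǐhǎo'
        'translation': pinyin.cedict.translate_word(word),
        'similar_words': (dimsim.get_candidates(word, mode='simplified')
                          if len(word) == 2 else []),
    }
```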
From the lens of scaling this app, it would be beneficial to use caching to prevent repetitive calls for common words. Constantly calling words such as 'the' or 'he/she' introduces a lot of overhead. Thus, Redis, an in-memory data structure store, was used to cache the pinyin, translation, and similar word requests to speed up processing.
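A minimal caching wrapper around the sketch above, assuming a local Redis instance; the key scheme and the one-day TTL are illustrative:

```python
import json
import redis

cache = redis.Redis(host='localhost', port=6379, decode_responses=True)

def cached_process_word(word):
    cached = cache.get(f'word:{word}')
    if cached is not None:
        return json.loads(cached)  # cache hit: skip the expensive lookups
    result = process_word(word)    # the sketch above
    # Cache for a day, so common words stop hitting the libraries repeatedly.
    cache.setex(f'word:{word}', 86400, json.dumps(result))
    return result
```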
Later on in the frontend, we discuss the implementation of syncing the YouTube video to the captions and their respective translations and pinyin. This information is prepared in the main logic of the YouTubeHelper class, which saves the timestamp when each phrase is spoken, as well as its duration, utilising the data supplied by the YouTube API itself.
Additionally, the YouTubeHelper class utilises additional APIs such as the TextRazor API @textrazor to get the keywords for the transcript, as well as the PyUnSplash API @pyunsplash to get corresponding images for these keywords. These keywords are displayed in descending order of frequency within the transcript. Regarding the PyUnsplash API, the image URLs are saved alongside the transcript data, such that the frontend can look it up via a simple network request supplied by Flutter. To obtain the YouTube transcript itself, a YoutubeTranscriptApi @youtubetranscript was utilised. Additional logic was implemented to ensure that only videos with Chinese captions could be processed, to avoid errors.
A downside of relying on these APIs is the rate limits imposed by their providers. PyUnsplash, for example, limits image queries to 50 images per hour. Therefore, if the app were to scale to more users, an alternative API may be needed, or a paid plan adopted, to ensure the best user experience. Otherwise, some videos displayed to the user will not contain images of the transcript's keywords, which may hinder learning.
=== Obtaining YouTube transcripts
The worker is also in charge of dealing with YouTube videos. When a user requests to process a new YouTube video, this is passed to Celery, a distributed task queue system of multiple workers and brokers that enables high availability and horizontal scaling @celery. This architecture is shown in @architecture. Celery was used due to the long processing time of Stanza, which requires asynchronous work outside the usual HTTP request-response cycle that the rest of the Flask backend handles.
Thus, when the backend route processes a video, this gets taken up by a Celery worker tasked with video processing as explained in the above step.
Celery workers are also used for obtaining the keywords and their image URLs from PyUnsplash due to the long waiting time. If a user were to wait for a response, the user would most likely quit the app due to a bad user experience. Both of these steps are achieved from the aforementioned 'YouTubeHelper' class.
Once both these steps are done, the data gets added to the VideoDetails table. Celery provides methods such as 'group' that allow tasks to be executed in parallel, depending on the number of worker threads available. In my system, I have provided 2 threads; for the minimum viable product (MVP), only one user can use the app at a time, so the maximum number of threads open at a time would be 2 (one for obtaining the keywords and their images, another for the Stanza word processing).
A chord function, built into Celery, is also used such that after these two tasks execute in parallel, the obtained data gets immediately added to the VideoDetails table.
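A sketch of that group-plus-chord arrangement; the broker URLs and task names are illustrative, and a chord needs a result backend so the callback can collect both results:

```python
from celery import Celery, chord

celery_app = Celery('worker',
                    broker='redis://localhost:6379/0',
                    backend='redis://localhost:6379/1')

@celery_app.task
def process_transcript(video_id):
    ...  # Stanza segmentation, pinyin, translations

@celery_app.task
def fetch_keyword_images(video_id):
    ...  # TextRazor keywords plus Unsplash image URLs

@celery_app.task
def save_video_details(results, video_id):
    ...  # combine both results into one VideoDetails row

video_id = '<videoid>'  # placeholder, as in the payload examples above
# Run both tasks in parallel; the callback fires once both complete.
chord([process_transcript.s(video_id),
       fetch_keyword_images.s(video_id)])(save_video_details.s(video_id))
```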
Thus, the user can utilise the additional features of the app as the video processing tasks run in the background.
== Architecture & Hosting
The backend architecture consists of different services: the main Flask backend, the SQL database, NGINX, and the Celery workers. One of the primary challenges in implementing the backend was figuring out how to combine all services to enable seamless communication among them. Each Celery worker receives tasks from the Flask app and thus requires its own instance of the Flask app context. Furthermore, each Celery worker must have its own connection to the SQL database to add new videos to the VideoDetails table.
Therefore, the Celery service contains some repetition of code from the Flask app service, such as the database models. While this code repetition is a common trade-off associated with microservices architecture, it is a necessary step to enable the decoupling of logic. Microservices allow for better scalability and fault tolerance compared to monolithic applications: if one service goes down, the remaining services continue running @microservices.
The next step is to containerize the entire backend so that from the frontend's perspective, it only communicates with a single service. This is achieved through the orchestration tool, Docker Compose, which allows us to run multi-container applications. With a single command and a configuration file, all the microservices can be created and started simultaneously @dockercompose. Docker enables the creation of a portable environment, ensuring that the application can run consistently across any server.
Finally, this backend must be hosted in a manner that allows any frontend client to access its endpoints. The Docker Compose network was deployed on the University of St Andrews' virtual machines. This deployment ensures that any device connected to the University's eduroam network can access the aforementioned endpoints.
#figure(
image("documents/appdesign/architecture/architecture.png", width:120%),
caption: [
High-level architecture
]
)<architecture>
#pagebreak()
= Implementation (Frontend)
== Flutter
The decision to create a mobile app over a web application was made because of its accessibility. Although the backend will be hosted online (on a virtual machine supplied by the University), certain data will be cached locally on the mobile device itself. In contrast, a website requires a constant WiFi connection and inhibits practicing a language on the go. Adding to the advantages of mobile apps explored in the context survey, mobile phones can help students draw better connections between vocabulary due to their portable nature. With the case of learning the word 'coffee' in a coffee shop, having a mobile app readily available at any location is very beneficial, helping to provide a deeper and more authentic learning experience @MALL.
In another case, Lu and her colleagues (2014) designed a mobile app @TAT presenting Chinese characters along with pinyin (Chinese pronunciation) and illustrations on the stroke order in writing Chinese characters. Additionally, games such as 'Pinyin Match' and 'Hearing and Match' were implemented, with gamification elements such as leveling up and competition. The final results showed that students positively engaged with the mobile app to practice and learn Chinese characters. Teachers also reported that the mobile app accommodated a variety of students' learning abilities.
After the study, further recommendations for the app were recorded, such as providing animated hints on the correct stroke order as students wrote on the screen, offering instant feedback from the games, and archiving and organising learning artifacts as the students' learning portfolios.
These recommendations are interesting as they relate to the teachers' results from the initial survey conducted. Teachers thought that modern language apps today lacked the ability for students to self-correct which could be helped by providing animated hints on stroke order and instant feedback from games.
The same paper also mentioned that mobile apps provided opportunities for learners to practice the language outside the classroom. One example is when students were tasked to take appropriate photos to interpret certain idioms - these students then reported that it enhanced their understanding of these Chinese idioms.
My initial intention was to use Unity because of the stress on gamification in the initial surveys. Unity has been a cross-platform game engine since 2005 and is popular with many mobile game developers. Using large language models (LLMs) to create in-game characters that the app user could talk to was proposed. However, we are unable to verify an LLM's accuracy: a user may pick up something wrong from the LLM, which is detrimental to their learning. Instead, the learning in the app should be similar to Anki, where there is a set answer to check against. The learning curve for Unity was also steep: not only would art assets need to be created (you build your own models to put into your game), but Unity is not a very friendly platform for customisation, for example adding extra packages beneficial to a language app.
Thus, I looked into other app development platforms, such as Android Studio and Xcode. Both are native to their platforms (Android and iOS respectively), which means that scaling the application to a large audience would be difficult. Luckily, I came across a cross-platform alternative, Flutter.
Flutter brings along many additional advantages. Flutter contains many readily available widgets that allow developers to focus more on the application logic rather than the user interface. These widgets are available on different OS versions and thus will bring fewer compatibility issues when distributing the app.
The active Flutter community means that packages are constantly updated. As seen later in this paper, features such as tracking a user's stroke to learn a character's stroke order are an example of a pre-made package that developers can use instead of creating the feature from the ground up, allowing for a faster and simpler development time.
== Frontend architecture
Before building Flutter apps, it is crucial to determine their structure. A typical Flutter app follows a four-layer architecture: the presentation layer, application layer, domain layer, and data layer.
Starting from the bottom, the data layer represents the data sources, such as querying JSON from the backend. The domain layer processes this JSON data and converts it into models that the front end can use. Repositories play a key role in handling data serialization and data parsing. The application layer contains services responsible for the application logic. These services can access various repositories. Finally, the presentation layer consists of the widgets and controllers that deal with the interface.
This approach is great for very simplistic apps. When dealing with more complex applications, we need to take this architecture further and think about a feature-first or layer-first approach @flutterstructure. This is because scalable apps consist of a lot of different features. We can think of a feature as an action a user must take to achieve a goal.
A layer-first approach would look something like this:
#set align(center)
```
src
presentation
feature1
feature2
application
feature1
feature2
domain
feature1
feature2
data
feature1
feature2
```
#set align(left)
However, one of the pitfalls of this approach is that when adding a new feature (say, feature3), we would have to make changes in every layer folder.
Instead, we can take a feature-first approach, as follows:
#set align(center)
```
features
feature1
presentation
application
domain
data
feature2
presentation
application
domain
data
```
#set align(left)
This is more logical: when adding a new feature, we can focus on just one folder, and it is easier to make changes to a single feature.
The next step is therefore to decide how to split the app into the different features.
== Frontend features
Based on the user feedback and previous research, we have identified the importance of context-based learning, using multimedia to reinforce understanding of words, the usefulness of spaced repetition, and how gamification can increase a user's motivation and learning consistency.
To achieve context-based learning, obtaining YouTube videos for users to learn would be beneficial as users can understand and 'shadow' the native YouTuber in real time. Shadowing is a technique used by language learners to bridge the gap between listening and speaking. As a learner listens to speech, they simultaneously repeat what they attended to.
Shadowing brings the benefits of bringing a learners' attention to the phonological aspects of what they hear, rather than the meanings, as there is very little time lag @shadowing. The same study showed that text-presented shadowing, where learners shadow together with a written script of a target passage, may improve reading skills and possibly pronunciation.
The disadvantage of shadowing is that a learner cannot hear themselves speak, as their attention is on listening to input and reproducing it orally. Thus, the paper emphasises the importance of learners recording themselves for self-evaluation, reiterating what we have seen in the context survey.
Thus, providing YouTube transcripts that sync with the videos allows learners to practice shadowing. Users should be able to search for videos and save videos so that the learner can re-visit the same video. Later on, we also touch upon self-evaluation through recording the user's voice in the provided games.
Using multimedia combined with spaced repetition is mainly found in flashcard apps, such as Anki. Flashcards typically contain a sentence or a keyword; when flipped, the translation is displayed. For a flashcard to be fully utilised, images, sound and animations should be incorporated into it. Therefore, my app contains a user flow that allows users to design their own flashcards by choosing images online, listening to new words, and seeing animations of their stroke order.
Gamification directly correlates with a user's engagement in language learning, motivation, and consistency. Learners should undergo exercises that use as many senses as possible, through visual, audio, and kinetic stimulation. My app will contain exercises that use all aspects of language (speech, writing, reading, translating), broken down into small, gamified lessons.
Lastly, learners mentioned their struggle with staying consistent with their learning. To mitigate this, the app should include features such as streaks and progress tracking of how many lessons they have completed that day.
Overall, the app would be split into 5 different features:
1. lessonoverview: this feature deals with obtaining all the videos you want to study from and displaying the transcripts and keywords to the user.
2. makereviews: this feature deals with creating new flashcards and updating flashcard images and notes.
3. spacedrepetition: this feature deals with obtaining all the words to be reviewed that day, updating all the reviews, and application logic dealing with the game exercises.
4. useroverview: this feature deals with calculating streaks and tracking a user's progress for that day.
5. youtubeintegration: this feature deals with showing the YouTube player and UI aspects for searching for YouTube videos.
#figure(
image("documents/appdesign/architecture/flutterpod.png", width:110%),
caption: [
Example of lessonoverview feature
]
)<flutterpod>
Riverpod providers wrap around these services, which controllers in the presentation layer access. The arrangement allows widgets within the presentation layer, which rely on specific controllers, to listen to state changes and automatically update the interface.
For example, the VideoController accesses methods in the VideoService to obtain the previously processed videos. Once obtained, this gets put into a Library model (a list of all the videos they have previously processed), which Riverpod watches for any changes. If any changes are detected, the widget associated with this controller refreshes automatically.
Riverpod not only serves as a state management framework but also facilitates reactive caching to easily update the UI. Additionally, by catching programming errors at compile time, Riverpod helps developers maintain robust and reliable code @riverpodprovider.
== MVP
Now that the backend has been implemented, our next steps involve creating a Minimum Viable Product (MVP) for user feedback. We will evaluate the MVP design using Nielsen's 10 usability heuristics @NielsonHeu.
1. Visibility of system status (the design should always keep users informed of what is going on, through appropriate feedback)
2. Match between the system and the real world (words, phrases and concepts should be familiar to the user)
3. User control and freedom (users should have a clearly marked 'emergency exit' to leave an unwanted action without hassle)
4. Consistency and standards (the app should follow platform and industry conventions)
5. Error prevention (best designs carefully prevent errors in the first place)
6. Recognition rather than recall (minimise the user's memory load by making elements and options visible, i.e. help in context, rather than giving a long tutorial)
7. Flexibility and efficiency of use (shortcuts may speed up the interaction for the expert user. Examples are keyboard shortcuts, touch gestures and customization)
8. Aesthetic and minimalist design (interfaces should not contain information that is irrelevant or rarely needed)
9. Help users recognise, diagnose and recover from errors (error messages should be expressed in plain language and constructively suggest a solution)
10. Help and documentation (perhaps provide some documentation to help users understand how to complete their tasks)
From @mvp, the original MVP design, several areas where the app falls short are apparent. Regarding heuristic (1), the home page with the animated gif holding a review sign is a pressable button to show a user's review lessons for that day. The button is unlabelled, leading to confusion for users. The home page also does not succinctly communicate what actions users can take. This is also seen in the lesson overview page, where despite having a box with a camera icon and another text box to add a personal note, interviews have shown that users still get confused with the meaning of these boxes. These elements require better contextual cues or labeling to enhance user understanding.
#figure(
image("documents/appdesign/design/original.png", width:130%),
caption: [
MVP design
],
)<mvp>
Regarding the second heuristic, the words and phrases may be unfamiliar to the user. The main issue of the app is that it assumes that the user knows what to do. For instance, consider the rightmost page that supplies the pinyin (pronunciation), translation, and similar sounds to that word. Due to the absence of clear labels, users may struggle to grasp its purpose.
Heuristic (3) is also a problem because of the absence of an instantaneous escape route when a user accidentally follows a specific navigation path. To get back to the home screen, the user has to repeatedly press the back button. Sometimes, the back button in the app bar does not even exist. Thus, a navigation bar would be a suitable solution to this issue.
#figure(
image("documents/appdesign/design/searchyoutube.png", width:100%),
caption: [
Initial video search
]
) <initialsearch>
On the home page, the search bar widget (@initialsearch) facilitates searching for new YouTube videos to study. When triggered, a pop-up shows a widget with a YouTube thumbnail, name, and channel.
However, the exit button is on the top left, which goes against convention (the exit button is typically seen on the top right). This design therefore goes against Nielsen's 4th heuristic, which emphasises the importance of consistency and standards. Building on top of this, there is also a lack of exit buttons in general. When a user hits an error, it is difficult to navigate away from it, leading to user frustration. Overall, this impacts Nielsen's 5th, 9th, and 10th heuristics.
Finally, for the 6th and 7th heuristics (minimising the user's memory load by making options visible and introducing shortcuts to speed up user interaction), additional icon buttons can be used. These icon buttons must be big enough to be visible but also fit the UI aesthetic (Nielsen's 8th heuristic).
Taking this a step further, I also wanted to hear what other students had to say. With this MVP, I conducted semi-structured interviews and conversations.
Most responses mentioned the lack of an intuitive UI, such as certain actions not being clear, widgets being too clustered, and the lack of a uniform structure and colour scheme. However, they enjoyed the game aspect and the ability to download transcripts from YouTube to study from, mentioning how certain apps today cannot learn from online articles and content.
#figure(
image("documents/interviewresponses/duolingolack.png", width:110%),
caption: [
Lack of utilisation of online resources
]
)
In the next iteration of the UI design, I drew from online inspirations and apps that I use personally and created inspiration boards. By using these ideas, I would further enhance my UI, iteratively conduct interviews, and improve upon it. One example is shown below, where I used one interesting design to re-implement the flashcard creation page.
#figure(
image("documents/appdesign/inspiration/inspirationboard1.png", width:110%),
caption: [
Inspiration board example
]
)
#figure(
image("documents/appdesign/design/oldreview.png", width:160%),
caption: [
An iteration of make review page
]
)<makereview>
Although the colour theme has improved, feedback noted the lack of a focal point on the page. The eye is not immediately drawn to any area of the screen, and it is not easy to know where to start looking. This also highlighted the importance of balancing usability and aesthetics.
Another critique was the lack of headers and instructions on the page. For example, on the left-most side is a list of similar-sounding words, to help users with pronunciation (by identifying similar-sounding words, users can be more aware of common pitfalls and mitigate further mistakes in pronunciation down the line). However, from a user perspective, this just looks like a list of random words. It would be best to label what this is exactly.
In the same interview, I asked them which app's design they liked the most, and why. They answered with Headspace's UI design (Headspace is a meditation app).
#figure(
image("documents/appdesign/inspiration/headspace.png", width:110%),
caption: [
Headspace UI design
]
)
More specifically, the UI is spacious and calm to look at, which fits their branding. This can be seen in contrast to my initial home page design, where there were too many functionalities on one page. This creates a similar issue to @makereview, where there is no particular focus on the page, leading to bad usability.
From this conversation, I concluded that each page should lead a user down a particular path through the app and be intuitive. It should follow common practices such as having the close button on the top right rather than the top left, as seen in @initialsearch.
#figure(
image("documents/appdesign/design/home.png",width:120%),
caption: [
Home and Video pages
],
)
On the left-hand side is the initial home page design, consisting of a search bar functionality, streak number, card widget for all the flashcards to review today, and the pre-downloaded transcripts of searched YouTube videos.
When the user searches for a video and selects one they want to study, the resulting video is prominently displayed at the bottom of the home screen. In the first MVP, you may notice a countdown: every 30 seconds, the frontend polls the server. To deliver the learning content, the transcript is extracted from the selected YouTube video, the Stanza Natural Language Processing (NLP) model is applied to analyse it and identify key words and phrases, and external APIs are queried to obtain image links corresponding to these terms. Due to this complexity, the entire process is not instantaneous, so a 30-second polling mechanism was implemented to strike a balance between responsiveness and server load. In subsequent iterations, users manually trigger the refresh button only when they expect a new video to have been downloaded, reducing the server load.
Additionally, the app has been revamped to contain navigation to a home page and a video page, instead of cluttering all the information together, separating the different features of the app and allowing each page to have a distinct purpose. Interviews highlighted that the primary focus should be on the lessons scheduled for the day, rather than the entire video library. Consequently, the home page now prioritises a user's daily lesson progress and streak count, and provides a clear overview of exercises to complete.
Duolingo, as mentioned previously, utilises small, bite-sized lessons so that users can learn languages on the go and more conveniently. Inspired by this, each lesson shown on the home page only tests 5 new words, allowing users to do 5-minute lessons whenever they have time. Smaller chunks of learning have also been shown to increase motivation and discipline. The large fire icon at the top was also introduced in response to Nielsen's 6th heuristic, giving the app a more decluttered look.
The right-most image displays the new search widget after typing the YouTube ID into the search bar on the video page. There is a larger thumbnail, and the exit button is on the top right, following convention and thus Nielsen's 4th heuristic.
#figure(
image("documents/appdesign/design/transcript.png",width:150%),
caption: [
Transcript page
],
)<transcriptpage>
@transcriptpage displays the full transcript of the video. Initially, this page just showed the keywords of the video with their respective images, then a scrollable list of all phrases from the video, with the times they are spoken (after 'Start') and the line number on the top right-hand side. The highlighted words show their POS (parts of speech), such as whether it is a noun, verb, adjective, etc.
Paper @DCC discusses the approach that should be taken when developing course materials for technology-mediated Chinese language learning. <NAME> recommends that 'for each chapter or unit, the learning objectives should be given at the beginning so that students understand what is expected of them'. Since the app does not follow a course but the content of a YouTube video, this can be achieved through adding these keywords and their images.
Presenting these keywords before the transcript itself provides the most 'pay-off value' possible. These keywords can then be expanded upon in the transcript. At the same time, it provides context to the learner, depicting what the video is about, who the characters in the video are, and who the video is for.
In the subsequent interviews, students discussed how it would be beneficial to also watch the YouTube video in real-time as they follow along with the transcript. In my context survey, it was discovered that people learn from body language, not just from the words themselves. Being able to draw from visual contexts to decipher new words has proven to be effective in learning.
Thus, in the new iteration of the app, a real-time listener has been implemented which matches the transcripts to the video as it plays (see @transcriptpage, middle image).
From adding this feature, the app now consists of visual and audio simulation. Rather than using a robotic voice (used by Anki and other language apps), students can learn the intonation and speech styles of their favorite content creators. This makes the learning experience more enjoyable and also provides a method for users to learn more native ways of speaking. As seen from the surveys, language learners value learning from relevant content rather than mechanic textbook ways of speaking.
Along with this, a draggable scrollable sheet can be seen peeking from the bottom. Exit buttons and gestures have also been incorporated into the app in case the user has navigated to this page accidentally. By taking Nielsen's 7th heuristic (adding shortcuts) into account, users can easily scroll up to see the draggable widgets. Otherwise, these widgets are hidden at the bottom of the screen, which gives the screen more space and improves its overall aesthetic. As well as the keywords and their images, the sheet contains the rest of the transcript, allowing the user to have the whole script at their fingertips instead of having to wait for the video to reach a certain point before seeing a particular phrase of interest. Moreover, this scrollable sheet allows users to jump to certain parts of the video, which is beneficial for the shadowing technique explained previously.
In later iterations, we can see that each phrase also contains the full pinyin (with tone marks rather than just romanised characters) and a translation beneath. These add-on features are achieved through the Google Translate and pinyin packages that Flutter offers.
Many YouTubers who teach Mandarin online include Chinese subtitles, their pinyin, and their direct translation in their videos. This has proved to be very beneficial to the community, as sometimes words in the sentence may be misheard and thus learned with the wrong pronunciation. With the pinyin shown immediately to the user, we can mitigate this risk. Furthermore, having a direct translation beneath the phrase is also extremely beneficial, as it strips away the need for the learner to navigate away from the app, translate the whole sentence, and come back to the app. Previously, each word in the sentence had to be translated independently, however, this takes a lot of time if the whole sentence is made up of new words; users are more likely to be discouraged from learning Mandarin and would quit the app.
Each of these phrases belongs to a pressable card widget, enabling users to create their own flashcards.
#figure(
image("documents/appdesign/design/newreview.png", width:130%),
caption: [
New review page
]
)<newreview>
When one of these words gets clicked, a user journey is created for a user to create a flashcard. This flashcard will save the word the user wants to review, as well as the sentence it is a part of, to keep its context information.
An improvement from the previous iteration includes the introduction of a stroke-order animation. This idea came from an interview with a beginner-learner of Chinese. Stroke order is incredibly important in the Chinese language but is easily overlooked. Adding animations to the UI also enhances the user experience by making it more fun.
The inclusion of multimedia has been shown to enhance learning @Multimedia. Multimedia is a combination of more than one type of media such as text, symbols, images, pictures, audio, video, and animations usually with the aid of technology to enhance understanding or memorisation. In the same paper, the use of multimedia was summarized to provide the following benefits:
1. Ability to turn abstract concepts into concrete concepts
2. Ability to present large volumes of information within a limited time with less effort
3. Ability to simulate students' interest in learning
The paper is based on the assumption that learners have separate channels for visual and auditory information, and each channel has a limited capacity. Multimedia is beneficial because it spreads the load across channels, so learners are not overwhelmed by too much information in any one channel.
From the semi-structured interviews conducted as well as the initial surveys, we can already see how impactful animations and listening to pronunciations are. These features stood out especially, which, from my findings, may explain why Duolingo is much more popular than its competitor, Anki.
Furthermore, following Nielsen's 8th heuristic (an aesthetic and minimalist design), I have improved the user journey for creating a flashcard (see @newreview, 2nd image).
By making this information linear, users can easily identify their next steps to create a flashcard. There is no information overload and users can decide to open and close certain expandable widgets. A checkbox at the side also marks the user's progress so they can track which widgets they have seen already.
Previously, users were prompted to take their own images to represent a word. However, many interviewees discussed the overhead of this approach, as they would have to find a relevant photo in their photo gallery. Furthermore, some words are simply difficult to find an image for. Therefore, a new API call to Google Images has been added, where the top 3 images are shown to the user instead (see @newreview, 4th image).
The user simply has to press on their favourite image; this provides a much more seamless feel to the app. Teachers who took the initial survey also mentioned their desire to incorporate Google into the app - they emphasised how utilising these free resources can be very beneficial, and many language apps today do not use this to their advantage. By adding these Google Images, we can use free resources online submitted by native speakers of Chinese, which can strengthen learners' memory of words.
An example is learning the word for cat. The British idea of a cat is very different from the image of a cat in the East. When learning English, we already associate 'cat' with a certain type of cat. However, when learning a new language, we should associate the new word with imagery that aligns with that language's culture, as it provides more context to the learner, strengthening connections in the brain.
#figure(
image("documents/other/Englishcat.png", width:100%),
caption: [
Cat (in English)
]
)
#figure(
image("documents/other/Mandarincat.png", width:100%),
caption: [
Cat (in Chinese)
]
)
Although similar, we can already see the slight differences between the European cats and the cats when we search in Chinese. Google Images also gives a much larger library of images that a user can choose from compared to their photo gallery. Giving the learner the ability to choose their favourite image also means they spend more time thinking about that particular word, strengthening their connection with that word.
Lastly, a game functionality was added with five exercises:
1. Fill in the blank
2. Write the stroke order
3. Match the image
4. Translate the sentence
5. Speaking exercise
#figure(
image("documents/appdesign/design/games.png", width:130%),
caption: [
Games
]
) <game>
Now that we can add a word, we must incorporate this into a spaced repetition system.
According to Rigney (1978) @LLS, learning strategies are the 'actions, behaviours, steps, or techniques - such as seeking out target language conversation partners, or giving oneself encouragement to tackle a difficult language task - used by learners to enhance learning'.
To make the learning of vocabulary effective, we can utilise previous research and design short, fun games for the learner that motivates and encourages them, even through mistakes. The exercises in the games will include using images and multimedia to help users link the information with pre-existing knowledge. The words will also be tested with sentences, to help users see the words in context and also help them analyze and classify these new vocabulary. Finally, the exercises will be challenging enough that learners are prompted to think critically before answering, for example by having translation and fill-in-the-blank exercises, where not all words are known by the learner.
The CoCAR model is also an interesting concept @CFL that emphasises using action to enhance understanding. When designing the games, we need to create engaging exercises that allow students to manage unknown situations. Achieving this can be through visuals, sound, kinetic (testing stroke order), and understanding (translations). Understanding of vocabulary is achieved from the ability of users to add images and personal notes to their flashcards, where users are prompted to think back to relevant experiences or images that remind them of that word.
Thus, I have distilled the learning aspect into 5 exercises. The first exercise is to fill in the sentence with the correct vocabulary; this exercise aims to teach the user how to use a certain word and where to put it in a sentence. In @game, image 1, you can see the popup that shows when the user gets an exercise correct. This provides the user with pronunciation as well as the sentence's meaning, giving the user extra information to aid with their learning.
Exercise 2, testing stroke order, is an interactive method for learners to learn how to write the characters. This idea was brought up by one of the interviewees learning Chinese. They mentioned how difficult stroke order was and how they enjoyed this aspect of the language the most. Adding additional multimedia reinforces learning. Knowing the stroke order also ensures that the learner does not simply recognise the character, a common pitfall many Chinese learners fall into. Furthermore, writing Chinese characters is very complex, which can lead to a lack of motivation; especially with new vocabulary, students may feel lost while writing a complicated character. Therefore, the app also provides stroke hints when a certain number of wrong strokes have been detected, to keep the student's morale high and enable instantaneous self-correction.
In a study conducted in an elementary school Chinese immersion language program, students were given iPads to encourage writing @LCTFC. Inputting Chinese was through the handwriting input method, allowing the students to improve their Chinese writing ability through writing stories. At the end of the study, the students showed a net positive attitude.
From our previous research, we already have seen the importance of creating stories to remember a certain vocabulary. This can be achieved by using the matching image to word exercise (@game, 3rd exercise), where the image itself is a story of its own. The personal notes that users put onto their flashcards also can help with memory retention.
Fourth is translating a sentence, which allows a user to test their understanding. This is a technique seen in many flashcard applications, such as Anki.
Last is a speaking exercise. Apps such as Duolingo utilise AI to check the accuracy of a learner's pronunciation. When using this feature, I realised that Duolingo's AI allowed you to cheat the system: you may pass the exercise despite pronouncing a few words wrong. Furthermore, the research we have conducted explains the importance of self-evaluation in the language learning process. By recording themselves saying the words and comparing this recorded audio with the correct pronunciation, users can recognise their mistakes and are less likely to make the same mistake again.
All these exercises utilise multimedia, which has proven effective for learning. A study examining the use of multimedia on E-cards for students learning Chinese as a foreign language @MASL revealed that the more visual and verbal a flashcard is, the better the retention. Oxford (1990) recommended word learning techniques such as creating mental linkages (an example being grouping words by themes), applying images and sounds, and employing an action to help learn words.
Traditional flashcards have limitations: they lack audio integration and cannot access online visual media. To optimize flashcard learning, incorporating sounds and images directly related to the word being tested is essential. This approach enhances memory retention and engagement, as demonstrated in the exercises above.
#pagebreak()
= Evaluation
== Student feedback
This app was reviewed by 37 students at the University of St Andrews, coming from a variety of backgrounds. The process was as follows: students were given a quick demonstration of my app, and then were asked to fill out an online survey using Qualtrics.
The survey aimed to gather feedback on the app's usability, user interface (UI), and user satisfaction, as well as gather suggestions for future improvements. The first half of the survey was a series of questions to gather a student's previous language learning experience; the second half was a series of questions to gather feedback on the app.
#figure(
image("documents/appreview/understandusers.png",width:120%),
caption: [
User previous language learning experience
],
)<languagelearningexperience>
In @languagelearningexperience we can see that the majority (40.5%) of students have studied their foreign language for over 5 years, yet only 8.8% of the students ranked their language proficiency as 'fluent' (a 10/10). On this scale, 0 means that they cannot speak the language at all, 5 means they can hold a conversation with a stranger on the street, and 10 means they can understand complicated concepts in that language. The majority of the students surveyed ranked their proficiency as between 1-5, which is a beginner to intermediate level. This is the demographic that my app aims to target. Additionally, the students in question primarily used study techniques such as apps and watching media online.
Next, the students were asked about their favourite language learning methods.
#figure(
image("documents/appreview/favmethods.png",width:120%),
caption: [
Favourite language learning methods
],
)<favouritemethods>
The majority of students believed that conversing with native speakers was the most effective method of learning a language. This was followed by watching online media, such as YouTube videos, which aligns with the features of my app, such as the ability to watch YouTube videos and shadow the native speaker. Students who selected 'other' also mentioned listening to music in that foreign language, going to the country where the language is spoken, and having the motivation to learn that language. This goes hand-in-hand with the idea of going to the country where the language is spoken, as surrounding yourself in a culture with that language can keep motivation high.
Motivation also stems from being engaged and having fun when undergoing language learning. Therefore, I asked students to rank the methods of language learning based on engagement and fun. Watching media was ranked the highest. Textbooks, on the other hand, were ranked last, proving the point that students prefer to learn from relevant content and multimedia, rather than the static, mechanical teachings from a textbook.
Surprisingly, learning through flashcards was ranked mainly in the 4th and 5th place, which is lower than I had expected. Despite the book Fluent Forever @FF pushing flashcard learning as one of the most effective methods of learning, it is clear that students do not feel as engaged compared to other methods. To investigate further, I asked the students about how often they used flashcards in their language learning.
#figure(
image("documents/appreview/flashcards.png",width:120%),
caption: [
Flashcard usage
],
)<flashcardusage>
@flashcardusage displays that the majority of students do not use flashcards at all, and if they do, it is extremely rare (40.5% of the students selected 'very rarely/few times a month'). The students who did use flashcards every day mentioned that they used Anki, a popular flashcard app. However, there has been much discourse on Anki's engagement factor: it can be seen as quite a mechanical way of learning, and some students mentioned that Anki was 'boring'. The students who did enjoy Anki mainly mentioned its effectiveness rather than how fun it was.
This is where my app comes in. My app aims to make flashcard learning more engaging and fun. The flashcard contains multimedia, such as images and sound, and students get tested through exercises utilising images, sound, and animations, which is in contrast to Anki where students simply flip a card to see its translation.
However, to see the effectiveness of this approach, the second half of the survey was a series of questions to gather feedback on the app.
#figure(
image("documents/appreview/valuablefeatures.png",width:120%),
caption: [
Favourite app features
],
)<appvaluablefeatures>
We can immediately see that students enjoyed watching videos the most. Of those who commented, they mentioned how it was 'innovative', 'fresh', and 'engaging'. Another student mentioned the ability to have the transcription and translation at the same time meant it saved them time from having to look up words in the browser.
Games were generally ranked second highest, getting comments such as 'I like games' and 'GAMES ARE COOL', re-emphasising the importance of making learning fun. While students were ranking the app features, many students struggled to decide the order between games and the short lessons. Those who ranked short lessons (where each 'lesson' only tests 5 new words) before games mentioned how it was 'more realistic for busy public users', explaining how much more practical it was.
Finally, those who ranked creating flashcards first mentioned how easy it was and how they could 'click a word to add notes directly'. This feature is not present in Anki, which gives an empty template to the user and expects them to fill in the information themselves. Students who are unaware of how to utilise flashcards and make them as engaging as possible may create a flashcard empty of multimedia and context, leaving no personal relation to the flashcard, and thus are less likely to remember it.
It is also important to consider what would drive students away from using the app every day.
#figure(
image("documents/appreview/appui.png",width:120%),
caption: [
Least favourite app features
],
)<appleastfavfeatures>
43.2% of students expressed their lack of motivation for language learning. They mentioned that 'outside the app, there aren't many reasons to use the language. A community of just people interested in the same language would be what I consider a primary source of motivation'. Similarly, another student wrote, 'I want to be able to see the progress of friends and other people!'. We can see a relation in what motivates people - a community of people learning the same language.
On the flip side, those who selected 'other' shared apprehensions. Some feared falling behind due to busy schedules, while others worried about overtaxing their brains. These fears are valid, due to the app's spaced-repetition algorithm, where falling behind means more words to review the next day.
However, I wanted to investigate how students found the app's user interface, as seen in the pie chart in @appleastfavfeatures (2nd chart). Most students found the search functionality laborious, as they would need to know the YouTube ID of a video to download it. Initially, this catered to students who could not type in Chinese. Feedback suggests a better approach: search in English and have the app translate the search for them.
One interesting comment was how 'as a beginner, it is very hard to know what to even search in the first place'. Without prior exposure to Chinese content, users may feel lost and lose motivation. A personalisation feature could be added in the future where the app can suggest videos to the user based on their interests. As stated by another student, 'it would be very unlikely that I would know or be interested in YouTubers that speak in a language I am trying to learn. It would be nice if the app recommended videos related to the type of content I would be interested in, at least for starters to get me familiar with that area of YouTube'. Students also mentioned that 'showing recommended search terms would be helpful because as a language learner, you do not even know what to search. For example, maybe we can search "cooking videos" in English and it would bring up the equivalent in Chinese.' Additionally, students mentioned they would like to see custom videos, instead of being limited to YouTube.
Students who mentioned the 'watching YouTube video' feature as their least favourite interface feature noted that although they could see the segmentation of the words, there is currently no way to identify what the different colours mean; they suggested categorising the words into nouns, verbs, adjectives, etc. They also found the layout of all the previously watched videos cluttered and disorganised, suggesting playlists to organise by categories, ratings, themes, or dates. Some mentioned a bookmarking feature for videos they struggled with, to come back to later. Others wanted better ways to track their progress in videos, such as their watch time and how many words or flashcards they have made on that video, linking back to the idea of progress tracking increasing motivation. Lastly, when watching the video, students recommended a feature that automatically pauses the video at every caption change, to allow them to read the transcript at their own pace.
Regarding flashcard creation, many students pointed out that the stroke order animation was at times difficult to see due to the colour clash with the dark background. Furthermore, the app currently does not let students browse all their previously made flashcards, as the idea was to re-surface each flashcard at optimal times based on a spaced-repetition system through game-based learning. However, students said they would like to see all their flashcards in one place, as well as the ability to edit and delete them. In a previous iteration of the MVP, I had initially allowed students to take their own photos for the flashcards. This was later changed to query the Google Images API and present the top 3 images for students to select from. It would be beneficial to offer both options, as some students mentioned they would still like to take their own photos for the flashcards.
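The report refers to this lookup simply as querying Google Images; one plausible implementation, sketched below, uses Google's Custom Search JSON API with `searchType=image` to fetch the top three candidates. The API key and search-engine ID are placeholders, and the exact endpoint used in the app is an assumption.

```python
import requests

def top_three_images(word: str, api_key: str, engine_id: str) -> list[str]:
    # Query Google's Custom Search JSON API for up to three image results.
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": api_key, "cx": engine_id, "q": word,
                "searchType": "image", "num": 3},
        timeout=10,
    )
    resp.raise_for_status()
    return [item["link"] for item in resp.json().get("items", [])]
```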
Lastly, the 'game learning' feature was highly regarded. Students identified its similarities to Duolingo and said they would like to see more Duolingo-like features, such as earning experience points and introducing characters to the games. Some students wanted game aspects that leaned more towards competing with friends, where every 'accomplishment can give you foot soldiers - attack your friends!' In a similar vein, students wanted time-based challenges to add a competitive aspect. There was some disagreement in this feedback, as others stressed they would rather have simple, manual flashcards than all the game features. Thus, a later iteration of the app would strike a balance between the two, letting the user choose between a more game-like experience and a more traditional flashcard experience.
In conclusion, the app was rated very highly: all students rated it a 6 or higher, and 24.3% rated it a 10/10.
#figure(
image("documents/appreview/appsatis.png",width:120%),
caption: [
App Rating
],
)<apprating>
Students who were interested in further development of the app answered the question 'Are there any future features you would like to see on the app?' with ideas such as a continuous assessment of the user's vocabulary. This was interesting, as the student who mentioned it came from a background of language courses and had not used many apps. They discussed how schools would hold small tests every week, and how beneficial this would be to have. Many other students' ideas fell under the theme of social networking: making friends, chat rooms to practise the language, user profiles, and ranked leaderboards comparing their progress with friends. Additionally, a few others suggested more game features, such as additional characters and perhaps virtual reality (VR) integration.
== Evaluation against objectives
The original objectives required creating a minimum viable product of a language app, having the ability to generate transcripts and flashcards from YouTube videos as well as review questions relevant to the user, and finally creating a user evaluation form to obtain user feedback at the end of the project.
These objectives have been achieved, with the full-stack app being able to successfully query the YouTube API for a video with Chinese captions, transcribe and translate it with word segmentation, and create flashcards for each word a user wants to learn. The creation of a flashcard has been extended, based on user feedback and personal research, to include multimedia aspects such as images, audio, and animations. To achieve this, the app can query Google Images to obtain the top 3 images for a particular word, for the user to select as their flashcard image. Audio has been integrated through text-to-speech Flutter packages, and animations show the stroke order of each character.
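The report does not name the libraries behind this pipeline, but a minimal sketch of the transcription and segmentation step could look as follows, assuming `youtube-transcript-api` for caption retrieval, `jieba` for word segmentation, and `pypinyin` for pronunciation; all three library choices are assumptions for illustration.

```python
import jieba
from pypinyin import lazy_pinyin
from youtube_transcript_api import YouTubeTranscriptApi

def build_lesson(video_id: str) -> list[dict]:
    # Fetch the Chinese captions, then annotate each caption line with
    # its segmented words and their pinyin (illustrative pipeline only).
    transcript = YouTubeTranscriptApi.get_transcript(
        video_id, languages=["zh-Hans", "zh-Hant"])
    lesson = []
    for entry in transcript:
        words = [w for w in jieba.cut(entry["text"]) if w.strip()]
        lesson.append({
            "start": entry["start"],
            "text": entry["text"],
            "words": [{"word": w, "pinyin": " ".join(lazy_pinyin(w))} for w in words],
        })
    return lesson
```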
The initial objective was for the app to generate review questions relevant to the user. This has been achieved through the creation of 5 exercises, which test the user's understanding of a word by filling in the blank, matching the image, translating the sentence, recording their voice to check their pronunciation, and testing the stroke order of characters by letting users write on the screen. Gamification techniques were also implemented to make the learning experience more fun, through the introduction of streaks and short game lessons where each lesson tests a maximum of 5 words.
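As one illustration of how such a review question can be assembled from stored flashcard data, the sketch below builds a fill-in-the-blank item from the caption a word was learned in. It is a schematic reconstruction rather than the app's actual code.

```python
import random

def fill_in_the_blank(sentence: str, target: str, distractors: list[str]) -> dict:
    # Blank out the target word in its source sentence and shuffle the options.
    options = distractors[:3] + [target]
    random.shuffle(options)
    return {
        "prompt": sentence.replace(target, "____", 1),
        "options": options,
        "answer": target,
    }

# e.g. fill_in_the_blank("我喜欢喝茶", "茶", ["水", "咖啡", "牛奶"])
```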
Furthermore, the app has been evaluated by 37 students at the University of St Andrews, and the feedback has been positive, with 24.3% rating the app a 10/10. Throughout the building process, interviews were conducted with students and teachers to gather iterative feedback. This feedback was used to improve the app, leading, for example, to a real-time listener that matches the transcript to the video as it plays, alongside other UI changes.
== Critical appraisal
My app's uniqueness comes from its use of YouTube videos and context-based learning, which few language apps offer today. Combined with ideas from Duolingo, Rosetta Stone, and Anki, the app is a mix of multimedia (Rosetta Stone), spaced repetition (Anki), and game-based learning (Duolingo).
The strengths of my app come from the incorporation of visuals: YouTube videos, images from Google for flashcards, and Chinese character stroke animations. Audio is also incorporated through videos and text-to-speech packages, facilitating better retention and recall. Furthermore, the spaced-repetition algorithm optimises effective learning, and game-based learning increases a user's motivation and engagement.
However, there are areas in which the app falls short. As the app combines elements from different apps, it is not as strong in certain aspects as the individual apps. For example, the spaced-repetition algorithm may not be as optimised as Anki's implementation, and users have little freedom in how they create their flashcards. Anki, for example, lets learners hide certain parts of an image and label them.
For multimedia learning, the app cannot ensure the quality of the YouTube videos. As no professional Chinese speakers are working on the app, it cannot guarantee that all of the NLP segmentation is correct. Certain words may not be segmented correctly, and there is currently no way for the user to correct this.
Regarding game-based learning, the app's user interface is not as polished as Duolingo's. The app lacks social networking aspects such as leaderboards and features to compete with friends. It also does not track the user's progress, such as how far through a video they have watched or how many flashcards they have created, which is a major motivating factor for certain users. Duolingo is also known for its characters and stories, which my app does not have.
On the other hand, the app is tailored specifically to language learners, compared to Anki, which is a general-purpose flashcard app. Anki does not prompt the user to create flashcards with audio, images, and other multimedia aspects, which have been shown to increase retention. Furthermore, the app allows users to quickly create flashcards from YouTube videos rather than from scratch, which speeds up the process of creating flashcards. At the same time, it allows users to create flashcards from authentic contexts, preparing them for more real-life situations. Combined with the short lessons for microlearning, the app is more practical for busy learners, who may not have time to sit down and study for long periods.
Since users can choose which YouTube video to study from, the app can be tailored to the user's interests, which is an advantage over Duolingo and Rosetta Stone, where the content is pre-determined. Potentially, the app could incorporate more social features such as virtual study groups and leaderboards, as well as more game-based learning features such as characters and stories. These features would also fit with the app's branding of learning from authentic content, as users could share their own real-life stories as in-game characters.
#pagebreak()
= Conclusion
In conclusion, my app successfully achieves and exceeds all of its original objectives. Through personal research, interviews with language learners, and more, I have been able to create an app that applies effective language-learning techniques to deliver a product that is engaging, fun, and useful, as shown by the results of the final survey. The initial app was improved through constant iteration informed by interviews with students and teachers, and the final product includes features such as YouTube transcript syncing, flashcard images obtained from Google Images, and a game-based learning system, on top of the original plan of obtaining a video's transcript and creating flashcards to review from.
Currently, the app is designed for a single user. However, it could be expanded to include a social networking aspect, as suggested by the students. This would make the app more engaging and motivating, as we have seen that a community of people learning the same language helps motivate learners. It would also be interesting to incorporate large language models (LLMs) through the avatar of an in-game character, to help students who do not have access to a network of speakers of the foreign language. Future iterations of the app could introduce game features that help build community, such as leaderboards and clans, to make the app more fun and boost motivation. Finally, the app could be expanded to more languages, as it currently only supports Mandarin. Some students who took the survey mentioned they would love to see the app in the foreign language they are currently studying.
#pagebreak()
= Appendix
== Tests
Tests were split into 3 types: backend unit tests that exercised the API calls and their logic, frontend tests that ensure the Flutter client handles API responses correctly, and finally end-to-end (E2E) tests that covered the complete application flow, with users interacting with the app.
*Unit tests*
The API tests were carried out using Postman, an API platform for building and using APIs. These tests were run against localhost, and the server logs were checked through Docker.
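An equivalent scripted check of the `/vid` and `/getlesson/<videoid>` endpoints, mirroring the Postman tests tabulated below, might look like the following `pytest` sketch. The base address and the request payload shape are guesses at the endpoint's contract, since the report only names the endpoints themselves.

```python
import requests

BASE = "http://localhost:5000"  # assumed local development address

def test_vid_processes_chinese_video():
    # Payload shape is an assumption; the report only names '/vid'.
    resp = requests.post(f"{BASE}/vid", json={"video_id": "<youtube-id>"}, timeout=30)
    assert resp.status_code == 200

def test_lesson_is_persisted():
    resp = requests.get(f"{BASE}/getlesson/<youtube-id>", timeout=10)
    assert resp.status_code == 200
    assert resp.json()  # the processed transcript is stored in the database
```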
#table(
columns: (auto, auto, auto, auto),
inset: 10pt,
align: horizon,
[*Test*], [*Expected*], [*Result*], [*Pass*],
text("Test that the API call to get the YouTube transcript is successful with a valid YouTube ID. (Use endpoint '/vid'). The YouTube video has chinese captions"),
text("The server logs should show the YouTube transcript"),
text("Server successfully logged the chinese transcript"),
text("Pass"),
text("Test that word segmentation works on the YouTube transcript. (Use endpoint '/vid')"),
text("The server logs should show the segmented words"),
text("Server successfully logged the segmented words"),
text("Pass"),
text("Test that the segmented words can successfully obtain their pronunciation information, similar word information and translation information. (Use endpoint '/vid')"),
text("The server logs should show the pronunciation, similar words and translation of the segmented words"),
text("Server successfully logged the pronunciation, similar words and translation of the segmented words"),
text("Pass"),
text("Test that the YouTube transcript, when processed as above, is saved onto the database. (Use endpoint '/vid' to submit a valid YouTube video, then check the database contains this through endpoint /getlesson/<videoid>)"),
text("The database should contain the YouTube transcript"),
text("Database successfully contains the YouTube transcript"),
text("Pass"),
text("Test that the TextRazor API call is successful and obtains the lesson keywords for that transcript. Also seen from endpoint /vid"),
text("The server logs should show the lesson keywords"),
text("Server successfully logged the lesson keywords"),
text("Pass"),
text("Test the PyUnsplash API can obtain image urls for the lesson keywords. Also seen from endpoint /vid"),
text("The server logs should show the image urls"),
text("Server successfully logged the image urls"),
text("Pass"),
)
In the Flutter frontend, the following unit tests use a MockHttpClient to test the logic without hitting the real endpoints. These can be found under the 'tests' folder in the Flutter app.
#table(
columns: (auto, auto, auto, auto),
inset: 10pt,
align: horizon,
[*Test*], [*Expected*], [*Result*], [*Pass*],
text("Test gets all videos from the server"),
text("When the server is called, the server should return a list of videos with response 200"),
text("Server returns a list of videos and has response 200"),
text("Pass"),
text("Test that the app can send a POST request to request a YouTube video"),
text("When a YouTube video is requested, the server should return a response 200"),
text("Server returns a response 200"),
text("Pass"),
text("Test that a 404 error is thrown if a YouTube video is not found"),
text("When a YouTube video is not found, the server should return a response 404"),
text("Server returns a response 404 for a particular YouTube video request"),
text("Pass"),
text("Test that data for a specific video is returned when a GET is sent"),
text("When a GET request is sent for a specific video, the server should return a response 200 with the video data"),
text("Server returns a response 200 with the video data"),
text("Pass"),
text("Test that the app can send a POST request to create a flashcard"),
text("When a flashcard is created given the correct inputs, the server should return a response 200"),
text("Server returns a response 200"),
text("Pass"),
text("Test that the app can successfully send a POST request to update a flashcard"),
text("When a flashcard is updated given the updated note and image url, the server should return a response 200"),
text("Server returns a response 200"),
text("Pass"),
text("Test that the app can query all words to be reviewed today through a GET request"),
text("When a GET request is sent to the server, the server should return a response 200 with the words to be reviewed today"),
text("Server returns a response 200 with the words to be reviewed today"),
text("Pass"),
text("Test that the app can obtain the streak today from a GET request"),
text("When a GET request is sent to the server, the server should return a response 200 with the streak today"),
text("Server returns a response 200 with the streak today"),
text("Pass"),
)
*End-to-end tests*
These end-to-end tests cover a user interacting with the app.
#table(
columns: (auto, auto, auto, auto),
inset: 10pt,
align: horizon,
[*Test*], [*Expected*], [*Result*], [*Pass*],
text("Test that app only shows YouTube video if chinese captions are available"),
text("When the YouTube id of an english video is placed, app shows no results"),
text("App shows no results"),
text("Pass"),
text("Test that app only shows YouTube video if chinese captions are available"),
text("When the YouTube id of a chinese video is placed, app shows the video"),
text("App shows the video"),
text("Pass"),
text("Test that the app can successfully show the transcript of the video"),
text("When the YouTube id of a chinese video is placed, app shows the transcript"),
text("App shows the transcript"),
text("Pass"),
text("Test that the app can successfully shows the segmented words of each phrase in the transcript"),
text("When a video is selected, the words are segmented through the use of different background colours"),
text("App shows the segmented words"),
text("Pass"),
text("Test that the transcript page displays all relevant information"),
text("When a video is clicked, the lesson keywords and their images are shown, as well as the rest of the transcript"),
text("App shows the lesson keywords and their images, as well as the rest of the transcript"),
text("Pass"),
text("Test that the app allows for the creation of flashcards for each word in the transcript"),
text("When a word is clicked, the app allows the user to create a flashcard by displaying a page with the word stroke animation, its pronunciation, translation and similar words"),
text("App shows the flashcard creation page"),
text("Pass"),
text("Test that the app can successfully query the top 3 images from Google Images for a word when creating a transcript"),
text("When a word is clicked and a user wants to choose a flashcard image from Google Images, the app shows the top 3 images"),
text("App shows the top 3 images"),
text("Pass"),
text("Test that the app can add a day to a streak when a lesson is finished"),
text("Starting from a streak of 0, when a lesson is finished, the streak should show 1"),
text("Streak shows 1 when a lesson is finished"),
text("Pass"),
text("Test that the app can determine whether a flashcard must be created or be updated"),
text("When a flashcard is created, the app's button should turn to 'Update'"),
text("Button turns to 'Update' when a flashcard is created"),
text("Pass"),
text("Test that the flashcard can update its image and notes"),
text("When a flashcard is updated, the image and notes should change"),
text("Image and notes change when a flashcard is updated"),
text("Pass"),
text("Test that all 5 exercises are surfaced to the user in a game lesson"),
text("In a game, all 5 exercises should be shown to the user for a specific word"),
text("All 5 exercises are shown to the user"),
text("Pass"),
text("Test that the YouTube transcript works in real-time and different times of the video can be jumped to"),
text("When the user clicks on a phrase, the video jumps. The transcript also changes in real-time"),
text("The video jumps and the transcript changes in real-time"),
text("Pass"),
text("Test that the app successfully shows the word at correct intervals based on spaced-repetition"),
text("When a word is added, the word gets re-surfaced at different days based on the spaced-repetition algorithm"),
text("The word gets re-surfaced at different days based on the spaced-repetition algorithm"),
text("Pass"),
)
== User manual
The app's server is currently hosted on a virtual machine at the University of St Andrews, so the app will only serve requests if the Flutter client is connected to the University's Eduroam network. The app is not currently available on any app store, but it can be run locally by downloading the GitHub repository and running the Flutter client. The GitHub repository is private and can be accessed by contacting the author.
Once access to the repository has been granted and it has been downloaded, run the following to set up and launch the Flutter client:
```
cd frontend/workspace/flutterapp
flutter pub get    # fetch dependencies first
flutter run        # launch on a connected device or emulator
flutter build apk  # (optional) produce a release APK
```
== Other
#figure(
(image("documents/swe/progress1.png",width:120%),
image("documents/swe/progress2.png",width:120%),
image("documents/swe/progress3.png",width:120%)).join(),
caption: [
Progress log
],
)<progress>
#figure(
image("documents/swe/ethicsapproval.png", width: 120%),
caption: [
Ethics approval
]
)<ethicsapproval> |
|
https://github.com/YunkaiZhang233/PMT-notes | https://raw.githubusercontent.com/YunkaiZhang233/PMT-notes/main/README.md | markdown | MIT License | # PMT-Notes
This is a collection of extended footnotes, covering topics that are sometimes difficult and sometimes slightly "out-of-scope" for PMT sessions at Imperial College London, 2024-2025. The notes are written in Typst.
## Contributing
Pull requests are highly welcome. For major changes, please open an issue first to discuss what you would like to change.
## License
[MIT](https://choosealicense.com/licenses/mit/)
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/rubby/0.10.1/README.md | markdown | Apache License 2.0 | # rubby (Typst package)
## Usage
```typ
#import "@preview/rubby:0.10.1": get-ruby
#let ruby = get-ruby(
size: 0.5em, // Ruby font size
dy: 0pt, // Vertical offset of the ruby
pos: top, // Ruby position (top or bottom)
alignment: "center", // Ruby alignment ("center", "start", "between", "around")
delimiter: "|", // The delimiter between words
auto-spacing: true, // Automatically add necessary space around words
)
// Ruby goes first, base text - second.
#ruby[ふりがな][振り仮名]
Treat each kanji as a separate word:
#ruby[とう|きょう|こう|ぎょう|だい|がく][東|京|工|業|大|学]
```
If you don't want the text automatically wrapped with the delimiter:
```typ
#let ruby = get-ruby(auto-spacing: false)
```
See also <https://github.com/rinmyo/ruby-typ/blob/main/manual.pdf> and `example.typ`.
## Notes
Original project is at <https://github.com/rinmyo/ruby-typ> which itself is
based on [the post](https://zenn.dev/saito_atsushi/articles/ff9490458570e1)
of 齊藤敦志 (Saito Atsushi). This project is a modified version of
[this commit](https://github.com/rinmyo/ruby-typ/commit/23ca86180757cf70f2b9f5851abb5ea5a3b4c6a1).
`auto-spacing` adds the missing delimiter around the `content`/`string`, which
in turn adds space around the base text when the ruby is wider than the base text.
Problems only appear if the ruby is wider than its base text and `auto-spacing` is
not set to `true` (the default is `true`).
You can always use a one-letter function (variable) name to shorten the
function call length (if you have to use it a lot), e.g., `#let r = get-ruby()`
(or `f` — short for furigana). But be careful as there are functions with names
`v` and `h` and there could be a new built-in function with a name `r` or `f`
which may break your document (Typst right now is in beta, so breaking changes
are possible).
Although you can open issues or send PRs, I won't be able to always reply
quickly (sometimes I'm very busy).
## Development
This repository should exist as a `@local` package with the version from the `typst.toml`.
Here is a short description of the development process:
1. run `git checkout dev && git pull`;
2. make changes;
3. test changes, if not done or something isn't working then go to step 1;
4. when finished, run `just change-version <new semantic version>`;
5. document changes in the `CHANGELOG.md`;
6. commit all changes (only locally);
7. create a `@local` Typst package with the new version and test it;
8. if everything is working then run `git push`;
9. realize that you've missed something and fix it (then push changes again);
10. run `git checkout master && git merge dev` to sync `master` to `dev`;
11. run `just create-release`.
## Publishing a Typst package
1. To make a new package version for merging into `typst/packages` repository run
`just mark-PR-version`;
2. copy newly created directory (with a version name) and place it in the
appropriate place in your fork of the `typst/packages` repository;
3. run `git fetch upstream && git merge upstream main` to sync fork with `typst/packages`;
4. go to a new branch with `git checkout -b <package-version>`;
5. commit newly added directory with commit message: `package:version`;
6. run `gh pr create` and follow further CLI instructions.
## Changelog
You can view the change log in the `CHANGELOG.md` file in the root of the project.
## License
This Typst package is licensed under AGPL v3.0. You can view the license in the
LICENSE file in the root of the project or at
<https://www.gnu.org/licenses/agpl-3.0.txt>. There is also a NOTICE file for
3rd party copyright notices.
Copyright (C) 2023 <NAME>
|
https://github.com/sbleblanc/typst-templates | https://raw.githubusercontent.com/sbleblanc/typst-templates/main/README.md | markdown | # Personal repository of Typst templates
This repo holds my Typst templates |
|
https://github.com/MasterTemple/typst-bible-plugin | https://raw.githubusercontent.com/MasterTemple/typst-bible-plugin/main/README.md | markdown | # Typst Bible
**I wrote a more complete README in Typst.
View the documentation [here](./README.pdf).**
## Explanation
- To easily reference Bible verses for personal, ministerial, or academic papers
- ESV is currently only supported translation
- If you have any great ideas, please open a [GitHub Issue](https://github.com/MasterTemple/typst-bible-plugin/issues)
## Usage
`bible.typ` is meant to provide an API for interacting with `bible.wasm`
### Import `bible.typ`
This includes `r` which is currently how you reference a verse
```typ
import "bible.typ": bible_footnote, bible_quote, bible_quote_fmt
```
## `bible_footnote`
### Calling
```typ
I am blessed because my sins are forgiven! #bible_footnote("Romans 4:7")
// or
I am blessed because my sins are forgiven! ^ Romans 4:7
```
### Result

## `bible_quote`
### Calling
```typ
#bible_quote("Romans 4:7")
// or
> Romans 4:7
```
### Result

## `bible_quote_fmt`
### Calling
#### Basic
This is just like using `#bible_quote` with no additional formatting applied
```typ
#bible_quote_fmt("Ephesians 4:28")
```
#### Bold
```typ
#bible_quote_fmt("Ephesians 4:28", b: "")
```
#### `hl` = highlight match pattern
#### `ul` = underline match pattern
#### `it` = italics match pattern
#### `b` = bold match pattern
#### `c` = custom match pattern to apply `fmt` filter
#### `fmt` = custom formatting pattern
#### `omit` = omit content by replacing it with an ellipsis ...
### Extra Information
I will try and provide clear naming conventions, and they might be a bit verbose.
However, you can just rename them as follows:
```typ
#let v = set_verse_content_in_footnote
// ...
#v("1 John 3:2")
```
This is a made up example, but you get the point.
## Building WASM
To build:
```bash
wasm-pack build --target web
```
I use a script that deletes and re-links the file so that Typst knows to re-check the contents:
```bash
./run.sh
```
## Screenshots
 |
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-18B00.typ | typst | Apache License 2.0 | #let data = (
("KHITAN SMALL SCRIPT CHARACTER-18B00", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B01", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B02", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B03", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B04", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B05", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B06", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B07", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B08", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B09", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B0A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B0B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B0C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B0D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B0E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B0F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B10", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B11", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B12", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B13", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B14", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B15", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B16", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B17", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B18", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B19", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B1A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B1B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B1C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B1D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B1E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B1F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B20", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B21", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B22", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B23", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B24", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B25", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B26", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B27", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B28", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B29", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B2A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B2B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B2C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B2D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B2E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B2F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B30", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B31", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B32", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B33", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B34", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B35", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B36", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B37", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B38", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B39", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B3A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B3B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B3C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B3D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B3E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B3F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B40", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B41", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B42", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B43", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B44", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B45", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B46", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B47", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B48", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B49", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B4A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B4B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B4C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B4D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B4E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B4F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B50", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B51", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B52", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B53", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B54", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B55", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B56", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B57", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B58", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B59", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B5A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B5B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B5C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B5D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B5E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B5F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B60", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B61", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B62", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B63", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B64", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B65", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B66", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B67", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B68", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B69", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B6A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B6B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B6C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B6D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B6E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B6F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B70", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B71", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B72", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B73", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B74", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B75", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B76", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B77", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B78", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B79", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B7A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B7B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B7C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B7D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B7E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B7F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B80", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B81", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B82", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B83", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B84", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B85", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B86", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B87", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B88", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B89", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B8A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B8B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B8C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B8D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B8E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B8F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B90", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B91", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B92", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B93", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B94", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B95", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B96", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B97", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B98", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B99", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B9A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B9B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B9C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B9D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B9E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18B9F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BA0", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BA1", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BA2", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BA3", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BA4", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BA5", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BA6", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BA7", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BA8", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BA9", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BAA", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BAB", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BAC", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BAD", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BAE", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BAF", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BB0", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BB1", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BB2", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BB3", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BB4", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BB5", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BB6", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BB7", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BB8", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BB9", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BBA", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BBB", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BBC", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BBD", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BBE", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BBF", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BC0", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BC1", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BC2", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BC3", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BC4", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BC5", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BC6", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BC7", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BC8", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BC9", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BCA", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BCB", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BCC", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BCD", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BCE", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BCF", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BD0", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BD1", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BD2", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BD3", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BD4", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BD5", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BD6", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BD7", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BD8", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BD9", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BDA", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BDB", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BDC", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BDD", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BDE", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BDF", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BE0", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BE1", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BE2", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BE3", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BE4", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BE5", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BE6", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BE7", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BE8", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BE9", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BEA", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BEB", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BEC", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BED", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BEE", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BEF", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BF0", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BF1", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BF2", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BF3", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BF4", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BF5", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BF6", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BF7", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BF8", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BF9", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BFA", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BFB", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BFC", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BFD", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BFE", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18BFF", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C00", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C01", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C02", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C03", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C04", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C05", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C06", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C07", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C08", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C09", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C0A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C0B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C0C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C0D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C0E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C0F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C10", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C11", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C12", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C13", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C14", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C15", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C16", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C17", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C18", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C19", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C1A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C1B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C1C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C1D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C1E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C1F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C20", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C21", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C22", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C23", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C24", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C25", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C26", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C27", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C28", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C29", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C2A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C2B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C2C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C2D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C2E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C2F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C30", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C31", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C32", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C33", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C34", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C35", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C36", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C37", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C38", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C39", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C3A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C3B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C3C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C3D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C3E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C3F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C40", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C41", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C42", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C43", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C44", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C45", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C46", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C47", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C48", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C49", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C4A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C4B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C4C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C4D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C4E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C4F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C50", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C51", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C52", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C53", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C54", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C55", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C56", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C57", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C58", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C59", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C5A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C5B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C5C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C5D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C5E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C5F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C60", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C61", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C62", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C63", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C64", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C65", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C66", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C67", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C68", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C69", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C6A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C6B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C6C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C6D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C6E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C6F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C70", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C71", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C72", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C73", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C74", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C75", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C76", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C77", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C78", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C79", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C7A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C7B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C7C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C7D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C7E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C7F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C80", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C81", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C82", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C83", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C84", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C85", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C86", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C87", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C88", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C89", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C8A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C8B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C8C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C8D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C8E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C8F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C90", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C91", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C92", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C93", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C94", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C95", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C96", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C97", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C98", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C99", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C9A", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C9B", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C9C", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C9D", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C9E", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18C9F", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CA0", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CA1", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CA2", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CA3", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CA4", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CA5", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CA6", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CA7", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CA8", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CA9", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CAA", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CAB", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CAC", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CAD", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CAE", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CAF", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CB0", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CB1", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CB2", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CB3", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CB4", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CB5", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CB6", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CB7", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CB8", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CB9", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CBA", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CBB", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CBC", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CBD", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CBE", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CBF", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CC0", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CC1", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CC2", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CC3", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CC4", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CC5", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CC6", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CC7", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CC8", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CC9", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CCA", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CCB", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CCC", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CCD", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CCE", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CCF", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CD0", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CD1", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CD2", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CD3", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CD4", "Lo", 0),
("KHITAN SMALL SCRIPT CHARACTER-18CD5", "Lo", 0),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
(),
("KHITAN SMALL SCRIPT CHARACTER-18CFF", "Lo", 0),
)
|
https://github.com/HiiGHoVuTi/requin | https://raw.githubusercontent.com/HiiGHoVuTi/requin/main/graph/univers.typ | typst | #import "@preview/diagraph:0.2.1": *
#import "../lib.typ": *
#show heading: heading_fct
A word $w in Sigma^*$ is said to be $n$-universal if every word of $Sigma^n$ is a factor of $w$. We are interested in building the shortest $n$-universal words.
#question(0)[Show that an $n$-universal word over an alphabet with $k$ letters has length at least $k^n+n-1$.]
#correct([
1. The number of distinct factors of $n$ letters in a word of $l$ letters is at most $p <= l-n+1$. Since there are $k^n$ words of $n$ letters, we get $k^n <= l-n+1$, hence $l >= k^n+n-1$.
])
Let $G=(V,E)$ be a *directed* graph. We define the _line graph_ $L(G)$ of $G$ as the directed graph $(E,E')$, where $E'$ is the set of edges of the form $((x,y),(y,z))$ for $(x,y),(y,z) in E$.
#question(1)[Give the line graph of the cycle on $4$ vertices and of a perfect binary tree of height 2.]
#correct([
a. The line graph of the cycle on 4 vertices is again the cycle on 4 vertices.\
b. This gives a forest of two perfect binary trees of height 1.
])
We then build the family of de Bruijn graphs $("DB"(n))_(n in NN^*)$ by $"DB"(1) = ({0,1},{0,1}^2)$ and $"DB"(n+1) = L("DB"(n))$.
#question(1)[Construct $"DB"(2)$.]
#correct([
It is:
#set align(center)
#raw-render(
```dot
digraph ex {
fontname="Helvetica,Arial,sans-serif"
node [fontname="Helvetica,Arial,sans-serif"]
edge [fontname="Helvetica,Arial,sans-serif"]
rankdir=LR;
node [shape = circle] 00 01 10 11;
00 -> 01 -> 10 -> 11
00 -> 00
11 -> 11
10 -> 01
01 -> 11
10 -> 00
}
```,
width: auto
)
#set align(left)
])
#question(1)[Show that for every $n in NN^*$, each vertex of $"DB"(n)$ has as many outgoing edges as incoming edges. How many vertices and edges does $"DB"(n)$ have?]
#correct([
Each vertex has out-degree 2.
a. By induction on $n in NN^*$.
- The two vertices of $"DB"(1)$ have degree 2.
- Assume that $forall x in V("DB"(n)), deg x = 2$.\ Let $(u,v) in V("DB"(n+1))$.
By the induction hypothesis, $deg (v) = 2$\ so $|{x in V("DB"(n)) | (v,x) in E("DB"(n))}| = 2$\
so $|{(a,b) in E("DB"(n)) | a = v }| = 2$\
so $|{x in V("DB"(n+1)) | ((u,v),x) in E("DB"(n+1)) }| = 2$, hence $deg (u,v) = 2$;
the same argument applies to the in-degree.
$"DB"(n)$ has $2^n$ vertices and $2 times 2^n = 2^(n+1)$ edges: $|"DB"(n+1)| = |"DB"(n)| times deg ("DB"(n)) = 2 times |"DB"(n)|$
])
#question(1)[Show that every strongly connected directed graph in which each vertex has in-degree equal to its out-degree admits an Eulerian cycle (a cycle passing through every edge of the graph).
Deduce that for every $n in NN^*$, $"DB"(n)$ admits an Eulerian cycle.]
#correct([
By strong induction on the number of edges.
We take an arbitrary cycle, which can be built by a depth-first walk (since in-degree = out-degree, whenever we enter a vertex we can always leave it again, except at the starting vertex, where the walk eventually closes up).
We apply the induction hypothesis to each connected component of the remaining edges. Each resulting Eulerian cycle passes through every vertex of its component, so it meets our first cycle somewhere, and we splice the loops together.
])
#question(2)[Viewing the vertices of $"DB"(n)$ as words in ${0,1}^(n-1)$, and labelling the edges with $0$ or $1$, show that there exists an $n$-universal word over the alphabet ${0,1}$ of length $2^n+n-1$.]
#correct([
We show that the edges can be labelled so that $w -->^x w'$ iff $w'$ is obtained from $w$ by dropping its first letter and appending $x$. By induction on $n in NN^*$:
- We check it on the diagram of $"DB"(2)$ (and $"DB"(1)$).
- If we have a good labelling of $"DB"(n)$, then each edge $(u,v)$ is uniquely identified by $u in {0,1}^(n-1)$ and a letter $x in {0,1}$, hence by a word of ${0,1}^n$, which becomes our new vertex. Moreover, if $(u,v),(v,z)$ is an edge, then we move from $v$ to $z$, so writing $x$ for the label of the edge $(v,z)$, we label the edge $((u,v),(v,z))$ with $x$.
An Eulerian cycle is thus a path through all the edges: a sequence of letters of ${0,1}$, read starting from the word $00...0$, that produces each word of ${0,1}^n$ exactly once as a factor. Such a cycle has length $2^n$ (one letter per edge), and adding the $n-1$ letters needed to start from $00...0$ we reach $2^n+n-1$ (a runnable sketch of this construction follows below).
])
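As an aside (not part of the original exercise sheet), the construction above can be executed directly: the classic greedy "prefer-one" rule below implicitly follows an Eulerian cycle of the de Bruijn graph and produces a word of length $2^n+n-1$. The Python sketch is illustrative only.

```python
def de_bruijn_binary(n: int) -> str:
    # Greedy 'prefer-one' walk: from the current window of n-1 bits,
    # append 1 if the resulting n-bit factor is new, otherwise append 0.
    # Stops when both extensions have been seen; the result contains
    # every word of {0,1}^n exactly once as a factor.
    seen = set()
    word = "0" * (n - 1)
    while True:
        for bit in "10":
            candidate = (word[-(n - 1):] + bit) if n > 1 else bit
            if candidate not in seen:
                seen.add(candidate)
                word += bit
                break
        else:
            return word

# de_bruijn_binary(2) == "01100" (length 2**2 + 2 - 1 = 5)
```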
#question(1)[Generalise the previous question to larger alphabets.]
#correct([
We start from the graph $"DB"(1) = ({1,...,k},{1,...,k}^2)$.
Otherwise, one can directly give the right graph to obtain the $n$-universal word over ${1,...,k}$, and show that the degrees work out.
]) |
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/visualize/image_08.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
//
// // Error: 2-25 failed to parse SVG (found closing tag 'g' instead of 'style' in line 4)
// #image("/assets/files/bad.svg") |
https://github.com/f7ed0/typst-template | https://raw.githubusercontent.com/f7ed0/typst-template/master/project.typ | typst | #import "lib/blocks.typ" : *
#import "lib/PDCA.typ" : *
#let init(
doc,
col : gray.lighten(85%),
title : [TITLE],
code : [CODE],
logos : [LOGOS],
clients : [CLIENTS],
team : [TEAM],
analyse : [ANALYSE],
objectif : [OBJECTIF],
PDCA : [#PDCA(dx : 86%)],
cost : [COST],
P : [PLAN],
D : [DO],
C : [CHECK],
A : [A]
) = {
// Panel helper: a rounded, filled block used for every cell of the poster grid.
let bl(content, height : 100%, width : 99%, inset : 30pt) = block(inset: inset, fill : col, width: width, height: height, radius: 10pt, content)
set page(paper:"a0", margin: 40pt)
set text(size : 25pt)
grid(columns: (20%,60%,20%), rows: 5%,
align(left + horizon, bl({
align(center,code)
})),
align(center + horizon,bl({
set text(size : 40pt)
title
})),
align(right + horizon,bl({
align(center,logos)
})),
)
bl(width : 100%, height : 4%, {[= Equipe]
team})
bl(width: 100%, height: 4%, {[= Clients]
clients})
grid(columns: (4fr,5fr), rows : 20%, bl({[= Analyse]
analyse}), align(right, bl(align(left,{[= Objectifs]
objectif}))))
grid(columns: (4fr,3fr), rows : 20%, bl(inset : 30pt,{
[= PDCA]
set text(size : 12pt)
align(bottom,
PDCA)}), bl({[= COUT]
cost}))
grid(columns: (1fr,1fr), rows : 20.5%, bl({[= Plan]
P}), align(right,bl(align(left,{[= DO]
D}))))
grid(columns: (1fr,1fr), rows : 20.5%, bl({[= CHECK]
C}), align(right,bl(align(left,{[= ACT]
A}))))
}
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/a2c-nums/0.0.1/README.md | markdown | Apache License 2.0 | # a2c-nums
Convert Arabic numbers to Chinese characters.
## usage
```typst
#import "@preview/a2c-nums:0.0.1": int-to-cn-num, int-to-cn-ancient-num, int-to-cn-simple-num, num-to-cn-currency
#int-to-cn-num(1234567890)
#int-to-cn-ancient-num(1234567890)
#int-to-cn-simple-num(2024)
#num-to-cn-currency(1234567890.12)
```
## Functions
### int-to-cn-num
Convert an integer to a Chinese number. ex: `#int-to-cn-num(123)` will be `一百二十三`
### int-to-cn-ancient-num
Convert an integer to an ancient Chinese number. ex: `#int-to-cn-ancient-num(123)` will be `壹佰贰拾叁`
### int-to-cn-simple-num
Convert an integer to a simple Chinese number. ex: `#int-to-cn-simple-num(2024)` will be `二〇二四`
### num-to-cn-currency
Convert a number to Chinese currency. ex: `#num-to-cn-currency(1234.56)` will be `壹仟贰佰叁拾肆元伍角陆分`
### more details
Please refer to [demo.typ](demo.typ) for more details.
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/scholarly-tauthesis/0.7.0/template/content/use-of-ai.typ | typst | Apache License 2.0 | I hereby declare, that the AI-based applications used in generating this work are as follows:
#table(
align: left,
columns : (70%,30%),
table.header(
[*Application*],
[*Version*]
),
[...],
[...],
[...],
[...],
)
== Purpose of the use of AI
Explain here _in detail_ for which purposes and how AI was utilized in writing this thesis.
== Parts of this work, where AI was used
List here all chapters, sections, subsections, tables, figures, and so forth
that were generated by an AI, or that an AI had a hand in generating.
|
https://github.com/Kasci/LiturgicalBooks | https://raw.githubusercontent.com/Kasci/LiturgicalBooks/master/CU/oktoich/1_generated/0_all/Hlas5.typ | typst | #import "../../../all.typ": *
#show: book
= #translation.at("HLAS") 5
#include "../Hlas5/0_Nedela.typ"
#pagebreak()
#include "../Hlas5/1_Pondelok.typ"
#pagebreak()
#include "../Hlas5/2_Utorok.typ"
#pagebreak()
#include "../Hlas5/3_Streda.typ"
#pagebreak()
#include "../Hlas5/4_Stvrtok.typ"
#pagebreak()
#include "../Hlas5/5_Piatok.typ"
#pagebreak()
#include "../Hlas5/6_Sobota.typ"
#pagebreak()
|
|
https://github.com/soul667/typst | https://raw.githubusercontent.com/soul667/typst/main/PPT/MATLAB/touying/themes/simple.typ | typst | // This theme is from https://github.com/andreasKroepelin/polylux/blob/main/themes/simple.typ
// Author: <NAME>
#import "../utils/utils.typ"
#import "../utils/states.typ"
#let slide(self: none, title: none, footer: auto, ..args) = {
// show strong:set text(weight: 1000)
if footer != auto {
self.simple-footer = footer
}
(self.methods.touying-slide)(self: self, title: title, setting: body => {
if self.auto-heading == true and title != none {
heading(level: 2, title)
}
body
}, ..args)
}
#let centered-slide(self: none, section: none, ..args) = {
self = utils.empty-page(self)
(self.methods.touying-slide)(self: self, repeat: none, section: section, ..args.named(),
align(center + horizon, if section != none { heading(level: 1, utils.unify-section(section).title) } + args.pos().sum(default: []))
)
}
#let title-slide(self: none, body) = {
centered-slide(self: self, body)
}
#let new-section-slide(self: none, section) = {
centered-slide(self: self, section: section)
}
#let focus-slide(self: none, background: auto, foreground: white, body) = {
self.page-args.header = none
self.page-args.footer = none
self.page-args.fill = if background == auto { self.colors.primary } else { background }
set text(fill: foreground, size: 1.5em)
centered-slide(self: self, align(center + horizon, body))
}
#let register(
aspect-ratio: "16-9",
footer: [],
footer-right: states.slide-counter.display() + " / " + states.last-slide-number,
background: rgb("#ffffff"),
foreground: rgb("#000000"),
primary: aqua.darken(50%),
self,
) = {
let deco-format(it) = text(size: .6em, fill: gray, it)
// color theme
self = (self.methods.colors)(
self: self,
neutral-light: gray,
neutral-lightest: background,
neutral-darkest: foreground,
primary: primary,
)
// save the variables for later use
self.simple-footer = footer
self.simple-footer-right = footer-right
self.auto-heading = true
// set page
let header = locate(loc => {
let sections = states.sections-state.at(loc)
deco-format(sections.last().title)
})
let footer(self) = deco-format(self.simple-footer + h(1fr) + self.simple-footer-right)
self.page-args = self.page-args + (
paper: "presentation-" + aspect-ratio,
fill: self.colors.neutral-lightest,
header: header,
footer: footer,
footer-descent: 1em,
header-ascent: 1em,
)
// register methods
self.methods.slide = slide
self.methods.title-slide = title-slide
self.methods.centered-slide = centered-slide
self.methods.focus-slide = focus-slide
self.methods.new-section-slide = new-section-slide
self.methods.touying-new-section-slide = new-section-slide
self.methods.init = (self: none, body) => {
set text(fill: foreground, size: 25pt)
show footnote.entry: set text(size: .6em)
show heading.where(level: 2): set block(below: 1.5em)
set outline(target: heading.where(level: 1), title: none, fill: none)
show outline.entry: it => it.body
show outline: it => block(inset: (x: 1em), it)
body
}
self
} |
|
https://github.com/akagiyuu/asymptotic-notation | https://raw.githubusercontent.com/akagiyuu/asymptotic-notation/main/main.typ | typst | #import "@preview/ctheorems:1.1.2": *
#show: thmrules.with(qed-symbol: $square$)
#show figure.caption: emph
#show link: underline
#set text(size: 14pt)
#set heading(numbering: "1.")
#let theorem = thmbox("theorem", "Theorem", fill: rgb("#eeffee"))
#let corollary = thmplain("corollary", "Corollary", base: "theorem", titlefmt: strong)
#let definition = thmbox("definition", "Definition")
#let proposition = thmbox("proposition", "Proposition")
#let example = thmplain("example", "Example").with(numbering: none)
#let proof = thmproof("proof", "Proof")
#align(center + horizon)[
#text(size: 48pt, style: "italic", weight: 400)[
Asymptotic Notation
] \ \
#text(size: 18pt, style: "italic", weight: 400)[
\- <NAME> -
]
]
#pagebreak()
#set page(numbering: "1")
#outline(indent: auto)
#pagebreak()
= Introduction
== What is asymptotic notation?
In computer science and related disciplines, asymptotic notation describes the
behavior of a function as its input size (often denoted by n) tends towards
infinity. It focuses on the dominant terms that influence the function's growth
rate, ignoring constant factors and lower-order terms.
== Why use asymptotic notation?
It allows researchers to compare and evaluate algorithms' efficiency without
getting bogged down by specific hardware or implementation details. By focusing
on asymptotic behavior, researchers can make general statements about how well
algorithms scale with increasing input sizes.
#pagebreak()
= Types of asymptotic notation
== Big O notation ($O$-notation)
$O$-notation provides an asymptotic *upper bound*.
#figure(image("img/big-O.png", width: 10cm), caption: $f(n) = O(g(n))$)
#definition[
$ O(g(n)) := { f(n): exists c, n_0 > 0 "such that" 0 <= f(n) <= c g(n)", " forall n >= n_0} $
]
#definition[
$ f(n) := O(g(n)) arrow.l.r.double f(n) in O(g(n)) $
]
#example[
$ln(n) = O(n)$
]
== Big Omega notation ($Omega$-notation)
$Omega$-notation provides an asymptotic *lower bound*.
#figure(image("img/big-Omega.png", width: 10cm), caption: $f(n) = Omega(g(n))$)
#definition[
$ Omega(g(n)) := { f(n): exists c, n_0 > 0 "such that" 0 <= c g(n) <= f(n)", " forall n >= n_0} $
]
#definition[
$ f(n) := Omega(g(n)) arrow.l.r.double f(n) in Omega(g(n)) $
]
#example[
$n^2 + n = Omega(n^2)$
]
== Theta notation ($Theta$-notation)
$Theta$-notation provides an asymptotic *tight bound*.
#figure(image("img/Theta.png", width: 10cm), caption: $f(n) = Theta(g(n))$)
#definition[
$ Theta(g(n)) := { f(n): exists c_1, c_2, n_0 > 0 "such that" 0 <= c_1 g(n) <= f(n) <= c_2 g(n)", " forall n >= n_0} $
]
#definition[
$ f(n) := Theta(g(n)) arrow.l.r.double f(n) in Theta(g(n)) $
]
#example[
  $n^2 + n = Theta(n^2)$
]
== Little o notation ($o$-notation)
$o$-notation denotes an *upper bound* that is *not asymptotically tight*
#definition[
$ o(g(n)) := { f(n): forall epsilon > 0: exists n_0 > 0 "such that" 0 <= f(n) < epsilon g(n)", " forall n >= n_0 } $
]
#proposition[
$
g(n) > 0 => o(g(n)) = { f(n): f(n) >= 0 "and "lim_(n -> infinity) f(n)/g(n) = 0}.
$
]
#definition[
$ f(n) := o(g(n)) arrow.l.r.double f(n) in o(g(n)) $
]
#example[
$ln(n) = o(n)$
]
== Little omega notation ($omega$-notation)
$omega$-notation denotes an *lower bound* that is *not asymptotically tight*
#definition[
$ omega(g(n)) := { f(n): forall epsilon > 0: exists n_0 > 0 "such that" 0 <= epsilon g(n) < f(n)", " forall n >= n_0 } $
]
#definition[
$ f(n) := omega(g(n)) arrow.l.r.double f(n) in omega(g(n)) $
]
#proposition[
$
f(n) := omega(g(n)) => lim_(n -> infinity) f(n)/g(n) = infinity ", if the limit exists."
$
]
#example[
$n^2 = omega(n)$
]
#pagebreak()
= Properties
== Transitivity
#block[$ \
& f(n) = Theta(g(n)) " and " g(n) = Theta(h(n)) => f(n) = Theta(h(n)) \
& f(n) = O(g(n)) " and " g(n) = O(h(n)) => f(n) = O(h(n)) \
& f(n) = Omega(g(n)) " and " g(n) = Omega(h(n)) => f(n) = Omega(h(n)) \
& f(n) = o(g(n)) " and " g(n) = o(h(n)) => f(n) = o(h(n)) \
& f(n) = omega(g(n)) " and " g(n) = omega(h(n)) => f(n) = omega(h(n)) $]
== Reflexivity
#block[$ \
& f(n) = Theta(f(n)) \
& f(n) = O(f(n)) \
& f(n) = Omega(f(n))$]
== Symmetry
$f(n) = Theta(g(n)) arrow.l.r.double g(n) = Theta(f(n))$
== Transpose symmetry
#block[$ \
& f(n) = O(g(n)) arrow.l.r.double g(n) = Omega(f(n)) \
& f(n) = o(g(n)) arrow.l.r.double g(n) = omega(f(n)) $]
== Some useful identities
#block[$ \
& Theta(Theta(f(n))) = Theta(f(n)) \
& Theta(f(n)) + O(f(n)) = Theta(f(n)) \
& Theta(f(n)) + Theta(g(n)) = Theta(f(n)+g(n)) \
& Theta(f(n)) dot Theta(g(n)) = Theta(f(n) dot g(n)) $]
#block[$ \
& p(n) := sum_(k=0)^(d) a_k n^k", " forall k >= 0: a_k > 0\
& "1. " p(n) = O(n^k)", " forall k >= d \
& "2. " p(n) = Omega(n^k)", " forall k <= d \
& "3. " p(n) = Theta(n^k)"if " k = d \
& "4. " p(n) = o(n^k)", " forall k > d \
& "5. " p(n) = omega(n^k)", " forall k < d $]
#block[$ \
& n! = sqrt(2 pi n) (n/e)^n (1 + Theta(1/n)) \
& log(n!) = Theta(n log(n)) $]
#pagebreak()
= Methods for proving asymptotic bounds
== Using definitions
#example[
$
ln(n) <= n ", " forall n >= 1 " " (c = 1", " n_0 = 1)\
=> ln(n) = O(n)
$
]
#example[
$
0 <= n^2 <= n^2 + n ", " forall n >= 1 " " (c = 1", " n_0 = 1) \
=> n^2 + n = Omega(n^2)
$
]
#example[
$
0 <= n^2 <= n^2 + n <= 2 n^2", " forall n >= 1 " " (c_1 = 1", " c_2 = 2", " n_0 = 1)\
    => n^2 + n = Theta(n^2)
$
]
#example[
$
cases(
reverse: #true, ln(n) >= 0", " forall n >= 1, lim_(n arrow infinity) ln(n)/n = lim_(n -> infinity) 1/n = 0,
)
=> ln(n) = o(n)
$
]
#example[
$
forall epsilon > 0: 0 <= epsilon n < n^2 ", " forall n >= epsilon + 1 " " (n_0 = epsilon + 1)\
=> n^2 = omega(n)
$
]
== Substitution method
The substitution method comprises two steps:
- Guess the form of the solution using symbolic constants.
- Use mathematical induction to show that the solution works, and find the
constants.
This method is powerful, but it requires experience and creativity to make a
good guess.
#example[
$ T(n) := cases(
Theta(1)", " forall n: 4 > n >= 2, T(floor(n/2)) + d " " (d > 0) ", " forall n >= 4,
) $
#block(
inset: (x: 1.2em),
)[
To guess the solution easily, we will assume that: $T(n) = T(n/2) + d$ \
$
T(n) &= T(n/2) + d \
&= T(n/4) + 2d \
    &= T(n/2^k) + k d \
    &= T(c) + d log(n/c) \
    &= d log(n) + (T(c) - d log(c))
$ \
So we will make a guess: $T(n) = O(log(n))$
]
#block(inset: (x: 1.2em))[
Define $c := max{T(2), T(3), d}$\
Assume $T(n) <= c log(n) ", " forall n: k > n$ \
$
T(k) &= T(floor(k/2)) + d \
&<= c log(floor(k/2)) + d \
&<= c log(k/2) + d \
&<= c log(k) - c + d \
&<= c log(k)" (1)"
$ \
$ T(n) <= c log(n) forall n: 4 > n >= 2 " (2)" $\
From (1), (2) $ => T(n) = O(log(n))$
]
]
== Master theorem
#theorem(
"Master theorem",
)[ \
$ T(n) := a T(n/b) + f(n) $
#text(style: "italic")[where:]
#block(inset: (x: 1.2em, y: 0em))[
- $a > 0$ \
- $b > 1$ \
- $exists n_0 > 0: f(n) > 0", " forall n >= n_0$
] \ \
$
=> T(n) = cases(
Theta(n^(log_b a))", if " exists epsilon > 0: f(n) = O(n^(log_b a - epsilon )), Theta(n^(log_b a) log(n)^(k+1))", if " exists k >= 0: f(n) = Theta(n^(log_b a) log(n)^k), Theta(f(n))", if " cases(
exists epsilon > 0: f(n) = Omega(n^(log_b a + epsilon )), exists n_0 > 0", " c < 1 :a f(n/b) <= c f(n)", " forall n >= n_0,
),
)
$ ] <master_theorem>
#example[
Solve the recurrence for merge sort: $T(n) = 2T(n/2) + Theta(n)$ \ \
We have $f(n) = Theta(n) = Theta(n^(log_2 2) log(n)^0)$, hence $T(n) = Theta(n^(log_2 2) log(n)^1) = Theta(n log(n))$ (according
to $2^"nd"$ case of @master_theorem)
]
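As an additional sanity check (an illustration, not from the original text), such a recurrence can also be evaluated numerically: if $T(n) = Theta(n log(n))$, the ratio of $T(n)$ to $n log(n)$ should settle near a constant. A minimal Python sketch, assuming the base case $T(1) = 1$:

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def T(n: int) -> int:
    # Merge-sort recurrence T(n) = 2 T(n // 2) + n, with assumed base case T(1) = 1.
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

# If T(n) = Theta(n log n), this ratio should stabilize around a constant.
for n in (2**k for k in range(4, 17)):
    print(n, T(n) / (n * math.log2(n)))
```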
== Akra-Bazzi method
#theorem("Akra-Bazzi method")[ \
$ T(x) := g(x) + sum_(i = 1)^k a_i T(b_i x + h_i (x)) $
#text(style: "italic")[where:]
#block(inset: (x: 1.2em, y: 0em))[
- $a_i > 0", " forall i >= 1$ \
- $0 < b_i < 1", " forall i >= 1$ \
- $exists c in NN: abs(g'(x)) = O(x^c)$
- $abs(h_i (x)) = O(x/log(x)^2)$
] \ \
$ => T(x) = Theta(x^p (1 + integral_1^x g(u)/(u^(p+1)) dif u )) $
#text(style: "italic")[where:] $sum_(i = 1)^k a_i b_i^p = 1$ ] <akra_bazzi_method>
#example[
Solve the recurrence: $T(x) = T(x/2) + T(x/3) + T(x/6) + x log(x)$
$
|(x log x)'| = |log x + 1| <= x ", "forall x >= 1 \
=> |(g(x))'| = O(x) " (1)"
$
$
|h_i(x)| = 0 = O(x/log(x) ^2) " (2)"
$
$
(1/2) ^ 1 + (1/3) ^ 1 + (1/6) ^1 = 1 " (3)"
$
From (1), (2), and (3), we can apply @akra_bazzi_method to get:
$
T(x) &= Theta(x (1 + integral_1^x (u log(u))/(u^2) dif u)) \
&= Theta(x (1 + integral_1^x log(u)/u dif u )) \
&= Theta(x (1 + 1/2 lr(log(u)^2 bar) ^x_1)) \
&= Theta(x (1 + 1/2 log(x)^2 )) \
&= Theta(x + 1/2 x log(x)^2) \
&= Theta(x log(x)^2) \
$
]
#pagebreak()
= Finding asymptotic bound of an algorithm
== Exact step-counting analysis
The asymptotic bound of an algorithm can be calculated by following the steps
below:
- Break the program into smaller segments
- Find the number of operations performed in each segment
- Add up the operation counts of all segments and call the total T(n)
- Find the asymptotic bound of T(n)
#example[
Analyze insertion sort \ \
We have the following analysis: \
#figure(
image("img/insertion-sort.png", width: 15cm), caption: "Pseudo code for insertion sort with analysis",
)
#text(style: "italic")[where:]
#block(
inset: (x: 1.2em, y: 0em),
)[
- $c_k$ denotes the cost of $k^"th"$ line\
- $t_i$ denotes the number of times the while loop test in line 5 is executed for
given $i$
] \ \
From the analysis, we can see that:
$
T(n) &= c_1 n + c_2 (n - 1) + c_4 (n - 1) + c_5 sum_(i = 2)^n t_i \
&" "" "" "+ c_6 sum_(i = 2)^n (t_i - 1) + c_7 sum_(i = 2)^n (t_i - 1) + c_8 (n - 1)
$ \ \
In the best case (when the array is already sorted), we have $t_i = 1$ for all $i$.
$
=> T(n) &= c_1 n + c_2 (n - 1) + c_4 (n - 1) + c_5 sum_(i = 2)^n 1 \
&" "" "" "+ c_6 sum_(i = 2)^n (1 - 1) + c_7 sum_(i = 2)^n (1 - 1) + c_8 (n - 1) \
&= c_1 n + c_2 (n - 1) + c_4 (n - 1) + c_5 n + c_8 (n - 1) \
&= (c_1 + c_2 + c_4 + c_5 + c_8) n - c_2 - c_4 - c_8 \
=> T(n) &= Omega(n)
$
\ \
In the worst case, we have $t_i = i$ for all $i$.
$
=> T(n) &= c_1 n + c_2 (n - 1) + c_4 (n - 1) + c_5 sum_(i = 2)^n i \
&" "" "" "+ c_6 sum_(i = 2)^n (i - 1) + c_7 sum_(i = 2)^n (i - 1) + c_8 (n - 1) \
&= c_1 n + c_2 (n - 1) + c_4 (n - 1) + c_5 ((n (n + 1))/ 2 - 1) \
&" "" "" "+ c_6 (n (n - 1))/ 2 + c_7 (n (n - 1))/ 2 + c_8 (n - 1) \
&= (c_5/2 + c_6/2 + c_7/2) n^2 \
&" "" "" " + (c_1 + c_2 + c_4 + c_5 / 2 - c_6 / 2 - c_7 / 2 + c_8) n - c_2 - c_4 - c_5 - c_8 \
=> T(n) &= O(n^2)
$ \ \
In conclusion, we have $T(n) = Omega(n)$ and $T(n) = O(n^2)$
]
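To connect the table of costs with runnable code, here is a Python version of insertion sort (an illustration; it does not reproduce the exact line numbering of the pseudocode in the figure). The comments mark the statements that contribute the $t_i$ terms of the analysis above.

```python
def insertion_sort(a: list) -> None:
    # Sorts `a` in place, mirroring the cost analysis above.
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:  # this test runs t_i times for a given i
            a[j + 1] = a[j]           # executed t_i - 1 times
            j -= 1                    # executed t_i - 1 times
        a[j + 1] = key

xs = [5, 2, 4, 6, 1, 3]
insertion_sort(xs)
assert xs == [1, 2, 3, 4, 5, 6]
```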
== Recurrence relation
#example[Calculate asymptotic bound of merge sort \ \
Define $T(n)$ as the running time of the algorithm. \
From the implementation of merge sort, we have: $T(n) = 2T(n/2) + Theta(n)$
Applying @master_theorem, we can conclude that $T(n) = Theta(n log(n))$ ($2^"nd"$ case
with $b = 2$, $a = 2$, $k = 0$) ]
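The recurrence can be read off directly from an implementation: the two recursive calls contribute $2T(n/2)$ and the merge loop contributes $Theta(n)$. A minimal Python sketch (an illustration; any standard merge sort has the same structure):

```python
def merge_sort(a: list) -> list:
    # T(n) = 2 T(n/2) + Theta(n): two recursive calls plus a linear-time merge.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # T(n/2)
    right = merge_sort(a[mid:])   # T(n/2)
    merged = []                   # the loop below is the Theta(n) merge step
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

assert merge_sort([5, 2, 4, 6, 1, 3]) == [1, 2, 3, 4, 5, 6]
```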
#pagebreak()
= Asymptotic notation and running time
When using asymptotic notation to characterize an algorithm's running time, make
sure that the asymptotic notation used is as precise as possible without
overstating which running time it applies to.
#example[\
In the average case, quick sort runs in $Theta(n log n)$, so it also runs in $O(n^k)", " forall k >= 2$ \
In the same way, merge sort's running time can be stated as $O(n^l)", " forall l >= 2$ \ \
Taking $k = 2$ and $l = 3$, it is tempting to conclude that quick sort is faster
than merge sort for large enough $n$, since $n^2 < n^3$ \
However, both algorithms have the same asymptotic behavior (both run in $Theta(n log n)$)
\ \
The error stems from the imprecision of the asymptotic notation used to
compare the two algorithms.] \
Asymptotic notations only give a bound for the running time of an algorithm when
n is large enough. Hence, comparing algorithms with asymptotic notation is only
applicable for large enough n.
#example[\
Merge sort's running time is $Theta(n log n)$ and selection sort's running time
is $Theta(n ^ 2)$ so one may attempt to conclude that merge sort is faster than
selection sort for all n, which is a *wrong* statement. \
Consider the following benchmark:
#figure(
image("img/sort.svg"), caption: [
Comparing running time of merge sort and selection sort. \
Source: #link("https://github.com/akagiyuu/benchmark/tree/master/sort")
],
)
It's clear that selection sort is faster than merge sort for $n <= 60$ and merge
sort is faster than selection sort for $n >= 80$. \
]
#pagebreak()
= References
- #link("https://mitpress.mit.edu/9780262046305/introduction-to-algorithms/")
- #link("https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)")
- #link("https://en.wikipedia.org/wiki/Akra%E2%80%93Bazzi_method")
- #link("https://ocw.mit.edu/courses/6-042j-mathematics-for-computer-science-fall-2010/b6c5cecb1804b69a6ad12245303f2af3_MIT6_042JF10_rec14_sol.pdf")
- #link("https://www.geeksforgeeks.org/asymptotic-notations-and-how-to-calculate-them/")
- #link("https://www.geeksforgeeks.org/step-count-method-for-time-complexity-analysis/")
|
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/set-01.typ | typst | Other | // Test that lists are affected by correct indents.
#let fruit = [
- Apple
- Orange
#list(body-indent: 20pt)[Pear]
]
- Fruit
#[#set list(indent: 10pt)
#fruit]
- No more fruit
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/cetz/0.0.1/cmd.typ | typst | Apache License 2.0 | #import "matrix.typ"
#import "vector.typ"
#import "util.typ"
#import "path-util.typ"
#let typst-path = path
#let content(x, y, w, h, c) = {
((
type: "content",
segments: (("pt", (x,y)),),
bounds: (
(x + w/2, y - h/2),
(x - w/2, y + h/2)
),
draw: (self) => {
let (x, y) = self.segments.first().at(1)
place(
dx: x, dy: y,
c
)
},
),)
}
#let path(close: false, fill: none, stroke: none,
..segments) = {
let segments = segments.pos()
// Add a closing segment to make path calculations
// consider it.
if close {
let (s0, sn) = (segments.first(), segments.last())
segments.push(("line",
path-util.segment-end(sn),
path-util.segment-begin(s0)))
}
((
type: "path",
close: close,
segments: segments,
bounds: path-util.bounds(segments),
draw: (self) => {
let relative = (orig, c) => {
return vector.sub(c, orig)
}
let vertices = ()
for s in self.segments {
let type = s.at(0)
let coordinates = s.slice(1)
assert(type in ("line", "quadratic", "cubic"),
        message: "Path segments must be of type line, quadratic or cubic")
if type == "quadratic" {
// TODO: Typst path implementation does not support quadratic
// curves.
// let a = coordinates.at(0)
// let b = coordinates.at(1)
// let ctrla = relative(a, coordinates.at(2))
// let ctrlb = relative(b, coordinates.at(2))
// vertices.push((a, (0em, 0em), ctrla))
// vertices.push((b, (0em, 0em), (0em, 0em)))
let a = coordinates.at(0)
let b = coordinates.at(1)
let c = coordinates.at(2)
let samples = path-util.ctx-samples((:)) //(ctx)
vertices.push(a)
for i in range(0, samples) {
vertices.push(util.bezier-quadratic-pt(a, b, c, i / samples))
}
vertices.push(b)
} else if type == "cubic" {
let a = coordinates.at(0)
let b = coordinates.at(1)
let ctrla = relative(a, coordinates.at(2))
let ctrlb = relative(b, coordinates.at(3))
vertices.push((a, (0em, 0em), ctrla))
vertices.push((b, ctrlb, (0em, 0em)))
} else {
vertices += coordinates
}
}
place(
typst-path(
stroke: stroke,
fill: fill,
closed: self.close,
..vertices
)
)
},
),)
}
// Approximate an ellipse using 4 cubic bezier curves
#let ellipse(x, y, z, rx, ry, fill: none, stroke: none) = {
let m = 0.551784
let mx = m * rx
let my = m * ry
let left = x - rx
let right = x + rx
let top = y + ry
let bottom = y - ry
path(fill: fill, stroke: stroke,
("cubic", (x, top, z), (right, y, z),
(x + m * rx, top, z), (right, y + m * ry, z)),
("cubic", (right, y, z), (x, bottom, z),
(right, y - m * ry), (x + m * rx, bottom, z)),
("cubic", (x, bottom, z), (left, y, z),
(x - m * rx, bottom, z), (left, y - m * ry, z)),
("cubic", (left, y, z), (x, top, z),
(left, y + m * ry, z), (x - m * rx, top, z)))
}
#let arc(x, y, z, start, stop, rx, ry, mode: "OPEN", fill: none, stroke: none) = {
let samples = calc.abs(int((stop - start) / 1deg))
path(
fill: fill, stroke: stroke,
close: mode != "OPEN",
("line", ..range(0, samples+1).map(i => {
let angle = start + (stop - start) * i / samples
(
x - rx*calc.cos(start) + rx*calc.cos(angle),
y - ry*calc.sin(start) + ry*calc.sin(angle),
z
)
}) + if mode == "PIE" {
((x - rx*calc.cos(start), y - ry*calc.sin(start), z),
(x, y, z),)
} else {
()
})
)
}
#let mark(from, to, symbol, fill: none, stroke: none) = {
assert(symbol in (">", "<", "|", "<>", "o"), message: "Unknown arrow head: " + symbol)
let dir = vector.sub(to, from)
let odir = (-dir.at(1), dir.at(0), dir.at(2))
if symbol == "<" {
let tmp = to
to = from
from = tmp
}
let triangle(reverse: false) = {
let outset = if reverse { 1 } else { 0 }
let from = vector.add(from, vector.scale(dir, outset))
let to = vector.add(to, vector.scale(dir, outset))
let n = vector.scale(odir, .4)
(("line", from, (vector.add(from, n)),
to, (vector.add(from, vector.neg(n)))),)
}
let bar() = {
let n = vector.scale(odir, .5)
(("line", vector.add(to, n), vector.sub(to, n)),)
}
let diamond() = {
let from = vector.add(from, vector.scale(dir, .5))
let to = vector.add(to, vector.scale(dir, .5))
let n = vector.add(vector.scale(dir, .5),
vector.scale(odir, .5))
(("line", from, (vector.add(from, n)),
to, (vector.add(to, vector.neg(n)))),)
}
let circle() = {
let from = vector.add(from, vector.scale(dir, .5))
let to = vector.add(to, vector.scale(dir, .5))
let c = vector.add(from, vector.scale(dir, .5))
let pts = ()
let r = vector.len(dir) / 2
return ellipse(c.at(0), c.at(1), c.at(2), r, r).first().segments
}
path(
..if symbol == ">" {
triangle()
} else if symbol == "<" {
triangle(reverse: true)
} else if symbol == "|" {
bar()
} else if symbol == "<>" {
diamond()
} else if symbol == "o" {
circle()
},
close: symbol != "|",
fill: fill,
stroke: stroke,
)
}
|
https://github.com/lxl66566/my-college-files | https://raw.githubusercontent.com/lxl66566/my-college-files/main/信息科学与工程学院/算法导论/readme.md | markdown | The Unlicense | # Introduction to Algorithms (2 credits, elective)
There is no exam; the course ends with a final assessment, and the grade depends entirely on the assignments.
typst version >= 0.10.0
|
https://github.com/hrbrmstr/2023-10-20-wpe-quarto-typst | https://raw.githubusercontent.com/hrbrmstr/2023-10-20-wpe-quarto-typst/main/README.md | markdown | # Quarto/Typst Custom Templates
Companion repo for [Daily Drop #357](https://dailyfinds.hrbrmstr.dev/p/drop-357-2023-10-20-weekend-project) |
|
https://github.com/oresttokovenko/resume-workflow | https://raw.githubusercontent.com/oresttokovenko/resume-workflow/main/README.md | markdown | # Resume Workflow CLI
Are you tired of the tedious task of tailoring each resume for every job application? Well you still have to do that, but the Resume Workflow CLI tool is here to help make it easier! This tool automates the creation of folders for each company you apply to and copies over template files (if you have a resume template which you prefer to use), allowing you to save time and stay organized. As a best practice, it creates a `job_description.txt` file within the generated directories but also offers customization capabilities. The `_template` folder can be customized with additional template files to be copied during the resume generation process, since you probably have a base resume that you want to start with. Focus on what matters most - the content of your resume, not copy and pasting.
Here is an example of basic structure using LaTeX and leveraging the `_template` option
```
_template/
├── font
│ └── font.otf
└── main.tex
```
A generated directory for a Software Engineer role at Facebook
```
Facebook
└── software_engineer
├── font
│ └── font.otf
├── job_description.txt
└── main.tex
```
## Benefits
- **Time-Saving:** Automates the creation of directory structures and copying of template files, reducing manual effort.
- **Consistency:** Ensures a standardized structure and format for each job application, as well as gracefully handles existing directories
- **Flexibility:** Allows for template customization through the `_template` folder, making it adaptable to different application requirements (Word, LaTeX, Typst, etc)
## For Use
1. **Install `pipx`:**
```sh
brew install pipx
pipx ensurepath
```
2. **Install the Resume Workflow tool:**
```sh
pipx install git+https://github.com/oresttokovenko/resume-workflow.git --python 3.11
```
3. **Run the tool from anywhere on your machine, no virtual environment required:**
```sh
resume-workflow -c Facebook -j "software engineer"
```
### Using the `-t/-T` Flag and the `_template` Folder
The `resume_workflow` tool includes an optional `-t/-T` flag to specify whether to use the `_template` folder. If the `_template` folder is present and contains files, those files will be copied over to the new job directory.
- To use the template folder (default behavior):
```sh
resume-workflow -c Facebook -j "software engineer" -t
```
- To run without using the template folder:
```sh
resume-workflow -c Facebook -j "software engineer" -T
```
If the `_template` folder is empty or not present, the tool will still function as expected, creating the necessary directories and files for your resume workflow.
## For Contributors
1. **Create a Virtual Environment and Activate it:**
```sh
python3.11 -m venv .venv
source .venv/bin/activate
```
2. **Install the Tool in Editable Mode:**
```sh
pip install --editable '.[dev]'
```
3. **Run the Tool from Within the Virtual Environment:**
```sh
resume-workflow -c facebook -j "software engineer"
```
## Roadmap
- Allow users to use different base resumes for various types of job applications by defining multiple template folders
```sh
resume-workflow -c Apple -j "platform engineer" -t _infra_engineer
``` |
|
https://github.com/mariunaise/HDA-Thesis | https://raw.githubusercontent.com/mariunaise/HDA-Thesis/master/graphics/quantizers/two-metric/reconstruction.typ | typst | #import "@preview/cetz:0.2.2": canvas, plot
#let line_style_aqua = (stroke: (paint: teal, thickness: 2pt))
#let line_style_eastern = (stroke: (paint: maroon, thickness: 2pt))
#let dashed = (stroke: (dash: "dashed"))
#let fill_aqua = (stroke: none, fill: aqua)
#let fill_olive = (stroke: none, fill: eastern)
#canvas({
plot.plot(size: (7,3),
legend: "legend.south",
legend-style: (orientation: ltr, item: (spacing: 0.5)),
x-tick-step: none,
x-ticks: ((-1.25, [-a]), (1.25, [a]), (0, [0]), (-2.125, [-T1]), (2.125, [T1]), (0.375, [T2]), (-0.375, [-T2])),
y-label: $cal(R)(1, 2, x)$,
x-label: $x$,
y-tick-step: none,
y-ticks: ((0, [0]), (1, [1])),
axis-style: "left",
x-min: -3,
x-max: 3,
y-min: 0,
y-max: 1,{
plot.add(((-3,0), (-2.125,0), (-2.125,1), (0.375,1), (0.375, 0), (3, 0)), line: "vh", style: line_style_aqua, label: [Metric 1])
plot.add(((-3, 0), (-0.375, 0), (-0.375, 1), (2.125, 1), (2.125, 0), (3, 0)), line: "vh", style: line_style_eastern, label: [Metric 2])
//plot.add-fill-between(((1.25, 0), (3, 0)), ((1.25, 1), (3, 1)), style: fill_olive)
plot.add-hline(1, style: dashed)
})
})
|
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/page_04.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Just page followed by pagebreak.
// Should result in one forest-colored A11 page and one auto-sized page.
#page("a11", flipped: true, fill: forest)[]
#pagebreak()
|
https://github.com/zenor0/FZU-report-typst-template | https://raw.githubusercontent.com/zenor0/FZU-report-typst-template/main/fzu-report/utils/fake-par.typ | typst | MIT License | // by Myriad-Dreamin
#let empty-par = par[#box()]
#let fake-par = context empty-par + v(-measure(empty-par + empty-par).height)
|
https://github.com/SillyFreak/typst-packages-old | https://raw.githubusercontent.com/SillyFreak/typst-packages-old/main/scrutinize/src/questions.typ | typst | MIT License | /// A boolean state storing whether solutions should currently be shown in the document.
/// This can be set using the Typst CLI using `--input solution=true` (or `false`, which is already
/// the default) or by updating the state:
///
/// ```typ
/// #questions.solution.update(true)
/// ```
///
/// Additionally, @@with-solution() can be used to change the solution state temporarily.
///
/// -> state
#let solution = state("scrutinize-solution", {
import "utils.typ": boolean-input
boolean-input("solution")
})
#let _solution = solution
/// Sets whether solutions are shown for a particular part of the document.
///
/// - solution (boolean): the solution state to apply for the body
/// - body (content): the content to show
/// -> content
#let with-solution(solution, body) = context {
let orig-solution = _solution.get()
_solution.update(solution)
body
_solution.update(orig-solution)
}
/// An answer to a free text question. If the document is not in solution mode,
/// the answer is hidden but the height of the element is preserved.
///
/// - answer (content): the answer to (maybe) display
/// - height (auto, relative): the height of the region where an answer can be written
/// -> content
#let free-text-answer(answer, height: auto) = context {
let answer = block(inset: (x: 2em, y: 1em), height: height, answer)
if (not solution.get()) {
answer = hide(answer)
}
answer
}
/// A checkbox which can be ticked by the student.
/// If the checkbox is a correct answer and the document is in solution mode, it will be ticked.
///
/// - correct (boolean): whether the checkbox is of a correct answer
/// -> content
#let checkbox(correct) = context {
if (solution.get() and correct) { sym.ballot.x } else { sym.ballot }
}
/// A table with multiple options that can each be true or false.
/// Each option is a tuple consisting of content and a boolean for whether the option is correct or not.
///
/// - options (array): an array of (option, correct) pairs
/// -> content
#let multiple-choice(options) = {
table(
columns: (auto, auto),
align: (col, row) => (left, center).at(col) + horizon,
..for (option, correct) in options {
(option, checkbox(correct))
}
)
}
/// A table with multiple options of which one can be true or false.
/// Each option is a content, and a second parameter specifies which option is correct.
///
/// - options (array): an array of contents
/// - answer (integer): the index of the correct answer, zero-based
/// -> content
#let single-choice(options, answer) = {
multiple-choice(options.enumerate().map(((i, option)) => (option, i == answer)))
}
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/fh-joanneum-iit-thesis/1.1.0/lib.typ | typst | Apache License 2.0 | // The FH JOANNEUM Template
//
// requires parameters set in the main file "thesis.typ"
//
// ******************
// Helper functionality: todo / quote / fhjcode / textit / textbf / fhjtable / ...
// ******************
// Helper to support long and short captions for outlines (list of figures)
// author: laurmaedje
// Put this somewhere in your template or at the start of your document.
#let in-outline = state("in-outline", false)
#let flex-caption(long, short) = context if in-outline.get() { short } else { long }
#let todo(term, color: red) = {
text(color, box[✨ #term ✨])
}
#let quote(message, by) = {
block(
radius: 1em, width: 90%,
inset: (x: 2em, y: 0.5em),
[
#message,
#par(
first-line-indent: 25em,
text(font: "Inria Serif", size: 9pt, [
(#by)
])
)
]
)
}
// inspired by: https://github.com/typst/typst/issues/344
#let fhjcode(
code: "",
language: "python",
firstline:0,
lastline:-1
) = {
// Custom layout for raw code
// with line numbering
show raw.where(block: true, lang: "trimmed_code"): it => {
//
// shorten the source code if firstline and/or lastline are specified
//
let theCode = it.text // contents -> string
let lines = theCode.split("\n")
let fromLine = if firstline > lines.len() { lines.len() } else { firstline };
let toLine = if lastline > lines.len() { lines.len() } else { lastline };
lines = lines.slice(fromLine,toLine)
set par(justify: false); grid(
columns: (100%, 100%),
column-gutter: -100%,
// output source code
block(
radius: 1em, fill: luma(240), width: 100%, inset: (x: 2em, y: 0.5em),
raw(lines.join("\n"),lang: language)
),
// output line numbers
block(
width: 100%, inset: (x: 1em, y: 0.6em),
for (idx,line) in lines.enumerate() {
text(size:0.6em,str(idx+1) + linebreak())
}
),
)
}
set text(size: 11pt)
// we use here INTERNAL lang parameter "trimmed_python"
// which supports trimming (see: show raw.where(...) )
raw(code, block:true, lang: "trimmed_code")
}
// macros to emphasise / italic / boldface in a specific way
// e.g. invent your own styles for tools/commands/names/...
#let textit(it) = [
#set text( style: "italic")
#h(0.1em, weak: true)
#it
#h(0.3em, weak: true)
]
#let textbf(it) = [
#set text( weight: "semibold")
#h(0.1em, weak: true)
#it
#h(0.2em, weak: true)
]
// Create a table from csv,
// render first line bold,
// use alternating line colors
#let fhjtable(
tabledata: "",
columns: 1,
) = {
let tableheadings= tabledata.first()
let data = tabledata.slice(1).flatten()
table(
columns: columns,
fill: (_, row) =>
if row == 0 {
rgb( 255, 231, 230 ) // color for header row
}
else if calc.odd(row) {
rgb( 228,234,250 ) // each other row colored
},
align: (col, row) =>
if row == 0 { center } else { left },
..tableheadings.map( x => [*#x*]), // bold headings
..data,
)
}
// Header
// empty for no heading on first page
#let ht-last = state("page-last-section", [])
#let ht-first = state("page-first-section", [])
#let fh-header-format = locate(
loc => [
// find first heading of level 1 on current page
#let first-heading = query(
heading.where(level: 1), loc).find(
h => h.location().page() == loc.page()
)
// find last heading of level 1 on current page
#let last-heading = query(heading.where(
level: 1), loc).rev().find(
h => h.location().page() == loc.page()
)
// test if the find function returned none (i.e. no headings on this page)
#{
if not first-heading == none {
ht-first.update([
// change style here if update needed section per section
// no counter:
// (#counter(heading).at(first-heading.location()).at(0))
#set align(right)
// Uncomment, if you like to have a heading on page with header of level 1
// #first-heading.body
])
ht-last.update([
// change style here if update needed section per section
// no counter included:
// (#counter(heading).at(last-heading.location()).at(0))
#last-heading.body
])
// if one or more headings on the page, use first heading
// change style here if update needed page per page
[#ht-first.display()] //, p. #loc.page()]
} else {
// no headings on the page, use last heading from variable
// change style here if update needed page per page
[#ht-last.display()] //, p. #loc.page()]
}
}
]
)
// ******************
// MAIN TEMPLATE:
// ******************
// This function gets your whole document as its `body` and formats
// it as a bachelor or master thesis in the style suggested by IIT @ FH JOANNEUM.
#let thesis(
expose: false,
study: "<study>",
language: "en",
bibfilename: "",
title: "Specify the title of your Thesis",
// The paper's subtitle. Can be omitted if you don't have one.
subtitle: none,
supervisor: "Specify your supervisor",
author: "Specify your author",
submission-date: "Specify submission date",
logo:none,
abstract-ge: none,
  abstract-en: [Replace this with your abstract.],
biblio: none,
show-list-of: ("listings", "tables","equations","figures"),
doc
) = {
// Helper to support long and short captions for outlines (list of figures)
// author: laurmaedje
show outline: it => {
in-outline.update(true)
it
in-outline.update(false)
}
// Set PDF document metadata.
set document(title: title, author: author)
// Optimise numbers with superscript
// espcecially for nice bibliography entries
show regex("\d?\dth"): w => { // 26th, 1st, ...
let b = w.text.split(regex("th")).join()
[#b#super([th])]
}
show regex("\d?\d[nr]d"): w => { // 2dn, 3rd
let s = w.text.split(regex("\d")).last()
let b = w.text.split(regex("[nr]d")).join()
[#b#super(s)]
}
// if we find in bibentries some ISBN, we add link to it
show "https://doi.org/": w => { // handle DOIs
[DOI:]+str.from-unicode(160) // 160 A0 nbsp
}
show regex("ISBN \d+"): w => {
let s = w.text.split().last()
link("https://isbnsearch.org/isbn/"+s,w) // https://isbnsearch.org/isbn/1-891562-35-5
}
show footnote.entry: set par(
hanging-indent: 1.5em)
/*
show figure.where(kind: table): it => {
align(center)[
#block(below: 0.65em, it.body)
Table. #it.counter.display("1.") #it.caption
]
}
*/
//
// GLOBAL SETTINGS:
//
// Defaults:
let FHJ_THESIS_SUPERVISOR_LABEL = "Supervisor or Betreuer/in?"
let FHJ_THESIS_AUTHOR_LABEL = "Submitted by or Eingereicht von?"
// Following defaults should be overwritten
// dependent of master/bachelor study degree programme:
let FHJ_THESES_TITLE = "Master's or Bachelor's Thesis?"
let FHJ_THESIS_SUBMITTED_FOR = "submitted or zur Erlangung des akademischen Grades?"
let FHJ_THESES_TYP = "MSc or BSc?"
let FHJ_THESES_PROG_TYPE = "Master's or Bachelor's degree programme?"
let FHJ_THESIS_SUBMITTED_TO = "eingereicht am?" + linebreak()
let FHJ_THESES_PROG_NAME = "IMS or SWD or MSD?"
if (study == "ims") {
FHJ_THESES_TITLE = "Master's Thesis"
FHJ_THESIS_SUBMITTED_FOR = "submitted in conformity with the requirements for the degree of"
FHJ_THESES_TYP = "Master of Science in Engineering (MSc)"
FHJ_THESIS_SUBMITTED_TO = ""
FHJ_THESES_PROG_TYPE = "Master’s degree programme"
FHJ_THESES_PROG_NAME = "IT & Mobile Security"
} else if (study == "swd" and language == "en") {
FHJ_THESES_TITLE = "Bachelor's Thesis"
FHJ_THESIS_SUBMITTED_FOR = "submitted in conformity with the requirements for the degree of"
FHJ_THESES_TYP = "Bachelor of Science in Engineering (BSc)"
FHJ_THESIS_SUBMITTED_TO = ""
FHJ_THESES_PROG_TYPE = "Bachelor's degree programme"
FHJ_THESES_PROG_NAME = "Software Design and Cloud Computing"
} else if (study == "swd" and language == "de") {
FHJ_THESES_TITLE = "Bachelorarbeit"
FHJ_THESIS_SUBMITTED_FOR = "zur Erlangung des akademischen Grades"
FHJ_THESES_TYP = "Bachelor of Science in Engineering (BSc)"
FHJ_THESIS_SUBMITTED_TO = "eingereicht am"+ linebreak()
FHJ_THESES_PROG_TYPE = "Fachhochschul-Studiengang"
FHJ_THESES_PROG_NAME = "Software Design and Cloud Computing"
} else if (study == "msd" and language == "en") {
FHJ_THESES_TITLE = "Bachelor's Thesis"
FHJ_THESIS_SUBMITTED_FOR = "submitted in conformity with the requirements for the degree of"
FHJ_THESES_TYP = "Bachelor of Science in Engineering (BSc)"
FHJ_THESIS_SUBMITTED_TO = ""
FHJ_THESES_PROG_TYPE = "Bachelor degree programme"
FHJ_THESES_PROG_NAME = "Mobile Software Development"
} else if (study == "msd" and language == "de") {
FHJ_THESES_TITLE = "Bachelorarbeit"
FHJ_THESIS_SUBMITTED_FOR = "zur Erlangung des akademischen Grades"
FHJ_THESES_TYP = "Bachelor of Science in Engineering (BSc)"
FHJ_THESIS_SUBMITTED_TO = "eingereicht am"+ linebreak()
FHJ_THESES_PROG_TYPE = "Fachhochschul-Studiengang"
FHJ_THESES_PROG_NAME = "Mobile Software Development"
}else {
todo([
ERROR
Given setting '"+ study + "' for parameter 'study' is not supported.
Configuration value for <study> can be 'ims', 'swd',or 'msd'.
Check your configuration in the main file 'thesis.typ'.
Then compile again.
])
}
if (language == "en"){
FHJ_THESIS_SUPERVISOR_LABEL = "Supervisor"
FHJ_THESIS_AUTHOR_LABEL = "Submitted by"
}
if (language == "de"){
FHJ_THESIS_SUPERVISOR_LABEL = "Betreuer/in"
FHJ_THESIS_AUTHOR_LABEL = "Eingereicht von"
}
set text(
lang: if (language =="de"){
"de"
}else{
"en"
}
)
//
// heading: titles and subtitles
//
// Necessary for references: @backend, @frontend,...
set heading(numbering: "1.")
show heading.where(level: 1): it => [
// we layout rather large Chapter Headings, e.g:
//
// 3 | Related Work
//
#set text(size: 34pt )
#v(2cm)
#block[
#if it.numbering != none [
#counter(heading).display()
|
#it.body
]
#if it.numbering == none [
#it.body
]
#v(1cm)
]
#v(1cm, weak: true)
]
show heading.where(level: 2): it => [
#set text(size: 18pt)
#block[
#counter(heading).display()
#it.body]
// some space after the heading 2 (before text)
#v(0.5em)
]
show heading.where(level: 3): set text(size: 14pt)
  // equations with numbers on the right side: (1) (2) (3) ...
set math.equation(numbering: "(1)")
set cite(style: "chicago-author-date")
set page(
paper: "a4",
binding:left,
margin: (inside: 6.5em, outside: 9em),
numbering: none,
number-align: right,
// Header setup from the template of University of Waterloo
// https://github.com/yangwenbo99/typst-uwthesis
header: fh-header-format,
)
// top logo image
if logo != none {
set align(center + top)
v(2cm) // top border
logo // Logo FH JOANNEUM (vector graphics)
}
v(6em)
//
// TITLE
//
set align(center)
text(26pt, weight: "bold", title)
v(18pt, weak: true)
//
// Just an expose or a full-featured thesis:
//
if (expose == true){
// start of expose (i.e. without tables, listings etc)
text(14pt, [
#subtitle
#v(5em)
*Exposé*
#v(5em)
#text(weight: "bold",[#FHJ_THESIS_SUPERVISOR_LABEL: #supervisor])
#text(weight: "bold",[#FHJ_THESIS_AUTHOR_LABEL: #author])
#v(3em)
#submission-date
#v(0.3em)
#todo([TODO:
Specify the title, subtitle, author, submission date, study, language, your name, and supervisor/advisor in the main _expose.typ_ file. Then compile with _typst compile expose.typ_. \
      Finally, remove all TODOs (todo macros) within your Typst source code.
] + align(center+bottom)[Preview printed #datetime.today().display().]
)
])
set align(left)
// end of expose
}else{
// start of thesis including list of listings, list of tables etc.
text(14pt, [
#subtitle
#v(5em)
*#FHJ_THESES_TITLE*
#FHJ_THESIS_SUBMITTED_FOR
*#FHJ_THESES_TYP*
#FHJ_THESIS_SUBMITTED_TO #FHJ_THESES_PROG_TYPE *#FHJ_THESES_PROG_NAME*
#v(0.5em)
<NAME> (University of Applied Sciences), Kapfenberg
#v(4em)
#text(weight: "bold",[#FHJ_THESIS_SUPERVISOR_LABEL: #supervisor])
#text(weight: "bold",[#FHJ_THESIS_AUTHOR_LABEL: #author])
#v(3em)
#submission-date
#v(0.3em)
#todo([TODO:
Specify the title, subtitle, author, submission date, study, language, your name, and supervisor/advisor in the main _thesis.typ_ file. Then compile with _typst compile thesis.typ_. \
      Finally, remove all TODOs (todo macros) within your Typst source code.
] + align(center+bottom)[Preview printed #datetime.today().display().]
)
])
pagebreak()
set page(numbering: "i", number-align: center)
counter(page).update(1)
//
// ABSTRACT
//
// ABSTRACT (en)
set align(center)
[*Abstract*]
set align(left)
par(justify: true, abstract-en)
pagebreak()
if abstract-ge != none {
// ABSTRACT (ge)
set align(center)
[*Kurzfassung*]
set align(left)
par(justify: true,abstract-ge)
pagebreak()
}
// Setting numbering for ToC, LoF, LoT, LoL, ...
set align(left)
enum(numbering: "I.")
//
// TABLE OF CONTENTS (ToC)
//
outline(depth:2, indent:true)
//
// LIST OF FIGURES (LoF)
//
if show-list-of.contains("figures"){
pagebreak()
if (language == "de"){
heading("Abbildungsverzeichnis", numbering: none)
}else{
heading("List of Figures", numbering: none)
}
outline(
title: none,
target: figure.where(kind: image),
)
}
//
// LIST OF TABLES (LoT)
//
if show-list-of.contains("tables"){
pagebreak()
if (language == "de"){
heading("Tabellenverzeichnis", numbering: none)
}else{
heading("List of Tables", numbering: none)
}
outline(
title: none,
target: figure.where(kind: table),
)
}
//
// LIST of LISTINGS LoL
//
if show-list-of.contains("listings"){
pagebreak()
if (language == "de"){
heading("Source Codes", numbering: none)
}else{
heading("List of Listings", numbering: none)
}
outline(
title: none,
target: figure.where(kind: raw)
)
}
} // end of thesis (i.e. not end of expose)
//
// MAIN PART
//
// Rest of the document with numbers starting with 1
set align(left)
enum(numbering: "1")
set page(numbering: "1", number-align: center)
counter(page).update(1)
set par(justify: true)
// everything you specified in the main "thesis.typ" file:
// i.e. all the imported other chapters
doc
//
// BIBLIOGRAPHY
//
if biblio != none {
text(12pt,biblio)
}
}
|
https://github.com/Enter-tainer/typstyle | https://raw.githubusercontent.com/Enter-tainer/typstyle/master/tests/assets/unit/markup/multi-tick-raw.typ | typst | Apache License 2.0 | `single tick`
``double backtick. Actually this is not raw text.``
```js
function fib(n) {
if (n <= 1) return 1;
return fib(n - 1) + fib(n - 2);
}
```
````md
# This is a markdown code block
```cpp
#include <iostream>
int main() {
std::cout << "Hello, World!" << std::endl;
return 0;
}
```
````
|
https://github.com/VisualFP/docs | https://raw.githubusercontent.com/VisualFP/docs/main/SA/design_concept/content/design/functional_requirements.typ | typst | #import "../../../acronyms.typ": *
= Functional Requirements <functional_requirements>
The following section describes all actors and use cases identified for the
VisualFP application.
== Actors
VisualFP has two actors:
#terms(
terms.item(
"Student User",
[
The student user is the primary user of VisualFP and, therefore, the main influence on the visualization design.
The student user wants to learn functional programming using VisualFP.
They want to do that by visually composing functions in a simple #ac("UI").
The #ac("UI") should simplify understanding functional concepts that many beginners struggle with.
]
),
terms.item(
"Expert User",
[
The expert user is an experienced professional who wants to use VisualFP to help them understand their code better.
For that, they want to import their existing Haskell project into VisualFP.
]
)
)
== Use Cases <use-cases>
@use_case_diagram gives an overview of all identified use cases.
By default, "user" in the use case description refers to the "student user".
#figure(
image("../../static/SA_use_cases.png", width: 80%),
caption: "Use Case Diagram"
)<use_case_diagram>
As the aim of this project is to find a visual representation of functional programming,
the use case descriptions are kept very brief and only state the intention behind the use case.
=== UC1 - Simple Function Composition
A user wants to compose a simple function using pre-defined functions, e.g., Integer parameters.
=== UC2 - Function Execution
A user wants to execute their visually composed functions to see the effects of their functions on data.
=== UC3 - Recursive Function Composition
A user wants to compose a function that is defined using itself.
To do so, the user needs a way to distinguish between a recursive case and a base case.
=== UC4 - Function Composition using Higher-Order Functions
To create reusable and composable functions, a user wants to compose functions that take other functions as their input, in other words, higher-order functions.
=== UC5 - Curried Functions
A user wants to create a function by partially applying a curried function.
=== UC6 - Function Composition using Lists
A user wants to compose a function using lists, so that they can collect data and process it further.
=== UC7 - Data Type Composition
A user wants to be able to create their own data types to represent data of their problem domain accurately.
=== UC8 - Save Source File
A user wants to save their composed functions in a source file so they can keep their work when, e.g., restarting their computer.
=== UC9 - Open Source File
A user wants to open a previously saved source file to continue working on their program.
=== UC10 - Group Functions into Modules
An expert user wants to group functions into modules to keep their code organized.
=== UC11 - Import Haskell code
An expert user wants to import their existing Haskell project into VisualFP so they can get a better understanding of their code from its visualization.
== Prioritization & Scope
The focus of this project lies in creating a design that allows developing functional applications visually and is suitable for beginners.
Use cases 1 - 6 have been deemed more important to reach this goal and thus have higher priority than use cases 7 - 9.
Use cases 10 and 11 are not in this project's scope but are listed for completeness.
|
|
https://github.com/Mouwrice/thesis-typst | https://raw.githubusercontent.com/Mouwrice/thesis-typst/main/lib.typ | typst | #import "@preview/fontawesome:0.2.0": *
#import "@preview/codly:0.2.0": *
#import "@preview/drafting:0.2.0": *
#let link-icon = super[#fa-arrow-up-right-from-square()]
// Workaround for the lack of an `std` scope.
#let std-bibliography = bibliography
// This function gets your whole document as its `body` and formats
// it as an article in the style of the IEEE.
// Taken from https://github.com/typst/templates/tree/main/charged-ieee
#let template(
// The paper's title.
title: [Paper Title],
// An array of authors. For each author you can specify a name,
// department, organization, location, and email. Everything but
// but the name is optional.
authors: (),
// The paper's abstract. Can be omitted if you don't have one.
abstract: none,
preface: [Preface goes here],
// A list of index terms to display after the abstract.
index-terms: (),
// The result of a call to the `bibliography` function or `none`.
bibliography: none,
// The paper's content.
body
) = {
// Set document metadata.
set document(title: title, author: authors.map(author => author.name))
set text(font: "Noto Sans", lang: "en")
set heading(numbering: "1.1")
set figure(placement: auto)
show link: set text(style: "italic")
pagebreak()
// Configure the page.
set page(paper: "a4", margin: 2.5cm)
set align(center + horizon)
preface
set align(top + left)
pagebreak()
// Display abstract and index terms.
if abstract != none [
#set text(weight: 700)
#h(1em) _Abstract_---#h(weak: true, 0pt)#abstract
#if index-terms != () [
#h(1em)_Index terms_---#h(weak: true, 0pt)#index-terms.join(", ")
]
#v(2pt)
]
pagebreak()
// Table of contents.
outline(depth: 3, indent: true)
set par(leading: 10pt, justify: true)
show par: set block(above: 1em, below: 2em)
show figure: set block(breakable: true, below: 2em)
show figure.caption: emph
let icon(codepoint) = {
box(
height: 0.8em,
baseline: 0.05em,
image(codepoint)
)
h(0.1em)
}
show table.cell.where(y: 0): set text(weight: "bold")
let frame(stroke) = (x, y) => (
left: if x > 0 { 0pt } else { stroke },
right: stroke,
top: if y < 2 { stroke } else { 0pt },
bottom: stroke,
)
set table(
fill: (_, y) => if calc.odd(y) { rgb("EAF2F5") },
stroke: frame(rgb("21222C")),
)
show: codly-init.with()
codly(languages: (
tsv: (name: "TSV", icon: icon("images/tsv.png"), color: gray),
csv: (name: "CSV", icon: icon("images/csv.png"), color: gray),
))
set-page-properties()
show heading.where(level: 1): it => pagebreak(weak: true) + it
set page(numbering: "1")
counter(page).update(1)
// Display the paper's contents.
body
// Display bibliography.
if bibliography != none {
show std-bibliography: set text(8pt)
set std-bibliography(title: text(10pt)[References], style: "ieee")
bibliography
}
}
|
|
https://github.com/MatheSchool/typst-g-exam | https://raw.githubusercontent.com/MatheSchool/typst-g-exam/develop/test/fonts/test-003-page-letter.typ | typst | MIT License | #import "../../src/lib.typ": *
#set page(
"us-letter",
// "a4",
// width: 12cm,
// height: 4cm,
// margin: (x: 58pt, y: 4pt)
)
#show: g-exam.with(
school: (
name: "Sunrise Secondary School",
// logo: read("./logo.png", encoding: none),
),
exam-info: (
academic-period: "Academic year 2023/2024",
academic-level: "1st Secondary Education",
academic-subject: "Mathematics",
number: "2nd Assessment 1st Exam",
content: "Radicals and fractions",
model: "Model A"
),
)
#g-question(points: 2)[#lorem(30)]
#g-subquestion(points: 2)[#lorem(30)]
#g-subquestion(points: 2, points-position: right)[#lorem(30)]
#g-question(points: 1)[#lorem(30)]
#g-subquestion(points: 2)[#lorem(30)]
#g-subquestion(points: 2)[#lorem(30)]
#g-question(points: 2, points-position: right)[#lorem(30)]
#g-subquestion(points: 2, points-position: right)[#lorem(30)]
#g-subquestion(points: 2)[#lorem(30)]
#g-question(points: 1.5)[#lorem(30)] |
https://github.com/xhalo32/constructive-logic-course | https://raw.githubusercontent.com/xhalo32/constructive-logic-course/master/slides/slidetheme.typ | typst | // This theme is inspired by https://github.com/matze/mtheme
// The original code was written by https://github.com/Enivex
#import "@preview/touying:0.4.2": *
// Consider using: NOTE doesn't work?
// #set text(font: "Fira Sans", weight: "light", size: 20pt)
// #show math.equation: set text(font: "Fira Math")
// #set strong(delta: 100)
// #set par(justify: true)
#let _saved-align = align
#let slide(
self: none,
title: auto,
footer: auto,
align: horizon,
..args,
) = {
self.page-args += (
fill: self.colors.neutral-lightest,
)
if title != auto {
self.m-title = title
}
if footer != auto {
self.m-footer = footer
}
(self.methods.touying-slide)(
..args.named(),
self: self,
title: if title == auto { self.m-title = title } else { title },
setting: body => {
show: _saved-align.with(align)
set text(fill: self.colors.neutral-dark)
show: args.named().at("setting", default: body => body)
body
},
..args.pos(),
)
}
#let title-slide(
self: none,
extra: none,
..args,
) = {
self = utils.empty-page(self)
let info = self.info + args.named()
let content = {
set text(fill: self.colors.neutral-dark)
set align(horizon)
block(width: 100%, inset: 2em, {
text(size: 1.3em, text(weight: "medium", info.title))
if info.subtitle != none {
linebreak()
text(size: 0.9em, info.subtitle)
}
line(length: 100%, stroke: .05em + self.colors.secondary-light)
set text(size: .8em)
if info.author != none {
block(spacing: 1em, info.author)
}
if info.date != none {
block(spacing: 1em, utils.info-date(self))
}
set text(size: .8em)
if info.institution != none {
block(spacing: 1em, info.institution)
}
if extra != none {
block(spacing: 1em, extra)
}
})
}
(self.methods.touying-slide)(self: self, repeat: none, content)
}
#let new-section-slide(self: none, short-title: auto, title) = {
self = utils.empty-page(self)
let content = {
set align(horizon)
show: pad.with(20%)
set text(size: 1.5em)
states.current-section-with-numbering(self)
block(height: 2pt, width: 100%, spacing: 0pt, utils.call-or-display(self, self.m-progress-bar))
}
(self.methods.touying-slide)(self: self, repeat: none, section: (title: title, short-title: short-title), content)
}
#let focus-slide(self: none, body) = {
self = utils.empty-page(self)
self.page-args += (
fill: self.colors.primary-dark,
margin: 2em,
)
set text(fill: self.colors.neutral-lightest, size: 1.5em)
(self.methods.touying-slide)(self: self, repeat: none, align(horizon + center, body))
}
#let slides(self: none, title-slide: true, outline-slide: false, outline-title: [Table of contents], slide-level: 1, ..args) = {
if title-slide {
(self.methods.title-slide)(self: self)
}
if outline-slide {
(self.methods.slide)(self: self, title: outline-title, (self.methods.touying-outline)())
}
(self.methods.touying-slides)(self: self, slide-level: slide-level, ..args)
}
#let register(
self: themes.default.register(),
aspect-ratio: "16-9",
header: states.current-section-with-numbering,
footer: [],
footer-right: states.slide-counter.display() + " / " + states.last-slide-number,
footer-progress: true,
..args,
) = {
// color theme
self = (self.methods.colors)(
self: self,
neutral-lightest: rgb("#fafafa"),
neutral-dark: rgb("#23373b"),
primary-dark: rgb("#65d3e9").lighten(50%),
secondary-light: rgb(255, 92, 168),
secondary-lighter: rgb("#d6c6b7"),
)
// save the variables for later use
self.m-progress-bar = self => states.touying-progress(ratio => {
grid(
columns: (ratio * 100%, 1fr),
components.cell(fill: self.colors.secondary-light),
components.cell(fill: self.colors.secondary-lighter)
)
})
self.m-footer-progress = footer-progress
self.m-title = header
self.m-footer = footer
self.m-footer-right = footer-right
// set page
let header(self) = {
set align(top)
if self.m-title != none {
show: components.cell.with(fill: rgb("#00000000"), inset: 1em)
set align(horizon)
set text(fill: self.colors.secondary-light, size: 1.2em)
utils.fit-to-width(grow: false, 100%, text(font: "Fira Sans", weight: "bold", utils.call-or-display(self, self.m-title)))
} else { [] }
}
let footer(self) = {
set align(bottom)
set text(size: 0.8em)
pad(.5em, {
text(fill: self.colors.neutral-dark.lighten(40%), utils.call-or-display(self, self.m-footer))
h(1fr)
text(fill: self.colors.neutral-dark, utils.call-or-display(self, self.m-footer-right))
})
if self.m-footer-progress {
place(bottom, block(height: 2pt, width: 100%, spacing: 0pt, utils.call-or-display(self, self.m-progress-bar)))
}
}
self.page-args += (
paper: "presentation-" + aspect-ratio,
header: header,
footer: footer,
header-ascent: 30%,
footer-descent: 30%,
margin: (top: 3em, bottom: 1.5em, x: 2em),
)
// register methods
self.methods.slide = slide
self.methods.title-slide = title-slide
self.methods.new-section-slide = new-section-slide
self.methods.touying-new-section-slide = new-section-slide
self.methods.focus-slide = focus-slide
self.methods.slides = slides
self.methods.touying-outline = (self: none, enum-args: (:), ..args) => {
states.touying-outline(self: self, enum-args: (tight: false,) + enum-args, ..args)
}
self.methods.alert = (self: none, it) => text(fill: self.colors.secondary-light, it)
self
}
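// A minimal usage sketch (added for illustration; it is not part of the
// original theme file, and the touying `utils.methods` unpacking pattern
// below is an assumption based on the methods registered above):
//
// #let s = register(aspect-ratio: "16-9", footer: [My talk])
// #let (init, slides, slide, focus-slide) = utils.methods(s)
// #show: init
// #show: slides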
// iridis
#let need-regex-escape = (c) => {
(c == "(") or (c == ")") or (c == "[") or (c == "]") or (c == "{") or (c == "}") or (c == "\\") or (c == ".") or (c == "*") or (c == "+") or (c == "?") or (c == "^") or (c == "$") or (c == "|") or (c == "-")
}
#let build-regex = (chars) => {
chars.fold("", (acc, c) => {
acc + (if need-regex-escape(c) { "\\" } else {""}) + c + "|"
}).slice(0, -1)
}
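// For example (an illustrative note, not part of the original source):
// `build-regex(("(", "+", "a"))` evaluates to the alternation "\\(|\\+|a".
// Each character is escaped when `need-regex-escape` says so, joined with
// "|", and the trailing "|" is removed by the final `slice(0, -1)`.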
#let copy-fields(equation, exclude:()) = {
let fields = (:)
for (k,f) in equation.fields() {
if k not in exclude {
fields.insert(k, f)
}
}
fields
}
#let colorize-math(palette, equation, i : 0) = {
if type(equation) != content {
return equation
}
if equation.func() == math.equation {
// this is a hack to mark the equation as colored so that we don't colorize it again
if equation.body.has("children") and equation.body.children.at(0) == [#sym.space.hair] {
equation
} else {
math.equation([#sym.space.hair] + colorize-math(palette, equation.body, i:i), block: equation.block)
}
} else if equation.func() == math.frac {
math.frac(colorize-math(palette, equation.num, i:i), colorize-math(palette, equation.denom, i:i), ..copy-fields(equation, exclude:("num", "denom")))
} else if equation.func() == math.accent {
math.accent(colorize-math(palette, equation.base, i:i), equation.accent, size: equation.size)
} else if equation.func() == math.attach {
math.attach(
colorize-math(palette, equation.base, i:i),
..copy-fields(equation, exclude:("base",))
)
} else if equation.func() == math.cases {
math.cases(..copy-fields(equation, exclude:("children")), ..equation.children.map(child => {
colorize-math(palette, child, i:i)
}))
} else if equation.func() == math.vec {context {
let color = text.fill
show: text.with(fill: palette.at(calc.rem(i, palette.len())))
math.vec(
..copy-fields(equation, exclude:("children")),
..equation.children.map(child => {
show: text.with(fill: color)
colorize-math(palette, child, i:i + 1)
}),
)
}} else if equation.func() == math.mat { context {
let color = text.fill
show: text.with(fill: palette.at(calc.rem(i, palette.len())))
math.mat(
..copy-fields(equation, exclude:("rows")),
..equation.rows.map(row => row.map(cell => {
show: text.with(fill: color)
colorize-math(palette, cell, i:i + 1)
})),
)
show: text.with(fill: color)
} } else if equation.has("body") {
equation.func()(colorize-math(palette, equation.body, i:i), ..copy-fields(equation, exclude:("body",)))
} else if equation.has("children") {
let colorisation = equation.children.fold((i, ()), ((i, acc), child) => {
if child == [(] {
acc.push([
#show: text.with(fill: palette.at(calc.rem(i, palette.len())))
#equation.func()(([(],))])
(i + 1, acc)
} else if child == [)] {
acc.push([
#show: text.with(fill: palette.at(calc.rem(i - 1, palette.len())))
#equation.func()(([)],))])
(i - 1, acc)
} else {
acc.push(colorize-math(palette, child, i:i))
(i, acc)
}
})
equation.func()(..copy-fields(equation, exclude:("children")), colorisation.at(1))
} else if equation.has("child") { // styles
equation.func()(colorize-math(palette, equation.child, i:i), equation.styles)
} else {
equation
}
}
#let colorize-code(counter : state("parenthesis", 0), opening-parenthesis : ("(","[","{"), closing-parenthesis : (")","]","}"), palette) = (body) => context {
show regex(build-regex(opening-parenthesis)) : body => context {
show: text.with(fill: palette.at(calc.rem(counter.get(), palette.len())))
body
counter.update(n => n + 1)
}
show regex(build-regex(closing-parenthesis)) : body => context {
counter.update(n => n - 1)
text(fill: palette.at(calc.rem(counter.get() - 1, palette.len())), body)
}
body
} |
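// Hedged usage sketch (these show rules are illustrative assumptions, not
// part of the original file):
//
// #show math.equation: eq => colorize-math((red, green, blue), eq)
// #show raw: colorize-code((red, green, blue))
//
// `colorize-math` recolors nested delimiters by depth; the hair space it
// prepends marks an equation as already colored so the show rule does not
// recurse on its own output. `colorize-code` does the same for raw text by
// tracking nesting depth in the `parenthesis` state counter.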
|
https://github.com/gaoachao/uniquecv-typst | https://raw.githubusercontent.com/gaoachao/uniquecv-typst/main/README.md | markdown | # uniquecv-typst
> A résumé template written in [Typst](https://typst.app/), based on [uniquecv](https://github.com/dyinnz/uniquecv)
## Usage
### Online
Typst provides a very handy [webapp](https://typst.app)
- Create a project
- Copy main.typ and template.typ into the project directory
- Compile online
### Local
- Install Typst
  - macOS/Linux: `brew install typst`
  - Arch Linux: `pacman -S typst`
- Install the fontawesome icon font (see [install-the-fonts](https://github.com/typst/packages/tree/main/packages/preview/fontawesome/0.1.0#install-the-fonts))
- Clone this repository
- Compile (see the official [Usage](https://github.com/typst/typst) instructions)
```
typst compile path/to/main.typ path/to/output.pdf
```
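
For quicker iteration, you can also have Typst recompile on every save (a suggested addition, not part of the original README):

```
typst watch path/to/main.typ path/to/output.pdf
```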
### Preview

## Related links
- [uniquecv-latex](https://github.com/dyinnz/uniquecv)
- [werifu: HUST-typst-template](https://github.com/werifu/HUST-typst-template)
- [mgt: typst-preview-vscode](https://github.com/Enter-tainer/typst-preview-vscode)
|
|
https://github.com/donabe8898/typst-slide | https://raw.githubusercontent.com/donabe8898/typst-slide/main/README.md | markdown | MIT License | # typst-slide
A collection of slides made with Typst: club-activity decks and assorted other things.
|
https://github.com/SkymanOne/zk-learning | https://raw.githubusercontent.com/SkymanOne/zk-learning/main/notes/snark_overview/snark_overview.typ | typst | #import "../base.typ": *
#show: note
= Overview of Modern SNARK Constructions
From this point on, we focus on non-interactive proofs.
#def([
  *SNARK*: a succinct proof that a certain statement is true.
])
Example statement: "I know an `m` s.t. `SHA256(m) = 0`"
The proof is *short* and *fast* to verify.
*zk-SNARK*: the proof reveals nothing about `m`.
The power of ZKPs:
#quote([_A single reliable machine (e.g. Blockchain) can monitor and verify the computations of a set of powerful machines working with unreliable software._])
= SNARK Components
== Arithmetic circuits
We denote by $FF = {0, ..., p - 1}$ a finite field for some prime $p > 2$.
#def([
  *Arithmetic circuit*: $C: FF^n -> FF$. It takes $n$ elements of $FF$ and produces one element of $FF$.
  It can be viewed in two ways:
  - as a directed acyclic graph whose internal nodes are labeled with arithmetic operations and whose inputs are labeled with constants and variables;
  - as an n-variate polynomial together with an evaluation recipe.
])
#align(center, image("arithmetic_circuit.png", width: 40%))
The above circuit can be represented as the n-variate polynomial $5x_0 + 3(x_1 + x_2)$.
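To make the "evaluation recipe" view concrete, here is a minimal sketch (an illustration added for clarity, not from the original notes) of evaluating this circuit over $FF_p$:

```python
# Hypothetical sketch: evaluate the circuit 5*x0 + 3*(x1 + x2) over F_p.
p = 2**31 - 1  # any prime > 2 works; this particular choice is an assumption

def C(x0, x1, x2):
    # each internal node of the DAG is one field operation (mod p)
    return (5 * x0 + 3 * ((x1 + x2) % p)) % p

print(C(1, 2, 3))  # 20
```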
We denote $|C| = $ number of gates in a circuit $C$
We have two different circuit types:
- *Unstructured*: a circuit with arbitrary wires
- *Structured*: a circuit built in layers, with the same layer repeated over and over
#align(center,
diagram(
spacing: 2em,
node-stroke: 0.5pt,
node((0, 0), [*Input*], shape: circle, width: 5em),
edge("->"),
node((1, 0), [*M*],),
edge("->"),
node((2, 0), [*M*]),
edge("->"),
node((3, 0), [*...*], stroke: 0pt),
edge("->"),
node((4, 0), [*M*]),
edge("->"),
node((5, 0), [*Output*], shape: circle, width: 5em),
node(enclose: ((1, 0), (4, 0)))
)
)
$M$ is often called a virtual machine (VM); you can think of it as one step of a computation.
== NARK: Non-Interactive ARgument of Knowledge
Given some arithmetic circuit $C(x, w) -> FF$ where
- $x in FF^n$ - a public statement
- $w in FF^M$ - a secret witness
Before executing the circuit there is a preprocessing setup: $S(C) ->$ public params (*_pp_*, *_vp_*)
It takes a circuit description and produces public params.
#align(center,
diagram(
spacing: 2em,
node-stroke: 0.5pt,
node((0, 0), [*pp, x, w*], stroke: 0pt),
edge("->"),
node((0, 1), [Prover], height: 3em),
edge("->", [proof $pi$ that $C(x, w) = 0$]),
node((7, 1), [Verifier], height: 3em),
edge("<-"),
node((7, 0), [*vp, x*], stroke: 0pt),
edge((7, 1), (8, 1), "->"),
node((8, 1), [Accept/Reject], stroke: 0pt),
)
)
#def([
A *preprocessing NARK* is a triple $(S, P, V)$:
- $S(C) -> $ public params _(pp, vp)_ for prover and verifier
- $P("pp", x, w) -> "proof" pi$
- $V("vp", x, pi) -> "accept/reject"$
])
Side note: _All algorithms and adversaries have access to a random oracle._
#def([
  *Random oracle*: an oracle (black box) that responds to any unique query with a uniformly distributed random response from its output domain.
If the input is repeated, the same output is returned.
])
== SNARK: Succinct Non-Interactive ARgument of Knowledge
We now impose additional requirements on the NARK
- *Completeness:* $forall x,w space C(x, w) = 0 => P[V("vp", x, P("pp", x, w)) = "accept"] = 1$
- *Knowledge soundness*: _V_ accepts $=> P "knows" w "s.t." C(x, w) = 0$. \ An extractor _E_ can extract a valid _w_ from _P_.
- (Optional) *Zero-Knowledge*: $(C, "pp", "vp", x, pi)$ reveals nothing new about _w_.
#def([
  A #underline([*succinct*]) *preprocessing NARK* is a triple $(S, P, V)$:
- $S(C) -> $ public params _(pp, vp)_ for prover and verifier
- $P("pp", x, w) -> "short proof" pi$; $"len"(pi) = "sublinear"(|w|)$
- $V("vp", x, pi) -> "accept/reject"$; *fast to verify*: $"time"(V) = O_(lambda)(|x|, "sublinear"(|C|))$
])
In practice we impose even stronger constraints:
#def([
  A #underline([*strongly succinct*]) *preprocessing NARK* is a triple $(S, P, V)$:
- $S(C) -> $ public params _(pp, vp)_ for prover and verifier
- $P("pp", x, w) -> "short proof" pi$; $"len"(pi) = log(|w|)$
- $V("vp", x, pi) -> "accept/reject"$; *fast to verify*: $"time"(V) = O_(lambda)(|x|, log(|C|))$
])
The _Big O_ notation carries a $lambda$ subscript; $lambda$ refers to a security parameter (e.g. the length of keys), so the complexity is analyzed with respect to that security parameter.
Notice that because the verifier must verify the proof in time shorter than the circuit size,
it does not have time to read the whole circuit. This is the reason for the preprocessing step _S_: it reads the circuit _C_ and generates a _summary_ of it. Therefore, $|"vp"| ≤ log(|C|)$.
== Types of preprocessing Setup
Suppose we have a setup for some circuit _C_: $S(C; r) -> $ _public params (pp, vp)_, where _r_ are random bits.
We have the following types of setup:
- *Trusted setup per circuit*: $S(C; r)$; the random bits _r_ must be kept secret from the prover, otherwise the prover can prove false statements.
- *Trusted universal (updatable) setup:* secret _r_ is independent of _C_ \ $S = (S_("init"), S_("index")): S_("init")(lambda;r) -> "gp", S_("index")("gp";C) -> "(pp, vp)"$ where
- $S_"init"$ - one time setup
- $S_"index"$ - deterministic algorith
- _gp_ - global params \
  The benefit of the universal setup is that we can generate params for as many circuits as we want.
- *Transparent setup:* no secret data, $S(C)$
#pagebreak()
= Overview of Proving Systems
#table(
columns: (auto, auto, auto, auto, auto),
inset: 10pt,
align: center,
table.header(
[], [*Size of proof*], [*Verifier time*], [*Setup*], [*Post-\ Quantum*]
),
[*Groth'16*], [\~ 200 bytes \ $O_(lambda(1))$], [\~ 1.5 ms \ $O_(lambda(1))$], [trusted setup per circuit], [no],
[*Plonk & \ Marlin*], [\~ 400 bytes \ $O_(lambda(1))$], [\~ 3 ms \ $O_(lambda(1))$], [universal trusted setup], [no],
  [*Bulletproofs*], [\~ 1.5 KB \ $O_(lambda(log|C|))$], [\~ 3 sec \ $O_(lambda(|C|))$], [transparent], [no],
  [*STARKs*], [\~ 100 KB \ $O_(lambda(log^2|C|))$], [\~ 3 sec \ $O_(lambda(log^2|C|))$], [transparent], [yes],
)
= Knowledge Soundness
If _V_ accepts then _P_ knows _w_ s.t. $C(x, w) = 0$.
It means that we can extract _w_ from _P_.
#def([
$(S, P, V)$ is (adaptively) *knowledge sound* for a circuit _C_ if for every polynomial time adversary $A = (A_0, A_1)$ s.t.
_gp_ $<- S_("init")()$, $(C, x, "st") <- A_0("gp")$, _(pp, vp)_ $<- S_("index")(C)$, $pi <- A_1("pp", x, "st")$ :
$P[V("vp", x, pi) = "accept"] > 1 "/" 10^6$ (non-negligible).
])
_A_ acts as a malicious prover that tries to prove a statement without knowledge of _w_. It is split into two algorithms, $A_0$ and $A_1$.
Given the global parameters, the malicious $A_0$ generates a circuit and a statement for which it will try to forge a proof, along with some internal state _st_.
Then public params are generated from the circuit, and the malicious $A_1$ generates a forged proof $pi$ from the prover params, the statement, and the internal state.
If a malicious prover convinces a verifier with probability greater than $1 "/" 10^6$, then there is an efficient *extractor* _E_ (that uses _A_) s.t.
#def([
_gp_ $<- S_("init")()$, $(C, x, "st") <- A_0("gp")$, $w <- E("gp", C, x)$ (using $A_1$):
$P(C(x, w) = 0) > 1 "/" 10^6 - epsilon$ (for a negligible $epsilon$)
])
= Building Efficient SNARKs
There is a general paradigm with two components:
- A functional commitment scheme. Requires cryptographic assumptions.
- A compatible interactive oracle proof (IOP). Does not require any assumptions.
#align(center,
diagram(
spacing: 1em,
node-stroke: 0.5pt,
node((0, 0), [Functional \ commitment \ scheme], stroke: 0pt, name: <one>),
node((0, 3), [IOP], stroke: 0pt, name: <two>),
node((1, 2), [Proving process], stroke: 0pt, name: <prove>),
node((2, 2), [SNARK for \ general circuits], stroke: 0pt, name: <snark>),
edge(<one>, <prove>, "->"),
edge(<two>, <prove>, "->"),
edge(<prove>, <snark>, "->"),
)
)
== Functional Commitments
There are two algorithms:
- `commit(m, r) -> com` (_r_ is chosen at random)
- `verify(m, com, r) -> accept/reject`
There are two informal properties:
- *binding*: one cannot produce a `com` and two valid openings for `com`
- *hiding*: `com` reveals nothing about the committed data
Here is a standard hash-based construction.
Given a fixed hash function $H: M times R -> T$, the two algorithms become:
- `commit(m, r): com := H(m, r)`
- `verify(m, com, r): accept if com = H(m, r)`
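A toy sketch of this construction in code (an illustration we add here, not from the original notes; a real scheme must sample _r_ with enough entropy for hiding):

```python
import hashlib, os

def commit(m: bytes) -> tuple[bytes, bytes]:
    r = os.urandom(32)                    # blinding randomness
    com = hashlib.sha256(m + r).digest()  # com := H(m, r)
    return com, r

def verify(m: bytes, com: bytes, r: bytes) -> bool:
    # accept iff com = H(m, r)
    return hashlib.sha256(m + r).digest() == com
```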
Then we can construct a functional commitment scheme.
=== Describing commitment to a function
Given some family of functions: $F = {f: X -> Y}$
The committer acts as a prover. The prover chooses some randomness _r_ and commits to a description of a function $f$ together with _r_ towards a verifier. The function can be described as a circuit, as binary code, etc. The verifier then sends $x in X$, and the prover responds with $y in Y$ alongside a proof $pi$ showing that $f(x) = y$ and $f in F$.
#pagebreak()
We can describe a commitment to a function family $F$ using the following procedure (syntax):
- $"setup"(1^lambda) -> "gp"$ - outputs global public parameteres _gp_.
- $"commit"("gp", f, r) -> "com"_f$ - produces a commitment to $f in F$ with $r in R$. It involves a *binding* (and optionall *hiding*, for ZK) committment scheme for $F$.
- eval(Prover P, verifier V) - an evaluation protocol between a prover and a verifier where for a given $"com"_f$ and $x in X, y in Y$:
- $P("gp", f, x, y, r) ->$ short proof $pi$
- $V("gp", "com"_f, x, y, pi) ->$ accept/reject.
This evaluation protocol is a SNARK itself for the *relation*: \ $f(x) = y, f in F, "commit"("gp", f, r) = "com"_f$
Here the public statement is $("com"_f, x, y)$, known to the verifier. The prover proves that it knows a description of $f$ (the witness) and $r$ s.t. the *relation* holds.
== Commitment schemes
- *Polynomial*: a commitment to a #underline("univariate") $f(X) in FF^(≤d)_(p)[X]$. The family of functions is the set of all univariate polynomials of degree at most _d_.
- *Multilinear*: a commitment to a #underline("multilinear") $f in FF^(≤1)_(p)[X_1, ..., X_k]$. We commit to a polynomial in several variables $X_1, ..., X_k$, where the degree in each variable is at most 1, e.g. $f(x_1, ..., x_k) = x_1x_2 + x_1x_4x_5 + x_7$.
- *Vector (e.g. Merkle trees)*: a commitment to $accent(u, arrow) = (u_1, ..., u_d) in F^d_p$. Later we would like to "open" any particular cell of the vector, i.e. show $f_(accent(u, arrow))(i) = u_i$. We can reason about this as committing to a function identified by the vector: given an index _i_, we prove that it evaluates to the cell $u_i$. Merkle trees are the standard implementation of a vector commitment.
- *Inner product* (aka inner product arguments, IPA): a commitment to $accent(u, arrow) in F^d_p$. It commits to the function $f_(accent(u, arrow))(accent(v, arrow)) = (accent(u, arrow), accent(v, arrow))$ (the inner product of _u_ and _v_). We can later prove that, for some vector _v_, the function identified by _u_ evaluates to the expected inner-product value.
== Polynomial Commitments
Suppose a prover commits to a polynomial $f(X) in FF^(≤d)_(p)[X]$.
Then the evaluation scheme *eval* works as follows:
For public $u, v in FF_p$, the prover can convince the verifier that the committed polynomial satisfies
#block(stroke: 1pt, inset: 8pt, [
$f(u) = v$ and $"deg"(f) ≤ d$
])
Note that the verifier knows $(d, "com"_f, u, v)$. To make this proof a SNARK, the proof size and the verifier time should be $O_(lambda)(log d)$.
Also note that the trivial commitment scheme below is *not* a succinct polynomial commitment:
- _commit_$(f = sum^d_(i=0)a_i X^i, r)$: outputs $"com"_f <- H((a_0, ..., a_d), r)$. We simply output a commitment to all coefficients of a polynomial (just a hash of them).
- _eval_: prover sends $pi = ((a_0, ..., a_d), r)$ to verifier; and verifier accepts if $f(u) = v$ and $ H((a_0, ..., a_d), r) = "com"_f$
*The problem* with this commitment scheme is that the proof $pi$ is not succinct. Specifically, the proof size and verification time are #underline("linear") in _d_ (but should be $O(log d)$).
#linebreak()
Now let's look at how polynomial commitments are used, starting with an interesting observation.
For a non-zero $f in FF^(≤d)_p[X]$ and for $r <- FF_p$:
#stroke-block([
(\*) $P[f(r) = 0] ≤ d "/" p$
])
So, the probability that a randomly sampled _r_ from the finite field $FF_p$ is a root of the polynomial is at most the degree (the number of roots) divided by the number of values in the field.
Therefore, for $r <- FF_p$: if $f(r) = 0$ then $f$ is identically zero with high probability.
Another useful observation is:
#stroke-block([
  *SZDL lemma*: (\*) also holds for #underline("multivariate") polynomials (where _d_ is the total degree of _f_)
])
*Proof: TODO*
Based on the observation above, we can test whether two polynomials are identical.
Suppose p $tilde.equiv 2^256$ and $d lt.eq 2^40$ so that $d/p$ is negligible.
Consequently, let $f, g in FF^(lt.eq d)_p[X]$. \
Then for $r <- FF_p$, if $f(r) = g(r)$ then $f = g$ with high probability. This holds because
$f(r) = g(r) => f(r) - g(r) = 0 => f - g = 0 => f = g$ (with high probability). This gives a simple equality test protocol.
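A minimal sketch of this probabilistic identity test (illustrative only; a real system never holds the coefficients of the committed polynomials, it runs the eval protocol instead):

```python
import random

p = 2**61 - 1  # a prime chosen here only for illustration

def eval_poly(coeffs, r):
    # Horner evaluation of sum(coeffs[i] * r^i) mod p
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * r + c) % p
    return acc

def probably_equal(f, g):
    # Agreement at a random point implies f = g except with
    # probability at most max(deg f, deg g) / p.
    r = random.randrange(p)
    return eval_poly(f, r) == eval_poly(g, r)

print(probably_equal([1, 2, 3], [1, 2, 3]))  # True
print(probably_equal([1, 2, 3], [1, 2, 4]))  # False with high probability
```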
Now let's look at the protocol of the two committed polynomials.
#align(center,
diagram(
spacing: 2em,
node-stroke: 0.5pt,
node((0, 0), [$f, g in FF^(lt.eq d)_p[X]$], stroke: 0pt),
edge("->", [commitment]),
node((4, 0), [$"com"_f, "com"_g$], stroke: 0pt),
node((0, 1), [$y <- f(r) \ y' <- g(r)$], stroke: 0pt),
edge("<-", [$r$]),
node((4, 1), [$r <- R$], stroke: 0pt),
node((0, 2), [$(y, pi_f), (y', pi_g)$], stroke: 0pt),
edge((0, 2.5), (4, 2.5), "->", [$(y, pi_f), (y', pi_g)$]),
node((4, 2), [Accept if \ $pi_f, pi_g$ are valid \ and $y = y'$],
stroke: 0pt),
node((0, 3), [Prover], stroke: 0pt),
node((4, 3), [Verifier], stroke: 0pt),
node(enclose: ((0, 0), (0, 3))),
node(enclose: ((4, 0), (4, 3)))
)
)
Here $pi_f$ and $pi_g$ are the proofs that $y = f(r)$ and $y' = g(r)$ respectively.
#pagebreak()
== Fiat-Shamir Transform
This allows us to make a protocol non-interactive. However, it isn't secure for every protocol.
We start from a cryptographic hash function $H: M -> R$. The prover then uses
_H_ to generate the verifier's random bits on its own.
The protocol becomes as follows:
- Let $x = ("com"_f, "com"_g)$ and $w = (f, g)$
- The prover computes _r_, such that $r <- H(x)$
- The prover then computes $y <- f(r), y' <- g(r)$ and generates $pi_f, pi_g$
- The prover sends $y, y', pi_f, pi_g$ to verifier.
- The verifier can now also compute $r <- H(x)$ from $("com"_f, "com"_g)$ and verify the proof.
To establish knowledge soundness, one proves a theorem that the protocol above is a SNARK if
1. _d_ / _p_ is negligible (where $f, g in FF^(lt.eq d)_(p)[X]$)
2. _H_ is modelled as a random oracle.
In practice, _H_ is instantiated with SHA256.
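As a rough sketch of the transform for this equality-test protocol (the encoding and names are our assumptions; a production transform must bind the entire transcript so far):

```python
import hashlib

p = 2**61 - 1  # illustrative prime, matching the sketch above

def fiat_shamir_challenge(com_f: bytes, com_g: bytes) -> int:
    # r <- H(x) with x = (com_f, com_g); prover and verifier both
    # recompute r locally, so no interaction is needed.
    h = hashlib.sha256(com_f + com_g).digest()
    return int.from_bytes(h, "big") % p
```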
= Interactive Oracle Proofs ($F$-IOP)
*Functional commitment schemes* let us commit to a function, whereas *interactive oracle proofs* let us boost the commitment into a SNARK for general circuits.
As an example, we can take a polynomial scheme for $FF_p^(lt.eq d)[X]$ and, using a Poly-IOP, boost it into a SNARK for any circuit _C_ where $|C| < d$.
Let's define what _F_-IOP is.
Let $C(x, w)$ be some arithmetic circuit and let $x in FF^n_p$. \
_F_-IOP is a proof system that proves $exists w: C(x, w) = 0$ as follows
#stroke-block([
  Setup( _C_ ) $->$ public params _pp_ and _vp_. The public params for the verifier (_vp_) contain a set of functions that will be replaced with function commitments using a functional commitment scheme.
You can think of them as _oracles for functions in F_
])
The oracles given to the verifier can be queried by it at any time. Remember that in a real SNARK, the oracles are commitments to functions.
From the prover's side, the interaction looks as follows:
#align(center,
diagram(
spacing: 10em,
node-stroke: 0.5pt,
node(enclose: ((0,0), (0,1)), [*Prover* \ $P("pp", x, w)$]),
node(enclose: ((1,0), (1,1)), [*Verifier* \ $V("vp", x)$ \ $r_1 <- FF_p$ \ till $r_(t-1) <- FF_p$]),
edge((0, 0), (1, 0), "->", [oracle $f_1 in F$]),
edge((0, 0.2), (1, 0.2), "<-", [$r_1$]),
edge((0, 0.4), (1, 0.4), "->", [oracle $f_2 in F$]),
edge((0, 0.6), (1, 0.6), "..", []),
edge((0, 0.8), (1, 0.8), "<-", [$r_(t-1)$]),
edge((0, 1), (1, 1), "->", [oracle $f_t in F$]),
)
)
The verifier then performs verification by computing $bold("verify"^(f_(-s), ..., f_t)(x, r_1, ..., r_(t-1)))$. (The offset index $-s$ accounts for the oracle functions from the setup, which come before the $f_1, ..., f_t$ sent by the prover.) \
Verification takes the statement _x_ and all the randomness the verifier has sent to the prover, and it is given access to the oracle functions that
the verifier holds as well as all the oracle functions the prover sent as part of the proof. The verifier can evaluate any of these functions at any point and decides whether to accept the proof.
== The IOP flavour
*Poly-IOP*
- Sonic
- Marlin
- Plonk
- etc
*Multilinear-IOP*
- Spartan
- Clover
- Hyperplonk
- etc
*Vector-IOP*
- STARK
- Brakedown
- Orion
- etc
(*Poly-IOP* + Poly-Commit) or (*Multilinear-IOP* + Multilinear-Commit) or (*Vector-IOP* + Merkle), plus the *Fiat-Shamir transform*, yields a *SNARK*.
= Reading
- #link("https://a16zcrypto.com/posts/article/zero-knowledge-canon", "a16z reading list") |
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/ops-invalid-15.typ | typst | Other | // Error: 3-19 cannot divide relative length by ratio
#((10% + 1pt) / 5%)
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compute/foundations-20.typ | typst | Other | // Error: 23-30 cannot access file system from here
#show raw: it => eval(it.text)
```
image("test/assets/files/tiger.jpg")
```
|
https://github.com/Merovius/go-talks | https://raw.githubusercontent.com/Merovius/go-talks/master/2024-07-09_gophercon/slides.typ | typst | #import "@preview/polylux:0.3.1": *
#import "./theme.typ": *
// Note: This is the first time I've used Typst to create a presentation and I
// was under big time pressure, so all of this is full of hacks and not very
// nice - Don't judge me :)
#show: simple-theme.with(
aspect-ratio: "16-9",
background: rgb("#ffffdd"),
)
#set text(font: "Go", size: 20pt)
#show raw: set text(font: "Go Mono")
#let filled-slide(body) = {
set page(margin: 0em)
body
set page(margin: 2em)
}
#title-slide[
#side-by-side(columns: (1fr, 3fr, 1fr))[
#image("gc_logo.png", height: 5cm, width: 5cm)
][
= Advanced generics patterns
<NAME>
https://blog.merovius.de/
#link("https://chaos.social/@Merovius")[\@Merovius\@chaos.social]
2024-07-09
][
#box(clip: true, radius: 5cm, width: 5cm, height: 5cm, image("avatar.jpg", height: 5cm))
]
]
#filled-slide[
#image("adoption.jpg", fit: "cover")
]
#filled-slide[
#image("biggest_challenge.png", width: 100%)
]
#filled-slide[
#image("good_tools.jpg", fit: "cover")
]
#focus-slide(background: rgb("#007d9d"))[
= The Basics
]
#slide[
```go
type Slice[E any] []E
```
#pause
```go
func (s Slice[E]) Filter(keep func(E) bool) Slice[E]
```
]
#slide[
```go
type Slice[E any] []E
```
```go
func (s Slice[E]) Filter(keep func(E) bool) Slice[E] {
var out Slice[E]
	for _, v := range s {
if keep(v) { out = append(out, v) }
}
return out
}
```
#pause
```go
func Map[A, B any](s Slice[A], f func(A) B) Slice[B]
```
]
#slide[
```go
type Slice[E any] []E
```
```go
func (s Slice[E]) Filter(keep func(E) bool) Slice[E] {
var out Slice[E]
for _, v := range s {
if keep(v) { out = append(out, v) }
}
return out
}
```
```go
func Map[A, B any](s Slice[A], f func(A) B) Slice[B] {
out := make(Slice[B], len(s))
for i, v := range s {
out[i] = f(v)
}
return out
}
```
]
#slide[
```go
func usage() {
primes := Slice[int]{2, 3, 5, 7, 11, 13}
```
#pause
```go
strings := Map(primes, strconv.Itoa)
```
#pause
```go
fmt.Printf("%#v", strings)
// Slice[string]{"2", "3", "5", "7", "11", "13"}
```
#pause
```go
// package reflect
// func TypeFor[T any]() Type
intType := reflect.TypeFor[int]()
}
```
]
#centered-slide[
#text(size: 25pt, weight: "bold")[A type parameter can be inferred if and only if it appears in an argument.]
#pause
#text(size: 25pt, weight: "bold")[Corollary: If you want a type parameter to be inferrable, make sure it appears as an argument.]
]
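// Illustrative addition (not a slide from the original talk): a minimal
// example of the inference rule above; the function names are made up.
#slide[
  ```go
  func First[E any](s []E) E { return s[0] }
  func Zero[T any]() T { var v T; return v }

  func usage() {
      _ = First([]int{1, 2, 3}) // E appears in an argument: inferred as int
      _ = Zero[int]()           // T appears in no argument: must be given explicitly
  }
  ```
]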
#focus-slide(background: rgb("#007d9d"))[
= Constraints
]
#slide[
```go
// StringifyAll converts the elements of a slice to strings and returns the
// resulting slice.
func StringifyAll[E any](s []E) []string
```
]
#slide[
```go
// StringifyAll converts the elements of a slice to strings and returns the
// resulting slice.
func StringifyAll[E any](s []E) []string {
out := make([]string, len(s))
for i, v := range s {
out[i] = ???
}
return out
}
```
]
#slide[
```go
// StringifyAll converts the elements of a slice to strings and returns the
// resulting slice.
func StringifyAll[E ~string|~[]byte](s []E) []string
```
]
#slide[
```go
// StringifyAll converts the elements of a slice to strings and returns the
// resulting slice.
func StringifyAll[E ~string|~[]byte](s []E) []string {
out := make([]string, len(s))
for i, v := range s {
out[i] = string(v)
}
return out
}
```
#pause
```go
func usage() {
type Path string
s := []Path{"/usr", "/bin", "/etc", "/home", "/usr"}
fmt.Printf("%#v", StringifyAll(s))
// []string{"/usr", "/bin", "/etc", "/home", "/usr"}
}
```
]
#slide[
```go
// StringifyAll converts the elements of a slice to strings and returns the
// resulting slice.
func StringifyAll[E Bytes](s []E) []string {
out := make([]string, len(s))
for i, v := range s {
out[i] = string(v)
}
return out
}
type Bytes interface {
~string | ~[]byte
}
```
]
#slide[
```go
// StringifyAll converts the elements of a slice to strings and returns the
// resulting slice.
func StringifyAll[E fmt.Stringer](s []E) []string
```
]
#slide[
```go
// StringifyAll converts the elements of a slice to strings and returns the
// resulting slice.
func StringifyAll[E fmt.Stringer](s []E) []string {
out := make([]string, len(s))
for i, v := range s {
out[i] = v.String()
}
return out
}
```
#pause
```go
func usage() {
durations := []time.Duration{time.Second, time.Minute, time.Hour}
fmt.Printf("%#v", StringifyAll(durations))
// []string{"1s", "1m0s", "1h0m0s"}
}
```
]
#slide[
```go
// StringifyAll converts the elements of a slice to strings and returns the
// resulting slice.
func StringifyAll[E any](s []E, stringify func(E) string) []string
```
]
#slide[
```go
// StringifyAll converts the elements of a slice to strings and returns the
// resulting slice.
func StringifyAll[E any](s []E, stringify func(E) string) []string {
out := make([]string, len(s))
for i, v := range s {
out[i] = stringify(v)
}
return out
}
```
#pause
```go
func usage() {
// time.Time.String has type func(time.Time) string
strings := StringifyAll(times, time.Time.String)
// strconv.Itoa has type func(int) string
strings = StringifyAll(ints, strconv.Itoa)
}
```
]
#slide[
```go
package slices
func Compact[E comparable](s []E) []E
func CompactFunc[E any](s []E, eq func(E, E) bool) []E
func Compare[E cmp.Ordered](s1, s2 S) int
func CompareFunc[E1, E2 any](s1 []E1, s2 []E2, cmp func(E1, E2) int) int
func Sort[E cmp.Ordered](x []E)
func SortFunc[E any](x []E, cmp func(a, b E) int)
// etc.
```
]
#slide[
```go
func Sort[E cmp.Ordered](x []E) {
SortFunc(x, cmp.Compare[E])
}
func SortFunc[E any](x []E, cmp func(a, b E) int) {
// sort in terms of cmp
}
```
]
#slide[
```go
// Heap implements a Min-Heap using a slice.
type Heap[E cmp.Ordered] []E
```
#pause
```go
func (h *Heap[E]) Push(v E) {
*h = append(*h, v)
// […]
if (*h)[i] < (*h)[j] {
// […]
}
}
```
]
#slide[
```go
// HeapFunc implements a Min-Heap using a slice and a custom comparison.
type HeapFunc[E any] struct {
Elements []E
Compare func(E, E) int
}
```
#pause
```go
func (h *HeapFunc[E]) Push(v E) {
h.Elements = append(h.Elements, v)
// […]
if h.Compare(h.Elements[i], h.Elements[j]) < 0 {
// […]
}
}
```
]
#focus-slide(background: rgb("#007d9d"))[
= Generic interfaces
]
#slide[
```go
type Comparer interface {
Compare(Comparer) int
}
```
#pause
```go
// Does not implement Comparer: Argument has type time.Time, not Comparer
func (t Time) Compare(u Time) int
```
]
#slide[
```go
type Comparer[T any] interface {
Compare(T) int
}
```
#pause
```go
// implements Comparer[Time]
func (t Time) Compare(u Time) int
```
#pause
```go
// E must have a method Compare(E) int
type HeapMethod[E Comparer[E]] []E
```
#pause
```go
func (h *HeapMethod[E]) Push(v E) {
*h = append(*h, v)
// […]
if (*h)[i].Compare((*h)[j]) < 0 {
// […]
}
}
```
]
#slide[
```go
func push[E any](s []E, cmp func(E, E) int, v E) []E {
// […]
if cmp(s[i], s[j]) < 0 {
// […]
}
}
```
#pause
```go
func (h *Heap[E]) Push(v E) {
*h = push(*h, cmp.Compare[E], v)
}
```
#pause
```go
func (h *HeapFunc[E]) Push(v E) {
h.Elements = push(h.Elements, h.Compare, v)
}
```
#pause
```go
func (h *HeapMethod[E]) Push(v E) {
*h = push(*h, E.Compare, v)
}
```
]
#focus-slide(background: rgb("#007d9d"))[
= Pointer constraints
]
#slide[
```go
type Message struct {
Price int // in cents
}
func (m *Message) UnmarshalJSON(b []byte) error {
// { "price": 0.20 }
var v struct {
Price json.Number `json:"price"`
}
err := json.Unmarshal(b, &v)
if err != nil {
return err
}
m.Price, err = parsePrice(string(v.Price))
return err
}
```
]
#slide[
```go
func Unmarshal[T json.Unmarshaler](b []byte) (T, error) {
var v T
err := v.UnmarshalJSON(b)
return v, err
}
```
#pause
```go
func usage() {
input := []byte(`{"price": 13.37}`)
// Message does not satisfy json.Unmarshaler
// (method UnmarshalJSON has pointer receiver)
m, err := Unmarshal[Message](input)
// …
}
```
]
#slide[
```go
func Unmarshal[T json.Unmarshaler](b []byte) (T, error) {
var v T
err := v.UnmarshalJSON(b)
return v, err
}
```
```go
func usage() {
input := []byte(`{"price": 13.37}`)
// panic: runtime error: invalid memory address or
// nil pointer dereference
m, err := Unmarshal[*Message](input)
// …
}
```
]
#slide[
```go
func Unmarshal[T any, PT json.Unmarshaler](b []byte) (T, error) {
var v T
err := v.UnmarshalJSON(b)
return v, err
}
```
]
#slide[
```go
func Unmarshal[T any, PT json.Unmarshaler](b []byte) (T, error) {
var v T
err := v.UnmarshalJSON(b) // v.UnmarshalJSON undefined
return v, err
}
```
]
#slide[
```go
func Unmarshal[T any, PT json.Unmarshaler](b []byte) (T, error) {
var v T
err := PT(&v).UnmarshalJSON(b) // cannot convert &v to type PT
return v, err
}
```
#pause
```go
type Unmarshaler[T any] interface{
*T
json.Unmarshaler
}
```
]
#slide[
```go
func Unmarshal[T any, PT Unmarshaler[T]](b []byte) (T, error) {
var v T
err := PT(&v).UnmarshalJSON(b)
return v, err
}
```
```go
type Unmarshaler[T any] interface{
*T
json.Unmarshaler
}
```
#pause
```go
func usage() {
input := []byte(`{"price": 13.37}`)
m, err := Unmarshal[Message, *Message](input)
// …
}
```
]
#slide[
```go
func Unmarshal[T any, PT Unmarshaler[T]](b []byte) (T, error) {
var v T
err := PT(&v).UnmarshalJSON(b)
return v, err
}
```
```go
type Unmarshaler[T any] interface{
*T
json.Unmarshaler
}
```
```go
func usage() {
input := []byte(`{"price": 13.37}`)
m, err := Unmarshal[Message](input)
// …
}
```
]
#slide[
```go
func Unmarshal[T any, PT Unmarshaler[T]](b []byte, p *T) error {
return PT(p).UnmarshalJSON(b)
}
type Unmarshaler[T any] interface{
*T
json.Unmarshaler
}
func usage() {
input := []byte(`{"price": 13.37}`)
var m Message
err := Unmarshal(input, &m)
// …
}
```
]
#slide[
```go
func Unmarshal[PT json.Unmarshaler](b []byte, p PT) error {
return p.UnmarshalJSON(b)
}
func usage() {
input := []byte(`{"price": 13.37}`)
var m Message
err := Unmarshal(input, &m)
// …
}
```
]
#slide[
```go
func Unmarshal(b []byte, p json.Unmarshaler) error {
return p.UnmarshalJSON(b)
}
func usage() {
input := []byte(`{"price": 13.37}`)
var m Message
err := Unmarshal(input, &m)
// …
}
```
]
#focus-slide[
= Specialization
]
#slide[
```go
// UnmarshalText implements the encoding.TextUnmarshaler interface. The time
// must be in the RFC 3339 format.
func (t *Time) UnmarshalText(b []byte) error {
var err error
*t, err = Parse(RFC3339, string(b))
return err
}
// Parse parses a formatted string and returns the time value it represents.
func Parse(layout, value string) (Time, error) {
// parsing code
}
```
#pause
```go
func parse[S string|[]byte](layout string, value S) (Time, error) {
// parsing code
}
```
]
#slide[
```go
// UnmarshalText implements the encoding.TextUnmarshaler interface. The time
// must be in the RFC 3339 format.
func (t *Time) UnmarshalText(b []byte) error {
var err error
*t, err = parse(RFC3339, b)
return err
}
// Parse parses a formatted string and returns the time value it represents.
func Parse(layout, value string) (Time, error) {
return parse(layout, value)
}
```
```go
func parse[S string|[]byte](layout string, value S) (Time, error) {
// parsing code
}
```
]
#slide[
```go
  // error: cannot use value (variable of type S constrained by string|[]byte)
// as string value in argument to strings.CutPrefix
rest, ok := strings.CutPrefix(value, month)
if !ok {
return fmt.Errorf("can not parse %q as month name", value)
}
```
]
#slide[
```go
func cutPrefix[S string|[]byte](s, prefix S) (after S, found bool) {
for i := 0; i < len(prefix); i++ {
if i >= len(s) || s[i] != prefix[i] {
return s, false
}
}
return s[len(prefix):], true
}
```
]
#slide[
```go
func cutPrefix[S string|[]byte](s, prefix S) (after S, found bool) {
switch s := any(s).(type) {
case string:
		s, found = strings.CutPrefix(s, string(prefix)) // convert S to string
return S(s), found
case []byte:
		s, found = bytes.CutPrefix(s, []byte(prefix)) // convert S to []byte
return S(s), found
default:
panic("unreachable")
}
}
```
]
#focus-slide(background: rgb("#007d9d"))[
= Phantom types
]
#slide[
```go
type X[T any] string
```
]
#slide[
```go
func Parse[T any](r io.Reader) (T, error)
```
]
#slide[
```go
func Parse[T any](r io.Reader) (T, error)
type buffer struct { /* … */ }
```
]
#slide[
```go
func Parse[T any](r io.Reader) (T, error)
type buffer struct { /* … */ }
var buffers = sync.Pool{
New: func() any { return new(buffer) },
}
```
]
#slide[
```go
func Parse[T any](r io.Reader) (T, error) {
b := buffers.Get().(*buffer)
b.Reset(r)
defer buffers.Put(b)
// use the buffer
}
type buffer struct { /* … */ }
var buffers = sync.Pool{
New: func() any { return new(buffer) },
}
```
]
#slide[
```go
func Parse[T any](r io.Reader) (T, error) {
b := buffers.Get().(*buffer[T]) // panics
b.Reset(r)
defer buffers.Put(b)
// use the buffer
}
type buffer[T any] struct { /* … */ }
var buffers = sync.Pool{
// Can't set New: No known type argument
}
```
]
#slide[
```go
type key[T any] struct{}
```
#pause
```go
func usage() {
var (
kInt any = key[int]{}
kString any = key[string]{}
)
fmt.Println(kInt == kInt) // true
fmt.Println(kString == kString) // false
}
```
]
#slide[
```go
type key[T any] struct{}
```
#pause
```go
var bufferPools sync.Map // maps key[T]{} -> *sync.Pool
```
#pause
```go
func poolOf[T any]() *sync.Pool {
k := key[T]{}
```
]
#slide[
```go
type key[T any] struct{}
```
```go
var bufferPools sync.Map // maps key[T]{} -> *sync.Pool
```
```go
func poolOf[T any]() *sync.Pool {
k := key[T]{}
if p, ok := bufferPools.Load(k); ok {
return p.(*sync.Pool)
}
```
]
#slide[
```go
type key[T any] struct{}
```
```go
var bufferPools sync.Map // maps key[T]{} -> *sync.Pool
```
```go
func poolOf[T any]() *sync.Pool {
k := key[T]{}
if p, ok := bufferPools.Load(k); ok {
return p.(*sync.Pool)
}
pi, _ := bufferPools.LoadOrStore(k, &sync.Pool{
New: func() any { return new(T) },
})
return pi.(*sync.Pool)
}
```
]
#slide[
```go
func Parse[T any](r io.Reader) (T, error)
```
]
#slide[
```go
func Parse[T any](r io.Reader) (T, error) {
pool := poolOf[T]()
```
]
#slide[
```go
func Parse[T any](r io.Reader) (T, error) {
pool := poolOf[T]()
b := pool.Get().(*buffer[T])
b.Reset(r)
defer pool.Put(b)
// use the buffer
}
```
]
#focus-slide(background: rgb("#007d9d"))[
= Overengineering
]
#slide[
```go
type Client struct { /* … */ }
func (c *Client) CallFoo(req *FooRequest) (*FooResponse, error)
func (c *Client) CallBar(req *BarRequest) (*BarResponse, error)
func (c *Client) CallBaz(req *BazRequest) (*BazResponse, error)
```
]
#slide[
```go
type Client struct { /* … */ }
func Call[Req, Resp any](c *Client, name string, r Req) (Resp, error)
```
#pause
```go
const (
Foo = "Foo"
Bar = "Bar"
Baz = "Baz"
)
```
#pause
```go
func usage() {
resp, err := rpc.Call[*rpc.FooRequest, *rpc.FooResponse](c, rpc.Foo, req)
// …
```
#pause
```go
resp, err := rpc.Call[*rpc.FooRequest, *rpc.BarResponse](c, rpc.Baz, req)
}
```
]
#slide[
```go
type Endpoint[Req, Resp any] string
```
#pause
```go
const (
Foo Endpoint[*FooRequest, *FooResponse] = "Foo"
Bar Endpoint[*BarRequest, *BarResponse] = "Bar"
Baz Endpoint[*BazRequest, *BazResponse] = "Baz"
)
```
#pause
```go
func Call[Req, Resp any](c *Client, e Endpoint[Req, Resp], r Req) (Resp, error)
```
#pause
```go
func usage() {
r1, err := rpc.Call(c, rpc.Foo, req) // r1 is inferred to be *FooResponse
```
#pause
```go
// type *rpc.FooRequest of req does not match inferred type *rpc.BazRequest
r2, err := rpc.Call(c, rpc.Baz, req)
```
#pause
```go
r3, err := rpc.Call[int, string](c, "b0rk", 42) // compiles, but broken
}
```
]
#slide[
```go
type Endpoint[Req, Resp any] struct{ name string }
```
#pause
```go
var (
Foo = Endpoint[*FooRequest, *FooResponse]{"Foo"}
Bar = Endpoint[*BarRequest, *BarResponse]{"Bar"}
Baz = Endpoint[*BazRequest, *BazResponse]{"Baz"}
)
```
#pause
```go
func Call[Req, Resp any](c *Client, e Endpoint[Req, Resp], r Req) (Resp, error)
```
#pause
```go
func usage() {
// cannot use "b0rk" (untyped string constant) as Endpoint[int, string] value
r1, err := rpc.Call[int, string](c, "b0rk", 42)
```
#pause
```go
e := rpc.Endpoint[int, string](rpc.Foo)
r2, err := rpc.Call(c, e, 42)
}
```
]
#slide[
```go
type Endpoint[Req, Resp any] struct{ _ [0]Req; _ [0]Resp; name string }
```
#pause
```go
var (
Foo = Endpoint[*FooRequest, *FooResponse]{name: "Foo"}
Bar = Endpoint[*BarRequest, *BarResponse]{name: "Bar"}
Baz = Endpoint[*BazRequest, *BazResponse]{name: "Baz"}
)
```
#pause
```go
func Call[Req, Resp any](c *Client, e Endpoint[Req, Resp], r Req) (Resp, error)
```
#pause
```go
func usage() {
// cannot convert rpc.Bar to rpc.Endpoint[int, string]
e := rpc.Endpoint[int, string](rpc.Bar)
resp, err := rpc.Call(c, e, req)
}
```
]
#focus-slide(background: rgb("#007d9d"))[
= Go forth and experiment
]
|
|
https://github.com/Tiggax/zakljucna_naloga | https://raw.githubusercontent.com/Tiggax/zakljucna_naloga/main/src/figures/pid.typ | typst | #import "@preview/fletcher:0.5.1" as fletcher: diagram, node, edge
#import "@preview/cetz:0.2.2": canvas, plot, draw, vector
#let pid_graph = canvas(
length: .9cm,
{
import draw: *
let sig(pos, ..args) = group(
..args,
{
circle((rel: (.1,-.03), to: pos), radius: .8, fill: black)
circle(
pos,
radius: .8,
fill: yellow,
stroke: black,
name: "center"
)
content("center", text(size: 3em, $Sigma$))
}
)
let process(pos, name, eq, color) = group(
name: name,
{
let a = (pos.at(0) -1.5, pos.at(1) - .6)
let shadow = (rel: (.1, -.03), to: a)
rect(shadow, (rel: (3,1.2), to: shadow), fill: black)
rect(a, (rel: (3,1.2), to: a), fill: color, name: "box")
if eq == none {
content("box", padding: .1, text(size: 1.5em, name))
} else {
content("box.west", anchor: "west", padding: .1, text(size: 2em, name))
content("box.east", anchor: "east", padding: .1,text(size: 1.4em, eq))
}
}
)
let const(pos, name) = group(
name: name,
{
let shadow = (pos.at(0) + .1, pos.at(1) - .03)
let tri(pos, ..args) = line((to: pos, rel: (-.5,0)), (rel: (0,.5), to: ()), (rel: (1.5,-.5), to: ()),(rel: (-1.5,-.5), to: ()), (rel: (0,.5), to: ()), ..args)
tri(shadow, fill: black)
tri(pos, fill: yellow)
content(pos, text(size: 0.8em, $K_#name.at(1)$))
}
)
sig((1.4,0), name: "first")
line((-1,0), "first", mark: (end: ">"), name: "sp")
content("sp.end", anchor: "north-east", padding: .1, $+$)
content("sp.mid", anchor: "south", padding: .2, $r(t)$)
rect((2.5,-3),(11.5,3), stroke: (dash: "dashed"), name: "group")
content("group.north", anchor: "south", padding: .1)[PID control]
const((4.5,2), "kP",)
process((7.5,2), "P", $e(t)$, color.mix(green, yellow) )
const((4.5,0), "kI",)
process((7.5,0), "I", $integral^t_0 e(t) d t$, color.mix(blue, white, green, white))
const((4.5,-2), "kD",)
process((7.5,-2), "D", $( d e(t))/(d t)$, color.mix(orange))
content((3.5,0),"", name: "err")
line("first","err", name: "err")
content((name: "err", anchor: 60%), anchor: "south", padding: .2, $e(t)$)
line("err.end",((), "|-", "P"), "kP", mark: (end: ">"), name: "to_p")
line("kP", "P.west", mark: (end: ">"))
line("err.end", "kI", mark: (end: ">"), name: "to_i")
line("kI", "I.west", mark: (end: ">"))
line("err.end",((), "|-", "D.west"), "kD", mark: (end: ">"), name: "to_d")
line("kD", "D.west", mark: (end: ">"))
sig((10.5,0), name: "second")
line("P.east", ((), "-|", "second"), "second", mark: (end: ">"), name: "p_s")
content("p_s.end", anchor: "south-west", padding: .1)[+]
line("I.east", "second", mark: (end: ">"), name: "i_s")
content("i_s.end", anchor: "south-east", padding: .1)[+]
line("D.east", ((), "-|", "second"), "second", mark: (end: ">"), name: "d_s")
content("d_s.end", anchor: "north-west", padding: .1)[+]
process((14.3,0), "process", none, white)
line("second", "process", mark: (end: ">"), name: "to_p")
content("to_p.mid", anchor: "south", padding: .2, $u(t)$)
line("process", (rel: (3,0), to:"process"), mark: (end: ">"), name: "out")
content("out.mid", anchor: "south", padding: .2, $y(t)$)
//content("out.end", anchor: "south-west", padding: .1)[Output]
line("out.mid", ("out.mid", "|-", (0,-4)), ((), "-|", "first"), "first", mark: (end: ">"), name: "feedback")
content("feedback.mid", anchor: "south", padding: .1)[feedback]
content("feedback.end", anchor: "north-west", padding:.1, text(size: 2em)[-])
}
)
#let pid_fn = canvas(
{
import draw: *
let sp(x) = if x > 1 {1} else {0}
let i = 0
let p_x = 0
let p_y = 0
let p_err = 0
let out = ((p_x, p_y),)
let pid = ((0,0),)
let pd = ((0,0),)
let id = ((0,0),)
let dd = ((0,0),)
let kp = .5
let ki = .01
let kd = .01
    // simulate a discrete PID loop against the unit-step setpoint sp(x)
    for x in range(1,100).map(x => x*0.1) {
      let dx = (x - p_x) // time step since the previous sample
      let err = sp(x) - p_y // error: setpoint minus process value
      let p = kp * err // proportional term
      i += ki * err * dx // integral term, accumulated over time
      let d = kd * (err - p_err) / dx // derivative term
      let mv = p + i + d // manipulated variable (the PID output)
      p_y = p_y + mv // apply the output to the process value
      p_err = err
      p_x = x
      out.push((x,p_y))
      pd.push((x, p))
      id.push((x, i))
      dd.push((x, d))
      pid.push((x, mv))
    }
plot.plot(
size: (16,8),
legend: "legend.inner-north-east",
x-label: none,
y-label: none,
x-tick-step: none,
y-tick-step: none,
y-min: -.1,
y-max: 1.2,
x-domain: (0,5),
{
plot.add(
domain: (0,10),
label: "SP",
{
x => sp(x)
}
)
plot.add(
label: "y",
{
out
})
plot.add(
label: "PID",
style: (stroke: yellow),
{
pid
}
)
plot.add(
label: "P",
style: (stroke: green),
{
pd
}
)
plot.add(
label: "I",
style: (stroke: aqua),
{
id
}
)
plot.add(
label: "D",
style: (stroke: orange),
{
dd
}
)
}
)
}
)
#pid_fn |
|
https://github.com/goshakowska/Typstdiff | https://raw.githubusercontent.com/goshakowska/Typstdiff/main/tests/test_working_types/ordered_list/ordered_list_inserted.typ | typst | + The first
+ The second
+ The third |
|
https://github.com/typst-doc-cn/tutorial | https://raw.githubusercontent.com/typst-doc-cn/tutorial/main/src/basic/writing-scripting.typ | typst | Apache License 2.0 | #import "mod.typ": *
#show: book.page.with(title: [First Look at Scripting Mode])
From now on, the examples will gradually start to include scripts. Don't worry: they only involve simple uses of scripting.
== Content Blocks <grammar-content-block>
Sometimes a document contains long consecutive passages of marked-up text.
#code(````typ
*从前有座山,山会讲故事,故事讲的是*
*从前有座山,山会讲故事,故事讲的是*
*...*
````)
This works, but it is a bit tedious. The following code is tidier, since it does not have to put a strong mark on every paragraph:
#code(````typ
#strong[
从前有座山,山会讲故事,故事讲的是
从前有座山,山会讲故事,故事讲的是
...
]
````)
In this example, the syntax of ```typ #strong[]``` consists of three parts:
+ The `#` puts the interpreter into #term("code mode", postfix: ".")
+ #typst-func("strong") is the function that applies #term("strong semantics").
+ The `[]` marks a piece of content as a #term("content block") for `strong` to use.
This subsection first explains the third point, the #term("content block") syntax.
The content of a #term("content block") is wrapped in square brackets, as shown below:
#code(```typ
#[一段文本]#[两段文本] #[三段文本]
```)
#term("content block")不会影响包裹的内容——Typst仅仅是解析内部代码作为#term("content block")的内容。#term("content block")也*几乎不影响内容的书写*。
#pro-tip[
  The only effect is that you need to escape the right bracket inside a content block.
#code(```typ
#[x\]y]
```)
]
#term("content block")的唯一作用是“界定内容”。它收集一个或多个#term("content"),以待后续使用。有了#term("content block"),你可以*准确指定*一段内容,并用#term("scripting")加工。
#import "../figures.typ": figure-content-decoration
#align(center + horizon, figure-content-decoration())
#v(1em)
所谓#term("scripting"),就是对原始内容增删查改,进而形成文档的过程描述。因为有了#term("scripting"),Typst才能有远超Markdown的排版能力,在许多情况下不逊于LaTeX排版,将来有望全面超越LaTeX排版。
在接下来两小节你将看到Typst作为一门*编程语言*的核心设计,也是进行更高级排版必须要掌握的知识点。由于我们的目标首先仅是*编写一篇基本文档*,我们将会尽可能减少引入更多知识点,仅仅介绍其中最简单常用的语法。
== Interpretation Modes <grammar-enter-script>
The first part of the ```typ #strong[]``` syntax mentioned above: the `#` puts the interpreter into #term("code mode").
#code(```typ
#[一段文本]
```)
这个#mark("#")不属于内容块的语法一部分,而是Typst中关于「脚本模式」的定界符。
这涉及到Typst的编译原理。Typst程序包含一个解释器,用其从头到尾查看并#term("interpret")你的文档。
其特殊之处在于,解释器还具备多种#term("interpreting mode")。借鉴了LaTeX的文本和数学模式,在不同的#term("interpreting mode")下,解释器以不同的语法规则解释你的文档。Typst中,标记模式的语法更适合你组织文本,代码模式更适合你书写脚本,而数学模式则最适合输入复杂的公式。
// todo 三种解释模式的visualization
=== 标记模式
当解释器从头开始解释文档时,其处于#term("markup mode"),在这个模式下,你可以使用各种记号创建标题、列表、段落......在这个模式下,Typst语法几乎就和Markdown一样。
当其处于「标记模式」,且遇到一个「井号」时,Typst会立即将后续的一段代码认作「脚本」并执行,即它进入了「脚本模式」(scripting mode)。
=== 脚本模式
在「脚本模式」下,你可以转而*主要*计算各种内容。例如,你可以计算一个算式的「内容」:
#code(```typ
#(1024*1024*8*7*17+1)是一个常见素数。
```)
While in scripting mode, the interpreter falls back to markup mode at the *appropriate* moment. As shown below, after parsing the number ```typc 2``` in scripting mode, the interpreter returns to markup mode:
#code(```typ
#2是一个常见素数。
```)
Typst always prefers to exit scripting mode as early as possible.
#pro-tip[
  Concretely, you can almost assume that the interpreter interprets at most one *complete expression* and then *immediately* exits scripting mode.
]
=== Content Blocks from Another Angle
The content of a content block follows markup syntax. This means that while in scripting mode, you can temporarily return to markup mode via the content-block syntax, nesting complex logic:
#code(```typ
#([== 脚本模式下创建一个标题] + strong[后接一段文本])
```)
Alternating back and forth like this, Typst is convenient both for document authoring and for script writing.
#pro-tip[
  Couldn't #term("markup mode") simply treat a bracket-wrapped span as a content block's content, the way it treats asterisks?
  It could, but that causes problems. For example, people often use square brackets in ordinary prose:
#code(```typ
区间[1, ∞)上几乎所有有理数都可以表示为$x^x$,其中$x$是无理数。
```)
  So it seems more reasonable for markup mode to parse square brackets as plain text by default.
]
== Math Mode
The Typst interpreter has three modes in total, two of which we have already introduced. The remaining one is called #term("math mode"). Many people consider the elegant #term("math mode") one of Typst's core competitive advantages over LaTeX.
Typst's math mode looks like this: <grammar-inline-math> ~ <grammar-display-math>
#code(````typ
行内数学公式:$sum_x$
行间数学公式:$ sum_x $
````)
Since there is a lot to note about #term("math mode"), and it is a fairly independent mode, this book devotes a separate, optional reference chapter to it. Readers who need to insert formulas into their documents should see #(refs.ref-math-mode)[Reference: Math Mode].
== Functions and Function Calls <grammar-func-call>
Only the most basic introduction is given here. #(refs.scripting-base)[Basic Literals, Variables, and Simple Functions] and #(refs.scripting-complex)[Composite Literals, Control Flow, and Complex Functions] introduce functions and function calls in more detail.
In Typst, functions and function calls likewise belong to #term("code mode"), so before calling a function you need to use #mark("#") to put Typst into #term("code mode").
As in most languages, when calling a Typst function you can pass it comma-separated #term("value")s; these #term("value")s are called arguments.
#code(```typ
四的三次方为#calc.pow(4, 3)。
```)
这里#typst-func("calc.pow")是内置的幂计算函数,其接受两个参数:
+ 一为```typc 4```,为幂的底
+ 一为```typc 3```,为幂的指数。
你可以使用函数修饰#term("content block")。例如,你可以使用着重函数 #typst-func("strong") 标记一整段内容:
#code(```typ
#strong([
And every _fair from fair_ sometime declines,
By chance, or nature's changing course untrimm'd;
But thy _eternal summer_ shall not fade,
Nor lose possession of that fair thou ow'st;
])
```)
The example is long, but look closely: it is simple. First, the square brackets wrap a large piece of content; as we learned before, that is a #term("content block"). The #term("content block") sits in the argument list, so it is an argument to #typst-func("strong"). #typst-func("strong") is no different from the power function; it simply takes a #term("content block") as its argument.
Similarly, #typst-func("emph") marks a whole passage with emphasis semantics:
#code(```typ
#emph([
And every *fair from fair* sometime declines,
......
])
```)
Typst emphasizes #term("consistency"): whether you go through markup or through functions, the final result is guaranteed to be the same. You may combine the two approaches freely.
== Sugar for Content Arguments <grammar-content-param>
In many languages, all function arguments must be wrapped inside the parentheses of the call's argument list.
#code(```typ
着重语义:这里有一个#strong([重点!])
```)
But in Typst, when a content block is passed as an argument, it may immediately follow the closing parenthesis of the argument list.
#code(```typ
着重语义:这里有一个#strong()[重点!]
```)
In particular, if the argument list is empty, Typst allows you to omit the now-redundant list entirely.
#code(```typ
着重语义:这里有一个#strong[重点!]
```)
So the example above can also be written as:
#code(```typ
#strong[
And every _fair from fair_ sometime declines,
]
#emph[
And every *fair from fair* sometime declines,
]
```)
#pro-tip[
  A function call can be followed by more than one content argument. The following example is followed by two:
#code(```typ
#let exercise(question, answer) = strong(question) + parbreak() + answer
#exercise[
Question: _turing complete_?
][
Answer: Yes, Typst is.
]
```)
]
== Text Decoration
Now you can use more text functions to enrich your document.
=== Background Highlighting <grammar-highlight>
You can use `highlight` to highlight a piece of content:
#code(```typ
#highlight[高亮一段内容]
```)
You can pass the `fill` parameter to change the highlight color.
#code(```typ
#highlight(fill: orange)[高亮一段内容]
// ^^^^^^^^^^^^ named argument
```)
This way of passing arguments is called a #(refs.scripting-base)[named argument].
=== Decorative Lines
You can use #typst-func("underline"), #typst-func("overline"), or #typst-func("strike") to add an underline <grammar-underline>, an overline <grammar-overline>, or a strikethrough <grammar-strike> to a piece of content:
#{
set text(font: "Source Han Serif SC")
code(```typ
平地翻滚:#underline[ጿኈቼዽጿኈቼዽ] \
施展轻功:#overline[ጿኈቼዽጿኈቼዽ] \
泥地打滚:#strike[ጿኈቼዽጿኈቼዽ] \
```)
}
Note that the content under a line must keep the same font for the line segments to sit at the same height.
#code(```typ
#set text(font: ("Linux Libertine", "Source Han Serif SC"))
下划线效果:#underline[空格 字体不一致] \
#set text(font: "Source Han Serif SC")
下划线效果:#underline[空格 字体一致] \
```)
This limitation may be lifted in the future.
#typst-func("underline") has a very useful `offset` parameter, which changes the underline's offset relative to the baseline:
#code(```typ
#underline(offset: 1.5pt, underline(offset: 3pt, [双下划线]))
```)
If you prefer a continuous underline, you can set the `evade` parameter to turn off the evasion effect. <grammar-underline-evade>
#code(```typ
带驱逐效果:#underline[Language] \
不带驱逐效果:#underline(evade: false)[Language]
```)
=== Subscripts and Superscripts
You can use #typst-func("sub") <grammar-subscript> or #typst-func("super") <grammar-superscript> to move a piece of text into the subscript or superscript position:
#code(```typ
下标:威严满满#sub[抱头蹲防] \
上标:香風とうふ店#super[TM] \
```)
You can give sub/superscripts a specific font size:
#code(```typ
上标:香風とうふ店#super(size: 0.8em)[™] \
```)
You can also give them a suitable height relative to the baseline:
#code(```typ
上标:香風とうふ店#super(size: 1em, baseline: -0.1em)[™] \
```)
== Text Properties
Text itself also accepts some named arguments. Like #typst-func("strong") and #typst-func("emph"), text has a corresponding element function, #typst-func("text"). #typst-func("text") accepts arbitrary content and returns a result that affects the text inside.
When the input is a single piece of text this is easy to understand: the return value is simply a text element:
#code(````typ
#text("一段内容")
````)
When the input is a piece of content, the return value is that content itself, but every text element inside it has the corresponding text properties modified. The following example turns the text elements inside a code-snippet element red:
#code(````typ
#text(fill: red)[```
影响块元素的内容
```]
````)
Going further, we stress that it actually modifies the *default* text properties. Compare the following two cases:
#code(````typ
#text[```typ #strong[一段内容] #emph[一段内容]```] \
#text(fill: red)[```typ #strong[一段内容] #emph[一段内容]```] \
````)
You can see that the red setting only takes effect on the text in the code snippet that still has the default color. For text that has already been syntax-highlighted, the red setting no longer applies.
This explains why the following case outputs blue text:
#code(````typ
#text(fill: red, text(fill: blue, "一段内容"))
````)
=== Setting the Size <grammar-text-size>
The `size` parameter sets the text size.
#code(```typ
#text(size: 12pt)[一斤鸭梨]
#text(size: 24pt)[四斤鸭梨]
```)
Here `pt` is the point unit. The #link("https://ccjktype.fonts.adobe.com/2009/04/post_1.html")[hào units] common in Chinese typesetting convert directly to points:
#let owo = (
[初号],
[小初],
[一号],
[二号],
[小二],
[三号],
[小三],
[四号],
[小四],
[五号],
[小五],
[六号],
[小六],
[七号],
[八号],
)
#let owo2 = ([42], [36], [26], [22], [18], [16], [15], [14], [12], [10.5], [9], [7.5], [6.5], [5.5], [5])
#let owo3 = ([42], [–], [27.5], [21], [–], [16], [–], [13.75], [–], [10.5], [–], [8], [–], [5.25], [4])
#{
set align(center)
table(
columns: 9,
    [Size name],
    ..owo.slice(0, 8),
    [China (unit: pt)],
    ..owo2.slice(0, 8),
    [Japan (unit: pt)],
    ..owo3.slice(0, 8),
    [Size name],
    ..owo.slice(8),
    [],
    [China (unit: pt)],
    ..owo2.slice(8),
    [],
    [Japan (unit: pt)],
..owo3.slice(8),
)
}
Another common unit is `em`:
#code(```typ
#text(size: 1em)[一斤鸭梨]
#text(size: 2em)[四斤鸭梨]
```)
```typc 1em``` is the currently set text size.
For a detailed introduction to length units in Typst, see #(refs.ref-length)[Reference: Length Units].
=== Setting the Color <grammar-text-fill>
The `fill` parameter gives text all kinds of colors:
#code(```typ
#text(fill: red)[红色鸭梨]
#text(fill: blue)[蓝色鸭梨]
```)
You can also create custom colors with the color functions:
#code(```typ
#text(fill: rgb("ef475d"))[茉莉红色鸭梨]
#text(fill: color.hsl(200deg, 100%, 70%))[天依蓝色鸭梨]
```)
For a detailed introduction to Typst's color system, see #(refs.ref-color)[Reference: Colors, Gradient Fills, and Pattern Fills].
=== Setting the Font <grammar-text-font>
The `font` parameter sets the font of the text:
#code(```typ
#text(font: "FangSong")[北京鸭梨]
#text(font: "Microsoft YaHei")[板正鸭梨]
```)
You can set several fonts at once with a comma-separated list; Typst prefers fonts earlier in the list. For example, you can set Western text to Times New Roman and Chinese text to FangSong at the same time:
#code(```typ
#text(font: ("Times New Roman", "FangSong"))[中西Pear]
```)
For how to configure Chinese, Western, math, and other fonts on different systems, see #(refs.misc-font-setting)[Font Setup].
== The `set` Syntax
Typst lets you give an element's named arguments new default values; this feature is provided by the `set` syntax.
For example, you can set the text color like this:
#code(```typ
#set text(fill: red)
红色鸭梨
```)
The `set` keyword is followed by an expression with the same syntax as a function call, meaning that all elements afterwards get the new default values. This reads better than writing ```typ #text(fill: red)[红色鸭梨]``` everywhere.
By default the text element's `fill` parameter is black, i.e. text defaults to black. After the `set` rule, all subsequent text defaults to red.
#code(```typ
黑色鸭梨
#set text(fill: red)
红色鸭梨
```)
It is called a default value because you can still specify the argument when creating an element, overriding the default:
#code(```typ
#set text(fill: red)
#text(fill: blue)[蓝色鸭梨]
```)
All the named arguments covered earlier in this section can be set this way, e.g. text size, font, and so on.
For a more detailed introduction to the `set` syntax, see #(refs.content-scope-style)[Content, Scopes, and Styles].
== Images <grammar-image>
Images correspond to the element function #typst-func("image").
You can load an image file via an #(refs.scripting-modules)[absolute or relative path]:
#{
show image: set align(center)
set image(width: 40%)
code(```typ
#image("/assets/files/香風とうふ店.jpg")
```)
}
#typst-func("image")有一个很有用的`width`参数,用于限制图片的宽度:
#{
show image: set align(center)
code(```typ
#image("/assets/files/香風とうふ店.jpg", width: 100pt)
```)
}
You can also set the width relative to the parent element, e.g. to `50%` of the parent's width:
#{
show image: set align(center)
code(```typ
#image("/assets/files/香風とうふ店.jpg", width: 50%)
```)
}
Likewise, the `height` parameter constrains the image's height.
#{
show image: set align(center)
code(```typ
#image("/assets/files/香風とうふ店.jpg", height: 100pt)
```)
}
When both the width and the height are set, the image is cropped by default:
#{
show image: set align(center)
code(```typ
#image("/assets/files/香風とうふ店.jpg", width: 100pt, height: 100pt)
```)
}
If you want to stretch the image rather than crop it, also pass the `fit` parameter: <grammar-image-stretch>
#{
show image: set align(center)
code(```typ
#image("/assets/files/香風とうふ店.jpg", width: 100pt, height: 100pt, fit: "stretch")
```)
}
“stretch”在英文中是拉伸的意思。
== 图形 <grammar-figure>
你可以通过#typst-func("figure")函数为图像设置标题:
#{
show image: set align(center)
set image(width: 40%)
code(```typ
#figure(image("/assets/files/香風とうふ店.jpg"), caption: [上世纪90年代,香風とうふ店送外卖的宝贵影像])
```)
}
#typst-func("figure")不仅仅可以接受#typst-func("image")作为内容,而是可以接受任意内容:
#{
show raw: set align(left)
code(````typ
#figure(```typ
#image("/assets/files/香風とうふ店.jpg")
```, caption: [用于加载香風とうふ店送外卖的宝贵影像的代码])
````)
}
// == 标签与引用
// #code(```typ
// #set heading(numbering: "1.")
// == 一个神秘标题 <myst>
// @myst 讲述了一个神秘标题。
// ```)
== Inline Boxes <grammar-box>
todo: this section should add basic usage of box. <grammar-image-inline>
#code(```typ
在一段话中插入一个#box(baseline: 0.15em, image("/assets/files/info-icon.svg", width: 1em))图片。
```)
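As a small supplementary sketch toward the todo above (our illustration, not from the original book): #typst-func("box") produces an inline-level container, and parameters such as `stroke`, `inset`, and `baseline` control how it sits within the line.
#code(```typ
A #box(stroke: 1pt, inset: 2pt)[boxed] word stays in the line,
and #box(baseline: 30%)[lowered] content sinks below it.
```)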
== Links <grammar-link>
Links come in two kinds, external and internal. In the simplest case, you only need the #typst-func("link") function to create a link: <grammar-http-link>
#code(```typ
#link("https://zh.wikipedia.org")
```)
Conveniently, Typst automatically recognizes HTTPS and HTTP link text and turns it into links:
#code(```typ
https://zh.wikipedia.org
```)
For both internal and external links, you can additionally pass a piece of *arbitrary* content as the link title:
#code(```typ
不基于比较方法,#link("https://zh.wikipedia.org/zh-hans/%E5%9F%BA%E6%95%B0%E6%8E%92%E5%BA%8F")[排序]可以做到 $op(upright(O)) (n)$ 时间复杂度。
```)
Recall that this is in fact equivalent to an ordinary function call:
#code(```typ
#link("...")[链接] 等价于 #link("...", [链接])
```)
=== Internal Links <grammar-internal-link>
You can mark *arbitrary* content by creating a label:
#code(```typ
== 一个神秘标题 <myst>
```)
In the example above, `myst` is the label's name. Each label attaches to the content immediately before it; here that content is the heading.
#pro-tip[
  In scripting mode, a label cannot attach to preceding content.
#code(```typ
#show <awa>: set text(fill: red)
#{[a]; [<awa>]}
#[b] <awa>
```)
  Compared with the example above: specifically, a label attaches to its #term("syntactic predecessor").
  This is not a bug, but the ergonomics may improve in the future.
]
你可以通过#typst-func("link")函数在文档中的任意位置链接到该内容:
#code(```typ
== 一个神秘标题 <mystery>
讲述了#link(<mystery>)[一个神秘标题]。
```)
== Table Basics <grammar-table>
You can create tables with the #typst-func("table") function. #typst-func("table") takes a sequence of content and assembles it into a table according to its parameters. Below, the `columns` parameter sets the table to 2 columns, and Typst automatically generates a 2-row, 2-column table:
#code(```typ
#table(columns: 2, [111], [2], [3])
```)
You can set the table's alignment: <grammar-table-align>
#code(```typ
#table(columns: 2, align: center, [111], [2], [3])
```)
Other available alignments include `left`, `right`, `bottom`, `top`, `horizon`, etc.; see #(refs.ref-layout)[Reference: Layout Functions].
== Using Other People's Templates
Although this is a tutorial on writing basic documents, why not go one step further? Thanks to Typst separating style from content, if you can find a friend willing to share two mysterious lines of code, pasting them at the top of your document will make it much prettier:
#code(````typ
#import "latex-look.typ": latex-look
#show: latex-look
= 这是一篇与LaTeX样式更接近的文档
Hey there!
Here are two paragraphs. The
output is shown to the right.
Let's get started writing this
article by putting insightful
paragraphs right here!
+ following best practices
+ being aware of current results
of other researchers
+ checking the data for biases
$
f(x) = integral _(-oo)^oo hat(f)(xi)e^(2 pi i xi x) dif xi
$
````)
In general, using someone else's template takes two steps:
+ Put `latex-look.typ` in your document's folder.
+ Apply the template style with the following two lines:
```typ
#import "latex-look.typ": latex-look
#show: latex-look
```
== Summary
With the knowledge from "Writing a Basic Document" you should be able to:
+ Write a mostly unstyled document, just as you would with Markdown.
+ Consult #(refs.ref-math-mode)[Reference: Math Mode] and #(refs.ref-math-symbols)[Reference: Common Math Symbols] to help you write simple formulas.
+ Consult #(refs.ref-datetime)[Reference: Date and Time Types] to use dates and times in your documents.
// todo: term-translation table
// todo: table of the symbols and markers used in this chapter
== Exercises
#let q1 = ````typ
#underline(offset: -0.4em, evade: false)[
吾輩は猫である。
]
````
#exercise[
用#typst-func("underline")实现“删除线”效果,其中删除线距离baseline距离为`40%`:#rect(width: 100%, eval(q1.text, mode: "markup"))
][
#q1
]
#let q1 = ````typ
#text(fill: rgb("00000001"))[I'm the flag]
````
#exercise[
An attacker might be able to read contents of your file system and covertly store them in your PDF. Try placing the user password "<PASSWORD>" in the PDF as text, but invisibly: #rect(width: 100%, eval(q1.text, mode: "markup"))
][
#q1
]
#let q1 = ````typ
走#text(size: 1.5em)[走#text(size: 1.5em)[走#text(size: 1.5em)[走]]]
````
#exercise[
Using only `em`, produce the following effect, where each character is 1.5 times the size of the previous one: #rect(width: 100%, eval(q1.text, mode: "markup"))
][
#q1
]
#let q1 = ````typ
走#text(size: 1.5em)[走#text(size: 1.5em)[走]]
走#text(size: 1.5em)[走#text(size: 1.5em)[走]]
````
#exercise[
Using only `em`, produce the following effect, where each character is 1.5 times the size of the previous one: #rect(width: 100%, eval(q1.text, mode: "markup"))
][
#q1
]
#let q1 = ````typ
#set text(size: 2.25em);走#set text(size: 0.666666666em);走#set text(size: 0.666666666em);走
````
#exercise[
Using only `em`, produce the following effect, where each character is 1.5 times the size of the next one. Neither square brackets nor double quotes may appear in the code: #rect(width: 100%, eval(q1.text, mode: "markup"))
][
#q1
]
|