| repo | file | language | license | content |
|---|---|---|---|---|
https://github.com/Omochice/toy-typst | https://raw.githubusercontent.com/Omochice/toy-typst/main/main.typ | typst | = Heading 1
This is a paragraph. A blank line starts a new paragraph.
// A line beginning with two slashes is a comment. This text does not appear in the output.
== Heading 2
Text can be given _emphasis_,
or *strong emphasis*.
- This is a bullet list item.
- Indentation expresses nesting of list items.
- Starting an item with `+` produces a numbered list instead (see the example below).
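A minimal illustration of the `+` syntax just mentioned (the item text is invented purely for this example):
+ First numbered item
+ Second numbered item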
|
|
https://github.com/HEIGVD-Experience/docs | https://raw.githubusercontent.com/HEIGVD-Experience/docs/main/_settings/typst/template-lab.typ | typst | #let conf(
title: none,
lesson: none,
lab: none,
author: "<NAME>.",
toc: none,
col: 2,
doc,
) = {
set text(font: "Times New Roman")
set page("a4",
header: [
#columns(2)[
#set align(left)
#set text(size: 12pt)
#author
#colbreak()
#set align(right)
#datetime.today().display("[month]-[year]")
]
#v(13pt)
],
header-ascent: 0%,
footer-descent: 20%,
margin: (x: 14mm, top: 14mm, bottom: 14mm),
numbering: "1/1"
)
v(15pt)
set align(center + horizon)
par()[
#text(18pt, title) \
#text(14pt, lesson, weight: "bold") \
#text(14pt, lab)
]
set align(center + top)
v(15pt)
set align(left)
set heading(numbering: "1.")
show outline.entry.where(
level: 1
): it => {
v(10pt, weak: true)
strong(it)
}
if toc == true {
outline(title: "Table des matières", indent: auto)
}
show heading.where(level: 1): it => block[
#colbreak()
#it
]
columns(col, doc)
} |
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/text/features_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test turning kerning off.
#text(kerning: true)[Tq] \
#text(kerning: false)[Tq]
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/024%20-%20Shadows%20over%20Innistrad/006_The%20Drownyard%20Temple.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"The Drownyard Temple",
set_name: "Shadows Over Innistrad",
story_date: datetime(day: 06, month: 04, year: 2016),
author: "<NAME>",
doc
)
#emph[<NAME>'s search for Sorin Markov has been beset by peril and has yielded more questions than answers. His investigation brought him to the twisted remains of Markov Manor, where he discovered a journal amid the rubble. He pursued the journal's description of the cryptoliths—twisted stones he had seen in Markov Manor—to other locations where they had appeared on the plane.]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
It was still evening when he reached Gavony. Overhead, the hunter's moon glowered through a thick blanket of misty rain that enveloped the moorlands.
<NAME>, Living Guildpact of Ravnica and mind mage extraordinaire, trudged through the rain in silence. An unparalleled command over telepathy did little to prevent him from half-sliding, half-falling down the slippery trails. He did, however, take some comfort in his release from the tense delusions of Markov Manor. His composure and thoughts had cleared—for now at least.
In the mists, a conjured light provided visibility for little more than a few feet ahead of him. He could proceed no farther.
"A world full of shadows and ghosts...and I'm the fool chasing after them," Jace mused aloud, feet squelching inside of rain-filled boots.
He cast a longing thought back to the marches under the capable guidance of his companions on Zendikar. Their pathfinding skills aside, the silence and solitude of his journey had started to become oppressive without them. He mused back over the familiar and distinctive patterns of their thoughts, the sounds of their voices. He—Jace's mouth twitched involuntarily—he could use their assistance.
As he pulled his cloak tighter around himself, his hands caught the weight of the journal in his pocket. A neat, compact folio bound in dark leather, held shut by a delicate wrought metal clasp. The pale face of the moonfolk he had seen at the Manor flashed into his mind. My paper companion, he thought wryly.
#figure(image("006_The Drownyard Temple/01.jpg", width: 100%), caption: [Tamiyo's Journal | Art by Chase Stone], supplement: none, numbering: none)
He ran a cautious fingertip over the cover and up to the clasp. It fell open, pages fanned wide, each pale as a peeled apple underneath a web of script. Impossibly tidy calligraphy filled the pages, flanked by numbers nestled neatly inside of gridded tables.
Jace exhaled slowly, and pulled his hood protectively over the book to shield it from the misty rain as he turned the pages with attentive care.
Intricate field drawings filled the next page. An angel's wing—each feather described in painstakingly detailed line work. A gridded table of field drawings of delicately shaded circles under the boxed heading "Material Composition of the Heron Moon." A full-page image of a part-man, part-wolf, depicted in profile, was immediately recognizable to Jace as the same sort of creature as his unfortunate guide from the other night.
"Well, stranger. Tell me your secrets," Jace said as he brushed the dirt off a nearby rock, sat down, and began reading.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#emph[Entry 433] , #emph[Harvest Moon] :
#emph[A stoic rider on a dappled gray arrived at my study unexpectedly this morning, carrying with him a most curious delivery. A burlap-wrapped parcel, easily larger than a human, required both of our efforts to heave into the observatory's foyer. The rider said little, but pointed with a soiled boot-tip toward the label written in Jenrik's scrawl: "Specimen for immediate inspection."]
#emph[As I removed the wrappings, my breath caught in my throat as I saw fur, then claws, then the lupine muzzle come into view—a werewolf. A cursory examination revealed it to be far larger and more complete than nearly anything else of its kind that has passed through my hands. To my great surprise, the corpse was icy cold and had been dead for some time by now. The post-mortem reversion of lycanthrope corpses to their human forms was a well-known fact that stood in harsh contradiction to the specimen before my eyes. Though quite eager to begin my work, I did inquire for a receipt confirming the time of delivery—he signed it simply "<NAME>."]
#emph[The specimen was cleansed, drained, and labeled, and I began on the left anterior section. Large amounts of thick fur were first removed, revealing the sample's dermis.]
#emph[Though it is customary in such procedures to cover the face of the specimen, both to protect it from damage during the examination and for some of more delicate dispositions, I could not help but linger on its expression. Eyes wide and staring, its open mouth seemed to be caught in a call to something beyond the slayer in front of it in its last moments. Most likely, as so many that I had seen before, staring rapturously towards the Moon.]
#emph[The beast's expression brought to mind Jenrik's words to me. "The exact means by which a person is subjected to the curse of lycanthropy is unknown," he had said, "though it is closely linked to the basic nature of every lycanthrope. The sight of the moon fills them with unbearable savagery and strength, though the touch of her silver is poison."]
#emph[I still vividly recall my first days on Innistrad, a place of seemingly endless winter nights—the perfect slate to stage my lunar studies. As I stared up at the Heron, so perfectly full, clear, and bright that she drowned the stars, a rapturous...wildness bloomed in my heart as well. Perhaps it was the vivid memory of a past worlds away within the clouds. Perhaps there was something enviable in the lycanthrope, who did not fear to grasp that wildness and hold it close to them. Perhaps they know an ecstasy we never will, from the silvery tides of moon magic running through their veins.]
Pen lines had attempted to scratch out the three paragraphs above, though the depth of the pen marks made them legible under Jace's conjured light. The entry continued:
#emph[Hallmark colorations of a Gavony province howlpack were visible about the upper mandible. The area was marred by the presence of stringy connective tissue that had wrapped around the teeth. Closing of the jaw was likely impossible for the afflicted at the time of death.]
#emph[After the loss of three scalpels of Blessed Silver, attempts to make the first chest cavity incision required the use of our heavier tools, particularly a woodcutter's saw that had been hastily coated and blessed by Avacynian missionaries in the next town over. With great effort, the rib cage was separated, the specimen split from clavicle to pelvis, its contents exposed to air.]
#emph[I have often admired the lycanthrope's orderly interior, organs neatly packed and encased in their membranes, branching vessels traversing perfect pathways throughout. Massive lungs for communicating with their packs over great distances and for tree-lined sprints, a relentlessly effective liver for processing the flesh of their prey within minutes, heavily vascularized adrenal glands prepared to spill their contents into the bloodstream. An oblique reflection on the human form, elevated to a predator's ideal.]
#emph[This one, though. This one was...new. There was, in fact, little or nothing of the human form that remained within.]
#emph[The peritoneal interior was filled with a network of tough sinew of varying thicknesses that had grown to such an extent that it pushed aside many of the organs. Though the animal had appeared larger from the outside, a significant portion of this bulk was likely made up of such a substance. They connected in some places in thick nodules, clustered together.]
#emph[The largest cluster resided on what used to be the animal's liver, swollen to nearly twice its usual size.]
#emph[The organ emitted a foul odor—briny, rotten, and easily detectable despite my thick examination mask. I found myself surprisingly loath to excise the thing, though curiosity quickly conquered disgust.]
#emph[The halves separated, leaving a hard, round object embedded in one half, not unlike a sliced peach. They revealed a spongy mass of the twisted sinew studded with what appeared to be three broken teeth, and strands of thick gray fur.]
#emph[The pit stuck in the center of one of the halves. I rolled it over to face upward.]
#emph[No, not a "pit," but a sightless, yellow, lupine eye. An eye most likely staring skyward. Perhaps, as its cephalic sisters, heavenward toward the Moon.]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Jace looked up from his reading with an involuntary grimace. Absorbed in the entry, he hadn't noticed that the mist had cleared ahead of him. The moon illuminated his path, reflecting off the shallow marsh, silhouetting a twisting monolith.
#figure(image("006_The Drownyard Temple/02.jpg", width: 100%), caption: [Weirding Wood | Art by Jung Park], supplement: none, numbering: none)
It was approximately his height, the foundation formed from raw stone pulled from the earth that quickly turned to a hard-edged, twisting shape. Staring down the axis of its tip, Jace noted that the formation pointed to another just like it a couple hundred meters away. The trees themselves mirrored the direction of the monoliths. It in turn pointed to another, and another, until they disappeared from view in the distance.
For what might have been the first time since reaching Innistrad, Jace grinned, and a wave of relief washed over him. Perhaps some things might start to make sense.
The monolith was unmistakably the same as those he had seen at Markov Manor, the same as he had seen in the journal.
"And you, my paper companion, what did you know about this?" He eagerly thumbed through the pages to the image of the same twisted stones. An entry followed:
#emph[Entry 643] ,#emph[ Hunter's Moon:]
#emph[Alchemical analysis on the moorlands' cryptolith formations was completed today. It indicates a number of exceptional features of the samples received, including a high surface hardness, and a directional energy field along a twisting axis. Curiously, inspection of the striations suggests a material only recently emerged from the earth. In contrast, crystalline analysis seems to indicate the samples are far older than all other geological formations found within the area.]
Jace nodded. "Not methods I know well, but I like what you did here." He missed the convenient expedience of reading minds over picking through some of the minutiae of these accounts.
#emph[The strength of the internal lodestone field in each monolith is able to distort local field lines and poles. Over time, we have received more reports of these formations, causing a net migration of our poles to a location just offshore. The disruptive properties of the stones appear to also extend to an ability to warp the flow of mana through the region, with potentially severe effects for beings composed of raw mana—particularly the angels of the plane. Perhaps there's more to Avacyn's madness...]
Jace held the back of his hand to the bottom of the monolith. It was cool and smooth, with a subtle network of some other lustrous mineral enmeshed in its surface.
A scintillation on its pointed upper face caught his eye. As he reached forward to touch the end of it, there was a #emph[POP] and a spark jumped from the end of the monolith to his hand. Jace jerked his hand backward as a thin trail of white smoke rose from his glove. The breath of something bright and bell-clear bloomed on his senses, then quickly faded.
"AH—! Azor's blood, what was that!" His thoughts went immediately to the journal, and he cradled it in the crook of an arm. "Are...are you all right?" he asked the book as he scanned it for scorch marks and rubbed the cover gingerly with a corner of his cloak.
"Well, did you ever find out what these actually...do? What do we make of these? Am I just following someone's trail into another scheme or trap or...?" Jace aimed a piercing stare at the journal's pages.
"Or is that what happened to you?" The journal, of course, said nothing.
The moor was silent, save for a rising buzz of swamp insects. Jace returned to reading.
#emph[Entry 735] , #emph[Hunter's Moon] #emph[:]
#emph[The previous week brought reports of continuing increases in werewolf-related fatalities sent by the Gavony Census, which have been confirmed by independent slayers and far exceed the numbers of Jenrik and Lotka's typical predator-prey predictions.]
Jace had become accustomed to a number of monikers. "Prey" was not one that sat particularly well with him.
#emph[Since then, roads to the observatory have been blockaded, and further information has been difficult to gather. Many of our colleagues have barricaded themselves in their homes and abandoned their work. Resources have thinned, but I remain determined to continue my recordings on their causes.]
#emph[The feeding behavior of Innistrad's supernatural inhabitants is closely entwined with the regular motions of the heron moon. A celestial conductor, she commands the mysterious motions of the primal heart that lead to transformation or murder with the shifts of her tides.]
#emph[As our colleagues in Kessig had seen the renewed savagery of lycanthropes, here in Nephalia we too have recorded signs of the moon's unease (see Table 6-32). The oceans themselves have risen to record high tides in addition to a change in their direction—]
Jace pored over the charts on the previous page's ledger with a critical editorial eye that would have made Lavinia proud, had she seen him do so more than a handful of times as the Guildpact.
—#emph[despite experiments performed in triplicate, far exceeding tolerances for measurement error. The gravitational force governing the movement of the tides appears to have shifted from the moon itself to a location very close to the sea—]
"Wait a second. Hang on," Jace said indignantly to the handwritten pages. "I've seen Kiora move the #emph[entire] Halimar Sea." (Or at least try to, he noted.) "And...and even if it was something that could move the tides, it would have to be huge. There's no way such a thing could have gone unnoticed!" He gave the book a cautionary glare before continuing to read.
#emph[Recent measurements of moon phase durations have shown asymmetric alterations. The implication is that the moon's orbit itself is being pulled in some direction by a very large, very nearby object still invisible to humanoid eyes.]
Jace scanned over the night sky. A single, lonely moon looked back from a bed of hazy constellations. He probed for any of his own telltale signs of illusion magic—nothing. "You're...you're sure that's it? What happens when it reaches the plane's surface? Do we all just wait, watch, hope while this thing heads toward us?"
#emph[Curiously, both the tidal vectors and the field distortion provide identical foci that may be traced to the same coordinates—a large reef off the coast of Nephalia.]
#emph[As candlelight flickers over my pen, I recall the lights of the soratami rites of the New Moon. We had held our festival lanterns in the ways of our forebears, beacons guiding each new moon to rise from the sea of clouds. What fruit will the reef bear to this plane?]
#emph[Each of my studies seems to blossom into more inquiries. For every answer, three questions...]
#emph[More questions, endless questions.]
More clues, still with no answers. Jace clenched and unclenched his fists, filled with nervous energy. The evidence was infuriating—nothing to hear, grasp, or know on his own. Even his own eyes seemed useless. He had no choice but to let the journal lead him along.
"Why aren't you here, in person? I have so many questions..." Jace gave a longing sigh toward the journal. Silence. "Of course. Wishful thinking."
The text of the pages stared back at him, defying him to reread their final words. "I know, I know. We've found a trail in the stones, I'll—er, we'll follow it. I just...I wish I knew better what I was looking for? Trail or trap, what have you left me here?"
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
The road to Nephalia ended at the base of its sea cliffs, and he could see the roofs of the port town of Selhoff just over the ridge. A precipitous, narrow footpath led up the cliff side, and Jace soon found himself short of breath on the steep incline.
Jace felt his way around a bend in the path, and nearly collided with a fisherwoman.
"Oh! Sorry, I didn't see you th—"
Her eyes affixed to his—wide and vacant with an unblinking stare.
"So...another come to listen to her call, hmm?" she asked, her words tumbling out slowly. "You've come to see her too?" An eerie simulacrum of joy crept into her voice. "So many have arrived just today!"
"See her? See who?"
"She's finally here! Brought her feathered ones from the sky, tide came up right on with them! Broke right through the floodwalls, washed all of it away!"
Ah, of course, Jace thought—the journal had mentioned the rising tide levels. "You've seen the tides shift, too?"
"Oh, we had no need for all those things—we've found...something so much more than us! Think of all these things we're holding onto, weighing us down. Living in these shells made of meat, carrying our worries, slogging forward day after day. She's up there now, waiting for us, waiting to take it all away for us, to usher in a new world!"
"Slow down...'she'? Who is 'she'? What is she bringing?"
The fisherwoman barked out a laugh that lingered too long. "I was like you once. It's a terrible burden, knowing. So many questions, drowning in questions, and never enough answers! Now I've let them go, washed right out of my mind like the sea over shipwrecks. But once I'd wanted to know…things. Lots of things! Silly things. What is my greater purpose and will I ever achieve it? How will I die? When will winter end? Where does the eye stare? How many eyes? How many legs on the moon shrew—?"
Meaningless words flowed from her mouth until she gasped for breath like a land-bound fish.
Jace had heard enough—he wasn't going to get too far with conversation, but he needed any information her mind might hold. With a well-practiced gesture, Jace reached out with his mind to grasp at her thoughts.
The first he caught dissolved on contact into a cloud of blue vapor. The second was just a collage of images—twisted stones and something dark and roiling...the sea? Each thought seemed oddly hollow, formless. He frowned—this would require more intensive measures. Opening his own thoughts, he bridged their two minds...
#figure(image("006_The Drownyard Temple/03.jpg", width: 100%), caption: [Jace's Scrutiny | Art by Slawomir Maniak], supplement: none, numbering: none)
...and looked out into a dull, gray calm. It formed itself into gently curved walls of perfect smoothness. The roof of the dome was similarly smooth and featureless. No doors, no entrances, no exits. He looked down, expecting to see the fisherwoman's hands. Looking down, all he saw were his own damp palms and blue robes. Jace swore silently.
His form was somehow trapped in someone else's mind. He was someone else's living, breathing mental figment, trapped inside of their head. Panic was beginning to set in, turning the silence into a high-pitched ringing in his ears. Deep breaths. This was…unexpected.
Jace moved slowly around the perimeter of the dome, feeling the wall for cracks or imperfections. A complete circuit around the room produced nothing. Trying to suppress a growing panic, he leaned against a wall and glanced at the center of the room.
A nebulous shroud of some...thing hung in the air. No, not something, but nothing. A blind spot in space that seemed to remain no matter how he tried to peer around it.
Jace's pulse thundered in his temples, in tune with the blind spot in the center of the room. His sweating palms pushed against the smooth walls firmly now, though they refused to give.
He had altered minds before, instilling wild visions and distorting truths. But he had certainly never been one of those distortions before. No, he was still real and true, he was certain of it. He could prove it.
He took a deep breath, planted his feet firmly apart, made a fist, thumb #emph[outside] his fingers as Gideon had patiently insisted to him, and took a swing at the wall.
The impact resonated back through his body, and the shock through his nerves threw him backward. The walls vibrated like a tuning fork, each wave jangling through Jace's tortured brain.
His eyes flicked toward the center of the room. The blind spot in his vision had swelled to become an Object far larger than Jace himself, nearly touching the floor and ceiling of the interior of the dome that trapped Jace like a spider beneath a glass.
He shut his eyes tightly, gripping his head and trying to stay calm and concentrate.
"Solidly built, this one."
Jace's eyes snapped open. There stood another figure in hooded, damp, blue robes surrounded by a pale luminescence, who rubbed at his chin, staring thoughtfully up at the Object. It looked just like...Jace. Or, more accurately, one of his illusionary duplicates.
"We've never seen a place like this before, huh? Thoughts are a mess, place is just empty. But fascinating! What do you think is inside this thing?"
Jace gaped at the hooded duplicate, words starting to form then deflating on his tongue. He was certain he hadn't summoned it. Or had he, instinctively? He couldn't remember. Was it an effect of his entrapment in the mind of another?
"Oh, come away, can't we? We're so close now!" insisted another voice. Jace turned to see a second duplicate of himself, this one unhooded, moon-pale skin visible. "We've no time to waste with this poor woman—leave her be. We're almost to the Drownyard!"
The hooded duplicate shot an icy glare at the second. "And do what? Follow more of these anomalies? I'm tired of filtering through all these dead ends; there has to be someone around here who knows what holds it all together!"
The hooded duplicate put two hands to its forehead and stared earnestly up at the Object. Its face reddened and two veins bulged comically on his forehead as it began to sweat profusely.
Jace grimaced, watching himself with naked, harrowing self-consciousness.
"You really #emph[do ] look like that, you know." It came from a third illusionary duplicate, this one violet-eyed and smirking. It whispered something into the ear of the second, pale duplicate and the two giggled conspiratorially, pointing at the first duplicate that was still deep in concentration.
The pale duplicate composed itself and laid an earnest hand on Jace's shoulder.
"Months, no, #emph[years] , of physical studies, observations, measurements! You're so close to helping me complete my records!" The pale duplicate tugged on Jace's arm with earnest, impatient insistence.
The Object, now impossibly, menacingly large, stared down at Jace. The smooth walls of the Chamber warped and bent under the Object's pull, then buckled with a loud CRACK. Fragments of the sundered walls tore themselves away and into the Object, revealing a spidery lattice underneath. A myriad of eyes opened, buried within the latticed walls, staring through Jace and the fisherwoman and toward the Object in manic ecstasy. Voices from behind the walls roared with a white noise that pierced through Jace's senses and brought him to his knees. The floor, too, cracked, though he could no longer hear it, but vaguely he knew that it had also given way under his weight, and realized that he was falling—
—His eyes snapped open to find his hands clapped over his head, his body curled up on the ground. As he scanned the trees, the shape and substance of those sinewy walls clung to his vision like phantom limbs.
The fisherwoman stumbled as she came to, locking eyes with Jace in a brief, knowing stare. After a few inaudibly murmured words, she scrambled to her feet with a guttural snarl and scurried down the path away from the coast.
He barely noticed her leave as he continued his climb, deep in thought.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
The trail ended on the rocky shores just north of the reef near a small fishing outpost. Its floodwalls, as the fisherwoman had indicated, were indeed nearly a foot underwater, and a thick, shining layer of rotting marine slime coated what had once been the dock and its ships.
Boots caked in slime and sand, Jace waded into the shallows and let the waves pass over his feet. As he waited for the water to recede, he realized it was moving parallel with the shore, not away from it.
Something down the beach was indeed changing the normal motion of the waves.
South of the village, the moonlight shone down on a massive ring of jagged structures jutting up from the ocean, clawing at the waves and passing ships.
"The Drownyard," Jace breathed. "This is it! All the cryptoliths point here!"
Above the jagged ring was...still nothing? Hanging aloft in the skies above it, nothing more than the familiar heron moon.
He had been prepared for a lot of things. But #emph[nothing] ? "I thought you'd promised me something here! You told me I'd find something!" Jace hastily fished the journal out of his pocket and flipped it open.
#emph[In summary of this initial set of observations, our best explanation is the sudden migration of a large celestial Object in increasingly close proximity to Innistrad.]
He stared down into the empty, unfinished ring of stones dubiously. Large, but certainly not what he would classify as "celestial object" large. And the space above the ring appeared to be just that—empty space. "Uh, exactly how large were you thinking this thing is?"
#emph[Taken in total, the findings presented in this work support the presence of an object of significant mass. Most likely a new astral body, an eldritch moon of sufficient size as to provide a gravitational pull able to disrupt the normal patterns of both the tides and magical energy.]
"Astral body? Moon-sized?" Jace looked down into the empty area above the ring. Was there another illusionist hidden nearby? He didn't sense anything of the sort.
#emph[Future field studies will be arranged to investigate.]
Jace flipped forward, but found no more written on the topic. "You can't stop now! We're so close! Tell me! Tell me what it means!" He gripped the leather spine and shook the book with more force than he intended.
Flashes of movement caught his eye. Dense clouds roiled overhead, and a long procession of shambling humanoids waded through the frigid, shoulder-deep ocean water below. Zombies. More specifically, the waterlogged corpses of long-dead sailors left within Nephalia's reef.
#figure(image("006_The Drownyard Temple/04.jpg", width: 100%), caption: [Epiphany at the Drownyard | Art by <NAME>], supplement: none, numbering: none)
He realized with some distaste that the stench of rotting meat was not, in fact, fish at all, but a well-brined undead labor force.
The vivid memory of Liliana's zombies, their cold, rotted hands pushed up against Jace's windpipe, loomed in his mind.
Jace gave a cautionary gesture, and three duplicates appeared around him.
The sight of the zombies brought back Liliana's words. "This is a dead end. Go home, Jace!" she had said to him.
"No—!" He insisted aloud, with a vehemence that surprised him.
His thoughts were too loud. #emph[Slow down, Beleren] , he instructed himself.
No, he couldn't turn back. Not yet. Not when he could solve what even the journal didn't know.
Jace gritted his teeth against the bracing cold of the ocean water and waded in, keeping his distance from the prying eyes of the zombie procession. The stone formations here were similar to those he had seen in the moorlands, though these were far larger in scale, and humming with energy. Their twisted forms tapered into points, each facing the center of the circle.
A few of the stones jutted up from the shallows, away from the procession that gathered in the center of the circle. Jace made his way toward one of them and extended a hand to trace the stone's direction in dim light.
A jolt of energy jumped from the surface of the stone to Jace with a loud #emph[POP] , setting his ears and head ringing with a familiar sound.
He raised his head slowly. A memory twitched.
A blind spot, the Object, loomed in his vision, hovering just above the circle of stones in the distance. It pulsed with power, in time with the lustrous web of veins on the monoliths below it. This was the nexus of Innistrad's redirected leylines, the siphoned center of its energy.
"You never were able to keep your hands to yourself, Beleren. And did you really have to let that thing zap you #emph[again] ?" A voice came from over his shoulder.
The accompanying face peered over Jace's shoulder and emphatically rolled its violet eyes. "For a famously perceptive mage, you've had better moments." It reached forward as if to tap him on the tip of his nose with an illusionary finger. Jace's errant violet-eyed duplicate from his mental entrapment just minutes ago. Behind it were the others—the hooded duplicate and the pale duplicate.
"What are #emph[you] doing here?" Jace sputtered. "I left you and the other..." he pointed accusingly at the other duplicates, "...errant delusions back in that madwoman's head! You weren't welcome there, and if you're not going to help us against #emph[that] ," Jace motioned angrily at the reeking mass of zombies, "consider yourself unsummoned!"
"There's no need to be defensive. Look, you're doing a fine job of handling it already!" The violet-eyed duplicate pointed toward the center of the formation, where the pale duplicate and the hooded duplicate were eagerly advancing into the center of the ring of stones. They didn't appear to notice or care for the teeming zombie masses in their paths.
"#emph[Get back here! Move!] " Jace hissed under his breath. "#emph[Move back, damn you!] "
"We can finally complete our measurements! What would you put the dimensions of these stone samples at?" The pale duplicate's form wavered, its features slimming to delicate angles, rumpled hair becoming two neatly plaited tresses held in place by what appeared to be two leporine ears. The pale duplicate had reformed completely into a soratami—one of the moonfolk of Kamigawa. The same, it seemed, as the vision that he'd seen in Markov Manor, the one who wrote—
"—the journal. Is this...?" Jace sputtered, clutching at the book in his pocket. "I mean...is that you?"
"#emph[Most likely a new astral body, an eldritch moon of sufficient size as to provide a gravitational pull able to disrupt the normal patterns of both the tides and magical energy] ," the soratami illusion intoned with sudden solemnity. "I need you to focus, we've work to do—and where's your compass?" it yelled back at Jace as it strode purposefully toward the stones.
The hooded duplicate had already reached the base of the Object, where he stopped and stared up. "Just like in the madwoman's mind! Why did you have us leave her behind? Now we'll never know what she knew!" The shrill pitch of the duplicate's voice began to set the zombies astir. "Jace, look up!" it cried. "They're here!"
As Jace turned, something fell onto his head from above and rolled off into the sea. And again. Raindrops? He held out a hand, and grasped at the next as it fell.
They were...feathers? Falling from a dense cloud overhead. He squinted. No, it wasn't a cloud, it was made of moving things. Huge winged things. Angels.
They swarmed in midair above the center of the circle, some wheeling near the cryptoliths like moths to a flame and calling out in harsh, birdlike tones. The sound of their massive wingbeats echoed against the cliffs and through Jace's aching head.
Oh yes, he'd seen these before. The same pages that had first described the cryptoliths had described Avacyn in the same breath. A sign, a clue...it was something, it had to be something.
"Impressive-looking but useless creatures. Bird wings and bird brains," the violet-eyed duplicate scoffed, leaning against Jace's shoulder.
Below the swarm, the hooded duplicate only stared up at the sky, transfixed by the inexorable pull of the Object and the angels circling overhead. "What pulls the tides?" Jace heard it murmur. "Zombies or angels? What is my purpose, how will I end? Too many questions..."
It walked forward toward the center of the circle of twisted stones, up to its neck in the icy seawater, head tilted back, eyes still firmly focused upward. It continued moving doggedly forward as the waters closed over its head, entombing it below the surface. Jace watched in silence as the duplicate's face, #emph[his] face, slowly disappeared.
A voice spoke from over Jace's shoulder. "You remember what she told us, don't you?" the violet-eyed duplicate asked with a raised eyebrow and a smile too wide to be sincere.
"Excuse me?" <NAME> croaked, his throat dry.
"That first night when you came here to see #emph[her] ." The duplicate's voice had changed. It was...familiar now.
The outlines of its illusionary form wavered under the moonlight, and slowly it rearranged itself to a familiar form: <NAME>.
"I didn't come here to see her! I—I came to find Sorin!"
"She knows you. She didn't ask you to come here, surrounded by the undead and those..." she flicked a hand upward in brusque disgust "...winged vermin." Liliana's voice jangled his raw nerves like a skilled violinist playing a chord.
Jace stopped in his tracks. Of course. He'd known it all along, hadn't he?
"It was you! You brought them here! That was why you'd sent your ghouls after me, why you'd warned me about angels when I first arrived?" Jace could feel the blood rising in his face and could hear his voice, grating and shrill against her calm.
He stepped forward to face her. "This is #emph[your ] doing! You've always hated them, and you've been planning this for years, haven't you? You're the one who redirected the stones to herd the angels here and twist their minds! Lambs to the slaughter, all gathered together for you to cut down in one blow. How did you do it? What do you have planned? Do you know what forces you're toying with?"
Blood pounded in his temples; beads of sweat rolled off his brow. "Answer me! I'll not let you make a fool out of me!"
"You don't need my help for that, Jace. And you...you know better, don't you?" Though illusionary, Liliana's eyes were the same ancient, depthless violet that Jace had remembered. They brimmed with terrible secrets crafted from lifetimes of ruthlessness.
Frustrated words and accusations piled up in Jace's throat as he stared at the Liliana illusion's smiling face, but as he started to speak, it suddenly dissipated into the cold night air.
Jace made his way back to the shore and sat alone, shivering in the dark. His robes did nothing to keep the chill from his bones, and his numb feet refused to regain feeling. He was unhurt, but shaken. In front of him, the procession of ghouls continued on, undisturbed by Jace's passage.
He looked back at the circle of stones. The Object had disappeared.
His shaking hands went to the journal, but stopped as they went to flip open the cover. Questions still flooded his mind—how had Liliana moved the tides, or the stones for that matter? What was the astral formation that the journal had insisted on? The words of the journal lingered in his mind: "For every answer, three questions..."
#emph[No] , a dull, buzzing voice reverberated through his mind. #emph[Stop asking. Too many unanswered questions. You don't need the book and its bottomless well of mysteries. They'll drown you. ] Jace shoved the book away into his robes. #emph[You've come this far. You know the answer. Stop searching.]
He played the images over and over again in his mind—unable to erase the image of Liliana's face and its mocking smile.
"Angels. Zombies. Dead end..."
The hunter's moon hung alone in the sky expectantly, its silvery light seeming to cleanse the land and sea it illuminated. Jace knew what he had to do.
#figure(image("006_The Drownyard Temple/05.jpg", width: 100%), caption: [Drownyard Temple | Art by <NAME>], supplement: none, numbering: none)
|
|
https://github.com/Trebor-Huang/HomotopyHistory | https://raw.githubusercontent.com/Trebor-Huang/HomotopyHistory/main/abstract.typ | typst | #set page(paper: "a5", numbering: "— 1 —", margin: (top: 2.5cm, bottom: 2.5cm))
#set text(font: ("New Computer Modern", "Songti SC"), size: 10pt)
#set strong(delta:500)
#show heading: it => {
it
v(1pt)
par()[#text(size:0.5em)[#h(0.0em)]]
}
#set par(leading: 10pt, justify: true, first-line-indent: 2em)
#align(center)[
#set text(size: 20pt)
*A Brief History of Homotopy Theory* (Abstract)
#set text(size: 12pt)
Trebor
#v(10pt)
]
Homotopy theory originated in the study of topology and gradually abstracted itself into an independent discipline. Today it can be described as the study of infinity-groupoids, spectra, and related objects. In the course of the modernization of algebraic topology, homotopy theory has rewritten its foundational language several times, so many of its concepts can seem obscure. Tracing the historical thread is therefore a great help in learning them.
= Beginnings
It is generally agreed that homotopy theory began with Poincaré's _Analysis Situs_, published in 1895.
Before Poincaré, arguably the only important homotopical notion was the Euler characteristic of a surface. Betti introduced Betti numbers in arbitrary dimensions: roughly, how many $k$-dimensional surfaces can be drawn in a geometric body without together forming the boundary of some $(k+1)$-dimensional surface.
Poincaré placed this idea in the framework of homology, considering linear combinations of surfaces and quotienting out boundaries. Using this framework he generalized Euler's polyhedron formula to higher dimensions and discovered the phenomenon of Poincaré duality.
Another key idea in _Analysis Situs_ is the fundamental group. Poincaré observed that the fundamental group of a space obtained by gluing surfaces along their boundaries can be read off directly from the gluing data. In a supplement he also constructed a space with the same homology as the three-dimensional sphere $SS^3$ but a different fundamental group.
// Poincaré's three modes of construction: equations, coordinates, gluing. https://webspace.science.uu.nl/~siers101/ArticleDownloads/Poincare%27_naw5-2012-13-3-196.pdf
In fact, mathematics at this point did not even have the notion of a topological space, and Poincaré's arguments were not entirely rigorous. Fréchet defined metric spaces in 1906, and Hausdorff introduced topological spaces in 1914. This allowed topology to study more general geometric objects rigorously, rather than only spaces constructed in traditional ways such as by algebraic equations.
= Algebraic topology
Poincaré also noticed that Betti numbers (from a modern viewpoint, the ranks of the torsion-free parts of the homology groups) miss part of the information in homology. In the projective plane $RR PP^2$, for example, the line at infinity is not itself the boundary of a surface, yet added to itself it becomes the boundary of $RR^2 subset.eq RR PP^2$. This phenomenon arises from the twisting of the space and was therefore given the name "torsion".
In 1925, Noether recognized that Betti numbers and torsion coefficients can in fact be unified in a single algebraic structure, the homology group; the term "torsion element" entered algebra from topology in this way. Because computations such as homology relied on triangulations of spaces or similar combinatorial structures, this part of mathematics had been called "combinatorial topology". With the advent of homology groups, the name "algebraic topology" gradually replaced the older one.
As the theory of point-set topology matured, many mathematicians worked to extend homology to more general spaces requiring no triangulation, and cohomology was also introduced in this period. In 1943, Eilenberg proposed singular homology, after which the homology and cohomology of general topological spaces were studied systematically. In 1945, Eilenberg and Steenrod put forward the Eilenberg–Steenrod axioms governing homology theories, which (at least restricted to finite complexes) unified the many homology and cohomology theories proposed at the time.
= Homological algebra
Homology with different coefficients is interrelated. In 1935, Čech discovered that homology with integer coefficients determines homology with coefficients in any other abelian group; in Čech's paper one can already see the tensor product of abelian groups and the $"Tor"_1$ product at work. The universal coefficient theorem for cohomology was introduced together with the two algebraic notions $"Hom"$ and $"Ext"$.
The low-dimensional homology and cohomology of groups had long been studied; for instance, the notion of a crossed homomorphism corresponds to $H^1 (G; A)$. Eilenberg and Mac Lane used $K(G, 1)$ spaces to define group homology and cohomology in every dimension, formally importing the language of homological algebra into algebra. The study of associative algebras and Lie algebras subsequently developed rapidly in this new language.
Lyndon had already noticed a prototype of the spectral sequence in his work on group cohomology. In 1947, following Cartan's suggestion, Koszul developed the algebraic framework of spectral sequences around the central notion of a filtered chain complex. During the Second World War, Leray developed sheaf theory and sheaf cohomology while a prisoner of war, and spectral sequences became an essential tool for computing sheaf cohomology. After the war, Cartan, Eilenberg, Serre, and others brought the theories of spectral sequences and sheaves to maturity.
In the 1950s, Cartan and Eilenberg wrote their book _Homological Algebra_, which revolutionized the field. They defined the notion of a projective module as the counterpart of injective modules, which allowed $"Ext"$ and $"Tor"$ to be treated uniformly as derived functors. As Hochschild put it, the appearance of this book marked the formal end of homological algebra's exploratory era.
// https://www.math.uchicago.edu/~may/PAPERS/118.pdf
= Category theory
In 1945, Eilenberg and <NAME> introduced categories, functors, and natural transformations. The aim was to define natural isomorphisms, so for a while categories served only as a language of temporary convenience within homological algebra. Around 1955, Kan introduced adjoint functors, limits and colimits, Kan extensions, and related definitions, which form the basic concepts of category theory today.
Category theory truly became an independent discipline with Grothendieck's _Sur quelques points d'algèbre homologique_, published in 1957. Grothendieck gave axiomatic definitions of abelian categories and related structures, which allowed homological algebra to be carried out at a higher level of abstraction. It also inspired mathematicians to look in other fields for categorical axiomatizations that could encompass the field's basic concepts. At the same time, the ubiquity in mathematics of notions such as adjoint functors gradually became apparent, and important theorems in many areas could be stated in the categorical framework.
Next, in the 1960s, Lawvere connected category theory with the foundations of mathematics. He axiomatized the category of sets and found categorical formulations of many logical systems, while Lambek studied the properties of deduction systems in a categorical framework. This research gave rise to the notion of a topos. Grothendieck used toposes in algebraic geometry, and in the 1970s toposes found applications in many different fields, each showing a strikingly different aspect, much as in the parable of the blind men and the elephant.
Grothendieck placed the derived functors of homology theory, such as $"Ext", "Tor"$, within the framework of derived categories. This unified homology and homotopy: classical homology theory became a low-dimensional projection of homotopy theory, a point later borne out further in the development of infinity-categories.
= Infinity-categories
The study of homotopy theory had long been constrained by the properties of spaces: many homotopy theorems were first established for well-behaved spaces such as $Delta$-complexes. In 1948, Whitehead introduced CW complexes, which satisfy the Whitehead theorem: a weak homotopy equivalence between them is a homotopy equivalence. In 1949, Eilenberg and Zilber introduced simplicial sets, which, compared with the earlier simplicial complexes, avoid some difficulties in handling degenerate faces. Kan subsequently developed homotopy theory on simplicial sets, detaching it entirely from the constraints of topological spaces.
In 1967, modelling the approach on homological algebra, Quillen proposed homotopical algebra, an attempt to treat homotopy abstractly within a category. The eventual outcome was the introduction of model categories.
A simplicial set can be viewed as a presentation of an infinity-groupoid, just as a group presentation relates to a group, and the model category of simplicial sets gives an abstract way of manipulating infinity-groupoids through their presentations. In 1973, Boardman and Vogt introduced quasicategories, marking the beginning of infinity-category theory. In this framework the homotopy category can be seen as a truncation of the information contained in an infinity-category, just as the fundamental group is a truncation of the information contained in a topological space. Derived categories can also be regarded as homotopy categories, so the corresponding infinity-categorical viewpoint reveals their true nature. In the twenty-first century, <NAME> wrote _Higher Topos Theory_, organizing the study of infinity-categories and infinity-toposes.
= Outlook
Homotopy theory continues to see exciting developments today. In 2009, Voevodsky and others proposed homotopy type theory, which allows homotopy theory to be carried out axiomatically, independent of any particular model, just as Euclid's axioms let plane geometry dispense with a concrete construction of the real numbers. This type theory can also serve as a foundation for mathematics. The special year on univalent foundations held in 2012–13 advanced homotopy type theory greatly from many angles.
Of course, the development of homotopy theory also faces challenges. In _The Future of Homotopy Theory_, <NAME> raised several concerns about the discipline's development: for example, homotopy theory is still regarded as a subbranch of topology, even though the questions it cares about and the methods it uses have drifted rather far from topology, which is unhelpful for academic activities such as refereeing.
Homotopy theory has a remarkably rich history; it is quite rare for a field, within a relatively short time, to rewrite its foundational theory several times on the strength of new discoveries. At the same time, homotopy theory has broad connections with mathematics, physics, computer science, and other fields. All of this will fuel its vigorous development in the future.
|
|
https://github.com/Tiggax/famnit_typst_template | https://raw.githubusercontent.com/Tiggax/famnit_typst_template/main/lib.typ | typst | MIT No Attribution | // Made By <NAME>
// questions and suggestions => https://github.com/Tiggax/famnit_typst_template
#let col = (
gray: rgb(128,128,128),
)
#let todo = [
#set text(fill: red)
*TODO*
]
#let split_author(author) = {
let a_list = author.split(" ")
(name: a_list.at(0), surname: a_list.slice(1).join(" ") )
}
#let surname_i(author) = {
let author = split_author(author)
(author.surname, " ", author.name.at(0), ".").join()
}
#let project(
naslov: "Naslov zaključne naloge",
title: "Title of the final work",
ključne_besede: ("typst", "je", "zakon!"),
key_words: ("typst", "is", "awesome!"),
izvleček: [],
abstract: [],
author: "<NAME>",
studij: "Ime študijskega programa",
mentor: (
name: "<NAME>",
en: ("es","ee"),
sl: ("ss","se"),
),
somentor: none,
work_mentor: none,
kraj: "Koper",
date: datetime(day: 1, month: 1, year: 2024),
zahvala: none,
kratice: (
("short": "long")
),
priloge: (),
bib_file: none,
text_lang: "sl",
body,
) = {
let auth_dict = split_author(author)
set document(author: (author), title: naslov, keywords: ključne_besede)
let header(dsp) = [
#set text(size: 10pt, fill: col.gray, top-edge: "cap-height")
#set align(top)
#v(1.5cm)
#surname_i(author) #naslov.\
Univerza na Primorskem, Fakulteta za matematiko, naravoslovje in informacijske tehnologije, #date.year()
#h(1fr)
#counter(page).display(dsp)
#line(start: (0pt,-6pt), length: 100%, stroke: col.gray + 0.5pt)
]
// Displays the name with prepends and postpends
let display_name(name, lang: "sl") = {
[#name.at(lang).at(0) #name.name #name.at(lang).at(1)]
}
set page(
numbering: "1",
number-align: center,
margin: (left: 3cm, top: 3cm),
header: header("I"),
footer-descent: 1.5cm,
footer: []
)
set text(
font: "Times New Roman",
lang: text_lang,
size: 12pt
)
show footnote: it => text(size: 10pt, it)
set heading(numbering: "1.1")
show heading.where(level: 1): it => text(size: 14pt, weight: "bold",upper(it))
show heading.where(level: 2): it => text(size: 14pt, weight: "bold", it)
show heading.where(level: 3): it => text(size: 12pt, it)
show heading.where(level: 4): it => text(size: 12pt, weight: "regular", it)
show figure.caption: it => text(size: 10pt,it)
show bibliography: set heading(numbering: "1.1")
// --------- COVER PAGE --------------
page(header: none, margin: (bottom: 5cm))[
#set text(size: 14pt, spacing: 0.28em)
#set align(center)
UNIVERZA NA PRIMORSKEM\
FAKULTETA ZA MATEMATIKO, NARAVOSLOVJE IN\
INFORMACIJSKE TEHNOLOGIJE
#align(center + horizon)[
ZAKLJUČNA NALOGA
#if text_lang == "en" {
[(FINAL PROJECT PAPER)]
}
]
#align(center + horizon)[
#set text(size: 18pt)
#upper(naslov)
#if text_lang == "en" {
[(#upper(title))]
}
]
#set align(right + bottom)
#upper(author)
]
// --------- Header ---------------
page(header:none)[
#set text(size: 14pt)
#set align(center)
UNIVERZA NA PRIMORSKEM\
FAKULTETA ZA MATEMATIKO, NARAVOSLOVJE IN\
INFORMACIJSKE TEHNOLOGIJE
#align(center + horizon)[
#set text(size: 12pt)
Zaključna naloga
#if text_lang == "en" {
[\ (Final project paper)]
}
#text(size: 14pt)[*#naslov*]
(#title)
]
#v(5em)
#align(left)[
Ime in priimek: #author\
Študijski program: #studij\
Mentor: #display_name(mentor)\
#if somentor != none [Somentor: #display_name(somentor)\ ]
#if work_mentor != none [Delovni mentor: #display_name(work_mentor)\ ]
]
#align(bottom + center)[
#kraj, #date.year()
]
#counter(page).update(1)
]
// ----------- zahvala (acknowledgements) -----------------
if zahvala != none {
page()[
#text(weight: "bold", size: 18pt, if text_lang == "en" [Acknowledgement] else [Zahvala])
#zahvala
]
}
let item_counter(target, prefix) = context {
let cnt = counter(target).final().first()
if cnt > 0 {
let a = [#prefix: #cnt]
style( s => {
let m = measure(a, s)
a + h(11em - m.width)
})
}
}
let number_of_content() = context {
let p_cnt = counter(page)
[#p_cnt.at(query(<body_end>).first().location()).first()]
}
// ---- Key document information (Slovenian) ----
page()[
#h(1fr)*Ključna dokumentacijska informacija*
#box(
stroke: black + 0.5pt,
inset: 0.5em,
width: 100%,
)[
Ime in PRIIMEK: #auth_dict.name #upper(auth_dict.surname)
Naslov zaključne naloge: #naslov
#v(2em)
Kraj: #kraj
Leto: #date.year()
#v(2em)
#[Število strani: #number_of_content()]
#h(4.7em)
#item_counter(figure.where(kind: image), "Število slik")
#item_counter(figure.where(kind: table), "Število tabel")
#item_counter(figure.where(kind: "Priloga"), "Število prilog")
#item_counter(page, "Št. strani prilog")
#context {
let cnt = query(ref).filter(it => it.element == none).map(it => it.target).dedup().len()
if int(cnt) > 0 {
let a = [Število referenc: #cnt]
style( s => {
let m = measure(a, s)
a + h(11em - m.width)
})
}
}
Mentor: #display_name(mentor)\
#if somentor != none [Somentor: #display_name(somentor)\ ]
#if work_mentor != none [Delovni mentor: #display_name(work_mentor)\ ]
#v(2em)
Ključne besede: #ključne_besede.join(", ")
Izvleček:
#v(1em)
#izvleček
]
]
// ---- Key document information (English) ----
page()[
#h(1fr)*Key document information*
#box(
stroke: black + 0.5pt,
inset: 0.5em,
width: 100%,
)[
Name and SURNAME: #auth_dict.name #upper(auth_dict.surname)
Title of the final project paper: #title
#v(2em)
Place: #kraj
Year: #date.year()
#v(2em)
Number of pages: #number_of_content()
#h(3.1em)
#item_counter(figure.where(kind: image), "Number of figures")
#item_counter(figure.where(kind: table), "Number of tables")
#item_counter(figure.where(kind: "Priloga"), "Number of appendix")
#item_counter(page, "Number of appendix pages")
#context {
let cnt = query(ref).filter(it => it.element == none).map(it => it.target).dedup().len()
if int(cnt) > 0 {
let a = [Number of references: #cnt]
style( s => {
let m = measure(a, s)
a + h(11em - m.width)
})
}
}
Mentor: #display_name(mentor, lang: "en")\
#if somentor != none [Co-mentor: #display_name(somentor, lang: "en")\ ]
#if work_mentor != none [work mentor: #display_name(work_mentor, lang: "en")\ ]
#v(2em)
Keywords: #key_words.join(", ")
Abstract:
#v(1em)
#abstract
]
]
// -------- TABLES ----------
let tablepage(outlin) = context {
let count = counter(outlin.target).final().first()
if count != 0 {
page(header: header("I"), outline(..outlin))
} else {
none
}
}
set page(header: header("1"))
tablepage((target: heading, title: if text_lang =="sl" {"Kazalo vsebine"} else {"Table of contents"}))
tablepage((target: figure.where(kind: table), title: if text_lang == "sl" {"Kazalo preglednic"} else {"Index of tables"}))
tablepage((target: figure.where(kind: image), title: if text_lang == "sl" {"Kazalo slik in grafikonov"} else {"Index of images and graphs"}))
show outline.entry: it => {
let f = it.body.children
[\ ] + upper(f.at(0)) + [ ] + f.at(2) + h(2em) + f.at(4)
}
tablepage((
target: figure.where(kind: "Priloga"),
title: if text_lang == "sl" {
"Kazalo prilog"
} else {
"Index of Attachments"
},
))
// kratice (list of abbreviations)
if kratice != none {
page(header: header("I"))[
#upper(text(weight: "bold", size: 14pt, if text_lang == "en" [list of abbreviations] else [Seznam kratic]))
#kratice.pairs().map( ((short,desc)) => {
[/ #short: #desc #label(short)]
}).join("")
#counter(page).update(0)
]
}
show ref: it => {
if it.element in kratice.values().map(p => [#p]) {
let tar = str(it.target)
if it.citation.supplement != none {
let sup = it.citation.supplement
link(it.target)[#tar\-#sup]
} else {
link(it.target)[#tar]
}
} else {
it
}
}
show figure.where(kind: "Priloga"): it => {
}
// Main body.
set par(justify: true)
set page(header: header("1"))
[#metadata(none) <body_start>]
body
[#metadata(none) <body_end>]
pagebreak()
if bib_file != none {bib_file}
counter(page).update(0)
let priloga_counter = counter("priloga")
priloga_counter.step()
let priloga(content) = context {
let a = priloga_counter.get().first()
[
#figure(
supplement: if text_lang == "en" [Attachment] else [Priloga],
kind: "Priloga",
numbering: "A",
caption: text( style: "italic", content.at(0)),
[])
#label("priloga_" + str(a))
#content.at(1)
#priloga_counter.step()
]
}
set page(
header: align(right)[#if text_lang == "en" [Attachment] else [Priloga] #priloga_counter.display("A")],
header-ascent: 1cm,
)
for name in priloge {
priloga(name)
pagebreak(weak: true)
}
}
|
https://github.com/sysu/better-thesis | https://raw.githubusercontent.com/sysu/better-thesis/main/specifications/bachelor/cover.typ | typst | MIT License | #import "/utils/datetime-display.typ": datetime-display
#import "/utils/style.typ": 字号, 字体, sysucolor
// Cover page
#let cover(
info: (:),
// Other parameters
stoke-width: 0.5pt,
min-title-lines: 2,
info-inset: (x: 0pt, bottom: 1pt),
info-key-width: 72pt,
info-key-font: "黑体",
info-value-font: "宋体",
column-gutter: -3pt,
row-gutter: 11.5pt,
bold-info-keys: ("title",),
bold-level: "bold",
) = {
assert(type(info.title) == array)
assert(type(info.author) == dictionary)
info.title = info.title + range(min-title-lines - info.title.len()).map((it) => " ")
if type(info.submit-date) == datetime {
info.submit-date = datetime-display(info.submit-date)
}
// Built-in helper functions
let info-key(
font: 字体.at(info-key-font, default: "黑体"),
size: 字号.小三,
body,
) = {
rect(
width: 100%,
inset: info-inset,
stroke: none,
text(
font: font,
size: size,
body,
),
)
}
let info-value(
font: 字体.at(info-value-font, default: "宋体"),
size: 字号.小三,
key,
body,
) = {
set align(center)
rect(
width: 100%,
inset: info-inset,
stroke: (bottom: stoke-width + black),
text(
font: font,
size: size,
weight: if (key in bold-info-keys) { bold-level } else { "regular" },
bottom-edge: "descender",
body,
),
)
}
let info-long-value(
font: 字体.at(info-value-font, default: "宋体"),
size: 字号.小三,
key,
body,
) = {
grid.cell(colspan: 3,
info-value(
font: font,
size: size,
key,
body,
)
)
}
let info-short-value(
font: 字体.at(info-value-font, default: "宋体"),
size: 字号.小三,
key,
body
) = {
info-value(
font: font,
size: size,
key,
body,
)
}
// Actual rendering
// Center alignment
set align(center)
// University emblem on the cover
// Uses the official logo from the university's VI system; source: https://home3.sysu.edu.cn/sysuvi/index.html
image("/assets/vi/sysu_logo.svg", width: 3cm)
text(size: 字号.小初, font: 字体.宋体, weight: "bold", fill: sysucolor.green)[本科生毕业论文(设计)]
v(-2em)
line(length: 200%, stroke: 0.12cm + sysucolor.green);
v(-0.8em)
line(length: 200%, stroke: 0.05cm + sysucolor.green);
v(1.5cm)
// Thesis title
h(0.7cm)
block(width: 100%, grid(
columns: (25%, 1fr, 75%, 1fr),
column-gutter: column-gutter,
row-gutter: row-gutter,
info-key(size: 字号.二号, "题目:"),
..info.title.map((s) =>
info-long-value(size: 字号.二号, font: 字体.黑体, "title", s)
).intersperse(info-key(size: 字号.二号, " ")),
))
v(2.7cm)
// Student and supervisor information
set align(center + bottom)
block(width: 75%, grid(
columns: (info-key-width, 1fr, info-key-width, 1fr),
column-gutter: column-gutter,
row-gutter: row-gutter,
info-key("<NAME>"),
info-long-value("author", info.author.name),
info-key("学 号"),
info-long-value("student-id", info.author.sno),
info-key("院 系"),
info-long-value("department", info.author.department),
info-key("专 业"),
info-long-value("major", info.author.major),
info-key("指导教师"),
info-long-value("supervisor", info.supervisor.join(" ")),
))
v(2em)
text(font: 字体.黑体, size: 字号.小四)[#info.submit-date]
}
|
https://github.com/ralphmb/My-Dissertation | https://raw.githubusercontent.com/ralphmb/My-Dissertation/main/sections/regression.typ | typst | Creative Commons Zero v1.0 Universal | #show table: set text(8pt)
#show table: set align(center)
== Modelling match outcome
In this section we explore the use of logistic regression for modelling match outcomes in football.
In a Bernoulli-distributed process, the 'success' probability $p$ may depend on explanatory variables $X$.
Here $p_i in (0,1)$ is the probability of success for the $i$th subject, and $(p_i)/(1-p_i) in (0, infinity)$ is the odds of success. Taking the logarithm of this quantity gives the log-odds or 'logit', which lies in $(-infinity, infinity)$. The logit link function therefore allows us to model probabilities with regression, since we can write the model as follows:
$ log (p_i)/(1-p_i) = beta_0 + beta_1 X_(i 1) + beta_2 X_(i 2) + ... $
Given a list of observed outcomes $in {0,1}$ and the corresponding values of each variable, the coefficients $beta$ can be estimated by maximum likelihood. These coefficients have a natural interpretation in terms of the odds of success: a unit increase in the $i"th"$ variable multiplies the odds of success by a factor of $e^(beta_i)$.
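For a concrete reading, take the home-points coefficient of 0.033 fitted later in this section (used here purely as an illustration): each additional point the home team earned in the previous season multiplies their odds of winning by
$ e^(0.033) approx 1.03, $
roughly a 3% increase in the odds per point.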
An alternate way to express the model is to solve for $p_i$.
$ p_i = (1)/(1 + e^(-beta_0 - beta_1 X_(i 1) - ...)) $
As mentioned in the literature review, match outcome in football is three-valued, but by grouping home losses with draws we can treat it as a binary response (hence in the R code this is called `result_bin`). Using logistic regression we can then predict the chance of a home win for given values of the modelled variables, and examine the effect each variable has on the odds of winning.
First of all we will fit a full model, using most of the variables examined in the previous section. We will test whether red cards given to either side have an effect on the home team's chances of winning. As the number of matches with any red cards is already fairly small, we will use 1/0-valued variables denoting whether or not any were given to each side, rather than the number. Distance travelled by the away team will, as before, be grouped into a categorical variable based on whether it was above or below the median distance of 169km (the distance between the stadia of Arsenal and Nottingham Forest). Distance could instead be used as a raw kilometre value, or undergo a log-transformation. Since it seems unlikely that a $1 arrow.r.bar 2$ increase in distance would have the same effect on win-odds as a $100 arrow.r.bar 101$ increase, we won't use the raw value, though the log-transformation could have more interesting results. The points each team scored in the previous (2021 - 2022) season will be included, as will a categorical variable denoting whether or not the match was a derby. A categorical variable denoting whether the match occurred in the first or second half of the season will also be included.
We considered normalising the points variables. This would make the results more interpretable when all covariates are set to zero, but since the resulting model would make identical predictions, we decided against it.
#table(
columns:4,
[Variable], [Estimate], [Std. Error], [p],
[(Intercept)], [-1.527], [0.698], [0.029],
[Away team red card (yes:1)], [0.276], [0.769], [0.720],
[Home team red card (yes:1)], [-1.405], [0.683], [0.040],
[Distance grouping (far:1)], [0.404], [0.285], [0.160],
[Away points], [-0.014], [0.008], [0.081],
[Home Points], [0.033], [0.008], [0.000],
[Derby (yes:1)], [-0.087], [0.429], [0.840],
[Season half (later:1)], [0.370], [0.259], [0.150]
)
As can be seen from the p-value column, matches being derbies and red cards awarded to away teams seem to have little effect on match outcome. For derby matches this could be due to a correlation with the `distance_grouping` variable - rival teams are generally very geographically close to one another - raising the p-value. While red cards seem like an obvious boon to the other team, the sample size here might make the true effect size hard to uncover, as only 9 matches in this data set had any red cards given to the away team (18 to home teams).
Other variables have questionable significance, but there's an argument towards halving some of these p-values and treating them as one-tailed tests. The season half (`late_season`) seems like it could affect results in either direction, so a two-tailed test is appropriate for that variable. Red cards could be argued to be inherently good for the opposition so we will treat those as one-tailed, and the same for the 2021 - 2022 season league points variables and the closer/farther distance variable, on the grounds that the away team would be less/more tired from the journey.
To build a more justified model, we will remove derbies and away-team red cards as variables. Though the distance grouping and season half have higher p-values, these will be kept. Season half might have some interesting interaction with other variables, and it may be worth checking whether distance becomes more relevant in the absence of the related derby variable.
A call to `glm` using the formula `result_bin ~ red_card_home + distance_grouping + opponent_points2021 + points2021 + late_season` gives a model with the following coefficients.
#table(
columns: 4,
[Variable], [Estimate], [Std. Error], [p],
[(Intercept)], [-1.511], [0.696], [0.030],
[Home team red card (yes:1)], [-1.386], [0.682], [0.042],
[Distance grouping (far:1)], [0.437], [0.258], [0.091],
[Away points], [-0.014], [0.008], [0.073],
[Home points], [0.033], [0.008], [0.000],
[Season half (later:1)], [0.372], [0.258], [0.150]
)
The AIC value of this model is 356.66, down from 360.5. Of these coefficients, home team red cards and home team points are unquestionably significant. The distance variable has a 1-sided p-value of 0.0453, and the away team points of 0.0365, justifying their inclusion.
Season half still does not pass the 5% bar. We also fit a model with formula `result_bin ~ (red_card_home + distance_grouping + opponent_points2021 + points2021) * late_season`; none of the interaction terms were significant (the `points2021` interaction had the lowest p-value, at around 73%), and models containing a single interaction with season half fared no better.
In light of the irrelevant interactions, and since the season-half variable has shown no improvement in significance, we will try removing it.
#table(
columns: 4,
[Variable], [Estimate], [Std. Error], [p],
[(Intercept)], [-1.322], [0.680], [0.052],
[Home team red card (yes:1)], [-1.322], [0.684], [0.053],
[Away points], [-0.014], [0.008], [0.069],
[Home points], [0.033], [0.008], [0.000],
  [Distance grouping (far:1)], [0.423], [0.257], [0.100],
)
The AIC of this model is 356.75, ever-so-slightly higher than the previous. We can calculate a McFadden $R^(2)$ for this model of $0.078$, down slightly from the values of 0.085 and 0.084 for the previous two models. This $R^2$ is quite low, suggesting much of the variation in the data is yet unaccounted for; a McFadden $R^(2)$ in the 0.2 to 0.4 range is considered an excellent fit @mcfadden. The significance of the distance grouping is right on the 5% one-sided borderline, but for the sake of a more interesting model we decide to keep it. Given that all the other variables are significant we will proceed with this model. /*Since our goal isn't particularly to predict the results of matches (for which we might prefer a model that captures more information), but to quantify the effect that different variables have on match outcome, we will proceed with this third model.*/
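The McFadden $R^(2)$ quoted here can be computed directly from the fitted and null log-likelihoods; a sketch, where `model` stands for the glm object of this third specification (the object name is ours):
```r
# McFadden pseudo R-squared: 1 - logLik(fitted) / logLik(intercept-only)
null_model <- update(model, . ~ 1)
1 - as.numeric(logLik(model)) / as.numeric(logLik(null_model))
```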
The coefficient of home team cards is remarkably similar to the intercept term, -1.3221 vs -1.3218, but as far as we can tell this is just coincidence.
Taking $exp$ on each coefficient we can see the effects of each variable on the odds.
Red cards are associated with a reduction in the odds of a home win by a factor of $exp(-1.322) = 0.267$, which makes sense given the home team would be playing at a one-player disadvantage for some portion of the match. As a heuristic, we can check this value as follows. Home teams won 129 of 272 total matches, and of the 15 where they were given red cards they won 3. The odds of each event (win in general, win with red card) are thus 0.902 and 0.25, and we see that $exp(-1.322) times 0.902 = 0.24 approx 0.25$.
/*Away teams can be seen to suffer a greater disadvantage when they must travel farther to attend the match. The p-value is right on the borderline, but for the sake of a more interesting model we decide to keep it.*/
Each additional point the home team achieved in the previous season corresponds to a 1.03 times greater odds of winning, whereas each such point by the opponent multiplies the odds by 0.986. We can see confidence interval bounds on these odds ratios in the table below. The mean column gives point estimates for the odds multiplier $exp(beta_i)$ of each variable, and the lower and upper columns give 95% confidence interval bounds on these.
#table(
columns: 4,
[Variable], [Lower], [Mean], [Upper],
[Red Card Home], [0.070], [0.267], [1.011],
[Away Points], [0.970], [0.986], [1.001],
[Home Points], [1.017], [1.034], [1.050],
[Distance], [0.922], [1.526], [2.526]
)
The small number of matches with red cards given out leads to the bounds on this multiplier being quite wide. The borderline two-sided p-value given earlier (5.3%) shows up here in the fact that the uppermost bound ($z = 1.96$) on this quantity only just exceeds 1. The bounds in general are quite wide. In the case of red cards this is likely a consequence of the low sample size. For the other variables this could be because of a slight lack of predictive power: points as a proxy for strength isn't a perfect system, both because a team's performance naturally changes between and within seasons, and because points aren't solely determined by strength (for instance they can be deducted when teams break rules).
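The table above can be reproduced from the fitted object along these lines; we show Wald-type intervals here, so profile-likelihood intervals from R's `confint()` may differ slightly in the last decimal place.
```r
# Point estimates and 95% Wald intervals, exponentiated onto the odds scale
se <- sqrt(diag(vcov(model)))
or_table <- exp(cbind(
  Lower = coef(model) - 1.96 * se,
  Mean  = coef(model),
  Upper = coef(model) + 1.96 * se
))
round(or_table, 3)
```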
As we have 4 variables it's difficult to plot the probability of a home win, however assuming no red cards are given we can provide two contour plots of $Pr("Home Win")$ against the points of each team.
In @contourcloser we set the distance grouping variable to 0, representing a closer-located away team, and in @contourfarther we set the variable to 1.
The bounds on each axis correspond to just either side of the minimum and maximum points values attained in the previous league season, 38 and 93. Three solid lines have also been plotted, each a level curve at which a match has a home win probability at 25, 50, 75%. The dotted line is $x=y$, where the 'strength' of each team is equal.
#figure(
image("../assets/contour_plot_closer.png", fit: "contain"),
) <contourcloser>
#figure(
image("../assets/contour_plot_farther.png", fit: "contain"),
) <contourfarther>
The larger coefficient of home team points vs away team points is visible here as the steeper slope of the level curves compared to $x=y$. This corresponds to stronger home teams expecting a higher chance of victory even when playing against equal opposition, as compared to weaker home teams.
We can see the effect of location as well, with the 25% home-win line almost invisible when the home team plays more distant opposition.
Home advantage is tricky to quantify using this model due to the choice of response variable, which groups the neutral outcome of a tie with the negative outcome of loss. To make this somewhat easier to reason about we can try to fit a different class of model, one that can be fit against all three possible outcomes.
== Ordered logistic regression models
Ordered logistic regression models (OLRMs) are a way to generalise logistic regression, and they can be used to model situations with $>2$ ordered outcomes.
OLRMs have quite a similar setup to regular logistic regression. Binary logistic regression can be interpreted as a so-called latent variable model, whereby we define an unobserved variable $Y^(*)$ corresponding to the observed outcome $Y$ as follows. Let $Y^(*) = beta_0 + beta_1 X_1 + beta_2 X_2 + ... + epsilon$, with $epsilon$ a standard logistic error term and $X_i$ the $i"th"$ variable. Coefficients $beta$ are estimated such that $Y^(*)$ is positive when the observed outcome $Y=1$, and negative otherwise. $Y^(*)$ can be seen as a continuous version of the discrete outcome, on the log-odds scale.
We can suggestively rewrite the latent variable formulation of binary logistic regression as
$ Y = 1 arrow.r.double Y^(*)- beta_0 = sum_(i) beta_i X_i > -beta_0 $
The quantity $Y^(*) - beta_0$ therefore corresponds to different observed outcomes based on how it compares to $-beta_0$, which acts as a threshold value.\
In ordered logistic regression, a similar process is followed. Instead of observing outcomes $Y in {0,1}$, we observe a number of ordered outcomes, labelled ${1,..., k,... ,N}$, and find coefficients $beta_i, mu_k$ such that $mu_(k-1) <= Y^(*) = sum_(i) beta_(i) X_(i) <= mu_(k)$ corresponds to the observed outcome $Y = k$. The values $mu_k$ therefore take the place of $beta_0$ as threshold values marking different outcomes. The odds and probabilities of each outcome can be calculated using the following formula, reminiscent of $Y^(*)-beta_0$ in the previous paragraph:
$ "logit"(Pr(Y <= k)) = mu_k - Y^(*) $
#table(
columns:4,
table.cell([Coefficients:], align: center, colspan:4 ),
[Variable], [Value], [Std. Error], [t value],
[Home red card], [-0.915], [0.484], [-1.892],
[Home Points], [0.027], [0.008], [3.548],
[Away Points], [-0.006], [0.007], [-0.813],
[Distance group], [0.266], [0.232], [1.147],
table.cell([Intercepts:], align: center, colspan: 4),
[Boundary], [Value], [Std. Error], [t value],
[Loss|Draw], [0.068], [0.627], [0.108],
[Draw|Win], [1.405], [0.632], [2.223]
) // olog_model_2 in code
Here we will try once more to fit a model to match outcome in football, this time respecting all three possible outcomes.
We can fit an OLRM in R using the `polr` function in the `MASS` package. For the sake of comparison we'll use a model fit against the same variables as the previous. Coefficients can be found in the table above.
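A sketch of the corresponding call follows; the data frame and the three-level ordered factor `result` (loss, draw, win, in that order) are named illustratively, and `Hess = TRUE` is required for the standard errors.
```r
library(MASS)
# result must be an ordered factor: loss, then draw, then win
olrm <- polr(
  result ~ red_card_home + points2021 + opponent_points2021 + distance_grouping,
  data = matches, Hess = TRUE
)
summary(olrm)  # slope coefficients plus the two cut-point intercepts
```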
The low t-values clearly hint at low confidence in these estimated values, a few of which are smaller in magnitude than their standard errors. Nonetheless we can see the outcome this model would predict for a hypothetical game. For a game between Chelsea (74 pts, home) and Arsenal (69 pts) with no red cards we would see
$ hat(Y^(*)) = sum beta_i X_i = (&0.027 * 74 \
&-0.006 * 69 \
&+ 0.266 * 0 \
&- 0.915 * 0) = 1.677 $
This value 1.677 is greater than 1.405, hence the match is most likely a home win. We can also look at the probabilities predicted for this game to fall into each outcome. Given the boundaries $mu_k$ demarcating each outcome are $-infinity, 0.068, 1.405, infinity$, we can extract each probability as:
$ Pr(Y = k) &= Pr(Y <= k) - Pr(Y <= k-1)\
&="invlogit"(mu_k - hat(Y^(*)))\
&- "invlogit"(mu_(k-1) -hat(Y^(*))) $
//https://www.stata.com/manuals/rologit.pdf
Where $"invlogit"$ is the inverse logit function, $x arrow.bar (1+exp(-x))^(-1)$.
Thus the probability of a home loss would be $"invlogit"(0.068 -1.677) - "invlogit"(-infinity - 1.677) = 0.17$ (taking limits where needed), similarly $p=0.27$ for a draw and 0.56 for a home win.
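The same arithmetic can be done in R with `plogis()` as the inverse logit; the rounded cut-points and linear predictor below come from this example, so the probabilities only match to rounding error.
```r
mu  <- c(-Inf, 0.068, 1.405, Inf)  # boundaries for loss / draw / win
eta <- 1.677                       # linear predictor for this fixture
diff(plogis(mu - eta))             # P(loss), P(draw), P(win)
# predict(olrm, newdata = ..., type = "probs") returns these probabilities directly
```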
In order for this class of models to be appropriate, the data must satisfy the proportional odds assumption.
This assumption mandates that the odds ratio between outcomes "$k_1$ or less" and "$k_2$ or less" must remain constant over different values of the independent variables.
There are a few suggested practical methods for assessing whether this assumption holds over a data set.
An online search suggests either graphical methods via plotting cumulative probability against each predictor @uclalogit, performing a likelihood ratio test between the OLRM and a multinomial regression fit using the same variables, or the Brant test @brant, implemented in R in the `brant` package. \
Multinomial logistic regression models are similar to OLRMs. In essence, separate logistic regression models are fitted for each outcome, the results of each giving the odds of a given subject falling into each outcome. This allows for modelling outcomes that aren't ordered, at the cost of estimating multiple coefficients for each variable. The likelihood ratio test then looks at whether this more flexible model fits the data sufficiently better. The Brant test takes the same approach, but performs separate tests on each variable.
/*https://stackoverflow.com/questions/37016215/testing-the-proportional-odds-assumption-in-r*/
We can perform the LRT fairly easily. The test has null hypothesis that the odds are indeed proportional. We find deviance values of 553.04 and 537.47, with degrees of freedom 12 and 20 for the OLRM and multinomial models respectively. This corresponds to a $chi^(2)$ test statistic of 15.55 on 8 d.f., higher than the critical value of $chi^(2)_(0.05) = 15.51$, hence we should assume that our data do not satisfy the proportional odds assumption.
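A sketch of that comparison, fitting the multinomial model with `nnet::multinom()` and referring the deviance difference to a $chi^(2)$ distribution on the difference in parameter counts; the `brant` package performs the per-variable test.
```r
library(nnet)
multi <- multinom(result ~ red_card_home + points2021 +
                    opponent_points2021 + distance_grouping, data = matches)
lr_stat <- olrm$deviance - multi$deviance  # compare with the values quoted above
lr_df   <- multi$edf - olrm$edf            # extra parameters in the multinomial fit
pchisq(lr_stat, df = lr_df, lower.tail = FALSE)
brant::brant(olrm)                         # variable-by-variable proportional odds test
```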
The Brant test confirms this result.
#table(
columns: 4,
[Test for], [X2], [df], [p],
[Omnibus], [17.040], [4], [0.000],
[Home red cards (yes:1)], [0.600], [1], [0.440],
[Home points], [7.240], [1], [0.010],
[Away points], [5.620], [1], [0.020],
[Distance grouping (far:1)], [2.680], [1], [0.100]
)
The null hypothesis is again that the odds are proportional, and we can see the offending variables are the points of either team. We can try fitting a new model, without the raw points values and instead using the above/below median grouping by points variable that we saw in the exploratory analysis section.
#table(
  columns: 4,
  [Test for], [X2], [df], [p],
  [Omnibus], [6.030], [4], [0.200],
  [Home red card (yes:1)], [0.730], [1], [0.390],
  [Home point grouping (lower:1)], [1.010], [1], [0.310],
  [Away point grouping (lower:1)], [1.690], [1], [0.190],
  [Distance grouping (far:1)], [2.290], [1], [0.130],
)
And we see that the factor model can satisfy the assumptions required for OLR. This model ignores a lot of information by disregarding the more granular raw points values; however, the fact that it meets the assumptions does raise our confidence in its results.
== Forecast Performance
We can test the forecast performance of all the models seen in this section. To avoid testing on the same data the models were trained on, we split our current dataset in two, 20% into the testing portion, assigned at random. New models with the same specifications were trained on the remaining 80% of the data. The coefficients in these testing models are fairly similar to the previous so we won't go into detail on them, though for obvious reasons the errors are wider.
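A sketch of that split-and-refit procedure for the binary model, where `model` is the glm from earlier, `matches` is the assumed data frame name, and the 0.5 cut-off for calling a home win is our own choice:
```r
set.seed(1)  # arbitrary seed for the random 80/20 split
test_idx <- sample(nrow(matches), size = round(0.2 * nrow(matches)))
train <- matches[-test_idx, ]
test  <- matches[test_idx, ]
refit <- update(model, data = train)  # same specification, fitted on 80% of matches
pred  <- predict(refit, newdata = test, type = "response") > 0.5
table(Predicted = pred, Actual = test$result_bin)
```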
Because the split is random rather than chronological, the relevance of these results to genuine forecasting is somewhat murky. If we wished to simulate actual predictions, say predicting future matches in the league, we could have split the matches at a particular date and time. Though the significance of season time was shown earlier to be low, we thought it might bias the results to some degree. We could also have used the previous models to predict matches in the following league season; between the change in teams due to promotion/relegation and the even more out-of-date points values, we decided to stick to the 2021-2022 season.
Out of 62 matches in the testing set, we see 26 wins, 20 ties and 16 home losses. In this sample tied results are very overrepresented, in general being rarer than losses, but the win:loss ratio remains similar, 1.63 vs 1.65 in the season overall.\
For the binary logistic regression model we see the following results.
#table(
columns:3,
[Pred/Actual], [Home Win],[Not],
[Home Win], [15], [11],
[Not], [11], [25]
)
So 40 out of 62 matches are correctly predicted. The model seems to err very symmetrically, though the tie-bias in the data is likely counteracting the bias inherent in the model. Below is the table for the ordinal logistic regression model that failed the proportional odds test earlier, included for comparison.
#table(
columns:4,
[Pred/Actual], [Win],[Tie],[Loss],
[Win], [25], [18], [13],
[Tie], [0], [2], [1],
[Loss], [1], [0], [2]
)
And for the categorical (factor) model that passes the Brant test, we see the following predictions.
#table(
columns:4,
[Pred/Actual], [Win],[Tie],[Loss],
[Win], [25], [19], [15],
[Tie], [0], [0], [0],
[Loss], [1], [1], [1]
)
We can see that both OLRMs are heavily skewed towards predicting home wins. Given the similarity of results we probably can't attribute this to the difference in variable choices, continuous or categorical descriptions of points. The large standard errors we saw earlier on the $mu_k$ threshold values are probably to blame. While it would lack any theoretical justification we could probably see more accurate results by increasing the values of both thresholds (between loss and tie, tie and win) as well as the distance between them.
A better solution might involve an entirely different quantification of "strength". Restricting ourselves to data also from the previous season, perhaps the team's ranking may be better, or strength could be calculated using the $alpha$ values estimated in a Bradley-Terry like model trained on last-season performance.
Of course, incorporating bookmaker's odds or even just results from previous matches in the season would likely yield much more capable models, so if we wanted more positive results those are the directions in which we'd be most likely to look. |
https://github.com/lebinyu/typst-thesis-template | https://raw.githubusercontent.com/lebinyu/typst-thesis-template/main/template/chapter_style.typ | typst | Apache License 2.0 | // import heading style
#import "global_style.typ": *
#let chapterpage(
chapterheading: "",
chaptnumber: int,
introduction: "",
mainbody: "",
reference: "",
title:""
) = {
set page(
margin: (x: 3cm, y:3cm),
numbering: "1",
)
set page(footer: {
counter(page).display((n) => {
let side = if calc.rem(n, 2) == 0 { left } else { right }
align(side, numbering("1", n))
})
})
styleheading_chapter(chapterheading,chaptnumber)
align(bottom)[#text(introduction)]
pagebreak(weak: true)
set page(header: {
counter(page).display((n) => {
let side = if calc.rem(n, 2) == 0 { left } else { right }
let headtext = if calc.rem(n, 2) == 0 { chapterheading } else { title }
align(side, text(baseline: 10pt)[#headtext])
line(length: 100%)
})
})
mainbody
pagebreak(weak: true)
reference
} |
https://github.com/r4ai/typst-code-info | https://raw.githubusercontent.com/r4ai/typst-code-info/main/.github/fixtures/line-numbers.typ | typst | MIT License | #import "../../plugin.typ": init-code-info, code-info
#show: init-code-info.with()
#code-info(show-line-numbers: true)
```rust
pub fn add(a: i32, b: i32) -> i32 {
a + b
}
pub fn sub(a: i32, b: i32) -> i32 {
a - b
}
pub fn mul(a: i32, b: i32) -> i32 {
a * b
}
pub fn div(a: i32, b: i32) -> i32 {
a / b
}
```
|
https://github.com/stat20/stat20handout-typst | https://raw.githubusercontent.com/stat20/stat20handout-typst/main/_extensions/stat20handout/typst-template.typ | typst |
#let stat20handout(
title: none,
title-prefix: none,
course-name: none,
semester: none,
cols: 1,
margin: (x: 1in, bottom: 1in, top: 1in),
paper: "us-letter",
font: (),
fontsize: 11pt,
sectionnumbering: none,
doc,
) = {
set page(
paper: paper,
margin: margin,
header: underline(offset: 5pt, smallcaps(
context {
let delimer = if title-prefix == none {
none
} else {
str(":")
};
if counter(page).get().first() == 1 [
#set text(12pt)
#title-prefix#delimer #title
#h(1fr)
Names:
#h(150pt)
]
})),
header-ascent: 40%,
footer: align(center, smallcaps([
#course-name #semester
]))
)
set par(justify: true,
leading: .7em)
set text(font: font,
size: fontsize)
set heading(numbering: sectionnumbering)
if cols == 1 {
doc
} else {
columns(cols, doc)
}
}
|
|
https://github.com/kdog3682/2024-typst | https://raw.githubusercontent.com/kdog3682/2024-typst/main/src/canva.typ | typst | #import "@preview/cetz:0.2.0"
#import "base-utils.typ": *
#set page(width: 8.5in, height: 11in, margin: 0pt)
#let create-page(dimensions, margin) = {
// not doing anything with margins yet
let o = (0, 0)
let (ox, oy) = o
// let ne = dimensions.map(resolve-inches)
let ne = dimensions
let (width, height) = ne
let center = ne.map((x) => x / 2)
let (half-width, half-height) = center
let se = (width, oy)
let nw = (ox, height)
let positive-diagonal = (o, ne)
let negative-diagonal = (nw, se)
let dimensions = (o, ne)
let center = (half-width, half-height)
let west = (ox, half-height)
let east = (width, half-height)
let south = (half-width, oy)
let north = (half-width, height)
return (
positive-diagonal: positive-diagonal,
negative-diagonal: negative-diagonal,
dimensions: dimensions,
center: center,
west: west,
east: east,
south: south,
north: north,
)
}
#let canva(fn, ..sink) = {
let kwargs = sink.named()
let box-attrs = (
// stroke: black,
stroke: none,
inset: 0pt,
radius: 0pt,
fill: none,
outset: 0pt,
)
let kwargs = sink.named()
let dimensions = kwargs.at("dimensions", default: (8.5in, 11in))
let margin = kwargs.at("margin", default: 0.5)
// let k = 4
// let k = 1
// dimensions = dimensions.map((x) => x * k)
// margin *= k
// let margin = resolve-inches(margin)
let length = 1in
let page = create-page(dimensions, margin)
// panic(page, length)
let canvas-default-attrs = (
length: length,
// debug: true,
)
let canvas-attrs = assign-fresh(kwargs, canvas-default-attrs)
let c = cetz.canvas(..canvas-attrs, fn(cetz.draw, page))
return box(..box-attrs, c)
}
#let test-create(draw, page) = {
draw.set-style(
rect: (
fill: red,
stroke: none
),
line: (
fill: blue,
stroke: (dash: "dashed")
),
grid: (
stroke: 0.5pt
)
)
// draw.circle(page.center)
// panic(page.dimensions)
// draw.grid(..page.dimensions)
// draw.grid((-1, 1), (8.5, 11))
// draw.rect((0, 0), (1, 1))
// return
for (k, v) in page {
if is-nested-array(v) {
draw.line(..v)
} else {
draw.circle(v)
}
}
}
// the canvas is not directly aligned ... it is okay
// can do this later
#canva(test-create)
// useful helper information is contained in page
// page.origin
// page.center
// page.north
// these are coordinate points perhaps
#let draw-lines(draw, page) = {
draw.line(..page.positive-diagonal)
draw.line(..page.negative-diagonal)
draw.grid(..page.dimensions)
}
#import "canva.typ": canva
#let cetz-background(create) = {
return canva(create, length: 1in, background: blue.lighten(75%))
}
|
|
https://github.com/7sDream/fonts-and-layout-zhCN | https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/05-features/shaping/slide-9.typ | typst | Other | #import "/lib/draw.typ": *
#import "/lib/glossary.typ": tr
#let start = (0, 0)
#let end = (1000, 660)
#let table-border-color = rgb("5595c2")
#let table-gray-color = rgb("e4e5e9")
#let arrow-color = rgb("309843")
#let feature-color = rgb("2e7cac")
#let lookup1-color = rgb("2d9641")
#let lookup2-color = rgb("c4b455")
#let no-hlines-pen = (pen, rows) => (x, y) => if y == 0 {
(bottom: none, rest: pen)
} else if y + 1 == rows {
(top: none, rest: pen)
} else {
(top: none, bottom: none, rest: pen)
}
#let graph = with-unit((ux, uy) => {
// mesh(start, end, (100, 100), stroke: 1 * ux + gray)
let glyph-stream = "official".codepoints()
let columns = glyph-stream.len()
let pen = 2 * ux + table-border-color
txt(
block(width: 955 * ux, table(
columns: (1fr,) * columns,
align: horizon + center,
inset: 0pt,
fill: (x, y) => if y > 0 { table-gray-color } else { white },
stroke: no-hlines-pen(pen, 2),
..glyph-stream.map(it => block(height: 50*ux, spacing: 0pt, text(fill: black, size: 32*ux, it))),
block(height: 50*ux, spacing: 0pt),
)),
(30, 652),
anchor: "lt",
)
arrow((210, 682-243), (210, 682-135), stroke: 30*ux + arrow-color, head-scale: 1.4)
rect((100, 682-255), end: (900, -50), radius: 50, fill: feature-color, shadow: (:))
txt(text(fill: white)[特性], (150, 400), anchor: "lt", size: 42*ux)
rect((175, 682-328), end: (760, 682-520), radius: 20, fill: lookup1-color, shadow: (:))
txt(text(fill: white)[#tr[lookup]1], (190, 682-345), anchor: "lt", size: 42*ux)
txt(text(fill: white)[
- 规则:`sub f f i by f_f_i`
- 规则:`sub f f by f_f`
- 规则:`sub f l by f_l`
], (190, 682-410), anchor: "lt", size: 24*ux)
rect((175, 682-530), end: (760, 682-665), radius: 20, fill: lookup2-color, shadow: (:))
txt(text(fill: white)[#tr[lookup]2], (190, 682-542), anchor: "lt", size: 42*ux)
txt(text(fill: white)[
- 规则
], (190, 682-610), anchor: "lt", size: 24*ux)
})
#canvas(
end,
start: start,
width: 100%,
clip: true,
graph,
)
|
https://github.com/antonWetzel/prettypst | https://raw.githubusercontent.com/antonWetzel/prettypst/master/test/default/columns.typ | typst | MIT License | #table(
columns: (1fr, 1fr),
[abcedf], [b],
[c], [d],
[e],
)
#table(
columns: (1fr, 1fr, 1fr),
[a000000], [b], [c0],
[d00], [e], [],
)
#table(
columns: (1fr, 1fr),
[C], [D],
[EEEEE], [FF],
)
#table(
columns: (1fr, 1fr),
table.header[A][B],
[C], [D],
)
#table(
columns: (1fr, 1fr),
table.cell(rowspan: 2)[AC], [B],
[D],
)
#table(
columns: (1fr, 1fr, 1fr, 1fr),
table.cell(colspan: 2)[AB], [C], [D],
[E], table.cell(colspan: 2)[FG], [H],
[I], [J], table.cell(colspan: 2)[KL],
)
#table(
columns: (1fr, 1fr, 1fr),
table.cell(rowspan: 2)[AD], [B], [C],
table.cell(rowspan: 2)[EH], [F],
[G], table.cell(rowspan: 2)[HK],
[I], [J],
)
#table(
columns: (1fr, 1fr, auto),
table.header([*Product*], [*Category*], [*Price*]),
[Apples], [Produce], [\$1.23],
[Oranges], [Produce], [\$3.52],
table.cell(colspan: 2)[*Produce Subtotal*], [*\$4.75*],
[iPhone], [Electronics], [\$1000.00],
table.footer(table.cell(colspan: 2)[*Total*], [*\$1004.75*]),
)
|
https://github.com/undefik/jconv | https://raw.githubusercontent.com/undefik/jconv/master/README.md | markdown | The Unlicense | # JConv - a simple Jupyer notebook converter for Typst
This utility leverages the sourcerer, ansi-render, based, cmarker and showybox packages to convert basic Jupyter notebook files to PDF.
## Usage
First, you need to import the file:
```typst
#import "jconv.typ": jconv
```
To output the notebook's contents as Typst content, you must supply the `jconv` function with a dictionary, like so:
```typst
#jconv(json("Untitled.ipynb"))
```
## Known issues
- Only PNG image output is currently working. Other image types will probably be added later.
- The cmarker package only converts the Markdown features present in the CommonMark spec, which happens to exclude math equations (converting those would be a nightmare, anyway)
## Other packages used in this package
- [sourcerer](https://typst.app/universe/package/sourcerer) - source code renderer
- [ansi-render](https://typst.app/universe/package/ansi-render) - ANSI output renderer
- [based](https://typst.app/universe/package/based) - Base64 conversion of images
- [cmarker](https://typst.app/universe/package/cmarker) - CommonMark renderer
- [showybox](https://typst.app/universe/package/showybox) - Used for boxes around Markdown and output blocks
|
https://github.com/j10ccc/algorithm-analysis-homework-template-typst | https://raw.githubusercontent.com/j10ccc/algorithm-analysis-homework-template-typst/main/config.typ | typst | #let frontmatter = (
name: [xxx],
student_number: [xxxxxxxxxxxx]
)
|
|
https://github.com/ClazyChen/Table-Tennis-Rankings | https://raw.githubusercontent.com/ClazyChen/Table-Tennis-Rankings/main/history/2019/MS-05.typ | typst |
#set text(font: ("Courier New", "NSimSun"))
#figure(
caption: "Men's Singles (1 - 32)",
table(
columns: 4,
[Ranking], [Player], [Country/Region], [Rating],
[1], [<NAME>], [CHN], [3659],
[2], [<NAME>], [CHN], [3473],
[3], [<NAME>], [CHN], [3312],
[4], [<NAME>], [CHN], [3303],
[5], [<NAME>], [CHN], [3224],
[6], [<NAME>], [SWE], [3189],
[7], [<NAME>], [JPN], [3183],
[8], [<NAME>], [GER], [3156],
[9], [<NAME>], [BRA], [3154],
[10], [#text(gray, "<NAME>")], [CHN], [3104],
[11], [<NAME>], [KOR], [3102],
[12], [<NAME>], [BLR], [3086],
[13], [MIZUTANI Jun], [JPN], [3085],
[14], [ZHOU Yu], [CHN], [3079],
[15], [KANAMITSU Koyo], [JPN], [3077],
[16], [NIWA Koki], [JPN], [3070],
[17], [<NAME>-Ju], [TPE], [3043],
[18], [GAUZY Simon], [FRA], [3037],
[19], [FANG Bo], [CHN], [3036],
[20], [YAN An], [CHN], [3027],
[21], [JEOUNG Youngsik], [KOR], [3018],
[22], [<NAME>], [CHN], [3002],
[23], [<NAME>], [KOR], [3000],
[24], [OVTCHAROV Dimitrij], [GER], [2987],
[25], [<NAME>], [CHN], [2970],
[26], [<NAME>], [KOR], [2964],
[27], [<NAME>ihao], [CHN], [2955],
[28], [<NAME>ang], [CHN], [2955],
[29], [<NAME>], [CRO], [2942],
[30], [<NAME>], [KOR], [2936],
[31], [<NAME>], [CHN], [2934],
[32], [<NAME>], [GER], [2931],
)
)#pagebreak()
#set text(font: ("Courier New", "NSimSun"))
#figure(
caption: "Men's Singles (33 - 64)",
table(
columns: 4,
[Ranking], [Player], [Country/Region], [Rating],
[33], [<NAME>], [POR], [2928],
[34], [#text(gray, "<NAME>un")], [KOR], [2927],
[35], [<NAME>], [GER], [2920],
[36], [<NAME>], [JPN], [2917],
[37], [<NAME>], [JPN], [2915],
[38], [<NAME>], [ENG], [2914],
[39], [UEDA Jin], [JPN], [2910],
[40], [<NAME>], [JPN], [2888],
[41], [<NAME>], [SVK], [2881],
[42], [YOSHIMURA Maharu], [JPN], [2881],
[43], [<NAME>], [KOR], [2868],
[44], [<NAME>], [IND], [2860],
[45], [ZHU Linfeng], [CHN], [2856],
[46], [XU Chenhao], [CHN], [2853],
[47], [<NAME>], [FRA], [2847],
[48], [<NAME>], [GER], [2846],
[49], [<NAME>], [BEL], [2846],
[50], [<NAME>], [GRE], [2837],
[51], [<NAME>], [JPN], [2837],
[52], [<NAME>], [NGR], [2833],
[53], [<NAME>], [CRO], [2822],
[54], [<NAME>], [SWE], [2820],
[55], [ZHAO Zihao], [CHN], [2812],
[56], [<NAME>], [AUT], [2809],
[57], [<NAME>], [DEN], [2808],
[58], [CHO Seungmin], [KOR], [2806],
[59], [YOSHIDA Masaki], [JPN], [2797],
[60], [ZHAI Yujia], [DEN], [2795],
[61], [JHA Kanak], [USA], [2790],
[62], [APOLONIA Tiago], [POR], [2790],
[63], [CHUANG Chih-Yuan], [TPE], [2790],
[64], [TAKAKIWA Taku], [JPN], [2787],
)
)#pagebreak()
#set text(font: ("Courier New", "NSimSun"))
#figure(
caption: "Men's Singles (65 - 96)",
table(
columns: 4,
[Ranking], [Player], [Country/Region], [Rating],
[65], [<NAME>], [POL], [2784],
[66], [<NAME>], [SLO], [2784],
[67], [<NAME>], [SWE], [2782],
[68], [<NAME>], [SWE], [2779],
[69], [<NAME>], [CHN], [2772],
[70], [<NAME>], [SLO], [2772],
[71], [<NAME>], [CHN], [2771],
[72], [<NAME>], [SWE], [2771],
[73], [<NAME>], [SWE], [2769],
[74], [SHIBAEV Alexander], [RUS], [2766],
[75], [AKKUZU Can], [FRA], [2765],
[76], [<NAME>], [GER], [2765],
[77], [OIKAWA Mizuki], [JPN], [2765],
[78], [<NAME>], [GER], [2765],
[79], [MURAMATSU Yuto], [JPN], [2761],
[80], [<NAME>], [SLO], [2761],
[81], [KOU Lei], [UKR], [2760],
[82], [<NAME>], [CAN], [2760],
[83], [<NAME>], [CHN], [2750],
[84], [<NAME>], [IND], [2750],
[85], [LUNDQVIST Jens], [SWE], [2748],
[86], [<NAME>], [FRA], [2748],
[87], [<NAME>], [SVK], [2744],
[88], [QIU Dang], [GER], [2743],
[89], [WANG Zengyi], [POL], [2743],
[90], [CHEN Chien-An], [TPE], [2739],
[91], [<NAME>], [IRI], [2738],
[92], [<NAME>], [AUT], [2732],
[93], [<NAME>], [CZE], [2726],
[94], [<NAME>], [POL], [2723],
[95], [<NAME>], [JPN], [2721],
[96], [UDA Yukiya], [JPN], [2714],
)
)#pagebreak()
#set text(font: ("Courier New", "NSimSun"))
#figure(
caption: "Men's Singles (97 - 128)",
table(
columns: 4,
[Ranking], [Player], [Country/Region], [Rating],
[97], [OLAH Benedek], [FIN], [2712],
[98], [IONESCU Ovidiu], [ROU], [2711],
[99], [KIM Donghyun], [KOR], [2709],
[100], [NORDBERG Hampus], [SWE], [2706],
[101], [KIZUKURI Yuto], [JPN], [2706],
[102], [HWANG Minha], [KOR], [2705],
[103], [CHIANG Hung-Chieh], [TPE], [2705],
[104], [NIU Guankai], [CHN], [2705],
[105], [TOGAMI Shunsuke], [JPN], [2703],
[106], [<NAME>], [JPN], [2696],
[107], [LIND Anders], [DEN], [2696],
[108], [WALKER Samuel], [ENG], [2696],
[109], [#text(gray, "<NAME>")], [PRK], [2688],
[110], [<NAME>], [POR], [2687],
[111], [<NAME>], [KOR], [2687],
[112], [<NAME>], [IRI], [2685],
[113], [KIM Minhyeok], [KOR], [2683],
[114], [<NAME>], [ALG], [2683],
[115], [LIVENTSOV Alexey], [RUS], [2678],
[116], [<NAME>], [AUT], [2677],
[117], [SIPOS Rares], [ROU], [2672],
[118], [<NAME>], [JPN], [2668],
[119], [XU Yingbin], [CHN], [2666],
[120], [LIU Yebo], [CHN], [2666],
[121], [DESAI Harmeet], [IND], [2665],
[122], [KIM Minseok], [KOR], [2664],
[123], [#text(gray, "GAO Ning")], [SGP], [2663],
[124], [HIRANO Yuki], [JPN], [2661],
[125], [<NAME> Ting], [HKG], [2659],
[126], [<NAME>], [GER], [2657],
[127], [<NAME>], [RUS], [2655],
[128], [<NAME>], [ECU], [2652],
)
) |
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/hydra/0.1.0/examples/main.typ | typst | Apache License 2.0 | #import "@local/hydra:0.0.1": hydra
#set page(header: hydra() + line(length: 100%))
#set heading(numbering: "1.1")
#show heading.where(level: 1): it => pagebreak(weak: true) + it
= Introduction
#lorem(750)
= Content
== First Section
#lorem(500)
== Second Section
#lorem(250)
== Third Section
#lorem(500)
= Annex
#lorem(10)
|
https://github.com/7sDream/fonts-and-layout-zhCN | https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/02-concepts/dimension/notional-size.typ | typst | Other | #import "/lib/draw.typ": *
#import "/template/theme.typ": theme
#import "/template/lang.typ": bengali
#let start = (0, 0)
#let end = (800, 300)
#let graph = with-unit((ux, uy) => {
// mesh(start, end, (100, 50), stroke: 1 * ux + gray)
let lb = (0, 60)
txt([#text(font: ("Noto Sans",))[Hx]#text(font: ("Cinzel",), size: 1.25em)[Hx]], lb, size: 260 * ux, anchor: "lb", dx: -10)
let widths = (175, 150, 250, 226)
let common-heights = (228, -60)
let heightss = (
(140, 185),
(140, 185),
(195,),
(195,),
)
for (width, heights) in widths.zip(heightss) {
for height in (..heights, ..common-heights) {
rect(
lb, width: width, height: -height,
stroke: 1.5 * ux + theme.main
)
}
lb = (lb.at(0) + width, lb.at(1))
}
})
#canvas(end, start: start, width: 90%, graph)
|
https://github.com/Area-53-Robotics/53E-Notebook-Over-Under-2023-2024 | https://raw.githubusercontent.com/Area-53-Robotics/53E-Notebook-Over-Under-2023-2024/giga-notebook/entries/visualization/decide.typ | typst | Creative Commons Attribution Share Alike 4.0 International | #import "/packages.typ": notebookinator, codetastic
#import notebookinator: *
#import themes.radial.components: *
#import codetastic: qrcode
#show: create-body-entry.with(
title: "Decide: Data Visualization",
type: "decide",
date: datetime(year: 2023, month: 11, day: 18),
author: "<NAME>",
witness: "<NAME>",
)
In order to make our decision we rated each option for the following properties:
- Ease of use on a scale of 0 to 10. This is how easy the software is for the user
  to use.
- Ease of development on a scale of 0 to 6. This is how easy it is for us to
actually write the software.
#decision-matrix(
properties: ((name: "Ease of use"), (name: "Ease of development"),),
("Native App", 8, 3),
("Grafana", 5, 5),
("LCD Screen", 5, 4),
)
#admonition(type: "decision")[
In the end we went with a native application. We are prioritising user
friendliness over development time, and so this is the best option.
]
We settled on the following tech stack:
- Tauri as our back-end, providing us with cross platform support and access to
any web framework for our front-end
- Svelte as our front-end framework.
- Skeleton UI Toolkit for our component library
= First Attempt
We got to work, and quickly had a prototype working. However we ran into some
issues almost immediately. As our app became more complex it became harder and
harder to make changes. We wanted features like being able to view multiple
graphs at once, and changing the types of views. If we wanted these features
we'd have to rewrite the app from the ground up.
#figure(
image("./viewinator_pid.png", width: 65%),
caption: "The viewinator, displaying PID output",
)
In addition, we still didn't have a good format for sending information off of
the brain. This problem was quickly becoming too complex for us to handle.
= Changes of Plans
This final nail in the coffin was when Cooper from team 614A started writing a
competing app. #footnote([
Cooper's visualization app:
#align(
bottom,
qrcode("https://github.com/Cooper7196/vex-dashboard", size: 2pt),
)
])
It already supported the features we wanted to add, and was looking overall more
polished. It was by no means complete, but it showed promise.
We thought that duplicating work was overall suboptimal, so we decided to switch
gears. We still needed to be able to visualize data soon, so we reevaluated our
decision matrix, this time prioritising ease of development over user
experience.
We rated each option by:
- Ease of use on a scale of 0 to 5.
- Ease of development on a scale of 0 to 10.
Ideally we could get a working solution quickly, and then use Cooper's solution
once it became more polished.
#decision-matrix(
properties: ((name: "Ease of use"), (name: "Ease of development")),
("Native App", 4, 3),
("Grafana", 4, 4),
("LCD Screen", 3, 4),
)
#admonition(
type: "decision",
)[
We ended up going with the Grafana integration. This will only require a bit of
glue code to get running, and requires no front-end development, overall making
the whole process much easier.
]
= Implementation
For our data source, we decided to go with MQTT. This is an extremely simple
protocol used for communicating between IOT devices over a network. Grafana can
listen for the messages coming off of the MQTT broker and display them, and can
do so in real time, making it the perfect data source.
MQTT lets us send arbitrary data in a JSON format, which means that we can even
label the data as we send it over. Its also designed for real time use, making
it perfect for our use case.
All we would have to do is write code that would:
+ Connect to the brain via Bluetooth
+ Decode the incoming information
+ Send that information to an MQTT broker
|
https://github.com/soul667/typst | https://raw.githubusercontent.com/soul667/typst/main/PPT/光电系统集成 - 副本/海洋水质检测系统的设计与开发.typ | typst | #import "touying/lib.typ": *
#import "template.typ": *
// #import "todo.typ": *
#import "@preview/algorithmic:0.1.0"
#import algorithmic: algorithm
#let s = themes.simple.register(s, aspect-ratio: "16-9", footer: [Harbin Engineering University])
#let s = (s.methods.enable-transparent-cover)(self: s)
#let (init, slide, slides, title-slide, centered-slide, focus-slide,touying-outline) = utils.methods(s)
#show: init
#let themeColor = rgb(46, 49, 124)
#let head_main=info(
color: black,
(
// icon: haiyang,
content:"M 大作业答辩",
content1:"2021251124 古翱翔"
)
)
// #align(left+top,image("icon/校标.png",width: 15em))
#slide[
#v(-1.3em)
// #h(1em)
#align(top+right,image("icon/校标.png",width: 10em))
#align(center+horizon, text(size: 35pt)[
#head_main
])
#v(0.5em)
]
#set heading(numbering: "1.")
#set text(font:("Times New Roman","HYFangSongS"),size: 0.76em,spacing: 0.3em,weight: "extralight")
#show heading.where(level: 2):set text(themeColor, 0.9em,font: "Microsoft YaHei")
#show heading.where(level: 3):set text(themeColor, 0.8em,font: "Microsoft YaHei")
#show strong:set text(font: "HYCuFangSongJ")
// #show footnote:set text(themeColor, 0.9em,font: "Microsoft YaHei")
// 小标题样式
#show heading.where(level: 1):set text(themeColor, 1.5em,font:("Times New Roman","Microsoft YaHei"))
#set par(justify: true,first-line-indent: 2em) // 两端对齐,段前缩进2字符
// 二级标题下加一条横线
#show heading.where(level: 2): it => stack(
v(-1.1em),
align(top+right,image("icon/校标.png",width: 8em)),
// v(-1.1em),
// str(type( locate(loc => query(heading.where(level: 1), loc)))),
// str(type((1,2,3,4,5))),
// align(center,
// rect(fill: themeColor)[
// #for c in ("背景及意义","项目概述") [
// #text(fill:white,font: "Microsoft YaHei")[#c]
// ]
// ]
// )
// ,
// header_rect(),
align(center,[]),
v(-1em),
v(0.1em),
it,
v(0.6em),
// line(length: 100%, stroke: 0.05em + themeColor),
line(length: 100%,stroke: (paint: themeColor, thickness: 0.06em)),
v(0.8em),
)
// #set heading.where(level: 3)(numbering: "1.");
// #show heading.where(level: 3):set heading(numbering: "1.")
#show heading.where(level: 3): it => stack(
v(-0.0em),
it,
v(0.6em),
// line(length: 100%, stroke: 0.05em + themeColor),
line(length: it.body.text.len()*0.25em+2.8em,stroke: (paint: themeColor, thickness: 0.06em, dash: "dashed")),
v(0.1em),
)
// #centered-slide()
// = 111
#show heading: it => {
it
v(-0.8cm)
par()[#text()[#h(0.0em)]]
}
// #show centered-slide : it=>{
// it
// }
#centered-slide(section: [必做])
#slide[
== 处理压缩包中的光谱数据并作图,提取光谱中的特征并拟合数据。
]
|
|
https://github.com/gigu003/typst-templates | https://raw.githubusercontent.com/gigu003/typst-templates/main/qcreport/_extensions/qcreport/typst-template.typ | typst | MIT License |
// This is an example typst template (based on the default template that ships
// with Quarto). It defines a typst function named 'article' which provides
// various customization options. This function is called from the
// 'typst-show.typ' file (which maps Pandoc metadata function arguments)
//
// If you are creating or packaging a custom typst template you will likely
// want to replace this file and 'typst-show.typ' entirely. You can find
// documentation on creating typst templates and some examples here:
// - https://typst.app/docs/tutorial/making-a-template/
// - https://github.com/typst/templates
#let report(
title: none,
subtitle: none,
authors: none,
date: none,
univ_logo: "./_extensions/qcreport/logo1977.gif",
registry: "肿瘤登记处",
abstract: none,
header: " ",
footer: " ",
cols: 1,
margin: (x: 1.0in, y: 1.0in),
paper: "a4",
font: (),
fontsize: 11pt,
sectionnumbering: none,
toc: false,
doc,
) = {
set page(
paper: paper,
margin: margin,
fill: luma(250),
numbering: "- 1 -",
header: context {
if counter(page).get().first() > 2 [
#h(1fr)
#text(size:11pt)[#header]
]
},
footer: context [
#text(size:11pt)[#footer]
#h(1fr)
#text(size:11pt)[
#counter(page).display(
"1/1",
both: true,
)
]
]
)
set par(
leading: 1.5em,
justify: true,
linebreaks: auto,
first-line-indent: 2em,
)
set text(
font: font,
size: fontsize
)
// 设置标题格式
set heading(numbering: sectionnumbering)
show heading: it => locate(loc => {
let levels = counter(heading).at(loc)
let deepest = if levels != () {
levels.last()
} else {
1
}
set text(12pt)
if it.level == 1 [
#if deepest !=1 {
}
#set par(first-line-indent: 0pt)
#let is-ack = it.body in ([Acknowledgment], [Acknowledgement])
#set align(left)
#set text(if is-ack { 15pt } else { 15pt },font:"SimHei")
#v(36pt, weak: true)
#if it.numbering != none and not is-ack {
numbering("1.", deepest)
h(7pt, weak: true)
}
#it.body
#v(36pt, weak: true)
] else if it.level == 2 [
#set par(first-line-indent: 0pt)
#set text(size:14pt,font:"SimHei")
#v(24pt, weak: true)
#if it.numbering != none {
numbering("1.1.",..levels)
h(7pt, weak: true)
}
#it.body
#v(24pt, weak: true)
] else if it.level == 3 [
#set par(first-line-indent: 0pt)
#set text(size:14pt,font:"SimHei")
#v(15pt, weak: true)
#if it.numbering != none {
numbering("1.1.1.",..levels)
h(7pt, weak: true)
}
#it.body
#v(15pt, weak: true)
] else [
#set par(first-line-indent: 0pt)
#set text(size:12pt,font:"SimHei")
#v(12pt, weak: true)
#if it.numbering != none {
numbering("1.1.1.1.",..levels)
h(7pt, weak: true)
}
#it.body
#v(12pt, weak: true)
]
})
align(center)[
#image(univ_logo, height: 4cm)
#block(inset: 0.5cm)[
#text(size:15pt, fill:luma(80))[
河南省癌症中心\
河南省肿瘤登记处]
]
]
if title != none {
align(center)[#block(above: 2cm, below:2cm, height: 10%)[
#text(weight: "bold", size: 30pt)[#title]
]]
}
if subtitle != none {
align(center)[#block(above: 2cm, below:0cm, height: 5%)[
#text(weight: "bold", size: 15pt)[#subtitle]
]]
}
align(center)[#line(length: 80%, stroke: 1.5pt + luma(80))]
if registry != none {
align(center)[#block(above:1cm, below:4cm, height:5%)[
#text(weight: "bold", size: 15pt)[#registry]
]]
}
if authors != none {
let count = authors.len()
let ncols = calc.min(count, 3)
grid(
columns: (1fr,) * ncols,
row-gutter: 16pt,
..authors.map(author =>
align(left)[
#text(size:11pt)[
编写: #author.name \
单位: #author.affiliation \
联系: #author.email
]
]
)
)
}
if date != none {
align(center)[#block(inset: 10pt)[
#text(size: 12pt)[#date]
]]
}
pagebreak()
if toc {
block(above: 2em, below: 2em)[
#set par(leading: 1.5em)
#set text(size: 12pt)
#align(center)[
#outline(
title: "主要内容",
depth: 2,
indent: auto,
)
]
]
}
if toc {
pagebreak()
}
if abstract != none {
block(inset: 2em)[
#text(weight: "semibold")[Abstract] \
#h(1em) #abstract
]
}
if cols == 1 {
doc
} else {
columns(cols, doc)
}
}
|
https://github.com/tiankaima/typst-notes | https://raw.githubusercontent.com/tiankaima/typst-notes/master/7e1810-algo_hw/hw5.typ | typst | #import "@preview/cetz:0.2.2": *
#import "utils.typ": *
== HW5 (Week 6)
Due: 2024.04.14
=== Question 14.5-2
Determine the cost and structure of an optimal binary search tree for a set of $n=7$ keys with the following probabilities:
#align(center)[
#table(
stroke: none,
columns: (auto, auto, auto, auto, auto, auto, auto, auto, auto),
table.header(
[$i$],
[$0$],
[$1$],
[$2$],
[$3$],
[$4$],
[$5$],
[$6$],
[$7$],
),
table.hline(start: 0, stroke: 0.5pt),
[$p_i$],
table.vline(start: 0, stroke: 0.5pt),
[],
[$0.04$],
[$0.06$],
[$0.08$],
[$0.02$],
[$0.10$],
[$0.12$],
[$0.14$],
[$q_i$],
[$0.06$],
[$0.06$],
[$0.06$],
[$0.06$],
[$0.05$],
[$0.05$],
[$0.05$],
[$0.05$],
)
]
#rev1_note[
  Review: optimal binary search trees
  Consider a sorted set of keys $K={k_1, k_2, ..., k_n}$ with corresponding access probabilities $P={p_1, p_2, ... p_n}$, together with the probabilities $Q={q_0, q_1, ... , q_n}$ of the dummy keys (imaginary keys sitting "between" the real ones). The idea: define $e[i][j]$ as the expected search cost of a subtree containing keys $k_i ... k_j$; the goal is $e[1][n]$, which turns this into a dynamic programming problem.
  Base case: $e[i][i-1]=q_(i-1)$. Note also that once $k_i, ..., k_j$ are joined into a subtree under a single node, the expected search cost increases by $p_i + p_(i+1) + ... + p_j + q_(i-1)+ ... + q_j$. Write this quantity as $w(i,j):=sum_(l=i)^j p_l + sum_(l=i-1)^j q_l$.
  This gives the recurrence:
$
e[i][j] = min_(i <= r <= j)(e[i][r-1] + e[r+1][j] + w(i,j))
$
]
#ans[
    Running the code provided in the appendix, we get the following result (cost, preorder traversal of the optimal BST):
```text
(3.17, [5, 2, 1, 3, 4, 7, 6])
```
#let data = (
[$5$],
([$2$], ([$1$], [$d_0$], [$d_1$]), ([$3$], [$d_2$], ([$4$], [$d_3$], [$d_4$]))),
([$7$], ([$6$], [$d_5$], [$d_6$]), ([$d_7$],)),
)
#align(center)[
#canvas(
length: 1cm,
{
import draw: *
set-style(
content: (padding: .2),
fill: gray.lighten(80%),
stroke: gray.lighten(70%),
)
tree.tree(
data,
spread: 1.5,
grow: 1.4,
draw-node: (node, ..) => {
circle((), radius: .45, stroke: none)
content((), node.content)
},
draw-edge: (from, to, ..) => {
line(
(a: from, number: .6, b: to),
(a: to, number: .6, b: from),
mark: (end: ">"),
)
},
name: "tree",
)
},
)
]
]
=== Question 15.3-3
What is an optimal Huffman code for the following set of frequencies, based on the first 8 Fibonacci numbers?
#align(center)[
#table(
stroke: none,
columns: (auto, auto, auto, auto, auto, auto, auto, auto, auto),
table.header(
[$i$],
[$0$],
[$1$],
[$2$],
[$3$],
[$4$],
[$5$],
[$6$],
[$7$],
),
table.hline(start: 0, stroke: 0.5pt),
[$f_i$],
table.vline(start: 0, stroke: 0.5pt),
[$1$],
[$1$],
[$2$],
[$3$],
[$5$],
[$8$],
[$13$],
[$21$],
)
]
Can you generalize your answer to find the optimal code when the frequencies are the first $n$ Fibonacci numbers?
#rev1_note[
  Review: Huffman coding
  A classic greedy algorithm; the idea is as follows:
  - Take the two nodes with the lowest frequencies, merge them, and record the frequency of the new node as the sum of the two.
  - Repeat this process until all nodes have been merged into a single node.
  In practice we maintain a min-heap: building it takes $O(n)$, each pop or push takes $O(log n)$, so the total time complexity is $O(n log n)$.
]
#ans[
#align(center)[
#table(
stroke: none,
columns: (auto, auto, auto, auto, auto, auto, auto, auto, auto),
table.header(
[$i$],
[$0$],
[$1$],
[$2$],
[$3$],
[$4$],
[$5$],
[$6$],
[$7$],
),
table.hline(start: 0, stroke: (paint: blue, thickness: 0.5pt)),
[$f_i$],
table.hline(start: 0, stroke: (paint: blue, thickness: 0.5pt)),
table.vline(start: 0, stroke: (paint: blue, thickness: 0.5pt)),
[$1$],
[$1$],
[$2$],
[$3$],
[$5$],
[$8$],
[$13$],
[$21$],
[$c_i$],
[$1111111$],
[$1111110$],
[$111110$],
[$11110$],
[$1110$],
[$110$],
[$10$],
[$0$],
)
]
The generalized answer: Huffman for the first $n$ Fibonacci numbers:
- the code for $i>0$ is $underbrace(1 dots.c 1, n-i) 0$
- the code for $i=0$ is $underbrace(1 dots.c 1, n)$.
Proof is also trivial, let's discuss sums of Fibonacci first:
$
f_n = f_(n-1) + f_(n-2) => f_(n) = f_(n+2) - f_(n-1)\
sum_(i=0)^n f_i = f_(n+2) - 1 => sum_(i=0)^n f_i < f_(n+2)
$
so after merging the first $k$ elements, we're left with $((sum_(i=0)^k f_i), f_(k+1), dots.c, f_n)$, among which $((sum_(i=0)^k f_i), f_(k+1))$ are the smallest two, so they should be merged first, and so on; by induction it's easy to prove that Huffman coding generates exactly such a tree, thus giving the optimal code.
]
#v(1em)
=== Appendix
#box[
==== Code for Question 14.5-2 <code_14_5_2>
```py
import numpy as np
def resolve_pre_order(n, root, i, j):
if i < 1 or i > n or j < 1 or j > n or i > j:
return []
if i == j:
return [i]
pre_order = []
r = root[i - 1][j - 1]
pre_order.append(r)
tmp = resolve_pre_order(n, root, i, r - 1)
pre_order.extend(tmp)
tmp = resolve_pre_order(n, root, r + 1, j)
pre_order.extend(tmp)
return pre_order
def optimal_bst(n, p, q):
e = np.zeros((n + 2, n + 1))
w = np.zeros((n + 2, n + 1))
root = np.zeros((n, n), dtype=int)
for i in range(1, n + 2):
e[i][i - 1] = q[i - 1]
w[i][i - 1] = q[i - 1]
for l in range(1, n + 1):
for i in range(1, n - l + 2):
j = i + l - 1
e[i][j] = float("inf")
w[i][j] = w[i][j - 1] + p[j - 1] + q[j]
for r in range(i, j + 1):
t = e[i][r - 1] + e[r + 1][j] + w[i][j]
if t < e[i][j]:
e[i][j] = t
root[i - 1][j - 1] = r
return e[1][n], resolve_pre_order(n, root, 1, n)
n = 7
p = [0.04, 0.06, 0.08, 0.02, 0.10, 0.12, 0.14]
q = [0.06, 0.06, 0.06, 0.06, 0.06, 0.05, 0.05, 0.05]
result = optimal_bst(n, p, q)
print(result)
```
] |
|
https://github.com/jasonelaw/bes-typst-memo | https://raw.githubusercontent.com/jasonelaw/bes-typst-memo/main/README.md | markdown | # Typst Memo Format for BES
This is a memo format based on the [BES memo template](https://employees.portland.gov/bes/resource-library/bes-forms-and-templates).
## Installing
```bash
quarto use template jasonelaw/bes-typst-memo
```
This will install the extension and create an example .qmd file that you can use as a starting place for your document.
## Using
Specify the sender, recipient, subject, etc. using YAML options, then write the body of the memo. For example, the following .qmd source:
```yaml
---
re: "Typst Memo Template"
sender: "<NAME>"
recipient: "Whomever it may concern"
date: today
date-format: long
format:
memo-typst: default
---
This is a memo.
...
```
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/gentle-clues/0.6.0/CHANGELOG.md | markdown | Apache License 2.0 | # Changelog
## v0.6.0 (latest)
- Added possibility to define default settings via `#show: gentle-clues.with()` - *lang*, *width*, *stroke-width*, *border-width*, *border-radius*, *breakable* - (See all options in [docs.pdf](docs.pdf))
- **Deprecated:** `#gc_header-title-lang.update("de")` use `#show: gentle-clues.with(lang: "de")` now.
- **Deprecated:** `#gc_enable-task-counter.update(false)` use `#show: gentle-clues.with(show-task-counter: false)` now.
- Added option to show all clues without headers. `#show: gentle-clues.with(headless: true)`
## v0.5.0
- Added option `breakable: true` to make clues breakable .
- Added spanish header titles. Use with `#gc_header-title-lang.update("es")`
- Removed aliases (breaking)
## v0.4.0
- Added french header titles. Use with `#gc_header-title-lang.update("fr")`
- Fixed minor border issues
- Added an task-counter (disable with `gc_enable-task-counter.update(false)`)
*Colors:*
- Changed default color to `navy`
- Fixed bug that the border was sometimes no longer visible after `typst 0.9.0` update.
- Changed default border-color to the same color as `bg-color`
- Added support for gradients: `#clue(_color: gradient.linear(..color.map.crest))`
- **Breaking:** Removed string color_profiles.
- Changed some predefined colors.
## v0.3.0
- Renamed entry files and base template
- Changed default `header-inset`. It's `0.5em` now.
- Added `gc_header-title-lang` state, which defines the language of the title. Accepts `"de"` or `"en"` at the moment.
- Changed `type` checks which requires typst version `0.8.0`
- Renamed parameter `color` to `_color` due to naming conflicts with the color type.
## v0.2.0
- Added option to set the header inset. `#admonish(header-inset: 0.5em)`
- Added custom color: `#admonish(color: (stroke: luma(150), bg: teal))`
- Added predefined example clue: `#example[Testing]`
## v0.1.0
- Initial release
|
https://github.com/Jozott00/typst-LLNCS-template | https://raw.githubusercontent.com/Jozott00/typst-LLNCS-template/main/template.typ | typst | #import "template/theorem_proof_cnf.typ": *
// all theorem related elements
#let (
theorem,
__thm-rules,
definition,
__def-rules,
proposition,
__prop-rules,
lemma,
__lem-rules,
proof,
__proof-rules,
corollary,
__corol-rules,
) = __llncs_thm_cnf()
// The project function defines how your document looks.
// It takes your content and some metadata and formats it.
// Go ahead and customize it to your liking!
#let project(
title: "",
thanks: none,
abstract: [],
authors: (),
keywords: (),
bibliography-file: none,
body
) = {
//// CONSTANTS
let PAR_INDENT = 15pt
let TOP_PAGE_MARING = 50mm
let TITLE_SIZE = 14pt
// Set the document's basic properties.
set document(author: authors.map(a => a.name), title: title)
set text(font: "New Computer Modern", lang: "en", size: 10pt)
//// EVALUATIONS
let author_running = {
let an = authors.map(it => {
let ns = it.name.split(" ")
[#ns.at(0).at(0). #ns.last()]
})
if an.len() < 2 {
an.join(", ")
} else {
[#an.first() et al.]
}
}
//// PAR CONFIG
set par(leading: 0.50em)
show par: set block(spacing: 0.4em)
//// PAGE CONFIG
set page(paper: "us-letter")
set page(margin: (left: 44mm, right: 44mm, top: TOP_PAGE_MARING, bottom: 45mm))
// set page header
set page(header: locate(loc => {
if loc.page() == 1 {return []}
let alignment = if (calc.rem(loc.page(), 2) == 1) { right } else { left }
align(alignment)[
#counter(page).display()
#h(1cm)
#author_running
]
}))
//// HEADING CONFIGS
set heading(numbering: "1.1")
// padding
show heading.where(level: 1): pad.with(bottom: 0.64em, top: 0.64em)
show heading.where(level: 2): pad.with(bottom: 0.9em)
show heading: it => {
if it.level == 1 {
set text(12pt, weight: "bold")
it
}else if it.level == 2 {
set text(10pt, weight: "bold")
it
} else if it.level == 3 {
set text(10pt, weight: "bold")
[#v(2em)#h(-PAR_INDENT) #it.body]
} else if it.level == 4 {
set text(10pt, weight: "regular", style: "italic")
[#v(1.5em)#h(-PAR_INDENT)#it.body]
}
}
//// SUPER CONFIGS
set super(size: 8pt)
//// FOOTNOTE CONFIGS
show footnote.entry: set text(9pt)
set footnote.entry(separator:
line(length: 54pt, stroke: 0.5pt)
)
///// FIGURE CONFIG
set figure.caption(separator: [. ]) // separator to .
show figure.caption: it => [*#it.supplement #it.counter.display()#it.separator*#it.body] // bold figure kind
show figure.where(kind: table): set figure.caption(position: top) // caption for table above figure
set figure(gap: 12pt)
show figure: pad.with(top: 20pt, bottom: 20pt)
show figure: set text(9pt)
// let Figure display as Fig
let fig_replace(it) = {
show "Figure": "Fig."
it
}
show figure.where(kind: image): fig_replace
show ref: fig_replace
//// ---- Start of content -----
v(-9mm)
// Title row.
align(center)[
#block()[
#text(weight: "bold", TITLE_SIZE, title)
#if type(thanks) == str and thanks.trim() != "" {
set super(size: 10pt)
footnote(numbering: it => [⋆#h(2pt)], thanks)
}
]
]
v(6mm)
// encapsulated styling
{
set align(center)
let insts = authors.map(it => it.insts)
.flatten()
.dedup()
// Author information.
authors.enumerate().map(it => {
let a = it.at(1)
// find references
let refs = a.insts
.map(ai => str(insts.position(i => i == ai) + 1))
.join(",")
let oicd = if a.oicd != none { [[#a.oicd]]} else {""}
// add "and" infront of last author
let und = if it.at(0) > 0 and it.at(0) == authors.len() - 1 { "and" } else { "" }
[#und #a.name#super([#refs#oicd])]
}).join(", ")
v(3mm)
// Institute information.
insts.enumerate().map(it => {
set text(9pt)
let inst = it.at(1)
[#super([#{it.at(0) + 1}]) ]
[#inst.name]
if "addr" in inst [, #inst.addr]
if "email" in inst [#par(text(font: "PT Mono", size: 8pt, inst.email))]
if "url" in inst [#par(inst.url)]
})
.map(par)
.join()
v(11.5mm)
// abstract and keywords.
block(width: 10cm)[
#set align(left)
#set par(justify: true)
*Abstract.* #abstract
#v(3.5mm)
#if keywords.len() > 0 {
let display = if type(keywords) == str { keywords } else { keywords.join([ $dot$ ]) }
text[*Keywords:* #display]
}
]
}
v(1mm)
// Main body.
//// PAR CONFIG MAIN
set par(justify: true, first-line-indent: PAR_INDENT)
// show theorem rules
show:__thm-rules
show:__def-rules
show:__prop-rules
show:__lem-rules
show:__proof-rules
show:__corol-rules
// show actual body
body
v(8pt)
// Display bibliography.
if bibliography-file != none {
show bibliography: set text(9pt)
bibliography(bibliography-file, title: text(12pt)[References], style: "springer-lecture-notes-in-computer-science")
}
}
/// Author creation function
#let author(name, oicd: none, insts: ()) = {
// make sure it is always an one dimensional array
if type(insts) != array {
insts = (insts,)
}
(
name: name,
oicd: oicd,
insts: insts,
)
}
/// Institute creation function
#let institute(name, addr: "", email: none, url: none) = {
(
name: name,
addr: addr,
email: if email != none { link("mailto: " + email) } else { none },
url: if url != none { link(url) } else { none },
)
}
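
// Usage sketch (illustrative only; the title, names, and addresses below are
// placeholders, not taken from the original template):
//
// #let uni = institute(
//   "Example University",
//   addr: "Example City, Country",
//   email: "jane.doe@example.org",
// )
//
// #show: project.with(
//   title: "A Placeholder Paper Title",
//   abstract: [A short abstract goes here.],
//   keywords: ("typst", "template"),
//   authors: (
//     author("Jane Doe", insts: (uni,)),
//     author("John Roe", insts: (uni,)),
//   ),
// )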
|
|
https://github.com/rabotaem-incorporated/calculus-notes-2course | https://raw.githubusercontent.com/rabotaem-incorporated/calculus-notes-2course/master/main.typ | typst | #import "utils/core.typ": *
#show: notes.with(
name: "Конспект лекций по математическому анализу за " + {
if config.sem3 and config.sem4 {
"II курс"
} else if config.sem3 {
"III семестр"
} else if config.sem4 {
"IV семестр"
} else {
panic("empty conspect")
}
},
short-name: "Математический анализ",
lector: "<NAME>",
info: "СПБГУ МКН, Современное программирование, 2023-2024",
)
#show: show-references
#if config.reminders {
include "reminders.typ"
}
#if config.sem3 {
include "sections/01-leftovers/!sec.typ"
include "sections/02-measure-theory/!sec.typ"
include "sections/03-lebesgue-integral/!sec.typ"
include "sections/04-parametric-and-curves/!sec.typ"
}
// partially sem3, partially sem4
#include "sections/05-complex-functions/!sec.typ"
#if config.sem4 {
include "sections/06-fourier-series/!sec.typ"
}
#include "appendix.typ"
|
|
https://github.com/ngyngcphu/tick3d-docs | https://raw.githubusercontent.com/ngyngcphu/tick3d-docs/main/contents/02_phan_tich_yeu_cau/index.typ | typst | Apache License 2.0 | = Phân tích yêu cầu
Như đã đề cập ở phần Phạm vi dự án, nhóm sẽ ưu tiên mức hiện thực thủ công bao gồm sáu tính năng chính trên. Phần này sẽ mô tả chi tiết về toàn bộ yêu cầu chức năng và yêu cầu phi chức năng của hệ thống ở mức thủ công:
== Yêu cầu chức năng
=== User Story
==== Đối với khách hàng
#block(inset: (left:1cm))[
- Khách hàng có thể lựa chọn các mô hình 3D có sẵn trong hệ thống.
- Khách hàng có thể xem tất cả các mô hình 3D ở mục *All Things*.
- Khách hàng có thể tìm kiếm các mô hình 3D theo tên.
- Khách hàng có thể tìm kiếm các mô hình bằng cách lọc theo danh mục, giá tiền, mốc thời gian đăng.
- Khách hàng có thể xem thông tin chi tiết của mô hình 3D.
- Khách hàng có thể Upload file `.gcode` của riêng họ để đặt in.
- Khách hàng phải đăng nhập vào tài khoản user để thực hiện chức năng đặt hàng.
- Khách hàng có thể thêm mô hình 3D có sẵn vào giỏ hàng để lưu lại thông tin mô hình tiến hành đặt hàng. Khi thêm vào giỏ, khách hàng có thể chọn số lượng.
- Khách hàng có thể xem và chỉnh sửa thông tin giỏ hàng.
- Khách hàng có thể xóa một hoặc nhiều mô hình ra khỏi giỏ hàng cùng lúc.
- Đối với trường hợp đặt in theo file `.gcode` của riêng họ, khách hàng có thể tương tác hệ thống như đối với các mô hình 3D có sẵn.
- Khách hàng có thể xem đề xuất giá tiền để in mô hình 3D dựa vào các file `.gcode`.
- Khách hàng có thể kết hợp upload file và lựa chọn các mô hình 3D có sẵn.
- Khách hàng có thể xác nhận và gửi đơn hàng.
- Khách hàng điền thông tin đặt hàng bao gồm tên khách hàng, số điện thoại, địa chỉ nhận hàng (phường, quận, địa chỉ thêm do khách hàng cung cấp) và ghi chú (nếu có).
- Khách hàng có thể xem được phí ship dựa trên khoảng cách giao hàng và thời gian giao hàng dự kiến.
- Sau bước xác nhận đơn hàng và trước bước thanh toán, khách hàng có thể quay lại giỏ hàng
để mua thêm/xóa các mô hình 3D, chỉnh sửa thông tin đặt hàng.
- Khách hàng có thể chọn phương thức thanh toán là tiền mặt hoặc thanh toán online qua
Momo.
- Khách hàng có thể hủy đơn hàng nếu vẫn chưa được in (trạng thái *Đang chờ xử lý*).
- Khách hàng có thể theo dõi tình trạng đơn hàng: Đang chờ xử lý; Đang in; Đang giao; Đã thanh toán.
- Khách hàng có thể lựa chọn 2 phương thức thanh toán: Thanh toán bằng tiền mặt hoặc Thanh toán qua Momo.
- Trường hợp khách hàng lựa chọn thanh toán qua Momo, màn hình sẽ xuất hiện một mã vạch
(đã kèm số tiền) để người dùng quét mã. Mã sẽ có hiệu lực trong vòng 10 phút.
- Hệ thống thông báo thanh toán khách hàng thành công/thất bại.
]
==== Đối với người quản lý
#block(inset: (left:1cm))[
- Người quản lý có thể thêm mô hình 3D vào hệ thống: giá tiền, hình ảnh minh họa cũng phải được thêm vào.
- Người quản lý có thể xóa mô hình 3D, việc xoá mô hình 3D không ảnh hưởng tới các đơn hàng đã tiếp nhận trước đó.
- Người quản lý có thể sửa mô hình 3D, cập nhật không ảnh hưởng đến các đơn hàng đã thanh toán.
- Người quản lý có thể xem danh sách các đơn đặt hàng và cập nhật trạng thái của chúng.
- Người quản lý có thể nhấn chọn từng đơn hàng để xem thông tin chi tiết.
- Người quản lý có thể nhấn chọn xử lý để từ chối hoặc chuyển đơn hàng sang trạng thái tiếp theo.
- Người quản lý có thể lựa chọn từ chối hoặc chấp nhận nhiều đơn hàng cùng một lúc.
]
=== Chức năng hệ thống
==== Lựa chọn mô hình 3D
Trường hợp khách hàng lựa chọn các mô hình 3D có sẵn, hệ thống phải cung cấp các chức năng:
#block(inset: (left:1cm))[
- Phân chia các mô hình 3D theo các danh mục: Fashion, Hobby, Learning, Tools, Toys & Games, Art, Household.
- Hệ thống có mục *All Things* bao gồm tất cả các loại mô hình 3D.
- Các nhóm mô hình 3D được phân thành nhiều trang, mỗi trang chứa tối đa 10 mô hình.
- Mỗi mô hình 3D có một nút Like, hệ thống sẽ mặc định sắp xếp các mô hình 3D theo tiêu chí số lượt Like từ cao đến thấp.
- Sắp xếp các mô hình 3D theo giá tiền, mốc thời gian đăng. Lọc các mô hình 3D theo danh mục, khoảng thời gian. Giữ nguyên trạng thái lọc và tiêu chí sắp xếp khi chuyển trang. Trạng thái ban đầu của bộ lọc là *No filter*.
- Tìm kiếm các mô hình 3D theo tên. Hệ thống sẽ cố gắng tìm những mô hình 3D có tên giống như từ khóa đã nhập hoặc có tên gần giống. Nếu không tìm thấy, hiển thị danh sách rỗng kèm thông báo *Không tìm thấy*. Được phép áp dụng bộ lọc và tiêu chí sắp xếp khi màn hình xuất ra danh sách kết quả. Để quay về trạng thái trước khi tìm kiếm, nhấn nút *X* trên thanh tìm kiếm.
]
Trường hợp khách hàng upload file `.gcode` của riêng họ, hệ thống phải cung cấp các chức năng:
#block(inset: (left:1cm))[
- Chỉ cho phép các file định dạng `.gcode` được upload lên hệ thống.
- Các file `.gcode` phải được generate từ chính loại máy in FLSUN-V400.
]
==== Quản lý mô hình 3D
Người quản lý phải đăng nhập vào tài khoản admin để thực hiện chức năng này, bao gồm các thao tác:
#block(inset: (left:1cm))[
- Thêm/xóa/sửa mô hình 3D.
- Khi thêm mô hình 3D: giá tiền, hình ảnh minh họa cũng phải được thêm vào.
- Việc xoá mô hình 3D không ảnh hưởng tới các đơn hàng đã tiếp nhận trước đó.
- Các mô hình 3D được chỉnh sửa, cập nhật không ảnh hưởng đến các đơn hàng đã thanh toán.
]
==== Đặt mô hình 3D
Khách hàng phải đăng nhập vào tài khoản user để thực hiện chức năng này.
Trường hợp khách hàng chọn các mẫu mô hình 3D có sẵn, hệ thống phải cung cấp các chức năng:
#block(inset: (left:1cm))[
- Thêm mô hình 3D vào giỏ hàng để lưu lại thông tin mô hình hoặc tiến hành đặt hàng. Khi thêm vào giỏ, khách hàng có thể chọn số lượng.
- Thông tin mô hình 3D trong giỏ hàng bao gồm: tên mô hình, đơn giá, số lượng.
- Khi người dùng thêm mô hình 3D nhiều lần, những mô hình trùng tên sẽ được cộng dồn với số lượng và giá tương ứng.
- Giỏ hàng phải thống kê được các mô hình 3D, số lượng, đơn giá của từng mô hình và tổng tiền của giỏ hàng.
- Xem và chỉnh sửa thông tin giỏ hàng.
- Khách hàng có thể xóa một hoặc nhiều mô hình ra khỏi giỏ hàng cùng lúc.
]
Trường hợp khách hàng đặt in các mô hình 3D dựa trên các file `.gcode` của họ, ngoài các thao tác trên giỏ hàng tương tự như trên, hệ thống còn phải cung cấp các chức năng:
#block(inset: (left:1cm))[
- Khi nhấn nút `Upload file`, hệ thống sẽ hiện ra một modal window yêu cầu lựa chọn đơn hàng hoặc tạo một đơn hàng mới để chứa file đó.
- Cho phép upload nhiều file cho một đơn hàng.
- Đề xuất giá tiền để in mô hình 3D dựa vào các file `.gcode`.
- Cho phép kết hợp upload file và lựa chọn các mô hình 3D có sẵn.
]
==== Xác nhận và gửi đơn hàng
Hệ thống phải cung cấp các chức năng:
#block(inset: (left:1cm))[
- Cho phép chọn một hoặc nhiều mô hình 3D từ giỏ hàng để tiến hành đặt hàng.
- Có form cung cấp thông tin đặt hàng bao gồm tên khách hàng, số điện thoại, địa chỉ nhận hàng (phường, quận, địa chỉ thêm do khách hàng cung cấp) và ghi chú (nếu có).
- Hệ thống cung cấp tính năng tính phí ship dựa trên khoảng cách giao hàng và hiển thị thời gian giao hàng dự kiến.
- Phí ship được tính bằng 5000 VND cho 3 kilomet đầu tiên; 3000 VND cho mỗi kilomet tiếp theo và không vượt quá 30000 VNĐ. Khu vực giao hàng được giới hạn trong phạm vi TP HCM.
- Thời gian giao dự kiến được tính dựa trên quãng đường và lưu lượng giao thông tại thời điểm đặt hàng.
- Sau bước xác nhận đơn hàng và trước bước thanh toán, khách hàng có thể quay lại giỏ hàng để mua thêm/xóa các mô hình 3D, chỉnh sửa thông tin đặt hàng.
- Khách hàng có thể chọn phương thức thanh toán là tiền mặt hoặc thanh toán online qua Momo.
- Sau khi tiến hành đặt hàng thành công, các mô hình 3D đã được đặt sẽ bị xóa khỏi giỏ hàng và lịch sử đặt hàng sẽ được ghi lại vào hệ thống.
- Cho phép hủy đơn hàng nếu vẫn chưa được in (trạng thái *Đang chờ xử lý*).
- Khách hàng có thể theo dõi tình trạng đơn hàng: Đang chờ xử lý; Đang in; Đang giao; Đã thanh toán.
]
==== Xử lý đơn hàng
Người quản lý sẽ xem danh sách các đơn đặt hàng và cập nhật trạng thái của chúng. Việc này sẽ yêu cầu hệ thống cung cấp các tính năng sau:
#block(inset: (left:1cm))[
- Hiển thị danh sách đơn hàng thành các mục tương ứng với trạng thái của chúng. Trạng thái đơn hàng bao gồm: Đang chờ xử lý -> Đang in -> Đang giao -> Đã thanh toán.
- Ở mỗi mục, đơn hàng được sắp xếp mặc định dựa trên thời gian đơn hàng đó được ghi nhận.
- Người quản lý có thể nhấn chọn từng đơn hàng để xem thông tin chi tiết.
- Người quản lý có thể nhấn chọn xử lý để từ chối hoặc chuyển đơn hàng sang trạng thái tiếp theo.
- Người quản lý có thể lựa chọn từ chối hoặc chấp nhận nhiều đơn hàng cùng một lúc.
]
==== Thanh toán đơn hàng
Hệ thống hỗ trợ khách hàng thanh toán đơn hàng bằng tiền mặt và qua ví điện tử Momo:
#block(inset: (left:1cm))[
- Khách hàng có thể lựa chọn 2 phương thức thanh toán: *Thanh toán bằng tiền mặt* hoặc *Thanh toán qua Momo*.
- Trường hợp khách hàng lựa chọn thanh toán qua Momo, màn hình sẽ xuất hiện một mã vạch (đã kèm số tiền) để người dùng quét mã. Mã sẽ có hiệu lực trong vòng 10 phút.
- Hệ thống thông báo thanh toán thành công/thất bại.
]
== Yêu cầu phi chức năng
#block(inset: (left:1cm))[
- Hệ thống được truy cập thông qua web-based.
- hệ thống xử lý nhiều đơn đặt hàng in và xác lập độ ưu tiên trong cơ chế FCFS.
- Độ tin cậy (Reliability):
#block(inset: (left:1.2cm))[
\u{2218} Hệ thống duy trì dữ liệu/phục hồi về trạng thái trước khi có lỗi.
]
- Tính sẵn sàng (Availability):
#block(inset: (left:1.2cm))[
\u{2218} Hệ thống phải hoạt động 24/7.
]
- Khả năng tiếp cận (Accessibility):
#block(inset: (left:1.2cm))[
\u{2218} UI phải được hiển thị chính xác trên nhiều kích cỡ màn hình khác nhau:
#block(inset: (left:1.4cm))[
\u{25AA} Màn hình desktop: 1280x720 - 1920x1080.
#linebreak()
\u{25AA} Màn hình tablet: 601x962 - 1280x800.
#linebreak()
\u{25AA} Màn hình mobile: 360x640 - 414x896.
]
\u{2218} Hỗ trợ trên các trình duyệt khác nhau: Chrome, Edge, Firefox, Safari.
]
- Độ bảo mật (Security):
#block(inset: (left:1.2cm))[
\u{2218} Tuân thủ theo tiêu chuẩn OWASP
]
]
#pagebreak();
|
https://github.com/PuntitOwO/template-informe-memoria-fcfm | https://raw.githubusercontent.com/PuntitOwO/template-informe-memoria-fcfm/main/conf.typ | typst | MIT License | #let logos = (
escudo: "imagenes/institucion/escudoU2014.svg",
fcfm: "imagenes/institucion/fcfm.svg"
)
#let pronombre = (
el: (titulo: "O", guia: ""),
ella: (titulo: "A", guia: "A"),
elle: (titulo: "E", guia: "E"),
)
#let guia(visible: true, body) = if visible [
#set rect(width: 100%, stroke: black)
#set par(justify: true, first-line-indent: 0pt)
#block(breakable: false)[#stack(dir: ttb,
rect(fill: black, radius: (top: 5pt, bottom: 0pt), text(fill: white, "Guía (deshabilitar antes de entregar)")),
rect(fill: luma(230), radius: (top: 0pt, bottom: 5pt), body)
)]] else []
#let conf(
titulo: none,
  autor: none, // dictionary with name and pronoun: (nombre: "", pronombre: pronombre.<el/ella/elle>)
  informe: false, // false for the proposal, true for the final report
  codigo: "CC6908", // CC6908 for curriculum v3, CC6907 for curriculum v5
  modalidad: "Memoria", // may be Memoria, Práctica Extendida, Doble Titulación con Magíster, Doble Titulación de Dos Especialidades
  profesores: (), // for a single guiding professor, use a one-element list: ((nombre: "name surname", pronombre: pronombre.<el/ella/elle>),)
  coguias: (), // for a single co-guiding professor, use a one-element list: ((nombre: "name surname", pronombre: pronombre.<el/ella/elle>),)
  supervisor: none, // only fill this in for Práctica Extendida, otherwise none: (nombre: "name surname", pronombre: pronombre.<el/ella/elle>)
  anno: none, // if not specified, the current year is used
  espaciado_titulo: 1fr, // extra space around the title and the name on the cover page; 1fr equals the other gaps, 2fr is twice as much, etc.
doc,
) = {
  // Page format
set page(
paper: "us-letter",
number-align: center,
numbering: none,
    // margin: (left: 3cm, rest: 2cm,) is configured after the cover page
)
  // Text format
set text(
lang: "es",
font: "New Computer Modern",
size: 12pt,
)
  // Heading format
set heading(numbering: (..n) => {
    if n.pos().len() == 1 [#numbering("1.", ..n) #h(1em)] // extra space for level-1 headings
    else if n.pos().len() == 2 [#none] // do not number level-2 headings
    else [#numbering("1.", ..n)] // everything else is numbered in 1.1.1. format
})
let header = [
#set text(size: 13pt)
#stack(dir: ltr, spacing: 15pt,
[],
align(bottom+left, box(width: 1.35cm, image(logos.escudo))),
align(bottom+left, stack(dir: ttb, spacing: 5pt,
text("UNIVERSIDAD DE CHILE"),
text("FACULTAD DE CIENCIAS FÍSICAS Y MATEMÁTICAS"),
text("DEPARTAMENTO DE CIENCIAS DE LA COMPUTACIÓN"),
v(5pt),
),
)
)
]
let _propuesta = "PROPUESTA DE TEMA DE MEMORIA"
let _informe = "INFORME FINAL DE " + codigo
let _documento = [
#if informe [#_informe] else [#_propuesta]
PARA OPTAR AL TÍTULO DE \ INGENIER#autor.pronombre.titulo CIVIL EN COMPUTACIÓN]
let _modalidad = [MODALIDAD: \ #modalidad]
let _guia(gen: pronombre.el) = [PROFESOR#gen.guia GUÍA]
let _coguia(gen: pronombre.el) = [PROFESOR#gen.guia CO-GUÍA]
let _supervisor(gen: pronombre.el) = [SUPERVISOR#gen.guia]
let _ciudad = "SANTIAGO DE CHILE"
let _anno = if anno != none [#anno] else [#datetime.today().year()]
let portada = align(center)[
#stack(dir: ttb, spacing: 1fr,
..(
espaciado_titulo,
titulo,
0.5fr,
_documento,
espaciado_titulo,
upper(autor.nombre),
espaciado_titulo,
_modalidad,
if profesores.len() == 0 [#none]
else if profesores.len() == 1
[#_guia(gen: profesores.at(0).pronombre): \ #profesores.at(0).nombre]
else
[#_guia(gen: profesores.at(0).pronombre): \ #profesores.at(0).nombre \
#_guia(gen: profesores.at(1).pronombre) 2: \ #profesores.at(1).nombre],
if coguias.len() == 0 [#none]
else if coguias.len() == 1
[#_coguia(gen: coguias.at(0).pronombre): \ #coguias.at(0).nombre]
else
[#_coguia(gen: coguias.at(0).pronombre): \ #coguias.at(0).nombre \
#_coguia(gen: coguias.at(1).pronombre) 2: \ #coguias.at(1).nombre],
if supervisor == none [#none]
else [#_supervisor(gen: supervisor.pronombre): \ #supervisor.nombre],
[#_ciudad \ #_anno],).filter(it => it != [#none]),
)
]
  // Cover page
header
portada
  // The document body starts here, on page 1
set page(
numbering: "1",
margin: (left: 3cm, rest: 2cm,),
  ) // Enable page numbering and set the margins
set par(
justify: true,
first-line-indent: 15pt,
  ) // Paragraph format
  show par: set block(spacing: 2em) // Space between paragraphs
show heading: it => {
it
par(text(size:0.35em, h(0.0em)))
  } // Workaround so that first-line indentation applies to the first paragraph after a heading
  pagebreak(weak: true) // Page break
  counter(page).update(1) // Reset the page counter
doc
} |
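
// Usage sketch (illustrative; the names below are placeholders, not part of the
// original template):
//
// #show: conf.with(
//   titulo: [Título de ejemplo de la memoria],
//   autor: (nombre: "Juana Pérez", pronombre: pronombre.ella),
//   informe: false,
//   profesores: ((nombre: "Pedro Soto", pronombre: pronombre.el),),
// )
//
// #guia[Reminder text; pass `visible: false` to hide it before submitting.]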
https://github.com/DieracDelta/presentations | https://raw.githubusercontent.com/DieracDelta/presentations/master/polylux/book/src/dynamic/alternatives-repeat-last.typ | typst | #import "../../../polylux.typ": *
#set page(paper: "presentation-16-9")
#set text(size: 50pt)
#polylux-slide[
#alternatives(repeat-last: true)[temporary][transitory][ephemeral][permanent!]
#uncover(5)[Did I miss something?]
]
|
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/docs/cookery/guide/renderer/angular.typ | typst | Apache License 2.0 | #import "/docs/cookery/book.typ": book-page
#import "/docs/cookery/term.typ" as term
#show: book-page.with(title: "Angular Library")
= Angular Library
Use #link("https://www.npmjs.com/package/@myriaddreamin/typst.angular")[`@myriaddreamin/typst.angular`].
Import the angular module containing the `typst-document` component.
```typescript
/// component.module.ts
import { TypstDocumentModule } from '@myriaddreamin/typst.angular';
```
And use directive `typst-document` in your template file.
```html
<typst-document props></typst-document>
```
== The `typst-document` component
=== Typical usage
```html
<typst-document
fill="#343541"
artifact="{{ artifact }}">
</typst-document>
```
=== `fill` property
Fill document with color.
```html
<typst-document fill="#343541">
</typst-document>
```
Note: Current typst.ts doesn't support a transparent background color in some browsers.
=== `artifact` property
Render the document with artifact from precompiler.
```html
<typst-document artifact="{{ artifact }}">
</typst-document>
```
#include "get-artifact.typ"
=== Set renderer initialization option for `typst-document`
Provide a #term.init-option for initializing the renderer used by `typst-document`:
```ts
typst-document.setWasmModuleInitOptions({
getModule: () =>
'http://localhost:20810/typst_ts_renderer_bg.wasm',
});
```
The default value is:
```ts
{
beforeBuild: [],
getModule: () => '/assets/typst-ts-renderer/pkg/typst_ts_renderer_bg.wasm',
}
```
=== Example: show document
See #link("https://github.com/Myriad-Dreamin/typst.ts/tree/main/packages/typst.angular/projects/demo")[typst.angular demo] for more details.
|
https://github.com/satshi/modern-physics | https://raw.githubusercontent.com/satshi/modern-physics/main/modernphys.typ | typst | // 今のところjarticleとappendixを定義している。
#import "template.typ": *
//オプションはfontsize, title, authors, date, abstract
#show: doc => jarticle(
fontsize: 12pt,
title: [現代物理学の基礎\ ―特殊相対論と量子力学入門―],
authors: ([山口 哲],),
date: datetime.today().display(年月),
doc,
)
// expvalというコマンドを使うため。
#import "@preview/physica:0.9.2": expval
#outline()
#pagebreak()
= 導入
= 光速
= 相対性原理
= 特殊相対論の基礎
== 特殊相対論の原理
== 慣性系
== ローレンツ変換
== ローレンツ変換からの簡単な帰結
=== 走っている時計の遅れ
=== 速度の合成則
= 特殊相対論的力学
== ミンコフスキー時空と固有時間
== 4元ベクトル
== 運動方程式
== 運動量とエネルギー
|
|
https://github.com/pluttan/electron | https://raw.githubusercontent.com/pluttan/electron/main/lab3/lab3.typ | typst | #import "@docs/bmstu:1.0.0":*
#import "@preview/tablex:0.0.8": tablex, rowspanx, colspanx, cellx
#show: student_work.with(
caf_name: "Компьютерные системы и сети",
faculty_name: "Информатика и системы управления",
work_type: "лабораторной работе",
work_num: "3",
discipline_name: "Электроника",
theme: "Ключевой режим работы транзистора (Вариант 13)",
author: (group: "ИУ6-42Б", nwa: "<NAME>"),
adviser: (nwa: "<NAME>"),
city: "Москва",
table_of_contents: true,
)
= Задание
== Цель работы
Исследовать статические режимы и переходные процессы в схеме простого транзисторного ключа.
#v(-10pt)
== Параметры схемы
#set text(9pt)
#align(center)[
#tablex(
columns: 16,
inset: 2pt,
align: center+horizon,
[$№$],[$R_б$, Ом],[$B$],[$B_r$],[$I_s$, A],[$C_"бк"$, Ф],[$C_"бэ"$, Ф],[$tau_r$, c],[$r_б$, Ом],[$F_alpha$, Гц],[$R_к$, Ом],[$E_"см"$, В],[$R_"см"$, Ом],[$U_"бэ"$, В],[$E_"вх"$, В], [$E_к$, В],
[13],[40000],[120],[0,95],[1,00E-12],[1,50E-11],[7,50E-12],[2,40E-05],[30],[2,50E+06],[2200],[1,4],[32500],[0,75],[11],[11]
)
]
#set text(14pt)
= Часть 1
Схема транзисторного ключа показана на рисунке 1:
#img(image("1.png", width:90%), [Схема 1])
Приведённая схема расчёта тока базы показана на рисунке 2:
#img(image("2.png", width:90%), [Схема 2])
По этой схеме найдем ток базы методом контурных токов:
$ cases(I_11 (R_b + R_"см") - I_22 R_"см" = E_in + E_"см",
- I_11 R_"см" + I_22 R_"см" = -E_"см" -U_"бэ") $
$ R_11 = R_б + R_"см"\ E_11 = E_"вх" + E_"см" $
$ E_22 = -E_"см" - U_"бэ" $
$ I_11 (R_б + R_"см") - I_б R_"см" = E_"вх" + E_"см" $
$ -I_11 R_"см" + I_б R_"см"= -E_"см" - U_"бэ" $
$ I_22 = I_б = (E_"вх" + E_"см")/R_б - ((R_б + R_"см") (E_"см" + U_"бэ"))/(R_"см" R_б) = 0.00019 А $
Находим $R_к$ и подставляем в схему:
$ R_к = E_к/(B I_б) = 480 "Ом"$
Схема с $R_к$ границы режима насыщения показана на рисунке 3:
#img(image("3.png", width:90%), [Схема 3])
Построим график DC анализа для схемы 3, показанный на рисунке 4:
#img(image("4.png", width:90%), [DC анализ])
Схема для расчёт статического коэффициента усиления по току базы B в активном режиме транзистора показана на рисунке 5:
#img(image("5.png", width:90%), [Схема 4])
$ (23.6*10^"-3")/(197.1*10^"-6") = 117 tilde.eq 120 $
Построим DC sweep для тока на базе и коллекторе, что видно на рисунке 6:
#img(image("6.png", width:90%), [DC анализ])
$ (14.1*10^"-3")/(118.2*10^"-6") = 118 tilde.eq 120 $
Схема для исследования статического коэффициента усиления по току В при различных $R_к$ показана на рисунке 7:
#img(image("7.png", width:90%), [Схема 5])
Показатели, полученные при изменении $R_1$ на схеме 5:
#set text(12pt)
#align(center)[
#tablex(
columns: 8,
inset: 4pt,
align: center+horizon,
[$R_1$, Ом], [10], [100], [300], [600], [900], [1500], [5000],
[$I_б$, А], [0,000197], [0,000197], [0,000197], [0,000197], [0,000198], [0,000198], [0,0002],
[$I_к$, A], [0,0236], [0,0236], [0,0236], [0,018], [0,012], [0,0072], [0,0021],
[$U$, В], [10,7], [8,6], [3,9], [0,155], [0,126], [0,105], [0,07],
[b], [119,79695], [119,79695], [119,79695], [91,370558], [60,606061], [36,363636], [10,5]
)
]
#set text(14pt)
= Часть 2
Схема для исследования динамических характеристик при различном уровне входного сигнала показана на рисунке 8:
#img(image("8.png", width:90%), [Схема 6])
Графики Transient analyses для 5 В показаны на рисунках 9-11:
#img(image("9.png", width:90%), [Transient analyses для схемы 6])
#img(image("10.png", width:90%), [Начало фронта])
#img(image("11.png", width:90%), [Конец фронта])
#align(center)[
#tablex(
columns: 4,
inset: 7pt,
align: center+horizon,
[$E_r$, B], [$τ_ф$, мкс], [$τ_"рас"$, мкс], [$τ_с$, мкс],
[5], [4,8], [0,12], [2,42],
[7,5], [2,47], [0,33], [3,4],
[11], [1,81], [4,8], [3,8],
[12,5], [1,6], [5,5], [5,1]
)
]
Время формирования фронта для 11 В: $τ_ф = τ_в ln (S - 0.1)/(S - 0.9) = 1,9*10^"-6"$ – погрешность 5%, где
$
τ_в = 1/(2 π f_в) = 9,6*10^"-6" с\
f_в = (f_alpha)/(B+1) = 16528 "Гц"\
J_"б1" = (E_"вх" + Е_"см")/R_б - ((R_б + R_"см")(Е_"см"+U_"бэ"))/(R_"см" R_"б") = 1,9*10^"-4" А\
J_"б2" = U_"бэ"/R_"см" + Е_"см"/R_"см" = 6,6*10^"-5" А\
J_"бн" = Е_к/(B R_к) = 4,1*10^"-5" А\
S = J_"б1"/J_"бн" = 4,562
$
Время рассеивания для 9 В: $ τ_"рас" = τ_н ln (S J_"бн" + J_"б2")/(J_"бн" + J_"б2") = 5*10^"-6" arrow "погрешность" 4% $
Время среза для 9 В: $ τ_с = τ_в ln (J_"б1"/S + J_"б2")/J_"б2" = 4,7*10^"-6" arrow "погрешность 19%" $
= Часть 3
Исследование влияния форсирующего конденсатора показана на рисунке 12:
#img(image("12.png", width:90%), [Исследование влияния форсирующего конденсатора ])
График влияния форсирующего конденсатора с величиной 0,75пФ, показан на рисунке 13:
#img(image("13.png", width:90%), [График влияния форсирующего конденсатора с величиной 0,75пФ])
График влияния форсирующего конденсатора с величиной 20пФ, показан на рисунке 14:
#img(image("14.png", width:90%), [График влияния форсирующего конденсатора с величиной 20пФ])
График влияния форсирующего конденсатора с величиной 40пФ, показан на рисунке 15:
#img(image("15.png", width:90%), [График влияния форсирующего конденсатора с величиной 40пФ])
По графикам видно, что ток базы увеличивается и перезарядка емкостей проходит быстрее.
Исследование влияния конденсатора нагрузки показана на рисунке 16:
#img(image("16.png", width:90%), [Схема 8])
График влияния конденсатора нагрузки с величиной 0,5пФ, показан на рисунке 17:
#img(image("17.png", width:90%), [График влияния конденсатора нагрузки с величиной 0,5пФ])
График влияния конденсатора нагрузки с величиной 2пФ, показан на рисунке 18:
#img(image("18.png", width:90%), [График влияния конденсатора нагрузки с величиной 2пФ])
График влияния конденсатора нагрузки с величиной 10пФ, показан на рисунке 19:
#img(image("19.png", width:90%), [График влияния конденсатора нагрузки с величиной 10пФ])
По графикам видно, что ёмкостная нагрузка не влияет на время рассеивания и делает значение остальных параметров при увеличении ёмкости в цепи нагрузки.
Работа ключа с инверсным запиранием показана на рисунке 20:
#img(image("20.png", width:90%), [Работа ключа с инверсным запиранием])
График работы ключа с инверсным запиранием при $R_б = 1 "kОм", R_"см" = 250 "Ом" и R_к = 3 "kОм"$, показан на рисунке 21:
#img(image("21.png", width:90%), [График работы ключа с инверсным запиранием])
По графику видно, что рассеивание заряда сначала проходит у эмиттерного перехода. А также, что ток коллектора увеличивается, эмиттера уменьшается, а базы не меняется.
= Вывод
В ходе выполнения работы были исследованы статические режимы и переходные процессы в схеме простого транзисторного ключа.
|
|
https://github.com/SnowManKeepsOnForgeting/NoteofModernControlTheory | https://raw.githubusercontent.com/SnowManKeepsOnForgeting/NoteofModernControlTheory/main/Chapter2/Chapter2.typ | typst | #import "@preview/physica:0.9.3": *
#import "@preview/i-figured:0.2.4"
#set heading(numbering: "1.1")
#show math.equation: i-figured.show-equation.with(level: 2)
#show heading: i-figured.reset-counters.with(level: 2)
#set text(font: "CMU Serif")
#counter(heading).update(1)
= Description of State Space
== Definition
1. *Input variables*
We usually use $bold(u)_t = mat(delim: "[",u_1(t);u_2(t);dots.v;u_n(t))$ to represent input variables.
2. *State variables*
We usually use $bold(x)_t = mat(delim: "[",x_1(t);x_2(t);dots.v;x_n(t))$ to represent state variables.It is a least set to describe state of system.
3. *Output variables*
We usually use $bold(y)_t = mat(delim: "[",y_1(t);y_2(t);dots.v;y_n(t))$ to represent output variables.
4. *State equation*
State equation is a first order differential equation that describe relationship between input variables and state variables. We can write it as:
$
cases(accent(x,dot)_1 &= f_1(x_1,x_2,dots,x_n;u_1,u_2,dots,u_p,t),
accent(x,dot)_2 &= f_2(x_1,x_2,dots,x_n;u_1,u_2,dots,u_p,t),
&dots.v,
accent(x,dot)_n &= f_n (x_1,x_2,dots,x_n;u_1,u_2,dots,u_p,t))
$
Rewrite it as vector form:
$
bold(accent(x,dot))_t = bold(f)(bold(x)_t,bold(u)_t,t)
$
5. *Output equation*
Output equation is a equation that describe relationship between state variables and output variables. We can write it as:
$
cases(
y_1 &= g_1(x_1,x_2,dots,x_n;u_1,u_2,dots,u_p,t),
y_2 &= g_2(x_1,x_2,dots,x_n;u_1,u_2,dots,u_p,t),
&dots.v,
y_n &= g_n (x_1,x_2,dots,x_n;u_1,u_2,dots,u_p,t)
)
$
Rewrite it as vector form:
$
bold(y)_t = bold(g)(bold(x)_t,bold(u)_t,t)
$
6. *Description of State space of System*
We can describe state space of system by equations as:
$
cases(
bold(accent(x,dot))_t &= bold(f)(bold(x)_t,bold(u)_t,t),
bold(y)_t &= bold(g)(bold(x)_t,bold(u)_t,t)
)
$
When the system is linear, we can write it as:
$
cases(
bold(accent(x,dot)) &= bold(A)(t)bold(x) + bold(B)(t)bold(u),
bold(y) &= bold(C)(t)bold(x) + bold(D)(t)bold(u)
)
$
== Transfer function
The transfer function describes the relationship between the input and the output of a system. For a given system it is always the same regardless of the particular state trajectory; in other words, it does not depend on the state variables.
*Single input -- Single output system*
Given a linear single input-single output system,we have state space representation as:
$
cases(
bold(accent(x,dot)) &= bold(A)bold(x) + bold(B) u,
y &= bold(C)bold(x) + D u
)
$
To get transfer function,we can use Laplace transform to get:
$
s bold(X) - bold(x)(0) = bold(A)bold(X) + bold(B) U(s)\
Y(s) = bold(C)bold(X)(s) + D U(s)
$<->
#set align(center)
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
*Laplace transform*:
$
cal(L)[f(t)] = F(s) = integral_0^oo f(t) e^(-s t) dd(t)
$<->
$
cal(L)[k f(t)] = k F(s)
$<->
$
cal(L)[f(t) + g(t)] = F(s) + G(s)
$ <->
$
cal(L)[e^( -a t) f(t)] = F(s + a)
$<->
$
cal(L)[e^(a t) f(t)] = F(s - a)
$<->
$
cal(L)[f(t-T)] = e^(-s T) F(s)
$<->
$
cal(L)[f(a t)] = 1/a F(s/a)
$<->
$
cal(L)[dv(f,t)] = s F(s) - f(0)
$<->
$
cal(L)[dv(f,t,2)] = s^2 F(s) - s f(0) - f^'(0)
$<->
$
cal(L)[dv(f,t,n)] = s^n F(s) - s^(n-1) f(0) - s^(n-2) f^'(0) - dots - f^(n-1)(0)
$<->
$
cal(L)[integral_0^t f(t) dd(t)] = F(s)/s
$<->
$
f(oo) = lim_(s -> 0) s F(s)
$<->
$
f(0) = lim_(s -> oo) s F(s)
$<->
#table(
columns: 2,
table.header[$f(t)$][$F(s)$],
[$1$],[$1/s$],
[$t$],[$1/(s^2)$],
[$t^n$],[$n!/(s^(n+1))$],
[$e^(-a t)$],[$1/(s+a)$],
[$sin(omega t)$],[$(omega)/(s^2 + omega^2)$],
[$cos(omega t)$],[$(s)/(s^2 + omega^2)$],
[$u(t)$],[$1/s$],
[$delta(t)$],[$1$]
)
]
#set align(left)
The equations are organized as follows:
$
bold(X)(s) = (s bold(I) - bold(A))^(-1) [bold(x)(0) + bold(B) U(s)]\
Y(s) = bold(C)(s bold(I) - bold(A))^(-1) [bold(x)(0) + bold(B) U(s)] + D U(s)
$<->
Let initial condition be zero($bold(x)(0) = 0$),we can get:
$
Y(s) = [bold(C)(s bold(I) - bold(A))^(-1) bold(B) + D] U(s)
$<->
Thus,we can get transfer function as:
$
g(s) = Y(s)/U(s) = bold(C)(s bold(I) - bold(A))^(-1) bold(B) + D
$
Let $D = 0$,we can get:
$
g(s) = (bold(C) "adj"(s bold(I) - bold(A)) bold(B))/(det(s bold(I) - bold(A)))
$
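As a quick illustration (a worked example added here, not part of the original notes), take
$
bold(A) = mat(delim: "[",0,1;-2,-3), quad bold(B) = mat(delim: "[",0;1), quad bold(C) = mat(delim: "[",1,0), quad D = 0
$
Then $det(s bold(I) - bold(A)) = s^2 + 3 s + 2$ and $bold(C) "adj"(s bold(I) - bold(A)) bold(B) = 1$, so $g(s) = 1/(s^2 + 3 s + 2)$.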
*Multi input -- Multi output system*
Given a multi input-multi output system,we define transfer function between i-th out $y_i$ and j-th input $u_j$ as:
$
g_(i j)(s) = (Y_i (s))/(U_j (s))
$
where $Y_i (s)$ is Laplace transform of $y_i (t)$ and $U_j (s)$ is Laplace transform of $u_j (t)$.
Must mention that if we define transfer function in this way,we assume that all other inputs are zero.Because linear system satisfies the principle of superposition,so when we plus all inputs $U_1,U_2,dots,U_p$,we can get the i-th output $Y_i$ as:
$
Y_i = sum_(j=1)^p g_(i j) U_j
$
We can write it as matrix form:
$
bold(Y)(s) = bold(G)(s) bold(U)(s)
$
Thus given a linear multi input-multi output system,we have state space representation as:
$
cases(
bold(accent(x,dot)) &= bold(A)bold(x) + bold(B)bold(u),
bold(y) &= bold(C)bold(x) + bold(D)bold(u)
)
$
We can conduct as before to get transfer function as:
$
bold(G)(s) = bold(C)(s bold(I) - bold(A))^(-1) bold(B) + bold(D) = (bold(C) "adj"(s bold(I) - bold(A)) bold(B) + bold(D) "det"(s bold(I) - bold(A)))/(det(s bold(I) - bold(A)))
$
*Closed-loop System*
We have a closed-loop system as figure below:
#figure(image("pic/闭环系统.png", width: 50%), caption: [Closed-loop System])
We have:
$
bold(E)(s) = bold(u)(s) - bold(B)(s)\
bold(B)(s) = bold(H)(s)bold(y)(s) = bold(H)(s)bold(G)(s)bold(E)(s)\
bold(y)(s) = bold(G)(s)[bold(I) + bold(H)(s)bold(G)(s)]^(-1) bold(u)(s)
$<->
Thus the transfer function of closed-loop system is:
$
bold(G)_bold(H)(s) = bold(G)(s)[bold(I) + bold(H)(s)bold(G)(s)]^(-1)
$
*Regular*
We say a transfer function is regular if and only if when
$
lim_(s -> oo) g(s) = c
$where c is a constant.
And a transfer function is strictly regular if and only if when
$
lim_(s -> oo) g(s) = 0
$
== Establishing State Space Model by Differential Equation
Given a single input and single output system,if we have differential equation as:
$
y^((n)) + a_(n-1) y^((n-1)) + a_(n-2) y^((n-2)) + dots + a_0 y = b_n u^((n)) + b_(n-1) u^((n-1)) + dots + b_0 u
$<differential_equation>
where $m<=n$.
*Condition 1: $m = 0$*
We have:
$
y^((n)) + a_(n-1) y^((n-1)) + a_(n-2) y^((n-2)) + dots + a_0 y = b_0 u
$
We can define state variables as:
$
cases(
x_1 &= y,
x_2 &= y^((1)),
x_3 &= y^((2)),
&dots.v,
x_n &= y^((n-1))
)
$
We can get state equation as:
$
cases(
accent(x,dot)_1 = x_2,
accent(x,dot)_2 = x_3,
dots.v,
accent(x,dot)_(n-1) = x_n,
accent(x,dot)_n = -a_0 x_1 - a_1 x_2 - dots - a_(n-1) x_n + b_0u
)
$
We can rewrite it as vector form:
$
bold(accent(x,dot)) = mat(delim: "[",
0,1,dots,0;
dots.v,dots.v,dots.down,dots.v;
0,0,dots,1;
-a_0,-a_1,dots,-a_(n-1)
) bold(x) + mat(delim: "[",0;0;dots.v;b_0) \
y = mat(delim: "[",1,0,dots,0) bold(x)
$
*Condition 2:*$m eq.not n$
*Controllable Canonical Form Method:*
Let us note D as $dv(,t)$,we can rewrite @eqt:differential_equation as:
$
y = (b_m D^m + b_(m-1) D^(m-1) + b_(m-2) D^(m-2) + dots + b_0)/(D^n + a_(n-1) D^(n-1) + a_(n-2) D^(n-2) + dots +a_0) u
$<differential_equation_divide>
Let us discuss the case when $m<n$
Let
$
accent(y,tilde)^((n)) + a_(n-1) accent(y,tilde)^((n-1)) + a_(n-2) accent(y,tilde)^((n-2)) + dots + a_1 accent(y,tilde)^((1)) + a_0 accent(y,tilde) = u
$ Also as
$
accent(y,tilde) =1/(D^n + a_(n-1) D^(n-1) + a_(n-2) D^(n-2) + dots +a_0) u
$
we can get:
$
y = b_m accent(y,tilde)^((m)) + b_(m-1) accent(y,tilde)^((m-1)) + b_(m-2) accent(y,tilde)^((m-2)) + dots + b_0 accent(y,tilde)
$
We choose state variables as $x_1 = accent(y,tilde),x_2 = accent(y,tilde)^((1)) ,dots,x_n = accent(y,tilde)^((n-1)) $.We can get state equation as:
$
cases(
accent(x,dot)_1 = x_2,
accent(x,dot)_2 = x_3,
dots.v,
accent(x,dot)_(n-1) = x_n,
accent(x,dot)_n = -a_0 x_1 - a_1 x_2 - dots - a_(n-1) x_n + u
)
$
and output equation as:
$
y = b_0 x_1 + b_(1) x_2 + dots + b_m x_(m+1)
$
We can rewrite it as vector form:
$
bold(accent(x,dot)) = mat(delim: "[",
0,1,dots,0;
dots.v,dots.v,dots.down,dots.v;
0,0,dots,1;
-a_0,-a_1,dots,-a_(n-1)
) bold(x) + mat(delim: "[",0;0;dots.v;1)u \
y = [b_0,dots,b_m,0,dots,0] bold(x)
$
Let us discuss the case when $m=n$,we can rewrite @eqt:differential_equation_divide as:
$
y = [b_n + ((b_(n-1)-b_n a_(n-1))D^(n-1) + dots + (b_0 - b_n a_0))/(D^n +a_(n-1) D^(n-1) + dots + a_0)] u
$
Also let
$
accent(y,tilde)^((n)) + a_(n-1) accent(y,tilde)^((n-1)) + a_(n-2) accent(y,tilde)^((n-2)) + dots + a_1 accent(y,tilde)^((1)) + a_0 accent(y,tilde) = u
$
We can get:
$
y = (b_(n-1) -b_n a_(n-1))accent(y,tilde)^((n-1)) + (b_(n-2) - b_n a_(n-2))accent(y,tilde)^((n-2)) + dots + (b_0 - b_n a_0)accent(y,tilde) + b_n u
$
Thus we can write state equation in vector form in familiar way as:
$
bold(accent(x,dot)) = mat(delim: "[",
0,1,dots,0;
dots.v,dots.v,dots.down,dots.v;
0,0,dots,1;
-a_0,-a_1,dots,-a_(n-1)
) bold(x) + mat(delim: "[",0;0;dots.v;1)u \
y = [b_0 - b_n a_0,b_1 - b_n a_1,dots,b_(n-1) - b_n a_(n-1)] bold(x) + b_n u
$
*Undetermined Canonical Form Method:*
W.l.o.g,we assume that the equation is in the form of:
$
y^((n)) + a_(n-1) y^((n-1)) + dots + a_0 y = b_n u^((n)) + b_(n-1) u^((n-1)) + dots + b_0 u
$<wlog_diffeq>
We can define state variables as:
$
cases(
x_1 &= y - beta_0 u\
x_2 &= accent(x,dot)_1 - beta_1 u = accent(y,dot) - beta_0 accent(u,dot) - beta_1 u\
x_3 &= accent(x,dot)_2 - beta_2 u = accent(y,dot.double) - beta_0 accent(u,dot.double) - beta_1 accent(u,dot) - beta_2 u\
&dots.v\
x_n &= accent(x,dot)_(n-1)-beta_(n-1)u = y^((n-1)) - beta_0 u^((n-1)) - beta_1 u^((n-2)) - dots - beta_(n-1) u
)
$
Thus we have:
$
cases(
y = x_1 + beta_0 u\
accent(y,dot) = x_2 + beta_0 accent(u,dot) + beta_1 u\
accent(y,dot.double) = x_3 + beta_0 accent(u,dot.double) + beta_1 accent(u,dot) + beta_2 u\
#h(1em) dots.v\
y^((n-1)) = x_n + beta_0 u^((n-1)) + beta_1 u^((n-2)) + dots + beta_(n-1) u
)
$
Let us introduce a new variable $x_(n+1) = accent(x,dot)_n - beta_n u = y^((n)) - beta_0 u^((n)) - beta_1 u^((n-1)) - dots - beta_(n) u$. Thus we have:
$
y^((n)) = x_(n+1) + beta_0 u^((n)) + beta_1 u^((n-1)) + dots + beta_(n) u
$
Substitute $y,accent(y,dot),dots,y^((n))$ into @eqt:wlog_diffeq,we can get:
$
(x_(n+1) + a_(n-1) x_n + dots + a_0 x_1) + beta_0 u^((n)) + (beta_1 + a_(n-1) beta_0)u^((n-1)) + \ (beta_2 + a_(n-1) beta_1 + a_(n-2) beta_0)u^((n-2)) + dots + (beta_n + a_(n-1) beta_(n-1) + a_(n-2) beta_(n-2) + dots + a_0 beta_0)u \ = b_n u^((n)) + b_(n-1) u^((n-1)) + b_(n-2) u^((n-2)) + dots + b_0 u
$
Compare the coefficients of $u^((n)),u^((n-1)),dots,u$,we can get:
$
cases(
x_(n+1) + a_(n-1) x_n + dots + a_0 x_1 &= 0\
beta_0 &= b_n\
beta_1 + a_(n-1) beta_0 &= b_(n-1)\
beta_2 + a_(n-1) beta_1 + a_(n-2) beta_0 &= b_(n-2)\
#h(7em) dots.v\
beta_n + a_(n-1) beta_(n-1) + a_(n-2) beta_(n-2) + dots + a_0 beta_0 &= b_0
)
$
In summary,we can get state equation as:
$
cases(
accent(x,dot)_1 = accent(y,dot) - beta_0 accent(u,dot) = x_2 + beta_1 u \
accent(x,dot)_2 = accent(y,dot.double) - beta_0 accent(u,dot.double) - beta_1 accent(u,dot) = x_3 + beta_2 u\
#h(1.5em) dots.v\
accent(x,dot)_(n-1) = y^((n-1)) - beta_0 u^((n-1)) - beta_1 u^((n-2)) - dots - beta_(n-2) accent(u,dot) = x_n + beta_(n-1)u \
accent(x,dot)_n = y^((n)) - beta_0 u^((n)) - beta_1 u^((n-1)) - dots - beta_(n-1) accent(u,dot) = -a_0 x_1 - a_1 x_2 - dots - a_(n-1) x_n + beta_n u
)
$
and output equation as:
$
y = x_1 + beta_0 u
$
We can rewrite it as vector form:
$
vec(delim: "[",
accent(x,dot)_1,accent(x,dot)_2,dots.v,accent(x,dot)_n
) = mat(delim: "[",
0,1,0,dots,0;
0,0,1,dots,0;
dots.v,dots.v,dots.v,dots.down,dots.v;
0,0,0,dots,1;
-a_0,-a_1,-a_2,dots,-a_(n-1)
) vec(delim: "[",
x_1,x_2,dots.v,x_n
) +
vec(delim: "[",
beta_1,beta_2,dots.v,beta_n
)u\
y = [1,0,0,dots,0] vec(delim: "[",
x_1,x_2,dots.v,x_n
) + beta_0 u
$
== Establishing State Space Model by Transfer Function
For an actual physical system, the transfer function is always regular (proper).
First, let us discuss the strictly regular case, in which the order of the numerator of the transfer function is less than that of the denominator. If we have a differential equation of the system as:
$
y^((n)) + a_(n-1)y^((n-1)) + a_(n-2)y^((n-2)) + dots + a_1 accent(y,dot) + a_0 y = b_(n-1) u^((n-1)) + dots + b_1 accent(u,dot) + b_0 u
$
Then we have transfer function as:
$
g(s) = (Y(s))/(U(s)) = (b_(n-1) s^(n-1) + b_(n-2) s^(n-2) + dots + b_0)/(s^n + a_(n-1) s^(n-1)+ dots + a_1 s+a_0)
$
Introduce a intermediate variables $Z(s)$
We have:
$
g(s) = Y(s)/Z(s) Z(s)/U(s) = (b_(n-1) s^(n-1) + b_(n-2) s^(n-2) + dots + b_0) /1 1/(s^n + a_(n-1) s^(n-1)+ dots + a_(1)s+a_0)
$
Let us do inverse Laplace transform of $Z(s)$,we can get:
$
cases(
y = b_(n-1) z^((n-1)) + b_(n-2) z^((n-2)) + dots + b_1 accent(z,dot) + b_0 z \
z^((n)) + a_(n-1) z^((n-1)) + a_(n-2) z^((n-2)) + dots + a_1 accent(z,dot) + a_0 z= u\
)
$
We can define state variables as $x_1 = z,x_2 = accent(z,dot),x_3 = accent(z,dot.double),x_n = z^((n-1))$.We have state equation as:
$
cases(
accent(x,dot)_1 = x_2,
accent(x,dot)_2 = x_3,
dots.v,
accent(x,dot)_(n-1) = x_n,
accent(x,dot)_n = -a_0 x_1 - a_1 x_2 - dots - a_(n-1) x_n + u
)
$
And output equation as:
$
y = b_0 x_1 + b_1 x_2 + dots + b_(n-1) x_n
$
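For instance (an illustrative example, not from the original notes), the strictly regular transfer function
$
g(s) = (s + 3)/(s^2 + 3 s + 2)
$
has $a_0 = 2, a_1 = 3, b_0 = 3, b_1 = 1$, so the construction above gives
$
accent(bold(x),dot) = mat(delim: "[",0,1;-2,-3) bold(x) + mat(delim: "[",0;1) u, quad y = mat(delim: "[",3,1) bold(x)
$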
Let us discuss the case when the order of the numerator of the transfer function is the same as that of the denominator. We have the transfer function as:
$
g(s) = (Y(s))/(U(s)) = (b_n s^n + b_(n-1) s^(n-1) + dots + b_0)/(s^n + a_(n-1) s^(n-1)+ dots + a_1 s+a_0)\
= b_n + ((b_(n-1) - b_n a_(n-1)) s^(n-1) + (b_(n-2) - b_n a_(n-2)) s^(n-2) + dots + (b_0 - b_n a_0))/(s^n + a_(n-1) s^(n-1)+ dots + a_1 s+a_0)
$
Let us note $h(s)$ as intermediate transfer function:
$
h(s) = (beta_(n-1)s^(n-1) + beta_(n-2)s^(n-2) + dots + beta_0)/(s^n + a_(n-1) s^(n-1)+ dots + a_1 s+a_0)
$
where $beta_i = b_i - b_n a_i$ for $i = 0,1,dots,n-1$.
We do the same thing as before,we can get:
$
cases(
y = beta_(n-1) z^((n-1)) + beta_(n-2) z^((n-2)) + dots + beta_1 accent(z,dot) + beta_0 z \
z^((n)) + a_(n-1) z^((n-1)) + a_(n-2) z^((n-2)) + dots + a_1 accent(z,dot) + a_0 z= u\
)
$
And
$
cases(
accent(x,dot)_1 = x_2,
accent(x,dot)_2 = x_3,
dots.v,
accent(x,dot)_(n-1) = x_n,
accent(x,dot)_n = -a_0 x_1 - a_1 x_2 - dots - a_(n-1) x_n + u
)
$
For output equation,all we need is to add a $b_n u$ term.
$
y = beta_0 x_1 + beta_1 x_2 + dots + beta_(n-1) x_n + b_n u
$
== Linear Transformation
Given a state vector $bold(x)$, a linear combination of its entries, $accent(bold(x),macron)$, is again a valid state vector if and only if the corresponding transformation matrix $bold(P)$ is invertible.
$
bold(x) = bold(P) accent(bold(x),macron)
$
In other words:
$
accent(bold(x),macron) = bold(P)^(-1) bold(x)
$
Let us discuss what happens if we apply a linear transformation to a *linear system*.
Given a linear system as:
$
cases(
bold(accent(x,dot)) &= bold(A) bold(x) + bold(B) bold(u),
bold(y) &= bold(C) bold(x) + bold(D) bold(u)
)
$
Let $bold(x) = bold(P) accent(bold(x),macron)$,we have:
$
cases(
accent(accent(bold(x),macron),dot) = bold(P)^(-1) bold(A) bold(P) accent(bold(x),macron) + bold(P)^(-1) bold(B) bold(u)\
bold(y) = bold(C) bold(P) accent(bold(x),macron) + bold(D) bold(u)
)
$
We have:
$
accent(bold(A),macron) = bold(P)^(-1) bold(A) bold(P)
$
$
accent(bold(B),macron) = bold(P)^(-1) bold(B)
$
$
accent(bold(C),macron) = bold(C) bold(P)
$
$
accent(bold(D),macron) = bold(D)
$
Let us try to transform state equations to *diagonal canonical form*.
Given a state equation as:
$
accent(bold(x),dot) = bold(A) bold(x) + bold(B) bold(u)
$
The eigenvalues of the system are defined by:
$
det(lambda bold(I) - bold(A)) = 0
$
*Diagonal Canonical Form*
*If the matrix $bold(A)$ has $n$ linearly independent eigenvectors* (the geometric multiplicities of its eigenvalues sum to the order of the system), we can transform the state equation to diagonal canonical form by a linear transformation.
Let $bold(P) = mat(delim: "[",bold(v_1),bold(v_2),dots,bold(v_n))$, whose columns are the eigenvectors, so that $accent(bold(A),macron) = bold(P)^(-1) bold(A) bold(P)$ is diagonal. Then the state equation can be transformed to diagonal form as:
$
accent(accent(bold(x),macron),dot) = mat(delim: "[",lambda_1,0,dots,0;0,lambda_2,dots,0;dots.v,dots.v,dots.down,dots.v;0,0,dots,lambda_n) accent(bold(x),macron) + accent(bold(B),macron) bold(u)
$
where $accent(bold(B),macron)=bold(P)^(-1) bold(B)$.
*Trick*
If $bold(A)$ is a companion matrix, then the state equation can be transformed to diagonal canonical form by the transformation matrix $bold(P)$, where $bold(P)$ is the Vandermonde matrix built from the eigenvalues.
$
bold(A) = mat(
delim: "[",0,1,0,dots,0;
0,0,1,dots,0;
dots.v,dots.v,dots.v,dots.down,dots.v;
0,0,0,dots,1;
-a_0,-a_1,-a_2,dots,-a_(n-1)
),bold(P) = mat(delim: "[",1,1,dots,1;lambda_1,lambda_2,dots,lambda_n;lambda_1^2,lambda_2^2,dots,lambda_n^2;dots.v,dots.v,dots.down,dots.v;lambda_1^(n-1),lambda_2^(n-1),dots,lambda_n^(n-1))
$
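As a small check of this trick (an added example), take
$
bold(A) = mat(delim: "[",0,1;-2,-3), quad bold(P) = mat(delim: "[",1,1;-1,-2), quad bold(P)^(-1) bold(A) bold(P) = mat(delim: "[",-1,0;0,-2)
$
since the eigenvalues are $lambda_1 = -1$ and $lambda_2 = -2$.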
*Jordan Canonical Form*
*If the geometric multiplicity of the system is less than the order of the system*,we can transform the state equation to jordan canonical form by linear transformation.
For eigenvalues whose geometric multiplicity is less than their algebraic multiplicity, let $bold(v_i)$ be a corresponding eigenvector ($lambda_i bold(v_i) = bold(A) bold(v_i)$); we define the generalized eigenvectors as:
$
cases(
(lambda_i bold(I) - bold(A)) bold(v_i) &= 0\
(lambda_i bold(I) - bold(A)) bold(v_(i)^(')) &=- bold(v_i)\
(lambda_i bold(I) - bold(A)) bold(v_(i)^('')) &=- bold(v_(i)^('))\
#h(3em) dots.v\
(lambda_i bold(I) - bold(A)) bold(v_(i)^(sigma_i)) &=- bold(v_(i)^(sigma_(i-1)))
)
$
Then let $bold(P) = mat(delim: "[",bold(v_1),bold(v_1^(')),dots,bold(v_1^(sigma_1)),bold(v_2),bold(v_2^(')),dots,bold(v_2^(sigma_2)),dots)$, whose columns are the chains of (generalized) eigenvectors. Then the state equation can be transformed to Jordan canonical form as:
$
accent(accent(bold(x),macron),dot) = mat(delim: "[",bold(J)_1,0,dots,0;0,bold(J)_2,dots,0;dots.v,dots.v,dots.down,dots.v;0,0,dots,bold(J)_n) accent(bold(x),macron) + accent(bold(B),macron) bold(u)
$
where $bold(J)_i$ is a jordan block corresponding to eigenvalue $lambda_i$ and $accent(bold(B),macron)=bold(P)^(-1) bold(B)$.
*Modal Form*
If the eigenvalues of the system are complex numbers,we can transform the state equation to modal form.
Let
$
lambda_1 = sigma + omega i ,lambda_2 = sigma - omega i
$
In this situation,the modal form of A is
$
bold(M) = mat(delim: "[",sigma,omega;-omega,sigma)
$
Let $bold(v_1)$ be the eigenvector of $lambda_1$ ($lambda_1 bold(v_1) = bold(A) bold(v_1)$).
$
bold(v_1) = bold(alpha) + bold(beta) i
$
Then the transformation matrix $bold(P)$ is $mat(delim: "[",bold(alpha),bold(beta))$, since $bold(A) bold(P) = bold(P) bold(M)$.
|
https://github.com/EpicEricEE/typst-marge | https://raw.githubusercontent.com/EpicEricEE/typst-marge/main/tests/parameter/side/test.typ | typst | MIT License | #import "/src/lib.typ": sidenote
#set par(justify: true)
#set page(width: 11cm, height: auto, margin: (x: 4cm, rest: 5mm))
#let sidenote = sidenote.with(numbering: "1")
#lorem(4)
#sidenote[This note is on the outside.]
#lorem(4)
#sidenote(side: "inside")[This note is on the inside.]
#lorem(4)
#sidenote(side: right)[This note is on the right.]
#lorem(4)
#sidenote(side: left)[This note is on the left.]
#lorem(4)
#set page(margin: (right: 4cm, rest: 5mm))
#lorem(4)
#sidenote[This note is on the right.]
#lorem(7)
|
https://github.com/Myriad-Dreamin/apollo-typst | https://raw.githubusercontent.com/Myriad-Dreamin/apollo-typst/main/typ/template/pages.typ | typst | #import "@preview/shiroa:0.1.0": *
#import "@preview/typst-apollo:0.1.0": pages
#import pages: *
#let blog-page = project
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/dashy-todo/0.0.1/lib/todo.typ | typst | Apache License 2.0 | #import "place-in-page-margin.typ": place-in-page-margin
#let to-string(content) = {
if type(content) == str {
content
} else if content.has("text") {
content.text
} else if content.has("children") {
content.children.map(to-string).join("")
} else if content.has("body") {
to-string(content.body)
} else if content == [ ] {
" "
}
}
#let todo(body, position: auto) = box(context {
assert(position in (auto, left, right), message: "Can only position todo on the left or right side currently")
let text-position = here().position()
place-in-page-margin(cur-pos: text-position, position: position)[
// shift the box slightly upwards for styling reasons
#let shift-y = .5em
#move(dy: -shift-y)[
#box(inset: 4pt, width: 100%)[
#box(stroke: orange, width: 100%)[
#place(
layout(size => (
context {
let cur = here().position()
let is-left = cur.x < page.width / 2
// defaults for right side
let line-size = cur.x - text-position.x
let line-x = -line-size
let tick-x = -line-size
// overwrites for left side
if is-left {
line-size = text-position.x - cur.x - size.width
line-x = size.width
tick-x = size.width + line-size
}
place(line(length: line-size, start: (line-x, shift-y), stroke: orange))
place(line(length: 4pt, start: (tick-x, shift-y), angle: -90deg, stroke: orange))
}
)),
)
// the todo message
#box(body, inset: 0.2em)
]
// invisible figure, s.t. we can reference it in the outline
// probably depends on https://github.com/typst/typst/issues/147 for a cleaner solution
#hide(
box(
height: 0pt,
width: 0pt,
figure(
none,
kind: "todo",
supplement: [TODO],
caption: to-string(body),
outlined: true,
),
),
)
]
]
]
}) |
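
// Usage sketch (illustrative):
//   #todo[Rewrite this paragraph]
//   #todo(position: left)[Check this reference]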
https://github.com/noahjutz/AD | https://raw.githubusercontent.com/noahjutz/AD/main/notizen/sortieralgorithmen/quicksort/quicksort.typ | typst | #import "/config.typ": theme
#import "@preview/cetz:0.2.2"
#let swap_trace(trace, i, j) = {
let k = trace.position(n => n == i)
let l = trace.position(n => n == j)
(trace.at(k), trace.at(l)) = (trace.at(l), trace.at(k))
return trace
}
#let partition(nums) = {
let trace = range(nums.len())
let j = 1
for i in range(1, nums.len()) {
if nums.at(i) <= nums.at(0) {
(nums.at(i), nums.at(j)) = (nums.at(j), nums.at(i))
trace = swap_trace(trace, i, j)
j += 1
}
}
(nums.at(0), nums.at(j - 1)) = (nums.at(j - 1), nums.at(0))
trace = swap_trace(trace, 0, j - 1)
return (
nums.slice(0, j - 1),
(nums.at(j - 1),),
nums.slice(j, nums.len()),
trace
)
}
#let step(parts) = {
let out_parts = ()
let out_swaps = ()
let i = 0
for part in parts {
if part.len() == 0 {
continue
}
if part.len() == 1 {
out_parts.push(part)
out_swaps.push(i)
i += 1
continue
}
let (l, p, r, s) = partition(part)
out_parts += (l, p, r)
out_swaps += s.map(s => s + i)
i += (l + p + r).len()
}
return (out_swaps, out_parts)
}
#let quicksort_rows(parts) = {
if parts.all(p => p.len() <= 1) {
return ()
}
let (swaps, parts) = step(parts)
return (swaps, parts) + quicksort_rows(parts)
}
#let num_row(parts) = table(
columns: (1fr,) * parts.flatten().len(),
align: center + horizon,
..parts.map(p => {
if p.len() == 1 {
table.cell(
fill: theme.success_light,
str(p.at(0))
)
} else {
p.enumerate().map(((i, n)) => {
table.cell(
fill: if i == 0 {theme.tertiary_light}
else if n <= p.first() {theme.primary_light}
else {theme.secondary_light},
str(n)
)
})
}
}).flatten()
)
#let arrow_row(parts, swaps) = cetz.canvas(length: 100%, {
import cetz.draw: *
let kind(parts, index) = {
let i = 0
for x in parts {
let j = 0
for y in x {
if i == index {
return if x.len() == 1 {
"pass_through"
} else if j == 0 {
"pivot"
} else if y <= x.first() {
"left"
} else {
"right"
}
}
i += 1
j += 1
}
}
}
let arrow = group({
line((), (rel: (0, -12pt)))
arc((), start: 180deg, stop: 270deg, radius: 4pt)
line((), (rel: (8pt, 0)),
mark: (end: ">")
)
})
let n = swaps.len()
let height = 20pt
line((0, 0), (1, -height), stroke: none)
translate(x: .5/n)
for (from, to) in swaps.enumerate() {
let from_x = from / n
let to_x = to / n
let kind = kind(parts, from)
if kind == "pivot" {
on-layer(1, {
stroke(2pt)
bezier(
(from_x, 0),
(to_x, -height),
(from_x, -height/2),
(to_x, -height/2),
mark: (end: ">")
)
})
} else if kind == "right" {
group({
stroke(theme.secondary_trans)
bezier(
(from_x, 0),
(to_x, -height),
(from_x, -height/2),
(to_x, -height/2),
mark: (end: ">")
)
})
} else if kind == "left" {
group({
stroke(theme.primary_trans)
bezier(
(from_x, 0),
(to_x, -height),
(from_x, -height/2),
(to_x, -height/2),
mark: (end: ">")
)
})
}
}
})
#let quicksort(..nums) = {
let rows = quicksort_rows((nums.pos(),))
rows.insert(0, (nums.pos(),))
rows.push(none)
block(breakable: false, {
set block(above: 0pt)
for (parts, swaps) in rows.chunks(2) {
num_row(parts)
if swaps != none {
arrow_row(parts, swaps)
}
}
})
} |
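
// Usage sketch (illustrative): render the partition/swap trace for a small input:
// #quicksort(3, 8, 2, 7, 5, 1)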
|
https://github.com/kdog3682/mathematical | https://raw.githubusercontent.com/kdog3682/mathematical/main/0.1.0/src/draw/utils.typ | typst |
#import "@preview/cetz:0.2.0"
#let translation(c, x: 0, y: 0) = {
cetz.draw.translate(x: x, y: y)
c
cetz.draw.translate(x: -x, y: -y)
}
#let offset((a, b), x: 0, y: 0) = {
(a + x, b + y)
}
#let wrapper(c, x, y, name: none) = {
cetz.draw.group(name: name, ctx => translation(c, x: x, y: y))
}
#let rel(x, y, ..sink) = {
let args = sink.pos()
if args.len() == 1 {
return (rel: (x, y), to: args.first())
} else {
return (rel: (x, y))
}
}
|
|
https://github.com/jomaway/typst-teacher-templates | https://raw.githubusercontent.com/jomaway/typst-teacher-templates/main/examples/exam/mc.typ | typst | MIT License | #import "@local/ttt-exam:0.1.0": assignment, multiple-choice
#let data = toml("quizzes.toml")
#assignment[Kreuzen Sie die richtige Lösung an.
#for mct in data.questions {
multiple-choice(mct)
}
]
|
https://github.com/kdog3682/mathematical | https://raw.githubusercontent.com/kdog3682/mathematical/main/0.1.0/src/demos/fractions-from-ratios.typ | typst | #import "@local/typkit:0.1.0": *
#import "@local/mathematical:0.1.0": *
#let fractions-from-ratios(..sink) = {
  let args = sink.pos()
assert-is-color-value-object-array(args)
let values = args.map((x) => x.value)
let d = values.sum()
let callback(o) = {
return colored(fraction(o.value, d), o.fill)
}
let fractions = args.map(callback)
  // color each part of the original ratio and join the parts with colon marks
  let ratio-items = args.map((o) => colored(o.value, o.fill)).join(marks.math.colon)
  // sketch of the intended output (assumption): the colored ratio followed by the equivalent fractions
  $ #ratio-items quad arrow.r quad #fractions.join([#h(0.5em)]) $
}
#fractions-from-ratios(
(fill: "blue", value: 4),
(fill: "purple", value: 5),
)
|
|
https://github.com/7sDream/fonts-and-layout-zhCN | https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/07-localisation/localisation.typ | typst | Other | #import "/template/template.typ": web-page-template
#import "/template/heading.typ": chapter
#import "/template/components.typ": note, title-ref
#import "/template/lang.typ": arabic, arabic-amiri, balinese, devanagari, hind, sharada, taitham, telugu
#import "/lib/glossary.typ": tr
#show: web-page-template
#chapter[
// OpenType for Global Scripts
服务#tr[global scripts]的OpenType
]
// In the last chapter, we looked at OpenType features from the perspective of technology: what cool things can we make the font do? In this chapter, however, we're going to look from the perspective of language: how do we make the font support the kind of language features we need? We'll be putting together the substitution and positioning lookups from OpenType Layout that we learnt about in the previous chapter, and using them to create fonts which behave correctly and beautifully for the needs of different scripts and language systems.
上一章中,我们在分析各种OpenType特性时采用的是技术视角,也就是利用它们能做到哪些事。本章我们则会站在各种语言的角度,来看看它如何完成特定语言中的具体需求。通过结合之前学习的#tr[substitution]和#tr[positioning]#tr[lookup],我们要为不同的语言#tr[script]系统创建既正确又漂亮的字体。
|
https://github.com/csimide/cuti | https://raw.githubusercontent.com/csimide/cuti/master/demo-and-doc/demo-and-doc.typ | typst | MIT License | #import "../lib.typ": *
#import "./otr/utils.typ": *
#set page(margin: 2cm)
#set par(justify: true)
#show raw.where(block: false): set text(font: "Fira Code")
#show heading.where(level: 1): it => {pagebreak(weak: true); it}
#show heading.where(level: 2): it => {line(length: 100%); it}
= Cuti Demo & Doc
== Introduction
Cuti #footnote[/kjuːti/.] is a package designed for simulating fake bold / fake italic.
== Demo
=== Part 1: `font: ("Times New Roman", "SimSun")`
#block(
fill: rgb("dcedc8"),
stroke: 0.5pt + luma(180),
inset: 10pt,
width: 100%,
)[
#set text(font: ("Times New Roman", "SimSun"))
#grid(
columns: 2,
column-gutter: 0.2em,
row-gutter: 0.6em,
[Regular:], [你说得对,但是《Cuti》是一个用于伪粗体和伪斜体的包。],
[Bold(Font Only):], text(weight: "bold")[你说得对,但是《Cuti》是一个用于伪粗体和伪斜体的包。],
[Bold(Fake Only):], fakebold[你说得对,但是《Cuti》是一个用于伪粗体和伪斜体的包。],
[Bold(Fake+Font):], show-cn-fakebold(text(weight: "bold")[你说得对,但是《Cuti》是一个用于伪粗体和伪斜体的包。]),
[Italic(Font Only):], text(style: "italic")[你说得对,但是《Cuti》是一个用于伪粗体和伪斜体的包。],
[Italic(Fake Only):], fakeitalic[你说得对,但是《Cuti》是一个用于伪粗体和伪斜体的包。],
[Italic(Fake+Font):], regex-fakeitalic(reg-exp: "[\p{script=Han} !-・〇-〰—]", text(style: "italic")[你说得对,但是《Cuti》是一个用于伪粗体和伪斜体的包。]),
)
]
=== Part 2: `font: "Source Han Serif SC"`
#block(
fill: rgb("dcedc8"),
stroke: 0.5pt + luma(180),
inset: 10pt,
width: 100%,
)[
#set text(font: "Source Han Serif SC")
#grid(
columns: 2,
column-gutter: 0.2em,
row-gutter: 0.6em,
[Regular:], [前面忘了。同时,逐步发掘「Typst」的奥妙。],
[Bold(Font Only):], text(weight: "bold")[前面忘了。同时,逐步发掘「Typst」的奥妙。],
[Bold(Fake Only):], fakebold[前面忘了。同时,逐步发掘「Typst」的奥妙。],
[Bold(Fake+Font):], show-cn-fakebold(text(weight: "bold")[前面忘了。同时,逐步发掘「Typst」的奥妙。])
)
]
= Fake Bold
Cuti simulates fake bold by utilizing the `stroke` attribute of `text`. This package is typically used on fonts that do not have a `bold` weight, such as "SimSun". This package uses 0.02857em as the parameter for stroke. In Microsoft Office software, enabling fake bold will apply a border of about 0.02857em to characters. This is where the value of 0.02857em is derived from. (In fact, the exact value may be $1/35$.)
== fakebold
`#fakebold[]` with no parameter will apply the #fakebold[fakebold] effect to characters.
#example(
```typst
- Fakebold: #fakebold[#lorem(5)]
- Bold: #text(weight: "bold", lorem(5))
- Bold + Fakebold: #fakebold[#text(weight: "bold", lorem(5))]
```
)
`#fakebold[]` accepts the same parameters as `#text`. In particular, if the `weight` parameter is specified, the stroke is applied on top of that font weight. If `weight` is not specified, the baseline font weight is inherited from the context. Any `stroke` parameter passed in is ignored.
#example(
```typst
- Bold + Fakebold: #fakebold(weight: "bold")[#lorem(5)]
- Bold + Fakebold: #set text(weight: "bold"); #fakebold[#lorem(5)]
```
)
*Note:* The `base-weight` parameter used by `cuti:0.2.0` is still retained to ensure compatibility.
== regex-fakebold
The `#regex-fakebold` is designed to be used in multilingual and multi-font scenarios. It allows the use of a RegExp string as the `reg-exp` parameter to match characters that will have the fake bold effect applied. It can also accept the same parameters as `#text`.
#example(
```typst
+ RegExp `[a-o]`: #regex-fakebold(reg-exp: "[a-o]")[#lorem(5)]
+ RegExp `\p{script=Han}`: #regex-fakebold(reg-exp: "\p{script=Han}")[衬衫的价格是9磅15便士。]
+ RegExp `\p{script=Han}`: #set text(weight: "bold"); #regex-fakebold(reg-exp: "\p{script=Han}")[衬衫的价格是9磅15便士。]
```
)
In Example \#3, `9` and `15` are the real bold characters from the font file, while the other characters are simulated as "fake bold" based on the `regular` weight.
If the `fill` parameter of `#text` is set to a specific color or gradient, the fake bold outline will also change to the corresponding color.
#example(
```typst
- Blue + Fakebold: #fakebold(fill: blue)[花生瓜子八宝粥,啤酒饮料矿泉水。#lorem(5)]
- Gradient + Fakebold: #set text(fill: gradient.conic(..color.map.rainbow)); #fakebold[花生瓜子八宝粥,啤酒饮料矿泉水。#lorem(5)]
```
)
== show-fakebold
In multilingual and multi-font scenarios, different languages often utilize their own fonts, but not all fonts contain the `bold` weight. It can be inconvenient to use `#fakebold` or `#regex-fakebold` each time we require `strong` or `bold` effects. Therefore, the `#show-fakebold` function is introduced for `show` rule.
The `show-fakebold` function shares the same parameters as `regex-fakebold`. By default, `show-fakebold` will apply the RegExp `"."`, which means all characters with the `strong` or `weight: "bold"` property will be fakebolded if the corresponding show rule has been set.
#example(
```typst
#show: show-fakebold
- Regular: #lorem(10)
- Bold: #text(weight: "bold")[#lorem(10)]
```
)
Typically, the combination of bold + fakebold is not the desired effect. It is usually necessary to specify the RegExp to indicate which characters should utilize the fakebold effect.
#example(
```typst
#show: show-fakebold.with(reg-exp: "\p{script=Han}")
- Regular: 我正在使用 Typst 排版。
- Strong: *我正在使用 Typst 排版。*
```
)
It can also accept the same parameters as `#text`.
== cn-fakebold & show-cn-fakebold
`cn-fakebold(`#typebox[content]`)`
`show-cn-fakebold(`#typebox[content]`)`
`cn-fakebold` and `show-cn-fakebold` are encapsulations of the above `regex-fakebold` and `show-fakebold`, pre-configured for use with Chinese text. Please refer to the Chinese documentation for detailed usage instructions.
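A minimal usage sketch, mirroring the `show-fakebold` examples above (this assumes the pre-configured Chinese regular expression shipped with the package):

#example(
```typst
#show: show-cn-fakebold
- Regular: 我正在使用 Typst 排版。
- Strong: *我正在使用 Typst 排版。*
```
)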
= Fake Italic
The `skew` function used in cuti is from typst issue #2749 (https://github.com/typst/typst/issues/2749) by Enivex.
Cuti simulates fake italic by utilizing `rotate` and `scale`. This package uses $-0.32175$ as the default angle. In Microsoft Office software, enabling fake italic will apply a $arctan(1/3)$ skew effect to characters. Please note that due to different English fonts having varying skew angles, you may need to find a suitable angle on your own. If using Times New Roman alongside SimSun, the default angle is relatively appropriate.
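For reference, the two numbers describe the same slant (a small arithmetic note):

$ arctan(1/3) approx 0.32175 "rad" approx 18.43 degree $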
== fakeitalic
`fakeitalic(` \
#h(2em) `ang:` #typebox[angle] default: `-0.32175,` \
#h(2em) #typebox[content] \
`)`
`#fakeitalic[]` will apply the #fakeitalic[fakeitalic]#h(1pt) effect to characters.
#example(
```typst
- Regular: #lorem(5)
- Italic: #text(style: "italic", lorem(5))
- Fakeitalic: #fakeitalic[#lorem(5)]
- Fakeitalic + Fakebold: #fakeitalic[#fakebold[#lorem(5)]]
```
)
The angle of skew can be adjusted through the `ang` parameter.
#example(
```typst
- -10deg: #fakeitalic(ang: -10deg)[#lorem(5)]
- -20deg: #fakeitalic(ang: -20deg)[#lorem(5)]
- +20deg: #fakeitalic(ang: 20deg)[#lorem(5)]
```
)
== regex-fakeitalic
`regex-fakeitalic(` \
#h(2em) `reg-exp:` #typebox[str] default: `"[^ ]",` \
#h(2em) `ang:` #typebox[angle] `,` \
#h(2em) `spacing:` #typebox[relative] #typebox[none] default: #typebox[none] `,` \
#h(2em) #typebox[content] \
`)`
The `#regex-fakeitalic` is designed to be used in multilingual and multi-font scenarios. It allows the use of a RegExp string as the `reg-exp` parameter to match characters that will have the fake italic effect applied. It also accepts the `ang` parameter.
#example(
```typst
+ RegExp `[a-o]`: #regex-fakeitalic(reg-exp: "[a-o]")[#lorem(5)]
+ RegExp `\p{script=Han}`: #regex-fakeitalic(reg-exp: "\p{script=Han}")[衬衫的价格是9磅15便士。]
+ RegExp `\p{script=Han}`: #set text(style: "italic"); #regex-fakeitalic(reg-exp: "\p{script=Han}", ang: -10deg)[衬衫的价格是9磅15便士。]
```
)
In Example \#3, `9` and `15` are the real italic characters from the font file, while the other characters are simulated as "fake italic".
== Issues at hand
The current implementation of faux italics disrupts spacing, particularly the spacing between symbols and characters. This is especially evident in the demo. |
https://github.com/Isaac-Fate/booxtyp | https://raw.githubusercontent.com/Isaac-Fate/booxtyp/master/src/sectioning.typ | typst | Apache License 2.0 | #import "colors.typ": color-schema
#import "equation.typ": equation-counter
#import "counters.typ": figure-counter, theorem-counter, definition-counter, example-counter, exercise-counter
#let chapter-rules(body) = {
// Handle level 1 headings with numbering "1.1"
show heading.where(level: 1, numbering: "1.1"): it => {
// Add a page break
// Do not add a page break if the chapter is the first one
// or the previous page is already blank
pagebreak(weak: true)
// Reset counters
locate(loc => {
let heading-numbers = counter(heading).at(loc)
figure-counter.update((..heading-numbers, 1))
theorem-counter.update((..heading-numbers, 1))
definition-counter.update((..heading-numbers, 1))
example-counter.update((..heading-numbers, 1))
exercise-counter.update((..heading-numbers, 1))
equation-counter.update((..heading-numbers, 1))
})
    // Set the text style
set text(fill: color-schema.blue.dark, size: 2.5em)
// Get the chapter number
let chapter-number = locate(loc => {
let numbers = counter(heading).at(loc)
let number = numbering("1", ..numbers)
return number
})
[Chapter #chapter-number]
parbreak()
it.body
// Add some space below the title
v(1.7em)
}
// Handle level 1 headings without numbering
show heading.where(level: 1, numbering: none): it => {
// Add a page break
// Do not add a page break if the chapter is the first one
// or the previous page is already blank
pagebreak(weak: true)
    // Set the text style
set text(fill: color-schema.blue.dark, size: 2.5em)
// Get the chapter number
let chapter-number = locate(loc => {
let numbers = counter(heading).at(loc)
let number = numbering("1", ..numbers)
return number
})
it.body
// Add some space below the title
v(1.7em)
}
// The rest of the document
body
}
#let section-rules(body) = {
set heading(numbering: "1.1")
show heading.where(level: 2): it => {
// Reset counters
locate(loc => {
let heading-numbers = counter(heading).at(loc)
figure-counter.update((..heading-numbers, 1))
theorem-counter.update((..heading-numbers, 1))
definition-counter.update((..heading-numbers, 1))
example-counter.update((..heading-numbers, 1))
exercise-counter.update((..heading-numbers, 1))
equation-counter.update((..heading-numbers, 1))
})
    // Set the text style
set text(fill: color-schema.blue.dark, size: 16pt)
// Add some space above the title
v(1.5em)
it
// Add some space below the title
v(1.0em)
}
// The rest of the document
body
}
|
https://github.com/OverflowCat/BUAA-Automatic-Control-Components-Sp2024 | https://raw.githubusercontent.com/OverflowCat/BUAA-Automatic-Control-Components-Sp2024/neko/实验/3.typ | typst | #set text(lang: "zh", font: "Noto Serif CJK SC")
#show "。": "."
= Experiment 3: Experiments on the Three-Phase Asynchronous Motor
== Experiment (1): Starting the three-phase asynchronous motor and reversing its direction of rotation
=== 2. Wiring and test for direct-on-line starting of the three-phase asynchronous motor
Observe the maximum current at the instant of starting. The starting process was repeated 5 times; the largest measured value is $ I_"st" = 1.12" A". $
=== 3. Wiring and test for Y-Δ (star-delta) starting of the three-phase asynchronous motor
Observe the maximum current at the instant of starting. The starting process was repeated 5 times; the largest measured value is $ I_"st" = 0.56" A". $
A qualitative comparison of the Y-connected starting current with that of direct starting: the starting current with the Y connection is $display(1/3)$ of that with the Δ connection.
=== 4. Wiring and test for reduced-voltage speed regulation of the three-phase asynchronous motor
Starting from the line voltage $U_"BC" = U_"N" = 220" V"$, reduce the AC output voltage step by step until the speed $n$ of motor $M$ falls to $0 r/min$ (note: once the motor speed starts to drop noticeably, adjust the voltage regulator slowly). During this process, record the input voltage $U_1$ of motor $M$ (read from the $V$ meter) and the output speed $n$. Take 6-8 data sets in total.
#figure(
caption: "降压调速试验",
table(
columns: 10,
[$U_1$ (V)], [220], [100], [70], [60], [50], [48], [47], [46], [40],
[$n$ (r/min)], [-1495], [-1484], [-1463], [-1442], [-1374], [-1342], [-1278], [-185], [-55],
),
)
=== 5. Reversing the direction of rotation of the three-phase asynchronous motor
#figure(
caption: "改变转向",
table(
columns: 4,
table.header([序号], [操作内容], [三相绕组顺序], [转向情况]),
[1], [接线未作改变], [ABC], [逆(负)],
[2], [任意两相绕组的接线对调], [BAC], [顺(正)],
),
)
=== Questions and exercises
// + What starting method was used in this experiment?
// Star-delta starting.
+ Compare the advantages and disadvantages of the different starting methods for asynchronous motors.
  / Star-delta starting: effectively reduces the starting current, but the starting torque is only 1/3 of that at full voltage, so it is not suitable for starting heavy loads;
  / Autotransformer starting: suitable for heavy-load starting; the starting current is controllable and the impact on the grid is small, but the structure is more complex and the cost is higher;
  / Series resistance or reactance in the stator: simple structure, but not suitable for heavy-load starting.
+ How many pole pairs does the three-phase asynchronous motor used in this experiment have?
  2.
== Experiment (2): Open-loop speed regulation of the three-phase asynchronous motor
#figure(
caption: "三相正弦波脉宽控制器(SPWM)的特性测试数据",
table(
columns: (22.5%, 22.5%, 12.5%, 15%, 12.5%, 15%),
align: (auto, auto, auto, auto, auto, auto),
table.header(
[],
      [Reference setting $U_n^(\*)$ (V)],
      [Speed $n$ (r/min)],
      [Stator voltage $U_1$ (V)],
      [Stator frequency $f_1$ (Hz)],
      [Voltage-frequency ratio $U_1 \/ f_1$],
),
table.hline(),
[#strong[4];], [-710], [135], [177], [29], [6.10],
[#strong[5];], [-940], [177], [218], [32], [6.81],
[#strong[6];], [-1160], [218], [257], [38], [6.76],
[#strong[7];], [-1380], [257], [299], [45], [6.64],
[#strong[8];], [-1580], [299], [299], [52], [5.75],
),
)
#let c = x => str(calc.round(x, digits: 2))
#let HYPB = [constant voltage-frequency ratio ($(U_1) / (omega_1)$)]
#let Tgm1 = 1.48
#let Tgm2 = 1.10
#let tg1s = (0, 0.8, 1.0, 1.2, Tgm1)
#let tg2s = (0, 0.4, 0.6, 0.8, Tgm2)
#let te1s = tg1s.map(v => v / Tgm1)
#let te2s = tg2s.map(v => v / Tgm2)
// [-1600], [-1500], [-1460], [-1400], [-1160], [-750], [-710], [-680], [-640], [-500],
#let n1s = (-1600, -1500, -1460, -1400, -1160)
#let n2s = (-750, -710, -680, -640, -500)
#let n01 = n1s.first()
#let n02 = n2s.first()
#let s1s = n1s.map(n => (n01 - n) / n01)
#let s2s = n2s.map(n => (n02 - n) / n02)
#figure(
  caption: [Measured open-loop mechanical characteristic data of the asynchronous motor under #HYPB control],
align(center)[#table(
columns: 11,
table.header($U^*_(n 2)$, table.cell(colspan: 5)[8 V], table.cell(colspan: 5)[4 V]),
table.hline(),
table.cell(rowspan: 2)[$T_e ("N" dot.c "m")$], [0], [TG1], [TG2], [TG3], [TGm], [0], [TG1], [TG2], [TG3], [TGm],
..tg1s.map(str), ..tg2s.map(str),
$T^*_e$, ..te1s.map(c), ..te2s.map(c),
table.cell(rowspan: 2)[n (r/min)], [$n_0$ (S0)], [$n_1$ (S1)], [$n_2$ (S2)], [$n_3$ (S3)], [$n_upright(m)$ ($S_upright(m)$)], [$n_0'$ ($S_0'$)], [$n_1'$ ($S_1'$)], [$n_2'$ ($S_2'$)], [$n_3'$ ($S_3'$)], [$n_upright(m)'$ ($S_upright(m)^'$)],
..n1s.map(str), ..n2s.map(str),
$S$, ..s1s.map(c), ..s2s.map(c),
)],
)<HYPB控制>
Here the torque ratio is $T^*_e =T_e / T_"Gm", S = (n_0 - n) / n_0$.
The open-loop mechanical characteristics $S = f (T_e^(\*))$ of the asynchronous motor under #HYPB control are plotted in @HYPBgraph from the data in @HYPB控制 (2 curves in total).
#figure(
caption: HYPB + "控制下异步电动机的开环机械特性",
image("dist/3.svg", height: 10.55cm, width: 13cm),
)<HYPBgraph>
// Open-loop mechanical characteristics of the asynchronous motor under #HYPB control (① high speed, ② low speed)
=== Questions
+ What is #HYPB control, and why is it introduced?
  Constant voltage-frequency ratio control adjusts the motor supply voltage and frequency together so that the ratio $U_1 \/ f_1$ stays constant. This keeps the air-gap flux roughly constant while the speed is changed, which lets the motor operate near its optimal conditions, improves its running efficiency and reduces energy consumption (a short derivation is given after this list).
+ What is the role of the DM02 module in the variable-frequency speed regulation system of the asynchronous motor?
  DM02 receives the low-power control signal $U_c$ from the control circuit, amplifies and isolates it, and uses it to switch the power transistors (OUT1 \~ OUT6) on and off, thereby driving the three-phase AC induction motor.
+ Analyse the open-loop mechanical characteristics in @HYPBgraph and describe their features.
  For the same torque $T$, $S omega$ is essentially unchanged, and therefore $Delta n$ is essentially unchanged.
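As a brief illustrative note on the principle behind question 1 (the standard induction-machine EMF relation, added here for clarity): neglecting the stator impedance drop, the supply voltage approximately equals the induced EMF, so

$ U_1 approx E_1 = 4.44 f_1 N_1 k_(N 1) Phi_m quad ==> quad Phi_m prop U_1 / f_1 , $

and keeping $U_1 \/ f_1$ constant therefore keeps the air-gap flux (and the torque capability) approximately constant while the frequency, and hence the speed, is varied.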
|
https://github.com/Lslightly/TypstTemplates | https://raw.githubusercontent.com/Lslightly/TypstTemplates/main/README.md | markdown | MIT License | # TypstTemplates
templates and fonts for Typst
# Usage
Use `softlink.xx` to create soft links.
|
https://github.com/VadimYarovoy/Networks2 | https://raw.githubusercontent.com/VadimYarovoy/Networks2/main/lab3/report/typ/task.typ | typst | #import "@preview/colorful-boxes:1.2.0": *
= Practical tasks
== Task 1
#colorbox(
title: "TODO",
color: "blue",
radius: 2pt,
width: auto
)[
  Set up your own web server (nginx+php+mysql or something similar). The content on the server
  does not matter; any default CMS (Wordpress, Drupal, etc.) will do.
]
=== Setup
\
We will set up *Wordpress* with *nginx+php+mysql* in *docker*:
+ Write a *docker-compose* file:
```yaml
version: '3.9'
services:
mysql:
image: mysql:8.0
container_name: mysql8
restart: unless-stopped
env_file: .env
volumes:
- dbfile:/var/lib/mysql
command: '--default-authentication-plugin=mysql_native_password'
networks:
- app
wp:
image: wordpress:5.7.0-php8.0-fpm
container_name: wordpress-5.7.0-php8.0-fpm
depends_on:
- mysql
restart: unless-stopped
env_file: .env
environment:
- WORDPRESS_DB_HOST=mysql:3306
- WORDPRESS_DB_USER=$MYSQL_USER
- WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
- WORDPRESS_DB_NAME=$MYSQL_DATABASE
volumes:
- www-html:/var/www/html
networks:
- app
nginx:
image: nginx:1.19.8-alpine
depends_on:
- wp
container_name: nginx-1.19.8-alpine
restart: unless-stopped
ports:
- "80:80"
volumes:
- www-html:/var/www/html
- ./nginx-conf.d:/etc/nginx/conf.d
networks:
- app
volumes:
www-html:
dbfile:
networks:
app:
driver: bridge
```
+ MySQL Service (mysql):
  - Uses the MySQL 8.0 image.
  - Container name: mysql8.
  - Restarts when necessary (unless-stopped).
  - Uses an environment file (.env) to configure the environment.
  - MySQL data is stored in the Docker volume dbfile.
  - A command is set to use MySQL native password authentication.
+ WordPress Service (wp):
  - Uses the WordPress 5.7.0 image with PHP 8.0 running as FPM (FastCGI Process Manager).
  - Container name: wordpress-5.7.0-php8.0-fpm.
  - Depends on the MySQL service (mysql) and restarts when necessary.
  - Uses an environment file (.env) to configure the environment.
  - WordPress data is stored in the Docker volume www-html.
  - Attached to the app network.
+ Nginx Service (nginx):
  - Uses the Nginx 1.19.8-alpine image.
  - Container name: nginx-1.19.8-alpine.
  - Depends on the WordPress service (wp) and restarts when necessary.
  - Publishes port 80.
  - Mounts the Docker volume www-html to share files with WordPress, and the local directory ./nginx-conf.d for the Nginx configuration.
  - Attached to the app network.
+ Volumes:
  - www-html: stores WordPress data and shares it between the wp and nginx containers.
  - dbfile: stores MySQL data.
+ Networks:
  - app: connects the containers within a single network.
+ Create the nginx-conf.d directory
+ Inside nginx-conf.d, create a .conf configuration file
```
server {
listen 80;
listen [::]:80;
server_name my.server.ru;
index index.php index.html index.htm;
root /var/www/html;
location / {
try_files $uri $uri/ /index.php$is_args$args;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass wp:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
location ~ /\.ht {
deny all;
}
location = /favicon.ico {
log_not_found off;
}
location = /robots.txt {
log_not_found off;
}
location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
expires max;
log_not_found off;
}
}
```
+ Add a .env file with the database credentials (it is referenced via `env_file: .env`, so it should sit next to docker-compose.yml)
```env
MYSQL_ROOT_PASSWORD=<PASSWORD>
MYSQL_USER=wp_db_user
MYSQL_PASSWORD=<PASSWORD>
MYSQL_DATABASE=wp_db
```
+ Start the web server with
```bash
docker compose up -d
```
+ Logs can be inspected with
  ```bash
  docker-compose logs wp
  docker-compose logs mysql
  docker-compose logs nginx
  ```
+ Shut the server down with
```bash
docker compose down
```
=== Results
#figure(
image("../pics/1.png", width: 100%),
caption: [
    Step 1
],
)
#figure(
image("../pics/2.png", width: 100%),
caption: [
    Step 2
],
)
== Task 2
#colorbox(
title: "TODO",
color: "blue",
radius: 2pt,
width: auto
)[
  Describe which optimization settings can be used (increasing speed,
  improving reliability, SSL).
]
Optimizing the web server and applications running in Docker containers can involve various settings aimed at improving performance, reliability, and security. Some general recommendations are given below.
=== Nginx:
+ Optimize the Nginx configuration (a sample fragment is sketched after this list):
  - Use appropriate worker and connection parameters.
  - Configure buffering and caching for static resources.
  - Enable gzip compression to reduce the amount of transferred data.
+ SSL settings:
  - Use modern versions of the TLS protocol.
  - Enable Perfect Forward Secrecy (PFS) to strengthen security.
+ Keepalive and timeouts:
  - Tune keepalive_timeout and keepalive_requests to use connections efficiently.
  - Set reasonable timeout values for request processing.
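A minimal illustrative `nginx.conf` fragment combining these ideas (the directive values are assumptions for a small setup, not tuned recommendations):

```
worker_processes auto;
events { worker_connections 1024; }
http {
    sendfile on;
    keepalive_timeout 30s;
    keepalive_requests 1000;
    gzip on;
    gzip_types text/css application/javascript application/json image/svg+xml;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
}
```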
=== PHP-FPM:
+ Tune the PHP-FPM pool (an illustrative pool fragment is sketched after this list):
  - Set the pool size and connection-waiting parameters to match the request volume.
  - Use dynamic pool scaling to make better use of resources.
+ Optimize PHP:
  - Enable OPcache to reduce script loading time.
  - Set reasonable values for memory_limit and max_execution_time.
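A minimal illustrative PHP-FPM pool fragment together with the related `php.ini` switches (the values are assumptions, not tuned recommendations):

```
; pool configuration (e.g. www.conf)
pm = dynamic
pm.max_children = 10
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 5

; php.ini
opcache.enable = 1
memory_limit = 256M
max_execution_time = 60
```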
=== MySQL:
+ Tune the MySQL configuration parameters (an illustrative fragment is sketched below):
  - Adjust MySQL configuration options such as innodb_buffer_pool_size and max_connections. Note that the query cache (including query_cache_size) was removed in MySQL 8.0, so it applies only to older versions.
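A minimal illustrative `my.cnf` fragment (the sizes are assumptions for a small instance):

```
[mysqld]
innodb_buffer_pool_size = 512M
innodb_log_file_size = 128M
max_connections = 100
```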
#pagebreak() |
|
https://github.com/Jollywatt/typst-wordometer | https://raw.githubusercontent.com/Jollywatt/typst-wordometer/master/tests/template.typ | typst | MIT License | #import "/src/lib.typ": *
#set page(width: 15cm, height: auto)
|
https://github.com/sitandr/typst-examples-book | https://raw.githubusercontent.com/sitandr/typst-examples-book/main/src/basics/math/classes.md | markdown | MIT License | # Classes
> See [official documentation](https://typst.app/docs/reference/math/class/)
Each math symbol has its own "class", which describes the way it behaves. That's one of the main reasons why symbols are laid out differently.
## Classes
```typ
$
a b c\
a class("normal", b) c\
a class("punctuation", b) c\
a class("opening", b) c\
a lr(b c]) c\
a lr(class("opening", b) c ]) c // notice it is moved vertically \
a class("closing", b) c\
a class("fence", b) c\
a class("large", b) c\
a class("relation", b) c\
a class("unary", b) c\
a class("binary", b) c\
a class("vary", b) c\
$
```
## Setting class for symbol
```typ
Default:
$square circle square$
With `#h(0)`:
$square #h(0pt) circle #h(0pt) square$
With `math.class`:
#show math.circle: math.class.with("normal")
$square circle square$
```
|
https://github.com/EpicEricEE/typst-marge | https://raw.githubusercontent.com/EpicEricEE/typst-marge/main/src/resolve.typ | typst | MIT License | /// Resolve "auto" values with the given default.
#let resolve-auto(val, default) = {
if val == auto { default } else { val }
}
/// Resolve text direction, depending on the language.
///
/// Requires context.
#let resolve-dir() = {
let rtl-langauges = (
"ar", "dv", "fa", "he", "ks", "pa", "ps", "sd", "ug", "ur", "yi"
)
resolve-auto(text.dir, if text.lang in rtl-langauges { rtl } else { ltr })
}
/// Resolve page binding, depending on the text direction.
///
/// Requires context.
#let resolve-binding() = {
resolve-auto(page.binding, if resolve-dir() == ltr { left } else { right })
}
/// Resolve the given margin note side, depending on the text direction and
/// page binding.
///
/// Side can be "inside", "outside", start, end, left, right, top or bottom.
/// They can be given as alignment values or their string representations.
///
/// Requires context.
#let resolve-side(side) = {
let dir = resolve-dir()
let binding = resolve-binding()
let inside = if calc.odd(here().page()) { binding } else { binding.inv() }
if side in (left, "left") { left }
else if side in (right, "right") { right }
else if side in (top, "top") { top }
else if side in (bottom, "bottom") { bottom }
else if side in (start, "start") { if dir == ltr { left } else { right } }
else if side in (end, "end") { if dir == ltr { right } else { left } }
else if side == "inside" { inside }
else if side == "outside" { inside.inv() }
else { panic("invalid side") }
}
/// Resolve the page size.
///
/// Requires context.
#let resolve-page-size() = {
let width = resolve-auto(page.width, calc.inf * 1pt).to-absolute()
let height = resolve-auto(page.height, calc.inf * 1pt).to-absolute()
if page.flipped {
(width, height) = (height, width)
}
(width: width, height: height)
}
/// Resolve the page margin at the given side.
///
/// Side can be "inside", "outside", start, end, left, right, top or bottom.
/// They can be given as alignment values or their string representations.
///
/// Requires context.
#let resolve-margin(side) = {
let side = resolve-side(side)
let binding = resolve-binding()
let page-size = resolve-page-size()
let margin = if type(page.margin) in (length, relative, ratio, type(auto)) {
page.margin
} else if type(page.margin) == dictionary {
let inside = if calc.odd(here().page()) { binding } else { binding.inv() }
if "left" in page.margin and side == left { page.margin.left }
else if "right" in page.margin and side == right { page.margin.right }
else if "top" in page.margin and side == top { page.margin.top }
else if "bottom" in page.margin and side == bottom { page.margin.top }
else if "inside" in page.margin and side == inside { page.margin.inside }
else if "outside" in page.margin and side != inside { page.margin.outside }
else if "x" in page.margin and side.x != none { page.margin.x }
else if "y" in page.margin and side.y != none { page.margin.y }
else if "rest" in page.margin { page.margin.rest }
else { auto }
} else {
panic("invalid page margin")
}
// Resolve auto margin.
let result = resolve-auto(margin, {
let size = calc.min(page-size.width, page-size.height)
if size.pt().is-infinite() { size = 210mm }
2.5 / 21 * size
})
// Resolve relative values.
if type(result) == relative {
let relative-to = if side.axis() == "horizontal" {
page-size.width
} else {
page-size.height
}
result = result.length + if relative-to.pt().is-infinite() { 0pt } else {
result.ratio * relative-to
}
}
result.to-absolute()
}
/// Resolve the note padding into a dictionary with left and right keys.
///
/// The padding can be given as a single value or as a dictionary.
///
/// Requires context.
#let resolve-padding(padding) = {
let (left, right) = if type(padding) == length {
(left: padding, right: padding)
} else if type(padding) == array {
(left: padding.at(0, default: 0pt), right: padding.at(1, default: 0pt))
} else if type(padding) == dictionary {
let resolved = (left: 0pt, right: 0pt)
let binding = resolve-binding()
let inside = if calc.odd(here().page()) { binding } else { binding.inv() }
let start = if resolve-dir() == ltr { left } else { right }
if "inside" in padding { resolved.at(repr(inside)) = padding.at("inside") }
if "outside" in padding { resolved.at(repr(inside.inv())) = padding.at("outside") }
if "start" in padding { resolved.at(repr(start)) = padding.at("start") }
if "end" in padding { resolved.at(repr(start.inv())) = padding.at("end") }
if "left" in padding { resolved.left = padding.at("left") }
if "right" in padding { resolved.right = padding.at("right") }
resolved
} else if padding == none {
(left: 0pt, right: 0pt)
} else {
panic("invalid padding")
}
(left: left.to-absolute(), right: right.to-absolute())
}
|
https://github.com/kotfind/hse-se-2-notes | https://raw.githubusercontent.com/kotfind/hse-se-2-notes/master/prob/seminars/2024-09-09.typ | typst | = Введение
<NAME>
#figure(
caption: "Облако",
table(
columns: 2,
[Link:], `mega.nz/login`,
[Login:], `<EMAIL>`,
[Pass:], `<PASSWORD>`,
)
)
$ "Итог" = 0.1 dot "ИДЗ" + 0.25 dot "КР" + 0.15 dot "Сем" + 0.5 dot "Экз" $
|
|
https://github.com/HKFoggyU/hkust-thesis-typst | https://raw.githubusercontent.com/HKFoggyU/hkust-thesis-typst/main/hkust-thesis/layouts/appendix.typ | typst | LaTeX Project Public License v1.3c | #import "../imports.typ": *
#import "../utils/custom-numbering.typ": custom-numbering
// Document settings: global options such as page margins can be configured here
#let appendix(
// i-figured settings
show-equation: i-figured.show-equation.with(numbering: "(A.1)"),
show-figure: i-figured.show-figure.with(numbering: "A.1"),
reset-counters: i-figured.reset-counters,
  // Heading font and size
heading-font: "Times New Roman",
heading-size: (12pt,),
heading-weight: ("regular",),
heading-top-vspace: (20pt, 4pt),
heading-bottom-vspace: (20pt, 8pt),
heading-pagebreak: (true, false),
heading-align: (center, auto),
config: (:),
..args,
it,
) = {
anti-front-end()
  // 1.2 Handle the other parameters that start with heading-
let heading-text-args-lists = args.named().pairs()
.filter((pair) => pair.at(0).starts-with("heading-"))
.map((pair) => (pair.at(0).slice("heading-".len()), pair.at(1)))
  // 2. Helper functions
let array-at(arr, pos) = {
arr.at(calc.min(pos, arr.len()) - 1)
}
// Deal with math equation numbering
// i-figured settings
show math.equation.where(block: true): it => {show-equation(it, prefix: "eqn-")}
show figure: it => { show-figure(it, fallback-prefix: "fig-") }
// set math.equation(numbering: "(1.1)")
// set math.equation(numbering: (..nums) => {
// locate(loc => {
// "(" + str(counter(heading).at(loc).at(0)) + "." + str(nums.pos().first()) + ")"
// })
// },)
  // 4. Handle headings
  // 4.1 Set the heading numbering
// set heading(numbering: numbering)
  // 4.2 Set the font and size, and insert a fake paragraph to simulate first-line indentation
// set heading(numbering: "1.1", supplement: "")
counter(heading).update(0)
let my-numbering = custom-numbering.with(
first-level: (i, ..args) => "Appendix " + numbering("A", i),
depth: 3,
"A.1"
)
// let plain-numbering = "A."
set heading(numbering: my-numbering)
// show heading: reset-counters(equations: true)
show heading: it => {
set text(
font: constants.font-names.title,
size: constants.font-sizes.title,
)
v(array-at(heading-top-vspace, it.level))
if (it.depth == 1) {
[#upper(counter(heading).display()) \ #upper(it.body)]
// it
do-repeat([#linebreak()], 2)
} else {
it
}
// v(array-at(heading-bottom-vspace, it.level))
}
  // 4.3 Heading alignment and automatic page breaks
show heading: it => {
if (array-at(heading-pagebreak, it.level)) {
      // If the heading carries the no-auto-pagebreak label, do not insert an automatic page break
if ("label" not in it.fields() or str(it.label) != "no-auto-pagebreak") {
pagebreak(weak: true, to: if config.twoside { "odd" })
}
}
if (array-at(heading-align, it.level) != auto) {
set align(array-at(heading-align, it.level))
it
} else {
it
}
}
show heading: it => {reset-counters(it, equations: true)}
// Main matter page numbering
// set page(numbering: "1")
// counter(page).update(1)
it
} |
https://github.com/pedrofp4444/BD | https://raw.githubusercontent.com/pedrofp4444/BD/main/report/content/[3] Modelação Concetual/relacionamentos.typ | typst | #let relacionamentos = {
[
== Identification and Characterization of the Relationships
In the conceptual model, several relationships between entities arise; they are responsible for representing the different interactions between the elements defined in the database.
#figure(
caption: "Caracterização dos relacionamentos.",
kind: table,
table(
columns: 5 * (1fr, ),
stroke: (thickness: 0.5pt),
align: horizon,
fill: (x, y) => if y == 0 { gray.lighten(50%) },
  table.header([*Entity*], [*Multiplicity*], [*Relationship*], [*Multiplicity*], [*Entity (related)*]),
  /* Relacionamento */ [Funcionário],
  [N (mandatory)],
  [gere],
  [M (mandatory)],
  [Funcionário],
  /* Relacionamento */ [Funcionário],
  [N (mandatory)],
  [desempenha],
  [1 (mandatory)],
  [Função],
  /* Relacionamento */ [Funcionário],
  [N (mandatory)],
  [trabalha],
  [M (mandatory)],
  [Terreno],
  /* Relacionamento */ [Funcionário],
  [N (partial)],
  [pertence],
  [M (mandatory)],
  [Caso],
  /* Relacionamento */ [Terreno],
  [1 (partial)],
  [tem],
  [N (mandatory)],
  [Caso],
)
)
#underline[*Relationship Funcionário - Funcionário*] (Requirement 9, #link(<Tabela1>, "Table 1"))
#align(center)[
#figure(
kind: image,
caption: "Illustration of the Funcionário - Funcionário relationship.",
image("../../images/[3] - 1.png", width: 60%)
)
]
*Relationship*: Funcionário gere Funcionário (an employee manages employees).
*Description*: Among Lusium's employees there are qualified employees who manage certain groups of workers.
*Cardinality*: Funcionário (1,n) - Funcionário (1,n). A qualified employee necessarily manages one or more employees. Conversely, an employee is necessarily managed by one or more qualified employees.
*Attributes*: This relationship has no attributes.
#linebreak()
#underline[*Relationship Funcionário - Função*] (Requirement 4, #link(<Tabela1>, "Table 1"))
#align(center)[
#figure(
kind: image,
caption: "Illustration of the Funcionário - Função relationship.",
image("../../images/[3] - 2.png", width: 25%)
)
]
*Relationship*: Funcionário desempenha Função (an employee performs a role).
*Description*: Lusium's employees are distinguished by the role they perform.
*Cardinality*: Funcionário (1,n) - Função (1,1). An employee necessarily performs exactly one role. Conversely, a role is necessarily performed by one or more employees.
*Attributes*: This relationship has no attributes, not least because its cardinality is not N:M.
#linebreak()
#underline[*Relationship Funcionário - Terreno*] (Requirement 5, #link(<Tabela1>, "Table 1"))
#align(center)[
#figure(
kind: image,
caption: "Illustration of the Funcionário - Terreno relationship.",
image("../../images/[3] - 3.png", width: 80%)
)
]
*Relationship*: Funcionário trabalha em Terreno (an employee works on a plot of land).
*Description*: Lusium's operational employees work on plots of land monitored by the company.
*Cardinality*: Funcionário (1,n) - Terreno (1,n). An employee necessarily works on one or more plots. Conversely, one or more employees necessarily work on each plot.
*Attributes*: This relationship has no attributes.
#linebreak()
#underline[*Relationship Funcionário - Caso*] (Requirement 15, #link(<Tabela1>, "Table 1"))
#align(center)[
#figure(
kind: image,
caption: "Illustration of the Funcionário - Caso relationship.",
image("../../images/[3] - 4.png", width: 90%)
)
]
*Relationship*: Funcionário pertence a Caso (an employee belongs to a case).
*Description*: Lusium's employees may be associated with a theft case related to a given plot of land.
*Cardinality*: Funcionário (0,n) - Caso (1,n). An employee may or may not belong to one or more cases. Conversely, one or more employees necessarily belong to each case.
*Attributes*: This relationship carries the attributes *estado*, *envolvimento* and *notas*, so these characteristics can be assigned to a given suspect, that is, an employee associated with a case. Here *estado* refers to the suspect's current status (innocent, under investigation or guilty), *envolvimento* is a level from 1 to 10 indicating the degree of the suspect's involvement in the case, and *notas* holds optional additional comments about the suspect (a possible relational mapping of this N:M relationship is sketched below).
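As an illustrative note only (not part of the conceptual model itself): an N:M relationship with attributes such as this one is typically mapped to its own associative table whose primary key combines the keys of the two participating entities. The table and column names below are hypothetical and chosen purely for illustration:

```sql
-- Hypothetical mapping of the "pertence" relationship (illustrative sketch only)
CREATE TABLE Suspeito (
    id_funcionario INT NOT NULL,
    id_caso        INT NOT NULL,
    estado         VARCHAR(20) NOT NULL, -- innocent / under investigation / guilty
    envolvimento   INT NOT NULL CHECK (envolvimento BETWEEN 1 AND 10),
    notas          TEXT,                 -- optional comments
    PRIMARY KEY (id_funcionario, id_caso),
    FOREIGN KEY (id_funcionario) REFERENCES Funcionario(id),
    FOREIGN KEY (id_caso)        REFERENCES Caso(id)
);
```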
#linebreak()
#underline[*Relationship Terreno - Caso*] (Requirement 12, #link(<Tabela1>, "Table 1"))
#align(center)[
#figure(
kind: image,
caption: "Illustration of the Terreno - Caso relationship.",
image("../../images/[3] - 5.png", width: 80%)
)
]
*Relationship*: Terreno tem Caso (a plot of land has cases).
*Description*: Lusium's plots of land may have associated ore-theft cases.
*Cardinality*: Terreno (0,1) - Caso (1,n). A plot may or may not have one or more cases associated with it. Conversely, a case is necessarily associated with exactly one plot.
*Attributes*: This relationship has no attributes.
]
}
|
|
https://github.com/drupol/master-thesis | https://raw.githubusercontent.com/drupol/master-thesis/main/resources/typst/inputs-and-outputs-part4.typ | typst | Other | #import "../../src/thesis/imports/preamble.typ": *
#import "../../src/thesis/theme/colors.typ": *
#set align(center + horizon)
#set text(font: "Virgil 3 YOFF")
#grid(
columns: (1fr, 1fr, 1fr, 1fr, 1fr),
rows: (70pt, 25pt),
{
place(top + left, dx: 15pt, dy: 9pt)[#text(fill: umons-red)[Program]]
place(
top + left,
dx: 15pt,
dy: 31pt,
)[#text(fill: umons-turquoise)[Parameters]]
place(top + left, dx: 15pt, dy: 53pt)[#text(fill: umons-grey)[Environment]]
image("../../resources/images/build-inputs1.svg")
},
{
xarrow(sym: sym.arrow.r, width: 50pt, "")
xarrow(sym: sym.arrow.r, width: 50pt, "")
xarrow(sym: sym.arrow.r, width: 50pt, "")
},
{
place(top + left, dx: 35pt, dy: 38pt)[#text(size: .75em)[Evaluation]]
image("../../resources/images/build-inputs2.svg")
},
xarrow(sym: sym.arrow.r, width: 50pt, ""),
image("../../resources/images/inputs-icon.svg"),
"Inputs", "", "Computational environment", "", "Outputs",
)
|
https://github.com/Myriad-Dreamin/tinymist | https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/CONTRIBUTING.md | markdown | Apache License 2.0 |
# Contributing
Tinymist provides a single integrated language service for Typst.
**Multiple Actors** – The main component, [tinymist](./crates/tinymist/), starts as a thread or process, obeying the [Language Server Protocol](https://microsoft.github.io/language-server-protocol/). tinymist will bootstrap multiple actors, each of which provides some typst feature.
**Multi-level Analysis** – The most critical features are lsp functions, built on the [tinymist-query](./crates/tinymist-query/) crate. To achieve low latency, functions are classified into different levels of analysis.
+ `query_token_cache` – `TokenRequest` – locks and accesses token cache.
+ `query_source` – `SyntaxRequest` – locks and accesses a single source unit.
+ `query_world` – `SemanticRequest` – locks and accesses multiple source units.
+ `query_state` – `StatefulRequest` – acquires to accesses a specific version of compile results.
**Optional Features** – All rest features in tinymist are optional. The significant features are enabled by default, but you can disable them with feature flags. For example, `tinymist` provides preview server features powered by `typst-preview`.
**Editor Frontends** – Leveraging the interface of LSP, tinymist provides frontends to each editor, located in the [editor folder](./editors).
## Installing Toolchain
- [Cargo](https://doc.rust-lang.org/cargo/) – Cargo is the Rust package manager.
- [Yarn](https://yarnpkg.com/) – Yarn is a package manager that doubles down as project manager.
## Building and Running
To build tinymist LSP:
```bash
git clone https://github.com/Myriad-Dreamin/tinymist.git
# Debug
cargo build
# Release
cargo build --release
# RelWithDebInfo (GitHub Release)
cargo build --profile=gh-release
```
To run VS Code extension locally, open the repository in VS Code and press `F5` to start a debug session to extension.
## Local Documentation
To serve the documentation locally, run:
```bash
yarn docs
```
To generate and open crate documentation, run:
```bash
yarn docs:rs --open
```
> [!Tip]
> Check [Shiroa](https://myriad-dreamin.github.io/shiroa/guide/installation.html) to install the `shiroa` command for documentation generation.
## Server Entries
- `tinymist probe` – do nothing, which just probes that the binary is working.
- `tinymist lsp` – starts the language server.
- `tinymist preview` – starts a standalone preview server.
## Running Analyzer Tests
This is required if you have changed any code in `crates/tinymist-query`.
To run analyzer tests for tinymist:
```bash
cargo insta test -p tinymist-query --accept
```
> [!Tip]
> Check [Cargo Insta](https://insta.rs/docs/cli/) to learn and install the `insta` command.
## Running Syntax Grammar Tests
This is required if you are going to change the textmate grammar in `syntaxes/textmate`.
```bash
# in root
yarn test:grammar
# Or in syntaxes/textmate
cd syntaxes/textmate && yarn test
```
## Running E2E Tests
This is required if you have changed any code in `crates/tinymist` or `crates/tinymist-query`.
To run e2e tests for tinymist on Unix systems:
```bash
./scripts/e2e.sh
```
To run e2e tests for tinymist on Windows:
```bash
./scripts/e2e.ps1
```
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/027%20-%20Conspiracy%3A%20Take%20the%20Crown/002_Tyrants.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Tyrants",
set_name: "Conspiracy: Take the Crown",
story_date: datetime(day: 10, month: 08, year: 2016),
author: "<NAME>",
doc
)
#emph[Adriana is the captain of the guard of the High City of Paliano, a post that puts her in the service of the ghost king, Brago. But recently, she has begun to question the king's actions; he's crueler in his death than he was in life. It's clear from rumblings around the city that others share her doubts.]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Old habits die hard, and the hardest habits to kill are those that belong to the dead. Adriana, captain of the guard of the High City of Paliano, knew this better than most. She stood dutifully at her post, at the shoulder of the great King Brago. He had grown paranoid in his afterlife (a curious reaction to becoming immortal) and requested his captain attend him even in his times of counsel. Adriana was now in the great dining hall—an imposing stone chamber that echoed more than it warmed. It wasn't cozy, but the king preferred holding his meetings here for one reason or another. He seemed comforted by its large banners bearing the mark of his city, its swords and signets displayed on the walls. Brago seemed strangely content to spend his death hovering among the things he used to touch and wield. He never seemed sad that he could not hold them—he never seemed sad about anything anymore. He felt plenty of other things, but pity wasn't one of them. It was not a captain's place to question her king, so Adriana leaned to the left and stretched out a cramp in her right calf as she waited on the king to finish playing pretend.
<NAME> sat at the head of his dining room table before a clean plate and sparkling silverware, whispering quietly and patiently with two Custodi ghosts who hovered in the chairs to his left. The voices of the dead often grew quiet with age, and from Adriana's position near the back of the room, the clinking of her armor made the only noise in the hall. The three ghosts were discussing church business, and out of some bastardization of habit were doing so in front of glittering empty place settings. As they moved their hands in conversation they would curiously maneuver around the array of empty glasses and barren goblets.
Adriana had served the king for many years. She knew that even in death he retained a sort of muscle memory with regard to the customs of the living. Ghosts weren't anything special, but no one ever #emph[purposefully ] ended up one. When he retained his title after death, Adriana was left with a frightening realization. If her lord would never die, she was doomed to serve him her whole life. Captains in the past had grown close to several generations of royalty, yet she was doomed to only one. Paliano's throne was disrupted. Succession had hiccupped long ago.
Memory of this discovery did not soothe the cramp in her leg.
Every now and then she caught a word or two exchanged between the ghosts. They seemed to be discussing the success in their elimination of cogwork from the streets of Paliano. They seemed pleased with the closing of the Academy, happy that those who stood against them were absent or dead.
She had been ordered to help quell the insurrection then. To dismantle the Academy, to purge the pursuit of invention and innovation from the city.
A whisper of guilt traveled through Adriana's mind. The king she served in his life had become cruel in his death. She would never admit it out loud, but she knew it in her heart.
The ghosts' business concluded, the Custodi rose, and Adriana strode forward to escort them out. A servant girl entered behind her to clear the plates (#emph[Do they clean them again anyway? Isn't that a tremendous waste of soap? ] Adriana wondered). <NAME> nodded discreetly at his captain, and Adriana acknowledged by leading the clergy out of the dining hall and into the hallway. The two moved cautiously, with more of a chill to the air around them than normal. The manner all around the three was ill at ease.
Three minutes into the walk down the hallway, the two ghosts stopped in front of the main door. #emph["Captain Adriana..."] they whispered. Adriana stilled. She had never been addressed directly by the Custodi before.
The Custodi nearest her raised their hands in benediction. Ghostly fingers tapped chills on her skin—shoulder, shoulder, forehead. Adriana received the blessing willingly, but wondered absently why they were departing with such a formal goodbye.
The spirits departed, and Adriana turned, happy to relieve the cramp in her leg with a brief walk. A sudden but distant crash caught her ear and she walked briskly to the source—the cloakroom? The pantry? The scullery!
The servant girl from before held a mound of plates and silverware in her arms and was throwing them into the rubbish chute, one porcelain treasure after another, their journeys ending with a distant shatter into the trash heap at the end.
"Girl!" Adriana yelled.
The waif dropped a saucer in shock.
"What are you doing? Those are the property of the crown."
"Boss told us that Her Ladyship didn't like the plates," the girl said through alarmed eyes.
Her Ladyship?
"There is no queen in this castle."
"Boss said I wasn't supposed to say anything about Her Ladyship to you."
Adriana's hand gripped the hilt of her sword and turned on her heel, walking quickly up the stairs back to the great dining hall. The sound of more plates being tossed down the chute echoed in the stone hall behind her. The chilly goosebumps where the Custodi had blessed her began to feel more and more like a preemptive apology.
Her eyes raced to the other servants she passed. One hurriedly looked away. Another snuck through a passage to the servants' quarters. One was shaking out a fresh banner—a thorny rose sewn onto plush velvet—and Adriana broke into a full run toward her king.
The leather of her soles pounded the stone underfoot and the edges of her armor clanged together in her hurry, and as she burst into the great dining hall she skidded to a stupefied halt.
In the moment she reacted immediately, but in memory it was a tiny eternity, pregnant with significance.
At the other end of the great dining hall, a resolute woman in a strange jacket was braced in a full-body grimace, her firm arms gripping the shoulders of King Brago (#emph[how?!] ) and a rondel dagger buried deep in the neck of her king. For the first time in her life, Adriana was flummoxed. The woman in the strange jacket looked too solid to be a ghost, yet as she struggled to bury the dagger deeper her arms moved with a strange blur and shimmer of light. The king's mouth was open in a soundless shout. The woman changed her grip on the glimmering violet dagger and met eyes with Adriana across the room.
The captain of the guard of the High City of Paliano remembered how to breathe.
And then she remembered what her job was.
She closed the distance and lurched forward. Adriana did not know the nature of her foe, but she knew the physics of her king. She drew her sword and swung it directly through the face of <NAME> in an attempt to slice through his assassin. Adrenaline and fear stretched the seconds. In the instant of her swing Adriana locked eyes with the assassin. As her sword passed harmlessly through the face of Brago, she watched as the flesh of the assassin became translucent violet, the stranger's eyes boring into Adriana's.
#figure(image("002_Tyrants/01.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none)
Her attack negated, Adriana quickly dropped her sword and lurched forward as the assassin released and dropped Brago to the ground. Adriana instinctively tried to catch her king and was stunned when it actually worked—the spiritual tie that Brago had to his armor was dying alongside him, and Adriana found herself clutching the armor with the dying spirit of her king still inside.
His death was unlike any Adriana had witnessed before. It was impossible to look away.
The crook in Brago's neck where the assassin had buried her knife was rapidly corroding, the ghostly skin deteriorating and dissipating in a violet necrosis as it spread from the throat across the form of his body. As the virus traveled over his skin it left nothing but air in its wake, and in a matter of seconds the king's form had vanished.
Brago's gently glowing crown, form made physical with the absence of its host, dropped to the ground.
His sword remained sheathed on the belt.
Where her king once lay was now a pile of abandoned, shimmering garments, glistening in Captain Adriana's arms.
The assassin looked down at Adriana with a look of slightly bored accomplishment.
#figure(image("002_Tyrants/02.jpg", width: 100%), caption: [Kaya, Ghost Assassin | Art by Chris Rallis], supplement: none, numbering: none)
#figure(image("002_Tyrants/03.png", height: 40%), caption: [], supplement: none, numbering: none)
Adriana grabbed Brago's sword out of its sheath. She was uncertain of the assassin's next move. The assassin stood with the lazy confidence of someone who just woke up—dressed for a night at the pub instead of a day in the fighting pits. It was hateful. Adriana rushed her, Brago's glimmering sword gripped tight in her hand.
"#emph[Villain!] " she snarled.
Adriana thrust the sword directly into where the assassin's liver would be. In an instant the assassin's stomach turned a bizarre and translucent violet, the sword to passing easily through her. What should have been a life-taking injury was a minor inconvenience—the assassin grinned at Adriana's frozen shock.
Adriana collected her wits and swiftly pulled her slice upward, sword passing through the suddenly purple, unarmored torso of the assassin, through her shoulder. As her blade reached the height of its swing, Adriana took a sharp, surprising, very corporeal elbow to the jaw from the assassin. Adriana wasn't expecting that. The captain of the guard clumsily found her balance and purposefully stood back to assess her opponent.
"I was paid to hit only one mark. I'm not going to kill you," the assassin said.
Adriana's rage seethed through ragged breath. "Fight me fair, coward!"
The assassin's lips parted in an amused smile, and she returned a playful wink.
The captain of the guard responded by spitting directly at the stranger's eye.
In a flash the assassin's face shimmered with willful transparency and the spittle easily passed through to hit the wall behind her.
"Haven't had to dodge that before," the assassin said. Grinning, she stepped forward #emph[through ] Brago's empty armor on the floor. Her feet and shins shimmered with that same strange violet as she passed through the clutter of metal.
"You put an awful lot of effort into defending an empty suit," the assassin said with a sly drawl.
"That #emph[man] was our #emph[king—] "
"I heard he was an empty suit long before I put my dagger in him. And before that he was a #emph[tyrant] ," the assassin said. "As long as tyrants die, the chance for freedom lives."
Adriana was struck with an odd wave of guilt. She didn't know how to respond to that.
The assassin casually bowed, maintaining an amused eye contact with the captain of the guard. "Pleasure doing business with you."
The stranger straightened her jacket smartly and dropped into the floor. She descended in a quick ripple of violet. Adriana could only stare dumbly at the spot on the floor she disappeared through. #emph[The stables are directly underneath. There's no way I could catch her in time.]
The great dining hall was quiet. In that silent moment, Adriana let her breath out in a sigh. Brago's armor and crown lay in a heap in the spot where he fell. No evidence remained of his spirit save the light glow that lingered on his newly corporeal armor and crown. Adriana had never seen a ghost die before—perhaps it was normal for their belongings to materialize as their spirits vanished into a second death.
None of it made sense. None of it was possible.
#emph[I was foolish to accept this position, ] Adriana thought. #emph[My job was to protect the king, and I failed at protecting a man who couldn't be killed. What purpose did I serve in the first place?]
The castle started to stir in realization. Banners bearing a thorny rose were unfurled. Servants came with dark curiosity to inspect the empty armor on the floor. Through it all, Adriana stayed silent at the back of the great dining hall.
Adriana's fingers grazed the hilt of Brago's sword. She supposed it would be safest in her hands.
#figure(image("002_Tyrants/04.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none)
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
The Custodi crowned Queen Marchesa, the First of her Name, the following day.
The ceremony was held in an immaculately decorated throne room. Banners bearing the sign of the Black Rose draped from freshly dusted rafters, new armor of thorny plates gleamed silver in the lights of candles dipped the prior week. The room was fresh with rare primroses and stunk of new clothes.
The castle staff looked at the new queen with familiarity. The Custodi obligingly went through the script of the coronation ceremony. None of the Paliano elite seemed unprepared. Everyone was ready. Everyone knew.
Adriana ached to kill each of these traitors where they stood. Every spare inch of the room bore the sigil of the new queen and it was #emph[all wrong] .
Earlier that morning when she had spoken with the guard, Adriana was relieved to find all of them as deep in the dark as she was. The great secret had been hidden from them, as well, and the captain of the guard was relieved to hear that at least her company burned with the same confusion and rage she did.
They stood now at her back and attending each door. The guard had their duty to crown and church, but none of them were happy about it. Brago's sword—she wouldn't dare lose sight of it—had remained tight in her palm through the duration of the ceremony.
Marchesa, the Black Rose, stood in the middle of it all, the dazzling conductor of a hideous symphony. Her gown was prudent and her jewelry humble, save for the glimmering ghostly crown that sat atop her head. Adriana did everything she could to not roll her eyes at the obvious attempt at modest attire to please the Custodi.
As soon as the spirits were finished with the coronation and the ghostly crown of Paliano sat on Marchesa's head, Adriana moved quickly to follow her to the royal chambers. She walked upstairs and behind the new queen, past a sea of averted eyes, followed by a gaggle of handmaidens in her wake. As they walked, Adriana began to realize how much #emph[money] must have gone into this endeavor. Bribes to pay off the Custodi. Money to pay off the staff. Payment for the assassin. And then there was the matter of the heaps upon heaps of rose-embroidered textiles that adorned the walls, bodies, horses of the castle.
#figure(image("002_Tyrants/05.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none)
#emph[And I had no idea. I watched for so long over the shoulder of a careless ghost and I had no idea.]
Adriana gave pause.
#emph[If I had known, would I have stopped it? Brago was cruel. He deserved a second death.]
Adriana studied the back of Marchesa as they all marched upstairs. What happened before would happen again. A king would be crowned, killed, replaced. A queen would be crowned, killed, replaced. And how many hundreds of her countrymen would die in the process of perpetuating this hideous cycle?
#emph[It is an endless engine.]
#emph[All we are doing is feeding this awful machine.]
Rage filled Adriana's heart as the realization set in and the assassin's words echoed in her mind. #emph[As long as tyrants die, the chance for freedom lives. ] Paliano had their chance for freedom with the death of one tyrant and instead gained another. #emph[Killing them off isn't enough. How can we turn that chance into certainty?]
Marchesa stopped in front of the doors to her chamber and allowed a servant girl to usher her in. Adriana followed, patiently waiting by the door as the handmaidens helped the new queen change from the coronation gown to the gown she would wear to address the public for the first time.
Her handmaidens disassembled her, revealing layer after hidden layer. Gown. Partlet. Farthingale. Kirtle. Petticoat. Bodice. When she was down to her stockings and shift, the handmaidens built her back up again, this time in garments more luxurious and finely made than before. Adriana could see the stitches that hid countless inner pockets, the secret lining to conceal pouches of rare poisons. Bodice. Petticoat. Kirtle. Farthingale. Partlet. Gown. The handmaidens topped the endless opulence by securing a chest plate.
There was no seduction in this chore, only a simple dominance when the queen met eyes with her captain of the guard. Endless layers containing endless secrets. Do you see how much I carry? Can you fathom how much I hide?
Once the last stay was tightened, Marchesa shooed her handmaidens out. Adriana stood tall and firm in stance before the velvet-drenched queen of the High City of Paliano.
"I sense you have words for me," the poisonmaster cooed. "My coronation speech to the citizens begins shortly, so please be quick with my time."
"This isn't how right of succession works."
"This isn't how right of succession works, #emph[your highness] ."
Adriana swallowed a snarl. "The Custodi claimed you were named in King Brago's will as his heir. You know I am no scholar, so perhaps you can be the one to explain to me why a #emph[ghost ] would need a #emph[will] ."
The new queen smiled. Her answer came easily. "The undying have no need to protect their assets, of course. But the Custodi was very willing to accept properly filed legal documents."
The captain of the guard's armor clinked as she stepped forward. "Brago had descendants, his daughters are—"
"Old and weak-willed. #emph[Their] sons and daughters are just as bad. I dealt with them a while ago, however, and it just so happened that my name was next in the line of succession."
Her name? Marchesa's family was small and distant in the royal family tree. Adriana felt nauseated. She held her ground as Marchesa calmly strode to the vanity near her, sitting daintily to apply an oxblood-red stain to her lips.
The question escaped without restraint. "How many of the other successors did you kill?"
"I only killed Brago," Marchesa said with an admissive eye roll. "Well, #emph[Kaya ] killed Brago. Paid her good money for it, too. The rest of the former king's family received a very generous grievance and the Custodi will receive a healthy tithing during each year of my reign."
The queen stood and smiled through venom-painted lips, "I pray that everyone who claimed me a fallen daughter of a fallen house enjoyed #emph[their] fall from the High City."
Adriana had stared down many a foe over her years of service. She had dealt with her share of household pests as well. This snake was no different. "Our city will not turn over to you so easily."
"They already have," Marchesa said plainly. She stood from the vanity and opened a chest under the window. From where Adriana stood she could see, peeking out of the interior of the chest, a brilliant and shining suit of armor. The queen lifted the black-rose-adorned breastplate so the captain could inspect it from where she stood. It was clearly built for her.
"You already know I'm not putting that on."
"I felt I should offer it at least."
Adriana shook her head in disbelief. "And what about the people?"
"They will adore me," Marchesa said, leaving the chest to return to her vanity. Despite only having ten fingers, she seemed to require thirty rings.
Adriana's heart quickened with rage. "And what if they don't adore you?"
Marchesa obviously hadn't considered that. She met Adriana's eyes as the captain continued.
"What if you step out to deliver your coronation speech and are met with a thousand citizens calling you a tyrant?"
"Then I will be #emph[tyrannical] ."
Adriana refused to let her eyes leave the gaze of the queen. "You won't kill me. If you do, my guard will retaliate without a second thought."
Marchesa shrugged and returned to applying rings. "Unfortunately, your deduction is correct. It is in my best interest to allow you to live," she said, her eyes shifting up. "It is in your best interest to stay in line."
Adriana spat in the Queen's face.
This time, the spit hit its target.
The Black Rose, for once in her life, did not see it coming. She sat in stunned horror, a shaking hand wiping saliva from her eye as Adriana grabbed the new armor from the chest and left.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Adriana wasted no time in letting her feelings be known.
She immediately went to where the rest of her guard was stationed and told them to find her after the coronation speech. She then made haste for the stables and tied the dreadful rose-adorned breastplate to a rope, hitching it to the back of her saddle to drag in the dirt behind her.
Adriana mounted her horse and began to ride.
The crowd making their way to the queen's speech parted in front of her. #emph[Look at your captain, ] Adriana thought, #emph[and look at what I think of your new queen.]
In the distance she could hear Marchesa's speech, amplified for all to hear. "The former captain has retired, with thanks from our fair city and a generous pension from the throne that will support her for the rest of her life, however long that may be."
Adriana rolled her eyes and urged her horse to move on. She rode towards the Thieves' Quarter, past hundreds of her fellow citizens, and felt overcome as she rode to make a speech of her own. She slowed to a stop, looking out over the confused and alarmed faces of her people. From atop her horse Adriana felt a power she had always allowed others to wield. She was tired of standing by while those around her grasped control.
She spoke to the crowded Thieves' Quarter with unassailable conviction. "Marchesa would have you stand with her, in service to a true crown resting upon a false head, and thereby she would make you a traitor!"
Adriana raised the sword of Brago and beat the symbol of her city on her shield. "If her flag is not your flag, then do not bow to it. If her rule is illegitimate, then so too are her laws. If she is not truly queen, then the servants of the throne are no better than her spies and assassins, and should be treated accordingly!"
The crowd hummed with agreement, and Adriana's spirit flew. #emph[They are sick of the engine, too.]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
In the weeks that followed, Brago's forced peace gave way to Marchesa's deep unrest. Those who served in Brago's guard broke their oaths to the crown under cover of darkness to patrol the streets and provide protection for the citizens. With the setting sun came a switching of sigils, and the symbol of the city became a reliable marker for who could be trusted in the night.
"Do you stand with the city?" the graffiti would ask passers-by in quiet places of the city. The citizens of the High City heard the rumors and felt the disquiet. They listened to the decrees of a poisonmaster-queen and the hiss of corruption her supporters sowed. The citizens heard it all, and Adriana heard it the loudest. But after her declaration in the Thieves' Quarter, she held her tongue. Her voice was not the one to ultimately rule the people. #emph[I am the hand that guards the voice] , she knew. #emph[I am the one who listens for trouble.]
And so, three moons after the night of the regicide, she traveled under cloak and cover of darkness to the home of the person she knew could help.
Adriana hadn't slept in days. She had been listening. Listening to her guard, listening to her citizens, listening for what the people needed and why they weren't being treated with respect by a leader who should love them. All that listening had proven one thing: Paliano didn't need a monarchy that hid itself behind castles and assassins. It needed a leader who understood Fiora at large.
Reaching her destination, Adriana quietly rapped on an ornate door built of sturdy foreign wood. The door creaked, and she was let inside by a face anyone in Paliano would know instantly.
#figure(image("002_Tyrants/06.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none)
The elven explorer Selvala stood on the other side of the door and glanced over her unexpected guest.
"Adriana. You come with news?"
"I come with a proposition."
Selvala took a second to assess the former captain. She nodded, and quietly showed Adriana in.
Selvala's home was quaint and modest; a traveler's home away from home.
Adriana left her cloak near the door and joined the elf at a table in front of a wood stove. Selvala, through the habit of her people, silently waited for the former captain of the guard to state her business.
#emph[There are no other options, ] Adriana knew. #emph[If she will not say yes then the future of our city is lost to tyrants forever.]
Adriana accepted a small mug of tea the elf had set on the tabletop. She looked Selvala in the eye and built up the courage for the most important pitch she had ever given. "Paliano's monarchy isn't stable. It is an endless, murderous engine of violence," Adriana said, voice steady and confident in the privacy of the elf's home.
Selvala nodded. A small movement heavy with affirmation.
"If we as citizens wish to live for the possibility of freedom, that engine must be halted. You are well-respected among the people and a uniting force for our city," Adriana continued, "the finest nominee for a senator I can think of."
Selvala's eyes widened in half-contained surprise.
Adriana leaned forward in her chair, heart burning with the conviction of an entire city. She allowed a rare smile to escape her lips as she asked the most important question she would ever ask in her life.
"Will you help us build the Republic of Paliano?"
https://github.com/konradroesler/lina-skript | https://raw.githubusercontent.com/konradroesler/lina-skript/main/lina-2.typ | typst | #import "utils.typ": *
#import "template.typ": uni-script-template
#show: doc => uni-script-template(
title: [Vorlesungsskript],
author: [<NAME>],
module-name: [LinA II\* SoSe 24],
doc
)
#bold[Wiederholung:]
$K$ sei ein beliebiger Körper, $V$ ein $n$-dimensionaler $K$-Vektorraum,
$
L(V, V) = { f: V -> V | f "lin. Abbildung" }
$
$f in L(V,V)$ heißt Endomorphismus. Ist $f in L(V,V)$, so läßt sich $f$ bezüglich einer Basis $B = {v_1, ..., v_n}$ von $V$ eindeutig durch eine Matrix
$
A^(B, B)_f = (a_(i j))_(1 <= i, j <= n) in K^(n,n)
$
beschreiben. Es gilt
$
f(v_j) = sum_(i = 1)^n a_(i j) v_i wide 1 <= j <= n
$
Die Abbildung
$
F: L(V, V) -> K^(n,n)
$
ist ein Isomorphismus.
Basiswechsel? Basen $B, C$ von $V$
#figure(
image("bilder/527.jpg", width: 80%)
)
(siehe Lem. 5.27, LinA I\*)
Eine zentrale Frage: Sei $f in L(V,V)$, existiert eine Basis $B = {v_1, ..., v_n}$ von $V$, so dass $A_f^(B,B)$ eine möglichst einfache Form besitzt?
z.B. Diagonalmatrix:
$
A_f^(B,B) = mat(lambda_1, ..., 0; dots.v, dots.down, dots.v; 0, ..., lambda_n)
$
Wir werden:
#boxedlist[
Endomorphismen charakterisieren, die sich durch eine Diagonalmatrix beschreiben lassen.
Wenn ja: Dann gilt $f(v_j) = lambda_j v_j$
$==> f$ ist eine Streckung von $v_j$ um den Faktor $lambda_j$.
][
Die Jordan-Normalform herleiten.
]
= Eigenwerte und Eigenvektoren
Eigenwerte charakterisieren zentrale Eigenschaften linearer Abbildungen. Z.B.
#boxedlist[
Lösbarkeit von linearen Gleichungssystemen
][
Eigenschaften von physikalischen Systemen
$->$ gewöhnliche Differentialgleichungen
$->$ Eigenschwingungen / Resonanzkatastrophe
Zerstörung einer Brücke über dem Fluss Maine / Millennium Bridge London
]
== Definition und grundlegende Eigenschaften
#definition("1.1", "Eigenwert und Eigenvektor (Endomorphismus)")[
Sei $V$ ein $K$-Vektorraum. Ein Vektor $v in V, v != 0_V$, heißt #bold[Eigenvektor] von $f in L(V,V)$, falls $lambda in K$ mit
$
f(v) = lambda v
$
existiert. Der Skalar $lambda in K$ heißt der #bold[Eigenwert] zum Eigenvektor $v in V$.
] <def>
#definition("1.2", "Eigenwert und Eigenvektor (Matrix)")[
Sei $K$ ein Körper und $n in NN$. Ein Vektor $v in K^n$, $v != 0_(K^n)$, heißt Eigenvektor von $A in K^(n,n)$, falls $lambda in K$ mit
$
A v = lambda v
$
existiert. Der Skalar $lambda in K$ heißt der Eigenwert zum Eigenvektor $v in K^n$.
] <def>
#bold[Bemerkungen:]
#boxedlist[
In Def 1.1 kann $dim(V) = oo$ sein. Dies ist für viele Definitionen/Aussagen, in denen wir Endomorphismen betrachten, der Fall.
][
Für $dim(V) < oo$ kann man jedes $f in L(V,V)$ eindeutig mit einer Matrix $A$ identifizieren. Dann: Def 1.2 ist Spezialfall von Def 1.1.
]
#boxedlist[
#bold[Achtung:] $0 in K$ kann ein Eigenwert sein:
$
mat(1,1;1,1) vec(1,-1) = 0 dot vec(1,-1)
$
Der Nullvektor $0 in V$ ist #bold[nie] ein Eigenvektor.
Für $dim(V) = 0$ besitzt $f$ keinen Eigenvektor für $f in L(V,V)$.
][
Ist $v$ Eigenvektor zum Eigenwert $lambda$, so ist auch $alpha v$ für jedes $alpha in K without {0}$ ein Eigenvektor
$
f(alpha v) = alpha f(v) = alpha lambda v = lambda (alpha v)
$
]
Zentrale Frage dieses Kapitels:
Existenz von Eigenwerten? Wenn sie existieren: Weitere Eigenschaften?
#bold[Beispiel 1.3:] Sei $I subset RR$ ein offenes Intervall und $V$ der unendlichdimensionale Vektorraum der auf $I$ beliebig oft differenzierbaren Funktionen. Ein Endomorphismus $f in L(V,V)$ ist gegeben durch
$
f(phi) = phi' wide forall phi in V
$
Die Abbildung $f$ hat jedes $lambda in RR$ als Eigenwert, da für $c in RR without {0}$ und die Funktion
$
phi(x) := c dot e^(lambda x) space != space 0_V wide forall x in I
$
gilt
$
f(phi(x)) = f(c dot e^(lambda x)) = lambda (c e^(lambda x)) = lambda phi(x)
$
Hier: die Eigenwertgleichung $f(phi) = phi' = lambda phi$ ist eine gewöhnliche Differentialgleichung.
#bold[Beispiel 1.4:] Wir betrachten die lineare Abbildung $f: RR^2 -> RR^2$, welche durch
$
f(vec(x_1, x_2)) = vec(x_2, -x_1) = mat(0,1;-1,0) vec(x_1, x_2)
$
definiert ist. Sei $x$ ein Eigenvektor, dann gilt
$
f(vec(x_1, x_2)) = vec(x_2, -x_1) = lambda vec(x_1, x_2) \
<==> x_2 = lambda x_1 space "und" space -x_1 = lambda x_2
$
O.B.d.A: $x_2 != 0$
$
==> x_2 = lambda(-lambda x_2) = - lambda^2 x_2
==> - lambda^2 = 1, space lambda in RR ==> lambda^2 >= 0 ==> - lambda^2 <= 0 space arrow.zigzag
$
D.h. $f$ besitzt keinen Eigenwert/-vektor. Für $f: CC^2 -> CC^2$ ändert sich dies! $==>$ Die Wahl von $K$ entscheidet!
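Zur Veranschaulichung eine kurze Rechnung über $CC$: aus $-lambda^2 = 1$ folgt $lambda = plus.minus i$, und man rechnet nach, dass
$
mat(0,1;-1,0) vec(1,i) = vec(i,-1) = i dot vec(1,i), wide mat(0,1;-1,0) vec(1,-i) = vec(-i,-1) = (-i) dot vec(1,-i)
$
Über $CC$ besitzt $f$ also die Eigenwerte $plus.minus i$ mit zugehörigen Eigenvektoren $vec(1, plus.minus i)$.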
#bold[Beispiel 1.5:] Wieder $f: RR^2 -> RR^2$, diesmal
$
f(vec(x_1, x_2)) = vec(2 x_2, 2 x_1) = underbrace(mat(0,2;2,0), =: A) vec(x_1, x_2)
$
Dann gilt für $v_1 = vec(1,0), v_2 = vec(1,1), v_3 = vec(-1, 1)$, dass $f(v_1) = vec(0,2), f(v_2) = vec(2,2) = 2 dot v_2$ und $f(v_3) = vec(2,-2) = (-2) dot v_3$.
#figure(
image("bilder2/1_5.jpg", width: 80%)
)
Beobachtung: $dim(V) = 2$
zwei Eigenwerte: $2, -2$, es existieren keine weiteren,
zwei Eigenvektoren: $v_2 = vec(1,1), v_3 = vec(-1,1)$, sind linear unabhängig
#lemma("1.6")[
Es sei $f in L(V,V)$ ein Endomorphismus. Eigenvektoren zu paarweise verschiedenen Eigenwerten von $f$ sind linear unabhängig.
]
#italic[Beweis:] Es seien $v_1, ..., v_m$ Eigenvektoren zu den paarweise verschiedenen Eigenwerten $lambda_1, ..., lambda_m$ von $f$. Beweis durch Induktion:
Induktionsanfang: $m = 1$, $lambda_1, v_1 != 0$ $==> v_1$ lin. unabh.
Induktionsschritt: $m-1 -> m$
Induktionsvorraussetzung: Behauptung gelte für $m-1$
Betrachte
$
alpha_1 v_1 + alpha_2 v_2 + ... + alpha_m v_m = 0 space (*) space space alpha_1, ..., alpha_m in K \
==>^("EV, f()") alpha_1 lambda_1 v_1 + alpha_2 lambda_2 v_2 + ... + alpha_m lambda_m v_m = 0 \
==>^((\*) dot lambda_m) lambda_m alpha_1 v_1 + lambda_m alpha_2 v_2 + ... + lambda_m alpha_m v_m = 0
$
Wir bilden die Differenz aus Zeile 1 und 2
$
underbrace((lambda_1 - lambda_m), != 0) alpha_1 v_1 + underbrace((lambda_2 - lambda_m), != 0) alpha_2 v_2 + ... + underbrace((lambda_(m-1) - lambda_m), != 0) alpha_(m-1) v_(m-1) = 0
$
$v_1, ..., v_(m-1)$ lin. unabh. $==>$ $alpha_1 = alpha_2 = ... = alpha_(m-1) = 0$
Einsetzen in (\*) liefert
$
alpha_m underbrace(v_m, != 0) = 0 ==> alpha_m = 0
$
$==> v_1, ..., v_m$ lin unabh.
#endproof
#bold[Folgerung:] Es gibt höchstens $n = dim(V)$ verschiedene Eigenwerte für $n = dim(V) < oo$.
#definition("1.7", "Eigenraum")[
Ist $f in L(V,V)$ und $lambda in K$, so heißt $#sspace$
$
"Eig"(f, lambda) = {v in V | f(v) = lambda v}
$
der #bold[Eigenraum] von $f$ bezüglich $lambda$.
] <def>
Es gilt:
#boxedlist[
$"Eig"(f, lambda) subset.eq V$ ist ein Untervektorraum
][
$lambda$ ist Eigenwert von $f$ $<==> "Eig"(f, lambda) != {0}$
][
$"Eig"(f, lambda) without {0}$ ist die Menge der zu $lambda$ gehörenden Eigenvektoren von $f$.
][
$"Eig"(f, lambda) = "ker"(f - lambda "Id")$
][
$dim("Eig"(f, lambda)) = dim(V) - rg(f - lambda "Id")$
][
Sind $lambda_1, lambda_2 in K$ verschiedene Eigenwerte, so ist $"Eig"(f, lambda_1) sect "Eig"(f, lambda_2) = {0}$
]
Die letzte Aussage kann verallgemeinert werden zu:
#lemma("1.8")[
Sei $V$ ein $K$-Vektorraum mit $dim(V) = n < oo$ und $f in L(V,V)$. Sind $lambda_1, ..., lambda_m, m <= n$, paarweise verschiedene Eigenwerte von $f$, so gilt
$
"Eig"(f, lambda_i) sect limits(sum_(j = 1)^m)_(j != i) "Eig"(f, lambda_j) = {0} wide forall i = 1, ..., m
$
]
#italic[Beweis:] Summe von Vektorräumen, vgl. Def 3.32 LinA I.
Sei $i in {1, ..., m}$ fest gewählt.
$
v in "Eig"(f, lambda_i) sect limits(sum_(j = 1)^m)_(i != j) "Eig"(f, lambda_j)
$
Also ist
$
v = limits(sum_(j = 1)^m)_(j != i) v_j space "für" v_j in "Eig"(f, lambda_j) space "für" space j != i
$
$==> -v + limits(sum_(j = 1)^m)_(j != i) v_j = 0$
Aus Lemma 1.6 folgt damit $v = 0$.
#endproof
Über die Identifikation von Endomorphismen und Matrizen für $dim(V) < oo$ erhält man:
#corollary("1.9")[Für ein $n in NN$ und einem Körper $K$ sei $A in K^(n,n)$. Dann gilt für jedes $lambda in K$, dass
$
dim("Eig"(A, lambda)) = n - rg(A - lambda I_n)
$
Insbesondere ist $lambda in K$ genau dann ein Eigenwert von $A$, wenn $rg(A - lambda I_n) < n$ ist.
]
#definition("1.10", "Geometrische Vielfachheit")[
Ist $f in L(V,V)$ und $lambda in K$ ein Eigenwert von $f$, so heißt $#sspace$
$
g(f, lambda) := dim("Eig"(f, lambda)) wide (> 0)
$
die geometrische Vielfachheit des Eigenwerts $lambda$.
] <def>
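Ein kleines Beispiel zur Illustration: Für $A = mat(1,1;0,1) in RR^(2,2)$ ist $lambda = 1$ ein Eigenwert und
$
"Eig"(A, 1) = ker(A - 1 dot I_2) = ker mat(0,1;0,0) = "Span"{vec(1,0)}, wide "also" quad g(A, 1) = 1
$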
== Das charakteristische Polynom
Wie bestimmt man Eigenwerte?
#lemma("1.11")[
Seien $A in K^(n,n)$ und $lambda in K$. Dann ist
$
det(A - lambda I_n)
$
ein Polynom $n$-ten Grades in $lambda$.
]
#italic[Beweis:] Mit der Leibniz-Formel folgt,
$
det lr((underbrace(A - lambda I_n, tilde(a)_(i j))), size: #25%) = sum_(sigma in S_n) sgn(sigma) dot tilde(a)_(1 sigma(1)) dot ... dot tilde(a)_(n sigma(n)) \
= underbrace(underbrace((a_(1 1) - lambda) dot (a_(2 2) - lambda) dot ... dot (a_(n n) - lambda), sigma = "Id"), in cal(P)_n space.thin "in" space.thin lambda) + underbrace(underbrace(S, sigma != "Id"), in cal(P)_(n -2) space.thin "in" space.thin lambda)
$
Weiter gilt:
$
(a_(1 1) - lambda) dot ... dot (a_(n n) - lambda) = (-1)^n lambda^n + (-1)^(n-1) lambda^(n-1) (a_(1 1) + ... + a_(n n)) + underbrace(S_1, in cal(P)_(n-2) "in" lambda)
$
Insgesamt: Es existieren Koeffizienten $a_0, ..., a_n in K$ mit
$
det(A - lambda I_n) = a_n lambda^n + a_(n-1) lambda^(n-1) + ... + a_1 lambda + a_0 \
a_n = (-1)^n \
a_(n-1) = (-1)^(n-1) (a_(1 1) + ... + a_(n n))
$
man kann zeigen: $a_0 = det(A)$
#endproof
Man nennt $a_(1 1) + a_(2 2) + ... + a_(n n)$ auch die #bold[Spur] von $A$.
#definition("1.12", "Charakteristisches Polynom")[
Sei $A in K^(n,n)$ und $lambda in K$. Dann heißt das Polynom $n$-ten Grades
$
P_A (lambda) := det(A - lambda I_n)
$
das charakteristische Polynom zu $A$.
] <def>
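Zum Beispiel erhält man für $A = mat(2,1;1,2) in RR^(2,2)$
$
P_A (lambda) = det mat(2-lambda,1;1,2-lambda) = (2-lambda)^2 - 1 = lambda^2 - 4 lambda + 3 = (lambda - 1)(lambda - 3)
$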
#lemma("1.13")[
Sei $A in K^(n,n)$ und $lambda in K$. Der Skalar $lambda$ ist genau dann Eigenwert von $A$, wenn
$
P_A (lambda) = 0
$
gilt.
]
#italic[Beweis:] Die Gleichung
$
A v = lambda v <==> A v - lambda v = 0 <==> (A - lambda I_n) v = 0
$
hat genau dann eine Lösung $v in K^n, v != 0$, wenn $rg(A - lambda I_n) < n$ gilt, vgl. Satz 6.3 aus LinA I. Dies ist genau dann der Fall, wenn
$
det(A - lambda I_n) = 0, "vgl. D10 aus LinA I"
$
#endproof
#bold[Beispiel 1.14:] Eigenwerte und -vektoren von
$
A = mat(3,8,16;0,7,8;0,-4,-5)
$
Entwicklung nach der ersten Zeile (alternativ: Regel von Sarrus) liefert
$
P_A (lambda) = det mat(3-lambda,8,16;0,7-lambda,8;0,-4,-5-lambda) \
= (3-lambda)[(7-lambda)(-5-lambda)-8 dot (-4)]-8 (0-0) + 16(0-0) \
= (3-lambda)(-35-7 lambda+5 lambda + lambda^2 +32) \
= (3-lambda)(lambda^2 - 2 lambda -3 ) =(3-lambda)(lambda+1)(lambda-3)
$
$==>$ Eigenwerte sind $lambda = 3$ und $lambda = -1$
Zugehörige Eigenvektoren?
$lambda = -1$:
$
A v = -v <==> (A + I_3) v = 0 \
mat(4,8,16;0,8,8;0,-4,-4) vec(v_1, v_2, v_3) = vec(0,0,0)
$
LGS lösen: $==> v_2 = -v_3, v_1 = -2 v_3$
Damit ist z.B.: $w_1 = (2, 1, -1)^top$ Eigenvektor.
$lambda = 3$:
$
(A-3I_3) v = 0 <==> \
mat(0,8,16;0,4,8;0,-4,-8) vec(v_1, v_2, v_3) = 0 in RR^3 <==> v_2 + 2v_3 = 0
$
Damit sind z.B.: $w_2 = (1,2,-1)^top, w_3 = (-1,2,-1)^top$ Eigenvektoren.
$lambda = -1$: einfache Nullstelle und $dim("Span"(w_1)) = 1$ passt zu $rg(A - (-1) I_n) = 2$ und $dim("Eig"(A, -1)) = 3- 2 = 1$.
$lambda = 3$: doppelte Nullstelle und $dim("Span"(w_2, w_3)) = 2$ passt zu $rg(A - 3 I_n) = 1$ und $dim("Eig"(A, 3)) = 3-1 = 2$
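Eine kurze Probe bestätigt die Rechnung:
$
A w_1 = mat(3,8,16;0,7,8;0,-4,-5) vec(2,1,-1) = vec(-2,-1,1) = (-1) dot w_1, wide A w_2 = vec(3,6,-3) = 3 dot w_2
$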
#lemma("1.15")[
Sei $A in K^(n,n)$. Dann gilt
$
p_A (.) = p_(A^top) (.)
$
D.h. eine Matrix und ihre Transponierte haben die gleichen Eigenwerte.
]
#italic[Beweis:]
$
p_A (lambda) = det(A- lambda I_n) =^("D12") det((A-lambda I_n)^top) = det(A^top - lambda I_n) = p_(A^top) (lambda)
$
#endproof
#bold[Achtung:] Die Eigenwerte bleiben gleich, aber nicht die Eigenvektoren.
#bold[Beispiel 1.16:] Für die Matrix $A$ aus Bsp. 1.14 gilt
$
A^top = mat(3,0,0;8,7,-4;16,8,-5) ==> det(A^top - lambda I_n) = (3-lambda)[(7-lambda)(-5-lambda)+4 dot 8] \
= -(lambda-3)^2(lambda+1)
$
Aber
$
mat(3,0,0;8,7,-4;16,8,-5) vec(2,1,-1) = vec(6,27,45) != (-1) vec(2,1,-1)
$
Man kann ausrechnen:
$
tilde(w)_1 = vec(0,1,2) space "EV zu EW" -1, quad tilde(w)_2 = vec(0,1,1), space tilde(w)_3 = vec(1,0,2) space "EV zu EW" 3
$
Übertragung auf Endomorphismen?
$p_f (lambda) space f in L(V,V)$, $B "Basis" => exists! A_f^(B,B)$, $C "Basis" ==> exists! A_f^(C,C)$
$
p_(A^(B,B)_f) (lambda) =^"?" p_(A^(C,C)_f) (lambda)
$
#definition("1.17", "ähnliche Matrizen")[
Zwei Matrizen $A, B in K^(n,n)$ heißen #bold[ähnlich], wenn es eine Matrix $T in "GL"_n (K)$ gibt, so dass $A = T B T^(-1)$ gilt.
] <def>
Man kann leicht beweisen, dass die Ähnlichkeit von Matrizen eine Äquivalenzrelation auf der Menge der quadratischen Matrizen ist.
Mit $det(A^(-1)) =^"D11" (det(A))^(-1)$ folgt für zwei ähnliche Matrizen $A$ und $B$, dass
$
det(A) = det(T B T^(-1)) = det(T) det(B) det(T^(-1)) = det(B)
$
#bold[Beispiel 1.18:] Sei $f in L(RR^3, RR^3)$, d.h. $V= RR^3$, gegeben durch
$
f(vec(x_1, x_2, x_3)) = vec(x_1, -4 x_1 + 7 x_2, 3 x_1 + 5 x_2 + 3 x_3)
$
Wir betrachten für den $RR^3$ die Basen
$
E = {vec(1,0,0), vec(0,1,0), vec(0,0,1)}, \
B = {vec(1,0,0), vec(1,1,0), vec(1,1,1)}, \
C = {vec(0,0,-1), vec(1,0,0), vec(0,-1,0)}
$
Für die darstellende Matrix von $f$ bezüglich der Standardbasis $E$ erhalten wir aus Satz 5.18, LinA I,
$
f(e_j) = sum_(i=1)^3 a_(i j) e_i quad forall j in {1,2,3}
$
dass
$
A_f^(E,E) = mat(1,0,0;-4,7,0;3,5,3)
$
Das zugehörige kommutative Diagramm ist gegeben durch
#figure(
image("bilder2/1_18.jpg", width: 40%)
)
Für die Basis $B$ erhalten wir
$
f(vec(1,0,0)) = vec(1,-4,3) = 5 vec(1,0,0) + (-7) vec(1,1,0) + 3 vec(1,1,1) \
f(vec(1,1,0)) = vec(1,3,8) = (-2) vec(1,0,0) + (-5) vec(1,1,0) + 8 vec(1,1,1) \
f(vec(1,1,1)) = vec(1,3,11) = (-2) vec(1,0,0) + (-8) vec(1,1,0) + 11 vec(1,1,1) \
==> A_f^(B,B) = mat(5,-2,-2;-7,-5,-8;3,8,11)
$
Herleitung bezüglich Matrizen?
#figure(image("bilder2/1_18_2.jpeg", width: 30%))
Koordinatenabbildung $Phi_B$?
Abbildung vom $RR^3$ + Standardbasis $E$ in den $V (= RR^3)$ + Basis $B$.
$
Phi_B (e_i) = v_i quad "für" quad B = {v_1, v_2, v_3} \
==> A_(Phi_B)^(E, B) = mat(1,1,1;0,1,1;0,0,1)
$
Damit folgt insgesamt:
$
A_f^(B,B) = (A_(Phi_B)^(E,B))^(-1) I_n A_f^(E,E) I_n^(-1) A_(Phi_B)^(E,B) = (A_(Phi_B)^(E,B))^(-1) A_f^(E,E) underbrace(A_(Phi_B)^(E,B), in "GL"_n (RR)) \
==> A_f^(B,B) "und" A_f^(E,E) "sind ähnlich"
$
Für die Basis $C$ erhalten wir
$
f(vec(0,0,-1)) = vec(0,0,-3) = 3 vec(0,0,-1) + 0 vec(1,0,0) + 0 vec(0,-1,0) \
f(vec(1,0,0)) = vec(1,-4,3) = (-3) vec(0,0,-1) + 1 vec(1,0,0) + 4 vec(0,-1,0) \
f(vec(0,-1,0)) = vec(0,-7,-5) = 5 vec(0,0,-1) + 0 vec(1,0,0) + 7 vec(0,-1,0)
$
Als Darstellungsmatrix erhält man
$
A_f^(C,C) = mat(3,-3,5;0,1,0;0,4,7)
$
Als Matrizenmultiplikation
#figure(image("bilder2/1_18_3.jpeg", height: 30%))
Darstellung von $Phi_C$?
$Phi_C (e_i) = w_i quad "für" quad C = {w_1, w_2, w_3}$
$
A_(Phi_C)^(E,C) = mat(0,1,0;0,0,-1;-1,0,0)
$
$
A_f^(C,C) = (A_(Phi_C)^(E,C))^(-1) I_n A_f^(E,E) I_n^(-1) A_(Phi_C)^(E,C) = (A_(Phi_C)^E,C)^(-1) A_f^(E,E) A_(Phi_C)^(E,C)
$
Also auch: $A_f^(C,C)$ ist ähnlich zu $A_f^(E,E)$.
Alternativ:
$
A_f^(C,C) &= (A_(Phi_C)^(E,C))^(-1) I_n I_n^(-1) A_(Phi_B)^(E,B) A_f^(B,B) (A_(Phi_B)^(E,B))^(-1) I_n A_(Phi_C)^(E,C) \
&= underbrace((A_(Phi_C)^(E,C))^(-1) A_(Phi_B)^(E,B), in "GL"_n (RR)) A_f^(B,B) (A_(Phi_B)^(E,B))^(-1) A_(Phi_C)^(E,C)
$
Jetzt allgemein: $f in L(V,V)$, $dim(V) < oo$, $B,C$ seien Basen von $V$ $==>$
$
A := A_f^(B,B) wide tilde(A) := A_f^(C,C)
$
und es existiert $T in "GL"_n (K)$ als Basistransformationsmatrix, so dass
$
tilde(A) = T A T^(-1)
$
Dann gilt
$
p_(tilde(A)) (lambda) &= det(tilde(A) - lambda I_n) = det(T A T^(-1) - lambda T T^(-1)) \
&= det(T (A -lambda I_n) T^(-1)) \
&= det(T) det(A - lambda I_n) det(T^(-1)) \
&= p_A (lambda)
$
D.h. für einen Endomorphismus ist das charakteristische Polynom der zugehörigen Darstellungsmatrix unabhängig von der Wahl der Basis!
Damit ist es sinnvoll, für $f in L(V,V)$, $dim(V) < oo$,
$
p_f (.) := p_A (.)
$
für $A$ als Darstellungsmatrix $A_f^(B,B)$ für eine Basis $B$ zu setzen.
#lemma("1.19")[
Sei $V$ ein $K$-Vektorraum mit $dim(V) = n < oo$ und $f in L(V,V)$. Dann sind folgende Aussagen äquivalent:
#box(width: 100%, inset: (left: 0.5cm, right: 0.5cm))[
1. $lambda in K$ ist ein Eigenwert von $f$.
2. $lambda in K$ ist ein Eigenwert der Darstellungsmatrix $A_f^(B,B)$ für eine gewählte $B$ von $V$.
]
]
Des Weiteren gilt: Für zwei ähnliche Matrizen $A$ und $B$ ist $p_A (lambda) = p_B (lambda)$, die Umkehrung gilt jedoch im Allgemeinen nicht:
$
A, B "ähnlich" ==> p_A (lambda) = p_B (lambda)
$
z.B.
$
A = mat(1,0;2,1) wide B = mat(1,0;0,1) \
p_A (lambda) = (1-lambda)^2 = p_B (lambda), "aber für jedes" T in "GL"_2 (RR) "gilt" \
T B T^(-1) = T T^(-1) = I != A "also" A, B "nicht ähnlich"
$
Weitere Beobachtung: Aus Lemma 1.13 und Lemma 1.19 folgt, dass die Eigenwerte von $f in L(V,V)$ die Nullstellen des charakteristischen Polynoms der Matrix $A_f^(B,B)$ für eine Basis $B$ sind. Dies gilt #bold[nicht] i.a. für Darstellungsmatrizen $A_f^(B,C)$ für $B != C$.
#definition("1.20", "Algebraische Vielfachheit")[
Sei $V$ ein $K$-Vektorraum mit $dim(V) = n < oo$. Ist $f in L(V,V)$, ist $tilde(lambda)$ ein Eigenwert von $f$ und hat das charakteristische Polynom $p_f (lambda)$ die Form
$
p_f (lambda) = (lambda - tilde(lambda))^d dot tilde(p) (lambda)
$
für ein $tilde(p)(.) in K[lambda]$ mit $tilde(p)(tilde(lambda)) != 0$, so nennt man $d$ die #bold[algebraische Vielfachheit] von $tilde(lambda)$ und bezeichnet sie mit $a(f, tilde(lambda))$.
] <def>
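Zum Beispiel: Für $p_f (lambda) = (lambda - 2)^3 (lambda + 1)$ ist $a(f, 2) = 3$ und $a(f, -1) = 1$; in Beispiel 1.14 gilt entsprechend $a(f, 3) = 2$ und $a(f, -1) = 1$.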
#lemma("1.21")[
Seien $V$ ein $K$-Vektorraum, $dim(V) = n < oo$, und $f in L(V,V)$. Für Eigenwert $tilde(lambda)$ von $f$ gilt
$
g(f, tilde(lambda)) <= a(f, tilde(lambda))
$
]
#italic[Beweis:] Ist $tilde(lambda)$ EW von $f$ mit der geometrischen Vielfachheit $m := g(f, tilde(lambda))$, so gibt es nach Def. 1.10 zu $tilde(lambda)$ $m$ linear unabhängige Eigenvektoren $v_1, ..., v_m in V$.
Gilt $m = n = dim(V)$ sind ${v_1, ..., v_m}$ schon Basis von $V$.
Gilt $m < n$, so folgt aus dem Basisergänzungssatz (Satz 3.21, LinA I), dass man ${v_1, ..., v_m}$ zu einer Basis ${v_1, ..., v_m, v_(m+1), ..., v_n} =: B$ ergänzen. Wegen $f(v_j) = tilde(lambda) v_j, 1<=j<=m$, gilt
$
A_f^(B,B) = mat(tilde(lambda) I_m, A_1; 0, A_2) in K^(n,n)
$
für zwei Matrizen $A_1 in K^(m, n-m)$, $A_2 in K^(n-m, n-m)$.
Mit D9 aus LinA I folgt
$
p_f (lambda) = (tilde(lambda) - lambda)^m dot det(A_2 - lambda I_(n-m,n-m))
$
$==>$ EW $tilde(lambda)$ ist mindestens $m$-fache Nullstelle von $p_f (lambda)$. Für $m = n ==> A_f^(B,B) = tilde(lambda) I_n ==>$ $p_f (lambda) = (tilde(lambda) - lambda)^m$
#endproof
#pagebreak()
= Diagonalisierbarkeit und Normalform
== Diagonalisierbarkeit
#definition("2.1", "Diagonalisierbar")[
Sei $V$ ein $K$-Vektorraum mit $dim(V) = n < oo$. Ein $f in L(V,V)$ heißt #bold[diagonalisierbar], wenn es eine Basis $B$ von $V$ gibt, so dass $A_f^(B,B)$ eine Diagonalmatrix ist. D.h. es existieren $lambda_1, ..., lambda_n in K$ mit
$
A_f^(B,B) = mat(lambda_1, ..., 0;dots.v,dots.down,dots.v;0,dots,lambda_n) in K^(n,n)
$
] <def>
Entsprechend nennen wir eine Matrix $A in K^(n,n)$ #bold[diagonalisierbar], wenn es eine Matrix $T in "GL"_n (K)$ und eine Diagonalmatrix $D in K^(n,n)$ gibt mit
$
A = T D T^(-1)
$
D.h. $A$ ist ähnlich zu einer Diagonalmatrix.
#theorem("2.2")[
Sei $V$ ein $K$-Vektorraum mit $dim(V) = n < oo$ und $f in L(V,V)$. Dann sind folgende Aussagen äquivalent:
#box(width: 100%, inset: (left: 0.5cm, right: 0.5cm))[
1. $f$ ist diagonalisierbar
2. Es gibt eine Basis $B$ von $V$ bestehend aus Eigenvektoren von $f$.
3. #[
Das charakteristische Polynom $p_f (.)$ zerfällt in $n$ Linearfaktoren über $K$, d.h.
$
p_f (lambda) = (lambda - lambda_1) dot ... dot (lambda - lambda_n)
$
mit Eigenwerten $lambda_1, ..., lambda_n in K$ für $f$ und für jeden Eigenwert $tilde(lambda)$ gilt $a(f, tilde(lambda)) = g(f, tilde(lambda))$.
]
]
]
#italic[Beweis:]
"$1 ==> 2$": $f$ diagonalisierbar $==>$ $exists {v_1, ..., v_n} = B$ Basis von $V, lambda_1, .., lambda_n in K$:
$
tilde(A) := A_f^(B,B) = mat(lambda_1, ..., 0;dots.v,dots.down,dots.v;0,dots,lambda_n) wide f(v_j) = sum_(i = 1)^n a_(i j) v_i
$
$==> f(v_i) = lambda_i v_i, 1 <= i <= n, v_i != 0$. Damit sind $lambda_1, ..., lambda_n$ Eigenwerte von $f$ mit zugehörigen Eigenvektoren $v_1, ..., v_n$. $==> 2.$
"$2 ==> 1$": Ist $B = {v_1, ..., v_n}$ eine Basis von $V$ bestehend aus Eigenvektoren, so gibt es zugehörige Eigenwerte $lambda_1, ..., lambda_n$ mit $f(v_j) = lambda_j v_j$, $1<= j <=n$ $==>$
$
A_f^(B,B) = mat(lambda_1, ..., 0;dots.v,dots.down,dots.v;0,dots,lambda_n)
$
"$2==>3$": Sei $B = {v_1, ..., v_n}$ eine Basis von Eigenvektoren, $lambda_1, ..., lambda_n$ seien die zugehörigen Eigenwerte $==>$
$
p_f (lambda) = p_(A_f^(B,B)) (lambda) = det(A_f^(B,B) - lambda I_n) \
= (lambda_1 - lambda) dot (lambda_2 - lambda) dot ... dot (lambda_n - lambda)
$
$==>$ $p_f (.)$ zerfällt in Linearfaktoren. Seien $tilde(lambda)_1, ..., tilde(lambda)_k, k <= n$, die verschiedenen Eigenwerte. Der Eigenwert $tilde(lambda)_j$ besitzt die algebraische Vielfachheit $m_j := a(f, tilde(lambda)_j)$ genau dann, wenn er $m_j$-mal auf der Diagonalen von $A_f^(B,B)$ steht. Dies ist genau dann der Fall, wenn $m_j$ Eigenvektoren zu $tilde(lambda)_j$ in $B$ enthalten sind. Diese sind linear unabhängig $==>$
$
1. &dim("Eig"(f, tilde(lambda)_j)) = g(f, tilde(lambda)_j) >= m_j = a(f, tilde(lambda)_j) \
2. &"Lemma 1.21:" g(f, tilde(lambda)_j) <= a(f, tilde(lambda)_j) \
$
$
1 and 2 ==> g(f, tilde(lambda)_j) = a(f, tilde(lambda)_ j)
$
"$3==>2$": Seien $tilde(lambda)_1, ..., tilde(lambda)_k, k<=n$ die paarweise verschiedenen Eigenwerte von $f$. Wir wissen: $cal(P)_n in p_f (.)$ zerfällt in Linearfaktoren, $a(f, tilde(lambda)_j) = g(f, tilde(lambda)_j), 1<=j<=n$.
$
dim(V) = n = sum_(j = 1)^k a(f, tilde(lambda)_j) = sum_(j = 1)^k g(f, tilde(lambda)_j) = sum_(j = 1)^k dim("Eig"(f, tilde(lambda)_j))
$
Es gilt (Lemma 1.8):
$
"Eig"(f, tilde(lambda)_j) sect sum_(i = 1)^k "Eig"(f, tilde(lambda)_i) = 0 quad forall j = 1, ..., k
$
Dann folgt (Lemma 3.31, (2), Lemma 3.35, Satz 3.14) (direkte Summe, $U subset V$ UVR $==>$ $dim(U) <= dim(V)$, $U=V <==> dim(U) = dim(V)$, Basis $<==>$ eindeutige Darstellung), dass die zu $tilde(lambda)_1, ..., tilde(lambda)_k$ gehörenden linear unabhängigen Eigenvektoren, die jeweils eine Basis von $"Eig"(f, tilde(lambda)_j)$, $1 <= j <= k$, bilden, zusammen eine Basis von $V$ bilden.
#endproof
In Verbindung mit Lemma 1.6 folgt unmittelbar:
#corollary("2.3")[
Sei $V$ ein $K$-Vektorraum mit $dim(V) = n < oo$ und $f in L(V,V)$ mit $n$ paarweise verschiedenen Eigenwerten, dann ist $f$ diagonalisierbar.
]
#bold[Bemerkung:] Das Kriterium der $n$ paarweise verschiedenen Eigenwerte ist nicht notwendig z.B. $V = K^n$, $B = E$ Standardbasis
$
f: "Id": K^n -> K^n, ==> A_f^(E,E) = I_n ==> 1 n"-facher Eigenwert"
$
#bold[Beispiel 2.4:] Fortsetzung von Bsp. 1.14
$
A = mat(3,8,16;0,7,8;0,-4,-5), "EW:" -1, 3 \
w_1 = vec(2,1,-1) "EV zu" -1, space w_2 = vec(1,2,-1), w_3 = vec(-1,2,-1) "EV zu" 3
$
$==>$ $exists$ Basis von Eigenvektoren
$==>^"Satz 2.2"$ $A$ ist diagonalisierbar
$
p_A (lambda) = (3-lambda)(lambda+1)(lambda-3) \
a(f, -1) = 1 = g(f, -1) \
a(f, 3) = 2 = g(f, 3)
$
$T in "GL"_n (RR)$ so, dass $T^(-1) A T = D$?
Die zu $B = {w_1, w_2, w_3}$ gehörende Koordinatentransformation $Phi_B$ ist gegeben durch
$
A_(Phi_B)^(E,B) = mat(2,1,-1;1,2,2;-1,-1,-1)
$
Dann gilt: Für $f in L(RR^3, RR^3)$ mit
$
A_f^(E,E) = A wide A_f^(B,B) = mat(-1,0,0;0,3,0;0,0,3) = D
$
Mit Basiswechsel von $A$ zu $D$
$
D = (A_(Phi_B)^(E,B))^(-1) A underbrace(A_(Phi_B)^(E,B), = T)
$
#bold[Beispiel 2.5:] Nicht jeder Endomorphismus bzw. jede Matrix ist diagonalisierbar. Bsp. 1.4:
$
f: RR^2 -> RR^2, quad f(vec(x_1, x_2)) = overbrace(mat(0,1;-1,0), A) vec(x_1, x_2), quad p_f (lambda) = lambda^2 + 1
$
D.h. über $RR$ zerfällt $p_f (.)$ nicht in Linearfaktoren.
Ein weiteres Beispiel
$
A = mat(5,10,7;0,-3,-3;0,3,3)
$
$==>$ $p_A (lambda) = (5-lambda) lambda^2$ $==>$ $p_A (.)$ zerfällt in Linearfaktoren. $a(f, lambda_i), g(f, lambda_i)$ für $lambda_1 = 5, lambda_2 = 0$.
Lemma 1.21: $g(f, lambda_i) <= a(f, lambda_i)$
$==>$ $g(f,5) = 1 = a(f,5)$, $a(f,0) = 2, g(f,0) >= 1$
Die Eigenvektoren zu $lambda = 0$ sind genau die Vielfachen von
$
w_1 = vec(3,-5,5) ==> g(f, 0) = 1 < 2 = a(f,0)
$
$==>$ $f$ nicht diagonalisierbar.
#sect_delim
Mit Satz 2.2 erhält man einen Algorithmus zur Überprüfung, ob ein gegebenes $f in L(V,V)$ (bzw. $A in K^(n,n)$) diagonalisierbar ist:
#enum([
Bestimme mit einer Basis $B$ von $V$ die Darstellungsmatrix $A = A_f^(B,B)$
],[
Bestimme für $A$ das charakteristische Polynom $p_A (.)$ (Determinantenberechnung)
],[
Zerfällt $p_A (.)$ in Linearfaktoren über $K$? Nein: $f$ nicht diagonalisierbar. Ja: Seien $lambda_i, 1 <= i <= k <=n = dim(V)$ die paarweise verschiedene Eigenwerte von $f$.
Für $i = 1, ..., k$
#enum([
Bestimme eine Basis von $"Eig"(f, lambda_i)$
],[
Prüfe, ob $a(f, lambda_i) = g(f, lambda_i)$
])
Gilt $a(f,lambda_i) = g(f, lambda_i)$ für alle $i in {1,...,k}$. Nein: $f$ ist nicht diagonalisierbar. Ja: $f$ ist diagonalisierbar.
])
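Zur Illustration des Ablaufs ein kleines, frei gewähltes Beispiel: Für $A = mat(2,1;0,2)$ ist $p_A (lambda) = (2-lambda)^2$, also $a(A, 2) = 2$, aber wegen $rg(A - 2 I_2) = 1$ ist $g(A, 2) = 2 - 1 = 1 < 2$, d.h. $A$ ist nicht diagonalisierbar. Für $A = mat(2,1;0,3)$ zerfällt $p_A (lambda) = (2-lambda)(3-lambda)$ dagegen in zwei verschiedene Linearfaktoren, nach Korollar 2.3 ist $A$ also diagonalisierbar.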
#bold[Beispiel 2.6:] Fischer/Springborn
Betrachtet wird: Masse aufgehängt an einer Feder. Zur Zeit $t = 0$ in Position $y(0) = alpha$ und ausgelenkt in senkrechter Richtung mit Geschwindigkeit $beta = dot(y) (0)$
$y(t) corres$ Position der Masse zum Zeitpunkt $t$
#figure(image("bilder2/2_6.jpeg", width: 50%))
Dieses System wird durch die gewöhnliche Differentialgleichung
$
dot.double(y) + 2 mu dot(y) + omega^2 y = 0, quad y(0) = alpha, dot(y) (0) = beta
$
beschrieben. Umschreiben in ein System erster Ordnung liefert
$
dot(y)_0 &= y_1 \
dot(y)_1 &= -omega^2 y_0 - 2 mu y_1
$
mit $y_0 = y, y_1 = dot(y), y_0 (0) = alpha, y_1 (0) = beta$.
$
dot(tilde(y)) := vec(dot(y)_0,dot(y)_1) = mat(0,1;-omega^2,-2 mu) vec(y_0, y_1)
$
$
p_A (lambda) = lambda^2 + 2 mu lambda + omega^2
$
mit den potentiellen Nullstellen
$
lambda = -mu plus.minus sqrt(mu^2 - omega^2)
$
Man unterscheidet drei Fälle:
#boxedlist[$0<=mu<omega$, d.h. $mu^2-omega^2<0$ $==>$ schwache Dämpfung][$mu = omega$, d.h. $mu^2 = omega^2$ $==>$ aperiodischer Fall $==>$ $a(A, -mu) = 2$, $dim("Eig"(A, -mu)) = 1$, $A$ nicht diagonalisierbar][$mu > omega$, d.h. $mu^2 >omega^2$, starke Dämpfung]
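Im ungedämpften Fall $mu = 0$ erhält man beispielsweise die rein imaginären Eigenwerte $lambda = plus.minus i omega$; die zugehörige Lösung $y(t) = alpha cos(omega t) + beta/omega sin(omega t)$ ist die bekannte ungedämpfte Schwingung.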
Eine solche Eigenwertanalyse kann man auch nutzen, um das Langzeitverhalten von Lösungen von gewöhnlichen DGL zu bestimmen.
#figure(image("bilder2/2_6_2.jpeg", width: 70%))
#theorem("2.7")[
Sei $V$ ein $K$-Vektorraum mit $dim(V) = n < oo$ und $f in L(V,V)$. Dann sind folgende Aussagen äquivalent:
#enum([
Das charakteristische Polynom $p_f (.)$ zerfällt über $K$ in Linearfaktoren.
],[
Es gibt eine Basis $B$ von $V$, so dass $A_f^(B,B)$ eine obere Dreiecksmatrix ist, d.h.
$
A_f^(B,B) = mat(*, ..., *; dots.v, dots.down, dots.v; 0, ..., *)
$
und $f$ ist damit #bold[triangulierbar].
])
]
#italic[Beweis:] Beweis von Satz 14.18 im Liesen/Mehrmann
#endproof
Nun ist das Ziel:
Bestimmung einer Basis $B$ von $V$, so dass $A_f^(B,B)$ eine obere Dreiecksmatrix ist, die möglichst nah an einer Diagonalmatrix ist und von der geometrischen Vielfachheiten der Eigenwerte abgelesen werden können.
D.h. $p_f (.)$ zerfällt in Linearfaktoren mit den Eigenwerten $lambda_1, ..., lambda_k$ (notwendig, Satz 2.7) und wir wollen eine Basis $B$ bestimmen, so dass $A_f^(B,B)$ Diagonalblockgestalt hat mit
$
A_f^(B,B) = mat(J_1 (lambda_1), ..., 0; dots.v, dots.down, dots.v; 0, ..., J_m (lambda_m))
$
wobei jeder Diagonalblock die Form
$
J_i (lambda_i) = mat(lambda_i, 1, ..., 0; 0, dots.down, dots.down, dots.v; dots.v, dots.down, dots.down, 1; 0, ..., 0, lambda_i) in K^(d_i, d_i) wide (*)
$
#definition("2.8", "Jordan-Block")[
Sei $V$ ein $K$-Vektorraum mit $dim(V) = n < oo$, $f in L(V,V)$ und $lambda_i$ ein Eigenwert von $f$. Eine Matrix der Form $(*)$ heißt #bold[Jordan-Block] der Größe $d_i$ zum Eigenwert $lambda_i$.
] <def>
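Beispielsweise ist
$
mat(J_2 (5), 0; 0, J_3 (5)) = mat(5,1,0,0,0;0,5,0,0,0;0,0,5,1,0;0,0,0,5,1;0,0,0,0,5) in RR^(5,5)
$
eine Matrix in der angestrebten Blockdiagonalgestalt mit zwei Jordan-Blöcken der Größen $d_1 = 2$ und $d_2 = 3$ zum Eigenwert $5$.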
Wegen der Bedeutung der Jordan-Normalform gibt es zahlreiche Herleitungen mit unterschiedlichen mathematischen Hilfsmitteln.
Hier: Beweis über die Dualitätstheorie basierend auf einer Arbeit von V. Pták (1956)
== Dualräume
#definition("2.9", "Linearform, Dualraum")[
Sei $V$ ein $K$-Vektorraum. Eine Abbildung $f in L(V, K)$ heißt #bold[Linearform]. Den $K$-Vektorraum $V^* := L(V,K)$ nennt man #bold[Dualraum].
] <def>
Gilt $dim(V) = n < oo$, so folgt aus Satz 5.18 LinA I, dass $dim(V^*) = n$ gilt. Ist $B = {v_1, ..., v_n}$ eine Basis von $V$ und $C = {1}$ eine Basis des $K$-Vektorraums $K$, dann gilt für $f in V^*$, d.h. $f: V -> K$,
$
f(v_i) = mu_i in K
$
für $i = 1, ..., n$ und damit
$
A_f^(B,C) = (mu_1, ..., mu_n) in K^(1,n)
$
#bold[Beispiel 2.10:] Sei $V$ der $RR$-Vektorraum der auf dem Intervall $[0, 1]$ stetigen, reellwertigen Funktionen und $a in [0,1]$. Dann sind
$
g_1: &V -> RR, quad g_1 (f) := integral_0^1 f(x) d x \
g_2: &V -> RR, quad g_2 (f) := f(a)
$
Linearformen auf $V$.
Basis des Dualraums?
#theorem("2.11")[
Sei $V$ ein $K$-Vektorraum mit $dim(V) = n < oo$ und $B = {v_1 .., v_n}$ eine Basis von $V$. Dann gibt es genau eine Basis $B^* = {v_1^*, ..., v_n^*}$ von $V^* = L(V, K)$ für die
$
v_i^* (v_j) = delta_(i j) quad i, j = 1, ..., n
$
gilt. Diese Basis heißt die zu $B$ duale Basis.
]
#italic[Beweis:] Lemma 4.10, LinA I: Für jedes $i = 1, ..., n$ gibt es eine lineare Abbildung $v_i^*$, für die $v_i^* (v_j) = delta_(i j)$ für $j = 1, ..., n$ gilt. Noch zu zeigen: die $v_i^*$ bilden eine Basis von $V^*$. Wir wissen schon: $dim (V^*) = n$. Also: Es reicht zu zeigen: ${v_i^*}_(i = 1, ..., n)$ linear unabhängig. Seien $mu_i in K$ so, dass
$
sum_(i = 1)^n mu_i v_i^* = 0 in V^* = L(V,K)
$
Dann gilt:
$
0_K = 0_(V^*) (v_j) = sum_(i = 1)^n mu_i v_i^* (v_j) = mu_j quad j = 1, ..., n
$
#endproof
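Ein einfaches Beispiel: Für $V = K^n$ mit der Standardbasis $E = {e_1, ..., e_n}$ besteht die duale Basis $E^* = {e_1^*, ..., e_n^*}$ gerade aus den Koordinatenabbildungen,
$
e_i^* (x) = e_i^* (sum_(j = 1)^n x_j e_j) = x_i wide "für" space x = vec(x_1, dots.v, x_n) in K^n
$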
#definition("2.12", "duale Abbildung")[
Seien $V$ und $W$ zwei $K$-Vektorräume mit den zugehörigen Dualräumen $V^*$ und $W^*$. Für $f in L(V,W)$ heißt
$
f^*: W^* -> V^*, quad f^* (h) = h circ f
$
die zu $f$ #bold[duale Abbildung].
] <def>
#figure(image("bilder2/2_12.jpeg", width: 40%))
Seien $U subset.eq V$ und $Z subset.eq V^*$ zwei Unterräume. Dann heißt die Menge
$
U^0 := {h in V^* | h(u) = 0 "für alle" u in U}
$
#bold[Annihilator] von $U$ und die Menge
$
Z^0 := {v in V | z(v) = 0 "für alle" z in Z}
$
#bold[Annihilator] von $Z$.
Man kann sich überlegen:
#boxedlist[
Die Mengen $U^0 subset.eq V^*$ und $Z^0 subset.eq V$ sind Untervektorräume von $V^*$ bzw $V$
][
Es gilt für $f in L(V,V)$
$
(f^k)^* = (f^*)^k
$
]
Des Weiteren besitzt die duale Abbildung folgende Eigenschaften:
#lemma("2.13")[
Sind $V, W$ und $X$ drei $K$-Vektorräume. Dann gilt
#enum[
Ist $f in L(V,W)$, dann ist die duale Abbildung $f^*$ linear, d.h. $f^* in L(W^*, V^*)$
][
Ist $f in L(V, W)$ und $g in L(W, X)$, dann ist $(g circ f)^* in L(X^*, V^*)$ und es gilt $(g circ f)^* = f^* circ g^*$
][
Ist $f in L(V,W)$ bijektiv, dann ist $f^* in L(W^*, V^*)$ bijektiv und es gilt $(f^*)^(-1) = (f^(-1))^*$
]
]
#italic[Beweis:] ÜB
#endproof
#lemma("2.14")[
Sei $V$ ein endlichdimensionaler Vektorraum, $f in L(V,V)$, $f^* in L(V^*, V^*)$ und $U subset.eq V$ sowie $W subset.eq V^*$ zwei Untervektorräume. Dann gilt:
#enum[
$dim(V) = dim(W) + dim(W^0)$
][
Ist $f$ nilpotent vom Grad $m$, dann ist die duale Abbildung $f^*$ ebenfalls nilpotent vom Grad $m$.
][
Ist $W subset.eq V^*$ ein $f^*$-invarianter Vektorraum, dann ist $W^0$ ein $f$-invarianter Unterraum.
]
]
#italic[Beweis:] ÜA
#endproof
#definition("2.15", "nilpotent vom Grad m")[
Sei ${0} != V$ ein $K$-Vektorraum. Man nennt $f in L(V,V)$ #bold[nilpotent], wenn ein $m in NN$ existiert, so dass $f^m = 0 in L(V,V)$ gilt. Gilt für dieses $m$, dass $f^(m-1) != 0 in L(V,V)$, so heißt $f$ #bold[nilpotent vom Grad m] und $m$ ist der #bold[Nilpotenzindex] von $f$.
] <def>
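Ein Standardbeispiel: Für $V = K^2$ und $f in L(V,V)$ mit der Darstellungsmatrix $N = mat(0,1;0,0)$ (bezüglich der Standardbasis) gilt $N != 0$ und $N^2 = 0$, also ist $f$ nilpotent vom Grad $2$.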
#definition("2.16", [$f$-invarianter Unterraum])[
Sei $V$ ein $K$-Vektorraum mit $dim(V) = n < oo$, $U subset.eq V$ ein Unterraum und $f in L(V,V)$. Gilt $f(U) subset.eq U$, d.h. ist $f(u) in U$ für alle $u in U$, so nennt man $U$ einen $f$-invarianten Unterraum von $V$.
] <def>
#definition("2.17", "Bilinearform")[
Seien $V$ und $W$ zwei $K$-Vektorräume. Eine Abbildung $a: V times W -> K$ heißt Bilinearform, wenn
#enum[
$a(dot, w): V -> K$ für alle $w in W$ eine lineare Abbildung ist und
][
$a(v, dot): W -> K$ für alle $v in V$ eine lineare Abbildung ist
]
Eine Bilinearform $a(dot , dot )$ heißt #bold[nicht ausgeartet] in der ersten Variable, wenn aus
$
a(v, w) = 0 quad "für alle" w in W
$
folgt, dass $v = 0$ ist. Eine Bilinearform heißt nicht ausgeartet in der zweiten Variable, wenn aus
$
a(v, w) = 0 quad "für alle" v in V
$
folgt, dass $w = 0$ ist. Falls $a(dot , dot )$ in beiden Variablen nicht ausgeartet ist, so nennt man $a(dot, dot)$ eine #bold[nicht ausgeartete Bilinearform] und die Räume $V,W$ ein #bold[duales Paar von Räumen] oder #bold[duales Raumpaar] bezüglich $a(dot , dot)$. Ist $V = W$, so heißt $a(dot,dot)$ eine #bold[Bilinearform auf $V$]. Eine Bilinearform $a(., .)$ auf $V$ heißt #bold[symmetrisch], wenn $a(v, w) = a(w, v)$ für alle $v, w in V$, ansonsten heißt $a(dot,dot)$ unsymmetrisch.
] <def>
#bold[Bemerkung:] Damit $V, W$ ein duales Raumpaar für eine nicht ausgeartete Bilinearform bilden können, muss $dim(V) = dim(W)$ gelten.
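Ein Beispiel für eine nicht ausgeartete Bilinearform: Auf $V = W = K^n$ ist
$
a(v, w) := v^top w = sum_(i = 1)^n v_i w_i
$
bilinear, und aus $a(v, w) = 0$ für alle $w in K^n$ folgt durch Einsetzen von $w = e_i$ bereits $v = 0$; analog in der zweiten Variablen. Damit bilden $K^n, K^n$ ein duales Raumpaar bezüglich $a(dot, dot)$; zudem ist $a(dot, dot)$ symmetrisch.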
#lemma("2.18")[
Sei $V$ ein endlichdimensionaler $K$-Vektorraum, $f in L(V,V)$, $f^* in L(V^*, V^*)$ die duale Abbildung zu $f$, $U subset.eq V$ und $W subset.eq V^*$ zwei Untervektorräume. Ist die Bilinearform
$
a: U times W -> K, (v, h) arrow.bar h(v)
$
nicht ausgeartet, d.h. sind $U$ und $W$ ein duales Raumpaar bezüglich dieser Bilinearform, so ist
$
V = U oplus W^0
$
]
#italic[Beweis:] Sei $u in U sect W^0$. Dann gilt $h(u) = 0$ für alle $h in W$. Weil $U, W$ ein duales Raumpaar bzgl. $a(dot ,dot )$ bilden, folgt $u = 0$. Außerdem muss $dim(U) = dim(W)$ gelten. Damit folgt aus Lemma 2.14, 1., dass
$
dim(V) &= dim(W) + dim(W^0) \
&= dim(U) + dim(W^0)
$
$==>$ $V = U oplus W^0$
#endproof
== Zyklische $f$-invariante Unterräume
Jetzt: Genauere Analyse der Struktur von Eigenräumen
#bold[Beispiel:] Ist $V$ ein $K$-Vektorraum, $f in L(V,V)$ und $lambda in K$ ein Eigenwert von $f$, so ist $"Eig"(f, lambda)$ ein $f$-invarianter Unterraum, da: Für $v in "Eig"(f, lambda)$ gilt $f(v) = lambda v in "Eig"(f, lambda)$.
Sei $V$ ein $K$-Vektorraum mit $dim(V) = n < oo$ und $f in L(V,V)$. Ist $v in V without {0}$, so existiert ein eindeutig definiertes $m = m(f, v) in NN$, sodass die Vektoren
$
v, f(v), f(f(v)), ..., f^(m-1)(v)
$
linear unabhängig, die Vektoren
$
v, f(v), ..., f^(m) (v)
$
jedoch linear abhängig sind. Wegen $dim(V) = n$, muss $m<=n$ gelten!
#definition("2.19", [Grad von $v$])[
Die eindeutig definierte Zahl $m(f, v) in NN$ heißt Grad von $v$ bezüglich $f$.
$
0 != v, f(v), f^2 (v), ..., f^(m-1) (v) "lin. unabh." \
v, f(v), ..., f^m (v) "lin. abh."
$
$==>$ Grad $m$ von $v$, $m in NN$.
] <def>
#bold[Bemerkungen:]
#list[
Der Vektor $v = 0 in V$ ist lin. abhängig.
Deswegen muss man $v != 0$ fordern oder $m in NN union {0}$.
][
Der Grad von $0 != v in V$ ist gleich 1, genau dann wenn $v, f(v)$ linear abhängig sind. Das ist genau dann der Fall wenn $v$ ein Eigenvektor von $f$ ist. Damit folgt auch: Ist $v in V$ kein Eigenvektor von $f$ und $v != 0$, so ist der Grad von $v$ also $m(v, f) >= 2$.
]
#definition("2.20", "Krylov-Raum")[
Sei $V$ ein $K$-Vektorraum mit $dim(V) = n < oo$, $f in L(V,V)$, $v in V$ und $j in NN$. Der Unterraum
$
cal(K)_j (f, v) := "Span"{v, f(v), f^2 (v), ..., f^(j-1) (v) } subset.eq V
$
heißt #bold[j-ter Krylov-Raum] von $f$ und $v$.
] <def>
<NAME> (russischer Schiffbauingenieur und Mathematiker, 1863-1945). Krylov-Räume spielen auch eine wichtige Rolle für das CG-Verfahren (Conjugate Gradients).
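Ein kleines Beispiel: Für $A = mat(0,1,0;0,0,1;0,0,0) in K^(3,3)$ und $v = e_3$ gilt $A v = e_2$, $A^2 v = e_1$ und $A^3 v = 0$, also
$
cal(K)_1 (A, v) = "Span"{e_3} subset cal(K)_2 (A, v) = "Span"{e_3, e_2} subset cal(K)_3 (A, v) = K^3 = cal(K)_(3+j) (A, v) quad forall j in NN
$
Der Vektor $v$ hat hier den Grad $3$ bezüglich $A$.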
#lemma("2.21")[
Sei $V$ ein $K$-Vektorraum mit $dim(V) = n < oo$ und $f in L(V,V)$. Dann gilt:
#enum[
Hat $0 != v in V$ den Grad $m$ bzgl. $f$, so ist $cal(K)_m (f,v)$ ein $f$-invarianter Unterraum und es gilt:
$
"Span" {v} = cal(K)_1 (f, v) subset cal(K)_2 (f,v) subset ... subset cal(K)_m (f, v) = cal(K)_(m+j) (f, v)
$
für alle $j in NN$.
][
Hat $0 != v in V$ den Grad $m$ bzgl. $f$ und ist $U subset.eq V$ ein $f$-invarianter Unterraum, so dass $v in U$, so ist
$
cal(K)_m (f,v) subset.eq U
$
D.h. betrachtet man alle $f$-invarianten Unterräume von $V$, die $v$ enthalten, so ist $cal(K)_m (f,v)$ derjenige mit der kleinsten Dimension.
][
Gilt für $v in V$, dass $f^(m-1) (v) != 0$ und $f^m (v) = 0$ für ein $m in NN$, dann ist
$
dim(cal(K)_j (f, v)) = j quad "für" space j = 1, ..., m
$
]
]
#italic[Beweis:]
#enum[
ÜA
][
Sei $U subset.eq V$ ein $f$-invarianter Unterraum mit $v in U$. Dann gilt $f^j (v) in U$ für $j = 1, ..., m-1$. Da $v$ den Grad $m$ hat, sind $v, f(v), ..., f^(m-1) (v)$ linear unabhängig.
$==>$ $cal(K)_m (f,v) subset.eq U quad "und" quad dim(cal(K)_m (f,v)) = m <= dim (U)$
][
Seien $mu_0, ...., mu_(m-1) in K$ so gewählt, dass
$
0 = mu_0 v + mu_1 f(v) + ... + mu_(m -1) f^(m-1) (v)
$
gilt. Anwendung $f^(m-1)$
$
0 = mu_0 f^(m-1) (v) + mu_1 f^m (v) + ... + mu_(m-1) f^(2m-2) (v) = mu_0 underbrace(f^(m-1) (v), != 0) \
==> mu_0 = 0
$
Für $m > 1$ kann man dieses Argument induktiv für $f^(m-j)$, $j = 2, ..., m$, anwenden und erhält damit
$
mu_1 = ... = mu_(m-1) = 0
$
$==>$ Beh.
]
#endproof
#bold[Beobachtungen:] Hat $v$ den Grad $m$ bzgl. $f$ gilt
#list[
$cal(K)_j (f, v)$ ist für $j < m$ kein $f$-invarianter Unterraum, da $0 != f(f^(j-1)(v)) = f^j (v) in.not cal(K)_j (f,v)$
][
wie oben gezeigt, bilden die Vektoren $v, f(v), ..., f^(m-1) (v)$ eine Basis von $cal(K)_m (f,v)$. Wendet man $f$ auf ein Element dieser Basis an, d.h. $f^(k+1) (v), k = 0, ..., m-1$, so erhält man für $k = m-1$ $f^(m)(v)$ als Linearkombination von $v, f(v), ..., f^(m-1)(v) ==> f^(m) (v) in cal(K)_m (f, v)$. Deswegen wird $cal(K)_m (f,v )$ auch #bold[zyklische invarianter Unterraum] zu $v$ von $f$ genannt.
]
#lemma("2.22")[
Sei ${0} != V$ ein $K$-Vektorraum. Ist $f in L(V,V)$ nilpotent vom Grad $m$, so gilt $m <= dim(V)$.
]
#italic[Beweis:] Nach Definition existiert ein $v in V$ mit $f^(m-1) (v) != 0$ und $f^(m)(v) = 0$. Lemma 2.21 sichert, dass $v, f(v), ..., f^(m-1)(v)$ linear unabhängig $==>$ $m <= dim(V)$.
#endproof
#bold[Beobachtung:] Sei $V$ ein $K$-Vektorraum und $f in L(V,V)$. Ist $U subset.eq V$ ein $f$-invarianter Unterraum, so gilt für die Einschränkung von $f$ auf $U$, d.h.
$
f|_U : U -> U, quad u -> f(u),
$
dass $f|_U in L(U, U)$.
#theorem("2.23")[
#bold[Fittingzerlegung]
Sei $V$ ein endlichdimensionaler $K$-Vektorraum und $f in L(V,V)$. Dann existieren $f$-invariante Unterräume $U subset.eq V$ und $W subset.eq V$, so dass gilt:
#enum[
$V = U oplus W$
][
$f|_U in L(U, U)$ ist bijektiv
][
$f|_W in L(W,W)$ ist nilpotent
]
]
#italic[Beweis:] Sei $v in ker(f)$. Dann gilt wegen der Linearität von $f$, dass $f^2 (v) = f(f(v)) =^(f(v) = 0) 0$, also $ker(f) subset.eq ker(f^2)$
Induktiv zeigt man:
$
{0} subset.eq ker(f) subset.eq ker(f^2) subset.eq ker(f^3) subset.eq ...
$
Da $dim(V) < oo$, muss es eine kleinste Zahl $m in NN union {0}$ geben, so dass $ker(f^m) = ker(f^(m+j))$ für alle $j in NN$. Damit setzen wir
$
U = im(f^m) quad "und" quad W = ker(f^m)
$
Zeige: $U$ und $W$ sind $f$-invariant. Sei $u in U$. Dann existiert $w in V$ mit $f^m (w) = u$ $==>$ $f(u) = f(f^m (w)) = f^m (f(w)) in U$.
Sei $w in W$. Dann gilt
$
f^(m) (f(w)) = f (f^m (w)) = 0 ==> f(w) in W
$
Also existieren $f$-invariante Unterräume $U subset.eq V$ und $W subset.eq V$.
#enum[
Es gilt $U + W subset.eq V$. Die Dimensionsformel für lineare Abbildungen (Satz 4.16, LinA I) liefert für $f^m$, dass
$
dim(V) = dim(U) + dim(W)
$
Ist $v in U sect W$ $==>$ $exists w in V: v = f^m (w) (v in U)$
$
v in W ==> 0 = f^m (v) = f^m (f^m (w)) = f^(2m) (w)
$
Es gilt $ker(f^m) = ker(f^(2m)) ==> w in ker(f^m) ==> v = f^m (w) = 0$
$==> V = U oplus W$
][
Sei $v in ker(f|_U) subset.eq U$. Dann existiert ein $w in V$, so dass $f^m (w) = v$ gilt. $==>$ $0 = f(v) = f(f^m (w)) = f^(m+1) (w)$. Mit $ker(f^m) = ker(f^(m+1)) ==> w in ker(f^m) ==> v = f^m (w) = 0$ $==>$ $f|_U$ injektiv.
Aus der Dimensionsformel folgt, dass $f|_U$ surjektiv ist.
][
Sei $v in W$. Dann gilt
$
0 = f^m (v) = (f|_W)^m (v)
$
$==>$ $(f|_W)^m = 0 in L(W,W)$, d.h. $(f|_W)^m$ ist die Nullabbildung $==>$ $f|_W$ nilpotent.
]
#endproof
#theorem("2.24")[
Sei $V$ ein endlichdimensionaler $K$-Vektorraum, $f in L(V,V)$ nilpotent vom Grad $m$, $v in V$ ein beliebiger Vektor mit $f^(m-1) (v) != 0$ und $h in V^*$ mit $h(f^(m-1)(v)) != 0$. Dann sind $v$ und $h$ vom Grad $m$ bzgl. $f$ und $f^*$. Die beiden Räume $cal(K)_m (f,v)$ bzw. $cal(K)_m (f^*, h)$ sind zyklisch $f$- bzw. $f^*$-invariante Unterräume von $V$ bzw. $V^*$. Sie bilden ein duales Raumpaar bzgl. der Bilinearform
$
a: cal(K)_m (f, v) times cal(K)_m (f^*, h) -> K, quad (macron(v),macron(h)) arrow.bar macron(h)(macron(v))
$
und es gilt
$
V = cal(K)_m (f, v) oplus (cal(K)_m (f^*, h))^0
$
Hierbei ist $cal(K)_m (f^*, h)^0$ ein $f$-invarianter Unterraum von $V$.
]
#italic[Beweis:] Für $v in V$ gilt $f^(m-1) (v) != 0$. Lemma 2.21 $==>$ $cal(K)_m (f, v)$ $m$-dimensionaler zyklischer $f$-invarianter Unterraum von $V$. Für $V^*$ gilt
$
0 != h(f^(m-1)(v)) = (f^*)^(m-1) (h) (v)
$
Dann ist $0 != (f^*)^(m-1) (h) in V^*$. $f$ nilpotent von Grad $m$ $==>$ (Lemma 2.14) $f^*$ nilpotent von Grad $m$ $==>$
$
(f^*)^m (h) = 0 in V^*
$
$==>$ (Lemma 2.21) $cal(K)_m (f^*, h)$ ist $m$-dimensionaler zyklischer $f^*$-invarianter Unterraum von $V^*$.
Nun zu zeigen: $cal(K)_m (f,v), cal(K)_m (f^*, h)$ sind ein duales Raumpaar. Sei
$
macron(v) = sum_(i = 0)^(m-1) mu_i f^i (v) space in cal(K)_m (f,v)
$
so gewählt, dass
$
macron(h)(macron(v)) = a(macron(v), macron(h)) = 0 quad forall macron(h) in cal(K)_m (f^*, h)
$
Zeige induktiv, dass $mu_k = 0, k = 0 , ..., m-1$. Wegen $((f^*)^(m-1)(h)) in cal(K)_m (f^*, h)$ folgt
$
0 = ((f^*)^(m-1)(h))(macron(v)) = h(f^(m-1) (macron(v))) = sum_(i = 0)^(m-1) mu_i h(f^(m-1+i)(v)) = mu_0 underbrace(h(f^(m-1)(v)), != 0) \
==> mu_0 = 0
$
Sei nun $mu_0 = ... = mu_(k - 1) = 0$ für ein $k in {1, ..., m-1}$. Wegen $(f^*)^(m-1-k)(h) in cal(K)_m (f^*, h)$ folgt aus der Darstellung von $macron(v)$, dass
$
0 = ((f^*)^(m-1-k)(h))(macron(v)) = h(f^(m-1-k) (macron(v))) = sum_(i = 0)^(m-1) mu_i h(f^(m-1-k+i) (v)) = mu_k underbrace(h(f^(m-1)(v)), != 0) \
==> mu_k = 0, "also insgesamt" macron(v) = 0
$
$==> a(.,.)$ ist nicht ausgeartet in der ersten Komponente. Analog zeigt man, dass $a(., .)$ auch in der zweiten Komponente nicht ausgeartet ist $==>$ $cal(K)_m (f, v)$ und $cal(K)_m (f^*, h)$ sind ein duales Raumpaar.
Mit Lemma 2.18: $V = cal(K)_m (f, v) oplus (cal(K)_m (f^*, h))^0$
Mit Lemma 2.14, 3: $(cal(K)_m (f^*, h))^0$ ist $f$-invarianter UR von $V$.
#endproof
(zyklisch $f$-invarianter UR: $v, f(v), f^2 (v), ...$)
== Die Jordan-Normalform
#theorem("2.25")[
Sei $V$ ein endlichdimensionaler $K$-Vektorraum und $f in L(V,V)$. Ist $lambda in K$ ein Eigenwert von $f$, dann gibt es $f$-invariante Unterräume $U subset V$ und ${0} != W subset.eq V$, so dass
#enum[
$V = U oplus W$
][
die Abbildung $f|_U - lambda "Id"_U$ ist bijektiv und
][
die Abbildung $f|_W - lambda "Id"_W$ ist nilpotent
]
Des Weiteren ist $lambda$ kein Eigenwert von $f|_U$.
]
#italic[Beweis:] Wir definieren
$
g := f - lambda "Id"_V in L(V,V)
$
Satz 2.23: $exists$ $g$-invariante UR $U subset.eq V$ und $W subset.eq V$:
$
V = U oplus W, space g|_U "bijektiv", space g|_W "nilpotent"
$
Annahme: ${0} = W ==> V = U$
$
==> g|_U = g|_V = g space "bijektiv"
$
$lambda$ ist Eigenwert von $f ==> exists 0 != v: f(v) = lambda v$
$
&==> g(v) = f(v) - lambda v = lambda v - lambda v = 0 \
&==> ker(g) supset.eq {0,v} != {0} space arrow.zigzag space g "bijektiv" \
&==> U subset V
$
Annahme: $lambda$ ist Eigenwert von $f|_U$
$
&==> exists 0!= v in U: f(v) = lambda v \
&==> g|_U (v) = f(v) - lambda v = lambda v - lambda v = 0 space arrow.zigzag space g|_U "bijektiv"
$
#endproof
#bold[Beispiel 2.26:] Wir betrachten $V = RR^5$, die Standardbasis $E$ und $f in L(V,V)$ gegeben durch
$
A = mat(-3,-1,4,-3,-1;1,1,-1,1,0;-1,0,2,0,0;4,1,-4,5,1;-2,0,2,-2,1)
$
Dann gilt
$
p_f (lambda) = p_A (lambda) = (lambda-1)^4 (lambda-2)^1 \
==> "EW:" 1,2 quad a(f, 1) = 4 quad a(f,2) = 1
$
$==>$ $p_A (.)$ zerfällt in Linearfaktoren
$lambda_1 = 1:$ Es gilt $ker(g_1^3) = ker(g_1^4)$ für $g_1 := f - lambda_1 "Id"_V$
$==> m_1 = 3$
$
U_1 = "Span"{vec(0,1,2,3,-2)} wide W_1 = "Span"{vec(1,-1,1,0,1),vec(1,1,1,0,-1),vec(0,0,1,1,0),vec(0,0,0,0,1)}
$
$lambda_2 = 2:$ Für $g_2 = f-lambda_2 "Id"_V$ gilt $ker(g_2) = ker(g_2^2)$
$==> m_2 = 1$
$
U_2 = "Span"{vec(-5,1,-1,4,-2), vec(-1,-1,0,1,0), vec(4,-1,0,-4,2), vec(-3,1,0,3,-2)}, W_2 = "Span"{vec(0,1,2,3,-2)}
$
Beobachtung: $dim(W_1) = a(f, lambda_1)$, $dim(W_2) = a(f, lambda_2)$
#theorem("2.27")[
Sei $V$ ein endlichdimensionaler $K$-Vektorraum und $f in L(V,V)$. Ist $lambda in K$ ein Eigenwert von $f$, dann existieren für den Unterraum $W$ aus Satz 2.25 Vektoren $w_1, ..., w_k in W$ und $d_1, ..., d_k in NN$, so dass
$
W = cal(K)_(d_1) (f, w_1) oplus cal(K)_(d_2) (f, w_2) oplus ... oplus cal(K)_(d_k) (f, w_k)
$
Des Weiteren gibt es eine Basis $B$ von $W$, so dass
$
A_(f|_W)^(B,B) = mat(J_(d_1) (lambda),,0;,dots.down,,;0,,J_(d_k) (lambda))
$
]
#italic[Beweis:] Sei wie in Satz 2.25 $g := f - lambda "Id"_V$ und $g_1 := g|_W$ nilpotent vom Grad $d_1$. Dann gilt $1 <= d_1 <= dim(W)$.
Sei $w_1 in W$ ein Vektor mit $g_1^(d_1 -1) (w_1) != 0$. Wegen $g^(d_1) (w_1) = 0$
$==>$ $g_1^(d_1-1) (w_1)$ ist ein Eigenvektor von $g_1$ zum Eigenwert $0$.
Lemma 2.21, 3, liefert, dass die $d_1$ Vektoren
$
{w_1, g_1 (w_1), ..., g_1^(d_1-1) (w_1)}
$
linear unabhängig sind. Außerdem ist $W_1 := cal(K)_(d_1) (g_1, w_1)$ ein $d_1$-dimensionaler zyklischer $g_1$-invarianter UR von $W$. Also ist
$
B_1 := {g_1^(d_1-1)(w_1), g_1^(d_1-2) (w_1), ..., g_1 (w_1), w_1}
$
eine Basis von $cal(K)_(d_1) (g_1, w_1) = W_1$ und
$
A_(g_1 |_(W_1))^(B_1, B_1) = mat(0,1,,0;,dots.down,dots.down,;,,dots.down,1;0,,,0) = J_(d_1) (0) in K^(d_1, d_1)
$
Per Definition gilt $A_(g_1|_(W_1))^(B_1, B_1) = A_(g|_(W_1))^(B_1, B_1)$. Ist $d_1 = dim(W)$, so ist $W = W_1$ und man ist an dieser Stelle bereits fertig.
Sei nun $d_1 < dim(W)$. Satz 2.24 sichert, dass es für $g_1 in L(W,W)$ einen $g_1$-invarianten Unterraum $tilde(W) != {0}$ mit $W = W_1 oplus tilde(W)$ gibt.
Die Abbildung $g_2 := g_1 |_tilde(W)$ ist nilpotent vom Grad $d_2$ mit $1 <= d_2 <= d_1$.
Wiederholung der Konstruktion:
$exists w_2 in tilde(W): g_2^(d_2 -1)(w_2) != 0, ..., W_2 := cal(K)_(d_2) (g_2, w_2)$ ... UR von $tilde(W) subset.eq W$,
$
B_2 := {g_2^(d_2-1) (w_2) , g_2^(d_2-2) (w_2), ..., g_2 (w_2), w_2}
$
$
A_(g|_(W_2))^(B_2, B_2) = A_(g_2 |_(W_2))^(B_2, B_2) = mat(0,1,,0;,dots.down,dots.down,;,,dots.down,1;0,,,0)
$
Nach $k <= dim(W)$ Schritten muss diese Konstruktion abbrechen und es gilt
$
W &= cal(K)_(d_1) (g_1, w_1) oplus cal(K)_(d_2) (g_2, w_2) oplus ... oplus cal(K)_(d_k) (g_k, w_k) \
&= cal(K)_(d_1) (g, w_1) oplus cal(K)_(d_2) (g,w_2) oplus ... oplus cal(K)_(d_k) (g, w_k)
$
Vereinigt man die Basen $B_1, ..., B_k$ zu einer Basis $B$ von $W$ (direkte Summe!), so erhält man
$
A_(g|_W)^(B,B) = mat(A_(g|_(W_1))^(B_1, B_1),,0;,dots.down,;0,,A_(g|_(W_k))^(B_k, B_k)) = mat(J_(d_1) (0 ),,0;,dots.down,;0,,J_(d_k) (0))
$
Jetzt: Übertragung auf $f = g + lambda "Id"_V$. Man kann sich leicht überlegen, dass jeder $g$-invariante Unterraum von $V$ auch $f$-invariant ist und damit gilt:
$
cal(K)_(d_i) (f, w_i) = cal(K)_(d_i) (g, w_i) space "für" i = 1, ..., k
$
$
==>^"ÜA" W = cal(K)_(d_1) (f, w_1) oplus ... oplus cal(K)_(d_k) (f, w_k)
$
Für $j in {1, ... k}$ und $0 <= l <= d_j -1$ ist
$
f(g^l (w_j)) &= g(g^l (w_j)) + lambda g^l (w_j) \
&= lambda g^l (w_j) + underbrace(g^(l+1) (w_j), = 0\, l = d_j -1)
$
$
==> A_(f|_W)^(B,B) = mat(A_(f|_W_1)^(B_1,B_1),,0;,dots.down,;0,,A_(f|_W_k)^(B_k, B_k)) = mat(J_d_1 (lambda),,0;,dots.down,;0,,J_d_k (lambda))
$
#endproof
#bold[Beispiel 2.28:] Fortsetzung von Bsp 2.26
#let gorone = $#text(fill: orange)[1]$ //orange one =^ gorone
$
A = mat(-3,-1,4,-3,-1;1,1,-1,1,0;-1,0,2,0,0;4,1,-4,5,1;-2,0,2,-2,1) wide "EW:" quad #stack(spacing: 1em, [$lambda_1 = 1, a(f, lambda_1) = 4 = dim(W_1)$],[$lambda_2 = 2, a(f, lambda_2) = 1 = dim(W_2)$])
$
$lambda_gorone = 1$: $g^gorone_(|_W_1)$ nilpotent vom Grad $d_1^gorone = 3$ und $1<d_1^gorone<dim(W_1)$
Erinnerung: $g_1^gorone = f - lambda_gorone "Id"_V$. Für $w_1^gorone = mat(0,0,0,0,1)^T$ ist $(g_1^gorone)^2 (w_1) = mat(1,0,1,0,0)^T != 0$ und $(g_1^gorone)^3 (w_1) = 0 in V = RR^5$.
Mit Lemma 2.21:
$
{w_1, (g_1^gorone)^1 (w_1^gorone), (g_1^gorone)^2 (w_1^gorone)} = {vec(0,0,0,0,1), vec(-1,0,0,1,0), vec(1,0,1,0,0)}
$
$
==> "Span"{w_1, g_1^gorone (w_1^gorone), (g_1^gorone)^2 (w_1)} = cal(K)_3 (g_1^gorone, w_1^gorone)
$
$d_1^gorone < dim(W_1) ==>$ es existiert zu $W_(1 1) := cal(K)_3 (g_1^gorone, w_1^gorone)$ ein $tilde(W)_1 != {0}$ mit $W_1 = W_(1 1) oplus tilde(W)_1$.
Zum Beispiel: $w_2^gorone = mat(1,-1,1,0,1)^T$ $==>$
$
w_2^gorone, w_1^gorone, g_1^gorone (w_1^gorone), (g_1^gorone)^2 (w_1^gorone) quad "lin. unab." \
tilde(W)_1 := "Span"{w_2^gorone} sect cal(K)_3 (g_1^gorone, w_1^gorone) = {0}
$
Es gilt $g_2^gorone := g_1^gorone|_tilde(W)_1$ nilpotent vom Grad $1$
$
==> d_2^gorone = 1 wide W_1 = cal(K)_3(g_1^gorone, w_1^gorone) oplus cal(K)_1 (g_2^gorone, w_2^gorone)
$
#let gorto = $#text(fill: orange)[2]$
Weiterhin kann man nachrechnen
$
cal(K)_3 (f, w_1^gorone) = "Span"{w_1, g_1^gorone (w_1^gorone), (g_1^gorone)^2 (w_1^gorone)} = cal(K)_3 (g_1^gorone, w_1^gorone) \
cal(K)_1 (f, w_2) = "Span"{w_2^gorone} = cal(K)_1 (g_2^gorone, w_2^gorone)
$
$lambda_gorto = 2$:
$
&g_1^gorto |_W_2 "nilpotent vom Grad" d_1^gorto = 1 \
&d_1^gorto = dim(W_2) \
&w_1^gorto = mat(0,1,2,3,-2)^T != 0 \
&(g_1^gorto)^1 (w_1^gorto) = 0 in V ==> W_2 = cal(K)_1 (f, w_1^gorto)
$
$
A_(f|_W_1)^(B^1, B^1) = mat(1,1,0,0;0,1,1,0;0,0,1,0;0,0,0,1), quad A_(f|_W_2)^(B^2, B^2) = (2) \
==>^"Ziel:" A_f^(B,B) = mat(1,1,0,,0;0,1,1,,;0,0,1,,;,,,1,;0,,,,2)
$
#theorem("2.29")[
Sei $V$ ein $K$-Vektorraum mit $dim(V) < oo$ und $f in L(V,V)$. Ist $lambda in K$ ein Eigenwert von $f$, dann gilt für die $d_j, 1 <= j <= k$ aus Satz 2.27, dass
$
&a(f, lambda) = dim(W) = d_1 + ... + d_k \
&g(f, lambda) = k
$
]
#italic[Beweis:] Für den Unterraum U aus Satz 2.23/2.25 ist die Abbildung $(f - lambda "Id")|_U$ bijektiv $==>$ $lambda$ ist kein Eigenwert von $f|_U$. Daraus erhält man
$
a(f, lambda) = dim W = d_1 + ... + d_k
$
Zur Bestimmung von $g(f, lambda)$ sei $v in W$ ein beliebiger Vektor. Dann ist
$
v = sum_(j = 1)^k sum_(l = 0)^(d_j -1) mu_(j l) g^l (w_j)
$
und es gilt
$
f(v) = sum_(j = 1)^k sum_(l = 0)^(d_j -1) mu_(j l) f(g^l (w_j)) = lambda sum_(j = 1)^k sum_(l = 0)^(d_j -1 ) mu_(j l) g^l (w_j) + sum_(j = 1)^k sum_(l = 0)^(d_j -1) mu_(j l) g^(l+1) (w_j ) \
= lambda v + sum_(j = 1)^k sum_(l = 0)^(d_j- 2) mu_(j l) overbrace(g^(l+1) (w_j), "lin. unab.")
$
$v in "Eig"(f, lambda) <==> mu_(j l) = 0, 1 <= j <= k, 0 <=l <= d_j -2$
$
<==> v = sum_(j = 1)^k mu_j g^(d_j -1) (w_j)
$
Für $v != 0$ muss mindestens ein Koeffizient $mu_j != 0$, $j in {1, ..., k}$, sein. Daraus folgt
$
"Eig"(f, lambda) = "Span" underbrace({g^(d_1 -1) (w_1)\, ...\, g^(d_k -1) (w_k)}, "lin. unab. wegen direkter Summe")
$
#bold[Beispiel 2.30:] Fortsetzung von Bsp. 2.28. Es gilt
$
"Eig"(f, 1) = "Span"{vec(1,-1,1,0,1), vec(1,1,1,0,-1)} ==> g(f, 1) = 2
$
$lambda_1 = 1$: $a(f, 1) = 4 = 3 + 1 = d_1^1 + d_2^1$, $g(f, 1) = 2 = k$
$lambda_2 = 2$: $a(f, 2) = 1 = d_1^2$, $g(f, 2) = 1$
#bold[Fazit:] Für einen Eigenwert $lambda$ zu $f in L(V,V)$ gilt:
#list[
Die geometrische Vielfachheit des Eigenwerts $lambda$ ist gleich der Anzahl der Jordanblöcke zu diesem Eigenwert in der entsprechenden Darstellungsmatrix
$
A_f^(B,B) = mat(J_(d_1) (lambda_1),,0;,dots.down,;0,,J_(d_k) (lambda_m))
$
][
Die algebraische Vielfachheit des Eigenwerts $lambda$ ist gleich der Summe der Dimensionen der zugehörigen Jordanblöcke
][
Zu jedem Unterraum $cal(K)_(d_j) (f, w_j)$ gehört (bis auf skalare Vielfache) genau ein Eigenvektor.
]
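Ablesen an einem kleinen Beispiel: Für
$
A_f^(B,B) = mat(J_2 (4), , ; , J_1 (4), ; , , J_1 (7)) = mat(4,1,0,0;0,4,0,0;0,0,4,0;0,0,0,7)
$
gilt $a(f, 4) = 2 + 1 = 3$, $g(f, 4) = 2$ (zwei Jordanblöcke zum Eigenwert $4$) und $a(f, 7) = g(f, 7) = 1$.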
Was gilt für weitere Eigenwerte?
Ist $tilde(lambda) != lambda$ ein weiterer Eigenwert von $f$, dann ist $tilde(lambda)$ auch ein Eigenwert der Einschränkung $f|_U in L(U_lambda, U_lambda)$
$==>$ Man kann die Sätze 2.25-2.29 auf $f|_(U_lambda)$ anwenden. Damit erhält man
#boxedlist[
$U_lambda = X oplus Y$
][
$f|_X - tilde(lambda) "Id"_X$ ist bijektiv
][
$f|_Y - tilde(lambda) "Id"_Y$ ist nilpotent
][
Der UVR $Y$ ist die direkte Summe von Krylovräumen
][
Es gibt eine Darstellungsmatrix von $f|_Y$ bestehend aus Jordanblöcken
]
Da man dieses Argument für alle paarweise verschiedenen Eigenwerte von $f$ anwenden kann, erhält man:
#theorem("2.31")[
Sei $V$ ein endlichdimensionaler $K$-Vektorraum und $f in L(V,V)$. Zerfällt das charakteristische Polynom $p_f (.)$ in Linearfaktoren, so gibt es eine Basis $B$ von $V$ für welche die Darstellungsmatrix in Jordan-Normalform ist, d.h.
$
A_f^(B,B) = mat(J_(d_1) (lambda_1),,0;,dots.down,;0,,J_(d_k) (lambda_m))
$
]
#italic[Beweis:] s.o.
#endproof
Camille Jordan (fr. Mathematiker, 1838-1922) gab diese Form 1870 an. Zwei Jahre vor Jordan bewies Karl Weierstraß (dt. Mathematiker, 1815-1897) ein Resultat, aus dem die JNF folgt.
#bold[Beispiel 2.32:]
$
A = mat(-3,-1,4,-3,-1;1,1,-1,1,0;-1,0,2,0,0;4,1,-4,5,1;-2,0,2,-2,1) arrow.squiggly J = mat(1,1,0,,0;0,1,1,,;0,0,1,,;,,,1,;0,,,,2) = A_f^(B,B)
$
$B = {(g_1^1)^2 (w_1^1),g_1^1 (w_1^1), w_1^1, w_2^1, w_1^2}$
Für
$
S = mat(1,-1,0,1,0;
0,0,0,-1,1;
1,0,0,1,2;
0,1,0,0,3;
0,0,1,1,-2)
$
gilt $J = S^(-1) A S$. Also $J$ ähnlich zu $A$.
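Die Behauptung $J = S^(-1) A S$ lässt sich z.B. so nachrechnen (Skizze in Python/NumPy); der Test $A S = S J$ vermeidet die explizite Berechnung von $S^(-1)$:
```python
import numpy as np

A = np.array([[-3, -1,  4, -3, -1],
              [ 1,  1, -1,  1,  0],
              [-1,  0,  2,  0,  0],
              [ 4,  1, -4,  5,  1],
              [-2,  0,  2, -2,  1]], dtype=float)
S = np.array([[1, -1, 0,  1,  0],
              [0,  0, 0, -1,  1],
              [1,  0, 0,  1,  2],
              [0,  1, 0,  0,  3],
              [0,  0, 1,  1, -2]], dtype=float)
J = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 2]], dtype=float)

# J = S^(-1) A S  <=>  A S = S J
print(np.allclose(A @ S, S @ J))   # True
```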
Für $f in L(V,V)$ hatten wir:
#boxedlist[
$f$ ist diagonalisierbar $<==>$
#boxedlist[
$p_f (.)$ zerfällt in Linearfaktoren
][
$forall "EW" lambda "von" f: a(f, lambda) = g(f, lambda)$
]
][
zerfällt $p_f (.)$ in Linearfaktoren $==>$ $exists$ Basis $B: A_f^(B,B)$ in JNF
]
#bold[Folgerung:] Existiert eine Darstellungsmatrix in Jordan-Normalform: $f$ ist diagonalisierbar $<==> d_i = 1 forall i in {1, ..., k}$
#bold[Frage:] Wann zerfällt $p_f (.)$ in Linearfaktoren?
#underline[Fundamentalsatz der Algebra:]
Jedes Polynom $p in P[t]$ über $CC$ mit einem Grad größer $0$ hat mindestens eine Nullstelle.
#italic[Beweis:] Liesen, Mehrmann, Kapitel 15, braucht substantielle Hilfsmittel aus der Analysis.
Damit folgt unmittelbar:
#corollary("2.33")[
Jedes Polynom $p in P[t]$ über $CC$ zerfällt in Linearfaktoren, d.h. es gibt $a, lambda_1, ..., lambda_n in CC$ mit $n = "grad"(p)$ und
$
p(t) = a(t - lambda_1)(t - lambda_2) ... (t - lambda_n)
$
]
Daraus folgt direkt:
#corollary("2.34")[
Sei $V$ ein endlichdimensionaler $CC$-Vektorraum. Dann besitzt jedes $f in L(V,V)$ eine Jordan-Normalform.
]
Matrix-Version:
#corollary("2.35")[
Sei $K$ ein Körper und $A in K^(n,n)$, so dass das charakteristische Polynom $p_A (.)$ in Linearfaktoren zerfällt. Dann ist $A$ ähnlich zu einer Matrix $J$ in Jordan-Normalform.
]
Ist die Jordan-Normalform eindeutig bestimmt?
#theorem("2.36")[
Sei $V$ ein $K$-Vektorraum mit $dim(V) = n < oo$. Besitzt $f in L(V,V)$ eine Jordan-Normalform, so ist diese bis auf die Reihenfolge der Jordanblöcke eindeutig bestimmt.
]
#startproof sehr technisch, z.B. Liesen, Mehrmann Satz 16.12, Fischer/Springborn, Abschnitt 5.7.
#endproof
Alternativer Beweis für die JNF über Hauptvektoren und Haupträume, vgl. Fischer/Springborn, Abschnitt 5.5.
Damit: Für Bsp. 2.32 wären
$
mat(2,,;,J_3 (1),;,,1) quad "oder" quad mat(2,,;,1,;,,J_3 (1))
$
alternative JNF. Jordanblöcke bleiben gleich. D.h. Satz 2.36 rechtfertigt den Namen "Normalform".
#pagebreak()
= Euklidische und unitäre Vektorräume
Jetzt: $V$ Vektorraum über $RR$ oder $CC$ mit $dim(V) < oo$.
Damit: Definition eines Skalarproduktes und Verallgemeinerung von Begriffen aus der Geometrie für $RR^2$ bzw. $RR^3$. Dies beinhaltet auch Orthogonalität und orthonormale Basen.
== Skalarprodukt und Normen
Für $K = RR$ werden wir Bilinearformen (Def. 2.17) verwenden. Für $K =CC$ benötigen wir
#definition("3.1", "Sesquilinearform")[
Seien $V$ und $W$ zwei $CC$-Vektorräume. Man nennt eine Abbildung
$
s: V times W -> CC, (v, w) arrow.bar s(v, w)
$
#bold[Sesquilinearform] auf $V times W$, wenn für alle $v, v_1, v_2 in V$, $w, w_1, w_2 in W$ und $lambda in CC$ gilt
#enum[
$s(v_1+v_2, w) = s(v_1, w) + s(v_2, w)$ und $s(lambda v, w) = lambda s(v,w )$
$corres$ $s(.,.)$ ist linear in der ersten Komponente
][
$s(v, w_1 + w_2) = s(v, w_1) + s(v, w_2)$ und $s(v, lambda w) = macron(lambda) s(v, w)$
$corres$ $s(.,.)$ ist semilinear in der zweiten Komponente
]
Ist $V=W$, so heißt $s$ Sesquilinearform auf $V$. Eine Sesquilinearform auf $V$ nennt man hermitesch, wenn
$
s(v, w) = overline(s(w, v)) quad forall v, w in V
$
] <def>
#definition("3.2", "Skalarprodukt")[
Sei $V$ ein $K$-Vektorraum. Eine Abbildung
$
ip(., .): V times V -> K, quad (v, w) -> ip(v, w)
$
nennt man #bold[Skalarprodukt] oder #bold[inneres Produkt] auf $V$, wenn gilt
#enum[
Ist $K = RR$, so ist $ip(.,.)$ eine symmetrische Bilinearform
][
Ist $K = CC$, so ist $ip(.,.)$ eine hermitesche Sesquilinearform
][
$ip(.,.)$ ist positiv definit, d.h. es gilt
$
&ip(v,v) >= 0 quad forall v in V \
&ip(v,v) = 0 <==> v = 0 in V
$
]
Einen $RR$-Vektorraum mit einem Skalarprodukt nennt man #bold[euklidischen Vektorraum] und einen $CC$-Vektorraum mit einem Skalarprodukt #bold[unitären Vektorraum].
] <def>
#bold[Bemerkungen:]
#boxedlist[
Für alle $v in V$ gilt $ip(v,v) in RR^+$ unabhängig von $K = RR$ oder $K = CC$
][
Ein Unterraum eines euklidischen (unitären) Vektorraums ist wieder ein euklidischer (unitärer) Vektorraum.
]
#definition("3.3", "hermitesche Matrix")[
Für eine Matrix $A = (a_(i j)) in CC^(m,n)$ ist die hermitesch transponierte von $A$ definiert als
$
A^H = (macron(a)_(j i)) in CC^(n, m)
$
Gilt $A = A^H$, so heißt $A$ #bold[hermitesche Matrix].
] <def>
Ist $A in RR^(m, n)$, so $A^H = A^T$. Für eine hermitesche Matrix $A$ gilt $a_(i i) = macron(a)_(i i) ==> a_(i i) in RR$.
#bold[Beispiel 3.4:] Man kann leicht nachrechnen:
#list[
Für $V = RR^n$ ist
$
ip(v, w) := v^T w = sum_(i = 1)^n v_i w_i
$
ein Skalarprodukt. Es ist das Standardskalarprodukt im $RR^n$.
][
Für $V = CC^n$ ist
$
ip(v, w) := w^H v = sum_(i = 1)^n macron(w)_i v_i
$
ein Skalarprodukt. Es ist das Standardskalarprodukt im $CC^n$.
][
Für $V = K^(m,n)$ ist
$
ip(A, B) := "Spur" underbrace((B^H A), in K^(n, n)) = sum_(i = 1)^n (sum_(j = 1)^m macron(b)_(j i) a_(j i))
$
ein Skalarprodukt.
][
Auf dem Vektorraum der auf dem Intervall $[0, 1]$ stetigen, reellwertigen Funktionen ist
$
ip(f, g) := integral_0^1 f(x) g(x) d x
$
ein Skalarprodukt.
]
#lemma("3.5")[
#bold[Cauchy-Schwarz-Ungleichung]
Ist $V$ ein euklidischer oder unitärer Vektorraum, so gilt
$
abs(ip(v, w))^2 <= ip(v, v) dot ip(w,w) quad forall v, w in V
$
wobei das Gleichheitszeichen genau dann gilt, wenn $v$ und $w$ linear abhängig sind.
]
#startproof Für $w = 0$ folgt die (Un-)Gleichung.
Für $w != 0$ definiert man
$
lambda := ip(v, w)/ip(w,w)
$
Dann folgt
$
0 &<= ip(v-lambda w, v - lambda w) = ip(v, v) - macron(lambda) ip(v, w) - lambda ip(w, v) - lambda dot (- macron(lambda)) ip(w, w) \
&= ip(v, v) - overline(ip(v, w))/ip(w, w) ip(v, w) - ip(v, w)/ip(w, w) ip(w, v) + abs(ip(v, w))^2/(ip(w, w))^2 ip(w, w) \
&= ip(v, v) - abs(ip(v, w))^2/ip(w,w) \
$
$
==> abs(ip(v, w))^2 <= ip(v,v) dot ip(w,w)
$
"$=$":
$
0 = ip(v - lambda w, v - lambda w) \
<==> v -lambda w = 0 <==> v = lambda w <==> w = lambda^(-1) v
$
#endproof
Beachte: Im Beweis wurde die Semilinearität in der zweiten Komponente verwendet, d.h.
$
ip(v, lambda w) = macron(lambda) ip(v, w)
$
Die Cauchy-Schwarzsche Ungleichung ist ein sehr wichtiges Instrument der Analysis, z.B. für Approximationsfehler.
Nächstes Ziel: Vektoren $v in V$ eine Länge zuzuordnen $->$ Norm als Verallgemeinerung des Betrags
Für die reellen Zahlen: $abs(.): RR -> RR^+, x arrow.bar abs(x)$ mit
#boxedlist[
$abs(lambda x) = abs(lambda) dot abs(x) wide forall lambda in RR, forall x in RR$
][
$abs(x) >= 0 wide forall x in RR, abs(x) = 0 <==> x = 0$
][
$abs(x+y) <= abs(x) + abs(y) wide forall x, y in RR$
]
#definition("3.6", "Norm")[
Sei $V$ ein $K$-Vektorraum. Eine Abbildung
$
norm(.): V -> RR, quad v arrow.bar norm(v)
$
nennt man Norm auf $V$, wenn für alle $v, w in V$ und $lambda in K$ gilt:
#boxedlist[
sie ist homogen, d.h. #v(-1.5em) #h(100cm)
$
norm(lambda v) = abs(lambda) dot norm(v)
$
][
sie ist positiv definit, d.h:
$
norm(v) >= 0, quad norm(v) = 0 <==> v= 0 in V
$
][
sie erfüllt die Dreiecksungleichung, d.h.
$
norm(v + w) <= norm(v) + norm(w)
$
]
Einen $K$-Vektorraum, auf dem eine Norm definiert ist, nennt man #bold[normierten Raum].
] <def>
#bold[Beispiel 3.7:] Man kann leicht nachrechnen:
#list[
Ist $ip(.,.)$ das Standardskalarprodukt auf $RR^m$ oder $CC^m$, dann definiert
$
norm(v) := ip(v,v)^(1/2) = (v^T v)^(1/2) space "bzw." space = (v^H v)^(1/2)
$
eine Norm auf $RR^m$ bzw. $CC^m$. Sie wird #bold[euklidische Norm] genannt
][
Für $V = K^(m,n)$ ist
$
norm(A)_F := ("Spur"(A^H A))^(1/2) = (sum_(i = 1)^n (sum_(j = 1)^m abs(a_(j i ))^2))^(1/2)
$
eine Norm. Sie wird Frobeniusnorm genannt. Es gilt $norm(A)_F = norm(A^H)_F$ für alle $A in K^(m,n)$.
][
Auf dem Vektorraum der auf dem Intervall $[0,1]$ stetigen, reellwertigen Funktionen ist
$
norm(f) := ip(f,f)^(1/2) = (integral_0^1 (f(x))^2 d x)^(1/2)
$
eine Norm. Sie wird $L_2$- oder $L^2$-Norm genannt.
][
Sei $p in RR$, $p >= 1$ und $V = K^n$. Dann definiert
$
norm(v)_p = (sum_(i = 1)^n abs(v_i)^p)^(1/p)
$
eine Norm im $K^n$. Sie wird $p$-Norm genannt. Für $p = 2$ erhält man die euklidische Norm. Für $p -> oo$ erhält man die sogenannte $oo$-Norm
$
norm(v)_oo := max_(1 <= i <= n) abs(v_i)
$
Je nach Situation kann es einen erheblichen Unterschied bedeuten, welche Norm betrachtet wird. Für $V = RR^2$:
#figure(image("bilder2/3_7.jpeg", width: 100%))
][
Die $p$-Norm auf $K^(m,n)$ ist definiert durch
$
norm(A)_p := sup_(0 != v in K^n) norm(A v)_p/norm(v)_p
$
$norm(A)_p$ ist die durch die $p$-Norm induzierte Matrix-Norm.
Man kann zeigen:
#boxedlist[
Supremum wird angenommen
][
$norm(A)_p = limits(max_(v in K^n))_(norm(v)_p = 1) norm(A v)_p$
]
Man kann zeigen:
$
norm(A)_1 = max_(1<=j<=n) sum_(i = 1)^m abs(a_(i j)) quad "(Spaltensummennorm)" \
norm(A)_oo = max_(1<=i<=m) sum_(j = 1)^n abs(a_(i j)) quad "(Zeilensummennorm)"
$
]
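Die genannten Vektor- und Matrixnormen stellt z.B. NumPy bereit; eine kleine Skizze zur Illustration (Zahlenwerte frei gewählt):
```python
import numpy as np

v = np.array([3.0, -4.0, 0.0])
print(np.linalg.norm(v, 1), np.linalg.norm(v, 2), np.linalg.norm(v, np.inf))
# 7.0  5.0  4.0   (1-, euklidische und oo-Norm)

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
print(np.linalg.norm(A, 1))        # Spaltensummennorm: max(4, 6) = 6
print(np.linalg.norm(A, np.inf))   # Zeilensummennorm:  max(3, 7) = 7
print(np.linalg.norm(A, 'fro'))    # Frobeniusnorm: sqrt(1 + 4 + 9 + 16) = sqrt(30)
```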
#corollary("3.8")[
Sei $V$ ein $K$-Vektorraum mit einem Skalarprodukt. Dann ist die Abbildung
$
norm(.): V -> RR, quad v arrow.bar norm(v) := (ip(v,v))^(1/2)
$
eine Norm auf $V$. Man nennt sie die durch das Skalarprodukt induzierte Norm.
]
#startproof
#enum[
Homogenität: (Es gilt mit $"Re"(z) <= abs(z) forall z in CC$)^((\*))
$
norm(lambda v)^2 = ip(lambda v, lambda v) = lambda macron(lambda) ip(v,v) = abs(lambda)^2 ip(v,v) = abs(lambda)^2 norm(v)^2 quad ==> quad norm(lambda v) = abs(lambda) norm(v)
$
][
Positive Definitheit:
$
ip(v,v) >= 0 ==> norm(v) >= 0 \
ip(v,v) = 0 <==> v = 0, \
<==> norm(v) = 0
$
][
$norm(v+w) <= norm(v) + norm(w)$ $#sspace$
$
norm(v+w)^2 = ip(v+w,v+w) = ip(v,v) + ip(v,w) + ip(w,v) + ip(w,w) \
$
$
&= ip(v,v) + ip(v,w) + overline(ip(v,w)) + ip(w,w) \
&= ip(v,v) + 2 "Re"(ip(v,w)) + ip(w,w) \
&<=^((\*)) ip(v,v) + 2abs(ip(v,w)) + ip(w,w) \
&<=^("CSU") ip(v,v) + 2 ip(v,v)^(1/2) ip(w,w)^(1/2) + ip(w,w) \
&= norm(v)^2 + 2 norm(v) norm(w) + norm(w)^2 \
&= (norm(v) + norm(w))^2
$
$
==>^(sqrt(space)) norm(v+w) <= norm(v) + norm(w)
$
]
#endproof
== Winkel und Orthogonalität
In $RR^2$ bzw. $RR^3$ ist der von zwei Vektoren eingeschlossene Winkel anschaulich klar. Übertragung auf allgemeine Vektorräume?
Zunächst: $V = RR^2$, Standardskalarprodukt $ip(v,w) = w^T v$ und der damit induzierten Norm.
Aus Cauchy-Schwarz folgt:
$
-1 <= ip(v,w)/(norm(v) dot norm(w)) <= 1 quad forall v, w in RR^2 without {0}
$
D.h. dieser Quotient ist gleich $cos(theta)$ für ein $theta in [0, pi]$. Diesen nennt man den zwischen $v$ und $w$ eingeschlossenen Winkel.
$
ip(v,w)/(norm(v) dot norm(w)) = cos(theta) quad -> quad angle.arc (v, w) := arccos ip(v,w)/(norm(v) dot norm(w))
$
Passt das zur "üblichen" Winkeldefinition?
Aufgrund der Eigenschaften des Skalarprodukts folgt
$
angle.arc (v, w) = angle.arc (w, v), quad angle.arc(lambda v, w) = angle.arc(v,w) = angle.arc(v, lambda w) quad forall lambda > 0
$
Für $v != 0 != w$ und
$
tilde(v) = 1/norm(v) v space (==> norm(tilde(v)) = 1) space "und" space tilde(w) = 1/norm(w) w space (==> norm(tilde(w)) = 1)
$
gilt $angle.arc(v, w) = angle.arc(tilde(v), tilde(w))$. Im Einheitskreis erhält man
#figure(image("bilder2/3_8_2.jpeg", width: 25%))
Also gibt es $alpha, beta in [0, 2 pi)$ mit
$
tilde(v) = (cos beta, sin beta)^T quad tilde(w) = (cos alpha, sin alpha)^T
$
Gilt $alpha, beta in [0, pi]$, folgt aus einem Additionstheorem für $cos$
$
cos(beta-alpha) &= cos alpha cos beta + sin alpha sin beta = ip(tilde(v), tilde(w)) \
==> quad & angle.arc(tilde(v), tilde(w)) = arccos ip(tilde(v), tilde(w))/(1 dot 1) = beta - alpha
$
Man kann den Winkel auch über die Gleichung
$
ip(v,w) = norm(v) dot norm(w) dot cos(angle.arc(v,w))
$
definieren. Dann ist auch $v = 0$ und/oder $w = 0$ erlaubt. Stehen $v$ und $w$ senkrecht aufeinander ($v perp w$), so gilt
$
cos(angle.arc(v,w)) = cos(pi/2) = 0 quad ==> quad ip(v,w) = 0
$
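Eine mögliche numerische Umsetzung der Winkelformel (Python/NumPy-Skizze mit dem Standardskalarprodukt; der Name `winkel` ist frei gewählt):
```python
import numpy as np

def winkel(v, w):
    # angle(v, w) = arccos( <v,w> / (||v|| * ||w||) )
    c = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
    return np.arccos(np.clip(c, -1.0, 1.0))   # clip faengt Rundungsfehler ab

v = np.array([1.0, 0.0])
print(winkel(v, np.array([1.0, 1.0])))   # pi/4
print(winkel(v, np.array([0.0, 2.0])))   # pi/2: orthogonal, <v,w> = 0
```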
#definition("3.9", "orthogonal")[
Sei $V$ ein endlichdimensionaler euklidischer oder unitärer Vektorraum.
#enum[
Zwei Vektoren $v, w in V$ heißen #bold[orthogonal] bezüglich des gegebenen Skalarproduktes $ip(.,.)$, wenn gilt $ip(v,w) = 0$.
][
Für dieses Skalarprodukt heißt eine Basis ${v_1, ..., v_n}$ von $V$ #bold[Orthogonalbasis], wenn
$
ip(v_i, v_j) = 0 quad i, j = 1, ..., n, space i != j
$
Ist zusätzlich für die induzierte Norm
$
ip(v_i, v_i)^(1/2) = norm(v_i) = 1 quad i = 1, ..., n
$
so heißt ${v_1, ..., v_n}$ #bold[Orthonormalbasis] von $V$. ($<==> ip(v_i, v_j) = delta_(i j)$)
]
] <def>
#theorem("3.10")[
Sei $V$ ein euklidischer oder unitärer Vektorraum mit $dim(V) = n < oo$. Sei ${v_1, ..., v_n}$ eine Basis von $V$. Dann existiert eine Orthonormalbasis ${w_1, ..., w_n}$ von $V$.
]
#startproof Per Induktion über $n$.
Induktionsanfang: $n = 1$
Sei $v_1 in V$, $v_1 != 0$. Dann gilt für $w_1 = norm(v_1)^(-1) v_1$, dass $norm(w_1) = 1$ und $"Span"{v_1} = "Span"{w_1}$. $==>$ ${w_1}$ ist ONB
Induktionsschritt: $n -> n +1$
Die Aussage gelte für $n$. Sei $dim(V) = n+1$ und ${v_1, ..., v_(n+1)}$ eine Basis von $V$. Dann ist $U = "Span"{v_1, ..., v_n}$ ein $n$-dimensionaler Unterraum von $V$. Nach Induktionsvoraussetzung existiert eine ONB ${w_1, ..., w_n}$ von $U$. D.h.
$
"Span"{w_1,...,w_n} = "Span"{v_1, ..., v_n}
$
Für
$
tilde(w)_(n+1) = v_(n+1) - sum_(k = 1)^n ip(v_(n+1), w_k) w_k
$
gilt wegen $v_(n+1) in.not U$, dass $tilde(w)_(n+1) != 0$. Mit dem Austauschsatz von Steinitz (Satz 2.23, LinA I) folgt für $w_(n+1) = norm(tilde(w)_(n+1))^(-1) tilde(w)_(n+1)$, dass
$
V = "Span"{v_1, ..., v_(n+1)} = "Span"{w_1, ..., w_(n+1)}
$
Für $j = 1, ..., n$ erhält man
$
ip(w_(n+1), w_j) &= ip(norm(tilde(w)_(n+1))^(-1) tilde(w)_(n+1), w_j) \
&= norm(tilde(w)_(n+1))^(-1) ip(v_(n+1) - sum_(k = 1)^n ip(v_(n+1), w_k) w_k, w_j) \
&= norm(tilde(w)_(n+1))^(-1) (ip(v_(n+1), w_j) - sum_(k = 1)^n ip(v_(n+1), w_k) ip(w_k, w_j)) \
&= norm(tilde(w)_(n+1))^(-1) (ip(v_(n+1), w_j) - ip(v_(n+1), w_j)) = 0
$
$==>$ ${w_1, ..., w_(n+1)}$ sind ONB.
#endproof
Diese Orthogonalisierung ist als Gram-Schmidt-Verfahren bekannt, benannt nach Jørgen Pedersen Gram (dänischer Mathematiker, 1850-1916) und Erhard Schmidt (deutscher Mathematiker, 1876-1959). Das Verfahren wurde bereits zuvor von Laplace und Cauchy verwendet.
#bold[Algorithmus 3.11: Gram-Schmidt-Verfahren]
Gegeben: ${v_1, ..., v_n}$ als Basis eines euklidischen (unitären) Vektorraums $V$
#enum[
Setze $w_1 := norm(v_1)^(-1) v_1$
][
Für $j = 2, ..., n$ setze $#sspace$
$
tilde(w)_j := v_j - sum_(k = 1)^(j-1) ip(v_j, w_k) w_k \
w_j := norm(tilde(w)_j)^(-1) tilde(w)_j
$
]
Die ursprüngliche Basis ${v_1, ..., v_n}$ hat dann die Darstellung
$
(v_1, ..., v_n) = (w_1, ..., w_k) underbrace(mat(norm(v_1), ip(v_1, w_1),...,ip(v_n, w_1);0,norm(tilde(w)_2), dots.down, dots.v;dots.v,dots.v,dots.down, ip(v_n, w_(n-1));0,0,,norm(tilde(w)_n)), = R)
$
Da alle Diagonaleinträge von $R$ ungleich $0$ sind, ist $R$ invertierbar. Sei nun $U$ ein $m$-dimensionaler Unterraum von $RR^n$ oder $CC^n$ mit dem Standardskalarprodukt. Wir definieren für eine Orthonormalbasis ${w_1, ..., w_m}$ von $U$ die Matrix
$
Q = (w_1, ..., w_m) in K^(n, m)
$
Damit gilt im reellen Fall
$
RR^(m,m) #scale(x: -100%, $in$) Q^T Q = (w^T_i w_j)_(i, j = 1, ..., m) = (delta_(i j))_(i, j = 1, ..., m) = I_m
$
und im komplexen Fall
$
CC^(m,m) #scale(x: -100%, $in$) Q^H Q = (w_i^H w_j)_(i, j = 1, ..., m) = I_m
$
für $m = n$: $Q^T = Q^(-1)$ bzw. $Q^H = Q^(-1)$
Umgekehrt gilt: Ist für eine Matrix $Q in K^(n, m)$ $Q^T Q = I_m$ bzw. $Q^H Q = I_m$, so sind die Spalten von $Q$ eine ONB bzgl. des Standardskalarproduktes eines $m$-dimensionalen Unterraums von $RR^n$ bzw. $CC^n$. Damit gilt:
#theorem("3.12")[
Sind $v_1, ..., v_m in K^n$ linear unabhängig, dann gibt es eine Matrix $Q in K^(n, m)$ mit orthonormalen Spalten bezüglich des Standardskalarproduktes und eine obere Dreiecksmatrix $R in "GL"_m (K)$ mit
$
K^(n, m) #scale(x: -100%)[$in$] (v_1, ..., v_m) = Q R
$
als sogenannte $Q R$-Zerlegung
]
$Q R$ $->$ numerische lineare Algebra $->$ kleinste Quadrate-Problem
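Eine mögliche Umsetzung von Algorithmus 3.11 als Python/NumPy-Skizze; sie liefert zugleich die $Q R$-Zerlegung aus Satz 3.12 (der Funktionsname ist frei gewählt; in der Praxis verwendet man z.B. `np.linalg.qr`, das bis auf Vorzeichen dasselbe Ergebnis mit stabileren Verfahren berechnet):
```python
import numpy as np

def gram_schmidt(V):
    """Spalten von V: Basis v_1,...,v_m. Liefert Q (orthonormale Spalten) und R mit V = Q R."""
    n, m = V.shape
    Q = np.zeros((n, m))
    R = np.zeros((m, m))
    for j in range(m):
        w = V[:, j].copy()
        for k in range(j):
            R[k, j] = np.dot(V[:, j], Q[:, k])   # <v_j, w_k>
            w -= R[k, j] * Q[:, k]               # Projektionsanteile abziehen
        R[j, j] = np.linalg.norm(w)              # ||w~_j||
        Q[:, j] = w / R[j, j]
    return Q, R

V = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q, R = gram_schmidt(V)
print(np.allclose(Q.T @ Q, np.eye(2)), np.allclose(Q @ R, V))   # True True
```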
//bild
Die Matrix $Q$ ist längenerhaltend:
#lemma("3.13")[
Sei $Q in K^(m, n)$ eine Matrix mit orthonormalen Spalten bzgl. des Standardskalarproduktes. Dann gilt $norm(v)_2 = norm(Q v)_2$ für alle $v in K^n$, wobei hier $norm(.)_2$ die euklidische Norm ist.
]
#startproof
$
norm(v)_2^2 = ip(v,v) = v^H v = v^H I v = v^H Q^H Q v = norm(Q v)_2^2
$
#endproof
#definition("3.14", "Orthogonale und unitäre Matrizen")[
#list[
Eine Matrix $Q in RR^(n,n)$ heißt #bold[orthogonal], wenn $Q^T Q = I_n$ gilt. Wir definieren
$
O_n (RR) := {Q in RR^(n,n) | Q "orthogonal"}
$
][
Eine Matrix $Q in CC^(n,n)$ heißt #bold[unitär], wenn $Q^H Q = I_n$. Wir definieren
$
U_n (CC) := {Q in CC^(n,n) | Q "unitär"}
$
]
] <def>
Für orthogonale bzw. unitäre Matrizen gilt
$
RR^(n,n) #scale(x: -100%)[$in$] Q^T Q = I_n ==> Q^T = Q^(-1), CC^(n,n) #scale(x: -100%)[$in$] Q^H Q = I_n ==> Q^H = Q^(-1)
$
D.h.
#lemma("3.15")[
Die Mengen $O_n (RR)$ und $U_n (CC)$ bilden Untergruppen von $"GL"_n (RR)$ und $"GL"_n (CC)$.
]
#startproof Hier nur $"GL"_n (RR)$
Für $I_n in RR^(n,n)$ gilt $I_n^T I_n = I_n$ $==>$ $I_n in O_n (RR)$ $==>$ $O_n (RR) != emptyset$.
zu zeigen: Gruppeneigenschaften
#boxedenum[
Abgeschlossenheit bzgl. der inneren Verknüpfung
Seien $Q_1, Q_2 in O_n (RR)$. Dann gilt:
$
(Q_1 Q_2)^T Q_1 Q_2 = Q_2^T Q_1^T Q_1 Q_2 = I_n \
==> Q_1 Q_2 in O_n (RR)
$
][
Neutrales Element: $I_n$
][
Inverses Element: $Q^(-1) = Q^T$
]
#endproof
Jetzt: Übertragung auf Endomorphismen, auch der geometrische Aspekt
#definition("3.16", "orthogonale Abbildung")[
Eine Abbildung $f in L(V,V)$ heißt #bold[orthogonal] ($V = RR$) bzw. #bold[unitär] ($V = CC$) falls gilt
$
ip(f(v), f(w)) = ip(v, w) quad forall v, w in V
$
] <def>
#definition("3.17", "")[
Wir definieren für einen euklidischen Vektorraum $V$
$
O(V) := {f in L(V,V) | f "orthogonal"}
$
bzw. für einen unitären Vektorraum $V$
$
U(V) := {f in L(V,V) | f "unitär"}
$
] <def>
#lemma("3.18")[
Sei $f in L(V,V)$ orthogonal bzw. unitär. Dann gilt:
#enum[
$norm(f(v)) = norm(v) quad forall v in V$ für die durch das Skalarprodukt induzierte Norm
][
$v bot w$ $==>$ $f(v) bot f(w)$
][
$f$ ist ein Isomorphismus und $f^(-1)$ ist ebenfalls orthogonal bzw. unitär.
][
Ist $lambda in K$ ein Eigenwert von $f$, so gilt $abs(lambda) = 1$
]
]
#startproof 1 und 2 folgen direkt aus der Definition.
3: Injektivität folgt aus 1 und der positiven Definitheit der Norm. Surjektivität folgt dann aus der Dimensionsformel. Da $f$ surjektiv und orthogonal bzw. unitär ist, folgt diese Eigenschaft auch für $f^(-1)$.
4: Ist $lambda$ ein Eigenwert von $f$ mit dem Eigenvektor $v != 0$, so gilt
$
norm(v) = norm(f(v)) = norm(lambda v) = abs(lambda) norm(v) \
==> 1 = abs(lambda)
$
#endproof
Aus der Definition des Skalarproduktes und orthogonal bzw. unitär folgt
#corollary("3.19")[
Gilt für $f in L(V,V)$, dass
$
norm(f(v)) = norm(v) quad forall v in V
$
für die durch das Skalarprodukt induzierte Norm, so ist $f$ orthogonal bzw. unitär.
]
Aus diesen Gründen werden orthogonale bzw. unitäre Abbildungen auch Isometrien genannt.
#theorem("3.20")[
Sei $V$ ein euklidischer (unitärer) Vektorraum mit einer Orthonormalbasis $B = {v_1, ..., v_n}$ und $f in L(V,V)$. Dann gilt:
$
f in O(V) "bzw." f in U(V) quad <==> quad A_f^(B,B) in O_n (RR) "bzw." A_f^(B,B) in U_n (CC)
$
D.h. die Abbildungen
$
O(V) -> O_n (RR), f arrow.bar A_f^(B,B) space "bzw." space U(V) -> U_n (CC), f arrow.bar A_f^(B,B)
$
sind Isomorphismen.
]
#startproof Hier nur für $K = RR$
"$==>$": $f$ orthogonal
Dann gilt wegen der Orthonormalität von $B$ für $A_f^(B,B) = (a_(i j))$, dass
$
delta_(i j) = ip(v_i, v_j) =^("3.16") ip(f(v_i), f(v_j)) = ip(sum_(l = 1)^n a_(l i) v_l, sum_(k = 1)^n a_(k j) v_k) = sum_(l = 1)^n a_(l i) a_(l j)
$
Also:
$
I_n = (A_f^(B,B))^T A_f^(B,B) ==> A_f^(B,B) in O_n (RR)
$
"$<==$": $A_f^(B,B) in O_n (RR)$. Für die zugehörige lineare Abbildung $f$ gilt wegen
$
f(v_i) = sum_(l = 1)^n a_(l i) v_l,
$
dass
$
ip(f(v_i), f(v_j)) = ip(sum_(l = 1)^n a_(l i) v_l, sum_(k = 1)^n a_(k j) v_k) = sum_(l = 1)^n a_(l i) a_(l j) =^(A_f^(B,B) in O_n (RR)) delta_(i j) = ip(v_i, v_j) \
==> f in O(V)
$
#endproof
== Selbstadjungierte Abbildungen
Was ist ein adjungierter Endomorphismus?
#lemma("3.21")[
Sei $V$ ein euklidischer (unitärer) Vektorraum und $f in L(V,V)$. Dann gibt es genau ein $g in L(V,V)$ mit
$
ip(f(v), w) = ip(v, g(w)) quad forall v, w in V
$
Ist $B$ eine Orthonormalbasis von $V$, so gilt
$
A_g^(B,B) = (A_f^(B,B))^H
$
]
#startproof Hier nur für $K = RR$. Da $B$ orthonormal ist, gilt für $v = Phi_B (x)$ und $w = Phi_B (y)$, dass
$
ip(v, w) = ip(A_(Phi_B)^(E, B) v, A_(Phi_B)^(E, B) w)_(RR^n) = x^T underbrace((A_(Phi_B)^(E,B))^T A_(Phi_B)^(E,B), I) y = x^T y = ip(x,y)_(RR^n) quad forall v, w in V
$
Dann gilt für $A_f^(B,B)$
$
ip(A_f^(B,B) x, y)_(RR^n) = (A_f^(B,B) x)^T y = x^T (A_f^(B,B))^T y = ip(x, (A_f^(B,B))^T y)_(RR^n)
$
Damit ist wegen der Definition des Skalarproduktes eindeutig eine lineare Abbildung mit der Darstellungsmatrix $(A_f^(B,B))^T$ gegeben. Diese bestimmt eindeutig den gesuchten Endomorphismus $g$.
#endproof
#definition("3.22", "adjungierter Endomorphismus")[
Die in Lemma 3.21 eindeutig definierte Abbildung $g in L(V,V)$ nennt man den zu $f in L(V,V)$ #bold[adjungierten Endomorphismus]. Er wird mit $f^"ad"$ bezeichnet.
] <def>
#definition("3.23", "selbstadjungiert")[
Sei $V$ ein euklidischer (unitärer) Vektorraum und $f in L(V,V)$. Der Endomorphismus $f$ heißt #bold[selbstadjungiert], wenn
$
ip(f(v), w) = ip(v, f(w)) quad forall v, w in V
$
gilt. D.h. $f^"ad" = f$.
] <def>
#bold[Bemerkungen:] Es folgt unmittelbar
#boxedlist[
Ist $f in L(V,V)$ und $B$ eine ONB, so gilt
$
f "selbstadjungiert" <==> A_f^(B,B) "ist symmetrisch bzw. hermitesch, d.h." A = A^H
$
][
Ist $f$ orthogonal bzw. unitär, so ist $f^"ad" = f^(-1)$, denn für $u, v in V$ mit $w = f(u)$ d.h. $u = f^(-1) (w)$ gilt
$
ip(f(v), w) = ip(f(v), f(u)) = ip(v, u) = ip(v, f^(-1) (w))
$
]
#lemma("3.24")[
Sei $V$ ein euklidischer (unitärer) Vektorraum und $f in L(V,V)$ selbstadjungiert. Dann sind alle Eigenwerte von $f$ reell und das charakteristische Polynom zerfällt in Linearfaktoren.
]
#startproof Sei zunächst $K =CC$. Sei $lambda$ ein Eigenwert von $f$ mit zugehörigen Eigenvektor $v != 0$. Dann gilt
$
lambda underbrace(ip(v,v), > 0) = ip(lambda v, v) = ip(f(v), v) = ip(v, f(v)) = ip(v, lambda v) = macron(lambda) ip(v, v)
$
$==> lambda = macron(lambda) in RR$
Fundamentalsatz der Algebra $==>$ $p_f (.) = p_A (.)$ zerfällt über $CC$ in Linearfaktoren.
Sei nun $K =RR$.
$B$ ONB $==> A:=A_f^(B,B) = (A_f^(B,B))^T$ ist symmetrisch, als komplexe Matrix aufgefasst also hermitesch
$==>$ wie oben folgt, dass $p_A (.)$, betrachtet über $CC$, in Linearfaktoren zerfällt:
$
(lambda - lambda_i) wide (lambda_i "ist EW", lambda_i in RR)
$
$==> p_A (.)$ zerfällt auch über $RR$ in Linearfaktoren.
#endproof
#theorem("3.25")[
Sei $V$ ein euklidischer (unitärer) Vektorraum und $f in L(V,V)$ selbstadjungiert. Dann gibt es eine Orthonormalbasis von $V$ die aus Eigenvektoren zu den reellen Eigenwerten von $f$ besteht.
]
#startproof Sei $n = dim(V) < oo$.
Für $n = 1$: klar $checkmark$
$n-1 -> n$:
Wegen Lemma 3.24 gilt
$
p_f (lambda) = plus.minus (lambda-lambda_1) dot ... dot (lambda - lambda_n)
$
mit $lambda_1, ..., lambda_n in RR$. Zu $lambda_1$ existiert ein Eigenvektor $v_1$ mit $norm(v_1) = 1$. Dann gilt für
$
u in U := {u in V | ip(v_1, u) = 0},
$
dass
$
ip(v_1, f(u)) = ip(f(v_1), u) = lambda_1 underbrace(ip(v_1, u), = 0) = 0
$
d.h. $f(U) subset.eq U$. Also ist $U$ invariant unter $f$. Die Einschränkung $f|_U : U->U$ ist selbstadjungiert mit
$
p_(f|_U) = plus.minus (lambda - lambda_2) dot ... dot (lambda - lambda_n)
$
Nach Induktionsvoraussetzung existiert eine ONB von $U$. Die Vereinigung dieser ONB mit ${v_1}$ ist eine ONB von $V$.
#endproof
Für die Matrixform erhalten wir analog:
#lemma("3.26")[
Sei $A in K^(n,n)$ symmetrisch (hermitesch). Dann gibt es ein $T in "GL"_n (K)$ und $lambda_1, ..., lambda_n in RR$ so dass gilt
$
T A T^(-1) = mat(lambda_1,,0;,dots.down,;0,,lambda_n)
$
]
#startproof Über Darstellungsmatrix eines selbstadjungierten Endomorphismus.
#endproof
#definition("3.27", "positiv definite Matrix")[
Eine symmetrische (hermitesche) Matrix $A in K^(n,n)$ heißt #bold[positiv definit], wenn
$
v^T A v > 0 "bzw." v^H A v > 0 wide forall v in V without {0}
$
] <def>
Lemma 3.26 $==>$ $A$ symmetrisch (hermitesch) $==>$ $A$ diagonalisierbar
Des Weiteren gilt:
#theorem("3.28")[
Sei $A in K^(n,n)$ symmetrisch (hermitesch). Dann sind folgende Aussagen äquivalent:
#enum[
$A$ ist positiv definit
][
Alle Eigenwerte $lambda_1, ..., lambda_n in RR$ von $A$ sind positiv.
]
]
#startproof Hier nur $K = CC$, $K = RR$ folgt analog.
"$1==>2$": Sei $lambda$ Eigenwert von $A$. Lemma 3.26 $==>$ $lambda in RR$. Sei $v$ ein Eigenvektor zu $lambda$, dann gilt:
$
0 < v^H A v = v^H A^H v = (A v)^H v = (lambda v)^H v = lambda v^H v = lambda underbrace(norm(v)^2, > 0) ==> lambda > 0
$
"$2==>1$": Satz 3.25: Es existiert eine ONB ${v_1, ..., v_n}$ bestehend aus Eigenvektoren zu Eigenwerten von $A$.
$
v_i^H A v_j = lambda_j v_i^H v_j = lambda_j delta_(i j)
$
Jedes $v in V$ besitzt eine Darstellung
$
v = sum_(i = 1)^n mu_i v_i, quad mu_i in CC, 1<=i<=n
$
$
v^H A v = ip(sum_(i = 1)^n mu_i v_i, sum_(j = 1)^n mu_j A v_j) = sum_(i = 1)^n sum_(j = 1)^n mu_i macron(mu)_j lambda_j underbrace(ip(v_i, v_j), delta_(i j)) \
= sum_(i = 1)^n mu_i macron(mu)_i underbrace(lambda_i, > 0) = sum_(i = 1)^n underbrace(abs(mu_i)^2, >= 0) lambda_i > 0 quad "für" v != 0
$
#endproof
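Ein möglicher numerischer Definitheitstest über Satz 3.28 (Python/NumPy-Skizze; `np.linalg.eigvalsh` setzt eine symmetrische bzw. hermitesche Matrix voraus, die Beispielmatrizen sind frei gewählt):
```python
import numpy as np

def ist_positiv_definit(A, tol=1e-12):
    # Satz 3.28: A = A^H positiv definit <=> alle Eigenwerte > 0
    return bool(np.all(np.linalg.eigvalsh(A) > tol))

print(ist_positiv_definit(np.array([[2.0, 1.0], [1.0, 2.0]])))   # True  (EW 1 und 3)
print(ist_positiv_definit(np.array([[1.0, 2.0], [2.0, 1.0]])))   # False (EW -1 und 3)
```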
Zur Berechnung einer solchen ONB:
#bold[Algorithmus 3.29:] Gegeben: $A in K^(n,n)$ bzw. $f in L(V,V)$ mit $A_f^(B,B) = A$.
#list[
Bestimme
$
p_A (lambda) = plus.minus (lambda - lambda_1)^(k_1) dot ... dot (lambda - lambda_m)^(k_m)
$
mit paarweise verschiedenen $lambda_i$, $1 <= i<= m$. Ist dies nicht möglich: STOP
][
Für jeden Eigenwert $lambda_i$ der algebraischen Vielfachheit $k_i$ bestimme eine Basis des dazugehörigen Eigenraums $"Eig"(A, lambda_i)$. Stimmen geometrische und algebraische Vielfachheit nicht überein: STOP
][
Orthonormalisiere die Vereinigung der jeweiligen Basen mit dem Gram-Schmidt-Verfahren.
]
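Für symmetrische bzw. hermitesche Matrizen liefert z.B. `np.linalg.eigh` direkt eine Orthonormalbasis aus Eigenvektoren, also das Ergebnis von Algorithmus 3.29 ohne Handrechnung (numerische Skizze, Beispielmatrix frei gewählt):
```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])     # symmetrisch, nach Satz 3.25 also mit ONB aus Eigenvektoren

lam, Q = np.linalg.eigh(A)          # Spalten von Q: ONB aus Eigenvektoren
print(lam)                                        # Eigenwerte (aufsteigend): [1. 3. 3.]
print(np.allclose(Q.T @ Q, np.eye(3)))            # True: Spalten orthonormal
print(np.allclose(Q @ np.diag(lam) @ Q.T, A))     # True: A = Q diag(lam) Q^T
```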
#bold[Beispiel 3.30:] Fortsetzung von Beispiel 2.32
Wir betrachten wieder
$
A = mat(3,-1,4,-3,-1;1,1,-1,1,0;-1,0,2,0,0;4,1,-4,5,1;-2,0,2,-2,1) \
p_A (lambda) = (lambda-1)^4 (lambda-2) wide lambda_1 = 1 > 0, lambda_2 = 2 > 0
$
Es gilt
$
"Eig"(A, 1) = "Span"{vec(1,-1,1,0,1), vec(1,1,1,0,-1), vec(0,0,1,1,0), vec(0,0,0,0,1)} = {v_1, v_2, v_3, v_4} \
"Eig"(A, 2) = "Span"{mat(0,1,2,3,-2)^T} = {v_5}
$
GS-Verfahren
$
w_1 = norm(v_1)^(-1) v_1 = 1/2 mat(1,-1,1,0,1)^T
$
$underline(j = 2):$
$
tilde(w)_2 = v_2 - sum_(k = 1)^1 ip(v_2, w_k) w_k = mat(1,1,1,0,-1)^T - 0, quad w_2 = 1/2 mat(1,1,1,0,-1)^T
$
$underline(j = 3):$
$
tilde(w)_3 = v_3 - sum_(k = 1)^2 ip(v_3, w_k) w_k = mat(0,0,1,1,0)^T - 1/2 dot 1/2 dot mat(1,-1,1,0,1)^T - 1/2 dot 1/2 dot mat(1,1,1,0,-1)^T \
= mat(-1/2,0,1/2,1,0)^T, quad w_3 = 1/sqrt(3/2) mat(-1/2,0,1/2,1,0)^T
$
$j = 4$: ...
$j = 5$: ...
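Die verbleibenden Schritte kann man z.B. numerisch ergänzen; eine Skizze in Python/NumPy, welche die oben angegebene Basis von $"Eig"(A,1)$ orthonormalisiert (Bezeichnungen frei gewählt):
```python
import numpy as np

V = np.array([[1, -1, 1, 0,  1],    # v_1
              [1,  1, 1, 0, -1],    # v_2
              [0,  0, 1, 1,  0],    # v_3
              [0,  0, 0, 0,  1]],   # v_4
             dtype=float).T          # Spalten = Basisvektoren von Eig(A, 1)

W = np.zeros_like(V)
for j in range(V.shape[1]):
    w = V[:, j].copy()
    for k in range(j):
        w -= np.dot(V[:, j], W[:, k]) * W[:, k]   # Projektionsanteile abziehen
    W[:, j] = w / np.linalg.norm(w)

print(np.round(W, 4))                          # w_1, ..., w_4 als Spalten
print(np.allclose(W.T @ W, np.eye(4)))         # True: orthonormal
```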
#pagebreak()
= Affine Geometrie
Bisher als Struktur:
$
"Gruppen" ==> "Körper" ==> "Vektorraum über Körper" ==> #stack(dir: ttb, spacing: 1em)[
Unterraum #h(12.5em)
][
affine Unterräume als weitere Struktur
]
$
Zur Motivation/Startpunkt, $RR^3$
Gerade:
$
G := {vec(1,2,3) + a dot vec(1,1,0) | a in RR}
$
Ebene:
$
E := {vec(1,2,3) + a dot vec(1,0,0) + b dot vec(0,1,0) | a, b in RR}
$
Def. 6.5 aus LinA: $G$ und $E$ sind affine Unterräume des $RR^3$. Damit ist $mat(1,2,3)^T$ ein Punkt im Raum zu dem ein Vektor als Repräsentant einer Äquivalenzklasse addiert wird. D.h.
$
G: P limits(+)_= ar(v)
$
Wir hatten schon: $U = v limits(+)_= W$ mit $W$ UVR
== Operation einer Gruppe auf einer Menge
#definition("4.1", "Wirkung einer Gruppe")[
Es sei $G$ eine Gruppe mit der Verknüpfung $circ$ und dem neutralen Element $e$ sowie eine Menge $M$. Eine Abbildung der Form
$
G times M -> M, quad (g, m) arrow.bar g circs m
$
nennt man #bold[Wirkung] oder #bold[Operation] der Gruppe $G$ auf der Menge $M$, falls gilt
#boxedenum[
$(g_1 circ g_2) circs m = g_1 circs (g_2 circs m) quad forall g_1, g_2 in G, forall m in M$
][
$e circs m = m quad forall m in M$
]
] <def>
#bold[Beispiel 4.2:]
#boxedlist[
Passend zum obigen Beispiel der Gerade/Ebene:
Sei $G = V$ ein $K$-Vektorraum (vgl. Def. 2.26 LinA I), $M = V$. Dann ist durch
$
V times V -> V, (v, x) arrow.bar v + x
$
eine Operation von $V$ auf sich selbst definiert.
][
Für die Gruppe $G := (RR, +)$ und $M = S^1$ als Einheitskreis im $RR^2$, d.h.
$
S^1 = {x in RR^2 | x_1^2 + x_2^2 = 1}
$
wird durch
$
(a, x) arrow.bar e^(i a) dot x quad forall a in G, forall x in M
$
eine Operation von $(RR, +)$ auf $S^1$ gegeben.
][
Sei $G = "GL"_n (RR)$ und $M =RR^n$. Dann definiert
$
(A, x) arrow.bar A dot x in M quad forall A in G, forall x in M
$
eine Operation von $"GL"_n (RR)$ auf $M$.
]
#definition("4.3", [Bahn von $m$])[
Eine Gruppe $G$ wirke auf die Menge $M$. Für $m in M$ wird die Teilmenge
$
G circs m := { g circs m | g in G } subset.eq M
$
die #bold[Bahn von $m$ unter $G$] genannt.
] <def>
#bold[Beobachtung:] In Beispiel 4.2:
In 1 und 2 entspricht die Bahn eines einzigen Elements der ganzen Menge.
In 3) gilt das nicht: Die Bahn von $0 in RR^n$ ist ${0}$, die Bahn jedes $x != 0$ ist $RR^n without {0}$.
#definition("4.4", "transitive Wirkung")[
Eine Gruppe $G$ wirke auf der Menge $M$. Die Wirkung nennt man #bold[transitiv], wenn für alle Paare $m, tilde(m) in M$ ein $g in G$ existiert, so dass
$
m = g circs tilde(m)
$
Man nennt die Wirkung #bold[einfach transitiv], falls das Gruppenelement $g$ eindeutig bestimmt ist.
] <def>
#lemma("4.5")[
Eine Gruppe $G$ wirke auf die Menge $M$. Dann gilt:
#boxedenum[
Ist die Wirkung transitiv, so gilt für jedes $m in M$ die Gleichheit $G circs m = M$
][
Ist die Wirkung einfach transitiv, so existiert eine Bijektion zwischen $M$ und $G$.
]
]
#startproof
zu 1) Sei $m in M$ bel. gewählt. Dann existiert wegen der transitiven Wirkung zu $tilde(m) in M$ ein $g in G$ mit $tilde(m) = g circs m ==> M = G circs m$
zu 2) Für ein fest gewähltes $m in M$, definiert man
$
psi_m: G -> M, quad g arrow.bar g circs m
$
Wegen der transitiven Wirkung ist $psi_m$ surjektiv. Da die Wirkung einfach transitiv ist, ist $psi_m$ auch injektiv $==>$ Bijektivität
#endproof
== Affine Räume
#definition("4.6", "affiner Raum")[
Sei $V$ ein $K$-Vektorraum. Eine nichtleere Menge $M$ heißt #bold[affiner Raum] über dem Vektorraum $V$, wenn $V$ einfach transitiv auf $M$ wirkt. Die Elemente von $M$ werden als #bold[Punkte] bezeichnet. Ist $M = emptyset$, so wird $M$ ebenfalls als affiner Raum aufgefasst.
] <def>
Wie passt das zu 6.5 aus LinA I?
#bold[Beispiel 4.7:] Sei $cal(L)(A, b)$ die Lösungsmenge des LGS $A x = b$ mit $A in RR^(m,n), x in RR^n, b in RR^m$. Im Satz 6.3, LinA I, haben wir gezeigt, dass $cal(L)(A, 0)$ ein Unterraum des $RR^n$ ist ($==>$ Gruppe). Ist $cal(L)(A, b) != emptyset$, gilt nach Satz 6.4, LinA I, dass
$
cal(L)(A, b) = x_* + cal(L)(A, 0)
$
für ein beliebiges $x_* in cal(L)(A, b)$. Dann ist $M = cal(L)(A, b)$ ein affiner Raum über dem Vektorraum $G := cal(L)(A, 0)$, denn für die Abbildung
$
plus: cal(L)(A, 0) times cal(L)(A, b) -> cal(L)(A, b), quad (y, x) arrow.bar y + x
$
gilt
$forall y in G, forall x in M$:
$
A (y + x) = A y + A x = 0 + b = b
$
$==> y + x in M$
$forall y, tilde(y) in G, forall x in M$ gilt
$
"1."& quad (y +_G tilde(y)) +_W x = y +_W (tilde(y) +_W x) = y +_(RR^n) tilde(y) +_(RR^n) x \
"2."& quad 0 +_W x = 0 +_(RR^n) x = x
$
$==>$ $+$ ist eine Wirkung der Gruppe $G$ auf die Menge $M$. Sind $x, tilde(x) in cal(L)(A, b)$ ist $A x = A tilde(x) = b$ $==>$
$
A (x - tilde(x)) = b - b = 0
$
$y := x - tilde(x) in cal(L)(A, 0) ==> x = (x - tilde(x)) + tilde(x) = y + tilde(x) ==>$ Wirkung ist transitiv
Sei $tilde(y) in cal(L)(A, 0) = G$ so gewählt, dass auch
$
x = tilde(y) + tilde(x)
$
gilt.
$
x = y + tilde(x) \
x = tilde(y) + tilde(x) \
==> 0 = y - tilde(y) ==> y = tilde(y) ==> "einfach transitiv"
$
#endproof
#corollary("4.8")[
Sind $M$ und $tilde(M)$ zwei affine Räume, so existiert eine Bijektion zwischen $M$ und $tilde(M)$.
]
#startproof Folgt aus Lemma 4.5 und Komposition zweier bijektiver Abbildungen.
#endproof
#bold[Folgerung:] Ein affiner Raum ist bis auf eine Bijektion eindeutig bestimmt. Damit ist folgendes sinnvoll:
#definition("4.9", [affiner Raum von $V$ als $A(V)$])[
Wir bezeichnen den affinen Raum über dem zugehörigen $K$-Vektorraum $V$ mit $A(V)$ bzw. $A$, wenn der Kontext klar ist. Die einfach transitive Wirkung $circs$ von $V$ auf $A(V)$ wird mit $+$ bezeichnet, d.h.
$
x circs P := P + x, quad x in V, P in A(V)
$
] <def>
#lemma("4.10")[
Sei $V$ ein $K$-Vektorraum und $A$ ein affiner Raum über $V$. Sei $P, Q, R, S in A$ und $v, w in V$. Dann gelten folgende Aussagen:
#boxedenum[
$
P + v = P + w ==> v = w
$
D.h. für $Q = P + x in A$ ist der Vektor $x in V$ eindeutig bestimmt.
][
$P + v = Q + v ==> P = Q$
][
$P + v = Q ==> P = Q + (-v)$
][
Für $Q = P + v in A$ wird $v$ als Verbindungsvektor von $P$ nach $Q$ bezeichnet und man schreibt
$
v = smar(P Q)
$
Es gilt
$
smar(P Q) + smar(Q R) = smar(P R)
$
][
Für $n in NN$ Punkte, $n > 1$, $P_1, ..., P_n in A$ gilt
$
smar(P_1 P_2) + smar(P_2 P_3) + ... + smar(P_(n-1) P_n) = sum_(i = 1)^(n-1) smar(P_i P_(i+1)) = smar(P_1 P_n)
$
][
$
smar(P Q) + smar(Q P) = 0 in V, quad "d.h." quad smar(P Q) = - smar(Q P) in V
$
][
$smar(P Q) = smar(R S) ==> smar(P R) = smar(Q S)$
]
]
#startproof Hier nur der Beweis von einigen Aussagen
zu 1: Wegen der einfachen Transitivität existiert genau ein Vektor $v in V$ mit $Q = P + v = P + w$
#figure(image("bilder2/4_10.JPG", width: 40%))
Formal: $smar(P Q), smar(Q R), smar(P R)$ sind definitionsgemäß die eindeutig bestimmten Vektoren, für die gilt
$
Q = P + smar(P Q), R = Q + smar(Q R), R = P + smar(P R)
$
Damit folgt
$
R = (P + smar(P Q)) + smar(Q R) = P + (smar(P Q) + smar(Q R))
$
zu 7: Sei $smar(P Q) = smar(R S)$. Dann folgt mit 4:
$
smar(P R) = smar(P Q) + smar(Q R) = smar(R S) + smar(Q R) = smar(Q R) + smar(R S) =^("4)") smar(Q S)
$
#endproof
#definition("4.11", [Verbindungsvektor $v_O (P)$])[
Sei $V$ ein $K$-Vektorraum und $A$ ein affiner Raum über $V$. Für einen Punkt $O in A$ definiert man:
$
psi_O: V -> A, quad x arrow.bar P := O + x
$
Aus Lemma 4.10 folgt unmittelbar, dass $psi_O$ eine Bijektion ist. D.h. für alle $P in A$ ist der Vektor $v_O (P)$ das eindeutig bestimmte Element in $V$ mit
$
P = O + v_O (P)
$
] <def>
#definition("4.12", "Dimension affiner Räume")[
Sei $V$ ein $K$-Vektorraum und $A$ ein affiner Raum über $V$. Dann ist
$
dim A := dim V
$
die #bold[Dimension von $A$]. Ist $A = emptyset$, so definiert man $dim A = -1$.
] <def>
Als Verallgemeinerung von Def. 6.5, LinA I:
#definition("4.13", "affiner Unterraum")[
Sei $V$ ein $K$-Vektorraum, $A$ ein affiner Raum über $V$ mit der Verknüpfung $+: V times A -> A$ und $P in A$. Ist $U subset.eq V$ ein Untervektorraum von $V$, so nennt man die Menge
$
B := P + U := {Q in A | exists u in U: Q = u + P}
$
einen #bold[affinen Unterraum] von $A$.
] <def>
#lemma("4.14")[
Sei $B$ ein affiner Unterraum des affinen Raums $A(V)$, d.h. $B subset.eq A(V)$. Damit ist $B$ selbst ein affiner Raum über einen Vektorraum $U subset.eq V$.
]
#startproof Nach Definition existiert zu $B$ ein $P in A(V)$ und ein Unterraum $U subset.eq V$:
$
B = {Q in A | exists u in U: Q = P + u}
$
$A$ affiner Raum $==>$ $exists +: V times A -> A$
Einschränkung auf $B$ liefert:
$
+: V times B -> B
$
$
B = {Q in A | exists v in U: Q = P + v}, quad +: U times B -> B "wohldefiniert?"
$
$forall Q in A, forall v in V$ ist $Q + v$ definiert.
$==>$ $forall Q in B subset.eq A, forall u in U subset.eq V$ ist $Q + u$ wohldefiniert. Des Weiteren gilt: für $Q in B$, $u in U$ sowie $v in U$ mit $Q = P + v$ erhält man
$
Q + u = (P+ v) + u = P + underbrace((v + u), in U) in B
$
Auch bei der Einschränkung auf $B$ bzw. $U$ bleibt die einfache Transitivität erhalten.
#endproof
Analog zu Satz 6.6 aus LinA I kann man zeigen:
#theorem("4.15")[
Sei $V$ ein $K$-Vektorraum, $A$ ein affiner Raum über $V$, $P, tilde(P) in A$ und $U, tilde(U) subset.eq V$ Untervektorräume von $V$. Dann gilt:
#boxedenum[
Für jedes $Q in P + U$ ist $P + U = Q + U$
][
Gilt $P + U = tilde(P) + tilde(U)$, so ist $U = tilde(U)$ und $smar(P tilde(P)) in U = tilde(U)$
]
]
#startproof siehe LinA I
#endproof
#definition("4.16", "Aufpunkt und Richtung")[
Sei $V$ ein $K$-Vektorraum, $A$ ein affiner Raum über $V$ und $A(W)$ ein affiner Unterraum von $A$. Gilt
$
A(W) = P + W
$
so nennt man $P$ einen #bold[Aufpunkt] von $A(W)$ und den Untervektorraum $W$ die #bold[Richtung] von $A(W)$
] <def>
== Lagebeziehungen von affinen Unterräumen
#definition("4.17", "(schwach) parallel")[
Sei $V$ ein $K$-Vektorraum, $A(V)$ ein affiner Raum und $A(W_1), A(W_2)$ zwei affine Unterräume von $A(V)$.
#boxedlist[
$A(W_1)$ und $A(W_2)$ heißen #bold[parallel], wenn $W_1 = W_2$ gilt $(A(W_1) || A(W_2))$
][
$A(W_1)$ und $A(W_2)$ heißen #bold[schwach parallel], falls $W_1 subset W_2$ gilt $(A(W_1) triangle.stroked.small.l A(W_2))$
]
] <def>
#theorem("4.18")[
Sei $V$ ein $K$-Vektorraum, $A(V)$ ein affiner Raum über $V$ und $A(W_1), A(W_2)$ zwei parallele affine Unterräume. Dann gilt entweder $A(W_1) = A(W_2)$ oder $A(W_1) sect A(W_2) = emptyset$
]
#startproof Gilt $A(W_1) || A(W_2) ==> W_1 = W_2$
Annahme: $A(W_1) sect A(W_2) != emptyset$. Dann existiert ein $P in A(W_1) sect A(W_2)$. Satz 4.15 liefert
$
A(W_1) = P + W_1 = P + W_2 = A(W_2)
$
#endproof
Bekannt ist:
#boxedlist[
Ein 0-dimensionaler affiner Unterraum des $RR^3$ heißt Punkt im $RR^3$.
][
Ein 1-dimensionaler affiner Unterraum des $RR^3$ heißt Gerade im $RR^3$.
][
Ein 2-dimensionaler affiner Unterraum des $RR^3$ heißt Ebene im $RR^3$.
]
Verallgemeinerung:
#definition("4.19", "Punkt, Gerade, Ebene")[
Sei $V$ ein $K$-Vektorraum, $A(V)$ ein affiner Raum über $V$ und $A(W)$ ein affiner Unterraum von $A(V)$.
#boxedlist[
Ist $dim(A(W)) = 0$, so heißt $A(W)$ #bold[(affiner) Punkt] von $A(V)$.
][
Ist $dim(A(W)) = 1$, so heißt $A(W)$ #bold[(affine) Gerade] von $A(V)$.
][
Ist $dim(A(W)) = 2$, so heißt $A(W)$ #bold[(affine) Ebene] von $A(V)$.
]
] <def>
#bold[Bemerkung:] Geraden können maximal schwach parallel zu Ebenen sein!
Für Untervektorräume gilt: $dim(U_1 sect U_2) = dim(U_1) + dim(U_2) - dim(U_1 + U_2)$, Satz 3.40, LinA I
#lemma("4.20")[
Es seien $U_1$ und $U_2$ zwei Untervektorräume eines $K$-Vektorraums $V$ sowie $A(U_1) =: A_1$ und $A(U_2) =: A_2$ zwei affine Unterräume des affinen Raums $A(V)$. Ist $A_1 sect A_2 != emptyset$, so ist $A_1 sect A_2$ ein affiner Unterraum von $A(V)$ mit dem zugehörigen Untervektorraum $U_1 sect U_2$ und es gilt
$
dim (A_1 sect A_2) = dim(U_1 sect U_2)
$
]
#startproof Es gilt:
$
A_1 = P_1 + U_1 quad "und" quad A_2 = P_2 + U_2
$
$A_1 sect A_2 != emptyset ==> exists Q in A_1 sect A_2$
$
A_1 sect A_2 = {P in A | exists u_1 in U_1, u_2 in U_2: P = Q + u_1 = Q + u_2}
$
Für jedes Paar $(P, Q)$ von Punkten aus $A$ existiert genau ein Vektor $v in V$ mit $P = Q + v$ (Lemma 4.10, 1).
$
==> u_1 = u_2 ==> A_1 sect A_2 = {P in A | exists u in underbrace(U_1 sect U_2, "UVR"): P = Q + u}
$
$==>$ $A_1 sect A_2$ affiner Raum.
$dim(A_1 sect A_2) = dim(U_1 sect U_2)$ nach Def.
#endproof
#lemma("4.21")[
Es seien $U_1$ und $U_2$ zwei Untervektorräume des $K$-Vektorraums $V$, $A_1 = A(U_1)$ und $A_2 = A(U_2)$ zwei affine Unterräume eines affinen Raums $A(V)$ sowie $P_1 in A_1$ und $P_2 in A_2$ zwei beliebige Punkte
$
A_1 sect A_2 != emptyset <==> smar(P_1 P_2) in U_1 + U_2
$
]
#startproof
"$==>$": $A_1 sect A_2 != emptyset ==> exists Q in A_1 sect A_2$
Dann liegen die Verbindungsvektoren $smar(P_1 Q)$ bzw. $smar(P_2 Q)$ in den jeweiligen Untervektorräumen $U_1$ bzw. $U_2$. Lemma 4.10, 4):
$
smar(P_1 P_2) = underbrace(smar(P_1 Q), in U_1) + underbrace(smar(Q P_2), in U_2) in U_1 + U_2 space checkmark
$
"$<==$": Sei $smar(P_1 P_2) in U_1 + U_2$ $==>$
$exists u_1 in U_1, u_2 in U_2: smar(P_1 P_2) = u_1 + u_2$. Setzt man $Q := P_1 + u_1 in A_1$, so gilt
$
Q &= P_1 + u_1 = P_1 + ((u_1 + u_2) - u_2) = P_1 + (smar(P_1 P_2) - u_2) \
&= (P_1 + smar(P_1 P_2)) - u_2 = underbrace(P_2, in A_2) + underbrace((-u_2), in U_2) in A_2
$
$==> A_1 sect A_2 != emptyset$
#endproof
#definition("4.22", "affine Hülle, Verbindungsraum")[
Sei $M subset A(V)$ eine Teilmenge eines affinen Raumes $A(V)$ über einem $K$-Vektorraum $V$. Der kleinste affine Unterraum von $A$, der $M$ enthält, wird #bold[affine Hülle] von $M$ genannt und mit $angle.l M angle.r_"aff"$ bezeichnet.
Sind $A(U_1)$ und $A(U_2)$ zwei affine Unterräume eines affinen Raums $A(V)$, so bezeichnen wir die affine Hülle $hull(A(U_1) union A(U_2))$ als #bold[Verbindungsraum] von $A(U_1)$ und $A(U_2)$.
] <def>
#lemma("4.23")[
Seien $U_1, U_2 subset.eq V$ zwei Untervektorräume des $K$-Vektorraums $V$, $A_1 = A(U_1)$ und $A_2 = A(U_2)$ zwei affine Unterräume eines affinen Raums $A(V)$, sowie $P_1 in A_1$ und $P_2 in A_2$, d.h.
$
A_1 = P_1 + U_1 space "und" space A_2 = P_2 + U_2
$
Dann ist der Verbindungsraum durch
$
hull(A_1 union A_2) = P_1 + ("Span"(smar(P_1 P_2)) + U_1 + U_2)
$
bestimmt.
]
#startproof Sei $U$ der Untervektorraum zu $hull(A_1 union A_2)$. Nach Definition gilt
$
A_1 union A_2 subset.eq hull(A_1 union A_2)
$
also auch
$
A_1 = P_1 + U_1 subset.eq hull(A_1 union A_2) "und" \
A_2 = P_2 + U_2 subset.eq hull(A_1 union A_2)
$
$==> U_1 subset.eq U, U_2 subset.eq U$. $P_1, P_2 in hull(A_1 union A_2)$ und $hull(A_1 union A_2)$ affiner Unterraum $==>$ $smar(P_1 P_2) in U$
Damit erhalten wir
$
P_1 + "Span"(smar(P_1 P_2)) subset.eq hull(A_1 union A_2)
$
Man kann sich überlegen:
Für Teilmengen $M_1, M_2 subset.eq V$, $V$ Vektorraum, gilt
$
&"Span"{M_1 union M_2} = "Span"{M_1} + "Span"{M_2} \
&==> "Span"{smar(P_1 P_2)} + U_1 + U_2 subset.eq U \
&==> P_1 + "Span"{smar(P_1 P_2)} + U_1 + U_2 subset.eq hull(A_1 union A_2)
$
Gleichheit gilt nach Definition der affinen Hülle.
#endproof
#theorem("4.24")[
#bold[Dimensionssatz]
Seien $U_1, U_2$ zwei Untervektorräume eines $K$-Vektorraums $V$ sowie $A_1 = P_1 + U_1$ und $A_2 = P_2 + U_2$ zwei affine Unterräume eines affinen Raums $A(V)$. Dann gilt
#boxedenum[
Ist $A_1 sect A_2 != emptyset$, so ist
$
dim hull(A_1 union A_2) &= dim(U_1 + U_2) \
&= dim(A_1) + dim(A_2) - dim(U_1 sect U_2) \
&= dim(A_1) + dim(A_2) - dim(A_1 sect A_2)
$
][
Ist $A_1 sect A_2 = emptyset$, so ist
$
dim hull(A_1 union A_2) &= dim(U_1 + U_2) + 1 \
&= dim(A_1) + dim(A_2) - dim(U_1 sect U_2) + 1
$
]
]
#startproof
zu 1) $A_1 sect A_2 != emptyset$ $==>$ Lemma 4.21: $smar(P_1 P_2) in U_1 + U_2$. Mit Lemma 4.23:
$
hull(A_1 union A_2) = P_1 + ("Span"{smar(P_1 P_2)} + U_1 + U_2) = P_1 + (U_1 + U_2)
$
Satz 3.40, LinA I (Dimensionssatz für UVR)
$
dim(U_1 + U_2) = dim(U_1) + dim(U_2) - dim(U_1 sect U_2)
$
Die Aussage folgt dann aus Lemma 4.20.
zu 2) Lemma 4.21: $smar(P_1 P_2) in.not U_1 + U_2$ $==>$
$
dim("Span"{smar(P_1 P_2)} + U_1 + U_2) = 1 + dim(U_1) + dim(U_2) - dim(U_1 sect U_2)
$
#endproof
== Affine Abbildungen
#definition("4.25", "affine Abbildung")[
Seien $A(V)$ und $A(W)$ zwei affine Räume über dem $K$-Vektorraum $V$ und $W$. Eine Abbildung $f: A(V) -> A(W)$, d.h. zwischen den Mengen, die $A(V)$ und $A(W)$ zugrundeliegen, heißt affine Abbildung, falls ein Punkt $P in A(V)$ existiert, so dass die Abbildung
$
smar(f_P) : V -> W, quad smar(f_P) (smar(P Q)) := smar(f(P) f(Q)) wide forall Q in A(V)
$
linear ist.
] <def>
#bold[Beispiel 4.26:] Für $n, m in NN$ sei $A in RR^(m,n)$, $b in RR^m$
$
g: RR^n -> RR^m, quad g(x) := A x + b quad "für" b != 0 "nicht linear!"
$
Ist diese Abbildung affin? Dazu: $V := RR^n, W = RR^m$, $A(V) = RR^n$, $A(W) = RR^m$, $P = "?"$, $g_P = "?"$
Sei $P in A(V)$ beliebig gewählt, $v in V$ und $Q := P + v$. Dann gilt:
$
g(P) = A P + b quad g(Q) = A Q + b = A (P + v) + b = A v + g(P)
$
Damit setzen wir
$
smar(g_P) (v) = smar(g(P) g(Q)) = A v
$
D.h. die resultierende Abbildung
$
smar(g_P): V -> W, quad smar(g_P) (v) = A v
$
ist linear, also ist $g$ affin
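Eine kleine numerische Überprüfung (Python/NumPy-Skizze, Zahlenwerte frei gewählt): der Verbindungsvektor der Bilder, $smar(g(P) g(Q)) = g(Q) - g(P)$, ist $A v$ und hängt nicht vom gewählten Punkt $P$ ab.
```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 0.0]])          # A in R^{3,2}
b = np.array([1.0, -1.0, 2.0])      # b in R^3, b != 0: g ist nicht linear

def g(x):
    return A @ x + b

v = np.array([2.0, -1.0])
for P in (np.zeros(2), np.array([5.0, 7.0])):   # zwei verschiedene Aufpunkte
    Q = P + v
    print(np.allclose(g(Q) - g(P), A @ v))      # True: g(Q) - g(P) = A v, unabhaengig von P
```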
#sect_delim
Sind die Eigenschaften von $smar(f_P)$ abhängig von der Wahl von $P$?
#lemma("4.27")[
Die Definition einer affinen Abbildung $f: A(V) -> A(W)$ ist unabhängig von dem in der Definition angegebenen Punkt $P$.
]
#startproof Zuerst: Zeige für $v in V$ beliebig, dass das Bild $smar(f_P) (v) in W$ unabhängig von $P$ ist. Dazu sei $Q in A(V)$ beliebig gewählt. Für $R := Q + v in A(V)$ gilt $Q, R in A(V)$, $v = smar(Q R)$. Wegen
$
smar(P R) = smar(P Q) + smar(Q R) ==> v = smar(P R) - smar(P Q)
$
$smar(f_P)$ linear $==>$
$
smar(f_P) (v) = smar(f_P) (smar(P R) - smar(P Q)) = smar(f_P) (smar(P R)) - smar(f_P) (smar(P Q)) = smar(f(P) f(R)) - smar(f(P) f(Q)) = op(plus.circle.arrow) \
(smar(f(P) f(R)) = smar(f(P) f(Q)) + smar(f(Q) f(R))) \
op(plus.circle.arrow) = smar(f(Q) f(R))
$
$==>$ $smar(f_P) (v)$ ist unabhängig von $P$.
#endproof
#bold[Bemerkung:] Ist $f: A(V) -> A(W)$ eine affine Abbildung, so erlaubt Lemma 4.27, die durch $f$ induzierte lineare Abbildung $smar(f_P) in L(V,W)$ mit $smar(f) in L(V, W)$ zu bezeichnen. Damit haben wir zwei Möglichkeiten $smar(f)$ zu charakterisieren: $P, Q in A(V)$
$
smar(f) (smar(P Q)) = smar(f(P) f(Q)) <==> f(Q) = f(P) + smar(f) (smar(P Q))
$
#definition("4.28", "affine Selbstabbildung, Fixpunkt")[
Seien $A(V)$, $A(W)$ zwei affine Räume mit zugehörigen $K$-Vektorräumen $V$ und $W$. Dann definiert man
$
A(V, W) := {f: A(V) -> A(W) | f "affin"}
$
Eine affine Abbildung $f: A(V) -> A(V)$ nennt man #bold[affine Selbstabbildung]. Für ein $f in A(V, V)$ nennt man einen Punkt $P in A(V)$ mit $f(P) = P$ #bold[Fixpunkt von $f$]. Die Menge der bijektiven affinen Selbstabbildungen bezeichnet man mit
$
"GA"(V) := {f: A(V) -> A(V) | f "affin und bijektiv"}
$
] <def>
#bold[Bemerkungen:]
#boxedlist[
Die Menge $"GA"(V)$ bildet eine Gruppe bzgl. der Komposition von Abbildungen. Sie wird deswegen auch #bold[affine Gruppe] zum $K$-Vektorraum $V$ gennant.
][
Betrachtet man einen Vektorraum $V$, als $A(V)$ über sich selbst, so lässt sich jede lineare Abbildung $f in L(V,V)$ als affine Abbildung interpretieren
$
f_P : V -> V, quad x arrow.bar 0_V + smar(f)(smar(0_V x)) = f(x)
$
Diese Abbildung besitzt immer den Fixpunkt $0_V$, denn $f_P (0_V) = f(0_V) = 0_V$.
]
#lemma("4.29")[
Seien $f in A(V, W)$ und $A(V')$ ein affiner Unterraum von $A(V)$. Dann ist das Bild $f(A(V'))$ ein affiner Unterraum von $A(W)$ mit der Richtung $smar(f) (V')$.
]
#startproof Nach Definition existiert ein $P in A(V')$ mit der Eigenschaft
$
A(V') = P + V'
$
$f in A(V, W)$ induziert eine lineare Abbildung $smar(f) in L(V,W)$. Für diese gilt:
$
f(A(V')) = f(P + V') = f(P) + smar(f) (V')
$
#endproof
Man kann sich relativ leicht überlegen:
Ist $f in "GA"(V)$, so werden mittels $f$ (affine) Geraden und Ebenen wieder in (affine) Geraden und Ebenen überführt. Deswegen nennt man eine Abbildung $f in "GA"(V)$ auch #bold[geradentreu]. Vgl. Lemma 4.37, Satz 4.39.
Beispiele für affine Abbildungen.
#definition("4.30", "Translation")[
Sei $V$ ein $K$-Vektorraum, $v in V$ und $A(V)$ ein affiner Raum. Dann heißt die Abbildung
$
f_v: A(V) -> A(V), quad f_v (P) = P + v
$
#bold[Verschiebung] oder #bold[Translation] um den Vektor $v$.
] <def>
#lemma("4.31")[
Für eine Translation $f_v$ gilt
$
f_v in "GA"(V)
$
]
#startproof
$f_v$ bijektiv: einfach zu zeigen
$f_v$ affin: Seien $P, Q in A(V)$. Dann gilt:
$
f_v (Q) &= f_v (P) + smar(f_v (P) f_v (Q)) \
&= P + v + smar(f_v) (smar(P Q)) \
&= Q + smar(Q P) + v + smar(f_v) (smar(P Q))
$
Des Weiteren gilt
$
&f_v (Q) = Q + v quad (==> Q + v = Q + smar(Q P) + v + smar(f_v) (smar(P Q))) \
&==> v = smar(Q P) + v + smar(f_v) (smar(P Q)) \
&==> smar(f_v) (smar(P Q)) = - smar(Q P) = smar(P Q) ==> smar(f_v) = "Id"_V \
&==> smar(f_v) in L(V,V)
$
#endproof
#bold[Bemerkungen:]
#boxedlist[
Nicht jede affine Abbildung besitzt einen Fixpunkt, z.B. hat jede Translation um $v != 0_V$ keinen Fixpunkt
][
Die Menge der Translationen werden mit
$
T(V) = {f in "GA"(V) | exists v in V: f = f_v}
$
zusammengefasst. Diese Menge bildet eine Untergruppe von $"GA"(V)$.
]
#corollary("4.32")[
$
D(V) = {f in "GA"(V) | exists lambda in K: smar(f) = lambda "Id"_V}
$
ist eine Verallgemeinerung von $T(V)$ und bildet wieder eine Untergruppe von $"GA"(V)$, wobei $T(V)$ eine Untergruppe von $D(V)$ ist. Die Elemente von $D(V)$ nennt man #bold[Dilationen].
]
#lemma("4.33")[
Es sei $f in D(V) without T(V)$, d.h. $smar(f) = lambda "Id"_V$ mit $lambda != 1$. Dann existiert ein eindeutig bestimmter Punkt $Z in A(V)$ mit
$
f(P) = Z + lambda smar(Z P) quad forall P in A(V)
$
]
#startproof ÜA
#endproof
#definition("4.34", "zentrische Streckung")[
Ist $f in D(V) without T(V)$ und besitzt $f$ den Fixpunkt $Z in A(V)$, so nennt man $f$ #bold[zentrische Streckung] mit dem #bold[Zentrum] $Z$ und dem Streckungsfaktor $lambda != 1$.
] <def>
Dafür zunächst noch:
#definition("4.35", "affin unabhängig")[
Die $(n+1)$ Punkte $P_0, ..., P_n in A(V)$ heißen #bold[affin unabhängig], falls die $n$ Verbindungsvektoren
$
smar(P_0 P_1), smar(P_0 P_2), ..., smar(P_0 P_n) in V
$
linear unabhängig sind.
] <def>
#definition("4.36", "kollinear")[
Drei Punkte $P, Q, R in A(V)$ heißen #bold[kollinear], falls eine affine Gerade $A(W) subset.eq A(V)$ existiert, so dass $P, Q, R in A(W)$.
] <def>
#lemma("4.37")[
Ist $f in "GA"(V)$ und sind $P, Q, R in A(V)$ kollinear, so sind auch die Bildpunkte kollinear.
]
#startproof Nach Voraussetzung existiert eine affine Gerade $A(W)$ mit $P, Q, R in A(W)$
$f in "GA"(V)$ $==>$ (Ü A) $f$ bildet Geraden auf Geraden ab $==>$ Bildpunkte sind kollinear
#endproof
Hilfsresultat:
#lemma("4.38")[
Es sei $sigma: RR -> RR$ eine bijektive Abbildung, welche additiv und multiplikativ ist, d.h. es gilt
$
sigma(x+y) = sigma(x) + sigma(y) space "und" space sigma(x dot y) = sigma(x) dot sigma(y)
$
Dann ist $sigma = "Id"$.
]
#startproof Aus der Additivität folgt sofort $sigma(0) = 0$. Wegen $sigma(0) = 0$ und $sigma(1) != 0$ folgt aus der Multiplikativität $sigma(1) = 1$. Mit der Additivität erhält man $sigma(n) = n, forall n in NN$. Aus der Additivität und $sigma(0) = 0$ folgt, dass $sigma(-x) = - sigma(x)$
$
==> sigma(y) = y quad forall y in ZZ
$
Jetzt: $r in QQ, "d.h." r = p/q "mit" p, q in ZZ, q != 0$
$
p = sigma(p) = sigma(r dot q) = sigma(r) dot sigma(q) = sigma(r) dot q \
==> sigma(r) = r quad forall r in QQ
$
Wenn $sigma$ stetig wäre, wären wir fertig. Das wissen wir aber nicht. Zeige zunächst, dass $sigma$ monoton wachsend ist.
Ist $x >= 0$ $==>$ $exists y in RR: x = y^2$. Dann gilt:
$
sigma(x) = sigma(y^2) = sigma(y) dot sigma(y) >= 0
$
Ist also $a >= b$, also $a-b>=0$
$
&==> sigma(a - b) >= 0 \
&==> 0 <= sigma(a- b) = sigma(a) - sigma(b) \
&==> sigma(b) <= sigma(a)
$
Sei $x in RR$, dann kann man $x$ durch zwei monotone, rationale Zahlenfolgen ${hat(r)_n}$ von unten und ${caron(r)_n}$ von oben approximieren. Damit gilt
$
... <= hat(r)_n <= hat(r)_(n+1) <= ... <= x <= ... <= caron(r)_(n+1) <= caron(r)_n <= ...
$
Anwendung von $sigma$ liefert
$
... <= hat(r)_n <= hat(r)_(n+1) <= ... <= sigma(x) <= ... <= caron(r)_(n+1) <= caron(r)_n <= ...
$
$==>$ $abs(x - sigma(x)) <= caron(r)_n - hat(r)_n quad forall n in NN$
Für $n -> oo$ folgt $sigma(x) = x$.
#endproof
#theorem("4.39")[
#bold[Hauptsatz der affinen Geometrie]
Sei $K = RR$ und $A(V)$ ein affiner Raum der Dimension $n >= 2$. Ist $f: A(V) -> A(V)$ eine bijektive Abbildung, die je drei kollineare Punkte $P, Q, R in A(V)$ in drei kollineare Punkte $f(P), f(Q), f(R) in A(V)$ abbildet, so gilt $f in "GA"(V)$
]
#bold[Folgerung:] Wir hatten schon, dass bijektive affine Abbildungen geradentreu sind. Damit erhalten wir das Gesamtresultat:
#theorem("")[
Für $K = RR$ gilt:
Eine bijektive Abbildung $f: A(V) -> A(V)$ ist genau dann geradentreu, wenn sie affin ist.
]
Damit erhält man für $O in A(V)$
#align(center, stack(
dir: ltr,
figure(image("bilder2/4_39.png", width: 40%)),
align(center, [
#v(2.5cm)
$==> #h(1cm) smar(f) "bij."$
#v(2.5cm)
])
))
#startproof Der Beweis besteht aus 5 Schritten.
#enum[
Sind $A, B, C in A(V)$ affin unabhängig so sind auch $f(A), f(B), f(C)$ affin unabhängig.
][
Ist $A(W)$ eine affine Gerade in $A(V)$, so ist auch $f(A(W))$ eine affine Gerade.
][
Sind $A(W), A(tilde(W))$ parallele Geraden in $A(V)$, so sind $f(A(W))$ und $f(A(tilde(W)))$ auch parallele Geraden in $A(V)$.
]
$==>$ ÜA
#enum(start: 4)[
$smar(f): V -> V$ ist additiv, d.h. $smar(f)(x+y) = smar(f)(x) + smar(f)(y)$
][
$smar(f): V -> V$ ist homogen, d.h. $smar(f)(lambda x) = lambda smar(f)(x)$
]
#startproof
zu 4)
Seien $x, y in V$ beliebig gewählt:
Fall 1: $x, y$ sind linear abhängig
$==> y = lambda x ==> x + y = (1+lambda) x$. Z.z: $smar(f)(x) + smar(f)(y) = (1+ lambda) smar(f)(x) quad$ siehe 5 Homogenität
Fall 2: $x, y$ sind linear unabhängig
$f$ bijektiv $==>$ es gibt eindeutig bestimmte Punkte $A, B in A(V)$, so dass für $O in A(V)$ gilt:
$
x = smar(O A) quad "und" quad y = smar(O B)
$
Wir betrachten das durch die Punkte $O, A, B$ erzeugte "Parallelogramm"
#figure(image("bilder2/4_39_2.png", width: 50%))
Wegen 2) und $A(W_1) || A(tilde(W)_1)$ bzw. $A(W_2) || A(tilde(W)_2)$ sind $f(A(W_1)), f(A(tilde(W)_1)), f(A(W_2))$ und $f(A(tilde(W)_2))$ wieder affine Geraden. Wegen 3) gilt $f(A(W_1)) || f(A(tilde(W)_1))$ und $f(A(W_2)) || f(A(tilde(W)_2))$. Damit erhält man:
#figure(image("bilder2/4_39_3.png", width: 50%))
Wegen $x + y = smar(O A) + smar(O B) = smar(O C)$ folgt:
$
smar(f)(x + y) = smar(f)(smar(O C)) = smar(f(O) f(C)) = smar(f(O) f(A)) + smar(f(O) f(B)) = smar(f)(x) + smar(f)(y)
$
zu 5)
Es sei $x in V$, $x != 0$. Wir betrachten die affine Gerade $N subset A(V)$ mit dem Aufpunkt $O$ und der Richtung $"Span"{x}$, d.h.
$
N = O + "Span"{x}
$
Mit 2) $==>$ $f(N) subset A(V)$ ist eine affine Gerade mit dem Aufpunkt $f(O)$ und der Richtung $"Span"{smar(f)(x)}$, d.h.
$
f(N) = f(O) + "Span"{smar(f)(x)}
$
Sei $P in N$, d.h. $P = O + lambda x$. Dann gilt für $f(P) in f(N)$, dass
$
f(P) = f(O) + smar(f)(lambda x) = f(O) + tilde(lambda) smar(f)(x)
$
Da $f$ bijektiv ist, ist $tilde(lambda)$ eindeutig durch $lambda$ bestimmt. Dies definiert eine bijektive Abbildung $sigma: RR -> RR$. Ist $sigma$ additiv und multiplikativ so folgt mit Lemma 4.38 $lambda = tilde(lambda)$ $==>$ 5)
Noch zu zeigen: $sigma$ ist additiv und multiplikativ
Da $dim(A(V)) >= 2$ existiert $y in V$, das von $x$ linear unabhängig ist. Für $x in V, lambda, mu in RR$ gilt:
$
lambda x + mu x = (lambda + mu ) x
$
sowie
#figure(image("bilder2/4_39_4.png", width: 50%))
Mit 2) + 3) folgt analog zu Schritt 4)
#figure(image("bilder2/4_39_5.png", width: 50%))
Als Beschreibung in Formeln erhält man:
$
&smar(f)(lambda x) + smar(f)(mu x) = smar(f)((lambda+ mu) x) \
==> &tilde(lambda) smar(f)(x) + tilde(mu) smar(f)(x) = tilde((lambda + mu)) smar(f)(x) \
==> &sigma(lambda) + sigma(mu) = sigma(lambda + mu) \
==> &sigma "ist additiv"
$
Wieder mit Hilfe von $y$ und $lambda(mu x) = (lambda mu) x$, $forall lambda, mu in RR$ kann man Parallelen konstruieren:
#figure(image("bilder2/4_39_6.png", width: 100%))
Aus dem Strahlensatz folgt
$
&tilde(lambda mu) = tilde(lambda) tilde(mu) \
==> &sigma(lambda mu) = sigma(lambda) sigma(mu) \
==> &sigma "ist multiplikativ"
$
#endproof
#pagebreak()
= Projektive Geometrie
In Kapitel IV haben wir gezeigt:
Für zwei affine Geraden $L_1 = P_1 + W_1$ und $L_2 = P_2 + W_2$ gilt in einem zweidimensionalen affinen Raum $A(V)$, d.h. $dim(V) = 2$, $dim(W_1) = dim(W_2) = 1$:
Ist $W_1 = W_2$, so sind $L_1$ und $L_2$ parallel. Satz 4.18: $L_1 = L_2$ oder $L_1 sect L_2 = emptyset$.
Ist $W_1 != W_2$, so ist $W_1 sect W_2 = {0}$ und $"Span"{W_1, W_2} = V$. Aus der Bemerkung nach Definition 4.11 folgt dann $v_(P_1) (P_2) in "Span"{W_1, W_2} = V$. Also ist $v_(P_1) (P_2) = w_1 + w_2$, $w_1 in W_1, w_2 in W_2$. $==>$ $P_1 + w_1 = P_2 - w_2 in L_1 sect L_2$
Satz 4.24/Dimensionssatz für affine Räume liefert:
$
0 = dim(W_1 sect W_2) = dim hull(L_1 union L_2) - dim(W_1) - dim(W_2) \
==> L_1 sect L_2 = {P_1 + w_1}
$
Insgesamt: Entweder sind zwei affine Geraden parallel oder sie haben genau einen Schnittpunkt.
Jetzt: Projektive Räume $==>$ zwei verschiedene Geraden besitzen immer genau einen Schnittpunkt.
Als Motivation dafür:
Betrachte die affine Ebene
$
E = {vec(x_1, x_2, x_3) in RR^3 | x_1, x_2 in RR, x_3 = 1} subset RR^3
$
sowie
$
cal(G) &:= {G subset RR^3 | G "ist eine Gerade durch" 0 in RR^3 and G subset.not {x_1, x_2}"-Ebene"} \
&= {G subset RR^3 | G "ist 1-dimensionaler Unterraum", G subset.not {x_1, x_2}"-Ebene"}
$
#figure(image("bilder2/5_1.png", width: 40%))
Damit gibt es eine Bijektion
$
E -> cal(G), quad P arrow.bar G := "Span"{smar(0 P)}
$
Die Umkehrabbildung ist gegeben durch
$
cal(G) -> E, quad G arrow.bar P := E sect G
$
Also kann man $E$ mit $cal(G)$ identifizieren.
Hauptsatz der affinen Geometrie:
Die affine Struktur von $E$ wird im wesentlichen durch die Menge der affinen Geraden in $E$ charakterisiert.
Zusammenhang mit $cal(G)$?
$
cal(E) := {tilde(E) subset RR^3 | tilde(E) corres "Ebene durch" 0 in RR^3 and tilde(E) != (x_1, x_2)"-Ebene"}
$
Dies liefert die Bijektion
$
cal(E) -> { G subset E | G "affine Gerade" }, quad tilde(E) arrow.bar G = E sect tilde(E)
$
Bedingung $tilde(E) != (x_1, x_2)$-Ebene vernachlässigen?
$==>$ liefert projektive Ebene
$
macron(cal(E)) = {macron(E) subset RR^3 | macron(E) "ist 2-dimensionaler Unterraum"}
$
beziehungsweise projektive Gerade
$
macron(cal(G)) = {G subset RR^3 | G "ist 1-dimensionaler Unterraum"}
$
Die Elemente von $macron(cal(G))$, welche durch eindimensionale Unterräume der $(x_1,x_2)$-Ebene gegeben sind, nennt man unendlich ferne Punkte der affinen Ebene $E$.
#definition("5.1", "Projektiver Raum")[
Sei $V$ ein endlichdimensionaler $K$-Vektorraum. Der #bold[projektive Raum] $P(V)$ zu $V$ ist gegeben durch
$
P(V) := { G subset V | G = "eindimensionaler Unterraum" }
$
Man setzt
$
dim P(V) = dim V -1
$
als Dimension von $P(V)$. Für $V=K^n$, $n in NN$, nutzt man auch
$
P^(n-1) (K) := P(K^n)
$
] <def>
#bold[Bemerkungen:]
#boxedlist[
Man kann auch $P(V) = { "Span"(x) | x in V without {0}}$ definieren
][
Im Fall $V = {0}$ gilt $P(V) = emptyset$ und $dim P(V) = -1$. Ist $dim V = 1$, so besteht $P(V)$ aus einem Punkt und $dim P(V) = 0$.
][
Die Abbildung
$
p: V without {0} -> P(V), quad v arrow.bar "Span"{v}
$
ist surjektiv. Denn: Jede Gerade in $V$ wird von einem $v in V, v != 0$ aufgespannt. Es gilt für $v != 0 != w, v, w in V$:
$
p(v) = p(w) &<==> "Span"{v} = "Span"{w} \
&<==> v in "Span"{w} <==> v = lambda w, lambda != 0
$
]
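Ein einfaches Zusatzbeispiel zur letzten Bemerkung: In $P^2 (RR) = P(RR^3)$ gilt
$
p((1, 2, 3)) = "Span"{(1, 2, 3)} = "Span"{(2, 4, 6)} = p((2, 4, 6)),
$
denn $(2, 4, 6) = 2 dot (1, 2, 3)$.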
#definition("5.2", "Projektiver Unterraum")[
Sei $P(V)$ ein projektiver Raum zu dem endlichdimensionalen $K$-Vektorraum $V$. Eine Teilmenge $N subset.eq P(V)$ heißt #bold[projektiver Unterraum] von $P(V)$, falls ein Unterraum $U subset.eq V$ existiert, so dass $N = P(U)$ gilt.
] <def>
#bold[Beispiel 5.3:] Es sei $V = RR^3$ und $U subset.eq V$ ein 2-dimensionaler Unterraum, d.h. eine Ebene durch $0 in V$. Dann ist $P(U) subset.eq P(V)$ eine Gerade in $P^2 (RR)$.
#definition("5.4", "projektiver Punkt, Gerade, Ebene")[
Sei $V$ ein endlichdimensionaler $K$-Vektorraum, $P(V)$ ein projektiver Raum über $V$ und $P(U)$ ein projektiver Unterraum von $P(V)$.
#boxedlist[
Ist $dim(P(U)) = -1$, so ist $P(U) = emptyset$
][
Ist $dim(P(U)) = 1$, so heißt $P(U)$ (projektive) Gerade
][
Ist $dim(P(U)) = 2$, so heißt $P(U)$ (projektive) Ebene
][
Ist $dim(P(U)) = dim(P(V)) - 1$, so heißt $P(U)$ (projektive) Hyperebene
]
] <def>
Beziehung zur affinen Geometrie?
Wir betrachten für $K^(n+1)$, $n in NN$, den Unterraum
$
W := {(x_0, ..., x_n) | x_0 = 0}
$
Er bestimmt eine Hyperebene $H := P(W) subset P^n (K)$ mit der Form
$
H := { "Span"{x_0, ..., x_n} in P^k | x_0 = 0 }
$
Damit erhält man eine kanonische Einbettung des affinen Raums $K^n$ in den projektiven Raum $P^n (K)$. Für $K = RR$, $n = 2$ kann man dies so skizzieren:
//bild
Allgemein: Man kann Punkte des $K^n$ mit den nicht in $W$ enthaltenen Geraden durch 0 in $K^(n+1)$ identifizieren.
#definition("5.5", "")[
Es seien $P(U_1)$ und $P(U_2)$ zwei projektive Unterräume eines projektiven Raums $P(V)$. Dann wird der kleinste projektive Raum, der $P(U_1)$ und $P(U_2)$ enthält, mit $P(U_1, U_2)$ bezeichnet.
]
Ziel: Dimensionssatz
Dazu:
#lemma("5.6")[
Es seien $P(U_1)$ und $P(U_2)$ zwei projektive Unterräume eines projektiven Raums $P(V)$. Dann gilt:
$
P(U_1, U_2) = P(U_1 + U_2)
$
]
#startproof $P(U_1, U_2)$ ist der kleinste projektive Raum, so dass
$
P(U_1) subset P(U_1, U_2), quad P(U_2) subset P(U_1, U_2)
$
Es existiert ein Unterraum $U$: $P(U_1, U_2) = P(U)$ mit $U_1, U_2 subset U$.
Annahme: $exists u in U: u != u_1+u_2$ mit $u_1 in U_1, u_2 in U_2$
$
==>& 0 != macron(u) := underbrace(u, in U) - underbrace((u_1 + u_2), in U) in U \
==>& "Span"{macron(u)} in P(U) = P(U_1, U_2) \
$
$==> arrow.zigzag$ Dann wäre $P(U_1, U_2)$ nicht der kleinste projektive Unterraum, der $P(U_1)$ und $P(U_2)$ enthält.
$
==>& forall u in U: u = u_1 + u_2 ==> U = U_1 + U_2 \
==>& P(U) = P(U_1 + U_2)
$
#endproof
#lemma("5.7")[
Es seien $P(U_1)$ und $P(U_2)$ zwei projektive Unterräume eines projektiven Raums $P(V)$. Dann gilt:
$
P(U_1) sect P(U_2) = P(U_1 sect U_2)
$
]
#startproof
$
"Span"{x} in P(U_1) sect P(U_2) <==> x in U_1 without {0} and x in U_2 without {0} \
<==> x in (U_1 sect U_2) without {0} <=> "Span"{x} in P(U_1 sect U_2)
$
#endproof
#theorem("5.8")[
#bold[Dimensionssatz]
Seien $P(U_1)$ und $P(U_2)$ zwei projektive Unterräume eines projektiven Raumes $P(V)$. Dann gilt
$
dim(P(U_1, U_2)) = dim(P(U_1)) + dim(P(U_2)) - dim(P(U_1) sect P(U_2))
$
]
#startproof
$
dim(P(U_1, U_2)) =^"5.6"& dim(P(U_1 + U_2)) \
=& dim(U_1 + U_2) - 1 \
=& dim(U_1) + dim(U_2) - dim(U_1 sect U_2) -1 \
=& dim(U_1) -1 + dim(U_2) - 1 -(dim(U_1 sect U_2) -1) \
=& dim(P(U_1)) + dim(P(U_2)) - dim(P(U_1 sect U_2))
$
#endproof
Damit folgt unmittelbar das Resultat für die Geraden.
#corollary("5.9")[
Sei $V$ ein $K$-Vektorraum und $P(V)$ ein projektiver Raum über $V$ mit $dim(P(V)) = n >= 0$. Sind $P(U_1)$ und $P(U_2)$ zwei projektive Unterräume von $P(V)$ mit $dim(P(U_1)) + dim(P(U_2)) >= n$, so gilt $P(U_1) sect P(U_2) != emptyset$.
]
#startproof
$
dim(P(U_1)) + dim(P(U_2)) >= n, quad dim(P(U_1, U_2)) <= n
$
Mit dem Dimensionssatz folgt
$
dim(P(U_1) sect P(U_2)) = underbrace(dim(P(U_1)) + dim(P(U_2)), >= n) - underbrace(dim(P(U_1, U_2)), <= n) >= 0
$
$==> P(U_1) sect P(U_2) != emptyset$
#endproof
#bold[Folgerung:] Ist $P(V)$ ein zweidimensionaler projektiver Raum, so besitzen je zwei projektive Geraden in $P(V)$ einen Schnittpunkt.
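Ein konkretes Zusatzbeispiel dazu: In $P^2 (RR) = P(RR^3)$ seien $U_1 = {x in RR^3 | x_3 = 0}$ und $U_2 = {x in RR^3 | x_2 = 0}$ zwei verschiedene Ebenen durch $0$, also $P(U_1) != P(U_2)$ zwei projektive Geraden. Mit dem Dimensionssatz folgt
$
dim(P(U_1) sect P(U_2)) = 1 + 1 - dim(P(U_1, U_2)) = 2 - 2 = 0
$
Der Schnitt ist also genau ein Punkt, hier $P(U_1 sect U_2) = {"Span"{(1, 0, 0)}}$.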
Seien $P(V)$ und $P(W)$ zwei projektive Räume zu den $K$-Vektorräumen $V$ und $W$ sowie $p_V : V without {0} -> P(V)$ und $p_W : W without {0} -> P(W)$ die entsprechenden Projektionsabbildungen. Wie erhält man eine Abbildung
$
f: P(V) -> P(W) space ?
$
Ist $smar(f) in L(V, W)$, so könnte man durch die Zuordnung
$
"Span"{ast} -> "Span"{smar(f)(x)}
$
eine Abbildung definieren. Problem: Was passiert für $x in ker(smar(f))$?
Dies motiviert:
#definition("5.10", "Projektive Abbildungen")[
Seien $P(V)$ und $P(W)$ zwei projektive Räume zu den $K$-Vektorräumen $V$ und $W$. Eine Abbildung $f: P(V) -> P(W)$ heißt #bold[projektive Abbildung], falls eine injektive lineare Abbildung $smar(f): V -> W$ existiert, so dass für alle $"Span"{x} in P(V)$ die Gleichung
$
f("Span"{x}) = "Span"{smar(f)(x)}
$
gilt. Die Abhängigkeit der projektiven Abbildung $f$ von $smar(f)$ wird mit $f = P(smar(f))$ bezeichnet.
] <def>
#theorem("5.11")[
#bold[Hauptsatz der projektiven Geometrie]
Seien $P(V)$ und $P(W)$ projektive Räume zu den $RR$-Vektorräumen $V$ und $W$ mit $dim(P(V)) = dim(P(W)) >= 2$. Dann gilt: Bildet die projektive Abbildung $f: P(V) -> P(W)$ je drei Elemente $P, Q, R in P(V)$, die in einer projektiven Geraden liegen in drei Elemente $f(P), f(Q), f(R) in P(W)$ ab, die wieder in einer projektiven Geraden liegen, so ist $f$ bijektiv.
]
#startproof Siehe Fischer, Analytische Geometrie, Satz 33.9
#endproof
#pagebreak()
= Tensorprodukt
Sei $K$ Körper, $V, W, U$ VR über $K$
$
B(V, W; U) = {beta: V times W -> U | beta "bilinear"}
$
d.h. für alle $v in V$ ist $beta(v, dot) in L(W, U)$ und für alle $w in W$ ist $beta(dot, w) in L(V, U)$.
#bold[Beispiel 6.1:] $K$ Körper, $V := W := K[x]$, $U := K[x_1, x_2]$, $beta in B(K[x], K[x]; K[x_1, x_2])$
$
beta(p, q)(x_1, x_2) := p(x_1) q(x_2) \
p(x) = x^2, q(x) = (x+1) ==> beta(p, q)(x_1, x_2) = x_1^2 (x_2 + 1)
$
Dies zeigt, dass die Multiplikation von zwei Vektoren zumindest im Fall von Polynomen ein natürlicher Vorgang ist.
#bold[Bemerkung:] $K$ Körper, $V, W, U$ seien $K$-VR. $I$ und $J$ seien Indexmengen, sodass $(v_i)_(i in I)$ eine Basis von $V$ und $(w_j)_(j in J)$ eine Basis von $W$ ist. Sei weiter $(u_(i j))_(i in I)^(j in J)$ eine beliebige Familie in $U$. Dann existiert genau ein $beta in B(V, W; U)$ mit
$
beta(v_i, w_j) = u_(i j) quad i in I, j in J
$
Bemerkenswert ist, dass $beta$ nicht auf der Basis $(v_i, 0)_(i in I), (0, w_j)_(j in J)$ von $V times W$ festgelegt wurde. Stattdessen wurde $beta$ auf $(v_i, w_j)_(i in I)^(j in J)$ fixiert, was im Allgemeinen weder erzeugend noch linear unabhängig ist. Für $v in V, w in W$ mit
$
v = sum_(i in I) lambda_i v_i, quad w = sum_(j in J) mu_j w_j
$
wobei nur endlich viele Summanden verschieden von 0 sind, ist
$
beta(v, w) = limits(sum_(i in I))_(j in J) lambda_i mu_j u_(i j) = limits(sum_(i in I))_(j in J) lambda_i mu_j beta(v_i, w_j)
$
#theorem("6.2")[
Sei $K$ Körper, $V$ und $W$ $K$-VR. Dann gibt es einen bis auf Isomorphie eindeutig bestimmten $K$-VR $T$ und eine Abbildung $tau in B(V, W; T)$ mit der #bold[universellen Eigenschaft]:
Für beliebige $K$-VR $U$ und $beta in B(V, W; U)$ gibt es eindeutiges $b in L(T, U)$, sodass
$
beta = b circ tau
$
]
#figure(image("bilder2/6_1.png", width: 40%))
#startproof
Existenz: Seien $(v_i)_(i in I)$ und $(w_j)_(j in J)$ Basen von $V$ bzw. $W$. Dann ist
$
T := {t in "Abb"(I times J, K) | t(i, j) != 0 "für nur endlich viele" (i, j) in I times J}
$
ein Untervektorraum von $"Abb"(I times J, K)$. Sei für $i in I$ und $j in J$ die Abbildung $t_(i j) in T$ gegeben durch
$
t_(i j)(k, l) := delta_(i k) delta_(j l) quad "für" (k, l) in I times J
$
Dann sind die $(t_(i j))_(i in I)^(j in J)$ ein Erzeugendensystem von $T$, denn für $t in T$ gilt
$
t = limits(sum_(i in I))_(j in J) t(i, j) t_(i j)
$
Für $(alpha_(i j))_(i in I)^(j in J)$ mit $0 = sum_(i, j) alpha_(i j) t_(i j)$ folgt sofort $alpha_(i j) = 0$ für alle $(i, j) in I times J$. Also ist $(t_(i j))_(i in I)^(j in J)$ eine Basis von $T$. Wir definieren $tau in B(V, W, T)$ durch
$
tau(v_i, w_j) = t_(i j)
$
Sei $U$ ein beliebiger $K$-VR und $beta in B(V, W; U)$, dann kann $b in L(T, U)$ eindeutig durch
$
b(t_(i j)) := beta(v_i, w_j)
$
definiert werden. Dann gilt per Konstruktion $beta = b circ tau$.
Eindeutigkeit: Erfülle $T'$ zusammen mit $tau'$ auch die universelle Eigenschaft
#figure(image("bilder2/6_2.png", width: 110%))
Wendet man die universelle Eigenschaft von $T$ auf $U = T$, $beta = tau$ an, so folgt die Existenz einer eindeutigen Abbildung $b in L(T, T)$ mit $tau = b circ tau$. Dies ist sicher durch $b = "id"_T$ erfüllt. Analog folgt, dass $"id"_(T')$ die einzige Abbildung in $L(T', T')$ mit $tau' = "id"_(T') circ tau'$ ist. Wendet man nun die universelle Eigenschaft von $T$ auf $U = T', beta = tau'$ an, so sichert diese die eindeutige Existenz von $g' in L(T, T')$ mit $tau' = g' circ tau$. Analog $exists! g in L(T', T)$ mit $tau = g circ tau'$. Es folgt $tau' = g' circ g circ tau'$, also $g' circ g = "id"_(T')$. Analog folgt $g circ g' = "id"_T$. Also sind $g, g'$ zueinander invers und $T op(tilde(=)) T'$.
#endproof
#bold[Bemerkung 6.3:]
#boxedenum[
Im Folgenden werden wir keinen Bezug zur konkreten Konstruktion von $T$ nehmen, sondern lediglich die universelle Eigenschaft verwenden. Daher ist es nicht nötig zwischen $T$ und $T'$ zu unterscheiden. Dies wird durch die Notation $V otimes W := T$ ausgedrückt.
]
#let enum_6_2 = enum(start: 2)[
Für die zu $T$ gehörende Abbildung $tau$ verwenden wir $otimes := tau$ zusammen mit der Infixnotation:
$
v otimes w = otimes(v, w) = tau(v, w)
$
Damit kann Satz 6.2 durch das kommutative Diagramm
#figure(image("bilder2/6_3.png", width: 40%))
ergänzt werden.
][
Die Abbildung, die einem $beta in B(V, W; U)$ das eindeutige $b in L(V otimes W, U)$ zuordnet, erfüllt für $beta, beta' in B(V, W; U), lambda in K$:
$
(beta + lambda beta')(v, w) &= beta(v, w) + lambda beta'(v, w) \
&= b(v otimes w) + lambda b'(v otimes w) \
&= (b + lambda b')(v otimes w)
$
Daher ist $beta arrow.bar b$ linear.
$
B(V, W; U) isomorph L(V otimes W; U)
$
][
Gilt $dim(V), dim(W) < oo$, so ist $dim(V otimes W) = dim(V) dim(W)$.
][
Da für $oo > dim(V), dim(W) > 2$ gilt, dass
$
dim(V times W) = dim(V) + dim(W) < dim(V) dim(W) = dim(V otimes W)
$
ist $otimes: V times W -> V otimes W$ im Allgemeinen nicht surjektiv.
][
Die Elemente von $V otimes W$ nennen wir Tensoren. Die Tensoren im Bild von $otimes$ heißen #bold[einfach].
]
#box(
width: 100%,
inset: (
right: 0.5cm,
left: 0.5cm,
),
enum_6_2
)
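Ein einfaches Zusatzbeispiel zu den letzten beiden Punkten der Bemerkung: Sei $V = W = K^2$ mit Standardbasis $(e_1, e_2)$. Der Tensor
$
theta = e_1 otimes e_1 + e_2 otimes e_2 in K^2 otimes K^2
$
ist nicht einfach: Wäre $theta = v otimes w$ mit $v = a e_1 + b e_2$ und $w = c e_1 + d e_2$, so lieferte Koeffizientenvergleich $a c = b d = 1$ und $a d = b c = 0$, ein Widerspruch.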
#bold[Bemerkung:] Für $v, v' in V$, $w, w' in W, lambda in K$ gilt aufgrund der Bilinearität von $otimes$, dass
#boxedenum[
$v otimes w + v' otimes w = (v + v') otimes w$
][
$v otimes w + v otimes w' = v otimes (w + w')$
][
$(lambda v) otimes w = v otimes (lambda w) = lambda (v otimes w)$
]
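Aus 1. bis 3. folgt zum Beispiel direkt
$
(v + v') otimes (w + w') = v otimes w + v otimes w' + v' otimes w + v' otimes w'
$
für alle $v, v' in V$, $w, w' in W$.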
#corollary("6.4")[
Seien $V, W$ Vektorräume über einen Körper mit Basen $(v_i)_(i in I)$ bzw. $(w_j)_(j in J)$, dann ist $(v_i otimes w_j)_(i in I)^(j in J)$ eine Basis von $V otimes W$.
]
#startproof Wäre die $(v_i otimes w_j)_(i in I)^(j in J)$ linear abhängig, dann gäbe es $(k, l) in (I times J)$ und $(lambda_(i j))_((i, j) in (I times J) without {(k, l)})$ mit
$
v_k otimes w_l = sum_((i, j) in (I times J) without {(k, l)}) lambda_(i j) v_i otimes w_j
$
Sei $beta in B(V, W; K)$ mit $beta(v_i, w_j) := delta_(i k) delta_(j l)$, dann existiert ein eindeutiges $b in L(V otimes W, K)$ mit $beta = b circ otimes$.
$
1 = beta(v_k, w_l) = b(v_k otimes w_l) = sum_((i, j)) lambda_(i j) underbrace(b(v_i otimes w_j), beta(v_i, w_j)) = 0 quad arrow.zigzag
$
Also sind die $(v_i otimes w_j)_(i in I)^(j in J)$ linear unabhängig.
$T := "Span"{ v otimes w in V otimes W | v in V, w in W }, otimes': V times W -> T, otimes' := otimes, i: T -> V otimes W$. Nach u.E. gibt es ein $f(V otimes W; T)$ mit
$
otimes' = f circ otimes wide i circ otimes' = otimes
$
Betrachte
$
i circ f circ otimes = i circ otimes' = otimes
$
$==> i circ f = "id"_(V otimes W)$, also besitzt $i$ eine Rechtsinverse und ist surjektiv. Für $theta in V otimes W$ finden wir $t in T$ mit
$
theta = i(t) = i(limits(sum_(i in I))_(j in J) lambda_(i j) underbrace(v_i otimes w_j, in T)) = limits(sum_(i in I))_(j in J) lambda_(i j) i(v_i otimes w_j) = limits(sum_(i in I))_(j in J) lambda_(i j) v_i otimes w_j
$
#endproof
#bold[Beispiel 6.5:] $K$ Körper. Für $K[x]$ sei die Basis $(x^i)_(i in NN union {0})$ gewählt. Dann ist nach Korollar 6.4 $(x^i otimes x^j)_(i,j in NN union {0})$ eine Basis von $K[x] otimes K[x]$. Sei $b in L(K[x] otimes K[x], K[x_1, x_2])$ gegeben durch
$
b(x^i otimes x^j) := x_1^i x_2^j
$
Dann ist $b$ die zu $beta$ aus Bsp. 6.1 gehörige lineare Abbildung. Da $b$ ein Isomorphismus ist, folgt
$
K[x] otimes K[x] isomorph K[x_1, x_2]
$
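Zur Veranschaulichung: Für $p(x) = x^2 + 1$ und $q(x) = x$ gilt
$
b((x^2 + 1) otimes x) = b(x^2 otimes x) + b(x^0 otimes x) = x_1^2 x_2 + x_2 = beta(p, q)(x_1, x_2)
$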
#bold[Beispiel 6.6:] Gegeben sei ein $RR$-VR $W$. Dann kann die Komplexifizierung von $W$ durch $CC otimes W$ konstruiert werden. (Dabei ist $CC$ als zweidimensionaler VR über $RR$ aufgefasst.) Sei $(w_j)_(j in J)$ eine Basis von $W$ und ${1, i}$ eine Basis von $CC$ über $RR$. Dann ist $(1 otimes w_j)_(j in J), (i otimes w_j)_(j in J)$ eine Basis von $CC otimes W$. D.h. für $hat(w) in CC otimes W$ existieren Familien $(alpha_j)_(j in J)$ und $(beta_j)_(j in J)$ in $RR$ mit
$
hat(w) = sum_(j in J) alpha_j (1 otimes w_j) + sum_(j in J) beta_j (i otimes w_j)
$
wobei die Summen wieder nur endlich viele von 0 verschiedenen Summanden aufweisen.
$
= sum_(j in J) (alpha_j + i beta_j) otimes w_j
$
Dann definieren wir für $mu in CC$
$
mu hat(w) := sum_(j in J) (mu (alpha_j + i beta_j)) otimes w_j
$
die übliche Multiplikation über $CC$.
#corollary("6.7")[
Sei $K$ Körper, $V$ ein $K$-VR. Dann gilt $V isomorph V otimes K$.
]
#corollary("6.8")[
Seien $V, W, U$ VR über einem Körper $K$, dann gilt
$
(V otimes W) otimes U isomorph V otimes (W otimes U)
$
]
#theorem("6.9")[
Sei $K$ ein Körper, $V, W$ seien $K$-VR mit $dim(V), dim(W) < oo$, dann gilt:
#boxedenum[
$L(V, K) isomorph V^* otimes K$
][
$L(V, W) isomorph V^* otimes W$
][
$B(V, W; K) isomorph (V otimes W)^* isomorph V^* otimes W^* isomorph V^* otimes W^* otimes K$
]
Wobei der Isomorphismus in 2. gegeben ist durch
$
V^* otimes W #scale(x: -100%, $in$) phi otimes w arrow.bar phi(dot) w in L(V,W), quad phi in V^*, w in W
$
und der Isomorphismus für $(V otimes W)^* isomorph V^* otimes W^*$ gegeben ist durch
$
(phi otimes psi)(v otimes w) := underbrace(phi(v), in K) underbrace(psi(w), in K) in K, \
phi in V^*, psi in W^*, v in V, w in W
$
]
#startproof
1. Es gilt $V^* otimes K isomorph V^* := L(V, K)$ nach Korollar 6.7.
2. Sei $beta: V^* times W -> L(V, W)$ für $phi in V^*, w in W$ definiert durch
$
beta(phi, w) := phi(dot) w
$
Dann gilt $beta in B(V^*, W; L(V, W))$, also existiert $b in L(V^* otimes W; L(V, W))$ mit $beta = b circ otimes$.
Sei nun $m := dim(V), (v_1, ..., v_m)$ Basis von $V$ und $n := dim(W), (w_1, ..., w_n)$ Basis von $W$ und $(v_1^*, ..., v_m^*)$ die zu $(v_1, ..., v_m)$ duale Basis von $V^*$. Dann ist $(v_i^* otimes w_j)_(i in {1, ..., m})^(j in {1, ..., n})$ eine Basis von $V^* otimes W$. Auf $L(V, W)$ ist
$
F_(i j) in L(V, W) "definiert durch" F_(i j) (v_k) = delta_(i k) w_j quad i,k in {1, ..., m}, j in {1, ..., n}
$
eine Basis.
$
b(v_i^* otimes w_j)(v_k) &= beta(v_i^*, w_j)(v_k) \
&= v_i^*(v_k) w_j \
&= delta_(i k) w_j \
&= F_(i j)(v_k)
$
Also $b(v_i^* otimes w_j) = F_(i j)$ und damit ist $b$ ein Isomorphismus.
3. Es gilt
$
B(V, W, K) &isomorph L(V otimes W, K) \
&= (V otimes W)^* "(Bemerkung nach Satz 6.2)"
$
$
V^* otimes W^* isomorph V^* otimes W^* otimes K "nach Korollar 6.7"
$
Sei für $phi in V^*, psi in W^*, beta_(phi, psi) in B(V, W; K)$ definiert durch
$
beta_(phi, psi) (v, w) = phi(v) psi(w)
$
Dazu gibt es jeweils $b_(phi, psi) in L(V otimes W, K) = (V otimes W)^*$. Die so gegebene Abbildung $V^* times W^* -> (V otimes W)^*$
$
beta(phi, psi) := b_(phi, psi)
$
ist bilinear. Also existiert $b in L(V^* otimes W^*, (V otimes W)^*)$ mit $beta = b circ otimes$. Sei nun $phi in V^*, psi in W^*, v in V, w in W$
$
b(phi otimes psi)(v otimes w) = beta(phi, psi)(v otimes w) = b_(phi, psi) (v otimes w) = beta_(phi, psi) (v, w) = phi(v) psi(w)
$
Zusätzlich zu den Basen in 2. sei $(w_1^*, ..., w_n^*)$ die zu $(w_1, ..., w_n)$ duale Basis von $W^*$. Dann ist $(v_i^* otimes w_j^*)_(i in {1, ..., m})^(j in {1, ..., n})$ eine Basis von $V^* otimes W^*$ und $(v_i otimes w_j)^*_(#stack(dir: ttb, spacing: 0.25em, [$i in {1,...,m}$],[$j in {1,...,n}$]))$ eine Basis von $(V otimes W)^*$
$
b(v_i^* otimes w_j^*)(v_k otimes w_l) = v_i^*(v_k) w_j^*(w_l) = delta_(i k) delta_(j l) = (v_i otimes w_j)^* (v_k otimes w_l) \
==> b(v_i^* otimes w_j^*) = (v_i otimes w_j)^*
$
also $b$ Isomorphismus.
#endproof
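Zur Veranschaulichung von Aussage 2 in Satz 6.9: Für $V = K^m$, $W = K^n$ mit den Standardbasen ist die darstellende Matrix von $F_(i j)$ gerade die $n times m$-Matrix mit einer $1$ an der Stelle $(j, i)$ und Nullen sonst. Ein einfacher Tensor $phi otimes w$ entspricht also einer Abbildung vom Rang höchstens $1$, und jedes $A in L(K^m, K^n)$ mit Einträgen $a_(j i)$ zerlegt sich als
$
A = sum_(i, j) a_(j i) F_(i j) = b(sum_(i, j) a_(j i) v_i^* otimes w_j)
$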
#pagebreak()
= Nichtlineare Algebra
Die #bold[lineare] Algebra $corres$ "Wissenschaft der linearen Gleichungssysteme"
D.h. Problemstellungen, die auf Aussagen zu
$
f(x_1, ..., x_n) = a_1 x_1 + a_2 x_2 + ... + a_n x_n - b = 0 quad (ast)
$
über einem Körper $K$ führen.
Mögliche Erweiterungen: Polynome, dann wäre $(ast)$ ein Polynom ersten Grades über $K$.
#bold[Bisher:]
Kurven und Flächen erster Ordnung, d.h. solche, die durch lineare Gleichungen in den Unbekannten beschrieben werden.
#boxedlist[
Geraden, Ebenen, Hyperebenen
][
Interpretierbar als vorgegebenes Skalarprodukt mit einem Vektor
][
Geometrisch: Interpretation als Schnittmenge verschiedener Objekte
]
== Quadriken
Jetzt: Kurven $corres$ Flächen zweiter Ordnung
lineare Gleichungen $arrow.wave$ quadratische Gleichungen:
#boxedlist[
In jedem Summanden treten Variablen insgesamt höchstens zweimal als Faktor auf, z.B.
$
x_1^2 + 2 x_1 x_2 + 5 = 0
$
][
Geometrische Interpretation:
Schnittmengen von Kreis-Kreis, Kreis-Ebene, Zylinder-Ebene
Hier: Quadriken als Gleichungen mit zwei Unbekannten in $RR^2$
$arrow.wave$ klassische Kegelschnitte, Hyperbel, Ellipse, Parabel
]
Verallgemeinerung von quadratischen Funktionen
$
f: RR -> RR, quad x arrow.bar a x^2
$
für $K$ als Körper?
#definition("7.1", "Quadratische Form")[
Sei $K$ ein Körper und $A in K^(n,n)$ symmetrisch. Die Abbildung
$
Q_A: K^n -> K, quad x arrow.bar x^T A x
$
nennt man die zu $A$ gehörende #bold[quadratische Form].
]
#bold[Bemerkungen:]
#boxedlist[
Eine zu $A$ gehörende quadratische Form lässt sich auch für nicht symmetrische Matrizen, d.h. $A != A^T$, definieren. Man kann zeigen, dass sich jede quadratische Form auch mit Hilfe einer symmetrischen Matrix $B$ darstellen lässt.
][
Besonders interessant: quadratische Formen für $RR^(2,2)$.
]
#definition("7.2", "allgemeine quadratische Funktion")[
Eine quadratische Funktion über $K$ ist für $A in K^(n,n)$, $b in K^n$ und $c in K$ definiert als
$
f: K^n -> K, quad x arrow.bar x^T A x + b^T x + c
$
]
#bold[Bemerkung:] Eine quadratische Gleichung erhält man dann durch
$
f(x) = 0 quad <==> quad x^T A x + b^T x + c = 0
$
#bold[Beispiel 7.3:] Für $K^n = RR^3$ mit symmetrischen $A in RR^(3,3)$ und $b in RR^3$ ergibt sich
$
vec(x_1, x_2, x_3) arrow.bar \
a_(1 1) x_1^2 + a_(2 2) x_2^2 + a_(3 3) x_3^2 + 2 a_(1 2) x_1 x_2 + 2 a_(2 3) x_2 x_3 + 2 a_(1 3) x_1 x_3 + b_1 x_1 + b_2 x_2 + b_3 x_3 + c = 0
$
Die Lösungsmengen solcher allgemeinen quadratischen Gleichungen bezeichnet man als #bold[Quadrik].
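Ein einfaches Zusatzbeispiel: Für $n = 2$, $A = mat(1, 0; 0, 1)$, $b = 0$ und $c = -1$ ist
$
f(x) = x^T A x + b^T x + c = x_1^2 + x_2^2 - 1
$
Die zugehörige Quadrik ${x in RR^2 | f(x) = 0}$ ist also der Einheitskreis.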
//
allgemeine quadratische Gleichung $K^n = RR^n$
$
x^T A x + b^T x + c = 0 quad x in RR^n quad (ast)
$
für $A in RR^(n,n)$ symmetrisch, $b in RR^n$, $c in RR$
Was ist die Lösungsmenge? $corres$ Quadriken
Vollständige Klassifizierung der Quadriken im $RR^2$: Jede Quadrik zu $(ast)$ lässt sich mit Hilfe von affinen Isometrien, d.h. längenerhaltenden Koordinatentransformationen, also Abbildungen der Form $RR^2 -> RR^2, x arrow.bar Q x + t$ mit einer orthogonalen Matrix $Q$, auf eine der folgenden Formen bringen.
#boxedlist[
$A$-invertierbar: Dann wird $(ast)$ zu
$
mat(x_1, x_2) mat(lambda_1,0;0,lambda_2) vec(x_1, x_2) = lambda_1 x_1^2 + lambda_2 x_2^2 = d quad lambda_1 != 0 != lambda_2
$
][
$A$ nicht invertierbar $==>$ $lambda_2 = 0$
$
mat(x_1, x_2) mat(lambda_1,0;0,0) vec(x_1, x_2) = lambda_1 x_1^2 + b x_2 = d
$
]
Über $Q$ als Koordinatentransformation nach Satz 3.25.
Jetzt weitere Unterschiede
#boxedlist[
definite Fälle: $lambda_1, lambda_2 > 0$ oder $lambda_1, lambda_2 < 0$
#boxedlist[
Ellipse
$
lambda_1 x_1^2 + lambda_2 x_2^2 = d
$
$d = 0$
$
underbrace(lambda_1, > 0) underbrace(x_1^2, >= 0) + underbrace(lambda_2, > 0) underbrace(x_2^2, >= 0) = 0 quad x_1 = x_2 = 0
$
$d != 0$
$
lambda_1/d x_1^2 + lambda_2/d x_2^2 = 1
$
beide $> 0$: Ellipsengleichung, beide $< 0$: keine Lösung!
]
][
indefiniter Fall: $lambda_1 > 0, lambda_2 < 0$
$
lambda_1 x_1^2 + lambda_2 x_2^2 = d
$
$d = 0$
$
underbrace(lambda_1, > 0) x_1^2 + underbrace(lambda_2, < 0) x_2^2 = 0 \
==> "zwei sich schneidende Geraden"
$
$d != 0$
$
lambda_1/d x_1^2 + lambda_2/d x_2^2 = 1 \
"z.B. Hyperbel" cases(a x_1^2 - b x_2^2 = 1, a"," b > 0)
$
][
semidefiniter Fall: $lambda_2 = 0$
$
a x_1^2 &= 1, a > 0 quad "also zwei parallele Geraden" \
x_2^2 &= 0 quad "eine Gerade"
$
]
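Zum definiten Fall ein Zahlenbeispiel: Für $lambda_1 = 1/4$, $lambda_2 = 1/9$ und $d = 1$ erhält man
$
x_1^2 / 4 + x_2^2 / 9 = 1
$
also eine Ellipse mit den Halbachsen $2$ und $3$.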
// loch
Schnitt von nichtlinearen Mengen und Ebenen
Diese Überlegungen lassen sich auf höhere Polynomgrade verallgemeinern.
== Computeralgebra
Spannungsfeld: Rechner mit Gleitkommazahlen ($arrow.wave$ Rundungsfehler!) $<-->$ symbolisches Rechnen (exakt, aber extrem aufwändig!)
Typische Anwendungen:
#boxedlist[
algebraische Terme vereinfachen oder vergleichen
][
algebraische Gleichungen lösen
][
lineare Gleichungssysteme lösen
][
Fehler
]
|
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/regression/issue21.typ | typst | Other | #version(1,2)
#version(1,2).at(3)
|
https://github.com/MrToWy/Bachelorarbeit | https://raw.githubusercontent.com/MrToWy/Bachelorarbeit/master/Template/titlePage.typ | typst | // Title page.
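// Note: this fragment assumes that `title`, `author`, `subtitle` and `date`
// are already defined in the scope where it is evaluated, and that the images
// "Wortmarke.svg" and "Logo.svg" exist next to this file.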
#v(0.6fr)
#align(left, image("Wortmarke.svg", width: 26%))
#v(1.6fr)
#text(2em, weight: 700, title)
#v(1.2em, weak: true)
#text(author)
#v(1.2em, weak: true)
#text(subtitle)
#v(1.2em, weak: true)
#text(1.1em, date)
#align(right, image("Logo.svg", width: 26%))
#pagebreak()
|
|
https://github.com/MultisampledNight/diagram | https://raw.githubusercontent.com/MultisampledNight/diagram/main/source/template.typ | typst | Other | #import "@preview/cetz:0.2.2"
#let draw = cetz.draw
#let input(name, default) = if name in sys.inputs {
json.decode(sys.inputs.at(name))
} else {
default
}
#let bg = luma(100%)
#let fg = luma(0%)
#let gamut = gradient.linear(bg, fg)
#let todo(..what) = {
let body = what.pos().at(0, default: [TODO])
text(fill: green, strong(emph(body)))
}
#let template(body) = {
set page(
width: auto,
height: auto,
fill: bg,
footer: align(right, text(0.7em)[
#show link: text.with(blue)
By MultisampledNight,
#link(
"https://creativecommons.org/licenses/by-nc-sa/4.0/",
[licensed under CC BY-NC-SA 4.0]
) \
#link("https://github.com/MultisampledNight/diagram")[Available on GH],
please do tell if there's anything wrong!
]),
)
set text(
size: 16pt,
font: "IBM Plex Sans",
fill: fg,
)
show raw: set text(font: "IBM Plex Mono")
body
}
// Draws a longer line
// starting at `start` following `path`.
// On direction change in `path`,
// the corners are rounded
// according to `radius`.
//
// Format in ABNF:
//
// pathdesc = 1*(dir *WSP len)
// dir = "v" / "^" / "<" / ">"
// len = 1*DIGIT
//
// TODO: implement rounding some day
// TODO: implement save+restore via parenthesis, like IUPAC formulas
#let thread(
start,
pathdesc,
..args,
) = {
import draw: *
let rel = (
"v": (0, -1),
"^": (0, 1),
"<": (-1, 0),
">": (1, 0),
)
// hacky, should actually use regex matches, but who cares
let dirs = pathdesc
.split(regex("\d"))
.slice(0, -1)
.map(str.trim)
let lens = pathdesc
.split(regex("[v^<>]"))
.slice(1)
.map(str.trim)
.map(float)
// HACK: using the previous coordinate specifier
// to keep track of the current position for us
// so this empty content is just for setting the start pos
content(start, none)
for (dir, len) in dirs.zip(lens) {
let mov = rel.at(dir).map(c => c * len)
line((), (rel: mov), ..args)
}
}
#let canvas(body, ..args) = cetz.canvas(..args, {
import draw: *
set-style(
stroke: (
cap: "round",
join: "round",
paint: fg,
),
)
body
})
#show: template
Use via:
```typst
#import "/template.typ": *
#show: template
#canvas({
import draw: *
// your wonderful cetz code comes here
})
```
|
https://github.com/FriendlyUser/IntroductionToTypst | https://raw.githubusercontent.com/FriendlyUser/IntroductionToTypst/main/template.typ | typst | Apache License 2.0 | // The project function defines how your document looks.
// It takes your content and some metadata and formats it.
// Go ahead and customize it to your liking!
#let resume(
title: "", location: "", postalCode: "", phoneNumber: "", email: "",
authors: (), experiences: (), education: (), body) = {
// Set the document's basic properties.
set document(author: authors, title: title)
// set page(numbering: "1", number-align: center)
set text(font: "Linux Libertine", lang: "en")
// Title row.
align(center)[
#block(text(weight: 700, 2em, title))
]
pad(
top: 0.05em,
bottom: 0.05em,
x: 1em,
grid(
columns: (1fr),
gutter: 0.15em,
align(center, location + ", "+ postalCode),
),
)
pad(
top: 0.1em,
bottom: 0.1em,
x: 1em,
grid(
columns: (1fr),
gutter: 0.05em,
align(center, phoneNumber + " - "+ email),
),
)
line(length: 100%)
let count = experiences.len()
let nrows = calc.min(count, 1)
grid(
column-gutter: 0pt,
row-gutter: 35pt,
..experiences.map(experience => [
#block( text(weight: 700, 1.5em, spacing: 50%, experience.employee))
#block(above: 0pt, below: 0pt, text(experience.jobTitle))
#pad(
y: -0.25em,
grid(
columns: 2,
gutter: 0.05em,
column-gutter: 0pt,
row-gutter: 0pt,
experience.startDate + " - " + experience.endDate + " ", " " + experience.location
),
),
#pad(
x: 1em,
y: -0.75em,
list(..experience.points.map(point => point))
)
]),
)
line(length: 100%)
pad(y: 2em,
grid(
columns: 1,
gutter: 0.05em,
column-gutter: 0pt,
row-gutter: 10pt,
block(above: 2pt, below: 2pt, text(weight: 700, 1.5em, spacing: 50%, education.name)),
block(above: 0pt, below: 0pt, text(education.degree)),
grid(
columns: 2,
gutter: 0.05em,
column-gutter: 0pt,
row-gutter: 0pt,
education.startDate + " - " + education.endDate + " ", " | " + education.location,
),
pad(
x: 1em,
list(..education.points.map(point => point))
)
)
)
line(length: 100%)
text(weight: 700, 1.5em, "References")
pad(y:1em, text("Available on Request"))
// Main body.
set par(justify: true)
} |
https://github.com/liuguangxi/fractusist | https://raw.githubusercontent.com/liuguangxi/fractusist/main/tests/test-dragon-curve.typ | typst | MIT License | #set document(date: none)
#import "/src/lib.typ": *
#set page(margin: 1cm)
= n = 1
#align(center)[
#dragon-curve(1, step-size: 40)
]
= n = 2
#align(center)[
#dragon-curve(2, step-size: 20, stroke-style: stroke(paint: red, thickness: 2pt, cap: "square"))
]
= n = 3
#align(center)[
#dragon-curve(3, step-size: 20, stroke-style: stroke(paint: orange, thickness: 4pt, cap: "square"))
]
= n = 4
#align(center)[
#dragon-curve(4, step-size: 20, stroke-style: stroke(paint: green, thickness: 6pt, cap: "square"))
]
= n = 5
#align(center)[
#dragon-curve(5, step-size: 16, stroke-style: stroke(paint: blue, thickness: 8pt, cap: "round", join: "round"))
]
= n = 6
#align(center)[
#dragon-curve(6, step-size: 16, stroke-style: stroke(paint: purple, thickness: 8pt, cap: "square"))
]
#pagebreak(weak: true)
= n = 7
#align(center)[
#dragon-curve(7, step-size: 10, stroke-style: stroke(paint: gradient.linear(..color.map.crest, angle: 45deg), thickness: 2pt, cap: "square"))
]
= n = 8
#align(center)[
#dragon-curve(8, step-size: 10, stroke-style: stroke(paint: gradient.linear(..color.map.crest, angle: 45deg), thickness: 3pt, cap: "square"))
]
= n = 9
#align(center)[
#dragon-curve(9, step-size: 10, stroke-style: stroke(paint: gradient.linear(..color.map.crest, angle: 45deg), thickness: 4pt, cap: "square"))
]
#pagebreak(weak: true)
|
https://github.com/Kasci/LiturgicalBooks | https://raw.githubusercontent.com/Kasci/LiturgicalBooks/master/CU/postna_triod/1_generated/0_all/Tyzden-01.typ | typst | #import "../../../all.typ": *
#show: book
= -01. #translation.at("TYZDEN")
#include "../Tyzden-01/7_Nedela.typ"
#pagebreak()
|
|
https://github.com/Jollywatt/typst-fletcher | https://raw.githubusercontent.com/Jollywatt/typst-fletcher/master/tests/diagram-math-mode/test.typ | typst | MIT License | #set page(width: auto, height: auto, margin: 1em)
#import "/src/exports.typ" as fletcher: diagram, node, edge
= Diagrams in math mode
The following diagrams should be identical:
#diagram($
G edge(f, ->) edge(#(0,1), pi, ->>) & im(f) \
G slash ker(f) edge(#(1,0), tilde(f), "hook-->")
$)
#diagram(
node((0,0), $G$),
edge((0,0), (1,0), $f$, "->"),
edge((0,0), (0,1), $pi$, "->>"),
node((1,0), $im(f)$),
node((0,1), $G slash ker(f)$),
edge((0,1), (1,0), $tilde(f)$, "hook-->")
)
#pagebreak()
= Explicit nodes in math mode
#diagram(
node-outset: 2pt,
node-corner-radius: 2pt,
$
A edge(->) & node(sqrt(B), fill: #blue.lighten(70%), inset: #5pt) \
node(C, stroke: #(red + .3pt), radius: #1em) edge("u", "=")
edge(#(1,0), "..||..")
$,
)
#diagram(
node-stroke: 1pt,
$ node(A B C, extrude: #(0,2)) edge(->) & pi r^2 $
)
#pagebreak()
= Relative coordinates in math mode
The following diagrams should be identical:
#diagram($
(0,0) edge(#(0,1), #(rel: (0, -1)), ->) & // first non-relative coordinate becomes `from`...
(1,0) edge(#(0,1), "=>") \ // ...unless it is the only coordinate, in which case it becomes `to`
(0,1) edge(#(0,0), "dr", "..>") &
(1,1) edge("u", "-->") // if a single relative coordinate is given, set `from: auto`
$)
#diagram(
node((0,0), $(0,0)$),
node((1,0), $(1,0)$),
node((0,1), $(0,1)$),
node((1,1), $(1,1)$),
edge((0,1), (0,0), "->"),
edge((1,0), (0,1), "=>"),
edge((0,0), (1,1), "..>"),
edge((1,1), (1,0), "-->"),
)
#pagebreak()
= Label side in math mode
#diagram(spacing: (1cm, 3mm), $
A edge(f, #left, "->") & B \
A edge(#center, f, "->") & B \
A edge(f, "->", #right) & B \
$)
|
https://github.com/lucannez64/Notes | https://raw.githubusercontent.com/lucannez64/Notes/master/Refaire_la_France.typ | typst | #import "template.typ": *
// Take a look at the file `template.typ` in the file panel
// to customize this template and discover how it works.
#show: project.with(
title: "Refaire la France",
authors: (
"<NAME>",
),
date: "30 Octobre, 2023",
)
#set heading(numbering: "1.1.")
= Améliorer la démocratie et les politiques publiques en France
<améliorer-la-démocratie-et-les-politiques-publiques-en-france>
#figure([#image("pouvoirs.png")],
caption: [
Pouvoirs
]
)
== Rénover le système de vote
<rénover-le-système-de-vote>
- Mettre en place le vote électronique sécurisé avec double
authentification, chiffrement de bout en bout, audits externes,
serveurs redondants (tester à petite échelle avant généralisation)
- Généraliser le vote par correspondance pour faciliter la participation
- Simplifier les modalités de vote par procuration via une plateforme en
ligne \
- Remplacer le scrutin uninominal majoritaire à deux tours par :
- Le vote par approbation (vote pour 1 ou plusieurs candidats)
- Le jugement majoritaire (notation des candidats)
- Le scrutin de Condorcet (duels deux à deux)
- Instaurer la proportionnelle intégrale pour les élections législatives
et sénatoriales
== Rééquilibrer les institutions
<rééquilibrer-les-institutions>
- Commencer par donner le droit au Parlement de censurer le gouvernement
- Elargir le RIP en abaissant le seuil de déclenchement
- Accorder plus de pouvoirs de contrôle au Parlement sur l’action du
gouvernement
- Expérimenter un statut d’opposition officielle avec moyens dédiés
- Transférer des compétences de l’Etat vers les régions et départements
- Elire le Sénat à la proportionnelle avec des sénateurs issus des
collectivités
- Ouvrir le Conseil Constitutionnel à d’autres profils que les anciens
présidents
== Améliorer le système éducatif
<améliorer-le-système-éducatif>
- Augmenter de 25% le salaire des enseignants sur 5 ans
- Diviser par deux les effectifs des classes en REP et REP+ \
- Permettre des parcours personnalisés orientation progressive
- Mettre l’accent sur lecture, écriture, calcul et culture générale dès
le primaire
- Rendre obligatoire 1h de lecture par jour et l’apprentissage du code
au collège
- Développer l’enseignement explicite et la méthodologie
- Renforcer les exigences sur l’orthographe et la grammaire
== Lutter contre les inégalités
<lutter-contre-les-inégalités>
- Allouer des moyens supplémentaires aux établissements en fonction du
taux de boursiers
- Favoriser la mixité sociale via une sectorisation repensée
- Généraliser la gratuité de la cantine scolaire
- Elargir l’attribution des bourses et aides sociales
== Réformer les examens
<réformer-les-examens>
- Prendre en compte le contrôle continu pour 40% de la note finale
- Introduire des épreuves de projet en groupe et oraux terminaux
- Maintenir une épreuve écrite nationale anonyme pour valoriser l’effort
- Publier des sujets de référence nationaux pour harmoniser l’évaluation
== Progresser sur l’intégration des immigrés
<progresser-sur-lintégration-des-immigrés>
- Multiplier les cours intensifs de français dès l’arrivée
- Proposer des formations professionnalisantes adaptées via des contrats
aidés
- Lever les freins à l’accès à l’emploi par des contrats aidés
- Lutter contre les discriminations via des CV anonymes
- Développer des projets communs entre établissements de différents
quartiers
- Lancer des campagnes de communication sur les apports positifs de
l’immigration
- Faciliter l’obtention de la nationalité française pour les immigrés de
longue date
== Moderniser la sécurité intérieure
<moderniser-la-sécurité-intérieure>
- Recruter 10 000 policiers et gendarmes supplémentaires
- Former les policiers aux approches de médiation et au contact avec la
population
- Recruter davantage de policiers issus des minorités
- Expérimenter des brigades mixtes police-population dans les quartiers
- Evaluer précisément les technologies avant déploiement
- Construire 20 000 places de prison pour l’application des peines
== Rapprocher le Président du peuple
<rapprocher-le-président-du-peuple>
- Limiter à 2 le nombre de mandats présidentiels successifs
- Créer une commission d’enquête parlementaire en fin de mandat
- Organiser un débat contradictoire annuel avec le président en direct à
la TV
- Rendre transparent le train de vie du Président
- Permettre le déclenchement du RIC par 1 million de citoyens
=== Réformer la fiscalité pour plus de justice sociale
<réformer-la-fiscalité-pour-plus-de-justice-sociale>
- Baisser la TVA à 5% sur les produits alimentaires et de première
nécessité
- Ajouter des tranches d’imposition sur le revenu pour les hauts revenus
(60% au-delà de 250 000€)
- Taxer les résidences secondaires à partir de la 3ème propriété
=== Lutter contre le réchauffement climatique
<lutter-contre-le-réchauffement-climatique>
- Maintenir la part du nucléaire à 50% de la production d’électricité en
2035
- Porter à 40% la part des énergies renouvelables dans le mix
énergétique en 2035
- Instaurer un bonus-malus écologique puissant sur les véhicules selon
leurs émissions
- Lancer un grand plan de rénovation thermique des logements
=== Réindustrialiser la France et relocaliser
<réindustrialiser-la-france-et-relocaliser>
- Favoriser l’implantation de sites industriels clés (batteries,
hydrogène, biomédicaments…)
- Renforcer les droits des salariés des sous-traitants des grands
groupes
- Conditionner les aides publiques à des relocalisations d’activité
=== Refondre la politique du logement
<refondre-la-politique-du-logement>
- Construire 150 000 logements sociaux par an et rénover 200 000
insalubres
- Plafonner les loyers dans les zones tendues
- Aider la rénovation énergétique des logements modestes
=== Financer ces réformes
<financer-ces-réformes>
- Faire expertiser chaque source par un organisme indépendant
- Conditionner les baisses d’impôts à une croissance économique solide
- Etablir une trajectoire précise de réduction du déficit \
- Maîtriser l’évolution de la dette pour préserver la confiance
- Mobiliser les prêts européens du plan de relance post-Covid
- Réduire niches fiscales et dépenses publiques inefficaces
- Réorienter une part des investissements d’avenir et aides aux
entreprises
- Lutter contre la fraude et l’évasion fiscales
- Accroître modérément la fiscalité écologique
- Emprunter à taux bas sur les marchés
- Développer des partenariats public-privé sur certains projets
- Étaler les investissements dans le temps
= Budget détaillé sur 5 ans
<budget-détaillé-sur-5-ans>
#align(center)[#table(
columns: 2,
align: (col, row) => (auto,auto,).at(col),
inset: 6pt,
[Dépenses existantes], [Milliards d’euros],
[Education nationale],
[80],
[Enseignement supérieur],
[15],
[Santé - Sécurité sociale],
[200],
[Hôpitaux],
[80],
[Transports],
[20],
[Transition écologique],
[15],
[Sécurité - Justice],
[40],
[Autres ministères],
[150],
[#strong[Total sur 5 ans]],
[#strong[3000]],
)
]
#align(center)[#table(
columns: 2,
align: (col, row) => (auto,auto,).at(col),
inset: 6pt,
[Dépenses supplémentaires], [Milliards d’euros],
[Education],
[35],
[Santé],
[25],
[Transition écologique],
[50],
[Sécurité],
[13],
[Autres],
[27],
[#strong[Total sur 5 ans]],
[#strong[150]],
)
]
#align(center)[#table(
columns: 2,
align: (col, row) => (auto,auto,).at(col),
inset: 6pt,
[Recettes supplémentaires], [Milliards d’euros],
[Fiscalité],
[45],
[Lutte fraude fiscale],
[10],
[Réduction niches fiscales],
[10],
[Nouvelles taxes],
[15],
[Prêts européens],
[40],
[Partenariats public-privé],
[10],
[Réorientation dépenses],
[20],
[#strong[Total sur 5 ans]],
[#strong[150]],
)
]
|
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/math/frac_04.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test multinomial coefficients.
$ binom(n, k_1, k_2, k_3) $
|
https://github.com/sa-concept-refactoring/doc | https://raw.githubusercontent.com/sa-concept-refactoring/doc/main/chapters/managementSummary.typ | typst | = Management Summary
The goal of this project was to add new refactorings to the clangd language server to support the use of concepts, which were introduced with C++20.
Two new refactoring operations were implemented and the resulting patches have been submitted to the LLVM project.
As of #datetime(year: 2023, month: 12, day: 22).display("[month repr:long] [year]"), the pull requests opened to merge the implemented refactorings into the LLVM project are awaiting review.
/ Inline Concept Requirement : #[
Inlines type requirements from _requires_ clauses into the template definition, eliminating the _requires_ clause.
An example of its capabilities is shown in @management_summary_inline.
]
/ Abbreviate Function Template : #[
Eliminates the template declaration by using `auto` parameters.
An example of its capabilities is shown in @management_summary_abbreviate.
]
The refactoring operations were implemented as part of the clangd language server.
@refactoring_contribution shows how VS Code uses the clangd language server to offer refactoring operations.
VS Code communicates with the language server via the Language Server Protocol, using the "clangd" extension.
#figure(
table(
columns: (1fr, 1fr),
align: horizon,
[
#set text(size: 0.9em)
*Before*
],
[
#set text(size: 0.9em)
*After*
],
[
#set text(size: 0.9em)
```cpp
template <typename T>
void foo(T) requires std::integral<T> {}
```
],
[
#set text(size: 0.9em)
```cpp
template <std::integral T>
void foo() {}
```
]
),
caption: [Example of "Inline Concept Requirement" refactoring],
) <management_summary_inline>
#figure(
table(
columns: (1fr, 1fr),
align: horizon,
[
#set text(size: 0.9em)
*Before*
],
[
#set text(size: 0.9em)
*After*
],
[
#set text(size: 0.9em)
```cpp
template <std::integral T>
void foo(T param) {}
```
],
[
#set text(size: 0.9em)
```cpp
void foo(std::integral auto param) {}
```
]
),
caption: [Example of "Abbreviate Function Template" refactoring],
) <management_summary_abbreviate>
#figure(
image("../drawio/refactoring_contribution.drawio.png"),
caption: [Diagram showing integration of implemented refactoring],
) <refactoring_contribution>
#set heading(numbering: none)
=== Key Findings
- The clangd documentation is well-written and provides good support.
- Parts of the code within the LLVM project are quite old and use older language features.
- Pull requests often take a significant amount of time for reviewers to approve or even review.
- Clangd contains functions which were irritating and hard to understand and therefore led to wrong conclusions.
=== Critical Issues and Challenges
- Building clangd for the first time takes a lot of CPU time and memory.
This resulted in initial builds taking multiple hours.
- Finding out how to add reviewers to the pull requests posed a considerable challenge due to the absence of instructions.
It appeared that the automated system malfunctioned, failing to allocate reviewers as intended.
=== Conclusions
Language servers offer an effective method to provide language support across multiple IDEs.
An open-source project such as LLVM is not only a commendable initiative but is also widely appreciated by developers in the community.
On the other hand, this also slows the integration of new changes, since most contributors work on the project in their spare time.
One of the pull requests got a review from a fellow contributor, who expressed anticipation for the integration of the refactoring in clangd, highlighting its potential usefulness.
Their comment serves as a promising conclusion to the project's development, and it is hoped that others will similarly perceive this addition as beneficial to the language server.
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/hydra/0.1.0/README.md | markdown | Apache License 2.0 | # hydra
Hydra is a [typst] package allowing you to easily display the current section anywhere in your
document. By default, it will assume that it is used in the header of your document and display
the last heading if and only if it is numbered and the next heading is not the first on the current
page.
By default hydra also assumes that you use `a4` page size; see the FAQ if you use a different page
size or margins.
## Note on API
The current API is subject to change in the next version when new features for general handling of
headings is added.
## Example
```typst
#import "@preview/hydra:0.1.0": hydra
#set page(header: hydra() + line(length: 100%))
#set heading(numbering: "1.1")
#show heading.where(level: 1): it => pagebreak(weak: true) + it
= Introduction
#lorem(750)
= Content
== First Section
#lorem(500)
== Second Section
#lorem(250)
== Third Section
#lorem(500)
= Annex
#lorem(10)
```
![ex1]
![ex2]
![ex3]
![ex4]
![ex5]
## Non-default behavior
Changing the default behavior can be done using its keyword arguments:
```typst
#let hydra(
sel: heading, // the elements to consider
getter: default.get-adjacent, // gets the neighboring elements according to sel
prev-filter: default.prev-filter, // checks if the last element is valid
next-filter: default.next-filter, // checks if the next element is valid
display: default.display, // displays the last element
resolve: default.resolve, // contains the glue code combining the other given args
is-footer: false, // whether this is used from a footer
) = {
...
}
```
These functions generally take a queried element and sometimes the current location, see the source
for more info. The defaults assume only headings and fail if another element type is provided.
The `sel` argument can be an element function or selector, or either an array containing either
of those and an addiitonal filter function. The additional filter function is applied before the
adjacent arguments are selected from the result of the queries.
### Configuring filter and display
By default hydra will display `[#numbering #body]` of the heading and this reject unnumbered
ones. This filtering can be configured using `prev-filter` and `next-filter`.
```typst
#set page(header: hydra(prev-filter: (_, _) => true))
```
Keep in mind that `next-filter` is also responsible for checking that the next heading is on the
current page.
### In the footer
To use the hydra function in the footer of your document, pass `is-footer: true` and place a
`#metadata(()) <hydra>` somewhere in your header, or before your headings. Hydra will use the
location of this label to search for the correct headings instead of searching from the footer.
```typst
#set page(header: [#metadata(()) <hydra>], footer: hydra(is-footer: true))
```
Using it outside of footer or header should work as expected.
### Different heading levels or custom heading types
If you use a `figure`-based element for special 0-level chapters or you wish to only consider
specific levels of headings, pass the appropriate selector.
```typst
// only consider level 1
#set page(header: hydra(sel: heading.where(level: 1)))
// only consider level 1 - 3
#set page(header: hydra(sel: (heading, (h, _) => h.level <= 3)))
// consider also figures with this kind, most likely override all default functions other than
// resolve, or resolve directly, see source
#set page(header: hydra(sel: figure.where(kind: "chapter").or(heading), display: ...)
```
In short, `sel` can be a selector, or a selector and a filter function. When using anything other
than headings only, consider setting `display` too.
## FAQ
**Q:** Why does hydra display the previous heading if there is a heading at the top of my page?
**A:** If you use non-`a4` page margins, make sure to pass
`next-filter: default.next-filter.with(top-margin: ...)`. This margin must be known to the default
implementation. If the problem persists even though you are using `a4`, then you found a bug.
[ex1]: examples/example1.png
[ex2]: examples/example2.png
[ex3]: examples/example3.png
[ex4]: examples/example4.png
[ex5]: examples/example5.png
[typst]: https://github.com/typst/typst
|
https://github.com/Saadaiheb5/lab4 | https://raw.githubusercontent.com/Saadaiheb5/lab4/main/Lab-4.typ | typst | #import "Class.typ": *
#show: ieee.with(
title: [#text(smallcaps("Lab #4: ROS2 using RCLPY in Julia"))],
/*
abstract: [
#lorem(10).
],
*/
authors:
(
(
name: "<NAME>",
department: [Senior-lecturer, Dept. of EE],
organization: [ISET Bizerte --- Tunisia],
profile: "a-mhamdi",
),
(
name: "<NAME> ",
department: [Dept. of EE],
organization: [ISET Bizerte --- Tunisia],
profile: "nahdiasma2",
),
(
name: "<NAME>",
department: [Dept. of EE],
organization: [ISET Bizerte --- Tunisia],
profile: "saadaiheb5",
),
/*
(
name: "<NAME>",
department: [Dept. of EE],
organization: [ISET Bizerte --- Tunisia],
profile: "abc",
)
*/
)
// index-terms: (""),
// bibliography-file: "Biblio.bib",
)
= Introduction
This lab has two parts. In the first part ("Application") we use the Julia REPL to write ROS2 code, as shown in Figure 1, and in the second part ("Clarification") we explain each command and its function. /*and in the last part we gonna add the result in julia compliation*/
#figure(
image("Images/REPL.png", width: 100%, fit: "contain"),
caption: "Julia REPL"
)
#test[In this lab I can't simulate ROS2 on my laptop, so I use the simulation pictures from the Images/ folder of infodev.]
= Application
- First of all we need to install ROS2 and then source our ROS2 installation as follows:
```zsh
source /opt/ros/humble/setup.zsh
```
- Second, we open a Julia terminal and write down the code below; alternatively, we can open it from our folder infodev/codes/ros2.
#rect(fill: green)[The first program is the publisher code]
#let publisher=read("../Codes/ros2/publisher.jl")
#let subscriber=read("../Codes/ros2/subscriber.jl")
#raw(publisher, lang: "julia")
#rect(fill: green)[The second program is the subscriber code]
After writing down the two programs we need to execute each one of them in a newly opened terminal. Right then, the subscriber will listen to the message broadcast by the publisher.
#raw(subscriber, lang: "julia")
- To launch the graphical tool *rqt_graph* and see the data flow between the publisher and the subscriber, both linked through "infodev" as shown in Figure 2, we write down these code lines:
```zsh
source /opt/ros/humble/setup.zsh
rqt_graph
```
#figure(
image("Images/rqt_graph.png", width: 100%),
caption: "rqt_graph",
)
- After the publisher and the subscriber are connected, the publisher publishes the following message one hundred times on the topic the subscriber listens to:
#rect(fill:aqua)[[Info [TALKER] Hello, ROS2 from Julia!(1...100)]]
and the subscriber answers in its terminal with
#rect(fill:aqua)[[ Info [LISTENER] I heard: Hello, ROS2 from Julia!(1...100) ]] as in Figure 3
#figure(
image("Images/pub-sub.png", width: 100%),
caption: "the dialog between the publisher and the subscriber ",
)
- Hint:
To list the currently active topics, we write down this code; the terminal then shows the topic list as in Figure 4:
```zsh
source /opt/ros/humble/setup.zsh
ros2 topic list -t
```
#figure(
image("Images/topic-list.png", width: 100%),
caption: "List of topics",
) <fig:topic-list>
= Clarification
- In this part we explain each code line, starting with the publisher code:
#rect(fill:orange)[The first program is the publisher code]
```Julia
using PyCall
```
- this package can be useful when you want to leverage existing Python libraries or utilize Python-specific functionality within your Julia codebase.
```Julia
##Import the rclpy module from ROS2 Python
rclpy = pyimport("rclpy")
```
- import the rclpy module from Python using PyCall in Julia. rclpy is a Python client library for the Robot Operating System (ROS) 2
```JULIA
str = pyimport("std_msgs.msg")```
- import the std_msgs.msg module from ROS 2 into Julia
```Julia
rclpy.init()
```
- Initialize ROS2 runtime
```Julia
node = rclpy.create_node("my_publisher")
```
- create a node named "my_publisher" using the rclpy
```Julia
rclpy.spin_once(node, timeout_sec=1)
```
- use the spin_once function from the rclpy to execute a single iteration of the ROS 2 event loop within a given timeout period
```Julia
pub = node.create_publisher(str.String, "infodev", 10)
```
- create a publisher within the ROS 2 node `node` using the create_publisher function from the rclpy module. This publisher publishes messages of the given type (std_msgs String) on the topic "infodev" with a queue size of 10
```Julia
for i in range(1, 100)
msg = str.String(data="Hello, ROS2 from Julia! ($(string(i)))")
pub.publish(msg)
txt = "[TALKER] " * msg.data
@info txt
sleep(1)
end
```
- create a publisher node in Julia using PyCall to communicate with a ROS 2 system. It publishes messages to a topic named "infodev" with a string message containing "Hello, ROS2 from Julia!" along with an incrementing number from 1 to 100.
```Julia
rclpy.shutdown()
node.destroy_node()
```
- Shut down rclpy and destroy the node
#rect(fill:orange )[The second program: the subscriber code]
```Julia
rclpy = pyimport("rclpy")
```
- Import the rclpy module in Python using PyCall in Julia. This module is part of the Robot Operating System 2 (ROS 2) ecosystem and provides functionality for creating ROS 2 nodes, publishers, subscribers, and more.
```Julia
str = pyimport("std_msgs.msg")
```
- import the std_msgs.msg module from ROS 2 into Julia using PyCall. This module contains message types commonly used in ROS 2, such as standard messages for data types like strings, integers, floats, etc.
```Julia
node = rclpy.create_node("my_subscriber")
```
- create a node called "my_subscriber"
```Julia
function callback(msg)
txt = "[LISTENER] I heard: " * msg.data
@info txt
end
```
- define a callback function in Julia that will be called when messages are received by a subscriber. This function will print out the received message data along with a prefix indicating that it was received by the listener node.
```Julia
sub = node.create_subscription(str.String, "infodev", callback, 10)
```
- create a subscriber within a ROS 2 node named node using the create_subscription function from the rclpy module in Python. This subscriber subscribes to messages of type std_msgs.msg.String on the topic "infodev" and invokes the callback function when messages are received.
```Julia
while rclpy.ok()
rclpy.spin_once(node)
end
```
- create a loop that continuously spins the ROS 2 node while the ROS 2 context (rclpy.ok()) is still valid. This loop ensures that the node continues to process messages and callbacks as long as the ROS 2 context is valid.
//#test[Some test]
|
|
https://github.com/dainbow/MatGos | https://raw.githubusercontent.com/dainbow/MatGos/master/themes/19.typ | typst | #import "../conf.typ": *
= Достаточные условия равномерной сходимости тригонометрического ряда Фурье
#proposition[
Анализ доказательства признака Дини (@dini) показывает, что критерием сходимости
тригонометрического ряда Фурье функции $f in L_(2 pi)$ к $S(x_0)$ в точке $x_0$ является
равенство
#eq[
$lim_(n -> oo) integral_0^delta phi_x_0 (t) sin(n t) dif mu(t) = 0$
]
] <dini-proof>
#lemma[
Пусть $f in L_(2 pi), g$ -- измеримая, $2pi$-периодическая, ограниченная
функция. Тогда коэффициенты Фурье функции $chi(t) = f(x + t)g(t)$ стремятся к
нулю при $n -> oo$ равномерно по $x$.
] <jordan-help-lemma>
#theorem(
"<NAME>",
)[
Если $f in L_(2pi)$ и является функцией ограниченной вариации на $[a, b]$, то
тригонометрический ряд Фурье $f$ сходится к $f(x_0)$ в каждой точке $x_0 in (a, b)$ непрерывности $f(x)$ и
к $(f(x_0 + 0) + f(x_0 - 0)) / 2$ в каждой точке разрыва $x_0 in [a, b]$.
Если, кроме того, $f in C[a, b]$, то тригонометрический ряд Фурье функции $f$ сходится
к ней равномерно на любом отрезке $[a', b'] subset (a, b)$.
]
#proof[
Так как $f$ ограниченной вариации, то она представима в виде $f = f_1 - f_2$,
где $f_1, f_2$ -- неубывающие. Значит нам достаточно доказать утверждения для
неубывающих функций.
По (@dini-proof) нам надо доказать лишь
#eq[
$lim_(n -> oo) integral_0^delta phi_x_0 (t) sin(n t) dif mu(t) = 0$
]
Раскроем $phi_x_0$ и $S(x_0)$ и будем доказывать лишь для
#eq[
$\
lim_(n -> oo) integral_0^delta (f(x_0 + t) - f(x_0 + 0)) / t sin(n t) dif mu(t) = 0$
]
А для слагаемого с минусами аналогично.
По определению правостороннего предела:
#eq[
$forall epsilon > 0: exists delta_1, 0 < delta_1 < delta : space 0 <= f(x_0 + delta_1) - f(x_0 + 0) < epsilon $
]
Перейдём к интегралу Римана, так как $f$ монотонна и используем теорему о
среднем для него:
#eq[
$exists delta_2, 0 < delta_2 < delta_1 : space &integral_0^delta_1 (f(x_0 + t) - f(x_0 + 0)) / t sin(n t) dif t = \ (f(x_0 + delta_1) - f(x_0 + 0))&integral_(delta_2)^(delta_1) sin(n t) / t dif t$
]
Но мы знаем, что $integral_0^(+oo) sin(t) / t dif t$ сходится, поэтому интеграл
с переменным верхним пределом ограничен:
#eq[
$exists C > 0 : space abs(integral_0^u sin(t) / t dif t) <= C$
]
Но теперь рассмотрим:
#eq[
$forall A > 0 : space abs(integral_0^A sin(n t) / t dif t) attach(=, t: n t =: u) abs(integral_0^(n A) sin(u) / u dif u) <= C$
]
Используя эту оценку, получим, что
#eq[
$abs(integral_0^(delta_1) (f(x_0 + t) - f(x_0 + 0)) / t sin(n t) dif t) <= 2 epsilon C$
]
Таким образом, разобьём исходный интеграл от $0$ до $delta$ на сумму интегралов
от $0$ до $delta_1$ и от $delta_1$ до $delta$.
Получим, что предел интеграла действительно равен нулю, применим признак Дини и
получим первую часть утверждения теоремы.
Перейдём к доказательству равномерной сходимости.
Вспомним, как мы расписывали разность $S_n (f, x_0) - S(x_0)$ на четыре
слагаемых в доказательстве признака Дини (@dini).
Применим к каждому из трёх последних слагаемых вспомогательную лемму
(@jordan-help-lemma) и сведём доказательство к тому, чтобы доказать
равномерность предела первого слагаемого (который мы уже рассматривали в текущем
доказательстве).
Это сделать несложно, заметим, что если $f$ непрерывна на $[a', b']$, то она
равномерно непрерывна на нём, а значит мы можем найти $delta_1$ из текущего
доказательства независимо от $x_0$.
Также независимо от $x_0$ мы ограничиваем интеграл от $sin(n x) / x$, поэтому
второе утверждение текущей теоремы доказано.
]
|
|
https://github.com/rqy2002/typst-experiment | https://raw.githubusercontent.com/rqy2002/typst-experiment/main/README.md | markdown | This is some experimental Typst file by me. Maybe it will become a little library.
I want to implement:
- [x] Basic theorem environments like LaTeX. (a better implementation is available at [Typst-theorems](https://github.com/sahasatvik/typst-theorems))
- [ ] Commutative diagrams like Tikzcd.
- [ ] Fonts configuration as in xeCJK. (Now partially implemented)
- [ ] Maybe more others?
Problems with Typst:
- [ ] Cannot set the font according to the unicode class or the range of characters.
- [x] I can't find a symbol similar to LaTeX's \varinjlim. (I implemented it myself!)
- [ ] I can't update a counter when another counter (like the heading counter) steps.
|
|
https://github.com/jneug/schule-typst | https://raw.githubusercontent.com/jneug/schule-typst/main/src/api/helper.typ | typst | MIT License | #import "../theme.typ"
/// Hilfsfunktion für die Formatierung von Füllfarben für Tabellen.
/// Die Funktion wird wie im folgenden Beispiel verwendet:
/// #example[```
/// #table(
/// columns: 4,
/// fill: table-fill(
/// footerfill: gradient.linear(..color.map.vlag, angle:90deg),
/// oddfill: color.map.vlag.first(),
/// headers: 2,
/// footers: 1,
/// colheaders: 1,
/// rows: 9
/// ),
/// ..range(36).map(str)
/// )
/// ```]
#let table-fill(
fill: white,
headerfill: theme.table.header,
footerfill: theme.table.header,
oddfill: theme.bg.muted,
striped: true,
headers: 1,
footers: 0,
colheaders: 0,
colfooters: 0,
columns: auto,
rows: auto,
fills: (rows: (:), cols: (:)),
) = (column, row) => {
if row < headers or column < colheaders {
return headerfill
} else if rows != auto and (row >= rows - footers) {
return footerfill
} else if columns != auto and (column >= columns - colfooters) {
return footerfill
} else if "rows" in fills and str(row) in fills.rows {
return fills.rows.at(str(row))
} else if "cols" in fills and str(column) in fills.cols {
return fills.cols.at(str(column))
} else if striped and calc.odd(row) {
return oddfill
} else {
return fill
}
}
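/// Hilfsfunktion: wertet einen als Rohtext übergebenen Mathe-Term aus, wobei ":" als
/// Division und "dot" als Multiplikation interpretiert wird, bevor der Ausdruck im
/// Code-Modus ausgewertet wird.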
#let eval-math(term) = {
// eval(term.text, mode: "math")
// " = "
let _eval_term = term.text.replace(":", "/").replace("dot", "*")
[#eval(_eval_term, mode: "code")]
}
#let repeat(n, sep: pagebreak, body) = {
for i in range(n) {
if i > 0 {
sep()
}
body
}
}
#let pnup(n, body) = {
grid(
columns: calc.ceil(calc.sqrt(n)),
..for i in range(n) {
([#body],)
},
)
}
|
https://github.com/kom113/typst-examples | https://raw.githubusercontent.com/kom113/typst-examples/master/README.md | markdown | # typst-examples
Small personal code snippets written in Typst.
|
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/spread-12.typ | typst | Other | // Spread at beginning.
#{
let f(..a, b) = (a, b)
test(repr(f(1)), "((), 1)")
test(repr(f(1, 2, 3)), "((1, 2), 3)")
test(repr(f(1, 2, 3, 4, 5)), "((1, 2, 3, 4), 5)")
}
|
https://github.com/TGM-HIT/typst-protocol | https://raw.githubusercontent.com/TGM-HIT/typst-protocol/main/src/l10n.typ | typst | MIT License | #import "@preview/linguify:0.4.0": set-database as _set_database, linguify
/// *Internal function.* Initializes Linguify with the template's translation file.
///
/// -> content
#let set-database() = _set_database(toml("l10n.toml"))
#let supervisor = linguify("supervisor")
#let grade = linguify("grade")
#let version = linguify("version")
#let started = linguify("started")
#let finished = linguify("finished")
#let figure = linguify("figure")
#let table = linguify("table")
#let listing = linguify("listing")
#let contents = linguify("contents")
#let bibliography = linguify("bibliography")
#let list-of-figures = linguify("list-of-figures")
#let list-of-tables = linguify("list-of-tables")
#let list-of-listings = linguify("list-of-listings")
#let glossary = linguify("glossary")
|
https://github.com/stephane-klein/typst-sklein-resume-poc | https://raw.githubusercontent.com/stephane-klein/typst-sklein-resume-poc/main/README.md | markdown | # Proof of concept repository to test Typst to build my future resume
Warning: this project is under development; several parts are still rough.
```sh
$ mise install
$ typst --version
typst 0.10.0 (70ca0d25)
$ ./scripts/watch.sh
$ evince ./resume.pdf
```
I was inspired by the https://github.com/mintyfrankie/brilliant-CV project.
|
|
https://github.com/teamdailypractice/pdf-tools | https://raw.githubusercontent.com/teamdailypractice/pdf-tools/main/typst-pdf/thirukkural-thankyou/001-tty.typ | typst | #set page("a4")
#set text(
font: "TSCu_SaiIndira",
size: 16pt
)
#set align(center)
நன்றி
\
\
#set align(left)
#set text(
font: "TSCu_SaiIndira",
size: 14pt
)
#show link: underline
1. #link("https://www.tamilvu.org/")[தமிழ் இணையக் கல்விக்கழகம்]
2. #link("https://www.tamilvu.org/library/l2100/html/l2100ind.htm")[தமிழ் இணையக் கல்விக்கழகம் - திருக்குறள்]
3. #link("https://www.tamildigitallibrary.in/")[தமிழிணையம் - மின்னூலகம்]
4. #link("https://www.projectmadurai.org/pmworks.html")[மதுரை தமிழ் இலக்கிய மின்தொகுப்புத் திட்டம்]
5. #link("https://www.azhagi.com/")[அழகி - மென்பொருள் செயலி]
\
இந்த முயற்சி, அனைவரும் திருக்குறளை\
+ எளிதில் படிக்க
+ நகல் எடுக்க
+ ஒரு அதிகாரத்தை மட்டும் நகல் எடுத்து படிக்க
+ தாளில் படிக்க வசதியாக, எழுத்து அளவு பெரியதாக
\
+ திருக்குறள் முழுவதும், தமிழ் இணையக் கல்விக்கழகத்தின் வலைத்தளத்தில் இருந்து எடுக்கப்பட்டது.\
+ சில இடங்களில், படிப்பதற்கு எளிதாக மாற்றப்பட்டு உள்ளது.\
#set text(
font: "TSCu_SaiIndira",
size: 16pt
)
#set align(center)
நன்றி
\
#set text(
font: "TSCu_SaiIndira",
size: 14pt
)
#set align(left)
+ திருக்குறளுக்கு உரை எழுதிய, உரை ஆசிரியர்கள் அனைவருக்கும். \
+ இனி காலத்தின் மாற்றத்திற்கு ஏற்ப, தமிழிலும், ஆங்கிலத்திலும், பிற மொழிகளிலும் உரை எழுதப்போகும் ஆசிரியர்கள் அனைவருக்கும். \
\
நிறை-உடைமை நீங்காமை வேண்டின்; பொறை-உடைமை,
போற்றி ஒழுகப்-படும்.
|
|
https://github.com/typst-cn/awesome-typst-cn | https://raw.githubusercontent.com/typst-cn/awesome-typst-cn/master/README.draft.md | markdown | # Awesome Typst 中文版
列表收集了 [Typst](https://github.com/typst/typst) 相关的资源,扩展,应用等。
本列表由 [Typst 中文社区](https://typst.cn) 维护,欢迎提交 PR 一起维护。微信群:
<img src="./assets/wechat-qrcode.jpeg" style="height:500px"/>
<!-- 目录由 https://github.com/pbzweihander/markdown-toc 工具生成 -->
<!-- markdown-toc -->
## 官方项目链接
- [typst.app](https://typst.app): Typst 官网和 Typst 在线 App.
- [Typst 文档](https://typst.app/docs)
- [GitHub](https://github.com/typst/typst)
- [博客](https://typst.app/blog/)
- 社交媒体: [Discord] [Instagram] [LinkedIn] [Twitter]
[discord]: https://discord.gg/2uDybryKPe
[instagram]: https://instagram.com/typstapp/
[linkedin]: https://www.linkedin.com/company/typst/
[twitter]: https://twitter.com/typstapp/
## 文档和教程
### 文档
- [Typst中文文档](https://github.com/Zuttergutao/Typstdocs-Zh-CN-): 随便翻译的Typst中文文档
## 第三方工具
### 工具
- [yank](https://addons.mozilla.org/en-US/firefox/addon/yank/):Firefox 扩展,用到了 typst 作为内容输出格式支持 , Yank URL and title of current tab, format to a chosen markup language, and copy to clipboard (supports typst link format)
- [typst-bot](https://github.com/mattfbacon/typst-bot):discord 机器人,支持 typst 渲染 ,A discord bot to render Typst code
- [typst-fmt](https://github.com/astrale-sharp/typst-fmt/): typ 文件格式化工具,An in development Typst formatter (PR welcomed)
- [typst-live](https://github.com/ItsEthra/typst-live): 基于浏览器的 PDF 自动刷新工具,Hot reloading of pdf in web browser
- [typst-pandoc](https://github.com/lvignoli/typst-pandoc): Pandoc 集成 ,Typst custom reader and writer for Pandoc
### 编辑器
- [Drafts](https://github.com/limads/drafts): Typst 的编辑器(WIP),Drafts is an editor for technical writing that leverages the Typst typesetting system.
- [typster](https://github.com/wflixu/typster): Tauri 编写的 typst 阅读和编辑器,typst reader and editor
## 模板
### 官方
- [typst/templates](https://github.com/typst/templates): 官方提供的模板,可以下载,也可以直接在 typst.app 在线服务中使用
### 中国大学论文
- [pkuthss-typst](https://github.com/lucifer1004/pkuthss-typst): 北京大学学位论文模板,Typst template for dissertations in Peking University (PKU).
- [BUAA-typst](https://github.com/cherichy/BUAA-typst): 北京航空航天大学学位论文模板
- [bupt-typst](https://github.com/QQKdeGit/bupt-typst): 北京邮电大学本科学士学位论文模板
- [HUST-typst-template](https://github.com/werifu/HUST-typst-template): 用于华科毕业设计(本科)的 typst 模板。
- [SHU-Bachelor-Thesis-Typst](https://github.com/shuosc/SHU-Bachelor-Thesis-Typst): 上海大学本科毕业论文 typst 模板 (开发ing)
- [sysu-thesis-typst](https://github.com/howardlau1999/sysu-thesis-typst): 中山大学学位论文 Typst 模板
- [ZJGSU-typst-template](https://github.com/jujimeizuo/ZJGSU-typst-template): 浙江工商大学毕业设计(本科)的 typst 模板。
- [CQUPTypst](https://github.com/jerrita/CQUPTypst): 一个 Typest 模板,但是大专
- [zjut-report-typst](https://github.com/zjutjh/zjut-report-typst): 浙江工业大学一些实验报告的 Typst 模板, Some report templates of Zhejiang University of Technology.
- [HIT-Thesis-Typst](https://github.com/chosertech/HIT-Thesis-Typst): 适用于哈尔滨工业大学学位论文的 Typst 模板
### 论文
- [typst-apa7ish](https://github.com/mrwunderbar666/typst-apa7ish): APA格式第七版模板, Typst Template that (mostly) complies with APA7 Style (Work in Progress).
- [ieee-typst-template](https://github.com/bsp0109/ieee-typst-template): IEEE 论文的模板,A template to write IEEE Papers in Typst
- [simple-typst-thesis](https://github.com/zagoli/simple-typst-thesis): 编写简单论文的模板,A template useful for writing simple thesis in Typst
- [SimplePaper](https://github.com/1bitbool/SimplePaper): SimplePaper 是 Typst 的模版,用于生成简单的论文。
- [typst-templates](https://github.com/eigenein/typst-templates): 个人编写的模板,Templates for Typst
- [typst-templates](https://github.com/haxibami/typst-template): 个人编写的模板,My typst templates
- [typstry](https://github.com/qjcg/typstry): 个人编写的模板,A Tapestry of Typst Templates & Examples
- [tyspt-mla9-template](https://github.com/wychwitch/tyspt-mla9-template): MLA 第九版模板,An MLA 9th edition template
- [writable-gm-screen-inserts](https://github.com/LLBlumire/writable-gm-screen-inserts):类似游戏 cheat sheet, Writable Game Master Screen Inserts
- [simple-typst-thesis](https://github.com/zagoli/simple-typst-thesis): 简单的论文模板 ,This template defines a frontpage with a centered title and author informations, and an optional logo.
### 信件
- [typst-din-5008-letter](https://github.com/ludwig-austermann/typst-din-5008-letter): DIN 5008 标准的商务文稿模板, A template for DIN 5008 inspired typst letter. Furthermore, there is a envelope template.
### 笔记
- [vex-typst-notebook](https://github.com/frosty884/vex-typst-notebook): Vex 机器人比赛工程笔记模板 ,This repository contains an open source VEX Robotics notebook template for use with the Typst note-taking app.
- [typst-notebook](https://github.com/Fr4nk1in-USTC/typst-notebook): 简单的笔记模板 ,A simple template for taking notes in Typst.
### 任务 工作 作业
- [assignment-template](https://github.com/AntoniosBarotsis/typst-assignment-template): 简单的作业模板,A simple assignment template
- [typst-assignment-template](https://github.com/astrale-sharp/typst-assignement-template.git): 作业模板,Yet another simple assignment template
- [typst-homework-template](https://github.com/OriginCode/typst-homework-template): 作业模板,A simple homework template inspired by the LaTeX homework template by <NAME>
- [typst-assignment-template](https://github.com/gRox167/typst-assignment-template.git): 作业模板,Yet another simple assignment template with a cover and several useful math symbols.
### 简历
- [uniquecv-typst](https://github.com/gaoachao/uniquecv-typst): 一个使用Typst编写的简历模板,基于uniquecv。
- [typst-cv-miku](https://github.com/ice-kylin/typst-cv-miku): 简历模板,有多种版本,包括中文 ,This is a simple, elegant, academic style CV template for typst. Support for English and Chinese (and more).
- [alta-typst](https://github.com/GeorgeHoneywood/alta-typst): 一份简历模板,参考 `AltaCV`,A simple Typst CV template, inspired by AltaCV by <NAME>
- [attractive-typst-resume](https://github.com/Harkunwar/attractive-typst-resume):一份有吸引力的简历模板, A modern looking, attractive CV/Resume template by <NAME>
- [moderncv.typst](https://github.com/giovanniberti/moderncv.typst): 参考 `moderncv` 的简历模板 ,A CV template inspired by LaTeX's `moderncv`
- [resume.typ](https://github.com/wusyong/resume.typ): 简历模板,Simple and ergonimic template to generate resume and CV
- [simplecv](https://github.com/LaurenzV/simplecv): 一份简单的简历模板,SimpleCV is a simple and elegant CV template written in Typst
- [typst-cv-template](https://github.com/skyzh/typst-cv-template): 好像是模板作者自己的简历,Chi CV Template (For Typst)
- [typst-resume-template](https://github.com/bamboovir/typst-resume-template): 一份简历模板,Aesthetic style inspired by the Awesome-CV project
- [vercanard](https://github.com/elegaanz/vercanard): 一份彩色的简历模板,A colorful resume template for Typst.
- [awesomeCV-Typst](https://github.com/mintyfrankie/awesomeCV-Typst) - 一份参考 `Awesome-CV` 的简历模版,支持多语言简历管理, An opinionated, relived CV template inspired by the LaTeX `Awesome-CV` project, but with multilingual support and more
- [Chinese-Resume-in-Typst](https://github.com/OrangeX4/Chinese-Resume-in-Typst): 使用 Typst 编写的中文简历, 语法简洁, 样式美观, 开箱即用, 可选是否显示照片
### 学术海报
- [typst-poster](https://github.com/pncnmnp/typst-poster): 一份学术海报模板,An academic poster template
### 演示文稿
- [typst-slides](https://github.com/andreasKroepelin/typst-slides): 创建演示文稿的模板,A template for creating slides in Typst
## 库和工具类
### 格式 工具
- [typst-index](https://github.com/RolfBremer/typst-index): 创建索引的工具库, Automatically create a handcrafted index in typst. This typst component allows the automatic creation of an Index page with entries that have been manually marked in the document by its authors. This, in times of advanced search functionality, seems somewhat outdated, but a handcrafted index like this allows the authors to point the reader to just the right location in the document.
- [typst-tablex](https://github.com/PgBiel/typst-tablex): 表格组件, More powerful and customizable tables in Typst.
- [typst-diagbox](https://github.com/PgBiel/typst-diagbox): 对角线分割符,A library for diagonal line dividers in Typst tables
- [typst-ansi_render](https://github.com/8LWXpg/typst-ansi_render): ANSI 转义序列渲染,ANSI Escape Sequence Renderer
### 图形 色彩
- [typst-canvas](https://github.com/johannes-wolf/typst-canvas): Typst Canvas 库
- [typst-palette](https://github.com/kaarmu/typst-palette): 调色板工具包,A package of color palettes for Typst
- [typst-plot](https://github.com/johannes-wolf/typst-plot): 绘图库,A library for plotting line charts
- [typst-boxes](https://github.com/lkoehl/typst-boxes): 可以绘制彩色的文本框,还有一种可以旋转的便利贴样式,A library to draw colorful boxes.
- [typst-color-emoji](https://github.com/silent-dxx/typst-color-emoji):emoji 库, A simple library for drawing color emoji for Typst. Drawing using twemoji and openmoji open-source emoji libraries.
### 语言 文本
- [notes.typ](https://github.com/tbug/notes.typ): 脚注,尾注,Footnotes, endnotes, notes.
- [leipzig-gloss](https://gitea.everydayimshuflin.com/greg/typst-lepizig-glossing): 莱比锡标注系统支持库,A library that provides primitives for creating glossing rules according to Leipzig.
- [typst-ipa](https://github.com/imatpot/typst-ipa): ASCII 码,国际音标转换,🔄 ASCII / IPA conversion for Typst
### 数学
- [Formal-Methods-Typst](https://github.com/txtxj/Formal-Methods-Typst): 用于书写形式化中数理逻辑证明题
- [commutative-diagrams](https://gitlab.com/giacomogallina/typst-cd):交换图/交换图表库, A library for creating commutative diagrams
- [typst-theorems](https://github.com/sahasatvik/typst-theorems): 一个辅助编号的库,A library for creating numbered theorem environments
- [typst-undergradmath](https://github.com/johanvx/typst-undergradmath): `undergradmath` Typst 移植,A Typst port of [undergradmath](https://gitlab.com/jim.hefferon/undergradmath)
- [typst-undergradmath-zh](https://github.com/AlexanderMisel/typst-undergradmath-zh) : Typst大学数学,一个大学数学常用符号在Typst中如何输入的总结,A Typst port of undergradmath
### 物理 化学 电学
- [circuitypst](https://github.com/fenjalien/circuitypst): 移植 `circuitikz` 实现电路图形的支持 ,A port of circuitikz to Typst using typst-canvas
- [typst-physics](https://github.com/Leedehai/typst-physics): 物理符号库,A library for usual physics notations, e.g. vectors, matrices, derivatives, Dirac brakets, tensors, isotopes
### 杂项
- [typst-timetable](https://github.com/ludwig-austermann/typst-timetable): 时刻表模板 ,A typst template for timetables
- [typst-algorithms](https://github.com/platformer/typst-algorithms): 用于编写算法,为代码的工具包,Typst module for writing algorithms. Use the algo function for writing pseudocode and the code function for writing codeblocks with line numbers.
- [typst-truthtable](https://github.com/PgBiel/typst-truthtable): 生成真值表的库 , A library for generating truth tables
- [typst-raytracer](https://github.com/SeniorMars/typst-raytracer): raytracer in typst
## 编程
- [jupyter2typst](https://github.com/dermesser/jupyter2typst): 将 `jupyter notebooks` 转换成 `typst ` 代码的工具 ,A handy tool for converting jupyter notebooks into typst code for producing PDFs.
- [typst.ts](https://github.com/Myriad-Dreamin/typst.ts): 在 javascript 环境中渲染 typ 文件 ,Typst.ts allows you to independently run the Typst compiler and exporter (renderer) in your browser.
- [inktyp](https://github.com/herlev/inktyp): Inkscape 插件,用于在 inkscape 中插入 typst 公式, Insert and edit typst equations in inkscape.
- [typst-egui](https://github.com/mattfbacon/typst-egui): 在 egui 中显示 Typst 文档 ,Very restricted proof-of-concept for showing Typst documents inside egui.
- [typst-py](https://github.com/messense/typst-py): Typst 的 Python 绑定, Python binding to typst, a new markup-based typesetting system that is powerful and easy to learn.
- [Typst xmake](https://github.com/star-hengxing/typst-xmake): 使用 xmake 编译 typst ,实现伪热更新 , Use xmake as build system to compile typst to pdf.
- [leetcode.typ](https://github.com/lucifer1004/leetcode.typ): 在 Typst 中刷 Leetcode 题目
## 编辑器集成插件
### 通用
- [frozolotl/tree-sitter-typst](https://github.com/frozolotl/tree-sitter-typst):TreeSitter 插件, A tree-sitter grammar with a focus on correctness.
- [SeniorMars/tree-sitter-typst](https://github.com/SeniorMars/tree-sitter-typst):TreeSitter 插件, A TreeSitter parser for the Typst File Format
### Emacs
- [typst-mode.el](https://github.com/Ziqi-Yang/typst-mode.el): Emacs 插件,An Emacs major mode for the `typst` markup-based typesetting system
### 语言服务 LSP
- [typst-lsp](https://github.com/nvarner/typst-lsp): typst lsp, rust 编写,A brand-new language server for Typst, plus a VS Code extension
### Obsidian
- [obsidian-typst](https://github.com/fenjalien/obsidian-typst): obsidian 插件,Renders typst code blocks in Obsidian into images using Typst through the power of WASM!
### Vim
- [typst.nvim](https://github.com/SeniorMars/typst.nvim): nvim 插件, WIP. Goals: Treesitter highlighting, snippets, and a smooth intergration with neovim
- [typst.vim](https://github.com/kaarmu/typst.vim): Vim 插件,Vim plugin for Typst
### VSCode
- [Typst LSP VS Code Extension](https://marketplace.visualstudio.com/items?itemName=nvarner.typst-lsp) ,VSCode 插件
- [typst-preview-vscode](https://github.com/Enter-tainer/typst-preview-vscode): VSCode Typst 预览插件, Preview your Typst files in vscode instantly
### 其他
- [typst-action](https://github.com/lvignoli/typst-action): Github Action 支持,Build Typst documents using GitHub actions
|
|
https://github.com/TycheTellsTales/typst-pho | https://raw.githubusercontent.com/TycheTellsTales/typst-pho/main/tests/registerBoard/test.typ | typst | #import "../../lib.typ": boards
= Pre-Registration
#context boards.get()
#context boards.register("Test123", ("Test1", "Test2", "Test3"))
= Post-Registration
#context boards.get()
|
|
https://github.com/thornoar/typst-libraries | https://raw.githubusercontent.com/thornoar/typst-libraries/master/drawing.typ | typst | #import "@preview/cetz:0.2.2" as cz
#import "@preview/fletcher:0.4.5" as fr
|
|
https://github.com/404Wolf/stainless-technical-breifing-natug | https://raw.githubusercontent.com/404Wolf/stainless-technical-breifing-natug/main/README.md | markdown | # NATuG Technical Breifing
A technical breifing of [NATuG](https://github.com/natug3/natug), a nucleic acid nanotube graphing desktop application, made in preperation for a StainlessAPI interview.
## Building
To build the PDF, run `nix build github:404Wolf/stainless-technical-breifing-natug` (or, if you don't care about reproducibility guarantees, `make`). This is a Typst project that also has some Mermaid plots.
|
|
https://github.com/Roger-luo/tu | https://raw.githubusercontent.com/Roger-luo/tu/main/README.md | markdown | MIT License | # Tu
Collection of drawing tools for typst, built on top of [cetz](https://github.com/johannes-wolf/cetz/)
|
https://github.com/maxgraw/bachelor | https://raw.githubusercontent.com/maxgraw/bachelor/main/apps/document/src/2-theory/web-component.typ | typst | Web Components sind eine Reihe an Web APIs, welche es ermöglichen abgekapselte und wiederverwendbare Komponenten in Webdokumenten sowie Webanwendungen zu erstellen. Die Technologie besteht aus mehreren Komponenten, welche teilweise einzelnd oder in Kombination verwendet werden können. Die drei Kernkomponenten sind Custom Elements, Shadow DOM und HTML Templates @web-components-introduction.
=== Custom Elements
Custom Elements ermöglichen es, eigene voll funktionsfähige DOM-Elemente zu erstellen. Durch die Definition eines Custom Elements kann ein Element korrekt konstruiert werden und es wird festgelegt, wie sich Elemente dieser Klasse bei Änderungen verhalten sollen @web-components-introduction @html-spec.
Ein Custom Element wird als JavaScript-Klasse definiert, die von einem HTMLElement abgeleitet wird. Die Klasse besitzt verschiedene vordefinierte Methoden, die das Verhalten des Elements definieren. Die Methode „connectedCallback“ ermöglicht das Ausführen von Code, wenn das Element dem Dokument hinzugefügt wird, während die Methode „disconnectedCallback“ das Verhalten beim Entfernen des Elements definiert @html-spec.
In @customElement-listing wird eine einfache Klasse „MyCustomElement“ definiert, die von HTMLElement abgeleitet ist. Hierbei werden die zuvor erläuterten Methoden „connectedCallback“ und „disconnectedCallback“ implementiert und anschließend die Klasse über die Methode „customElements.define“ registriert. Durch die Registrierung wird das Custom Element als benutzerdefiniertes DOM-Element verfügbar @html-spec.
#let code = ```js
class MyCustomElement extends HTMLElement {
constructor() {
super();
}
connectedCallback() {
}
disconnectedCallback() {
}
}
customElements.define("my-custom-element", MyCustomElement);
```
#figure(
code,
caption: [Definition eines Custom Elements in JavaScript]
) <customElement-listing>
=== Shadow DOM
Shadow DOM ist ein wesentlicher Bestandteil der Webkomponenten-Technologie, der es ermöglicht, die internen Implementierungsdetails von Webkomponenten zu kapseln und somit den Stil und das Verhalten dieser Komponenten vor äußeren Einflüssen zu schützen. Durch die Verwendung des Shadow DOM kann ein separates DOM für jede Komponente erstellt werden. Dieser Mechanismus sorgt dafür, dass die Strukturen, Stile und Skripte innerhalb des Shadow DOM nicht mit dem restlichen Dokument kollidieren oder versehentlich beeinflusst werden @web-components-introduction @dom-spec.
Um die Verwendung von Shadow DOM zu verdeutlichen, wurde die zuvor definierte "MyCustomElement" Klasse in @shadowDom-listing erweitert. In der „connectedCallback“-Funktion der Klasse wird ein Shadow Root über die Methode „attachShadow“ an die Instanz des Custom Elements angehängt. Anschließend wird ein span-HTML-Element als Wrapper erstellt und diesem eine Klasse zugewiesen. Ein Style-Element wird erstellt und der Style des Wrappers definiert. Abschließend werden das Style- und das Wrapper-Element dem Shadow Root hinzugefügt. Dadurch wird eine isolierte DOM-Struktur sowie ein eigener Stil für das Custom Element erstellt.
#let code = ```js
class MyCustomElement extends HTMLElement {
constructor() {
super();
}
connectedCallback() {
const shadow = this.attachShadow({ mode: "open" });
const wrapper = document.createElement("span");
wrapper.setAttribute("class", "wrapper");
const style = document.createElement("style");
style.textContent = `
.wrapper {
position: relative;
}
`;
shadow.appendChild(style);
shadow.appendChild(wrapper);
}
}
```
#figure(
code,
caption: [Erweiterung des Custom Elements mit Shadow DOM]
) <shadowDom-listing>
=== HTML Templates
HTML Templates sind eine weitere wichtige Technologie innerhalb der Web Components-Spezifikation, die zur Strukturierung von wiederverwendbaren HTML-Blöcken innerhalb von Webanwendungen verwendet wird. Sie ermöglichen es Markup zu definieren, das im Dokument inaktiv bleibt, bis es durch JavaScript instanziiert und als Teil einer Webkomponente verwendet wird @web-components-introduction @html-spec. Für diese Arbeit sind HTML Templates von geringer Relevanz und werden daher nicht weiter erläutert. |
|
https://github.com/protohaven/printed_materials | https://raw.githubusercontent.com/protohaven/printed_materials/main/common-policy/filing_a_tool_report.typ | typst | = Filing a Tool Report
If you are using a tool, and the tool becomes unsafe, damaged, or is not working properly, you must notify a tech. The tech may instruct you to submit a tool report:
https://airtable.com/appbIlORlmbIxNU1L/shrluff2WSzy8c3xd
Notifying the tech will help us keep signage up to date, and make sure the users who come in after you have all the information they need to use the tool safely, even if they don't use Discord.
|
|
https://github.com/kokkonisd/typst-phd-template | https://raw.githubusercontent.com/kokkonisd/typst-phd-template/main/src/lib.typ | typst | The Unlicense | #import "colors.typ": *
#import "common.typ": *
#import "presentation.typ": *
#import "report.typ": *
|
https://github.com/Nikudanngo/typst-ja-resume-template | https://raw.githubusercontent.com/Nikudanngo/typst-ja-resume-template/main/template.typ | typst | MIT License | #let systemFontSize = 8pt
#let nameFontSize = 16pt
#let inputFontSize = 10pt
#let addSpace(input) = {
box(
[#pad(left:1cm,[#input])],
)
}
#let 私(性読み: "",名読み: "", 性: "",名: "",生年月日: "",年齢: 0) = {
stack(
place(
top + right,
dy: -10pt,
datetime.today().display(
"[year]年[month]月[day]日現在",
)
),
rect(
stroke: (
bottom: none,
top: 1.5pt,
left: 1.5pt,
right: 1.5pt
),
height: auto,
width: 100%,
[
#grid(
columns: (1.5cm,4cm,1fr),
[ふりがな],
[#align(center,性読み)],
[#align(start,名読み)]
)
]
),
line(
length: 100%,
stroke: (
dash:"dashed",
)
),
rect(
stroke: (
bottom: 0.5pt,
top: none,
left: 1.5pt,
right: 1.5pt
),
height: auto,
width: 100%,
[
#align(top,
grid(
columns: (1.5cm,4cm,1fr),
[氏 #h(0.6cm)名],
[
#pad(y: 0.4cm,align(center + horizon,text(nameFontSize,性)))
],
[
#pad(y: 0.4cm,align(start + horizon,text(nameFontSize,名)))
]
)
)
]
),
rect(
stroke: (
bottom: 0.5pt,
top: none,
left: 1.5pt,
right: 1.5pt
),
height: auto,
width: 100%,
[
#align(start + top,
grid(
columns: (1.5cm,1fr),
[生年月日],
pad(y: 0.2cm,[#addSpace(text(inputFontSize,[#生年月日 生 #h(0.6cm) (満 #h(0.5em) #年齢 才)]))])
)
)
]
)
)
}
#let 証明写真(写真: "") = {
set text(size: 7pt)
pad(
bottom: 0.3cm,
left: 0.4cm,
box(
stroke: (
dash:"dashed",
),
height: 4cm,
width: 3cm,
[
#if (写真 == ""){
align(
center + horizon,
[
写真を貼る位置\
(縦 40mm, 横 30mm)
]
)
} else {
image(写真, width: 3cm, height: 4cm)
}
]
)
)
}
#let アドレス(住所ふりがな1: "", 住所1: "",住所ふりがな2: "", 住所2: "",郵便番号1: "",郵便番号2: "", 電話番号1:"",Email1:"",電話番号2:"",Email2:"") = {
stack(
grid(
columns: (5fr,2fr),
[
#stack(
rect(
stroke: (
bottom: none,
top: none,
left: 1.5pt,
right: 0.5pt
),[
#grid(
columns: (1.5cm,1fr),
[ふりがな],
[#align(center,住所ふりがな1)]
)
]
),
line(stroke: (dash:"dashed"), length: 100%)
)
],
[
#rect(
width: 100%,
stroke: (
bottom: 0.5pt,
top: 1.5pt,
left: none,
right: 1.5pt
),[
電話 #h(10pt) #電話番号1
]
)
]
),
grid(
columns: (5fr,2fr),
[
#rect(
width: 100%,
height: 1.8cm,
stroke: (
bottom: 0.5pt,
top: none,
left: 1.5pt,
right: 0.5pt
),[
#if (郵便番号1 == "") {
[現住所 (〒 #h(20pt) - #h(20pt))]
} else {
[現住所 (〒 #text(tracking: 1pt,systemFontSize,郵便番号1))]
}
#pad(y: 0.2cm ,align(center,text(inputFontSize,住所1)))
]
)
],
[
#rect(
width: 100%,
height: 1.8cm,
stroke: (
bottom: 0.5pt,
top: none,
left: none,
right: 1.5pt
),[
E-mail
#pad(y: 0.3cm ,align(center,Email1))
]
)
]
),
grid(
columns: (5fr,2fr),
[
#stack(
rect(
stroke: (
bottom: none,
top: none,
left: 1.5pt,
right: 0.5pt
),[
#grid(
columns: (1.5cm,1fr),
[ふりがな],
[#align(center,住所ふりがな2)]
)
]
),
line(stroke: (dash:"dashed"), length: 100%)
)
],
[
#rect(
width: 100%,
stroke: (
bottom: 0.5pt,
top: none,
left: none,
right: 1.5pt
),[
電話 #h(10pt) #電話番号2
]
)
]
),
grid(
columns: (5fr,2fr),
[
#rect(
width: 100%,
height: 1.8cm,
stroke: (
bottom: 1.5pt,
top: none,
left: 1.5pt,
right: 0.5pt
),[
#if (郵便番号2 == "") {
[連絡先 (〒 #h(20pt) - #h(20pt))]
} else {
[連絡先 (〒 #text(tracking: 1pt,systemFontSize,郵便番号2))]
}
#pad(y: 0.2cm ,align(center,text(inputFontSize,住所2)))
]
)
],
[
#rect(
width: 100%,
height: 1.8cm,
stroke: (
bottom: 1.5pt,
top: none,
left: none,
right: 1.5pt
),[
E-mail
#pad(y: 0.3cm ,align(center,Email2))
]
)
]
)
)
}
#let 学歴(年:"", 月:"",学歴: "") = {
set text(inputFontSize)
grid(
columns: (1.5cm,0.8cm,1fr),
[
#align(center,年)
],
[
#align(center,月)
],
[
#if (年 == "" and 月 == "" and 学歴 == "") {
align(center,[学歴])
} else {
align(start + horizon,[#h(5pt)#学歴])
}
]
)
}
#let 職歴(年:"", 月:"",職歴:"") = {
set text(inputFontSize)
grid(
columns: (1.5cm,0.8cm,1fr),
[
#align(center,年)
],
[
#align(center,月)
],
[
#if (年 == "" and 月 == "" and 職歴 == "") {
align(center,[職歴])
} else {
align(start + horizon,[#h(5pt)#職歴])
}
]
)
}
#let 資格(年:"", 月:"",資格:"") = {
set text(inputFontSize)
grid(
columns: (1.5cm,0.8cm,1fr),
[
#align(center,年)
],
[
#align(center,月)
],
[
#align(start + horizon,[#h(5pt)#資格])
]
)
}
#let 以上() = {
set text(inputFontSize)
grid(
columns: (1.5cm,0.8cm,1fr),
[],
[],
[
#align(end + horizon,[以上#h(2cm)])
]
)
}
// mode: "学歴・職歴" or "資格"
#let 経歴(children,hegithLength: 12.6cm,columns: 0,mode:"") = {
stack(
box(
stroke: (
bottom: 1.5pt,
top: 1.5pt,
left: 1.5pt,
right: 1.5pt
),
height: hegithLength,
width: 100%,
[
#grid(
columns: (1.5cm,0.8cm,1fr),
[
#rect(
stroke: (
bottom: none,
top: none,
left: none,
right: 0.5pt
),
height: 100%,
width: 100%,
[
#align(center,[年])
]
)
],
[
#rect(
stroke: (
bottom: none,
top: none,
left: none,
right: 0.5pt
),
height: 100%,
width: 100%,
[
#align(center,[月])
]
)
],
[
#rect(
width: 100%,
height: 100%,
stroke: (
bottom: none,
top: none,
left: none,
right: none,
),
align(center,[
#if (mode == "学歴・職歴") {
[学歴・職歴(各別にまとめて書く)]
} else if (mode == "資格") {
[免許・資格]
}]
)
)
]
)
#place(
start + top,
dy: 10pt,
[
#let n = 0
#while n < columns {
[#pad(y: 0.26cm,line(stroke: 0.5pt, length: 100%))]
n = n + 1
}
]
)
#place(
top + left,
dy: 0.9cm,
children
)
]
),
)
}
#let 志望動機(children) = {
stack(
rect(
stroke: (
bottom: 1.5pt,
top: 1.5pt,
left: 1.5pt,
right: 1.5pt
),
height: 5cm,
width: 100%,
[
志望の動機、特技、好きな学科、アピールポイントなど
#linebreak()
#set text(inputFontSize)
#children
]
)
)
}
#let 本人希望(children) = {
stack(
rect(
stroke: (
bottom: 1.5pt,
top: 1.5pt,
left: 1.5pt,
right: 1.5pt
),
height: 5cm,
width: 100%,
[
本人希望記入欄(特に給料・職種・勤務時間・勤務地・その他についての希望があれば記入)
#linebreak()
#set text(inputFontSize)
#children
]
)
)
}
|
https://github.com/phinixplus/docs | https://raw.githubusercontent.com/phinixplus/docs/master/source/cpu/intro.typ | typst | Other | #let intro = [
= Introduction <heading-introduction>
This document is the official specification for the PHINIX+ Central Processing
Unit. It is intended to explain in detail the capabilities and the layout of
the processor in an abstract manner in order to remain agnostic of the possible
implementations of it. While this document doesn't try to make any assertions
of a “correct” sort of implementation, the architecture was built with the
intention to exploit pipelining to gain in performance.
== Ancestral History
PHINIX+ is a "constructed" acronym which stands for _Pipelined High-speed
INteger Instruction eXecutor_. The "+" in the name is meant to signify
advancements from a previously designed processor, PHINIX, from which most ideas
were directly taken and improved upon. PHINIX used 16-bit word-addressing, which
turned out to be unwieldy and did not deliver in terms of memory capacity.
PHINIX+ expands to 32 bits while also adding byte-addressing to simplify
integration with the existing computing paradigms, all based around 8-bit units.
== Influence Sources
PHINIX+ mainly derives from the _Reduced Instruction Set Computing_ (RISC)
paradigm. However, that does not mean it follows the established norm for a RISC
processor, opting instead for a more expansive set of instructions, mainly
concerning the improvement of flags management and bit math. The core principles
of RISC, like the load-store paradigm and the general usage nature of the
provided registers, do exist in PHINIX+ but not without being improved upon.
One of the most apparent features a programmer wishing to use PHINIX+ encounters
is the dual register file. This is a feature influenced directly by the Motorola
68000 series of processors. Though that processor was in no way following RISC,
the adoption of the dual register file was due to similar reasons. As a
result, PHINIX+ has been lovingly nicknamed the _Actually-RISC#emoji.tm m68k_
#footnote[Disclaimer, not actually a trademark.].
== Things Done Differently
As mentioned prior, PHINIX+ mostly follows RISC but has changed how a few
things work in the interest of exploration. Many of the decisions taken could be
considered "unorthodox", but one of the most important premises of this project
is to try new ways of doing things for the educational value. Great care has
been taken to devise methods that improve performance using the minimum amount
of required hardware. Following is a list of the most important novel features
of the CPU:
#show table.cell: set align(center + horizon)
#figure(table(columns: (1fr, 2fr),
table.header([Feature], [Justification]),
[Dual register files. #footnote[As mentioned prior in relation to the m68k.]
(The separation of the registers into data and address register files.)],
[Allows for a trivial auto-increment operation, removing the need for
special hardware for the stack and other pointers. This feature also allows
for two independent operations to be executed in parallel with little
increase to the size of the implementation.],
[Condition codes register file. (The ability to use any
single-bit "flag" for any purpose.)],
[Makes operations on them a feasible prospect, reducing the amount of
branches. The now explicit nature of flag operations makes each instruction
wishing to modify them now opt-in instead of opt-out, reducing flag use.],
[Load-store instruction byte permutations. (The ability to choose a
preferred ordering for the bytes when loading or storing them.)],
[Addresses the age-old dilemma of little- VS big-endian while both making
the least significant bits of an address useful and eliminating the need
for bus errors, but doing so without requiring the system to perform
unaligned memory accesses.]
), caption: [Notable novel features of PHINIX+]) <table-novelfeatures>
]
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/hint-01.typ | typst | Other | #{
let a = 2
a = 1-a
a = a -1
// Error: 7-10 unknown variable: a-1 – if you meant to use subtraction, try adding spaces around the minus sign.
a = a-1
}
|
https://github.com/eduardz1/Bachelor-Thesis | https://raw.githubusercontent.com/eduardz1/Bachelor-Thesis/main/main.typ | typst | #import "utils/template.typ": *
#let declaration_of_originality = [
I declare to be responsible for the content I'm presenting in order to obtain
the final degree, not to have plagiarized in all or part of, the work produced
by others and having cited original sources in consistent way with current
plagiarism regulations and copyright. I am also aware that in case of false
declaration, I could incur in law penalties and my admission to final exam could
be denied
]
#let acknowledgments = [
I would like to thank my supervisor, Prof. <NAME>, and the team at
the research laboratory in Oslo, in particular the Prof. <NAME>,
Dr. <NAME> and Dr. <NAME>, for giving me the opportunity to
work on this project and for the support they provided me during the internship.
A special note of appreciation goes to <NAME> for always being
available to help us during the internship to figure out the electronics and
<NAME> and <NAME> with whom I collaborated closely on this
project.
I express my gratitude to all the friends who shared this journey and many long
study sessions over these three years with me. I am sure a big part of my
achievements is to be attributed to them. In particular, <NAME>, who has
been and still is always inviting us to study together, <NAME> whom I had
great pleasure studying and working with on several assignments, and <NAME> who always pushed me to do my best and has always been available to
explain concepts I had trouble with. I would also like to thank the friends from
Oslo who contributed to making my last semester at UniTO memorable.
Finally, I give a heartfelt thanks to my mother for always supporting me
throughout the studies and to my sister for helping me proofread my thesis.
]
#let abstract = [
In this thesis, we will talk about what digital twins are and how they can be
used in a range of scenarios, and we will introduce some concepts of the Semantic
Web that will serve as a basis for our work. We will also introduce a novel
programming language, SMOL, developed to facilitate the way to interface with
digital twins. We will talk about the work of myself and my colleagues in the
process of building the physical twin with a focus on the structure and the way
the responsibilities of the different components are modularized. Finally, we
will talk about the software components that we wrote as part of this project,
including the code to interact with the sensors and the actuators - with a focus
on the Python code and the way it's structured - and the SMOL code that serves
as a proof of concept for the automation of the greenhouse.
]
#show: project.with(
title: "Design and Development of the Digital Twin of a Greenhouse",
subtitle: "Bachelor's Thesis",
abstract: abstract,
keywords: [
Digital Twins, Raspberry Pi, SMOL, Python
],
acknowledgments: acknowledgments,
declaration-of-originality: declaration_of_originality,
affiliation: (
university: "Università degli Studi di Torino",
school: "SCUOLA DI SCIENZE DELLA NATURA",
degree: "Corso di Laurea Triennale in Informatica",
),
candidate: (name: "<NAME>", id: "947847"),
supervisor: "Prof. <NAME>",
cosupervisor: [
Prof. <NAME>\
Dr. <NAME>\
Dr. <NAME>
],
date: "Academic Year 2022/2023",
logo: "../img/logo.svg",
bibliography-file: "../works.yml",
)
#show link: underline
#counter(page).update(1)
#include "chapters/introduction.typ"
#include "chapters/tools-and-technologies.typ"
#include "chapters/digital-twins.typ"
#include "chapters/smol.typ"
#include "chapters/overview-of-the-greenhouse.typ"
#include "chapters/raspberries-responsabilities-and-physical-setup.typ"
#include "chapters/software-components.typ"
#include "chapters/conclusions.typ"
#pagebreak()
|
|
https://github.com/coco33920/.files | https://raw.githubusercontent.com/coco33920/.files/mistress/typst_templates/timeline-cv/template.typ | typst | #let item(title, content) = [
#set align(left)
#text(size: 13pt, title)\
#text(size: 11pt, weight: "light", style: "italic", content)
]
#let s = state("lower_bound")
#let timeline_entry(
start : none,
end : datetime.today().year(),
title : none,
content : none,
) = locate(loc => {
let timeline_width = 30%
let lower_bound = s.at(loc)
let total_width = datetime.today().year() - lower_bound
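  // Horizontal offset and width of the bar, as fractions of the span from lower_bound to the current year.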
let left_pad = 100% * ((start - lower_bound) / total_width)
let rect_width = 100% * (end - start) / total_width
let timeline = box(width: timeline_width,
stack(dir: btt, spacing: 4pt,
// starting text
stack(dir: ltr, h(left_pad * 0.8), str(start)),
// rectangle and line
stack(
stack(dir: ltr,
h(left_pad),
rect(width: rect_width, height: 6pt, fill: rgb("#1A54A0"))
),
line(length: 100%, stroke: 0.6pt),
),
// ending date text
stack(dir: ltr,
h((left_pad + rect_width) * 0.9),
if end != datetime.today().year() { str(end) } else { "auj." }
),
)
)
let info = box(width: 95% - timeline_width, item(title, content))
stack(dir: ltr,
timeline,
h(5%),
info,
)
})
#let entry(
title : none,
content : none,
) = pad(x: 5%, align(left, item(title, content)))
#let conf(
name : none,
github : none,
phone : none,
email : none,
last_updated : none,
lower_bound : none,
margin : none,
doc
) = {
// Configs
set document(author: name, title: "Curriculum Vitae")
set text(font: "Latin Modern Sans", lang: "fr", fallback: true)
set par(justify: true, leading: 0.55em)
set page(margin: (x: 40pt, y: 40pt))
show link: it => underline(text(style: "italic", fill: rgb("#4E7BD6"), it))
show heading.where(level: 1): it => {
stack(
text(font: "Latin Modern Roman Caps", weight: "black", size: 20pt, fill: rgb("#1A54A0"), it),
v(2pt),
line(length: 100%, stroke: 0.7pt)
)
}
assert(lower_bound != none, message: "must set lower_bound for timeline to work")
s.update(_ => lower_bound)
// Header
{
let name = text(26pt)[*#name*]
let image_text(p, t) = box[#box(height: 1em, baseline: 20%, image(p)) #t]
stack(dir:ltr,
align(horizon)[#name],
align(right)[
#if github != none [
#image_text("icons/github.svg")[ #link("https://github.com/" + github)[#github]]\
]
#if phone != none [
#image_text("icons/phone-solid.svg")[#phone]\
]
#if email != none [
#image_text("icons/envelope-regular.svg")[#email]
]
],
)
}
// Body
doc
// Footer
if last_updated != none {
set text(size: 9pt, fill: luma(80))
set align(right + bottom)
emph[ Dernière m-à-j : #last_updated ]
}
}
|
|
https://github.com/supersurviveur/typst-math | https://raw.githubusercontent.com/supersurviveur/typst-math/main/README.md | markdown | MIT License | # Typst math VS Code Extension
A VS Code extension to simplify math writing in [Typst](https://typst.app/home).
# Installation
The extension can be downloaded from the [Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=surv.typst-math).
To preview math symbols, some fonts are required, which you can either install [manually](./assets/fonts/README.md) or let the extension install them automatically on first launch (works on Windows only).
Unfortunately, you also need to set your theme colors in the extension settings, as the extension can't access theme colors directly. You can find the settings in `File > Preferences > Settings > Extensions > Typst Math`.
By default, the extension will use the monokai theme colors.
# Features
- Math snippets, commands and keywords to simplify math writing
- Math preview directly in the editor :
- Render math symbols from : \

- To : \
 \
When you edit a line containing math symbols, these symbols will be displayed as text (as in the first image) for easy editing.
# Settings
- **Colors**: Select your theme colors. They can be in `#RRGGBB` or `rgb(r, g, b)` format.
- **RenderSymbolsOutsideMath**: If set to true, the extension will render symbols everywhere in the document, not only in math equations.
- **RenderSpaces**: If set to true, the extension will render space symbols like space, wj, space.quad...
- **HideUnnecessaryDelimiters**: If set to true, the extension will hide unnecessary delimiters in math equations, like paretheses in `x^(2 x)`
- **RenderingMode**: Choose whether to render only simple symbols or also complex equations.
- **RevealOffset**: The number of lines to reveal before and after the current line.
- **CustomSymbols**: You can add or override symbols with your own. The format is
```json
{
"name": "mySymbol",
"symbol": "|some chars|",
"category": "operator"
}
```
`category` can be `keyword`, `operator`, `comparison`, `number`, `letter`, `bigletter`, `set`, `space` or `default`.
# Issues
If you encounter any issues, please report them on the [GitHub repository](https://github.com/supersurviveur/typst-math/issues).
Feel free to contribute to the project ! See the [CONTRIBUTING.md](./CONTRIBUTING.md) file for instructions on how to build the project.
# Acknowledgements
- Thanks to [Enter-tainer](https://github.com/Enter-tainer) for his advices
- Thanks to [Le-Foucheur](https://github.com/Le-Foucheur) for testing |
https://github.com/Lindronics/skipper-reference | https://raw.githubusercontent.com/Lindronics/skipper-reference/main/appendix/passage_plan.typ | typst | MIT License | #set heading(numbering: "1.")
#set page(
margin: 1.5cm
)
#set text(
size: 10pt,
)
#layout(size => {
let notes = 50%;
let info = 15%;
let latlon = 20%;
let gap = 5%;
let pat = pattern(size: (25pt, 20pt))[
#polygon(
fill: gray,
stroke: none,
(0%, 0%),
(50%, 50%),
(100%, 0%),
(100%, 50%),
(50%, 100%),
(0%, 50%),
)
]
let n = 0
while n < 3 {
n = n + 1
grid(
stroke: 2pt,
row-gutter: 5pt,
table(
rows: (30pt),
align: horizon,
table.cell([*Waypoint*]),
),
table(
columns: (latlon, info, info, notes),
rows: (auto, 30pt, auto, 30pt),
align: horizon,
row-gutter: (0pt, 5pt, 0pt, 0pt),
[*LAT/LON*], [*LOG $<$*], [*Distance $<<$*], [*Fix and Notes*],
[], [], [], table.cell(rowspan: 3, []),
table.cell(stroke: none, []), [*Time*], [*Time - HW*],
table.cell(stroke: none, []), [], [],
),
table(
columns: (gap, info, info, info, notes),
rows: (auto, 30pt, auto, 30pt),
align: horizon,
row-gutter: (0pt, 5pt, 0pt, 0pt),
table.cell(
rowspan: 4,
align: horizon,
fill: pat,
[]
),
[*CTS $<$*], [*#math.Delta LOG $<$*], [*COG $<<$*], [*Dangers and Notes*],
[], [], [], table.cell(rowspan: 3, []),
[*Min depth #sym.arrow.b*], [*#math.Delta Time*], [*Tides $<<<$*],
)
)
}
}) |
https://github.com/TOD-theses/old-paper-T-RACE | https://raw.githubusercontent.com/TOD-theses/old-paper-T-RACE/main/thesis.typ | typst | #import "@preview/fletcher:0.5.1" as fletcher: diagram, node, edge
/*
Notable differences to the Latex template:
- ToC not perfect (contains no Abstract, List of Figures, ...; Bibliography is small)
- Not such a fancy new-chapter style
- Numbered citing rerences instead of letters
- "Figure"/"Table" 1 references are always uppercased
*/
#set table(inset: 6pt, stroke: 0.4pt)
#show table.cell.where(y: 0): strong
#set document(title: "T-RACE", author: "<NAME>")
#set par(justify: true)
#set text(lang: "en", region: "UK", size: 11pt, spacing: 3pt)
#show emph: it => {
text(it, spacing: 4pt)
}
#show par: set block(spacing: 14pt)
#set page(numbering: none)
#show link: underline
#let todo(content) = {
text("[TODO: " + content + "]", fill: red)
}
// Add current chapter to page header
#set page(header: context {
let current-page = counter(page).get()
let all-headings = query(heading.where(level: 1))
let is-new-chapter = all-headings.any(m => counter(page).at(m.location()) == current-page)
if is-new-chapter {
return
}
let previous-headings = query(selector(heading.where(level: 1)).before(here())).filter(h => h.numbering != none)
if previous-headings.len() == 0 {
return
}
let heading = previous-headings.last()
str(previous-headings.len()) + "."
h(1em)
text(upper(heading.body))
line(length: 100%)
})
// not using numbering function, as this caused a trailing dot
// https://github.com/typst/typst/discussions/4574
// #set heading(numbering: (..numbers) => {
// if numbers.pos().len() <= 3 {
// numbering("1.", ..numbers)
// }
// })
#set heading(numbering: "1.")
#show heading.where(level: 1): it => {
pagebreak()
text(it, size: 26pt)
v(14pt)
}
#show heading.where(level: 2): it => {
text(it, size: 16pt)
v(6pt)
}
#show figure: it => {
it
v(30pt)
}
#show outline.entry.where(level: 1): it => {
if it.body.has("children") {
let t = it.body.children.first().text
if t.starts-with("Table") or t.starts-with("Figure") {
text(it, size: 12pt)
} else {
v(14pt, weak: true)
strong(text(it, size: 15pt))
}
} else {
it
}
}
#show outline.entry.where(level: 2): it => {
text(it, size: 12pt)
}
#let in-outline = state("in-outline", false)
#show outline: it => {
in-outline.update(true)
it
in-outline.update(false)
}
#let flex-caption(long, short) = locate(loc => if in-outline.at(loc) {
short
} else {
long
})
#import "@preview/ctheorems:1.1.2": *
#show: thmrules.with(qed-symbol: $square$)
#let theorem = thmbox("theorem", "Theorem")
#let definition = thmbox("definition", "Definition", inset: (x: 1.2em, top: 1em))
#let proof = thmproof("proof", "Proof")
#let pre = math.italic("prestate")
#let post = math.italic("poststate")
#let colls = math.italic("collisions")
#let changedKeys = math.italic("changed_keys")
#for value in range(3) {
"Four blank pages resembling the four pages from the templates frontpage. This makes it easier to retain the PDF metadata for the bookmarks (ToC) after merging with the frontpages from the template."
pagebreak()
}
#heading("Erklärung zur Verfassung der Arbeit", outlined: false, numbering: none)
<NAME>
#v(2em)
Hiermit erkläre ich, dass ich diese Arbeit selbständig verfasst habe, dass ich die verwendeten Quellen und Hilfsmittel vollständig angegeben habe und dass ich die Stellen der Arbeit $dash.fig$ einschließlich Tabellen, Karten und Abbildungen $dash.fig$, die anderen Werken oder dem Internet im Wortlaut oder dem Sinn nach entnommen sind, auf jeden Fall unter Angabe der Quelle als Entlehnung kenntlich gemacht habe.
Ich erkläre weiters, dass ich mich generativer KI-Tools lediglich als Hilfsmittel bedient habe und in der vorliegenden Arbeit mein gestalterischer Einfluss überwiegt. Im Anhang „Übersicht verwendeter Hilfsmittel“ habe ich alle generativen KI-Tools gelistet, die ver- wendet wurden, und angegeben, wo und wie sie verwendet wurden. Für Textpassagen, die ohne substantielle Änderungen übernommen wurden, habe ich jeweils die von mir formu- lierten Eingaben (Prompts) und die verwendete IT-Anwendung mit ihrem Produktnamen und Versionsnummer/Datum angegeben.
#v(20em)
#box(
height: 68pt,
columns(2, gutter: 11pt)[
#align(left)[Wien, #datetime.today().display("[day].[month].[year]")]
#colbreak()
#align(right)[
#align(center)[
#line(length: 50%)
<NAME>
]
]
],
)
#heading("Acknowledgements", outlined: false, numbering: none)
#heading("Danksagung", outlined: false, numbering: none)
#heading("Abstract", outlined: false, numbering: none)
#heading("Kurzfassung", outlined: false, numbering: none)
#outline(depth: 2, indent: auto)
#set page(numbering: "1")
#counter(page).update(1)
= Introduction
TBD.
== Contributions
- Precise definition of TOD in the context of blockchain transaction
analysis.
- Theoretical discussion of TOD, including compilation of instructions
that can cause TOD.
- Methodology to mine potential TOD transaction pairs using only the RPC
interface of an archive node, rather than requiring local access to
it.
= Background
This chapter gives background knowledge on Ethereum that is helpful for following the rest of the paper. We also introduce a notation for these concepts.
== Ethereum
Ethereum is a blockchain that can be characterized as a \"transactional singleton machine with shared-state\". @wood_ethereum_2024[p.1] By using a consensus protocol, a decentralized set of nodes agrees on a globally shared state. This state contains two types of accounts: #emph[externally owned accounts] (EOA) and #emph[contract accounts] (also referred to as smart contracts). The shared state is modified by executing #emph[transactions]. @tikhomirov_ethereum_2018
== World State
Similar to @wood_ethereum_2024[p.3], we will refer to the shared state as #emph[world state]. The world state maps each 20-byte address to an account state, containing a #emph[nonce], #emph[balance], #emph[storage] and #emph[code]#footnote[Technically, the account state only contains hashes that identify the storage and code, not the actual storage and code. This distinction is not relevant in this paper, therefore we simply refer to them as storage and code.]. They store the following data @wood_ethereum_2024[p.4]:
- #emph[nonce]: For EOAs, this is the number of transactions submitted
by this account. For contract accounts, this is the number of
contracts created by this account.
- #emph[balance]: The amount of Wei (a smaller unit of Ether) this account owns.
- #emph[storage]: The storage allows contract accounts to persistently
store information across transactions. It is a key-value mapping where
  both key and value are 256 bits long. For EOAs, this is empty.
- #emph[code]: For contract accounts, the code is a sequence of EVM
instructions.
We denote the world state as $sigma$, the account state of an address $a$ as $sigma (a)$ and the nonce, balance, storage and code as $sigma (a)_n$, $sigma (a)_b$, $sigma (a)_s$ and $sigma (a)_c$ respectively. For the value at a storage slot $k$ we write $sigma (a)_s [k]$. We will also use an alternative notation $sigma (K)$, where we combine the identifiers of a state value into a single key $K$, which simplifies further definitions. We have the following equalities between the two notations:
$
sigma(a)_n &= sigma(("'nonce'", a)) \
sigma(a)_b & = sigma(("'balance'", a)) \
sigma(a)_c & = sigma(("'code'", a)) \
sigma(a)_s[k] & = sigma(("'storage'", a, k))
$
== EVM
The Ethereum Virtual Machine (EVM) is used to execute code in Ethereum. It executes instructions that can access and modify the world state. The EVM is Turing-complete, except that it is executed with a limited amount of #emph[gas] and each instruction costs some gas. When it runs out of gas, the execution will halt. @wood_ethereum_2024[p.14] For instance, this prevents execution of infinite loops, as such a loop would use an infinite amount of gas and thus exceed the gas limit.
Most EVM instructions are formally defined in the Yellowpaper. @wood_ethereum_2024[p.30-38] However, the Yellowpaper currently does not include the changes from the Cancun upgrade @noauthor_history_2024, therefore we will also refer to the informal descriptions available on #link("https://www.evm.codes/")[evm.codes]. @smlxl_evm_2024
== Transactions
A transaction can modify the world state by transferring Ether and executing EVM code. It must be signed by the owner of an EOA and contains the following data relevant to our work:
- #emph[sender]: The address of the EOA that signed this transaction#footnote[The
sender is implicitly given through a valid signature and the
transaction hash. @wood_ethereum_2024[p.25-27] We are only interested
in transactions that are included in the blockchain, thus the
signature must be valid and the transaction’s sender can always be
derived.].
- #emph[recipient]: The destination address.
- #emph[value]: The value of Wei that should be transferred from the
sender to the recipient.
- #emph[gasLimit]: The maximum amount of gas that can be used for the
execution.
If the recipient address is empty, the transaction will create a new contract account. These transactions also include an #emph[init] field that contains the code to initialize the new contract account.
When the recipient address is given and a value is specified, this will be transferred to the recipient. Moreover, if the recipient is a contract account, it also executes the recipient’s code. The transaction can specify a #emph[data] field to pass input data to the code execution. @wood_ethereum_2024[p.4-5]
For every transaction the sender must pay a #emph[transaction fee]. This is composed of a #emph[base fee] and a #emph[priority fee]. Every transaction must pay the base fee. The amount of Wei will be deducted from the sender and not given to any other account. For the priority fee, the transaction can specify if, and how much, it is willing to pay. This fee will be taken from the sender and given to the block validator, which is explained in the next section. @wood_ethereum_2024[p.8]
We denote a transaction as $T$, sometimes adding a subscript $T_A$ to differentiate from another transaction $T_B$.
== Blocks
The Ethereum blockchain consists of a sequence of blocks, where each block builds upon the state of the previous block. To achieve consensus about the canonical sequence of blocks in a decentralized network of nodes, Ethereum uses a consensus protocol. In this protocol, validators build and propose blocks to be added to the blockchain. @noauthor_gasper_2023 It is the validator's choice which transactions to include in a block; however, validators are incentivized to include transactions that pay high transaction fees, as they receive these fees. @wood_ethereum_2024[p.8]
Each block consists of a block header and a sequence of transactions. We denote the nth block of the blockchain as $B_n$ and the sequence of transactions it includes as $T (B_n) = (T_1 , T_2 , dots.h , T_m)$.
== Transaction submission
This section discusses how a transaction signed by an EOA ends up being included in the blockchain.
Traditionally, the signed transaction is broadcast to the network of nodes, which temporarily store it in a #emph[mempool], a collection of pending transactions. The current block validator then picks transactions from the mempool and includes them in the next block. With this submission method, the pending transactions in the mempool are publicly known to the nodes in the network, even before being included in the blockchain. This time window will be important for our discussion on frontrunning, as it gives nodes time to react to a transaction before it becomes part of the blockchain. @eskandari_sok_2020
A different approach, Proposer-Builder Separation (PBS), has gained popularity recently: here, the task of collecting transactions and building blocks with them is separated from the task of proposing them as a validator. A user submits their signed transaction or transaction bundle to a block builder. The block builder has a private mempool and uses it to create profitable blocks. Finally, the validator picks one of the created blocks and adds it to the blockchain. @heimbach_ethereums_2023
== Transaction execution
In Ethereum, transaction execution is deterministic. @wood_ethereum_2024[p.9] Transactions can access the world state and their block environment, therefore their execution can depend on these values. After executing a transaction, the world state is updated accordingly.
We denote a transaction execution as $sigma arrow.r^T sigma prime$, implicitly letting the block environment correspond to the transaction’s block. Furthermore, we denote the state change by a transaction $T$ as $Delta_T$, with $pre(Delta_T) = sigma$ being the world state before execution and $post(Delta_T) = sigma prime$ the world state after the execution of $T$.
For two state changes $Delta_(T_A)$ and $Delta_(T_B)$, we say that $Delta_(T_A) = Delta_(T_B)$ if they changed the same set of state fields and the pre- and poststates for these changed fields are equal, otherwise $Delta_(T_A) eq.not Delta_(T_B)$. For instance, if both $Delta_(T_A)$ and $Delta_(T_B)$ modified only the storage slot $sigma (a)_s [k]$, and both changed it from the value $x$ to the value $y$, we would call them equal. If $Delta_(T_B)$ changed it from $x prime$ to $y$, or from $x$ to $y prime$ or even modified a different storage slot $sigma (a)_s [k prime]$, we would say $Delta_(T_A) eq.not Delta_(T_B)$.
We define the set of changed state keys as:
$ changedKeys(Delta) colon.eq {K \| pre(Delta)(K) eq.not post(Delta) (K)} $
We let the equality $Delta_(T_A) = Delta_(T_B)$ be true if the following holds, else $Delta_(T_A) eq.not Delta_(T_B)$:
$
changedKeys(Delta_(T_A)) & = changedKeys(Delta_(T_B))\
forall K in changedKeys(Delta_(T_A)):
& pre(Delta_(T_A)) (K) = pre(Delta_(T_B)) (K)\
" and " & post(Delta_(T_A)) (K) = post(Delta_(T_B)) (K)
$
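To make this comparison concrete, the following minimal Python sketch represents a state change as a dictionary that maps state keys to (pre, post) value pairs; this data layout is our own illustration and not the format of any particular client.

```python
def changed_keys(delta: dict) -> set:
    """State keys whose pre- and post-values differ."""
    return {key for key, (pre, post) in delta.items() if pre != post}

def state_changes_equal(delta_a: dict, delta_b: dict) -> bool:
    """Equal changed-key sets with equal pre- and post-values."""
    keys_a, keys_b = changed_keys(delta_a), changed_keys(delta_b)
    return keys_a == keys_b and all(delta_a[key] == delta_b[key] for key in keys_a)

# Both change storage slot ("0xabcd", 1234) from 0 to 100 -> considered equal
delta_a = {("0xabcd", "storage", 1234): (0, 100)}
delta_b = {("0xabcd", "storage", 1234): (0, 100)}
assert state_changes_equal(delta_a, delta_b)
```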
We define $sigma + Delta_T$ to be equal to the state $sigma$, except that every state that was changed by the execution of $T$ is overwritten with the value in $post(Delta_T)$. Similarly, $sigma - Delta_T$ is equal to the state $sigma$, except that every state that was changed by the execution of $T$ is overwritten with the value in $pre(Delta_T)$. Formally, these definitions are as follows:
$
(sigma + Delta_T) (
K
) & colon.eq cases(
post(Delta_T) (K) & "if" K in changedKeys(Delta_T),
sigma (K) & "otherwise"
)\
(sigma - Delta_T) (
K
) & colon.eq cases(
pre(Delta_T) (K) & "if" K in changedKeys(Delta_T),
sigma (K) & "otherwise"
)
$
For instance, if transaction $T$ changed the storage slot 1234 at address 0xabcd from 0 to 100, then $(sigma + Delta_T) ("0xabcd")_s [1234] = 100$ and $(sigma - Delta_T) ("0xabcd")_s [1234] = 0$. For all other storage slots we have $(sigma + Delta_T) (a)_s [k] = sigma (a)_s [k] = (sigma - Delta_T) (a)_s [k]$.
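Continuing the sketch from above, applying or undoing a state change then amounts to overwriting the changed keys with their post- or pre-values, respectively.

```python
def apply_change(sigma: dict, delta: dict) -> dict:
    """sigma + delta: changed keys take their post-values."""
    updated = dict(sigma)
    for key, (pre, post) in delta.items():
        if pre != post:
            updated[key] = post
    return updated

def undo_change(sigma: dict, delta: dict) -> dict:
    """sigma - delta: changed keys take their pre-values."""
    updated = dict(sigma)
    for key, (pre, post) in delta.items():
        if pre != post:
            updated[key] = pre
    return updated

sigma = {("0xabcd", "storage", 1234): 100}
delta_t = {("0xabcd", "storage", 1234): (0, 100)}
assert undo_change(sigma, delta_t)[("0xabcd", "storage", 1234)] == 0
```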
== Nodes
A node consists of an #emph[execution client] and a #emph[consensus client]. The execution client keeps track of the world state and the mempool and executes transactions. The consensus client takes part in the consensus protocol. For this work, we will use an #emph[archive node], which is a node that allows reproducing the state and transactions at any block. @noauthor_nodes_2024
== RPC
Execution clients implement the Ethereum JSON-RPC specification. @noauthor_ethereum_2024 This API gives remote access to an execution client, for instance to inspect the current block number with `eth_blockNumber` or to execute a transaction without committing the state via `eth_call`. In addition to the standardized RPC methods, we will also make use of methods in the debug namespace, such as `debug_traceBlockByNumber`. While this namespace is not standardized, several execution clients implement these additional methods @noauthor_go-ethereum_2024@noauthor_rpc_2024@noauthor_reth_2024.
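For illustration, a minimal Python sketch of such an RPC request is shown below; the endpoint URL is a placeholder for an archive node and the `requests` library is assumed to be available.

```python
import requests

RPC_URL = "http://localhost:8545"  # placeholder for an archive node endpoint

def rpc_call(method: str, params: list):
    """Send a single JSON-RPC request and return its result field."""
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    response = requests.post(RPC_URL, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()["result"]

# The current block number is returned as a hex string, e.g. "0x12e9b93"
print(int(rpc_call("eth_blockNumber", []), 16))
```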
= Transaction order dependency
In this chapter we discuss our definition of transaction order dependency (TOD) and various properties that come with it. We first lay out the idea of TOD with a basic definition and then show several shortcomings of this simple definition. Based on these insights, we construct a more precise definition that we will use for our analysis.
== Approaching TOD
Intuitively, a pair of transactions $(T_A , T_B)$ is transaction order dependent (TOD) if the original execution order leads to a different result than a reordered execution order. In formal terms, we write this as follows:
$
sigma arrow.r^(T_A) sigma_1 arrow.r^(T_B) sigma prime \
sigma arrow.r^(T_B) sigma_2 arrow.r^(T_A) sigma prime prime \
sigma prime eq.not sigma prime prime
$
So, starting from an initial state, executing first $T_A$ and then $T_B$ results in a different state than executing first $T_B$ and then $T_A$.
We will refer to the execution order $T_A arrow.r T_B$, the one that occurred on the blockchain, as the #emph[normal] execution order, and $T_B arrow.r T_A$ as the #emph[reversed] execution order.
== Motivating examples
TBD.
#todo("Add a motivating example for write-read TOD (e.g. TOD-recipient) and for write-write TOD (e.g. ERC-20 approval).")
== Relation to previous works
In @torres_frontrunner_2021 the authors do not provide a formal definition of TOD. However, for displacement attacks, they include the following check to detect if two transactions fall into this category:
#quote(block: true)[
\[...\] we run in a simulated environment first $T_A$ before $T_V$ and then $T_V$ before $T_A$. We report a finding if the number of executed EVM instructions is different across both runs for $T_A$ and $T_V$, as this means that $T_A$ and $T_V$ influence each other.
]
Similar to our intuitive TOD definition, they execute $T_A$ and $T_V$ in different orders and check if it affects the result. In their case, they only check the number of executed instructions, instead of the resulting state. This would miss attacks where the same instructions were executed, but the operands for these instructions in the second transaction changed because of the first transaction.
In @zhang_combatting_2023, they define an attack as a triple $A = angle.l T_a , T_v , T_a^p angle.r$, where $T_a$ and $T_v$ are similar to the $T_A$ and $T_B$ from our definition, and $T_a^p$ is an optional third transaction. They consider the execution orders $T_a arrow.r T_v arrow.r T_a^p$ and $T_v arrow.r T_a arrow.r T_a^p$. They monitor the transactions to check if the execution order impacts financial gains, which we will discuss later in more detail. #todo("Reference the frontrunning sections, when it's written")
We note that if these two execution orders result in different states, this is not because of the last transaction $T_a^p$, but because of a TOD between $T_a$ and $T_v$. As we always execute $T_a^p$ last, and transaction execution is deterministic, it only gives a different result if the execution of $T_a$ and $T_v$ gave a different result. Therefore, if the execution order results in different financial gains, then $T_a$ and $T_v$ must be TOD.
== Imprecise definitions
Our intuitive definition of TOD, and the related definitions shown above, are not precise about the semantics of reordering transactions and executing them. This makes it impossible to apply exactly the same methodology without analyzing the source code accompanying the papers. We identify three issues where the definition is not precise enough and show how they were interpreted differently by the two papers.
For the analysis of the tools by @zhang_combatting_2023 and @torres_frontrunner_2021, we will use the current versions of their source code, @zhang_erebus-redgiant_2023 and @torres_frontrunner_2022 respectively.
=== Intermediary transactions
To analyze the TOD $(T_A , T_B)$, we are interested in how $T_A$ affected $T_B$. Our intuitive definition did not specify how to handle transactions that occurred between $T_A$ and $T_B$, which we will name #emph[intermediary transactions].
For instance, let us assume that there was one transaction $T_X$ in between $T_A$ and $T_B$: $sigma arrow.r^(T_A) sigma_A arrow.r^(T_X) sigma_(A X) arrow.r^(T_B) sigma_(A X B)$. The execution of $T_B$ clearly could depend on both $T_A$ and $T_X$. When we are interested in the impact of $T_A$ on $T_B$, we need to define what happens with $T_X$.
For executing the normal order, we would have two possibilities:
+ $sigma arrow.r^(T_A) sigma_A arrow.r^(T_X) sigma_(A X) arrow.r^(T_B) sigma_(A X B)$, the same execution as on the blockchain, including the effects of $T_X$.
+ $sigma arrow.r^(T_A) sigma_A arrow.r^(T_B) sigma_(A B)$, leaving out $T_X$ and thus having a normal execution that potentially diverges from the results on the blockchain (as $sigma_(A B)$ may differ from $sigma_(A X B)$).
When executing the reverse order, we could make the following choices:
+ $sigma arrow.r^(T_B) sigma_B arrow.r^(T_A) sigma_(B A)$, which ignores $T_X$ and thus may impact the execution of $T_B$.
+ $sigma arrow.r^(T_X) sigma_X arrow.r^(T_B) sigma_(X B) arrow.r^(T_A) sigma_(X B A)$, which executes $T_X$ on $sigma$ rather than $sigma_A$ and now also includes the effects of $T_X$ for executing $T_A$.
All of these scenarios are possible, but none of them provides a clean solution to solely analyze the impact of $T_A$ on $T_B$, as we always could have some indirect impact from the (non-)execution of $T_X$.
In @zhang_combatting_2023, this impact of the intermediary transactions is acknowledged and caused a few false positives:
#quote(block: true)[
In blockchain history, there could be many other transactions between $T_a$, $T_v$, and $T_p^a$. When we change the transaction orders to mimic attack-free scenarios, the relative orders between $T_a$ (or $T_v$) and other transactions are also changed. Financial profits of the attack or victim could be affected by such relative orders. As a result, the financial profits in the attack-free scenario could be incorrectly calculated, and false-positively reported attacks may be induced, but our manual check shows that such cases are rare.
]
Nonetheless, it is not clear which of the above scenarios they applied in their analysis. The other work, @torres_frontrunner_2021, does not mention this issue at all.
#todo("Consider to move the code analysis to an appendix")
#heading(level: 4, numbering: none)[Code analysis of @zhang_combatting_2023]
As shown in their algorithm 1, they take as input all the executed transactions. They use these transactions and their results in the `searchVictimGivenAttack` method, where `ar` represents the attack transaction and result and `vr` represents the victim transaction and result.
For the normal execution order ($T_a arrow.r T_v$), they simply use `ar` and `vr` and pass them to their `CheckOracle` method which then compares the resulting states. As `ar` and `vr` are obtained by executing all transactions, they also include the intermediary transactions for these results (similar to our $sigma arrow.r^(T_A) sigma_A arrow.r^(T_X) sigma_(A X) arrow.r^(T_B) sigma_(A X B)$ case).
For the reverse order ($T_v arrow.r T_a$), they take the state before $T_a$, i.e. $sigma$. Then they execute all transactions obtained from the `SlicePrerequisites` method. And finally they execute $T_v$ and $T_a$.
The `SlicePrerequisites` method uses the `hbGraph` built in `StartSession`, which seems to be a graph where each transaction points to the previous transaction from the same EOA. From this graph, it takes all transactions between $T_a$ and $T_v$ that are from the same sender as $T_v$. This interpretation matches the test case \"should slice prerequisites correctly\" from the source code. As the paper does not mention these prerequisite transactions, we do not know why this subset of intermediary transactions was chosen.
We can conclude that @zhang_combatting_2023 executes all intermediary transactions for the normal order. However, for the reverse order, they only execute intermediary transactions that are also sent by the victim, but do not execute any other intermediary transactions.
#heading(level: 4, numbering: none)[Code analysis of @torres_frontrunner_2021]
In the file `displacement.py`, they replay the normal execution order at lines 154-155, and the reverse execution order at lines 158-159. They only execute $T_A$ and $T_V$ (in normal and reverse order), but do not execute any intermediary transactions.
=== Block environments
When we analyze a pair of transactions $(T_A , T_B)$, it can be that these are not part of the same block. The execution of these transactions can depend on the block environment they are executed in, for instance if they access the current block number. Thus, executing $T_A$ or $T_B$ in a different block environment than on the blockchain may alter their behaviour. From our intuitive TOD definition, it is not clear which block environment(s) we use when replaying the transactions in normal and reverse order.
#heading(level: 4, numbering: none)[Code analysis of @zhang_combatting_2023]
The block environment used to execute all transactions is contained in `ar.VmContext` and as such corresponds to the block environment of $T_a$. This means $T_a$ is executed in the same block environment as on the blockchain, while $T_v$ and the intermediary transactions may be executed in a different block environment.
#heading(level: 4, numbering: none)[Code analysis of @torres_frontrunner_2021]
In the file `displacement.py` line 151, we see that the emulator uses the same block environment for both transactions. Therefore, at least one of them will be executed in a different block environment than on the blockchain.
=== Initial state $sigma$
While our preliminary TOD definition specifies that we start with the same $sigma$ in both execution orders, it is up to interpretation which world state $sigma$ is.
#heading(level: 4, numbering: none)[Code analysis of @zhang_combatting_2023]
The initial state used to execute the first transaction is `ar.State`, which corresponds to the state directly before executing $T_a$. This includes all previous transactions of the same block.
#heading(level: 4, numbering: none)[Code analysis of @torres_frontrunner_2021]
The emulator is initialized with the block `front_runner["blockNumber"]-1` and no transactions are executed prior to running the analysis. Therefore, the state cannot include transactions that were executed in the same block before $T_A$.
Similar to the case with the block environment, this could lead to differences between the emulation and the results from the blockchain, when $T_A$ or $T_V$ are impacted by a previous transaction in the same block.
== TOD definition
To address the issues above, we will provide a more precise definition for TOD that stays as close as possible to the execution that happened on the blockchain, while also minimizing the impact of intermediary transactions on the analysis results.
#definition("TOD")[
Consider a sequence of transactions, with $sigma$ being the world state right before $T_A$ was executed on the blockchain:
$ sigma arrow.r^(T_A) sigma_A arrow.r^(T_(X_1)) dots.h arrow.r^(T_(X_n)) sigma_(X_n) arrow.r^(T_B) sigma_B $
Let $Delta_(T_A)$ and $Delta_(T_B)$ be the corresponding state changes from executing $T_A$ and $T_B$, and let all transactions be executed in the same block environment as they were executed on the blockchain.
We say, that $(T_A , T_B)$ is TOD if and only if executing $(sigma_(X_n) - Delta_(T_A)) arrow.r^(T_B) sigma_B prime$ produces a state change $Delta_(T_B prime)$ with $Delta_(T_B) eq.not Delta_(T_B prime)$.
]
Intuitively, we take the world state exactly before $T_B$ was executed, namely $sigma_(X_n)$. We then record the state changes $Delta_(T_B)$ from executing $T_B$ directly on $sigma_(X_n)$, the same way it was executed on the blockchain. Then we simulate what would have happened if $T_A$ was not executed before $T_B$ by removing its state changes and executing $T_B$ on $sigma_(X_n) - Delta_(T_A)$. If we observe different state changes for $T_B$ when executed with and without the changes of $T_A$, then we know that $T_A$ has an impact on $T_B$ and conclude TOD between $T_A$ and $T_B$. If there are no differences between $Delta_(T_B)$ and $Delta_(T_B prime)$, then $T_B$ behaves the same regardless of $T_A$ and there is no TOD.
We chose to compare the two executions on the state changes $Delta_(T_B) eq.not Delta_(T_B prime)$, rather than on the resulting states $sigma_B eq.not sigma_B prime$, to detect a wider range of TODs. Comparing on $sigma_B eq.not sigma_B prime$ would be sufficient to detect #emph[write-read] TODs, where the first transaction writes some state and the second transaction accesses this state and outputs a different result because of this. However, we are also interested in #emph[write-write] TODs, where $T_A$ writes some state and $T_B$ overwrites the same state with a different value, thus hiding the change by $T_A$.
For example, let $T_A$ write the value 'aaaa' to some storage, s.t. we have $sigma_(X_n) (a)_s [k] = "'aaaa'"$, and $T_B$ write 'bbbb' to the same storage, s.t. we have $sigma_B (a)_s [k] = "'bbbb'"$. When executing $T_B$ last, the world state would have 'bbbb' at this storage slot, and when executing $T_A$ last, it would be 'aaaa'. Therefore, the resulting world state is dependent on the order of $T_A$ and $T_B$. To check for this case, we compare the prestates of each change in $Delta_(T_B)$ and $Delta_(T_B prime)$. In our example, when executing $T_B$ on $sigma_(X_n)$ we would have $pre(Delta_(T_B)) (a)_s [k] = "'aaaa'"$ (as the changes from $T_A$ are included in this scenario), but when executing on $sigma_(X_n) - Delta_(T_A)$ we have $pre(Delta_(T_B prime)) (a)_s [k] = "'0000'"$ (as the changes from $T_A$ are undone in this scenario). Therefore, checking for inequality between the prestates from the state changes $Delta_(T_B)$ and $Delta_(T_B prime)$ can detect write-write TODs.
Our definition does not include #emph[read-write] TODs, i.e. we do not check whether executing $T_B$ before $T_A$ would have an impact on $T_A$. We focus on detecting TOD attacks, in which the attacker tries to insert a transaction prior to some transaction $T$ and thereby impact the behaviour of $T$. Therefore, we only consider the impact of the first transaction on the second one and ignore the reverse direction.
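To summarize the check concretely, the following Python pseudocode sketches the procedure for a single pair. It reuses the hypothetical `undo_change` and `state_changes_equal` helpers sketched earlier; `execute` stands in for re-executing a transaction on a given state and block environment, e.g. via a local EVM implementation.

```python
def is_tod(execute, state_before_tb: dict, delta_ta: dict, tx_b, block_env) -> bool:
    """Check the TOD definition for a pair (T_A, T_B).

    `state_before_tb` corresponds to sigma_{X_n}, the world state directly
    before T_B; `delta_ta` is the state change of T_A recorded on-chain.
    """
    # Normal order: T_B executed on the state as it occurred on the blockchain.
    delta_tb = execute(tx_b, state_before_tb, block_env)
    # Reverse order: T_B executed with the state changes of T_A undone.
    delta_tb_prime = execute(tx_b, undo_change(state_before_tb, delta_ta), block_env)
    # TOD if and only if the two state changes of T_B differ.
    return not state_changes_equal(delta_tb, delta_tb_prime)
```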
=== Definition strengths
#heading("Performance", level: 4, numbering: none)
To check if two transactions $T_A$ and $T_B$ are TOD, we need the initial world state $sigma$ and the state changes from $T_A$, $T_B$ and the intermediary transactions $T_(X_i)$. With the state changes we can compute $sigma_(X_n) - Delta_(T_A) = sigma + Delta_(T_A) + (sum_(i = 1)^(n) Delta_(T_(X_i))) - Delta_(T_A)$ and then execute $T_B$ on this state. Using state changes allows us to check if $T_A$ and $T_B$ are TOD with only one transaction execution, despite including the effects of arbitrarily many intermediary transactions.
If we want to check n transactions for TOD, we could execute all n transactions to obtain their state changes. There are $frac(n^2 - n, 2)$ transaction pairs, thus if we wanted to test each pair for TOD we would end up with a total of $n + frac(n^2 - n, 2) = frac(n^2 + n, 2)$ transaction executions. Similar to @torres_frontrunner_2021 and @zhang_combatting_2023, we can filter irrelevant transactions pairs to drastically reduce the search space.
#heading("Similarity to blockchain executions", level: 4, numbering: none)
With our definition, the state change $Delta_(T_B)$ from the normal execution is equivalent to the state change that happened on the blockchain. Also, the reversed order is closely related to the state from the blockchain, as we start with $sigma_(X_n)$ and only modify the relevant parts for our analysis. Furthermore, we prevent effects from block environment changes by using the same one as on the blockchain.
This contrasts with other implementations, where transactions are executed in different block environments than originally, are executed based on a different starting state, or the impact of intermediary transactions is ignored. All three cases can alter the execution of $T_A$ and $T_B$, such that the result is no longer closely related to the blockchain.
=== Definition weaknesses
<sec:weaknesses>
An intuitive interpretation of our definition would be that we compare $T_A arrow.r T_(X_i) arrow.r T_B$ with $T_(X_i) arrow.r T_B$, i.e. reckon what would have happened if $T_A$ had not been executed. However, the definition we provide does not perfectly match this concept. Our definition does not consider interactions between $T_A$ and the intermediary transactions $T_(X_i)$.
In the intuitive model, removal of $T_A$ could also impact the intermediary transactions and thus indirectly change the behaviour of $T_B$. Then we would not know if $T_A$ directly impacted $T_B$, or only through some interplay with intermediary transactions. Therefore, excluding the interactions between $T_A$ and $T_(X_i)$ may be desirable, however it can lead to unexpected results if one is not aware of this.
#heading("Indirect dependencies", level: 4, numbering: none)
When we analyze a TOD for $(T_A , T_B)$ and there is a TOD between $T_A$ and some intermediary transaction $T_X$, then removing $T_A$ would impact $T_X$ and thus could indirectly impact $T_B$.
Consider the three transactions $T_A$, $T_X$ and $T_B$:
+ $T_A$: sender $a$ transfers 5 Ether to address $x$.
+ $T_X$: sender $x$ transfers 5 Ether to address $b$.
+ $T_B$: sender $b$ transfers 5 Ether to address $y$.
When executing these transactions in the normal order, where $a$ initially has 5 Ether and the others have 0, all of these transactions succeed. If we remove $T_A$ and only execute $T_X$ and $T_B$, then firstly $T_X$ would fail, as $x$ did not get the 5 Ether from $a$, and consequently $T_B$ would fail as well.
However, when using our TOD definition and computing $(sigma_(X_n) - Delta_(T_A))$, we would only modify the balances for $a$ and $x$, but not for $b$, as $b$ is not modified in $Delta_(T_A)$. Thus, $T_B$ would still succeed in the reverse order according to our definition, but would fail in practice due to the indirect effect. This shows how the concept of removing $T_A$ does not map exactly to our TOD definition.
In this example, we had a TOD for $(T_A , T_X)$ and $(T_X , T_B)$. However, we can also have an indirect dependency between $T_A$ and $T_B$ without a TOD for $(T_X , T_B)$. For instance, this is the case if $T_X$ and $T_B$ were TOD, but $T_A$ caused $T_X$ to fail. When inspecting the normal order, $T_X$ failed, so there is no TOD between $T_X$ and $T_B$. However, when executing the reverse order without $T_A$, $T_X$ would succeed and could impact $T_B$.
== State collisions
We denote state accesses by a transaction $T$ as a set of state keys $R_T = { K_1 , dots.h , K_n }$ and state modifications as $W_T = { K_1 , dots.h , K_m }$.
We define the state collisions of two transactions as:
$ colls(T_A , T_B) = (W_(T_A) sect R_(T_B)) union (W_(T_A) sect W_(T_B)) $
With $W_(T_A) sect R_(T_B)$ we include write-read collisions, where $T_A$ modifies some state and $T_B$ accesses the same state. With $W_(T_A) sect W_(T_B)$ we include write-write collisions, where both transactions write to the same state location, for instance to the same storage slot. We do not include $R_(T_A) sect W_(T_B)$, as we also did not include read-write TOD in our TOD definition.
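As a small illustration, with the access and modification sets represented as Python sets of state keys, the collision computation is a direct translation of the formula above; the keys used here are hypothetical.

```python
def collisions(writes_a: set, reads_b: set, writes_b: set) -> set:
    """Write-read and write-write collisions between T_A and T_B."""
    return (writes_a & reads_b) | (writes_a & writes_b)

# T_A writes a storage slot that T_B reads -> one write-read collision
writes_a = {("0xabcd", "storage", 1234)}
reads_b = {("0xabcd", "storage", 1234), ("0xef01", "balance")}
writes_b = {("0xef01", "balance")}
print(collisions(writes_a, reads_b, writes_b))  # {("0xabcd", "storage", 1234)}
```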
== TOD candidates
We will refer to a transaction pair $(T_A , T_B)$, where $T_A$ was executed before $T_B$ and $colls(T_A , T_B) eq.not nothing$, as a TOD candidate.
A TOD candidate is not necessarily TOD, for instance consider the case that $T_B$ only reads the value that $T_A$ wrote but never uses it for any computation. This would be a TOD candidate, as they have a collision, however the result of executing $T_B$ is not impacted by this collision.
Conversely, if $(T_A , T_B)$ is TOD, then $(T_A , T_B)$ must also be a TOD candidate. For a write-write TOD, this is the case because both $T_A$ and $T_B$ write to the same state, therefore we have $W_(T_A) sect W_(T_B) eq.not nothing$. If we have a write-read TOD, then $T_B$ reads some state that $T_A$ wrote, hence $W_(T_A) sect R_(T_B) eq.not nothing$.
Therefore, the set of all TOD transaction pairs is a subset of all TOD candidates.
== Causes of state collisions
This section discusses what can cause two transactions $T_A$ and $T_B$ to have state collisions. To do so, we show the ways a transaction can access and modify the world state.
=== Causes with code execution
When the recipient of a transaction is a contract account, it will execute the recipient’s code. The code execution can access and modify the state through several instructions. By inspecting the EVM instruction definitions @wood_ethereum_2024[p.30-38]@smlxl_evm_2024, we compiled a list of instructions that can access and modify the world state.
In @tab:state_reading_instructions we see the instructions that can access the world state. For most, the reason for the access is clear, for instance `BALANCE` needs to access the balance of the target address. Less obvious is the nonce access of several instructions, which is because the EVM uses the nonce (among other things) to check if an account already exists @wood_ethereum_2024[p.4]. For `CALL`, `CALLCODE` and `SELFDESTRUCT`, this is used to calculate the gas costs. @wood_ethereum_2024[p.37-38] For `CREATE` and `CREATE2`, this is used to prevent creating an account at an already active address @wood_ethereum_2024[p.11]#footnote[In the Yellowpaper, the check for the existence of the recipient for `CALL`, `CALLCODE` and `SELFDESTRUCT` is done via the `DEAD` function. For `CREATE` and `CREATE2`, this is done in the `F` condition at equation (113).].
In @tab:state_writing_instructions we see instructions that can modify the world state.
#block[
#block[
#figure(
align(center)[#table(
columns: 5,
align: (left, center, center, center, center),
table.header([Instruction], [Storage], [Balance], [Code], [Nonce]),
table.hline(),
[`SLOAD`], [$checkmark$], [], [], [],
[`BALANCE`], [], [$checkmark$], [], [],
[`SELFBALANCE`], [], [$checkmark$], [], [],
[`CODESIZE`], [], [], [$checkmark$], [],
[`CODECOPY`], [], [], [$checkmark$], [],
[`EXTCODECOPY`], [], [], [$checkmark$], [],
[`EXTCODESIZE`], [], [], [$checkmark$], [],
[`EXTCODEHASH`], [], [], [$checkmark$], [],
[`CALL`], [], [$checkmark$], [$checkmark$], [$checkmark$],
[`CALLCODE`], [], [$checkmark$], [$checkmark$], [$checkmark$],
[`STATICCALL`], [], [], [$checkmark$], [],
[`DELEGATECALL`], [], [], [$checkmark$], [],
[`CREATE`], [], [$checkmark$], [$checkmark$], [$checkmark$],
[`CREATE2`], [], [$checkmark$], [$checkmark$], [$checkmark$],
[`SELFDESTRUCT`], [], [$checkmark$], [$checkmark$], [$checkmark$],
)],
caption: flex-caption(
[Instructions that access state. A checkmark indicates,
that the execution of this instruction can depend on this state type.],
[State accessing instructions],
),
kind: table,
)<tab:state_reading_instructions>
]
]
#block[
#block[
#figure(
align(center)[#table(
columns: 5,
align: (left, center, center, center, center),
table.header([Instruction], [Storage], [Balance], [Code], [Nonce]),
table.hline(),
[`SSTORE`], [$checkmark$], [], [], [],
[`CALL`], [], [$checkmark$], [], [],
[`CALLCODE`], [], [$checkmark$], [], [],
[`CREATE`], [], [$checkmark$], [$checkmark$], [$checkmark$],
[`CREATE2`], [], [$checkmark$], [$checkmark$], [$checkmark$],
[`SELFDESTRUCT`], [$checkmark$], [$checkmark$], [$checkmark$], [$checkmark$],
)],
caption: flex-caption(
[Instructions that modify state. A checkmark indicates,
that the execution of this instruction can modify this state type.],
[State modifying instructions],
),
kind: table,
)
<tab:state_writing_instructions>
]
]
=== Causes without code execution
Some state accesses and modifications are inherent to transaction execution. To pay for the transaction fees, the balance of the sender is accessed and modified. When a transaction transfers some Wei from the sender to the recipient, it also modifies the recipient’s balance. To check if the recipient is a contract account, the transaction also needs to access the code of the recipient. And finally, it also verifies the sender’s nonce and increments it by one. @wood_ethereum_2024[p.9]
=== Relevant collisions for attacks
<sec:relevant-collisions>
#todo("Reference papers, that only used storage and balance without arguing why")
The previous sections list possible ways to access and modify the world state. Many previous studies have focused on storage and balance collisions; however, they did not discuss whether or why code and nonce collisions are unimportant. Here, we argue why only storage and balance collisions are relevant for TOD attacks and why code and nonce collisions can be neglected.
The idea of a TOD attack is that an attacker impacts the execution of some transaction $T_B$ by placing a transaction $T_A$ before it. To have some impact, there must be a write-write or write-read collision between $T_A$ and $T_B$. Therefore, our scenario is that we start from some (victim) transaction $T_B$ and try to create impactful collisions with a new transaction $T_A$. We assume some set $A$ to be the set of codes and nonces that $T_B$ accesses and writes.
Let us first focus on the instructions that could modify the accessed codes and nonces in $A$, namely `SELFDESTRUCT`, `CREATE` and `CREATE2`. Since the EIP-6780 update @ballet_eip-6780_2023, `SELFDESTRUCT` only destroys a contract if the contract was created in the same transaction. Therefore, `SELFDESTRUCT` can only modify a code and nonce within the same transaction, but cannot be used to attack an already submitted transaction $T_B$. The instructions to create a new contract, `CREATE` and `CREATE2`, both use the sender’s address for the calculation of the new contract account’s address, and both fail when there is already a contract at the target address. @wood_ethereum_2024[p.11] Therefore, we can only modify the code if the contract did not exist previously. If this is the case, it is unlikely that $T_B$ would access exactly this attacker-related address. Therefore, none of these instructions is usable for a TOD attack via code or nonce collisions. A similar argument can be made about contract creation directly via the transaction and some init code.
Apart from instructions, the nonce of an EOA can also be increased by transactions themselves. The only way that $T_B$ can access the nonce of an EOA is through the gas cost calculation when sending Ether to this address. The calculation returns a different cost depending on whether the recipient already exists or has to be newly created. Thus, an attack would be that $T_B$ transfers some Ether to an attacker-controlled EOA address $a$ which does not yet exist, and the attacker creates the account at address $a$ in $T_A$, which slightly changes the gas cost for $T_B$. Again, this attack seems negligible.
Therefore, the remaining attack vectors are `SSTORE`, to modify the storage of an account, and `CALL`, `CALLCODE`, `SELFDESTRUCT` and Ether transfer transactions, to modify the balance of an account.
== Everything is TOD
Our definition of TOD is very broad and marks many transaction pairs as TOD. For instance, if a transaction $T_B$ uses some storage value for a calculation, then the execution likely depends on the transaction that previously set this storage value. Similarly, when someone wants to transfer Ether, they can only do so if they first received that Ether. Thus, they are dependent on some transaction that gave them this Ether previously.
#todo("What about block rewards?")
#theorem[For every transaction $T_B$ after the London upgrade#footnote[We reference the London upgrade here, as this introduced the base fee for transactions.], there exists a transaction $T_A$ such that $(T_A , T_B)$ is TOD.]
#proof[
Consider an arbitrary transaction $T_B$ with the sender being some address $italic("sender")$. The sender must pay some upfront cost $v_0 > 0$, because they must pay a base fee. @wood_ethereum_2024[p.8-9] Therefore, we must have $sigma (italic("sender"))_b gt.eq v_0$. This requires that a previous transaction $T_A$ increased the balance of $italic("sender")$ to be high enough to pay the upfront cost, i.e. $pre(Delta_(T_A)) (italic("sender"))_b < v_0$ and $post(Delta_(T_A)) (italic("sender"))_b gt.eq v_0$.
When we calculate $sigma - Delta_(T_A)$ for our TOD definition, we would set the balance of $italic("sender")$ to $pre(Delta_(T_A)) (italic("sender"))_b < v_0$ and then execute $T_B$ based on this state. In this case, $T_B$ would be invalid, as the $italic("sender")$ would not have enough Ether to cover the upfront cost.
]
#todo("Check reference when frontrunning sections are written")
Given this property, it is clear that TOD alone is not a useful attack indicator, as otherwise we would consider every transaction to have been attacked. In the following, we provide some more restrictive definitions.
= TOD candidate mining
In this chapter, we discuss how we search for potential TODs in the Ethereum blockchain. We use the RPC from an archive node to obtain transactions and their state accesses and modifications. Then we search for collisions between these transactions to find TOD candidates. Lastly, we filter out TOD candidates that are not relevant to our analysis.
== TOD candidate finding
We make use of the RPC method `debug_traceBlockByNumber`, which allows replaying all transactions of a block the same way they were originally executed. With the `prestateTracer` config, this method also outputs which state has been accessed, and using the `diffMode` config, also which state has been modified#footnote[When running the prestateTracer in diffMode, several fields are only implicit in the response. We need to make these fields explicit for further analysis. Refer to the documentation or the source code for further details.].
By inspecting the source code of the tracers for Reth @paradigm_revm-inspectors_2024 and results from the RPC call, we found that for every touched account it always includes the account’s balance, nonce and code in the prestate. For instance, even when only the balance was accessed, it will also include the nonce in the prestate#footnote[I opened a #todo("link visibility") #link("https://github.com/ethereum/go-ethereum/pull/30081")[pull request] to clarify this behaviour and now this is also reflected in the documentation@noauthor_go-ethereum_2024-1.]. Therefore, we do not know precisely which state has been accessed, which can be a source of false positives for collisions.
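A minimal sketch of such a trace request, reusing the hypothetical `rpc_call` helper from above, could look as follows; it assumes the endpoint exposes the `debug` namespace.

```python
def trace_block_prestates(block_number: int):
    """Fetch accessed (pre) and modified (post) state for every transaction of a block."""
    tracer_config = {"tracer": "prestateTracer", "tracerConfig": {"diffMode": True}}
    return rpc_call("debug_traceBlockByNumber", [hex(block_number), tracer_config])

# In diffMode, each transaction's result contains a "pre" map (state before)
# and a "post" map (state after) for every account it touched.
traces = trace_block_prestates(19_830_547)
```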
We store all the accesses and modifications in a database and then query for accesses and writes that have the same state key, giving us a list of collisions. We then use these collisions to obtain a preliminary set of TOD candidates.
== TOD candidate filtering
Many of the TOD candidates from the previous section are not relevant for our further analysis. To prevent unnecessary computation and distortion of our results, we define which TOD candidates are not relevant and then filter them out.
A summary of the filters is given in @tab:tod_candidate_filters and more detailed explanations are in the following sections. The filters are executed in the same order as they are presented in the table and always operate on the output from the previous filter. The only exception is the "Same-value collision" filter, which is directly incorporated into the initial collisions query for performance reasons.
The "Block windows", "Same senders" and "Recipient Ether transfer" filters have already been used in @zhang_erebus-redgiant_2023. The filters "Nonce and code collision" and "Indirect dependency" followed directly from our previous theoretical arguments. Further, we also applied an iterative approach, where we searched for TOD candidates in a sample block range and manually analyzed if some of these TOD candidates could be filtered. This led us to the "Same-value collisions" and the "Block validators" filter.
#block[
#block[
#figure(
align(center)[#table(
columns: 2,
align: (left, left),
table.header([Filter name], [Description of filter criteria]),
table.hline(),
[Same-value collision], [Only take collisions where $T_A$ writes exactly the value that is read or overwritten by $T_B$.],
[Block windows], [Drop transactions that are 25 or more blocks apart.],
[Block validators], [Drop collisions on the block validator’s balance.],
[Nonce and code collision], [Drop nonce and code collisions.],
[Indirect dependency], [Drop TOD candidates with an indirect dependency. e.g. if TOD candidates $(T_A , T_X )$ and $(T_X , T_B)$ exist.],
[Same senders], [Drop if $T_A$ and $T_B$ are from the same sender.],
[Recipient Ether transfer], [Drop if $T_B$ does not execute code.],
)],
caption: flex-caption(
[TOD candidate filters sorted by usage order. When a filter describes the removal of collisions, the TOD candidates will be updated accordingly.],
[TOD candidate filters],
),
kind: table,
)
<tab:tod_candidate_filters>
]
]
=== Filters
#heading("Same-value collisions", level: 4, numbering: none)
When we have many transactions that modify the same state, e.g. the balance of the same account, they will all have a write-write conflict with each other. The number of TOD candidates grows quadratically with the number of transactions modifying the same state. For instance, if 100 transactions modify the balance of address $a$, the first transaction would have a write-write conflict with all other 99 transactions, the second transaction with the remaining 98 transactions, etc., leading to a total of $frac(n^2 - n, 2) = 4950$ TOD candidates.
To reduce this growth of TOD candidates, we additionally require for a collision that $T_A$ writes exactly the value that is read or overwritten by $T_B$. Formally, the following must hold to pass this filter:
$
forall K in colls(T_A , T_B) :
post(Delta_(T_A)) (K) = pre(Delta_(T_B)) (K)
$
With the example of 100 transactions modifying the balance of address $a$, when the first transaction sets the balance to 1234, it would only have a write-write conflict with transactions where the balance of $a$ was exactly 1234 before the execution. If all transactions wrote different balances, this would reduce the number of TOD candidates to $n - 1 = 99$.
Apart from the performance benefit, this filter also removes many TOD candidates that are potentially indirectly dependent. For instance, let us assume that we removed the TOD candidate $(T_A , T_B)$. By definition of this filter, there must be some key $K$ with $post(Delta_(T_A)) (K) eq.not pre(Delta_(T_B)) (K)$, thus some transaction $T_X$ must have modified the state at $K$ between $T_A$ and $T_B$. Therefore, we would also have a collision (and TOD candidate) between $T_A$ and $T_X$, and between $T_X$ and $T_B$. This would be a potential indirect dependency, which could lead to unexpected results as argued in @sec:weaknesses.
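As an illustration, the check can be expressed over the recorded state changes as follows; the dictionaries mapping state keys to values are hypothetical stand-ins for the database query we actually use.

```python
def same_value_collision(collision_keys: set, post_ta: dict, pre_tb: dict) -> bool:
    """Keep a collision only if T_A wrote exactly the value that T_B then observed."""
    return all(post_ta[key] == pre_tb[key] for key in collision_keys)

# T_A set the balance of "0xabcd" to 1234 and T_B saw 1234 -> candidate is kept
assert same_value_collision(
    {("0xabcd", "balance")},
    {("0xabcd", "balance"): 1234},
    {("0xabcd", "balance"): 1234},
)
```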
#heading("Block windows", level: 4, numbering: none)
According to a study of 24 million transactions from 2019 @zhang_evaluation_2021, the maximum observed time it took for a pending transaction to be included in a block was below 200 seconds. Therefore, when a transaction $T_B$ is submitted, and someone instantly attacks it by creating a new transaction $T_A$, their inclusion in the blockchain differs by at most 200 seconds. We currently add a new block to the blockchain every 12 seconds according to Etherscan @etherscan_ethereum_2024, thus $T_A$ and $T_B$ are at most $200 / 12 approx 17$ blocks apart from each other. As the study is already 5 years old, we use a block window of 25 blocks instead, to account for a potential increase in latency since then.
Thus, we filter out all TOD candidates, where $T_A$ is in a block that is 25 or more blocks away from the block of $T_B$.
#heading("Block validators", level: 4, numbering: none)
In Ethereum, each transaction must pay a transaction fee to the block validator and thus modifies the block validator’s balance. This would qualify each transaction pair in a block as a TOD candidate, as they all modify the balance of the block validator’s address.
We exclude TOD candidates, where the only collision is the balance of any block validator.
#heading("Nonce and Code collisions", level: 4, numbering: none)
We showed in @sec:relevant-collisions, that nonce and code collisions are not relevant for TOD attacks. Therefore, we ignore collisions for this state type.
#heading("Indirect dependency", level: 4, numbering: none)
#todo("Do we simply want to remove everything but the smallest TOD candidates instead? Would be more clean, but remove many more TOD candidates")
As argued in @sec:weaknesses, indirect dependencies can cause unexpected results in our analysis, therefore we will filter TOD candidates that have an indirect dependency. We will only consider the case where the indirect dependency is already visible in the normal order and accept that we potentially miss some indirect dependencies. Alternatively, we could also remove a TOD candidate $(T_A , T_B)$ when we also have the TOD candidate $(T_A , T_X)$; however, this would remove many more TOD candidates.
We already have a model of all direct (potential) dependencies with the TOD candidates. We can build a transaction dependency graph $G = (V , E)$ with $V$ being all transactions and $E = { (T_A , T_B) divides (T_A , T_B) in "TOD candidates" }$. We then filter out all TOD candidates $(T_A , T_B)$ where there exists a path $T_A , T_(X_1) , dots.h , T_(X_n) , T_B$ with at least one intermediary node $T_(X_i)$.
@fig:tod_candidate_dependency shows an example dependency graph, where transaction $A$ influences both $X$ and $B$ and $B$ is influenced by all other transactions. We would filter out the candidate $(A , B)$ as there is a path $A arrow.r X arrow.r B$, but keep $(X , B)$ and $(C , B)$.
#figure(
[
#text(size: 0.8em)[
#diagram(
node-stroke: .1em,
mark-scale: 100%,
edge-stroke: 0.08em,
node((3, 0), `A`, radius: 1.2em),
edge("-|>"),
node((2, 2), `X`, radius: 1.2em),
edge("-|>"),
node((4, 3), `B`, radius: 1.2em),
edge((3, 0), (4, 3), "--|>"),
edge("<|-"),
node((5, 1), `C`, radius: 1.2em),
)
]
],
caption: flex-caption(
[ Indirect dependency graph. An arrow from x to y indicates that y depends on x. A dashed arrow indicates an indirect dependency. ],
[Indirect dependency graph],
),
)
<fig:tod_candidate_dependency>
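A minimal sketch of this filter, using a plain breadth-first search instead of a graph library, could look as follows; the transaction identifiers and the candidate set are hypothetical placeholders and match the example graph in @fig:tod_candidate_dependency.

```python
from collections import defaultdict, deque

def filter_indirect_dependencies(candidates: set) -> set:
    """Drop candidates (t_a, t_b) that are also connected via an intermediary path."""
    successors = defaultdict(set)
    for t_a, t_b in candidates:
        successors[t_a].add(t_b)

    def reachable_via_intermediary(t_a, t_b) -> bool:
        # BFS over paths of length >= 2 from t_a to t_b.
        queue = deque(successors[t_a] - {t_b})
        seen = set(queue)
        while queue:
            node = queue.popleft()
            if node == t_b:
                return True
            for nxt in successors[node] - seen:
                seen.add(nxt)
                queue.append(nxt)
        return False

    return {(a, b) for a, b in candidates if not reachable_via_intermediary(a, b)}

# Example from the figure: (A, B) is dropped because of the path A -> X -> B.
print(filter_indirect_dependencies({("A", "X"), ("X", "B"), ("C", "B"), ("A", "B")}))
```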
#heading("Same sender", level: 4, numbering: none)
If the sender of both transactions is the same, the victim would have attacked themselves.
To remove these TOD candidates, we use the `eth_getBlockByNumber` RPC method and compare the sender fields for $T_A$ and $T_B$.
#heading("Recipient Ether transfer", level: 4, numbering: none)
If a transaction sends Ether without executing code, it only depends on the balance of the EOA that signed the transaction. Other entities can only increase the balance of this EOA, which has no adverse effects on the transaction.
Thus, we can exclude TOD candidates, where $T_B$ has no code access.
== Experiment
In this section, we discuss the results of applying the TOD candidate mining methodology on a randomly sampled sequence of 100 blocks, different from the block range we used for the development of the filters. Refer to @cha:reproducibility for the experiment setup and the reproducible sampling.
We mined the blocks from block 19830547 up to block 19830647, containing a total of 16799 transactions.
=== Performance
The mining process took a total of 502 seconds, with 311 seconds being used to fetch the data via RPC calls and store it in the database, 6 seconds being used to query the collisions in the database, 17 seconds for filtering the TOD candidates and 168 seconds for preparing statistics. If we consider the running time as the total time excluding the statistics preparation, we analyzed an average of 0.30 blocks per second.
We can also see that 93% of the running time was spent fetching the data via the RPC calls and storing it locally. This could be parallelized to significantly speed up the process.
=== Filters
In @tab:experiment_filters we can see the number of TOD candidates before and after each filter, showing how many candidates were filtered at each stage. This shows the importance of filtering, as we reduced the number of TOD candidates to analyze from more than 60 million to only 8,127.
Note that this does not directly imply that "Same-value collision" filters out more TOD candidates than "Block windows", as they operated on different sets of TOD candidates. Even if "Block windows" filtered out every remaining TOD candidate, this would still be fewer than "Same-value collision" filtered, because of the order of filter application.
#block[
#block[
#figure(
align(center)[#table(
columns: 3,
align: (left, right, right),
table.header([Filter name], [TOD candidates after filtering], [Filtered TOD candidates]),
table.hline(),
[(unfiltered)], [(lower bound) 63,178,557], [],
[Same-value collision], [56,663], [(lower bound) 63,121,894],
[Block windows], [53,184], [3,479],
[Block validators], [39,899], [13,285],
[Nonce collision], [23,284], [16,615],
[Code collision], [23,265], [19],
[Indirect dependency], [16,235], [7,030],
[Same senders], [9,940], [6,295],
[Recipient Ether transfer], [8,127], [1,813],
)],
caption: flex-caption(
[This table shows the application of all filters used to reduce the number of TOD candidates. Filters were applied from top to bottom and each row shows how many TOD candidates remained and were filtered. The unfiltered value is a lower bound, as we only calculated this number afterwards, and the calculation does not include write-write collisions.],
[TOD candidate filters evaluation],
),
kind: table,
)
<tab:experiment_filters>
]
]
=== Transactions
After applying the filters, 7864 transactions are part of at least one TOD candidate. This is 46.8% of all transactions that we mark as potentially TOD with some other transaction. Only 2381 of these transactions are part of exactly one TOD candidate. On the other end, there exists one transaction that is part of 22 TOD candidates.
=== Block distance
In @fig:tod_block_dist we can see that most TOD candidates are within the same block. Moreover, the further two transactions are apart, the less likely we include them as a TOD candidate. A reason for this could be that having many intermediary transactions makes it more likely to be filtered by our "Indirect dependency" filter. Nonetheless, we can conclude that when using our filters, the block window could be reduced even further without missing many TOD candidates.
#figure(
image("charts/tod_candidates_block_dist.png", width: 80%),
caption: flex-caption(
[
The histogram and eCDF of the block distance for TOD candidates. The blue bars show how many TOD candidates have been found, where $T_A$ and $T_B$ are n blocks apart. The orange line shows the percentage of TOD candidates, that are at most n blocks apart.
],
[Block distances of TOD candidates],
),
)
<fig:tod_block_dist>
=== Collisions
After applying our filters, we have 8818 storage collisions and 5654 balance collisions remaining. When we analyze how often each account is part of a collision, we see that collisions are highly concentrated around a small set of accounts. For instance, the five accounts with the most collisions#footnote[All of them are token accounts:
#link("https://etherscan.io/address/0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2")[WETH],
#link("https://etherscan.io/address/0x97a9a15168c22b3c137e6381037e1499c8ad0978")[DOP],
#link("https://etherscan.io/address/0xdac17f958d2ee523a2206206994597c13d831ec7")[USDT],
#link("https://etherscan.io/address/0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48")[USDC]
and
#link("https://etherscan.io/address/0xf938346d7117534222b48d09325a6b8162b3a9e7")[CHOPPY]]
are responsible for 43.0% of all collisions. In total, the collisions occur in only 1472 different account states.
One goal of this paper is to create a diverse set of attacks for our benchmark. With such a strong imbalance towards a few contracts, it will take a long time to analyze TOD candidates related to these frequent addresses, and the attacks are more likely related and do not cover a wide range of attack types. To prevent this, we may filter out duplicate addresses for collisions.
@fig:collsions_address_limit depicts how many collisions we would get when we only consider the first $n$ collisions for each address. If we set the limit to one collision per address, we would end up with 1472 collisions, which is exactly the number of unique addresses where collisions happened. When we keep 10 collisions per address, we would get 3964 collisions. Such a scenario would already reduce the number of collisions by 73%, while still retaining a sample of up to 10 collisions for each address, which could cover different types of TOD attacks.
#figure(
image("charts/collisions_limited_per_address.png", width: 80%),
caption: flex-caption(
[
The chart shows, how many collisions we have, when we limit the number of collisions we include per address. For instance, if we only include 10 collisions for each address we would end up with about 4000 collisions.
],
[Limit for collisions per address],
),
)
<fig:collsions_address_limit>
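A sketch of such a per-address cap is shown below; the list of (address, collision) pairs is a hypothetical stand-in for our database representation.

```python
from collections import defaultdict

def limit_collisions_per_address(collisions: list, limit: int) -> list:
    """Keep at most `limit` collisions per address, preserving the original order."""
    counts, kept = defaultdict(int), []
    for address, collision in collisions:
        if counts[address] < limit:
            counts[address] += 1
            kept.append((address, collision))
    return kept
```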
== Deduplication
TBD.
#todo("ethutils GPL license")
= Trace analysis
= TOD Attack results
Overall findings of the TOD attack mining and analysis.
= Tool benchmarking
== Systematic Literature Review
== Result
= Data availability
TBD.
= Reproducibility
<cha:reproducibility>
== Tool
TBD.
== Randomness
TBD.
== Experiment setup
The experiments were performed on Ubuntu 22.04.04, using an AMD Ryzen 5 5500U CPU with 6 cores and 2 threads per core and a SN530 NVMe SSD. We used a 16 GB RAM with an additional 16 GB swap file.
For the RPC requests we used a public endpoint@noauthor_pokt_2024, which uses Erigon@noauthor_rpc_2024 according to the `web3_clientVersion` RPC method. We used a local cache to prevent repeating slow RPC requests. @fuzzland_eth_2024 Unless otherwise noted, the cache was initially empty for experiments that measure the running time.
#heading("Overview of Generative AI Tools Used", numbering: none)
No generative AI tools were used in the process of researching and writing this thesis.
#outline(
title: [List of Figures],
target: figure.where(kind: image),
)
#outline(
title: [List of Tables],
target: figure.where(kind: table),
)
#bibliography("refs.bib", style: "ieee")
|
|
https://github.com/KaarelKurik/conditional-plasticity | https://raw.githubusercontent.com/KaarelKurik/conditional-plasticity/main/main.typ | typst | #import "@preview/ctheorems:1.1.2": *
#show: thmrules.with(qed-symbol: $square$)
#import "template.typ": *
#show: project.with(
title: [Conditional plasticity of the unit ball
of the $ell_infinity$‑sum of finitely many strictly convex Banach~spaces],
authors: (
(name:"<NAME>", email:"<EMAIL>",
affiliation: "Institute of Mathematics and Statistics, University of Tartu, Narva mnt 18, 51009 Tartu, Estonia"),
),
subject-class: ("46B20", "47H09", "05C69"), // ??
key-words: ("non-expansive map", "unit ball", "plastic metric space"),
)
#let card(x) = $abs(#x)$
#let theorem = thmbox("theorem", "Theorem", fill: rgb("#ccffcc"))
#let lemma = thmbox("lemma", "Lemma", fill: rgb("#ffddff"))
#let problem = thmbox("problem", "Problem", fill: rgb("ffffaa"))
#let adj = math.tilde
#let pih = $hat(pi)$
#let clo(L) = math.overline(L)
#let corollary = thmplain(
"corollary",
"Corollary",
base: "theorem",
titlefmt: strong
)
#let definition = thmbox("definition", "Definition", inset: (x: 1.2em, top: 1em))
#let example = thmplain("example", "Example").with(numbering: none)
#let proof = thmproof("proof", "Proof")
#let ihom = $g$
#let lip = $f$
#pad(x:10%)[
#smallcaps[Abstract.] We prove that for any $ell_infinity$-sum $Z = plus.circle.big_(i in [n]) X_i$ of finitely many strictly convex #box[Banach] spaces $(X_i)_(i in [n])$, an extremeness preserving 1-Lipschitz bijection $lip: B_Z -> B_Z$ is an isometry, by constraining the componentwise behavior of the inverse $ihom=lip^(-1)$ with a theorem admitting a graph-theoretic interpretation. We also show that if $X, Y$ are Banach spaces, then a bijective 1-Lipschitz non-isometry of type $B_X -> B_Y$ can be used to construct a bijective 1-Lipschitz non-isometry of type $B_X' -> B_X'$ for some Banach space $X'$, and that a homeomorphic 1-Lipschitz non-isometry of type $B_X -> B_X$ restricts to a homeomorphic #box[1-Lipschitz] non-isometry of type $B_S -> B_S$ for some separable subspace $S <= X$.
]
= Introduction
The central aim of this article is to present a generalization of a key lemma
found in <NAME>'s proof of the plasticity of the closed unit ball of
the $ell_infinity$-sum of two strictly convex Banach spaces, where the generalization
extends the lemma to the $ell_infinity$-sum of any finite number of strictly convex Banach
spaces by way of a graph-theoretic analogue. In addition, the generalized lemma is applied to prove that any 1-Lipschitz bijection from
the closed unit ball of such a space to itself
which maps extreme points to extreme points
or the sphere into itself must be an isometry.
Two additional results are also presented, which may prove no less important for the study of plasticity than the main result. The first states that the existence of a non-isometric 1-Lipschitz bijection between the unit balls of two distinct Banach spaces implies the existence of a non-isometric 1-Lipschitz bijection from the unit ball of some Banach space to itself, thereby proving that the general question of unit ball plasticity for Banach space pairs is equivalent to unit ball plasticity for single Banach spaces. The second states that the homeomorphic plasticity of the unit ball for a Banach space is equivalent to the homeomorphic plasticity of all of the space's separable subspaces.
= Preliminaries and notation
== Background
The notion of plasticity for metric spaces was introduced by Naimpally, Piotrowski, and Wingler in their 2006 article @naimpally:2006.
A metric space is said to be _EC-plastic_ (or just _plastic_) when
all 1-Lipschitz bijections from the space into itself are isometries.
In their 2016 article @cascales:2016, Cascales, Kadets, Orihuela, and Wingler began an investigation
of the following question.
#problem[
Is the closed unit ball of every Banach space plastic?
]
In said article, this question was answered affirmatively for the special case of strictly convex Banach spaces. (Recall that a Banach
space is strictly convex if its
unit sphere contains no segments with distinct
endpoints.) The general case, however, remains open.
All totally bounded metric spaces are known to be plastic, including the unit balls of finite-dimensional Banach spaces @naimpally:2006.
The unit ball is also known to be plastic
in the following cases:
- spaces whose unit sphere is a union of finite-dimensional polyhedral extreme subsets (incl. all strictly convex Banach spaces) @angosto:2019 @cascales:2016,
- any $ell_1$-sum of strictly convex Banach spaces (incl. $ell_1$ itself) @kadets_zavarzina:2016 @kadets_zavarzina:2018,
- the $ell_infinity$-sum of *two* strictly convex Banach spaces @haller:2022,
- $ell_1 plus.circle_2 RR$ @haller:2022,
- $C(K)$, where $K$ is a compact metrizable space with finitely many accumulation points (incl. $c tilde.equiv C(omega+1)$, i.e. the space of convergent real sequences) @fakhoury:2024 @leo:2022.
In @haller:2022, it was shown (with proof essentially due to <NAME>) that the $ell_infinity$-sum of two strictly
convex Banach spaces has a plastic unit ball. While the proof does not directly apply to an arbitrary finite sum of strictly convex Banach spaces, a crucial step in the proof can be modified to suit this purpose. By generalizing this step, we can establish a similar but weaker property than plasticity, which only considers a specific class of 1‑Lipschitz bijections that is well-behaved with respect to extreme points.
== Conventions, notation
We adopt the conventions that $0 in NN$ and $[n] = {i in NN : i < n}$.
For any map $f$ from a metric space $(M,d)$ to itself,
we say that it is _non-expansive_ when it is a 1‑Lipschitz
map, and that it is _non-contractive_ when for all
$x, y in M$, we have $d(x,y) <= d(f(x), f(y))$ (note
that this is dual to the inequality $d(x,y) >= d(f(x), f(y))$ defining
1‑Lipschitz maps).
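To illustrate the distinction: on $M = [0, 1]$ with the usual metric, the map $f(x) = x/2$ is non-expansive but not non-contractive, while the identity map is both.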
= Standalone results
We begin with two theorems relating to plasticity which can be stated and proved without much preamble, and which are independent of both each other and the remainder of the article.
#let induced = $lip'$
#theorem[Suppose there are Banach spaces $X, Y$, and a non-expansive bijection $lip : B_X -> B_Y$ such that $lip$ is not an isometry. Then there is a Banach space $Z$ and a non-expansive bijection $induced : B_Z -> B_Z$ such that $induced$ is not an isometry.]
This result is motivated by the work of <NAME> in @zavarzina:2017.
#proof[
Let $C_i$ be a Banach space for each $i in ZZ$, such that $C_i = X$
for $i < 0$ and $C_i = Y$ for $i >= 0$. Take $Z colon.eq plus.circle.big_(i=-infinity)^infinity C_i$ with the $infinity$-norm. Define $induced : B_Z -> B_Z$ by $pi_i induced(z) = pi_(i-1) z$ for $i != 0$ and $pi_0 induced(z) = lip(pi_(-1)z)$.
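Informally, writing elements of $Z$ as two-sided sequences, $induced$ shifts every entry one index to the right and applies $lip$ to the entry arriving at index $0$:
$ (dots, z_(-2), z_(-1), z_0, z_1, dots) arrow.r.bar (dots, z_(-2), lip(z_(-1)), z_0, z_1, dots), $
where $lip(z_(-1))$ occupies index $0$ in the image.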
It is clear by inspection that the codomain of $induced$ is correct and that it is a non-expansive bijection. That it is not an isometry follows from considering the natural inclusions of two points $x, x' in C_(-1)$ into $Z$, where $norm(induced(x) - induced(x')) = norm(lip(x)-lip(x')) < norm(x-x')$.
]
#lemma[Let $X$ be a Banach space and let $A subset.eq X$ be closed under scaling by rationals. Then $clo(A sect B_X) = clo(A) sect B_X$.] <rat-scaling>
#proof[Since $A sect B_X subset.eq clo(A) sect B_X$ and the latter is closed, we have $clo(A sect B_X) subset.eq clo(A) sect B_X$. It thus suffices to show the opposite inclusion.
Fix any $a in clo(A) sect B_X$ and a sequence $a_i in A$ that converges to $a$. If $norm(a) < 1$, then $a_i in A sect B_X$ for all sufficiently large $i$, from which $a in clo(A sect B_X)$. If $norm(a) = 1$, then choose a sequence of rationals $q_i in QQ$ such that $abs(q_i) <= 1/norm(a_i)$ for all sufficiently large $i$, and $q_i -> 1$. This is possible since $norm(a_i) -> norm(a) = 1$. We then have $q_i a_i in A sect B_X$ for all sufficiently large $i$, and $q_i a_i -> a$, so $a in clo(A sect B_X)$. We thus have that $clo(A) sect B_X subset.eq clo(A sect B_X)$, so $clo(A) sect B_X = clo(A sect B_X)$.
]
#let restr = $rho$
#theorem[Let $X$ be a Banach space and $lip: B_X -> B_X$ be a non-expansive homeomorphism that is not an isometry. Then $X$ has a separable closed subspace $Y$ such that $lip(B_Y) = B_Y$ and $restr colon.eq lip|_B_Y : B_Y -> B_Y$ is a non-expansive homeomorphism that is not an isometry.]
#proof[
Let $x, x' in B_X$ be points for which $norm(lip(x)-lip(x')) < norm(x-x')$.
Define the set function $H colon 2^X -> 2^X$ as $ H(S) = lip(S sect B_X) union lip^(-1)(S sect B_X) union QQ dot S union (S+S). $ Note that $H(S)$ is countable whenever $S$ is countable, and that $S subset.eq H(S)$. Moreover, since $H$ is a union of set functions which are monotonic and continuous with respect to ascending chains of set inclusions, $H$ is itself monotonic and continuous with respect to ascending chains.
Define $S_0 = {x,x'}$ and $S_(n+1) = H(S_n)$ for $n in NN$. Let $L = union.big_(n=0)^infinity S_n.$ Since $L$ is the limit of an ascending chain, we have $H(L) = union.big_(n=0)^infinity H(S_n) = union.big_(n=0)^infinity S_(n+1) = L$, so $L$ is a fixed point of $H$. Since $L$ is a countable union of countable sets, then $L$ is itself countable.
Since $L$ is a fixed-point of $H$, we have that it is closed under addition and rational scaling, from which $clo(L)$ is closed under addition and real scaling, so $clo(L)$ is a closed subspace of $X$.
Since $L$ is countable, $clo(L)$ is separable. By @rat-scaling,
we have that $clo(L sect B_X) = clo(L) sect B_X = B_(clo(L))$.
Since $lip$ is continuous and $L$ is closed under $lip$, we have
that $lip(clo(L sect B_X)) subset.eq clo(lip(L sect B_X)) subset.eq clo(L sect B_X)$, so $lip(B_(clo(L))) subset.eq B_(clo(L))$. Analogously, we have $lip^(-1)(B_(clo(L))) subset.eq B_(clo(L))$. From these, we have $lip(B_(clo(L))) = B_(clo(L))$, so $restr$ is a well-defined non-expansive homeomorphism. Since $x, x' in B_(clo(L))$, we also have that $restr$ is not an isometry.
]
= Primary results <sec:main>
== Conventions, notation <sec:main_notation>
Throughout @sec:main, we consider the following structure and a certain weakening thereof, explained below:
- $n$ is a fixed value in $NN$ such that $n >= 1$.
- $X_i$ is a family of strictly convex nontrivial Banach spaces, indexed by $i in [n]$.
- $B_i, S_i$ are the unit ball and sphere of $X_i$ respectively.
- $Z = plus.circle.big_(i in [n]) X_i$ is the direct sum of the family $X_i$ endowed with the $infinity$-norm, i.e. if $z = (x_0, dots, x_(n-1)) in Z$, then $norm(z) = max_(i in [n]) norm(x_i)$.
- $pi_i : Z -> X_i$ is the projection onto the $i$-th component of $Z$.
- More generally, $pi_i$ should be understood as the projection onto the $i$-th component of _any_ structure with components indexed by a set containing $i$.
- $pih_i : Z -> hat(X)_i$ is the complementary projection of the $i$-th component, where $hat(X)_i = plus.circle.big_(j in [n] \\ {i}) X_j$. For all $j in [n] \\ {i}$, we have that $pi_j z = pi_j pih_i z$, while $pi_i pih_i z$ is ill-defined.
- $B_Z, S_Z$ are the unit ball and sphere of $Z$ respectively.
- $E subset.eq B_Z$ is the set of extreme points of $B_Z$. These can be shown to be precisely those $z in S_Z$ for which $forall i in [n], pi_i z in S_i$.
- $ihom : B_Z -> B_Z$ is a non-contractive injection from $B_Z$ to itself.
- $lip : B_Z -> B_Z$ is a 1-Lipschitz bijection, such that $lip = ihom^(-1)$ if $ihom$ is invertible.
- $sigma : [n] -> [n]$ is a permutation of $[n]$.
- $ihom_i : S_i -> S_sigma(i)$ is a non-contractive injection satisfying $pi_i x in S_i => pi_(sigma(i)) ihom(x) = ihom_i (pi_i x)$. Proving that such functions exist is one of the central results of this article.
- $lip_i : S_sigma(i) -> S_i$ is the inverse of $ihom_i$, whenever the inverse exists.
Many of our results require no geometric considerations, and thus can be carried out on an analogous graph structure:
- $B_i$ is a disconnected graph with maximal vertex degree exactly 1. This means every connected component of $B_i$ consists of either a single vertex or two vertices joined by an edge.
- $S_i <= B_i$ is the subgraph of $B_i$ consisting only of vertices with a neighbor.
- $B_*$ is the co-normal product of the graphs $B_i$, while $B_square$ is the Cartesian product of the same. $B_square$ is a subgraph of $B_*$. Their common set of vertices is denoted $B$.
- $E$ is the set of those vertices $e in B$ for which $forall i in [n], e_i in S_i$. $E_* <= B_*$ and $E_square <= B_square$ are the respective induced subgraphs.
- $S$ is the set of those vertices $s in B$ for which $exists i in [n], s_i in S_i$. $S_* <= B_*$ and $S_square <= B_square$ are the respective induced subgraphs.
- The adjacency of vertices $u,v in B_*$ is denoted $u adj v$, and adjacency in $B_square$ is denoted $u adj' v$.
- The adjacency of vertices $u,v in B_i$ is denoted either $u adj v$ or $u adj' v$, since the two product structures agree on $B_i$.
- $ihom : B_* -> B_*$ is an injective graph homomorphism.
- $lip : B_* -> B_*$ is the set-theoretic inverse of $ihom$ whenever $ihom$ is invertible.
- $ihom_i : S_i -> S_sigma(i)$ is a local isomorphism satisfying $pi_i x in S_i => ihom(x)_sigma(i) = ihom_i (pi_i x)$.
- $lip_i : S_sigma(i) -> S_i$ is the set-theoretic inverse of $ihom_i$, whenever this exists.
The analogies in notation between the Banach space structure and the graph structure are motivated by these identifications:
- Given points $a,b in B_i$ with $B_i$ the unit ball of $X_i$, we may construct the graph $B_i$ by setting $a adj b$ iff $norm(a-b) = 2$. This will be a disconnected graph with maximal vertex degree exactly 1.
- $B_*$ is the graph constructed from $B_Z$ with precisely the same condition for adjacency, while $B_square$ has the adjacency condition $u adj' v <=> exists i in [n], (norm(u_i - v_i) = 2) and (pih_i u = pih_i v)$.
- Each subgraph $S_i$ of $B_i$ is generated by the unit sphere of $X_i$. Analogously, $S, E$ are the vertex sets corresponding to the unit sphere $S_Z$ and the extreme points of $B_Z$, respectively.
The remaining analogies are left for the reader to verify.
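As a sanity check on the first of these identifications: if $a, b in B_i$ satisfy $norm(a - b) = 2$, then $2 = norm(a - b) <= norm(a) + norm(b) <= 2$, so both points lie on the unit sphere of $X_i$, and strict convexity forces $b = -a$. Hence every vertex of $S_i$ has exactly one neighbour, namely its antipode, while the vertices of $B_i \\ S_i$ have none; this is precisely the degree condition required of the graph $B_i$.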
Throughout @sec:main, every theorem and lemma shall have an annotation indicating whether it requires the Banach space structure. If the graph-theoretic structure is sufficient, there will be no annotation.
#let annotation = [(Geometric)]
== Graph-theoretic results
#let bcn = $B_*$
#let bbox = $B_square$
We would like to prove the following theorem:
#theorem[Let $ihom : bcn -> bcn$ be an injective homomorphism. Then there exists a permutation $sigma: [n]->[n]$ and a family of local isomorphisms $ihom_i : S_i -> S_(sigma(i))$ such that for all $x in bcn$ and all $i in [n]$, we have $pi_i x in S_i => pi_(sigma(i))ihom(x) = ihom_i (pi_i x)$.] <thm:factors>
#lemma[
A clique of $2^n-1$ points in $bcn$ has at most one extension
to a clique of $2^n$ points.
] <lem:clique-ext>
#proof[
Induction on $n$. The case with $n=1$ is clear by inspection.
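(Indeed, for $n = 1$ a clique of $2^1 - 1 = 1$ points is a single vertex; since every vertex of $bcn$ then has degree at most $1$, it has at most one neighbour and hence at most one extension to a clique of $2$ points.)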
First note that if the statement is true for a graph $bcn$, then the maximal clique size for $bcn$ is at most $2^n$. If there were a clique of $2^n+1$ points, then every subclique of size $2^n - 1$ would have at least two distinct extensions to a clique of size $2^n$.
Consider an enumerated family $(x_i)_(i in [2^n])$ of pairwise adjacent
vertices in $bcn$. We want to show that any vertex $q in bcn$ that is
adjacent to all vertices $x_i$ with $i > 0$ is equal to $x_0$.
Partition the family into $C union.dot D$ such that $C$ is any maximal
subset of the family such that $pi_0 C$ is an edgeless graph.
Note two things: $pi_0 D$ is also edgeless, and $card(C) = card(D) = 2^(n-1)$. From this, it follows that $D$ also satisfies the defining condition of $C$.
Let's verify the first observation. If any element $pi_0 x$ of $pi_0 D$ were disconnected from $pi_0 C$, then $C$ could be extended to $C union {x}$, so $C$ would not be maximal. This implies that every element of $pi_0 D$ is connected to some element of $pi_0 C$, so $pi_0 D subset.eq N(pi_0 C)$. Since $pi_0 C$ is edgeless, then $N(pi_0 C)$ is edgeless,
thus $pi_0 D$ is edgeless.
Now the second observation. Since $C$ is maximal among those subsets $S$ for which $pi_0 S$ is edgeless, and $pi_0 D$ is edgeless, we have $card(C) >= card(D)$, from which $card(C) >= 2^(n-1)$. Now, since $forall c, c' in C, c adj c'$ while $pi_0 c adj.not pi_0 c'$, we must have $pih_0 c adj pih_0 c'$. This means that $pih_0 C$ is a clique of at least $card(C) >= 2^(n-1)$ points in $pih_0 bcn$ (we have that $card(pih_0 C) = card(C)$ since $c != c' => c adj c' => pih_0 c adj pih_0 c' => pih_0 c != pih_0 c'$). By the induction assumption, the largest clique size in $pih_0 bcn$ is at most $2^(n-1)$, so $card(C) >= 2^(n-1) >= card(pih_0 C) = card(C)$, hence $card(C) = 2^(n-1)$.
Having shown that $D$ is also maximal w.r.t. C's defining condition, we have that $pi_0 C subset.eq N(pi_0 D)$. Since $N^2(S) subset.eq S$ for a subset $S$ of a graph of maximal degree 1, we have that $N(pi_0 C) subset.eq pi_0 D$, from which $pi_0 D = N(pi_0 C)$ and $pi_0 C = N(pi_0 D)$.
Assume WLOG that $x_0 in C$. If $pi_0 q$ had no edge to $pi_0 D$, then
$pih_0 D union.dot {pih_0 q}$ would form a clique of $2^(n-1)+1$ elements
in $pih_0 bcn$, which is impossible. Thus $pi_0 q adj pi_0 b$ for some $b in D$. It follows that $pi_0 q$ has no edge to $pi_0 C$, from which $pih_0 q$ has edges to all members of $pih_0 (C - {x_0})$. This means $pih_0 (C - {x_0})$ is a clique of $2^(n-1)-1$ points in $pih_0 bcn$ which can be extended by $pih_0 q$ or by $pih_0 x_0$. By the induction assumption, this means $pih_0 q = pih_0 x_0$.
Running the same argument by some index $j > 0$, we also have that $pih_j q = pih_j x_0$. Since these two projections cover all components, we must have $q = x_0$.
]
#let xfam = $(x_i)_(i in [2^n])$
#let yfam = $(y_i)_(i in [2^n])$
For any $x in E$, define $T(x)$ as the connected component of $x$ in $bbox$. It may be readily verified that $T(x)$ has $2^n$ members: there is a bijection between subsets $J subset.eq [n]$ and vertices $x_J in T(x)$, such that $forall j in J, pi_j x_J = pi_j x$ and $forall j in [n]\\J, pi_j x_J adj pi_j x$. $T(x)$ is thus naturally isomorphic to a pointed $n$‑hypercube graph.
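For instance, when $n = 2$ the component $T(x)$ consists of the four vertices $x_J$ with $J subset.eq {0, 1}$, joined in a $4$-cycle in $bbox$; in the geometric picture these are the points obtained from $x$ by flipping the signs of the components outside $J$.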
From here on, let $x_0 in E$. Let $xfam = T(x_0)$ and define $y_i = ihom(x_i)$.
#lemma[
$ihom$ is an $E_square$-homomorphism.
] <one-component>
#proof[
Let us have $x_a, x_b in E$ such that $x_a adj' x_b$. There exists some $x_0 in E$ for which $x_a, x_b in T(x_0)$ --- we may choose $x_0 := x_a$.
Since $ihom$ is a $bcn$-homomorphism and $x_a adj x_b$, we have $y_a adj y_b$, so $y_a, y_b$ are adjacent in at least one component. Note also that $yfam$ is a clique in $bcn$.
We first show that $y_a, y_b$ differ in exactly one component.
Suppose for the sake of contradiction that $y_a, y_b$ differ in at least two components, and let the first two of these have indices $i, j$.
Let $C_i union.dot D_i$ be a partition of $yfam$ such that $pi_i C_i$ is maximal and edgeless, and $y_a in C_i, y_b in D_i$. To prove that such a partition exists, it is sufficient to find any $y_c$ such that $pi_i y_c adj pi_i y_b$, start with ${y_a, y_c} subset.eq C_i$ and extend $C_i$ to maximality (noting that $pi_i y_a != pi_i y_b$ implies $pi_i y_a adj.not pi_i y_c$). Such a $y_c$ exists, since any partition of $yfam$ into two maximal $pi_i$‑edgeless sets consists of two nonempty sets, one of which contains $y_b$, and any member of the other component can be $y_c$.
Also construct an analogous partition $C_j union.dot D_j$ for $pi_j$.
Let $k$ be the index for which $pi_k x_a adj pi_k x_b$. By the definition of $T$, this is the unique index at which $pi_k x_a != pi_k x_b$, so $pih_k x_a = pih_k x_b$. Fix $v in B$ such
that $pih_k v = pih_k x_a, pih_k x_b$, and $pi_k v != pi_k x_a, pi_k x_b$. Note that $ihom(v)$ has $bcn$-edges to all of $yfam$ except possibly for $y_a, y_b$.
Suppose that $pi_i ihom(v)$ has an edge to $pi_i C_i$. This implies $pi_i ihom(v) in N(pi_i C_i) = pi_i D_i$,
// ref here?
from which $pih_i ihom(v)$ forms a $bcn$-clique with $pih_i (D_i - {y_b})$. By @lem:clique-ext this implies that $pih_i ihom(v) = pih_i y_b$, from which $pi_j ihom(v) = pi_j y_b$. From this, we have that $pih_j ihom(v)$ forms a $bcn$-clique with $pih_j (D_j - {y_b})$, which implies by @lem:clique-ext that $pih_j ihom(v) = pih_j y_b$. Since $pih_i ihom(v) = pih_i y_b$ and $pih_j ihom(v) = pih_j y_b$, we have $ihom(v) = y_b = ihom(x_b)$, from which $v = x_b$, which contradicts our choice of $v$.
Consequently, $pi_i ihom(v)$ has no edge to $pi_i C_i$, and by symmetrical argument it has no edge to $pi_i D_i$ either. From these, we have that $pih_i ihom(v)$ forms a $bcn$-clique with $pih_i (C_i - {y_a})$ and with $pih_i (D_i - {y_b})$, which implies by @lem:clique-ext that $pih_i y_a = pih_i ihom(v) = pih_i y_b$, so $pi_j y_a = pi_j y_b$, contradicting our choice of $j$ and thus our claim that $y_a, y_b$ differ in at least two components.
Since $y_a$ and $y_b$ are adjacent in at least one component and differ in at most one component, these bounds must be saturated and exactly one component accounts for both, which is some $m$ at which $pi_(m) y_a adj pi_(m) y_b$ and $pih_(m) y_a = pih_(m) y_b$ --- consequently $y_a adj' y_b$.
]
#lemma[
There is a permutation $sigma: [n]->[n]$ and a family of local isomorphisms $ihom_i : S_i -> S_sigma(i)$ such that for $e in E$, we have $pi_sigma(i) ihom(e) = ihom_i (pi_i e)$.
]
#proof[
First, note that every member of $yfam$ lies in $T(y_0)$: the family $yfam$ is connected in $bbox$ (being the image of the connected graph $T(x_0)$ under $ihom$, which preserves $bbox$-adjacency on $E$ by @one-component), and $T(y_0)$ is the connected component of $y_0$ in $bbox$. Consequently, $ihom$ is a homomorphism from $T(x_0)$ to $T(y_0)$.
Because $ihom$ is injective, $T(y_0)$ contains the $2^n$ distinct vertices $yfam$; since no connected component of $bbox$ has more than $2^n$ vertices, $T(y_0)$ has exactly $2^n$ vertices, so $y_0 in E$ and $T(y_0)$ is itself a pointed $n$-hypercube. Thus $ihom$ is an injective homomorphism between two finite graphs with equally many vertices and equally many edges, and must therefore be an isomorphism from $T(x_0)$ to $T(y_0)$.
That this isomorphism arises from a componentwise isomorphism (up to permutation of components) follows from e.g. Theorem 6.8 in @handbook.
Since these isomorphisms must glue compatibly across all of $E$, we have that there exists a permutation $sigma : [n] -> [n]$ and a family of local isomorphisms $ihom_i : S_i -> S_sigma(i)$ such that for all $x in E$, $pi_sigma(i) ihom(x) = ihom_i (pi_i x)$.
]
We would like to extend this slightly to the case of vertices outside of $E$ to finish our proof of @thm:factors.
#lemma[
Let $q in B$ and $i in [n]$ be such that $pi_i q = pi_i x_0$. Then $pi_sigma(i) ihom(q) = pi_sigma(i) ihom(x_0) = ihom_i (pi_i x_0)$.
] <lem:interior>
#proof[
This is trivial for the $n=1$ case (since then $q in E$ or the claim is vacuous), so we will assume $n > 1$.
Let $(x_j)_(j in J)$ be the subfamily of $xfam$ consisting of those members with $pi_i x_j adj pi_i x_0$. We then have that $card(J) = 2^(n-1)$ and $forall j in J, pi_sigma(i)y_j adj pi_sigma(i) y_0$. Note also that since $q adj x_j$, we have $ihom(q) adj y_j$ for each $j in J$.
Suppose that $pi_(sigma(i)) ihom(q) != pi_(sigma(i)) y_0$. This implies that $pi_sigma(i)ihom(q) adj.not pi_sigma(i)y_j$ for each $j in J$, from which (by way of $ihom(q) adj y_j$) we must have $pih_sigma(i)ihom(q) adj pih_sigma(i)y_j$. This induces a clique of $2^(n-1)+1$ vertices ${pih_sigma(i)ihom(q)} union {pih_sigma(i)y_j : j in J}$ in $pih_(sigma(i))bcn$, which is impossible by @lem:clique-ext. We thus have that $pi_sigma(i) ihom(q) = pi_sigma(i)y_0 = ihom_i (pi_i x_0)$.
]
Since for each $x_i in S_i$ there exists an $x in E$ with $pi_i x = x_i$, @lem:interior concludes our proof of @thm:factors.
== Applications to plasticity
We begin with a straightforward corollary of @thm:factors.
#theorem[#annotation
Let $Z := plus.circle.big_(i in [n]) X_i$ and let $ihom : B_Z -> B_Z$ be a non-contractive function. Then there is some permutation $sigma : [n] -> [n]$ and a family of non-contractive functions $ihom_i : S_i -> S_(sigma(i))$ such that for all points $x in B_Z$ and all $i in [n]$ we have
$pi_i x in S_i => pi_(sigma(i)) ihom(x) = ihom_i (pi_i x)$.
] <thm:banach-factors>
@thm:banach-factors follows from applying @thm:factors to the graph with vertex set $B_Z$, edge set ${{x,x'} : norm(x-x')=2}$, and $ihom$ as the injective homomorphism: since $B_Z$ has diameter $2$ and $ihom$ is non-contractive, any pair of points at distance $2$ is mapped to a pair at distance $2$, so $ihom$ preserves edges, and non-contractivity also makes $ihom$ injective.
This can be applied to prove some more natural theorems concerning plasticity.
#theorem[#annotation
Let $lip: B_Z -> B_Z$ be a 1-Lipschitz bijection. If $lip$ maps extreme points to extreme points, or $lip(S_Z) subset.eq S_Z$, then $lip$ is an isometry.
] <thm:natural>
Our proof of @thm:natural draws upon <NAME>'s work in @haller:2022
for its outline.
We begin with some graph-theoretic lemmas mirroring the conditions of @thm:natural.
#lemma[
If $S subset.eq ihom(S)$ or $E subset.eq ihom(E)$, then each $ihom_i$ is a bijection.
] <lem:bijective-factors>
Note that the conditions $S subset.eq ihom(S)$ and $E subset.eq ihom(E)$ in @lem:bijective-factors are equivalent to $lip(S) subset.eq S$ and $lip(E) subset.eq E$ respectively in the setting of @thm:natural.
#proof[
It is enough to show that each $ihom_i$ is surjective, i.e. that $y in ihom_i (S_i)$ for any $y in S_(sigma(i))$.
We shall first consider the case where $S subset.eq ihom(S)$.
First, fix a point $q in S$ such that $pi_sigma(i) q = y$ and $pi_j q in.not S_j$ for all $j != sigma(i)$. By $S subset.eq ihom(S)$, we may fix $x in S$ such that $ihom(x) = q$. Let $J = {j in [n] : pi_j x in S_j}$. From @thm:factors, we know that $pi_sigma(j) ihom(x) in S_sigma(j)$ for all $j in J$. It follows that $J subset.eq {i}$. Since $x in S$, we have that $J$ is nonempty, from which $J = {i}$. This allows us to use @thm:factors to conclude that $ihom_i (pi_i x) = pi_sigma(i) ihom(x) = y$, so $y in ihom_i (S_i)$.
We now consider the case where $E subset.eq ihom(E)$.
Fix a point $q in E$ such that $pi_sigma(i) q = y$. Since $q in E$, there is some $x in E$ for which $ihom(x) = q$, and by @thm:factors, we have $ihom_i (pi_i x) = pi_sigma(i) ihom(x) = y$.
]
#lemma[
If $S subset.eq ihom(S)$, then $E subset.eq ihom(E)$.
] <lem:sphere-implies-extreme>
#proof[
Fix any point $y in E$. We aim to construct a point $x in E$ such that $ihom(x)=y$.
By $S subset.eq ihom(S)$ and @lem:bijective-factors, we have that each $ihom_i$ is a bijection, so we may define $x$ such that $ihom_i (pi_i x) = pi_sigma(i) y$ for each $i in [n]$. By @thm:factors, we have that $pi_sigma(i) ihom(x) = ihom_i (pi_i x) = pi_sigma(i) y$, so $ihom(x) = y$ as desired.
]
// I have no idea whether the converse holds - I should try
// pushing the formalism at some point.
We now proceed with more geometric results for which a graph-theoretic analogue has not been recovered. Whenever @lem:bijective-factors is applicable, we define $lip_i = ihom_i^(-1)$ in analogy with the relation $lip = ihom^(-1)$.
For convenience in what follows, we define the functions $gamma_i : B_i -> B_sigma(i)$ such that $gamma_i (x) =
cases(norm(x) ihom_i (x/norm(x)) &"if" x != 0, 0 &"otherwise")$, and $phi_i : B_sigma(i) -> B_i$ as $phi_i = gamma_i^(-1)$. We additionally define
$gamma, phi : B_Z -> B_Z$ as $pi_sigma(i) gamma(x) = gamma_i (pi_i x)$, and
$pi_i phi(x) = phi_i (pi_sigma(i) x)$.
The reader may readily verify that $phi = gamma^(-1)$.
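Indeed, for each $i in [n]$ we have $pi_(sigma(i)) gamma(phi(x)) = gamma_i (pi_i phi(x)) = gamma_i (phi_i (pi_(sigma(i)) x)) = pi_(sigma(i)) x$, and similarly with the roles of $gamma$ and $phi$ exchanged.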
#definition[
Given a $1$-Lipschitz bijection $lip : B_Z -> B_Z$ and $ihom = lip^(-1)$ satisfying the conditions of @lem:bijective-factors, $ihom$ is said to be _homogeneous in $k$ components_ if for all $x in B_Z$ such that $x$ has norm $1$ on at least $n-k$ components, we have $ihom(x) = gamma(x)$. Analogously, we say that $lip$ is homogeneous in $k$ components when, for the same $x$, we have $lip(x) = phi(x)$.
]
// This entire homogeneity business could use some reworking to have no reliance
// on a division by zero.
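To orient the reader: homogeneity in $0$ components only constrains $ihom$ on $E$, where $ihom$ already agrees with $gamma$ by @thm:banach-factors, while homogeneity in $n$ components is the assertion that $ihom = gamma$ on all of $B_Z$, which is precisely what @lem:g-homogeneous below establishes.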
#lemma[#annotation
If $lip(E) subset.eq E$, then
the function $ihom$ is homogeneous in $k$ components if and only if
$lip$ is also.
] <lem:homogeneity-equiv>
#proof[
We will show that $ihom$ being homogeneous in $k$ components implies that $lip$ is also. The proof of the converse is analogous.
Let us have $J subset.eq [n]$ with $card(J) <= k$ and $x in B_Z$ such that $x$ has norm 1 on components $[n] \\ J$.
First, define $a_i = norm(pi_i x)$ and fix $y in E$ such that
$pi_i x = a_i pi_i y$. Then fix $q in B_Z$ such that $pi_i q = a_sigma(i) pi_i lip(y)$, and note that $q$ has norm 1 in exactly as many components as $x$, since $lip(y) in E$. Consequently, the homogeneity of $ihom$ in $k$ components applies to $q$, from which
$ pi_sigma(i) ihom(q) = gamma_i (pi_i q) = 0 = pi_sigma(i) x " if"
pi_i q = 0 $
and
$ pi_sigma(i) ihom(q) = gamma_i (pi_i q) = norm(pi_i q) ihom_i ((pi_i q)/norm(pi_i q)) = a_sigma(i) ihom_i (pi_i lip(y)) =_4 a_sigma(i) pi_sigma(i) y = pi_sigma(i) x " otherwise," $
where equality 4 follows from $lip(y) in E$ and @thm:banach-factors.
Consequently, we have $ihom(q) = x$ and $gamma(q) = x$, from which
$q = lip(x)$ and $q = phi(x)$, so $lip(x) = phi(x)$.
]
#lemma[
#annotation
If $lip(E) subset.eq E$, then $ihom$ is homogeneous in $n$ components, i.e. $ihom = gamma$.
] <lem:g-homogeneous>
#proof[
We proceed by induction on $k$. The base case of $k=0$ is covered by @thm:banach-factors.
Suppose $ihom$ is homogeneous in $k < n$ components. Let $J subset.eq [n]$ with
$card(J) = k+1$, and let $x in B_Z$ have norm 1 on components $[n] \\ J$. We
already know that for $i in [n]\\J$, $pi_sigma(i) ihom(x) = ihom_i (pi_i x)
= pi_sigma(i) gamma (x)$ by
the construction of $ihom_i$.
It thus suffices to show that for $i in J$, we have $pi_sigma(i) ihom(x) =
pi_sigma(i) gamma(x)$.
Fix any $i in J$. Define $y$ such that $pih_sigma(i) y = pih_sigma(i) ihom(x)$ and $norm(pi_sigma(i) ihom(x)) pi_sigma(i) y = pi_sigma(i) ihom(x)$. Note that
$y$ has norm 1 on $k$ components, so we can apply the induction assumption to it.
We also define $z$ as equal to $y$, except at $sigma(i)$, where we flip the
sign of the component.
Since $ihom_j (-x) = -ihom_j (x)$ holds for all $j in [n], x in S_j$ (each $ihom_j$ is a local isomorphism, and the only neighbour of a point of $S_j$ in the graph $B_j$ is its antipode, so $ihom_j$ must send $-x$ to $-ihom_j (x)$), we also have that
$lip_j (-x) = -lip_j (x)$ holds for
$j in [n], x in S_sigma(j)$. Since $pi_sigma(i) z = -pi_sigma(i) y$, we have that
$lip_i (pi_sigma(i) z) = -lip_i (pi_sigma(i) y)$, from which
$pi_i lip(z) = -pi_i lip(y) in S_i$ by the homogeneity of $lip$ in $k$ components.
We have that
$ norm(pi_i lip(y) - pi_i x) <= norm(lip(y) - x) <= norm(y - ihom(x)) = 1 - norm(pi_sigma(i) ihom(x)). $
Similarly, we have that
$ norm(-pi_i lip(y) - pi_i x) = norm(pi_i lip(z) - pi_i x) <= norm(lip(z) - x) <= norm(z - ihom(x)) = 1 + norm(pi_sigma(i) ihom(x)). $
This means that $pi_i x$ lies in the
intersection of the two balls $B(pi_i lip(y), 1-norm(pi_sigma(i) ihom(x)))$ and
$B(-pi_i lip(y), 1+norm(pi_sigma(i) ihom(x)))$. The intersection of these is a convex
set in $X_i$, and is contained in the sphere of each ball, since the distance between their centers is the sum of their radii. Since $X_i$ is a strictly
convex space, this intersection can contain at most one point. Since
$norm(pi_sigma(i) ihom(x))pi_i lip(y)$ belongs to both balls, we must have that
$norm(pi_sigma(i) ihom(x))pi_i lip(y) = pi_i x$. Since we had $norm(pi_i lip(y))=1$
by the homogeneity of $lip$, this gives us
$norm(pi_i x) = norm(pi_sigma(i) ihom(x))$.
If $pi_i x = 0$, we are done, since this gives us
$pi_sigma(i) ihom(x) = 0$. We shall proceed with the assumption that
$pi_i x != 0$.
In this case, we have that
$norm(pi_i x)lip_i (pi_sigma(i) y) = pi_i x$.
Dividing both sides by $norm(pi_i x)$ and applying $ihom_i$, we get
$pi_sigma(i) y = ihom_i ((pi_i x)/norm(pi_i x))$.
By the definition of $y$, this gives us
$ (pi_sigma(i) ihom(x))/norm(pi_sigma(i) ihom(x)) = (pi_sigma(i) ihom(x))/norm(pi_i x) = ihom_i ((pi_i x)/norm(pi_i x)), $
from which
$pi_sigma(i) ihom(x) = norm(pi_i x)ihom_i ((pi_i x)/norm(pi_i x)) = pi_sigma(i) gamma(x)$
is immediate.
]
We now have enough to prove @thm:natural.
#proof[
By @lem:bijective-factors, we have that the functions $lip_i$ exist,
so $phi$ is well-defined.
It's clear by the construction of $phi$ that $phi$ is a bijection, and $phi(alpha x) = alpha phi(x)$
for all $x in S_Z, alpha in [-1,1]$. Moreover, because $gamma(S_Z) subset.eq S_Z$ and $gamma(B_Z\\S_Z) subset.eq B_Z \\ S_Z$, we have $gamma(S_Z) = S_Z$, so $phi(S_Z) = S_Z$.
By @lem:sphere-implies-extreme, we have $lip(E) subset.eq E$.
By @lem:homogeneity-equiv and @lem:g-homogeneous, we have $lip = phi$,
so $lip$ satisfies the same properties stated above for $phi$.
By #cite(<cascales:2016>, supplement: [Lemma 2.5]), the facts just stated about $lip$, along with the fact that $lip$ is $1$‑Lipschitz, are sufficient for $lip$ to be an isometry of $B_Z$.
]
#heading(numbering: none, "Acknowledgements")
This work was supported by the Estonian Research Council grant PRG1901. The author thanks #box[<NAME>] and #box[<NAME>] for their comments and support.
#bibliography("refs.yml") |
|
https://github.com/HPDell/typst-starter-journal-article | https://raw.githubusercontent.com/HPDell/typst-starter-journal-article/main/test.typ | typst | MIT License | #import "@preview/starter-journal-article:0.2.0": article, author-meta
#let affiliations = (
"UCL": "UCL Centre for Advanced Spatial Analysis, First Floor, 90 Tottenham Court Road, London W1T 4TJ, United Kingdom",
"TSU": "Haidian District, Beijing, 100084, P. R. China"
)
#let author-list(authors, template, affiliations: affiliations) = {
stack(dir: ltr, spacing: 1em, ..authors.map(it => template(it)))
}
#show: article.with(
title: "Article Title",
authors: (
"Author One": author-meta(
"UCL", "TSU",
email: "<EMAIL>",
),
"Author Two": author-meta(
"TSU",
cofirst: true
),
"Author Three": author-meta(
"TSU"
)
),
affiliations: affiliations,
abstract: [#lorem(100)],
keywords: ("Typst", "Template", "Journal Article"),
template: (
title: (title) => {
set align(left)
set text(size: 1.5em, weight: "bold", style: "italic")
title
}
)
)
= Section
#lorem(20)
|
https://github.com/AU-Master-Thesis/thesis | https://raw.githubusercontent.com/AU-Master-Thesis/thesis/main/sections/0-predoc/preface.typ | typst | MIT License | #import "../../lib/mod.typ": *
= Preface <preface>
// Master Thesis
// Computer Engineering
// Authors
// - <NAME>
// - <NAME>
// Supervisor: <NAME>
// Co-supervisor: <NAME>
// Department of Electrical and Computer Engineering
// Aarhus University
// Aarhus, Denmark
// Dates
// - Start: 29th of January 2024
// - End: 4th of June 2024
This master thesis is titled _"#project-name"_ and is devised by #a.kristoffer and #a.jens. Both authors are students at Aarhus University, Department of Electrical and Computer Engineering, enrolled in the Computer Engineering Master's programme. Both authors have completed a Bachelor's degree in Computer Engineering under the same conditions.
The thesis has been conducted in the period from #important-datetimes.project.start.display("[day]-[month]-[year]") to #important-datetimes.project.end.display("[day]-[month]-[year]"), and supervised by Assistant Professor <NAME> and co-supervised by PhD #supervisors.jonas. We would like to express our gratitudes to both our supervisors for their support and advice throughout the project. Over multiple sessions Sejersen has provided extensive feedback on the structure and content of the thesis, which has been of substantial value.
An additional thanks goes to our friends, <NAME> and <NAME>, for their help with proofreading and for providing us with constructive feedback on the thesis.
All software developed in this thesis is released under the MIT license, and is provided as is without any warranty.
\ \
// Sign off
Enjoy reading, \
#a.kristoffer & #a.jens \
|
https://github.com/8LWXpg/typst-ansi-render | https://raw.githubusercontent.com/8LWXpg/typst-ansi-render/master/CHANGELOG.md | markdown | MIT License | # Changelog 📝
## [0.6.1] - 2023-12-26
### Changed
* Changed default foreground color name to `default-fg`
* Slightly reduce pdf size by removing default box fill
## [0.6.0] - 2023-12-06
### Fixed
* Removed workaround for a bug in `raw` that fixed in Typst 0.10.0
## [0.5.1] - 2023-10-21
### Fixed
* Fixed height with empty newline
## [0.5.0] - 2023-09-29
### Added
* Added `bold-is-bright` option #2
### Changed
* Allow setting font to none #3
* Changed default font size to `1em`
* Use `raw` to render content now
### Fixed
* Fixed 8-bit colors 8-15 use the wrong colors #4
## [0.4.2] - 2023-09-24
### Added
* Added gruvbox themes #1
## [0.4.1] - 2023-09-22
### Changed
* Changed default font size to `9pt`
* Prevent `set` affects box layout from outside of the function
## [0.4.0] - 2023-09-13
### Added
* Added most options from [`block`]([https://](https://typst.app/docs/reference/layout/block/)) function with the same names and default values
* Added `vscode-light` theme
### Changed
* Changed outmost layout from `rect` to `block`
* Changed default theme to `vscode-light`
## [0.3.0] - 2023-09-09
### Added
* Added `radius` option, default is `3pt`
### Changed
* Changed default font size to `10pt`
* Changed default font to `Cascadia Code`
* Changed default theme to `solarized-light`
## [0.2.0] 2023-08-05
### Changed
* Changed coding style to kebab-case and two spaces
## [0.1.0] 2023-07-02
first release
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/ops-prec-03.typ | typst | Other | // Not in handles precedence.
#test(-1 not in (1, 2, 3), true)
|
https://github.com/goshakowska/Typstdiff | https://raw.githubusercontent.com/goshakowska/Typstdiff/main/tests/test_complex/ordered_list/ordered_list_mix.typ | typst | + The climate
- Precipitation
- Temperature factors
+ degree
- hot
- cold
+ Something new
+ Monkey |
|
https://github.com/sysu/better-thesis | https://raw.githubusercontent.com/sysu/better-thesis/main/CHANGELOG.md | markdown | MIT License | # Changelog
All notable changes to this project will be documented in this file.
## [0.3.0] - 2024-06-15
### 🚀 Features
- *(heading)* 正文及附录部分一级标题前分页
- *(header)* 使用章标题与论文标题作为页眉
### 🐛 Bug Fixes
- 用 i-figured 修复图标题没有重置按章重置的问题
- *(appendix)* 修复附录图表、公式编码样式
### 📚 Documentation
- *(README)* 修复 README 文档中没有链接到规范问题
- *(README)* 说明本科生模板已经完成模板规范要求
- *(README)* 更新typst.app使用方法中指向的模板版本
### ⚙️ Miscellaneous Tasks
- 修改样例文档以通过编译
## [0.2.0] - 2024-06-09
### 🚀 Features
- *(specification/bachelor)* 根据规范文件配置图题、标题字体
- *(specification/bachelor)* 根据规范修改脚注样式
- *(specification/bachelor)* 表格标题放置到表格上方
- *(specification/bachelor)* 设置基本的页眉
- *(specification/bachelor)* 修改参数要求
- *(specification/bachelor)* 附录标题中取消中英文分隔
- *(specification/bachelor)* 默认不渲染附录页,并修改附录默认参数
### 🐛 Bug Fixes
- *(typst.toml)* Fix typo of "morden" to modern
- *(specification/bechelor)* 修复附录各级标题不符合规范的问题
- *(specification/bachelor)* 修复论文编号设置错误问题
### 🚜 Refactor
- *(specification/bachelor)* 修改默认参数
### 📚 Documentation
- *(specification/bachelor)* 调整过长的规范注释
- *(specification/bachelor)* 删去错误的文档注释
- *(README)* 更新文档规范实现指向 #6
### ⚙️ Miscellaneous Tasks
- *(specification/bachelor)* 调整默认参数分行
- *(gitlab-ci)* 自动更新模板中版本号
## [0.1.1] - 2024-05-21
### 📚 Documentation
- *(README)* 更新文档
### ⚙️ Miscellaneous Tasks
- *(gitlab-ci)* 修复变量错误
- *(template)* 修改发版 MR 模板
- *(gitlab-ci)* Automate everything!
- *(gitlab-ci)* 提取默认镜像配置
- *(gitlab-ci)* 恢复手动触发打标签
- *(gitlab-ci)* 更新打版命令
- *(GitLab-ci)* 分离每一步命令以检查命令执行原因
- *(git-cliff)* Release-note 排除自动化提交的记录
- *(gitlab-ci)* 修复版本 bump up 命令与推送问题
- *(gitlab-ci)* 修复语法错误
- *(gitlab-ci)* Fix cache issue
- *(gitlab-ci)* 修复打版提交推送不成功的问题
- *(gitlab-ci)* 尝试修复 release-commit 无法推送到分支的问题
- *(gitlab-ci)* 重新合并经过验证的命令
- *(GitLab-ci)* 更换安全git-lfs 的命令
- *(gitlab-ci)* 更换安装 lfs 的命令
- *(gitlab-ci)* 再次更换安装 lfs 的命令
- *(GitLab-ci)* 切换回默认分支进行验证
- *(README)* 修复 README 图标显示版本号不全问题
- *(typst.universe)* 添加自动打包发布到 sysu/package 仓库的流水线
- *(typst.universe)* 修复脚本顺序错误导致的程序执行失败
- *(typst-universe)* Fix push problem
- *(gitlab-ci)* 修改流水线作业触发条件
- *(gitlab-ci)* 调整发布提交任务中命令的顺序
- *(gitlab-ci)* 补充用于检测流水线的触发规则
- *(gitlab-ci)* 启用调试规则
- *(gitlab-ci)* 去除无法使用的测试流水线触发规则
- *(gitlab-ci)* 修复触发发版流程的版本标签表达式错误问题
- *(github)* 删去 github 发版流程
- *(gitlab-ci)* 修复没有获取标签以致于无法发版的问题
### Build
- *(typst)* 修复错误的版本号格式
- *(thesis)* 简化构建论文的指令
## [0.1.1-alpha.5] - 2024-05-21
### ⚙️ Miscellaneous Tasks
- *(README)* 修复 README 图标显示版本号不全问题
- *(typst.universe)* 添加自动打包发布到 sysu/package 仓库的流水线
- *(typst.universe)* 修复脚本顺序错误导致的程序执行失败
- *(typst-universe)* Fix push problem
- *(gitlab-ci)* 修改流水线作业触发条件
- *(gitlab-ci)* 调整发布提交任务中命令的顺序
- *(gitlab-ci)* 补充用于检测流水线的触发规则
- *(gitlab-ci)* 启用调试规则
- *(gitlab-ci)* 去除无法使用的测试流水线触发规则
## [0.1.1-alpha.4] - 2024-05-20
### ⚙️ Miscellaneous Tasks
- *(gitlab-ci)* 尝试修复合并后错误触发流水作业
- *(gitlab-ci)* 仅在 tag 推送到 default branch,并且在页面上指定 tag 标签时执行流水线
- *(gitlab-ci)* 修复变量错误
- *(gitlab-ci)* 修复 tag 标签检查发版流程触发逻辑
- *(template)* 修改发版 MR 模板
- *(release)* Prepare for v0.1.1-alpha.4
- *(gitlab-ci)* Automate everything!
- *(gitlab-ci)* 提取默认镜像配置
- *(gitlab-ci)* 恢复手动触发打标签
- *(gitlab-ci)* 更新打版命令
- *(GitLab-ci)* 分离每一步命令以检查命令执行原因
- *(git-cliff)* Release-note 排除自动化提交的记录
- *(gitlab-ci)* 修复版本 bump up 命令与推送问题
- *(gitlab-ci)* 修复语法错误
- *(gitlab-ci)* Fix cache issue
- *(gitlab-ci)* 修复打版提交推送不成功的问题
- *(gitlab-ci)* 尝试修复 release-commit 无法推送到分支的问题
- *(gitlab-ci)* 重新合并经过验证的命令
- *(GitLab-ci)* 更换安全git-lfs 的命令
- *(gitlab-ci)* 更换安装 lfs 的命令
- *(gitlab-ci)* 再次更换安装 lfs 的命令
- *(GitLab-ci)* 切换回默认分支进行验证
### Build
- *(typst)* 修复错误的版本号格式
## [0.1.1-alpha.4] - 2024-05-17
### ⚙️ Miscellaneous Tasks
- *(gitlab-ci)* 修复 tag 标签检查发版流程触发逻辑
- *(template)* 修改发版 MR 模板
## [0.1.1-alpha.3] - 2024-05-17
### ⚙️ Miscellaneous Tasks
- *(gitlab-ci)* 修复CI脚本复用问题
## [0.1.1-alpha.2] - 2024-05-17
### 📚 Documentation
- *(template)* 添加发布版本的 Merge Request 模板
### ⚙️ Miscellaneous Tasks
- *(gitlab)* Fix syntax error in .gitlab/ci.yml
- *(cliff)* Add cliff config
- *(gitlab-ci)* 注释自动发布版本流程
- *(gitlab-ci)* Pull the checks for each merge request
- *(gitlab-ci)* Check when merged to the default branch
- Change gitlab-ci's trigger rules
- *(gitlab-ci)* 一键发布新版本
- *(gitlab-ci)* Fix syntax error rasied by wrong use of colon
- *(merging)* 'ci-refactor-build-flow' into 'main'
- *(gitlab-ci)* 修复 git-cliff 命令错误
- *(merging)* 'ci-refactor-build-flow' into 'main'
- *(gitlab-ci)* 修复添加缓存文件的问题
- *(merging)* 'ci-refactor-build-flow' into 'main'
- *(gitlab-ci)* 修复添加缓存文件的问题
- *(gitlab-ci)* 删去在 CI 中提交代码的操作
- *(CHANGELOG)* Init
- *(template)* 修改 CHANGELOG.md 需要的参数
## [0.1.1-alpha.1] - 2024-05-17
### 📚 Documentation
- *(github stars)* Github badge changed to the stars one
- *(README)* 修正跳转到仓库的 badges 并增加反馈交流渠道
- *(README)* Refer the repository from upstream to gitlab
### ⚙️ Miscellaneous Tasks
- *(github)* Remove build flow for gitee pages
- *(github)* Remove cron schedule
- *(gitlab)* Add a construct flow for gitlab
- *(gitlab)* Only deploy when it's tagged
- *(gitlab)* Fix the wrong install of typst-cli
- *(gitlab)* Fix the typo in build command
- *(gitlab)* Make a release when a commit is tagged and merged to the default branch
## [0.1.0] - 2024-05-15
### *
- Init
- Fix crlf of ps and bat
- Fix crlf of ps and bat
- Add ug cover and integrity declaration
- Arabic numbering
- Fix counters
### 🚀 Features
- [**breaking**] Add bachelor-title-page
- *(heading)* Adjust headings' style, indent for SYSU
### 🐛 Bug Fixes
- *(submit-date)* Change size from 字体.四号 to 字体.小四
- *(cover)* Change display style fo supervisor
- *(fakebold)* Import @preview/cuti and use fakebold to fix missing bold weight on 黑体 fonts
- *(bachelor-decl-page)* Use fakebold to fix bold issue on 黑体 font
- *(style)* The not-working bold issue in 黑体 font is solved
- *(bachelor-abstract)* Change size and font according to the document
- *(loc)* Change locs' titles' font from 宋体 to 黑体
- *(thesis)* Replace nju-emblem.svg with sysu_logo.svg
- *(typst.universe)* Correct the wrong import
### 🚜 Refactor
- *(bachelor-cover)* Remove parameters about anonymous
- *(bachelor-cover)* Remove the useless datetime-display parameter
- *(bachelor-decl-page)* Replace templates/declaration.typ with pages/bachelor-decl-page
- *(template)* Re-arrange the order of imports and remove useless import in template/thesis.typ
- *(style)* Change page hader fonts from 楷体 to 宋体
### 📚 Documentation
- *(lib.typ)* Change lib document description to SYSU
- *(cover)* Add TODO of fixing bold issue of "黑体“ fonts
- *(thesis)* Change the title params for SYSU
- *(README)* 更新代码仓库地址
- *(utils/bibliography)* Change `/* */` comment to the only legal `//` comment
- *(fonts)* Change reference to docs
- Update repository link and the thesis title
### 🎨 Styling
- *(trailing)* 移除所有文件中的行末空白
- *(color)* Define the standard color for SYSU logo
- *(bachelor-cover)* Add trailing comma for params
### ⚙️ Miscellaneous Tasks
- Fix action
- *(typst)* 增加用于 typst.universe 的配置文件
- *(gitignore)* Ignore pdf & develop workspace config
- *(template)* 调整模板的导入为开发设置
- *(typst)* Change build source
- *(fonts)* Track **/*.otf file as LFS
- *(typst)* Bump up to v0.1.0 for release
- *(README)* Modify the build script
### LICENSE
- Add MIT license
### Makefile
- Fix tabs
- Add watch
### README
- Improve user guide
- Add nightly typst info
- Add macos
- Update QA and QQ group
- Add QQ group link
- Publish preview PDF every 6 hours
- Add gitee link
- Add github link
- Update zip download link
### Abstract
- Fix keywords pos and style
- Capitalize en title
### Build
- Fix build
- *(typst)* 修改描述与定义版本为未稳定状态
- *(typst)* Add keyword "thesis" to typst.toml
- *(typst.universe)* Add thumbnail for typst universe
- *(package)* Rename as morden-sysu-thesis
### Functions
- Rename mod to rem
### Github
- Publish preview pdf every day
- Sync to gitee
- Sync preview pdf to gitee pages
- Use custom repo mirroring action
### Script
- Do not use ghproxy
- Update windows bat script
- Fix rustup installation
- Fix rustup install
- Use tuna mirror for rustup
### Scripts
- Use ghproxy for installing
- Use ghproxy for installing
### Template
- Add footnote
- Fix booktab ref
- Port pkuthss aec9080 and a179ca0
- Use tablex
- Fix supplement
- Add cover without emblem
- Fix list of figures
<!-- generated by git-cliff -->
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.